# Mitochondrial DNA Efflux Maintained in Gingival Fibroblasts of Patients with Periodontitis through ROS/mPTP Pathway **Authors:** Jia Liu; Yanfeng Wang; Qiao Shi; Xiaoxuan Wang; Peihui Zou; Ming Zheng; Qingxian Luan **Journal:** Oxidative Medicine and Cellular Longevity (2022) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2022/1000213 --- ## Abstract Mitochondria have their own mitochondrial DNA (mtDNA). Aberrant mtDNA is associated with inflammatory diseases. mtDNA is believed to induce inflammation via abnormal mtDNA release. Periodontitis is an infectious, oral inflammatory disease. Human gingival fibroblasts (HGFs) from patients with chronic periodontitis (CP) have been shown to generate higher levels of reactive oxygen species (ROS), which cause oxidative stress, and to have a decreased mtDNA copy number. First, cell-free mtDNA was identified in plasma from CP mice through qRT-PCR. Next, we investigated whether mtDNA efflux was maintained in primary cultures of HGFs from CP patients and the possible underlying mechanisms, using adenovirus-mediated transduction, live cell imaging, and qRT-PCR analysis. Here, we report that mtDNA was increased in plasma from CP mice. Additionally, we confirmed that CP HGFs showed significant mtDNA efflux from mitochondria compared with healthy HGFs. Furthermore, lipopolysaccharide (LPS) from Porphyromonas gingivalis also caused mtDNA release in healthy HGFs. Mechanistically, LPS upregulated ROS levels and mitochondrial permeability transition pore (mPTP) opening by inhibiting pyruvate dehydrogenase kinase 2 (PDK2) expression, resulting in mtDNA release. Importantly, mtDNA efflux persisted in HGFs even after LPS was removed and the cells were passaged for three further generations, indicating that mtDNA abnormalities were retained in HGFs in vitro, similar to the primary hosts. Taken together, our results show that mtDNA efflux was maintained in HGFs from periodontitis patients through abnormal ROS/mPTP activity. Therefore, our work indicates that persistent mtDNA efflux may be a possible diagnostic and therapeutic target for patients with periodontitis. --- ## Body ## 1. Introduction Periodontal inflammation is known to affect 20%-50% of the global population, and it often interacts with other inflammatory diseases such as heart disease and diabetes [1–3]. Periodontitis is associated with inflammatory responses mediated by lipopolysaccharide (LPS) from the cell walls of gram-negative bacteria and represents the most common cause of tooth loss [1, 4, 5]. Increasing evidence suggests that mitochondrial dysfunction contributes to periodontitis under LPS stimulation [6–8]. As such, abnormal mitochondria are considered to be one of the major contributors to periodontitis development. Several mitochondrial components, including mitochondrial DNA (mtDNA), have also been implicated in inflammatory responses [9]. mtDNA resides in the mitochondrial matrix and is in intimate contact with the electron transport chain, one of the principal sources of reactive oxygen species (ROS). Therefore, mtDNA is particularly susceptible to oxidation, which can cause mutations and damage, leading to the pathogenesis of inflammation [10]. Total ROS production and ROS produced in mitochondria (mtROS) are indeed significantly enhanced in human gingival fibroblasts (HGFs) after LPS stimulation or from periodontitis hosts [11]. These results indicate that oxidative stress is induced during periodontitis [12].
According to current studies linking oxidative stress to decreased mtDNA copy number [13, 14], it is becoming clear that mtDNA disruption may be associated with chronic inflammation [15]. Consistent with this assumption, previous research has also demonstrated that mtDNA deletion is present in the gingival tissues of patients with periodontitis [16]. Moreover, a decrease in mtDNA in periodontitis rats suggested that aberrant mtDNA might contribute to aggravated periodontitis [7]. We recently observed that HGFs and gingival tissues of patients with periodontitis had decreased mtDNA levels and decreased mitochondrial matrix protein expression, particularly of pyruvate dehydrogenase kinase 2 (PDK2), when compared with those from healthy subjects [8], suggesting that mtDNA and mitochondrial disruption in peripheral HGFs might replicate the mitochondrial dysfunction observed in vivo during periodontitis development. Therefore, we hypothesized that abnormal mtDNA might be maintained in HGFs in vitro, connecting the in vitro phenotype to the disease in vivo through a specific mechanism. Answering these questions will improve our understanding of periodontitis etiology and might lead to new treatment options. Superresolution imaging has demonstrated that each nucleoid comprises one copy of mtDNA packaged with a number of different proteins into a densely compacted structure [17]. Mitochondrial transcription factor A (TFAM) is the most notable mtDNA nucleoid protein and can be assumed to specifically recognize mtDNA [15]. mtDNA can escape from mitochondria and be released into the cytoplasm under various pathological conditions [18, 19]. Several major factors have been implicated in driving mtDNA release from damaged mitochondria, including the opening of the mitochondrial permeability transition pore (mPTP), mitochondrial stress, and calcium overload [20–22]. Nonetheless, little is known about the biological mechanisms underlying this process in periodontitis. Our recent study demonstrated that mitochondria in HGFs from periodontitis patients appeared to retain many of the damaged features observed in the donors [8]. In addition, previous studies have suggested that the primary host has a profound influence on cells in vitro, such as higher oxidative stress in HGFs of periodontitis patients than in those of healthy individuals [8, 23]. However, differences in mtDNA release between HGFs from chronic periodontitis (CP) patients and healthy HGFs have not been tested, and whether this abnormal intracellular mtDNA activity persists remains to be elucidated. Thus, therapies targeting mtDNA may become a potential approach for patients suffering from severe recurrent periodontitis. The mPTP spans the mitochondrial inner membrane, and its formation is associated with various cellular stresses [24, 25]. Interestingly, the opening of the mPTP has been detected under the metabolic stress observed in inflammatory diseases [26]. More recently, using in vitro and in vivo approaches, studies have shown that genetic removal of one of the mPTP component proteins ameliorated mtDNA release into the cytoplasm during the neuroinflammatory response [20]. Pharmacological inhibition of the mPTP by cyclosporin A (CsA) has also been shown to be effective in preventing mtDNA leakage into the cytoplasm [20]. Despite these results, the notion that the opening of the mPTP may directly drive mtDNA efflux remains controversial and is still unclear in periodontitis.
It has been reported that ROS contribute directly to mPTP opening during ischemia-reperfusion [27]. As a result of cellular ROS and mtROS outbursts, mPTP opening can be activated. Nevertheless, its association with mPTP-mediated mtDNA efflux in periodontitis is scarcely understood. In this study, we examined the differences in the mtDNA efflux process, ROS levels, and mPTP opening between primary HGFs isolated from patients with CP and those from age-matched, periodontally healthy donors. In CP HGFs, the elevated mtDNA efflux, together with high ROS levels and mPTP opening, was further enhanced in response to LPS. Furthermore, the identified mtDNA efflux was maintained in HGFs even after LPS stimulation was withdrawn and the cells were passaged. Consequently, we explored whether the ROS/mPTP pathway is involved in mtDNA efflux during the progression of periodontitis. This may provide a promising target for the early diagnosis of periodontitis and preclinical evidence for therapeutic strategies in patients whose periodontal inflammation tolerates common antimicrobial therapy. ## 2. Materials and Methods ### 2.1. Ethics Approval The study was approved by the Review Board and Ethics Committee of Peking University Health Science Center (PKUS-SIRB-2013017) and conducted in agreement with the Declaration of Helsinki II. Written informed consent was obtained from all subjects before inclusion in the study. All animal work was approved by the Review Board and Ethics Committee of Peking University Health Science Center (LA2018076). ### 2.2. Animals and Experimental Groups Specific-pathogen-free male C57BL/6 wild-type mice (6 weeks old) (Figure 1(a)) were purchased from the Experimental Animal Laboratory, Peking University Health Science Center, in compliance with established policies. All mice were randomly divided into normal control or CP groups of four mice each. The control group was left untreated, and the CP group had their maxillary second molar ligated with a 5-0 silk ligature (Roboz Surgical Instrument Co, MD, USA) (Figure 1(a)). The ligatures remained in place in the CP group throughout the experimental period. All mice were sacrificed at three weeks postligation (Figure 1(a)). Microcomputed tomography (micro-CT) was used to confirm that the CP model was established successfully. Figure 1 Cell-free (cf-) mtDNA in plasma from chronic periodontitis mice and control healthy mice. (a) A 5-0 silk ligature was placed around the maxillary second molar of 6-week-old mice for three weeks to establish the experimental chronic periodontitis (CP) mouse model. Control normal mice received no treatment. (b–c) Micro-CT showed obviously increased bone loss in CP mice after ligation for three weeks when compared to the control group. One representative image for 2-D and 3-D modes is shown. The red arrow indicates bone loss areas. The yellow line indicates the distance between the cement-enamel junction (CEJ) and the alveolar bone crest. (d) ND1 levels in plasma of CP and control mice (n=4). ∗p<0.05. ### 2.3. Microcomputed Tomography In brief, after sacrificing the mice, the maxillary teeth were carefully dissected and soft tissues were removed. The samples were fixed with 4% paraformaldehyde for 24 h and scanned using the μCT50 (Scanco Medical) with a resolution of 1024×1024, a pixel size of 15×15 μm, and a layer spacing of 15 μm. The region of interest was assessed by 3D reconstruction. Bone loss was evaluated by 3D micro-CT. ### 2.4.
Human Subjects HGFs were obtained from six CP patients and six age-matched healthy donors. The participants were recruited from the Department of Periodontology, Peking University School and Hospital of Stomatology. Exclusion criteria were smoking and systemic health issues, including hypertension, diabetes, and immune-related diseases, within the past six months. CP was defined according to the American Academy of Periodontology and European Federation of Periodontology criteria based on staging and grading [28]. CP patients included in this study were grade B and stage III. Gingival tissues from CP patients were acquired during flap surgery at sites with probing depth (PD) ≥ 6 mm. Tissues in the healthy group were harvested during crown lengthening surgery at sites with PD < 4 mm. Table 1 lists the detailed characteristics of the participants. Table 1 Clinical characteristics at the surgery sites of patients included in this study.

| Abbreviation | Number of patients | Age range | Women (%) | BI | PD (mm) | CAL (mm) |
|---|---|---|---|---|---|---|
| Con | 6 | 27-40 | 50 |  | 1-2 | 0-0.5 |
| CP | 6 | 33-45 | 66.7 |  | 6-10 | 4-7 |

### 2.5. Primary Culture of HGFs HGFs were prepared from the gingival tissues of six CP patients and six healthy controls during periodontal surgery. Cells were grown in Dulbecco’s modified Eagle’s medium supplemented with 10% fetal bovine serum (Gibco, Thermo Fisher Scientific, USA) and 1% penicillin-streptomycin. The cells were incubated at 37°C with 5% CO2. The medium was changed after a week. In approximately two weeks, the cells reached subconfluency and the pieces of gingival tissue were removed from the culture flask. Cells from the third to the eighth passage were used in the subsequent study. ### 2.6. Cell Treatment and Stimulation HGFs from healthy and CP patients were stimulated with or without 5 μg/mL LPS from Porphyromonas gingivalis (P.g) (ATCC33277, Standard, InvivoGen, San Diego, CA, USA) for 24 h. To investigate whether inflammatory features of the donors were retained in HGFs, HGFs from healthy donors were treated with 5 μg/mL LPS for 24 h, followed by discarding the medium and passaging the cells for three further generations before analysis. These cells were assessed by the indicated assays and compared with cells from healthy donors that were directly stimulated with the same amount of LPS for the same time. ### 2.7. Cellular ROS and Mitochondrial ROS (mtROS) Detection 2′,7′-Dichlorodihydrofluorescein diacetate (H2DCF-DA) (Sigma-Aldrich, St. Louis, MO) and MitoSOX Red (Invitrogen, Carlsbad, CA) were used to detect total ROS and mtROS, respectively, as previously described [11]. HGFs were loaded with H2DCF-DA (10 μM) or MitoSOX Red (5 μM) for 30 min and then observed using a microscope [11]. To inhibit ROS, HGFs were preincubated with 3 mM N-acetylcysteine (NAC) (Sigma-Aldrich, St. Louis, MO) for 2 h. To scavenge mtROS, cells were pretreated with 50 μM Mito-TEMPO (Santa Cruz Biotech, Dallas, TX) for 2 h. ### 2.8. Western Blotting Proteins were extracted from HGFs using ice-cold radioimmunoprecipitation (RIPA) lysis buffer (Solarbio). After being quantified by BCA assay (Thermo Fisher Scientific), the protein samples were mixed with loading buffer (Solarbio) and separated by SDS-PAGE. The proteins in the gel were transferred onto a polyvinylidene fluoride (PVDF) membrane (Beyotime). The membranes were blocked with 5% skimmed milk (Solarbio) and incubated overnight at 4°C with primary antibody. The membranes were washed with Tris-buffered saline and incubated with secondary antibody for 90 min at room temperature.
The PVDF membranes were subjected to chemiluminescence detection using an ECL Western Blotting Detection Kit (Solarbio). ### 2.9. DNA Isolation and mtDNA Quantification by Quantitative Real-Time Polymerase Chain Reaction (qRT-PCR) Genomic DNA from HGFs was extracted using the Universal Genomic DNA Kit (ZOMAN, Beijing, China) following the manufacturer’s instructions. mtDNA levels in HGFs were assessed using primers against the mitochondrial gene ND1, with nuclear 18S rRNA serving as a loading control. The ND1 and 18S rRNA primer sequences are presented in Table 2. Cytosolic mtDNA extraction was performed according to the method established by West et al. [29]. Plasma from the mice was centrifuged at 1000 g for 5 min, and the supernatant was then centrifuged a second time at 5000 g for 10 min. The top 80% of the volume was used for cell-free (cf-) mtDNA quantification. DNA from cell supernatants, cf-mtDNA in plasma, and cytosolic DNA (200 μL) were isolated using the QIAamp DNA Mini Kit (Qiagen, Germany). ND1 levels in the samples were analyzed according to a standard curve based on ND1 plasmid (Sangon, Shanghai, China) dilutions. Table 2 List of primers for real-time PCR studies.

| Gene | Primer | Sequence |
|---|---|---|
| ND1 | Forward primer | 5′-CACACTAGCAGAGACCAACCGAAC-3′ |
| ND1 | Reverse primer | 5′-CGGCTATGAAGAATAGGGCGAAGG-3′ |
| 18S rRNA | Forward primer | 5′-GACTCAACACGGGAAACCTCACC-3′ |
| 18S rRNA | Reverse primer | 5′-ACCAGACAAATCGCTCCACCAAC-3′ |

### 2.10. Adenovirus Transduction for Mitochondria and mtDNA Detection HGFs were transduced with an adenovirus encoding the mitochondrial outer-membrane protein Tomm 20 fused to the mCherry fluorescent protein. mtDNA was detected by coexpression of TFAM tagged with the green fluorescent protein (GFP) variant mNeonGreen. HGFs were seeded on 10 mm round confocal glass coverslips at a density of 50% and were infected with the specified amounts of the Tomm 20-mCherry and TFAM-mNeonGreen adenoviruses. Forty-eight hours after transduction, the medium was changed, and the cells were processed for further analysis. ### 2.11. Live Cell Imaging Microscopy Live cells were imaged using a fluorescence microscope (TCS-STED; Leica, Wetzlar, Germany) with a 63× oil immersion objective. For all experiments, HGFs were grown in 10 mm round glass-bottom confocal wells (Cedarlane, Southern Ontario, Canada). Laser excitation was at 488 nm for mNeonGreen and 561 nm for mCherry. Where required, LPS treatment was performed after sample mounting in the medium chamber. HGFs expressing Tomm 20-mCherry and TFAM-mNeonGreen were imaged serially every 10 s for 10-15 min. Image processing and analysis were performed using ImageJ (NIH, http://rsb.info.nih.gov/ij/) and Huygens Professional software (Scientific Volume Imaging, Amsterdam, The Netherlands). ### 2.12. Detection of mPTP Opening HGFs were incubated with 50 mM cobalt chloride for 15 min before treatment with 1 μM Calcein Green AM (Solarbio, Beijing, China) for 30 min. Cobalt chloride quenches cytosolic calcein fluorescence, so the retained mitochondrial calcein signal reflects mitochondrial integrity, and its loss indicates mPTP opening. Calcein fluorescence was detected by confocal microscopy (Leica) using a 488 nm excitation wavelength. Quantification of the calcein fluorescence intensity was conducted by analyzing 20 cells for every indicated condition using ImageJ software. To prevent mPTP opening, HGFs were preincubated with 0.5 μM cyclosporin A (CsA; Sigma) for 2 h, following the manufacturer’s recommendations.
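The per-cell intensity measurements described in Sections 2.7 and 2.12 were performed in ImageJ. Purely as an illustration of the arithmetic, not the study's actual pipeline, the following Python/NumPy sketch shows how a mean fluorescence value per labeled cell, and the group mean ± SE reported in the figures, could be computed from a grayscale image and an integer cell mask; all names and values below are hypothetical.

```python
import numpy as np

def mean_intensity_per_cell(image: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Mean fluorescence for each labeled cell (label 0 = background)."""
    cell_ids = np.unique(labels)
    cell_ids = cell_ids[cell_ids != 0]
    return np.array([image[labels == cid].mean() for cid in cell_ids])

def group_mean_se(per_cell: np.ndarray) -> tuple[float, float]:
    """Group mean and standard error across cells (e.g., n = 20 cells per condition)."""
    return float(per_cell.mean()), float(per_cell.std(ddof=1) / np.sqrt(per_cell.size))

# Synthetic example: one field with 20 labeled cells, arbitrary fluorescence units
rng = np.random.default_rng(0)
image = rng.uniform(0.0, 255.0, size=(512, 512))
labels = rng.integers(0, 21, size=(512, 512))  # 0 = background, 1..20 = cells
mean, se = group_mean_se(mean_intensity_per_cell(image, labels))
print(f"calcein signal: {mean:.1f} ± {se:.1f} a.u. (n = 20 cells)")
```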
### 2.13. Flow Cytometric Analysis Cells were briefly washed with 1× phosphate-buffered saline (PBS), resuspended in 1× binding buffer, and centrifuged at 300 × g for 10 min. The pellets were resuspended in 1× binding buffer at a density of 1 × 10⁶ cells/mL. Cells were replated in a flow cytometric tube at a density of 1 × 10⁵ cells/mL and processed for Annexin V-FITC staining (Solarbio, Beijing, China) for 10 min at 20-25°C. Subsequently, the cells were stained with propidium iodide (PI) for 5 min at 20-25°C and analyzed for apoptosis by flow cytometry. ### 2.14. Statistical Analysis Data are expressed as the mean ± standard error (SE). All p values were determined by two-tailed Student’s t-test or one-way analysis of variance (ANOVA) with a post hoc Student-Newman-Keuls test for multiple comparisons. Significant differences were accepted at p<0.05. Statistical analysis was performed using GraphPad Prism software (version 9.00; GraphPad Software).
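The group comparisons above were run in GraphPad Prism. As a minimal sketch of the same tests (two-tailed t-test and one-way ANOVA) using SciPy, with hypothetical measurements in place of the study's data and without reproducing the Student-Newman-Keuls post hoc step:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical cytosolic ND1 levels (arbitrary units), six replicates per group
control = rng.normal(1.0, 0.1, 6)
cp      = rng.normal(1.8, 0.2, 6)
cp_lps  = rng.normal(2.5, 0.3, 6)

# Two groups: two-tailed Student's t-test
t, p_t = stats.ttest_ind(control, cp)
print(f"control vs CP: t = {t:.2f}, p = {p_t:.4f}")

# Three or more groups: one-way ANOVA; a post hoc multiple-comparison test
# (Student-Newman-Keuls in the original analysis) would follow a significant result
f, p_f = stats.f_oneway(control, cp, cp_lps)
print(f"one-way ANOVA: F = {f:.2f}, p = {p_f:.4f}")
```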
## 3. Results ### 3.1. mtDNA Release from Mitochondria during Periodontitis Development Micro-CT revealed that alveolar bone around the ligated molar was significantly reduced in CP mice compared to control mice, indicating that experimental periodontitis was established in the CP group (Figures 1(b) and 1(c)). Intriguingly, mtDNA in plasma from CP mice was enriched compared to age-matched wild-type control mice (Figure 1(d)). These results indicated that mtDNA release might be involved in periodontitis development. However, mtDNA efflux in HGFs during periodontitis remained unclear. Next, we transduced primary HGFs with adenoviruses encoding Tomm 20-mCherry and TFAM-mNeonGreen to visualize mitochondria and mtDNA, respectively (Figure 2(a)). Robust mtDNA release into the cytoplasm was detected in CP HGFs (Figure 2(b)). This process was also captured by real-time microscopy (Figure 2(c), Movie 1). In contrast, no mtDNA efflux was detected in healthy HGFs (Movie S1). LPS caused remarkable mtDNA release in healthy HGFs and led to even more pronounced mtDNA release in periodontitis-affected samples (Figures 2(b)–2(e), Movies 2 and 3). Next, we quantified a significant increase in the percentage of HGFs with mtDNA efflux in CP HGFs as compared with control HGFs (Figure 2(f)). LPS treatment caused a marked increase in the percentage of HGFs with mtDNA efflux compared with untreated cells in both healthy and CP states (Figure 2(f)). Moreover, qRT-PCR confirmed that mtDNA was released into the cytosol and out of the cells during periodontitis (Figures 2(g)–2(i)). These results indicated that mtDNA release might be involved in periodontitis development. Figure 2 mtDNA released from mitochondria in human gingival fibroblasts from patients with chronic periodontitis. (a) Fluorescent fusion proteins used to visualize mitochondria (Tomm 20-mCherry) and mtDNA (TFAM-mNeonGreen). TM: transmembrane domain; MLS: mitochondrial localization sequence; DBD1 and DBD2: DNA binding domain-1 and DNA binding domain-2. (b) Typical images of human gingival fibroblasts (HGFs) showing mitochondria and mtDNA in the control and CP groups, with or without LPS stimulation (5 μg/mL, 24 h) (scale bars: 5 μm). (c) Still image of mtDNA efflux in HGFs from CP patients (scale bar: 2.5 μm, see Movie 1). (d) Still image of mtDNA efflux in HGFs with LPS stimulation (scale bar: 2.5 μm, see Movie 2). (e) Still image showing mtDNA efflux in HGFs from CP patients with LPS stimulation (scale bar: 2.5 μm, see Movie 3). (f) Percentage of HGFs with or without mtDNA efflux. Data are the mean ± SE of 20 fields in each group. (g–i) mtDNA in HGFs, cell supernatants, and cell cytosol. Data are from six independent experiments. ∗p<0.05, ∗∗p<0.01, ∗∗∗p<0.001, and ∗∗∗∗p<0.0001. ### 3.2.
mtDNA Efflux Maintained in HGFs during Periodontitis Because LPS is a principal factor driving periodontal inflammation, we sought to clarify whether LPS causes mtDNA efflux that is maintained in HGFs. In these experiments, healthy HGFs were exposed to LPS for 24 h; LPS was then removed, and the HGFs were cultured for three further generations before analysis (Figure 3(a)). For comparison, healthy HGFs were directly treated with LPS for 24 h (Figure 3(b)). The results showed that LPS reinforced the mtDNA efflux effect even in HGFs of the subsequent three generations (Figures 3(c)–3(e)). No significant differences were observed compared to the directly LPS-treated HGFs (Figure 3(f)). Next, we examined cytosolic mtDNA levels in these groups using qRT-PCR. Both directly LPS-stimulated HGFs and LPS-treated HGFs that had subsequently been passaged were enriched in cytosolic mtDNA (Figure 3(g)). In addition, the percentages of HGFs with mtDNA efflux were similar between direct LPS treatment and LPS treatment followed by passaging (Figure 3(h)). These results suggest that LPS treatment can enhance the mtDNA efflux phenomenon and that this facilitated mtDNA release can be maintained in HGFs even in later generations, consistent with the mtDNA release observed in CP HGFs and CP mice. Figure 3 mtDNA efflux maintained in human gingival fibroblasts during periodontitis. (a) Human gingival fibroblasts (HGFs) were treated with lipopolysaccharide (LPS) (5 μg/mL, 24 h), which was later removed from the culture medium. These LPS-treated HGFs were cultured for three further generations before analysis. (b) HGFs were cultured with direct LPS stimulation (5 μg/mL, 24 h) and analyzed immediately for mtDNA activity. (c–e) Images of HGFs from the group in (a) after each of the three subsequent passages, respectively. (f) Images of HGFs from the group in (b). Scale bars: 5 μm. (g) mtDNA levels in the cytosol among the five groups of HGFs. n=6. (h) Percentage of HGFs with or without mtDNA efflux. Data in (h) are the mean ± standard error (SE) of 20 fields per group. ns: no significant difference. ∗∗p<0.01, ∗∗∗p<0.001, and ∗∗∗∗p<0.0001. ### 3.3. ROS and mtROS Are Overproduced in HGFs from CP Patients To investigate in further detail how the mtDNA efflux effect is maintained during periodontitis at the cellular level, we first analyzed ROS and mtROS levels in HGFs from different hosts. Control healthy HGFs had the lowest ROS and mtROS levels (Figure 4(a)). HGFs from CP patients had significantly greater levels of ROS and mtROS (Figure 4(a)). ROS and mtROS were further increased in the presence of LPS compared to the corresponding groups without LPS (Figure 4(a)). In addition, LPS affected ROS and mtROS even in HGFs of the subsequent three generations (Figure 4(a)). The fluorescent signals reflecting ROS and mtROS levels in Figure 4(a) were quantified with ImageJ (Figures 4(b) and 4(c)). Western blot analysis showed that PDK2 expression was decreased in CP HGFs (Figures 4(d) and 4(e)). Meanwhile, PDK2 expression was also reduced after LPS stimulation and remained low even in HGFs of the subsequent three generations (Figures 4(d) and 4(e)). In summary, CP HGFs are primed for ROS activation, and LPS can persistently upregulate ROS levels in HGFs by suppressing PDK2 expression; this regulation may contribute to the mtDNA efflux process. Figure 4 Reactive oxygen species (ROS) and mitochondrial ROS are overproduced in human gingival fibroblasts from chronic periodontitis patients.
(a) Human gingival fibroblasts (HGFs) were incubated with 2′,7′-dichlorodihydrofluorescein diacetate (H2DCF-DA) (10 μM, 30 minutes) to indicate ROS levels (green) in HGFs (scale bars: 75 μm). HGFs were incubated with MitoSOX Red (5 μM, 30 minutes) to visualize mitochondrial ROS (mtROS) levels (red) (scale bars: 50 μm). (b, c) The fluorescence intensities of ROS and mtROS in (a) were calculated with ImageJ from 10 cells per group. Data represent the mean ± standard error (SE) of 10 cells per group. (d) Western blot evaluating the protein expression of pyruvate dehydrogenase kinase 2 (PDK2), with glyceraldehyde-3-phosphate dehydrogenase (GAPDH) as the loading control. (e) Quantification of each band intensity with respect to the loading control in (d). n=3 (LPS: 5 μg/mL). Statistically significant p values are indicated as follows: ∗p<0.05, ∗∗∗p<0.001, and ∗∗∗∗p<0.0001 as compared with the control group; #p<0.05, ##p<0.01, and ###p<0.001 as compared with the CP group; ++p<0.01, +++p<0.001, and ++++p<0.0001 as compared with the LPS group. ### 3.4. mPTP Opening in HGFs from CP Patients via ROS Activation mPTP opening in HGFs was assessed using calcein AM fluorescence (Figure 5(a)). Control HGFs showed strong green fluorescence (Figure 5(a)), suggesting that the mPTP remained in a closed state under normal conditions [30]. However, fluorescence was hardly detectable in CP HGFs (Figure 5(a)). LPS further decreased the fluorescence in both the control and CP groups (Figure 5(a)). A decreased fluorescence signal was also detected in LPS-treated HGFs that had been passaged for three generations (Figure 5(b)). A significant increase in fluorescence was observed in HGFs in the presence of CsA when compared with its absence (Figures 5(a)–5(d)). Inhibition of ROS and mtROS likewise suppressed mPTP opening (Figures 5(a)–5(d)). Collectively, these data show that CP HGFs display mPTP opening and that mPTP opening in LPS-treated HGFs was maintained even in the subsequent three generations. Additionally, this observed mPTP opening is dependent on ROS activation. Figure 5 Human gingival fibroblasts (HGFs) from chronic periodontitis presented active mitochondrial permeability transition pore opening via reactive oxygen species. (a) Human gingival fibroblasts (HGFs) were loaded with cobalt chloride (50 mM, 15 minutes) and calcein AM (green) to determine the opening of the mitochondrial permeability transition pore (mPTP) in HGFs in the presence or absence of lipopolysaccharide (LPS) treatment (5 μg/mL, 24 h). The opening of the mPTP in HGFs was measured after cyclosporin A (CsA) (0.5 μM, 2 h), N-acetylcysteine (NAC) (3 mM, 2 h), or Mito-TEMPO (50 μM, 2 h) treatment (scale bars: 25 μm). (b) Images of mPTP opening in the control, LPS-treated, and LPS-treated-then-passaged (three generations) groups (scale bars: 25 μm). (c) Quantification of the observed calcein green signal in HGFs from (a, b). Mean ± SE are indicated (n=20 cells). The CP group showed a lower signal, as did the LPS group, compared with control HGFs. LPS aggravated this decrease in the control and CP groups, and the phenomenon was retained in HGFs after LPS was removed and the cells were passaged for three generations, as compared with the control group.
(d) The intensity of the calcein green signal was measured in 20 cells per group from the control, CP, LPS, and CP LPS groups treated with CsA, NAC, or Mito-TEMPO, compared to HGFs without any treatment. CsA, NAC, and Mito-TEMPO all downregulated the signal in the LPS, CP, and CP LPS groups, while they failed to induce this phenomenon in control HGFs. p values were determined by one-way analysis of variance followed by post hoc tests. ∗∗p<0.01 and ∗∗∗∗p<0.0001; ns: not significant. ### 3.5. mtDNA Release in CP HGFs via ROS and mPTP Opening We performed real-time fluorescence microscopy on control, CP, LPS-treated, and CP LPS HGFs in the presence of CsA (Figure 6(a)). mtDNA displayed mild or no efflux in the four CsA-treated groups of HGFs (Figure 6(a)). These data demonstrated that the mPTP is critical for mtDNA release under these conditions. We then performed qRT-PCR to measure cytosolic mtDNA levels after treatment with inhibitors of the mPTP, ROS, and mtROS. CsA, NAC, and Mito-TEMPO all decreased cytosolic mtDNA levels in the CP, LPS, and CP LPS groups when compared with the same groups without treatment, whereas the control group showed similar cytosolic mtDNA levels in the presence and absence of CsA and NAC (Figure 6(b)). We also found that Mito-TEMPO slightly decreased the cytosolic mtDNA concentration in healthy HGFs (Figure 6(b)). When we examined differences in apoptosis among the four groups, HGFs from the CP, LPS, and CP LPS groups showed no significant apoptosis when compared with HGFs from control healthy donors (Fig. S1). Cumulatively, these results provide further evidence that ROS-mPTP opening causes mtDNA release in CP and LPS-treated HGFs. Figure 6 mtDNA release from mitochondria in human gingival fibroblasts of chronic periodontitis patients via reactive oxygen species-mitochondrial permeability transition pore pathways. (a) Human gingival fibroblasts (HGFs) from the control, chronic periodontitis (CP), lipopolysaccharide (LPS) stimulation (5 μg/mL, 24 h), and CP LPS groups were pretreated with 0.5 μM cyclosporin A (CsA) for 2 h and analyzed for mtDNA release. HGFs expressing Tomm 20-mCherry (red) and TFAM-mNeonGreen (green) revealed mtDNA nucleoids along with mitochondria. Yellow circles and blue arrows mark areas where mtDNA (green) clearly stops effluxing from mitochondria (red). Scale bars: 2.5 μm. See Movies 4, 5, 6, and 7. (b) Bar graphs illustrate the average mtDNA levels in the cytosol among the four groups of HGFs with or without CsA, N-acetylcysteine (NAC) (3 mM, 2 h), or Mito-TEMPO (50 μM, 2 h) treatment. All quantified data represent the mean ± SE. p values were determined by one-way analysis of variance followed by post hoc tests. Graphs represent at least 3 independent experiments. ∗p<0.05, ∗∗p<0.01, ∗∗∗p<0.001, and ∗∗∗∗p<0.0001; ns: not significant.
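Section 2.9 states that the ND1 levels reported in Figures 1(d), 2(g)–2(i), 3(g), and 6(b) were read off a standard curve built from an ND1 plasmid. A minimal sketch of that calculation is shown below, assuming Ct values from serial plasmid dilutions of known copy number; the dilution series, Ct values, and sample Ct are illustrative, not the study's data.

```python
import numpy as np

# Hypothetical standard curve: Ct values measured for 10-fold ND1 plasmid dilutions
std_copies = np.array([1e7, 1e6, 1e5, 1e4, 1e3])
std_ct     = np.array([14.8, 18.2, 21.6, 25.1, 28.5])

# Fit Ct = slope * log10(copies) + intercept
slope, intercept = np.polyfit(np.log10(std_copies), std_ct, 1)
efficiency = 10 ** (-1.0 / slope) - 1  # ~1.0 corresponds to ~100% PCR efficiency

def ct_to_copies(ct: float) -> float:
    """Interpolate ND1 copy number for a sample Ct from the standard curve."""
    return float(10 ** ((ct - intercept) / slope))

sample_ct = 23.4  # e.g., Ct of the cytosolic fraction from one HGF sample
print(f"PCR efficiency: {efficiency:.2f}")
print(f"estimated ND1 copies: {ct_to_copies(sample_ct):.2e}")
```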
## 4. Discussion Mitochondrial dysfunction is an important component of periodontitis pathogenesis [31], as defects in mitochondrial structure and function have been shown in periodontitis in our previous work [8]. mtDNA is crucial for mitochondrial function. It is known that mtDNA has structural similarities with microbial DNA [32]. Hence, mtDNA could elicit an inflammatory response when released into the cytoplasm or extracellular milieu in susceptible patients. These characteristics underscore the significant role of mtDNA in the pathogenesis of inflammation-related diseases in humans. In this study, we examined mtDNA efflux activity and extent using confocal microscopy and qRT-PCR analysis in primary HGFs from periodontitis patients and healthy donors. We demonstrated for the first time that mtDNA is released from mitochondria in HGFs from CP patients. LPS stimulation was found to trigger this mtDNA efflux activity and to preserve these properties within the HGFs for some time. Studies have previously shown that mtDNA can be found outside the mitochondria, and sometimes even outside the cells, in certain circumstances [33, 34]. mtDNA release was first reported when LPS was shown to extrude mtDNA into the cytoplasm [35]. Another key piece of evidence for mtDNA extrusion into the extracellular space is that LPS induces the formation of neutrophil extracellular traps (NETs), which largely consist of mtDNA [36, 37]. This mtDNA release may result in substantial tissue damage, leading to chronic inflammation.
Periodontitis is a chronic inflammatory disease that drives the destruction of soft and hard periodontal tissues, manifesting as gingival recession and alveolar bone loss [5], suggesting a role for mtDNA efflux in periodontitis. Consistent with the mtDNA efflux reported in other studies, we identified significantly increased mtDNA levels in the plasma of CP mice, implying an association between periodontitis and mtDNA efflux. One study demonstrated that mtDNA outside of mitochondria is crucial for inflammation by inducing bone-destructing immunity [38]. Beyond this mtDNA accumulation in the plasma of CP mice, however, little is known about mtDNA function and activity in HGFs during periodontitis. In the context of periodontitis, in vitro studies of periodontitis patients have confirmed alterations in mitochondrial structure and function and increased oxidative stress in HGFs and gingival tissues compared to normal individuals [8, 12], indicating that there may be a correlation between periodontitis progression and mitochondrial dysfunction in HGFs from different hosts. Interestingly, we confirmed aberrant mtDNA release into the cytosol and supernatants of HGFs from CP patients. It is evident that LPS stimulation could also induce this phenomenon in healthy HGFs. The high mtDNA levels observed in the cytosol and supernatants of CP HGFs were more pronounced in the presence of LPS than in HGFs without LPS, indicating that mtDNA release was maintained in inflamed cells. It is possible that LPS, a major trigger of periodontitis, enables mtDNA release from mitochondria in the periodontitis mouse model and in periodontitis patients, but the retention of mtDNA efflux in HGFs from CP patients remained insufficiently understood. Our data showed that LPS increased ROS levels and mPTP opening, and that these changes persisted in HGFs for the subsequent three generations. mtDNA efflux activity was similar between HGFs passaged for three generations after LPS stimulation and HGFs treated directly with LPS, in line with the above findings. In addition, we demonstrated that LPS upregulated ROS generation through PDK2 inhibition even in HGFs of the subsequent three generations. It has been reported that PDK2 activation has beneficial effects on ROS suppression [39]. Thus, we reasoned that LPS might mediate irreversibly high ROS generation by downregulating PDK2 expression [6, 40], leading to sustained mtDNA release activity even in later-generation HGFs without LPS stimulation. As widely reported in the literature, LPS activates Toll-like receptors (TLRs), which are abundantly expressed in inflammatory cells, leading to ROS production as well as lower PDK2 expression [8, 41]. ROS have been reported to trigger mtDNA damage and release into the cytosol in cancer [42]. Another study showed that mitochondrial ROS induced inflammation by disrupting mtDNA maintenance [15]. In agreement, another study found that LPS induced the accumulation of free mtDNA outside mitochondria, contributing to inflammation via TLR9 activation [43]. Here, we provide evidence that mtDNA efflux arises from LPS-mediated ROS activation through PDK2 suppression in HGFs. The exact reason for this phenomenon is unclear. It is crucial to note that studies have observed the transfer of entire mitochondria between cells [44]. However, whether entire mitochondria or only mtDNA is transferred remains controversial [45]. Therefore, mtDNA is thought to be a signaling molecule that spreads inflammatory signals across a population of cells.
This suggests that inflammation could spread between cells via the detection of mtDNA [9]. Based on these results, we propose that LPS-induced mtDNA efflux is maintained in HGFs and is closely linked to sustained ROS overproduction. Some studies have described mtDNA release in the context of cell apoptosis [46], while others have indicated that mPTP opening leads to increased mtDNA release [46]. Given that apoptosis was similar among our groups, we focused on the role of mPTP opening in mtDNA release in HGFs. Consistent with its inhibitory effect on mPTP opening, CsA effectively suppressed mtDNA efflux from mitochondria and reduced cytosolic mtDNA levels in inflamed HGFs. These results highlight that mPTP opening potentially modulates mtDNA release in HGFs during periodontitis. One possible physiological mechanism mediating increased mPTP opening could be linked to the ROS increase in inflammatory HGFs. Excessive ROS can trigger mPTP opening via mitochondrial ATP-sensitive potassium channels and voltage-dependent anion channel-1 oligomerization, suggesting that ROS acts as an important molecule driving downstream mPTP opening and, eventually, disruption of cellular functions [47, 48]. Of note, Bullon's earlier work, together with our recent work, demonstrated that HGFs and gingival tissues from CP patients exhibit impaired mitochondria and higher oxidative stress [8, 12, 49]. As a result of a burst of cellular ROS and mtROS, mPTP opening can be activated. Our data confirmed a positive relationship between ROS overproduction and mPTP opening in inflammatory HGFs, and both ROS and mPTP played critical roles in mtDNA release during periodontitis. Our results highlight ROS as one possible explanation for mPTP opening, contributing to mtDNA release in HGFs during periodontitis.

## 5. Conclusion

In summary, the mtDNA efflux maintained in primary HGFs could reflect the mitochondrial dysfunction detected in periodontitis. This work provides initial preclinical evidence that mtDNA efflux in HGFs is a candidate biomarker for predicting periodontitis. Beyond this focused investigation of mtDNA efflux in HGFs during inflammation, our results also indicate that the ROS/mPTP pathway could be the principal mediator of mtDNA efflux in inflamed HGFs. Further investigation is needed to determine how mtDNA release causes periodontitis, which may reveal new therapeutic strategies for the treatment of patients with periodontitis.

---

*Source: 1000213-2022-06-08.xml*
--- ## Abstract Mitochondria have their own mitochondrial DNA (mtDNA). Aberrant mtDNA is associated with inflammatory diseases. mtDNA is believed to induce inflammation via the abnormal mtDNA release. Periodontitis is an infectious, oral inflammatory disease. Human gingival fibroblasts (HGFs) from patients with chronic periodontitis (CP) have shown to generate higher reactive oxygen species (ROS) that cause oxidative stress and have decreased mtDNA copy number. Firstly, cell-free mtDNA was identified in plasma from CP mice through qRT-PCR. Next, we investigated whether mtDNA efflux was maintained in primary cultures of HGFs from CP patients and the possible underlying mechanisms using adenovirus-mediated transduction live cell imaging and qRT-PCR analysis. Here, we reported that mtDNA was increased in plasma from the CP mice. Additionally, we confirmed that CP HGFs had significant mtDNA efflux from mitochondria compared with healthy HGFs. Furthermore, lipopolysaccharide (LPS) fromPorphyromonas gingivalis can also cause mtDNA release in healthy HGFs. Mechanistically, LPS upregulated ROS levels and mitochondrial permeability transition pore (mPTP) opening by inhibition of pyruvate dehydrogenase kinase (PDK)2 expression, resulting in mtDNA release. Importantly, mtDNA efflux was even persistent in HGFs after LPS was removed and cells were passaged to the next three generations, indicating that mtDNA abnormalities were retained in HGFs in vitro, similar to the primary hosts. Taken together, our results elucidate that mtDNA efflux was maintained in HGFs from periodontitis patients through abnormal ROS/mPTP activity. Therefore, our work indicates that persistent mtDNA efflux may be a possible diagnostic and therapeutic target for patients with periodontitis. --- ## Body ## 1. Introduction Periodontal inflammation is known to affect 20%-50% of the global population, and it often interacts with other inflammatory diseases such as heart disease and diabetes [1–3]. Periodontitis is associated with lipopolysaccharide (LPS) from the cell walls of gram-negative bacteria-mediated inflammatory responses and represents the most common cause of teeth loss [1, 4, 5]. Increasing evidences suggest that mitochondrial dysfunction appears to result in periodontitis during LPS stimulus [6–8]. As such, abnormal mitochondria are considered to be one of the major contributors to the periodontitis development. Several mitochondrial components, including mitochondrial DNA (mtDNA), have also been implicated in inflammatory responses [9].mtDNA exists in mitochondrial matrix and is in intimate contact with the electron transport chain, one of the principal sources of reactive oxygen species (ROS). Therefore, mtDNA is particularly susceptible to oxidation, which can cause mutations and damages, leading to the pathogenesis of inflammation [10]. ROS production and ROS produced in mitochondria (mtROS) are indeed significantly enhanced in human gingival fibroblasts (HGFs) after LPS stimulation or from periodontitis hosts [11]. These results indicate that oxidative stress is induced during periodontitis [12]. According to current studies linking oxidative stress to decreased mtDNA copy number [13, 14], it is becoming clear that mtDNA disruption may be associated with chronic inflammation [15]. Consistent with this assumption, previous research has also demonstrated that mtDNA deletion is present in the gingival tissues of patients with periodontitis [16]. 
Moreover, a decrease in mtDNA in periodontitis rats suggested that aberrant mtDNA might contribute to aggravated periodontitis [7]. Considering our recent observation that HGFs and gingival tissues of patients with periodontitis in vitro had decreased mtDNA levels and decreased mitochondrial matrix protein expression, especially in pyruvate dehydrogenase kinase 2 (PDK2) when compared with those from healthy subjects [8], suggesting that mtDNA and mitochondria disruption in peripheral HGFs might replicate the mitochondrial dysfunction observed in vivo during periodontitis development. Therefore, we hypothesized that abnormal mtDNA might be maintained in HGFs in vitro connecting the disease in vivo with a certain mechanism. Answering these problems will improve our understanding of the periodontitis etiology, and it might lead to new treatment options.Superresolution imaging demonstrated that mtDNA constitutes one copy of mtDNA and a number of different proteins, presenting densely compacted nucleoids [17]. Mitochondrial transcription factor A (TFAM) is the most notable mtDNA nucleoid protein that can be assumed to specifically recognize mtDNA [15]. mtDNA can escape from mitochondria and release into the cytoplasm under various pathological situations [18, 19]. Multiple major factors have been attributed to driving mtDNA release from damaged mitochondria, including the opening of mitochondrial permeability transition pore (mPTP), mitochondrial stress, and calcium overload [20–22]. Nonetheless, the biological mechanisms provide limited information on this process during periodontitis conditions. Our recent study demonstrated that mitochondria in HGFs from periodontitis patients appeared to retain many of the damaged features, as observed in donors [8]. In addition, previous studies have suggested that the primary host has profound influence on the cells in vitro, such as higher oxidative stress in HGFs of periodontitis patients than that of healthy individuals [8, 23]. However, differences of mtDNA release in HGFs from chronic periodontitis (CP) patients and healthy HGFs have not been tested and if this abnormal intracellular mtDNA activity would last need to be further elucidated. Thus, therapies targeting mtDNA may become a potential approach to patients suffered from severe recurrent periodontitis.The mPTP spans the mitochondrial inner membrane, and its formation is associated with various cellular stresses [24, 25]. Interestingly, the opening of mPTP has been detected in the metabolic stress observed in inflammatory diseases [26]. More recently, using an in vitro and in vivo approach, studies have shown that genetic removal of one of the mPTP component proteins ameliorated mtDNA release into the cytoplasm during the neuroinflammatory response [20]. Pharmacological inhibition of mPTP by cyclosporin A (CsA) has also been shown to be effective in preventing mtDNA leakage into the cytoplasm [20]. Despite these results in previous studies, the notion that the opening of mPTP may directly drive mtDNA efflux remains controversial and is still unclear in periodontitis. It has been reported that ROS contributes directly to mPTP opening during ischemia-reperfusion [27]. As a result of cellular ROS and mtROS outburst, mPTP opening can be activated. 
Nevertheless, its association with mPTP involved in mtDNA efflux in periodontitis is scarcely understood.In this study, we discovered the differences in the mtDNA efflux process, ROS levels, and mPTP opening between primary HGFs, isolated from patients with CP and age-matched periodontally healthy patients. This elevated mtDNA efflux together with high ROS levels, and mPTP opening in CP HGFs could be more enhanced in response to LPS. Furthermore, this identified mtDNA efflux as an important modifier could be maintained in HGFs even withdrawing the LPS stimulation after passages. Consequently, we explored if the ROS/mPTP pathway involving in the mtDNA efflux along with the progression of periodontitis. This may be a promising target for early diagnosing periodontitis and provides preclinical evidence for therapeutic strategy to people with periodontal inflammation tolerating common anti-microorganism therapy. ## 2. Materials and Methods ### 2.1. Ethics Approval The study was approved by the Review Board and Ethics Committee of Peking University Health Science Center (PKUS-SIRB-2013017) and conducted in agreement with the Declaration of Helsinki II. Written informed consent was obtained from all subjects before inclusion in the study. All animal work was approved by the Review Board and Ethics Committee of Peking University Health Science Center (LA2018076). ### 2.2. Animals and Experimental Groups Specific-pathogen-free male C57BL/6 wild-type mice (6-wk-old) (Figure1(a)) were purchased from Experimental Animal Laboratory, Peking University Health Science Center, in compliance with established polices. All mice were randomly divided into the normal control groups or CP groups of four mice each. The control group was left untreated, and the CP group had their maxillary second molar tooth ligated with a 5-0 silk ligature (Roboz Surgical Instrument Co, MD, USA) (Figure 1(a)). The ligatures remained in place in CP groups throughout the experimental period. All mice were sacrificed at three weeks postligation (Figure 1(a)). Microcomputed tomography (CT) was used for assuring the CP model was established successfully.Figure 1 Cell-free- (cf-) mtDNA in plasma from chronic periodontitis mice and control healthy mice. (a) 5-0 silk suture was sutured for three weeks passing around the maxillary second molar in 6-week-old mice for establishing experimental chronic periodontitis (CP) mouse model. Control normal mice had no treatment. (b–c) Micro-CT showed obviously increased bone loss in CP mice after ligation for three weeks when compared to the control group. One representative image for 2-dimensional and 3-Dmode is shown. The red arrow represents bone loss areas. The yellow line indicates the distance between cement-enamel junction (CEJ) and alveolar bone crest (AEJ). (d) ND1 levels in plasma between CP and control mice (n=4). ∗p<0.05. (a)(b)(c)(d) ### 2.3. Microcomputed Tomography In brief, after sacrificing the mice, the maxillary teeth were carefully dissected and soft tissues were removed. The sample was fixed with 4% paraformaldehyde for 24 h, and scanned using theμCT50 (Scanco Medical) with a resolution of 1024×1024, pixel size of 15×15μm, and layer spacing of 15 μm. The region of interest was assessed by 3D reconstructed. Bone loss was evaluated by 3D micro-CT. ### 2.4. Human Subjects HGFs were obtained from six CP patients and six age-matched healthy donors. These participants were recruited from the Department of Periodontology, Peking University School and Hospital of Stomatology. 
The exclusion criteria included smoking and systemic health issues including hypertension, diabetes, and immune-related diseases within the past six months. CP was defined according to the American Academy of Periodontology and European Federation of Periodontology criteria based on staging and grading [28]. CP patients included in this study were grade B and stage III. Gingival tissues from CP were acquired through flap surgery, with PD≥6mm. Tissues in the healthy group were harvested during crown lengthening surgery, with PD<4mm. Table 1 lists the detailed characteristics of the participants.Table 1 Clinical characteristics at surgery site of patients included in this study. AbbreviationNumber of patientsRange of agePercent womenBI PD (mm)CAL (mm)Con627-40501-20-0.5CP633-4566.76-104-7 ### 2.5. Primary Culture of HGFs HGFs were prepared from the gingival tissues of six CP patients and six healthy controls during periodontal surgery. Cells were grown in Dulbecco’s modified Eagle’s medium supplemented with 10% fetal bovine serum (Gibco, Thermo Fisher Scientific, USA) and 1% penicillin-streptomycin. The cells were incubated at 37°C with 5% CO2. The medium was changed after a week. In approximately two weeks, the cells reached subconfluency and the pieces of gingival tissue were removed from the culture flask. Cells from the third to the eighth passage were used in the subsequent study. ### 2.6. Cell Treatment and Stimulation HGFs from healthy and CP patients were stimulated with or without 5μg/mL LPS from Porphyromonas gingivalis (P.g) (ATCC33277, Standard, InvivoGen, San Diego, CA, USA) for 24 h. To investigate whether inflammatory features of donors were retained in HGFs, HGFs from healthy donors were treated with 5 μg/mL of LPS for 24 h, followed by discarding the medium then passaging to the next three generations for analysis. The cells were assessed by indicated assays and compared with cells from healthy donors that were directly stimulated with the same amount of LPS for the same time. ### 2.7. Cellular ROS and Mitochondrial ROS (mtROS) Detection 2′,7′-Dichlorodihydrofluorescein diacetate (H2DCF-DA) (Sigma-Aldrich, St. Louis, MO) and MitoSOX Red (Invitrogen, Carlsbad, CA) were used to detect total ROS and mtROS, respectively, as previously described [11]. HGFs were loaded with H2DCF-DA (10 μM) or MitoSOX Red (5 μM) for 30 min and then observed using a microscope [11]. To inhibit ROS levels, HGFs were preincubated with 3 mM N-acetylcysteine (NAC) (Sigma Aldrich, St. Louis, MO) for 2 h. The mtROS scavenger 50 μM Mito-TEMPO (Santa Cruz Biotech, Dallas, TX) were pretreated for 2 h. ### 2.8. Western Blotting Proteins were extracted from HGFs using ice-cold radioimmunoprecipitation (RIPA) lysis buffer (Solarbio). After being quantified by BCA (Thermo Fisher Scientific), the protein samples were mixed with loading buffer (Solarbio), separated by electrophoresis on SDS-PAGE. The proteins in the gel were transferred on a polyvinylidene fluoride (PVDF) membrane (Beyotime). The membranes were blocked with 5% skimmed milk (Solarbio) and incubated overnight at 4°C with primary antibody. The membranes were washed with Tris-buffered saline and incubated with secondary antibody for 90 min at room temperature. The PVDF membranes were subjected to chemiluminescence detection using an ECL Western Blotting Detection Kit (Solarbio). ### 2.9. 
DNA Isolation and mtDNA Quantification by Quantitative Real-Time Polymerase Chain Reaction (qRT-PCR) Genomic DNA from HGFs was extracted using the Universal Genomic DNA Kit (ZOMAN, Beijing, China), following the manufacturer’s instructions. The mtDNA levels in HGFswere assessed using primers against mitochondrial genes (ND1), while nuclear 18S rRNA served as a loading control. Detailed ND1 and 18S rRNA sequences are presented in Table2. Cytosolic mtDNA extraction was performed according to the methods established by West et al. [29]. The plasma from the mice was centrifuged at 1000 g for 5 min, and then the supernatant was centrifuged a second time at 5000 g for 10 min. The top 80% of the volume can be used for cell-free- (cf-) mtDNA quantification. DNA from cell supernatants, cf-mtDNA in plasma, and cytosol DNA (200 μL) were isolated using the QIAamp DNA Mini Kit (Qiagen, Germany). ND1 levels in the samples were analyzed according to a standard curve based on ND1 plasmid (Sangon, Shanghai, China) levels.Table 2 List of primers for real-time PCR studies. GenePrimerSequenceND 1Forward primer5′-CACACTAGCAGAGACCAACCGAAC-3′Reverse primer5′-CGGCTATGAAGAATAGGGCGAAGG-3′18S rRNAForward primer5′-GACTCAACACGGGAAACCTCACC-3′Reverse primer5′-ACCAGACAAATCGCTCCACCAAC-3′ ### 2.10. Adenovirus Transduction for Mitochondria and mtDNA Detection HGFs were transduced with adenovirus encoding the mitochondrial outer-membrane protein Tomm 20 bearing a mCherry fluorescence protein. mtDNA was detected by coexpression of TFAM, tagged with the green fluorescent protein (GFP) variant mNeonGreen. HGFs were seeded on 10 mm round confocal glass coverslips at a density of 50% and were infected with specified amounts of the Tomm 20-mCherry and TFAM-mNeonGreen adenoviruses. Forty-eight hours after transduction, the medium was changed, and the cells were processed for further analysis. ### 2.11. Live Cell Imaging Microscopy Live cells were captured using a fluorescence microscope (TCS-STED; Leica, Wetzlar, Germany) with a63×oil immersion objective. For all experiments, HGFs were grown in 10 mm round glass bottom confocal wells (Cedarlane, Southern Ontario, Canada). Laser excitation was achieved at 488 nm for mNeonGreen and 561 nm for mCherry. LPS treatment was performed after sample mounting in the medium chamber, if needed. HGFs expressing Tomm 20-mCherry and TFAM-mNeonGreen were imaged serially at every 10 s for 10-15 min. Image processing and analysis were performed using ImageJ (NIH, http://rsb.info.nih.gov/ij/) and Huygens Professional software (Scientific Volume Imaging, Amsterdam, Holland). ### 2.12. Detection of mPTP Opening HGFs were incubated with 50 mM cobalt chloride for 15 min, before treatment with 1μM Calcein Green AM (Solarbio, Beijing, China) for 30 min. Free Calcein quenching by cobalt chloride preserved mitochondrial integrity, which could be used to indicate mPTP opening. Calcein fluorescence was detected by confocal microscopy (Leica) using a 488 nm excitation wavelength. Quantification of the Calcein fluorescence intensity was conducted by analyzing 20 cells for every indicated condition using ImageJ software. To prevent mPTP opening, HGFs were preincubated with 0.5 μM cyclosporine A (CsA; Sigma) for 2 h, following the manufacturer’s recommendations. ### 2.13. Flow Cytometric Analysis Cells were briefly washed with1×phosphate-buffered saline (PBS), resuspended in 1×binding buffer, and centrifuged at 300×g for 10 min. The pellets were resuspended with 1×binding buffer at a density of 1×106 cells/mL. 
Cells were replated in a flow cytometric tube at a density of 1×105 cells/mL and processed for Annexin V-FITC staining (Solarbio, Beijing, China)for 10 min at 20-25°C. Subsequently, the cells were stained with propidium iodide (PI) for 5 min at 20-25°C and analyzed for apoptosis by flow cytometry. ### 2.14. Statistical Analysis Data are expressed as themean±standarderror (SE). All p values were determined by two-way Student’s t-test or one-way analysis of variance (ANOVA) with a post hoc Student Knewman-Keuls test for multiple comparisons. Significant differences were accepted at p<0.05. Statistical analysis was performed using GraphPad Prism software (version 9.00; GraphPad Software). ## 2.1. Ethics Approval The study was approved by the Review Board and Ethics Committee of Peking University Health Science Center (PKUS-SIRB-2013017) and conducted in agreement with the Declaration of Helsinki II. Written informed consent was obtained from all subjects before inclusion in the study. All animal work was approved by the Review Board and Ethics Committee of Peking University Health Science Center (LA2018076). ## 2.2. Animals and Experimental Groups Specific-pathogen-free male C57BL/6 wild-type mice (6-wk-old) (Figure1(a)) were purchased from Experimental Animal Laboratory, Peking University Health Science Center, in compliance with established polices. All mice were randomly divided into the normal control groups or CP groups of four mice each. The control group was left untreated, and the CP group had their maxillary second molar tooth ligated with a 5-0 silk ligature (Roboz Surgical Instrument Co, MD, USA) (Figure 1(a)). The ligatures remained in place in CP groups throughout the experimental period. All mice were sacrificed at three weeks postligation (Figure 1(a)). Microcomputed tomography (CT) was used for assuring the CP model was established successfully.Figure 1 Cell-free- (cf-) mtDNA in plasma from chronic periodontitis mice and control healthy mice. (a) 5-0 silk suture was sutured for three weeks passing around the maxillary second molar in 6-week-old mice for establishing experimental chronic periodontitis (CP) mouse model. Control normal mice had no treatment. (b–c) Micro-CT showed obviously increased bone loss in CP mice after ligation for three weeks when compared to the control group. One representative image for 2-dimensional and 3-Dmode is shown. The red arrow represents bone loss areas. The yellow line indicates the distance between cement-enamel junction (CEJ) and alveolar bone crest (AEJ). (d) ND1 levels in plasma between CP and control mice (n=4). ∗p<0.05. (a)(b)(c)(d) ## 2.3. Microcomputed Tomography In brief, after sacrificing the mice, the maxillary teeth were carefully dissected and soft tissues were removed. The sample was fixed with 4% paraformaldehyde for 24 h, and scanned using theμCT50 (Scanco Medical) with a resolution of 1024×1024, pixel size of 15×15μm, and layer spacing of 15 μm. The region of interest was assessed by 3D reconstructed. Bone loss was evaluated by 3D micro-CT. ## 2.4. Human Subjects HGFs were obtained from six CP patients and six age-matched healthy donors. These participants were recruited from the Department of Periodontology, Peking University School and Hospital of Stomatology. The exclusion criteria included smoking and systemic health issues including hypertension, diabetes, and immune-related diseases within the past six months. 
CP was defined according to the American Academy of Periodontology and European Federation of Periodontology criteria based on staging and grading [28]. CP patients included in this study were grade B and stage III. Gingival tissues from CP were acquired through flap surgery, with PD≥6mm. Tissues in the healthy group were harvested during crown lengthening surgery, with PD<4mm. Table 1 lists the detailed characteristics of the participants.Table 1 Clinical characteristics at surgery site of patients included in this study. AbbreviationNumber of patientsRange of agePercent womenBI PD (mm)CAL (mm)Con627-40501-20-0.5CP633-4566.76-104-7 ## 2.5. Primary Culture of HGFs HGFs were prepared from the gingival tissues of six CP patients and six healthy controls during periodontal surgery. Cells were grown in Dulbecco’s modified Eagle’s medium supplemented with 10% fetal bovine serum (Gibco, Thermo Fisher Scientific, USA) and 1% penicillin-streptomycin. The cells were incubated at 37°C with 5% CO2. The medium was changed after a week. In approximately two weeks, the cells reached subconfluency and the pieces of gingival tissue were removed from the culture flask. Cells from the third to the eighth passage were used in the subsequent study. ## 2.6. Cell Treatment and Stimulation HGFs from healthy and CP patients were stimulated with or without 5μg/mL LPS from Porphyromonas gingivalis (P.g) (ATCC33277, Standard, InvivoGen, San Diego, CA, USA) for 24 h. To investigate whether inflammatory features of donors were retained in HGFs, HGFs from healthy donors were treated with 5 μg/mL of LPS for 24 h, followed by discarding the medium then passaging to the next three generations for analysis. The cells were assessed by indicated assays and compared with cells from healthy donors that were directly stimulated with the same amount of LPS for the same time. ## 2.7. Cellular ROS and Mitochondrial ROS (mtROS) Detection 2′,7′-Dichlorodihydrofluorescein diacetate (H2DCF-DA) (Sigma-Aldrich, St. Louis, MO) and MitoSOX Red (Invitrogen, Carlsbad, CA) were used to detect total ROS and mtROS, respectively, as previously described [11]. HGFs were loaded with H2DCF-DA (10 μM) or MitoSOX Red (5 μM) for 30 min and then observed using a microscope [11]. To inhibit ROS levels, HGFs were preincubated with 3 mM N-acetylcysteine (NAC) (Sigma Aldrich, St. Louis, MO) for 2 h. The mtROS scavenger 50 μM Mito-TEMPO (Santa Cruz Biotech, Dallas, TX) were pretreated for 2 h. ## 2.8. Western Blotting Proteins were extracted from HGFs using ice-cold radioimmunoprecipitation (RIPA) lysis buffer (Solarbio). After being quantified by BCA (Thermo Fisher Scientific), the protein samples were mixed with loading buffer (Solarbio), separated by electrophoresis on SDS-PAGE. The proteins in the gel were transferred on a polyvinylidene fluoride (PVDF) membrane (Beyotime). The membranes were blocked with 5% skimmed milk (Solarbio) and incubated overnight at 4°C with primary antibody. The membranes were washed with Tris-buffered saline and incubated with secondary antibody for 90 min at room temperature. The PVDF membranes were subjected to chemiluminescence detection using an ECL Western Blotting Detection Kit (Solarbio). ## 2.9. DNA Isolation and mtDNA Quantification by Quantitative Real-Time Polymerase Chain Reaction (qRT-PCR) Genomic DNA from HGFs was extracted using the Universal Genomic DNA Kit (ZOMAN, Beijing, China), following the manufacturer’s instructions. 
The mtDNA levels in HGFswere assessed using primers against mitochondrial genes (ND1), while nuclear 18S rRNA served as a loading control. Detailed ND1 and 18S rRNA sequences are presented in Table2. Cytosolic mtDNA extraction was performed according to the methods established by West et al. [29]. The plasma from the mice was centrifuged at 1000 g for 5 min, and then the supernatant was centrifuged a second time at 5000 g for 10 min. The top 80% of the volume can be used for cell-free- (cf-) mtDNA quantification. DNA from cell supernatants, cf-mtDNA in plasma, and cytosol DNA (200 μL) were isolated using the QIAamp DNA Mini Kit (Qiagen, Germany). ND1 levels in the samples were analyzed according to a standard curve based on ND1 plasmid (Sangon, Shanghai, China) levels.Table 2 List of primers for real-time PCR studies. GenePrimerSequenceND 1Forward primer5′-CACACTAGCAGAGACCAACCGAAC-3′Reverse primer5′-CGGCTATGAAGAATAGGGCGAAGG-3′18S rRNAForward primer5′-GACTCAACACGGGAAACCTCACC-3′Reverse primer5′-ACCAGACAAATCGCTCCACCAAC-3′ ## 2.10. Adenovirus Transduction for Mitochondria and mtDNA Detection HGFs were transduced with adenovirus encoding the mitochondrial outer-membrane protein Tomm 20 bearing a mCherry fluorescence protein. mtDNA was detected by coexpression of TFAM, tagged with the green fluorescent protein (GFP) variant mNeonGreen. HGFs were seeded on 10 mm round confocal glass coverslips at a density of 50% and were infected with specified amounts of the Tomm 20-mCherry and TFAM-mNeonGreen adenoviruses. Forty-eight hours after transduction, the medium was changed, and the cells were processed for further analysis. ## 2.11. Live Cell Imaging Microscopy Live cells were captured using a fluorescence microscope (TCS-STED; Leica, Wetzlar, Germany) with a63×oil immersion objective. For all experiments, HGFs were grown in 10 mm round glass bottom confocal wells (Cedarlane, Southern Ontario, Canada). Laser excitation was achieved at 488 nm for mNeonGreen and 561 nm for mCherry. LPS treatment was performed after sample mounting in the medium chamber, if needed. HGFs expressing Tomm 20-mCherry and TFAM-mNeonGreen were imaged serially at every 10 s for 10-15 min. Image processing and analysis were performed using ImageJ (NIH, http://rsb.info.nih.gov/ij/) and Huygens Professional software (Scientific Volume Imaging, Amsterdam, Holland). ## 2.12. Detection of mPTP Opening HGFs were incubated with 50 mM cobalt chloride for 15 min, before treatment with 1μM Calcein Green AM (Solarbio, Beijing, China) for 30 min. Free Calcein quenching by cobalt chloride preserved mitochondrial integrity, which could be used to indicate mPTP opening. Calcein fluorescence was detected by confocal microscopy (Leica) using a 488 nm excitation wavelength. Quantification of the Calcein fluorescence intensity was conducted by analyzing 20 cells for every indicated condition using ImageJ software. To prevent mPTP opening, HGFs were preincubated with 0.5 μM cyclosporine A (CsA; Sigma) for 2 h, following the manufacturer’s recommendations. ## 2.13. Flow Cytometric Analysis Cells were briefly washed with1×phosphate-buffered saline (PBS), resuspended in 1×binding buffer, and centrifuged at 300×g for 10 min. The pellets were resuspended with 1×binding buffer at a density of 1×106 cells/mL. Cells were replated in a flow cytometric tube at a density of 1×105 cells/mL and processed for Annexin V-FITC staining (Solarbio, Beijing, China)for 10 min at 20-25°C. 
Subsequently, the cells were stained with propidium iodide (PI) for 5 min at 20-25°C and analyzed for apoptosis by flow cytometry. ## 2.14. Statistical Analysis Data are expressed as themean±standarderror (SE). All p values were determined by two-way Student’s t-test or one-way analysis of variance (ANOVA) with a post hoc Student Knewman-Keuls test for multiple comparisons. Significant differences were accepted at p<0.05. Statistical analysis was performed using GraphPad Prism software (version 9.00; GraphPad Software). ## 3. Results ### 3.1. mtDNA Release from Mitochondria during Periodontitis Development Micro-CT results revealed that alveolar bone around the ligated molar was significantly reduced in CP mice compared to control mice, suggesting experimental periodontitis in the CP group established (Figures1(b) and 1(c)). Intriguingly, mtDNA in plasma from CP mice were enriched compared to age-matched wild-type control mice (Figure 1(d)). These results indicated that mtDNA release might be involved in periodontitis development. However, mtDNA efflux in HGFs during periodontitis is still unclear. Next, we transduced primary HGFs with adenovirus encoding Tomm 20-mCherry and TFAM-mNeonGreen to show mitochondria and mtDNA, respectively (Figure 2(a)). mtDNA were detected robust release into the cytoplasm in CP HGFs (Figure 2(b)). This process was also found by real-time microscopy (Figure 2(c), Movie 1). In contrast, no mtDNA efflux was detected in healthy HGFs (Movie S1). LPS caused remarkable mtDNA release in healthy HGFs and led to more significant mtDNA release in periodontitis-affected samples (Figures 2(b)–2(d), and 2(e), Movie 2 and 3).Next, we calculated a significant increase in the percentage of HGFs with mtDNA efflux in CP HGFs as compared with that in control HGFs (Figure 2(f)). LPS treatment caused marked increase in the percentage of HGFs with mtDNA efflux compared with those without LPS treatment in healthy and CP states (Figure 2(f)). Moreover, qRT-PCR confirmed that mtDNA release into cytosol and out of cells during periodontitis (Figures 2(g)–2(i)). These results indicated that mtDNA release might be involved in periodontitis development.Figure 2 mtDNA released from mitochondria in human gingival fibroblasts from patients with chronic periodontitis. (a) Using fluorescent fusion proteins to visualize mitochondria (Tomm 20-mCherry) and mtDNA (TFAM-mNeonGreen). TM: transmembrane domain; MLS: mitochondrial localization sequence; DBD1 and DBD2: DNA binding domain-1 and DNA binding domain-2. (b) Typical illustration of the human gingival fibroblasts (HGFs) for mitochondria and mtDNA among control, CP, and with or without LPS stimulation (5μg/mL, 24 h) groups (scale bars: 5 μm). (c) Still image of mtDNA efflux in HGFs from CP patients (scale bar: 2.5 μm, see Movie 1). (d) Still image of mtDNA efflux in HGFs with LPS stimulation (scale bar: 2.5 μm, see Movie 2). (e) Still image showing mtDNA efflux in HGFs from CP patients with LPS stimulation (scale bar: 2.5 μm, see Movie 3). (f) Percentage of HGFs with, without mtDNA efflux. Data are the mean±SE of 20 fields in each group. (g–i) mtDNA in HGFs, cell supernatants, and cell cytosol. Data are obtained by six independent experiments. ∗p<0.05,∗∗p<0.01,∗∗∗p<0.001,and∗∗∗∗p<0.0001. (a)(b)(c)(d)(e)(f)(g)(h)(i) ### 3.2. mtDNA Efflux Maintained in HGFs during Periodontitis LPS is a principal factor that determines the periodontal inflammation; we decided to clarify if LPS causes mtDNA efflux maintained in HGFs. 
In these experiments, healthy HGFs were exposed to LPS stimulation for 24 h. Next, LPS was removed and HGFs were cultured into the next three generations for analysis (Figure3(a)). In contrast, healthy HGFs were directly treated with LPS for 24 h (Figure 3(b)). The results showed that LPS reinforced the mtDNA efflux effect even in the next three generational HGFs (Figures 3(c)–3(e)). No significant differences were observed compared to the LPS directly treated HGFs (Figure 3(f)). Next, we examined the mtDNA levels in the cytosol using qRT-PCR analyzing these groups. LPS directly stimulated HGFs, and LPS treatment following passages of HGFs were both enriched in cytosolic mtDNA (Figure 3(g)). In addition, the percentages of HGFs with mtDNA efflux between LPS direct treatment and LPS treatment followed by HGFs passages were similar (Figure 3(h)). These results suggest that LPS treatment can enhance this mtDNA efflux phenomenon, and the facilitative mtDNA release effects can be maintained in HGFs even in the next-generations HGFs, which is consistent with those mtDNA release of CP HGFs and CP mice.Figure 3 mtDNA efflux maintained inhuman gingival fibroblasts during periodontitis. (a) Human gingival fibroblasts (HGFs) were treated with lipopolysaccharide (LPS) (5μg/mL, 24 h), which was later removed from the culture medium. These LPS-treated HGFs were cultured to next three generations for analysis. (b) HGFs were cultured with LPS direct stimulation (5 μg/mL, 24 h), which will be directly analyzed for mtDNA activity. (c–e) Images of HGFs in the A group for analysis after passaging three generations, respectively. (f) Images of HGFs in the B group. Scale bars: 5 μm. (g) mtDNA levels in cytosol among the five groups of HGFs. n=6. (h) Percentage of HGFs with, without mtDNA efflux. Data in (h) are the mean±standarderror (SE) of 20 fields per group. ns=no significant difference. ∗∗p<0.01,∗∗∗p<0.001,and∗∗∗∗p<0.0001. ### 3.3. ROS and mtROS Is Overproduction in HGFs from CP Patients To investigate in further details how mtDNA efflux effect remained during periodontitis at cellular level, we firstly analyzed the ROS and mtROS levels in HGFs from different hosts. Control healthy HGFs had the lowest ROS and mtROS levels (Figure4(a)). HGFs from CP had significantly greater levels of ROS and mtROS (Figure 4(a)). We found that the ROS and mtROS were more activated in the presence of LPS compared to those in the absence of LPS groups (Figure 4(a)). In addition, LPS can affect ROS and mtROS even in the next three generations HGFs (Figure 4(a)). Furthermore, the levels of these fluorescent signals reflecting ROS and mtROS levels in Figure 4(a) were calculated by ImageJ (Figures 4(b) and 4(c)). Western blot analysis showed PDK2 exhibited decreased expression in CP HGFs (Figures 4(d) and 4(e)). Meanwhile, the expression levels of PDK2 were also reduced after LPS stimulation and showed low levels even in the next three-generation HGFs (Figures 4(d) and 4(e)). In summary, CP HGFs are primed for ROS activation, and LPS can persistently upregulate the ROS levels in HGFs by suppressing the PDK2 expression. Its regulation may contribute to this mtDNA efflux process.Figure 4 Reactive oxygen species (ROS) and mitochondrial ROS is overproduction in human gingival fibroblasts from chronic periodontitis patients. (a) Human gingival fibroblasts (HGFs) were incubated with 2′,7′-dichlorodihydrofluorescein diacetate (H2DCF-DA) (10 μM, 30 minutes) to indicate the ROS levels (green) in HGFs (scale bars: 75 μm). 
HGFs were incubated with MitoSOX Red (5 μM, 30 minutes) to visualize mitochondrial ROS (mtROS) levels (red) (scale bars: 50 μm). (b, c) The arbitrary fluorescence intensity of ROS and mtROS in (a) were calculated by ImageJ based on per 10 cells in each group from (a). Data represent the mean±standarderror (SE) from 10 cells from each group. (d) Western blot to evaluate the protein expression of pyruvate dehydrogenase kinase 2 (PDK2). Glyceraldehyde-3-phosphate dehydrogenase (GAPDH) as the loading control. (e) Quantification of each band intensity with respect to loading control in D. n=3 (LPS: 5 μg/mL). Statistically significant p value is indicated as follows: ∗p<0.05,∗∗∗p<0.001,and∗∗∗∗p<0.0001 as compared with the control group; #p<0.05,##p<0.01,and###p<0.001 as compared with the CP group; ++p<0.01,+++p<0.001,and++++p<0.0001 as compared with the LPS group. (a)(b)(c)(d)(e) ### 3.4. mPTP Opening in HGFs from CP Patients via ROS Activation mPTP opening in HGFs was indicative using Calcein AM fluorescence (Figure5(a)). Control HGFs showed strong green fluorescence (Figure 5(a)), suggesting that mPTP remained in a closed state under normal condition [30]. However, the fluorescence was hardly detected in CP HGFs (Figure 5(a)). LPS further resulted in a much more decrease in fluorescence in the control and CP groups (Figure 5(a)). Decreased level of fluorescence signal was also detected in the LPS treated following passaging three generational HGFs (Figure 5(b)). A significant increase in fluorescence was observed in HGFs in the presence of CsA when compared with that in the absence of CsA (Figures 5(a)–5(d)). It was shown that inhibition of ROS and mtROS activation contributes to suppression of mPTP opening (Figures 5(a)–5(d)). Collectively, these data show that CP HGFs display mPTP opening and that mPTP opening in the LPS-treated HGFs was maintained within the HGFs even in the later three generations. Additionally, this observed mPTP opening is dependent on ROS activation.Figure 5 Human gingival fibroblasts (HGFs) from chronic periodontitis presented active mitochondrial permeability transition pore opening via reactive oxygen species. (a) Human gingival fibroblasts (HGFs) were loaded with cobalt chloride (50 mM, 15 minutes) and Calcein AM (green) to determine the opening of the mitochondrial permeability transition pore (mPTP) in HGFs in the presence or absence of lipopolysaccharide (LPS) treatment (5μg/mL, 24 h). The opening of mPTP in HGFs was measured after cyclosporin A (CsA) (0.5 μM, 2 h) or N-acetylcysteine (NAC) (3 mM, 2 h)or Mito-TEMPO (50 μM, 2 h) treatment (scale bars: 25 μm). (b) Images for mPTP opening in the control, LPS-treated, and LPS-treated group after passaging three generations (scale bars: 25 μm). (c) Quantification of the observed Calcein green signal in HGFs from (a, b). Mean±SE are indicated (n=20 cells). The CP group was observed a lower signal; LPS was also observeda lower signal compared with control HGFs. LPS can aggravate this lower signal in the control and CP groups, and this phenomenon can be retained in HGFs after LPS was removed and passaged to next three generations as compared with the control group. (d) The intensity of the indicated Calcein green signal was detected per 20 cells from the control, CP, LPS, and CP LPS groups with CsA, NAC, and Mito-TEMPO treatment compared to HGFs without any treatment, respectively. 
CsA, NAC, and Mito-TEMPO all downregulated the signal in the LPS, CP, and CP LPS groups, while they fail to induce this phenomenon in control HGFs. p values were determined by 1-way analysis of variance followed by post hoc tests. ∗∗p<0.01and∗∗∗∗p<0.0001; ns: not significant. (a)(b)(c)(d) ### 3.5. mtDNA Release in CP HGFs via ROS and mPTP Opening We performed real-time fluorescent microscopy for control, CP, LPS treatment, and CP LPSHGFs in the presence of CsA (Figure6(a)). It was observed that mtDNA displayed mild or no efflux in the four CsA-treated groups of HGFs (Figure 6(a)). These data demonstrated that mPTP was critical for the mtDNA release under these conditions. We performed qRT-PCR to detect the cytosolic mtDNA levels by the inhibitors of mPTP, ROS, and mtROS. CsA, NAC, and Mito-TEMPO all decreased the cytosolic mtDNA levels in the CP, LPS, and CP LPS groups when compared with the three groups without any treatment, whereas the control group showed similar cytosolic mtDNA levels in the presence and absence of CsA and NAC (Figure 6(b)). We also showed that Mito-TEMPO slightly decreased cytosolic mtDNA concentration in healthy HGFs (Figure 6(b)). When we examined the difference in apoptosis of HGFs among the four groups, we found that HGFs from CP, LPS, and CP LPS showed no significant apoptosis when compared with the HGFs of control healthy donors (Fig. S1). Cumulatively, these results provide further evidence that ROS-mPTP opening causes mtDNA release in CP and LPS-treated HGFs.Figure 6 mtDNA release from mitochondria in human gingival fibroblasts of chronic periodontitis patients via reactive oxygen species-mitochondrial permeability transition pore pathways. (a) Human gingival fibroblasts (HGFs) from the control, chronic periodontitis (CP), and lipopolysaccharide (LPS) stimulation infection (5μg/mL, 24 h), and CP LPS groups were pretreated with 0.5 μM cyclosporin A (CsA) for 2 h and subjected to analysis for mtDNA release. HGFs expressing Tomm 20-mCherry (red) and TFAM-mNeonGreen (green) revealed mtDNA nucleoid presented along with mitochondria. Yellow circles and blue arrows mark areas where mtDNA (green) clearly stops efflux from mitochondria (red). Scale bars: 2.5 μm. See Movies 4, 5, 6, and 7). (b) Bar graphs illustrate the average mtDNA levels in cytosol among four groups of HGFs with or without CsA, N-acetylcysteine (NAC) (3 mM, 2 h), and Mito-TEMPO (50 μM, 2 h) treatment. All quantified data represent the mean±SE. p values were determined by 1-way analysis of variance followed by post hoc tests. Graphs represent at least 3 independent experiments. ∗p<0.05,∗∗p<0.01,∗∗∗p<0.001,and∗∗∗∗p<0.0001; ns: not significant. (a)(b) ## 3.1. mtDNA Release from Mitochondria during Periodontitis Development Micro-CT results revealed that alveolar bone around the ligated molar was significantly reduced in CP mice compared to control mice, suggesting experimental periodontitis in the CP group established (Figures1(b) and 1(c)). Intriguingly, mtDNA in plasma from CP mice were enriched compared to age-matched wild-type control mice (Figure 1(d)). These results indicated that mtDNA release might be involved in periodontitis development. However, mtDNA efflux in HGFs during periodontitis is still unclear. Next, we transduced primary HGFs with adenovirus encoding Tomm 20-mCherry and TFAM-mNeonGreen to show mitochondria and mtDNA, respectively (Figure 2(a)). mtDNA were detected robust release into the cytoplasm in CP HGFs (Figure 2(b)). 
This process was also found by real-time microscopy (Figure 2(c), Movie 1). In contrast, no mtDNA efflux was detected in healthy HGFs (Movie S1). LPS caused remarkable mtDNA release in healthy HGFs and led to more significant mtDNA release in periodontitis-affected samples (Figures 2(b)–2(d), and 2(e), Movie 2 and 3).Next, we calculated a significant increase in the percentage of HGFs with mtDNA efflux in CP HGFs as compared with that in control HGFs (Figure 2(f)). LPS treatment caused marked increase in the percentage of HGFs with mtDNA efflux compared with those without LPS treatment in healthy and CP states (Figure 2(f)). Moreover, qRT-PCR confirmed that mtDNA release into cytosol and out of cells during periodontitis (Figures 2(g)–2(i)). These results indicated that mtDNA release might be involved in periodontitis development.Figure 2 mtDNA released from mitochondria in human gingival fibroblasts from patients with chronic periodontitis. (a) Using fluorescent fusion proteins to visualize mitochondria (Tomm 20-mCherry) and mtDNA (TFAM-mNeonGreen). TM: transmembrane domain; MLS: mitochondrial localization sequence; DBD1 and DBD2: DNA binding domain-1 and DNA binding domain-2. (b) Typical illustration of the human gingival fibroblasts (HGFs) for mitochondria and mtDNA among control, CP, and with or without LPS stimulation (5μg/mL, 24 h) groups (scale bars: 5 μm). (c) Still image of mtDNA efflux in HGFs from CP patients (scale bar: 2.5 μm, see Movie 1). (d) Still image of mtDNA efflux in HGFs with LPS stimulation (scale bar: 2.5 μm, see Movie 2). (e) Still image showing mtDNA efflux in HGFs from CP patients with LPS stimulation (scale bar: 2.5 μm, see Movie 3). (f) Percentage of HGFs with, without mtDNA efflux. Data are the mean±SE of 20 fields in each group. (g–i) mtDNA in HGFs, cell supernatants, and cell cytosol. Data are obtained by six independent experiments. ∗p<0.05,∗∗p<0.01,∗∗∗p<0.001,and∗∗∗∗p<0.0001. (a)(b)(c)(d)(e)(f)(g)(h)(i) ## 3.2. mtDNA Efflux Maintained in HGFs during Periodontitis LPS is a principal factor that determines the periodontal inflammation; we decided to clarify if LPS causes mtDNA efflux maintained in HGFs. In these experiments, healthy HGFs were exposed to LPS stimulation for 24 h. Next, LPS was removed and HGFs were cultured into the next three generations for analysis (Figure3(a)). In contrast, healthy HGFs were directly treated with LPS for 24 h (Figure 3(b)). The results showed that LPS reinforced the mtDNA efflux effect even in the next three generational HGFs (Figures 3(c)–3(e)). No significant differences were observed compared to the LPS directly treated HGFs (Figure 3(f)). Next, we examined the mtDNA levels in the cytosol using qRT-PCR analyzing these groups. LPS directly stimulated HGFs, and LPS treatment following passages of HGFs were both enriched in cytosolic mtDNA (Figure 3(g)). In addition, the percentages of HGFs with mtDNA efflux between LPS direct treatment and LPS treatment followed by HGFs passages were similar (Figure 3(h)). These results suggest that LPS treatment can enhance this mtDNA efflux phenomenon, and the facilitative mtDNA release effects can be maintained in HGFs even in the next-generations HGFs, which is consistent with those mtDNA release of CP HGFs and CP mice.Figure 3 mtDNA efflux maintained inhuman gingival fibroblasts during periodontitis. (a) Human gingival fibroblasts (HGFs) were treated with lipopolysaccharide (LPS) (5μg/mL, 24 h), which was later removed from the culture medium. 
These LPS-treated HGFs were cultured to next three generations for analysis. (b) HGFs were cultured with LPS direct stimulation (5 μg/mL, 24 h), which will be directly analyzed for mtDNA activity. (c–e) Images of HGFs in the A group for analysis after passaging three generations, respectively. (f) Images of HGFs in the B group. Scale bars: 5 μm. (g) mtDNA levels in cytosol among the five groups of HGFs. n=6. (h) Percentage of HGFs with, without mtDNA efflux. Data in (h) are the mean±standarderror (SE) of 20 fields per group. ns=no significant difference. ∗∗p<0.01,∗∗∗p<0.001,and∗∗∗∗p<0.0001. ## 3.3. ROS and mtROS Is Overproduction in HGFs from CP Patients To investigate in further details how mtDNA efflux effect remained during periodontitis at cellular level, we firstly analyzed the ROS and mtROS levels in HGFs from different hosts. Control healthy HGFs had the lowest ROS and mtROS levels (Figure4(a)). HGFs from CP had significantly greater levels of ROS and mtROS (Figure 4(a)). We found that the ROS and mtROS were more activated in the presence of LPS compared to those in the absence of LPS groups (Figure 4(a)). In addition, LPS can affect ROS and mtROS even in the next three generations HGFs (Figure 4(a)). Furthermore, the levels of these fluorescent signals reflecting ROS and mtROS levels in Figure 4(a) were calculated by ImageJ (Figures 4(b) and 4(c)). Western blot analysis showed PDK2 exhibited decreased expression in CP HGFs (Figures 4(d) and 4(e)). Meanwhile, the expression levels of PDK2 were also reduced after LPS stimulation and showed low levels even in the next three-generation HGFs (Figures 4(d) and 4(e)). In summary, CP HGFs are primed for ROS activation, and LPS can persistently upregulate the ROS levels in HGFs by suppressing the PDK2 expression. Its regulation may contribute to this mtDNA efflux process.Figure 4 Reactive oxygen species (ROS) and mitochondrial ROS is overproduction in human gingival fibroblasts from chronic periodontitis patients. (a) Human gingival fibroblasts (HGFs) were incubated with 2′,7′-dichlorodihydrofluorescein diacetate (H2DCF-DA) (10 μM, 30 minutes) to indicate the ROS levels (green) in HGFs (scale bars: 75 μm). HGFs were incubated with MitoSOX Red (5 μM, 30 minutes) to visualize mitochondrial ROS (mtROS) levels (red) (scale bars: 50 μm). (b, c) The arbitrary fluorescence intensity of ROS and mtROS in (a) were calculated by ImageJ based on per 10 cells in each group from (a). Data represent the mean±standarderror (SE) from 10 cells from each group. (d) Western blot to evaluate the protein expression of pyruvate dehydrogenase kinase 2 (PDK2). Glyceraldehyde-3-phosphate dehydrogenase (GAPDH) as the loading control. (e) Quantification of each band intensity with respect to loading control in D. n=3 (LPS: 5 μg/mL). Statistically significant p value is indicated as follows: ∗p<0.05,∗∗∗p<0.001,and∗∗∗∗p<0.0001 as compared with the control group; #p<0.05,##p<0.01,and###p<0.001 as compared with the CP group; ++p<0.01,+++p<0.001,and++++p<0.0001 as compared with the LPS group. (a)(b)(c)(d)(e) ## 3.4. mPTP Opening in HGFs from CP Patients via ROS Activation mPTP opening in HGFs was indicative using Calcein AM fluorescence (Figure5(a)). Control HGFs showed strong green fluorescence (Figure 5(a)), suggesting that mPTP remained in a closed state under normal condition [30]. However, the fluorescence was hardly detected in CP HGFs (Figure 5(a)). LPS further resulted in a much more decrease in fluorescence in the control and CP groups (Figure 5(a)). 
Decreased level of fluorescence signal was also detected in the LPS treated following passaging three generational HGFs (Figure 5(b)). A significant increase in fluorescence was observed in HGFs in the presence of CsA when compared with that in the absence of CsA (Figures 5(a)–5(d)). It was shown that inhibition of ROS and mtROS activation contributes to suppression of mPTP opening (Figures 5(a)–5(d)). Collectively, these data show that CP HGFs display mPTP opening and that mPTP opening in the LPS-treated HGFs was maintained within the HGFs even in the later three generations. Additionally, this observed mPTP opening is dependent on ROS activation.Figure 5 Human gingival fibroblasts (HGFs) from chronic periodontitis presented active mitochondrial permeability transition pore opening via reactive oxygen species. (a) Human gingival fibroblasts (HGFs) were loaded with cobalt chloride (50 mM, 15 minutes) and Calcein AM (green) to determine the opening of the mitochondrial permeability transition pore (mPTP) in HGFs in the presence or absence of lipopolysaccharide (LPS) treatment (5μg/mL, 24 h). The opening of mPTP in HGFs was measured after cyclosporin A (CsA) (0.5 μM, 2 h) or N-acetylcysteine (NAC) (3 mM, 2 h)or Mito-TEMPO (50 μM, 2 h) treatment (scale bars: 25 μm). (b) Images for mPTP opening in the control, LPS-treated, and LPS-treated group after passaging three generations (scale bars: 25 μm). (c) Quantification of the observed Calcein green signal in HGFs from (a, b). Mean±SE are indicated (n=20 cells). The CP group was observed a lower signal; LPS was also observeda lower signal compared with control HGFs. LPS can aggravate this lower signal in the control and CP groups, and this phenomenon can be retained in HGFs after LPS was removed and passaged to next three generations as compared with the control group. (d) The intensity of the indicated Calcein green signal was detected per 20 cells from the control, CP, LPS, and CP LPS groups with CsA, NAC, and Mito-TEMPO treatment compared to HGFs without any treatment, respectively. CsA, NAC, and Mito-TEMPO all downregulated the signal in the LPS, CP, and CP LPS groups, while they fail to induce this phenomenon in control HGFs. p values were determined by 1-way analysis of variance followed by post hoc tests. ∗∗p<0.01and∗∗∗∗p<0.0001; ns: not significant. (a)(b)(c)(d) ## 3.5. mtDNA Release in CP HGFs via ROS and mPTP Opening We performed real-time fluorescent microscopy for control, CP, LPS treatment, and CP LPSHGFs in the presence of CsA (Figure6(a)). It was observed that mtDNA displayed mild or no efflux in the four CsA-treated groups of HGFs (Figure 6(a)). These data demonstrated that mPTP was critical for the mtDNA release under these conditions. We performed qRT-PCR to detect the cytosolic mtDNA levels by the inhibitors of mPTP, ROS, and mtROS. CsA, NAC, and Mito-TEMPO all decreased the cytosolic mtDNA levels in the CP, LPS, and CP LPS groups when compared with the three groups without any treatment, whereas the control group showed similar cytosolic mtDNA levels in the presence and absence of CsA and NAC (Figure 6(b)). We also showed that Mito-TEMPO slightly decreased cytosolic mtDNA concentration in healthy HGFs (Figure 6(b)). When we examined the difference in apoptosis of HGFs among the four groups, we found that HGFs from CP, LPS, and CP LPS showed no significant apoptosis when compared with the HGFs of control healthy donors (Fig. S1). 
Cumulatively, these results provide further evidence that ROS-mPTP opening causes mtDNA release in CP and LPS-treated HGFs.Figure 6 mtDNA release from mitochondria in human gingival fibroblasts of chronic periodontitis patients via reactive oxygen species-mitochondrial permeability transition pore pathways. (a) Human gingival fibroblasts (HGFs) from the control, chronic periodontitis (CP), and lipopolysaccharide (LPS) stimulation infection (5μg/mL, 24 h), and CP LPS groups were pretreated with 0.5 μM cyclosporin A (CsA) for 2 h and subjected to analysis for mtDNA release. HGFs expressing Tomm 20-mCherry (red) and TFAM-mNeonGreen (green) revealed mtDNA nucleoid presented along with mitochondria. Yellow circles and blue arrows mark areas where mtDNA (green) clearly stops efflux from mitochondria (red). Scale bars: 2.5 μm. See Movies 4, 5, 6, and 7). (b) Bar graphs illustrate the average mtDNA levels in cytosol among four groups of HGFs with or without CsA, N-acetylcysteine (NAC) (3 mM, 2 h), and Mito-TEMPO (50 μM, 2 h) treatment. All quantified data represent the mean±SE. p values were determined by 1-way analysis of variance followed by post hoc tests. Graphs represent at least 3 independent experiments. ∗p<0.05,∗∗p<0.01,∗∗∗p<0.001,and∗∗∗∗p<0.0001; ns: not significant. (a)(b) ## 4. Discussion Mitochondrial dysfunction is an important component of periodontitis pathogenesis [31], as defects in mitochondrial structure and function have been shown in periodontitis in our previous work [8]. mtDNA is crucial for mitochondrial function. It is known that mtDNA has structural similarities with microbial DNA [32]. Hence, mtDNA could result in an inflammatory response when released into the cytoplasm or extracellular milieu in susceptible patients. These mtDNA characteristics confirmed the significant role of mtDNA in the pathogenesis of inflammation-related diseases in humans. In this study, we examined mtDNA efflux activity and extent using confocal microscopy and qRT-PCR analysis between primary HGFs from periodontitis patients and healthy donors. We demonstrated for the first time that mtDNA released from the mitochondria in HGFs from CP patients. LPS stimuli was found to trigger this mtDNA efflux activity and keep these properties within the HGFs for some periods.Studies have previously identified that mtDNA is found outside the mitochondria and sometimes even outside the cells in certain circumstances [33, 34]. mtDNA release was first reported that LPS pointed to extrude mtDNA into the cytoplasm [35]. Another key evidence for mtDNA extruding into the extracellular space is that LPS induces neutrophil extracellular traps (NETs) formation, largely consisting of mtDNA [36, 37]. This mtDNA release may result in substantial tissue damage, leading to chronic inflammation. Periodontitis is a kind of chronic inflammatory disease driving the destruction of soft and hard periodontal tissues such as gingiva recession and alveolar bone loss [5], suggesting a role for mtDNA efflux in the periodontitis. Consistent with the reported mtDNA efflux in other studies, we identified a significant increased mtDNA levels in the plasma from CP mice, implying an association with periodontitis and this mtDNA efflux. One study demonstrated that the mtDNA outside of mitochondria was found to be crucial for inflammation via inducing bone-destructing immunity [38]. Owing to the mtDNA accumulation in the plasma of CP mice, little is known about the mtDNA function and activity in HGFs during periodontitis. 
In the context of periodontitis, in vitro studies of periodontitis patients have confirmed alterations in mitochondrial structure and function and heightened oxidative stress in HGFs and gingival tissues compared with normal individuals [8, 12], indicating that there may be a correlation between periodontitis progression and mitochondrial dysfunction in HGFs from different hosts. Interestingly, we confirmed aberrant mtDNA release into the cytosol and supernatants of HGFs from CP patients, and LPS stimulation also induced this phenomenon in healthy HGFs. The high mtDNA levels observed in the cytosol and supernatants of CP HGFs were more pronounced in the presence of LPS than in HGFs without LPS, indicating that mtDNA release was maintained in inflamed cells. It is possible that LPS, a major trigger of periodontitis, enables mtDNA release from mitochondria in the periodontitis mouse model and in periodontitis patients, but how this mtDNA efflux is retained in HGFs from CP patients has remained poorly understood. Our data showed that LPS increased ROS levels and mPTP opening and that this alteration persisted in HGFs over the next three generations. mtDNA efflux activity was similar between HGFs passaged for three generations after LPS stimulation and HGFs treated directly with LPS, in line with the above findings. In addition, we demonstrated that LPS upregulated ROS generation through PDK2 inhibition even in the next three generations of HGFs. It has been reported that PDK2 activation has beneficial effects on ROS suppression [39]. Thus, we reasoned that LPS might mediate irreversibly high ROS generation by downregulating PDK2 expression [6, 40], leading to sustained mtDNA release activity even in next-generation HGFs without LPS stimulation. As widely reported in the literature, LPS-activated Toll-like receptor (TLR) is abundantly expressed in inflammatory cells, leading to ROS production as well as lower PDK2 expression [8, 41]. Some studies have reported that ROS triggers mtDNA damage and release into the cytosol in cancer [42]. Another study showed that mitochondrial ROS induces inflammation by disrupting mtDNA maintenance [15]. In agreement, a further study found that LPS-induced accumulation of free mtDNA outside of mitochondria contributes to inflammation via TLR9 activation [43]. Here, we provide evidence that mtDNA efflux arises from LPS-mediated ROS activation through blocking of PDK2 in HGFs, although the exact reason for this phenomenon is unclear. It is also worth noting that studies have observed the transfer of entire mitochondria between cells [44], although whether entire mitochondria or only mtDNA is transferred remains controversial [45]. mtDNA is therefore thought to be a signaling molecule that spreads inflammatory signals across a population of cells, suggesting that inflammation could spread between cells via the detection of mtDNA [9]. Based on these results, we propose that the LPS-modulated mtDNA efflux that remains in HGFs is closely linked to sustained ROS overproduction. Some studies have attributed mtDNA release to cell apoptosis [46], while others have indicated that mPTP opening leads to increased mtDNA release [46]. Given that apoptosis was similar among our groups, we focused on the role of mPTP opening in mtDNA release in HGFs. Consistent with the inhibitory effect of CsA on mPTP opening, CsA effectively blocked mtDNA efflux from mitochondria and reduced cytosolic mtDNA levels in inflamed HGFs.
These results highlight that mPTP opening potentially modulates mtDNA release in HGFs in periodontitis. One possible physiological mechanism mediating increased mPTP opening could be linked to the rise in ROS in inflammatory HGFs. Excessive levels of ROS can trigger mPTP opening via mitochondrial ATP-sensitive potassium channels and voltage-dependent anion channel-1 oligomerization, suggesting that ROS acts as an important molecule driving downstream mPTP opening and, eventually, disruption of cellular functions [47, 48]. Of note, Bullon's earlier work together with our recent work demonstrated that HGFs and gingival tissues from CP patients show impaired mitochondria and higher oxidative stress [8, 12, 49]. As a result of a burst of cellular ROS and mtROS, mPTP opening can be activated. Our data confirm a positive relationship between ROS overproduction and mPTP opening in inflammatory HGFs, and both ROS and mPTP played a critical role in mtDNA release during periodontitis. Our results therefore highlight ROS as one possible explanation for mPTP opening, contributing to mtDNA release in HGFs during periodontitis.

## 5. Conclusion

In summary, the mtDNA efflux maintained in primary HGFs could reflect the mitochondrial dysfunction detected in periodontitis. This work provides initial preclinical evidence that mtDNA efflux in HGFs is a candidate biomarker for predicting periodontitis. Beyond this focused investigation of mtDNA efflux in HGFs during inflammation, our results also indicate that the ROS/mPTP pathway could be the principal mediator of mtDNA efflux in inflamed HGFs. Further investigation is needed to determine how mtDNA release causes periodontitis, which may reveal new therapeutic strategies for the treatment of patients with periodontitis. --- *Source: 1000213-2022-06-08.xml*
# Research on the Characteristics of Food Impaction with Tight Proximal Contacts Based on Deep Learning

**Authors:** Yitong Cheng; Zhijiang Wang; Yue Shi; Qiaoling Guo; Qian Li; Rui Chai; Feng Wu
**Journal:** Computational and Mathematical Methods in Medicine (2021)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2021/1000820

---

## Abstract

Objective. Based on deep learning, the characteristics of food impaction with tight proximal contacts were studied to guide the subsequent clinical treatment of occlusal adjustment. At the same time, digital model building, software measurement, and statistical correlation analysis were used to explore the causes of tooth impaction and to provide evidence for clinical treatment. Methods. Volunteers with (n=250) and without (n=250) tooth impaction were recruited and completed a questionnaire survey. Dental models were made and poured by skilled clinicians for these participants, and characteristics such as adjacent line length, adjacent surface area, tongue abduction gap angle, buccal abduction gap angle, and occlusal abduction gap angle were measured. A normality test, differential analysis, correlation analysis of the impaction group's characteristics, principal component analysis (PCA), and binary logistic regression analysis were performed. Results. The adjacent line length, adjacent surface area, tongue abduction gap angle, buccal abduction gap angle, and occlusal abduction gap angle all followed a normal distribution. There were statistically significant differences in adjacent line length (p<0.001), adjacent surface area (p<0.001), and occlusal abduction gap angle (p<0.001) between the two groups. After dimensionality reduction by PCA, adjacent line length, adjacent surface area, buccal abduction gap angle, and occlusal abduction gap angle had a strong correlation with the principal components. Binary logistic regression analysis showed that adjacent line length and adjacent surface area had positive effects on impaction, while the buccal abduction gap angle and occlusal abduction gap angle had a significant negative influence on impaction. Conclusion. Adjacent line length, adjacent surface area, buccal abduction gap angle, and occlusal abduction gap angle are independent factors influencing food impaction.

---

## Body

## 1. Introduction

In daily life, most people experience food impaction. Food impaction occurs when, during chewing, food fragments or fibers wedge into the space between adjacent teeth under occlusal pressure or because of gingival recession [1]. According to the direction of wedging, impaction can be divided into vertical impaction and horizontal impaction [2]. If it is not cleared in time, impacted food encourages bacterial growth, irritates adjacent tissue, and causes a series of problems such as periodontal atrophy, gingival papillitis, proximal caries, halitosis, and aggravated periodontitis [3, 4]. Over time, these oral health problems become more serious [5] and seriously affect people's daily life. The main causes of occlusal food impaction include caries, occlusal disorder, tooth wear, tooth defects, periodontal disease, alveolar bone atrophy, and other problems [4, 6]. We applied deep learning theory to study the factors influencing food impaction with tight proximal contacts [6].
We also illustrate problems in the process of food impaction through the analysis of adjacent line length, adjacent surface area, tongue abduction gap angle, buccal abduction gap angle, and occlusal abduction gap angle [7]. In the treatment of food impaction, the cause of the impaction should be identified first, and treatment should be carried out according to that cause. If there is caries on the proximal surface, an appropriate filling or restorative method should be selected according to the specific situation, and the normal contact relationship should be restored. Restorative treatment should be carried out as soon as possible after tooth loss [8]. Patients with periodontal inflammation should be treated after the inflammation is controlled [9]. Patients with dental malformations can undergo occlusal grinding adjustment [10]. As for the treatment of food impaction without destruction of anatomical structure, a large number of domestic and foreign reports have taken occlusal adjustment as the basic treatment [3]. It has the advantages of removing little tooth tissue, being easy to perform, and being readily accepted by patients. The methods for occlusal adjustment include adjusting and grinding the filling cusps, expanding the food overflow channel, deepening the food overflow groove, and adjusting the main functional area of occlusion. According to the relevant literature, these can be systematically summarized into three methods: sequential adjustment [11], adjustment of the main functional area of the bite [12], and measurement of the buccal and tongue abduction gap angles [13]. The goal of therapy for food impaction with tight proximal contacts is to reshape tight proximal surface contact [14] and maintain its stability. Deep learning is a branch of machine learning; it is a family of algorithms based on artificial neural networks that learn representations of information, and it is increasingly widely used in clinical practice. For example, Chung et al. [15] verified the feasibility of automatic segmentation based on deep learning in breast radiotherapy planning. Clinical applications of deep learning include medical image analysis, disease diagnosis, and other areas [16, 17]. Along with the rapid development of three-dimensional (3D) acquisition technology, 3D data, including 3D point clouds, is increasingly applied in the medical field. Guo and colleagues [18] analyzed several deep learning methods for processing 3D point clouds in their study. Based on the locally convex connected patches (LCCP) method, Wang et al. [19] segmented 3D point cloud images of plants to measure the length, width, and area of leaves. Nevertheless, there has been no study using a 3D point cloud network for the analysis and extraction of tooth characteristics. The incidence of food impaction with tight proximal contacts is rising clinically. In order to improve oral diagnosis and treatment technology and advance the treatment of food impaction with tight proximal contacts, this study is aimed at exploring the characteristics of food impaction with tight proximal contacts [20]. Based on deep learning theory, and recording the steps of tooth segmentation, a comparative method is used to collect and analyze differences between groups in adjacent line length, adjacent surface area, tongue abduction gap angle, buccal abduction gap angle, and occlusal abduction gap angle, so as to provide a basis for the clinical treatment of food impaction with tight proximal contacts. ## 2.
Materials and Methods ### 2.1. Study Object A total of 500 volunteers (250 for food impaction and 250 for nonfood impaction) who are ages 25 to 50 were recruited according to the eligibility criteria for the study with no gender limitation. Inclusion criteria: (1) dental dentistry was complete, arranged basically neatly, and no adjacent surface caries was found between the first molar and the second molar by clinical examination and X-ray examination; (2) the first molars and the second molars are not loose; and (3) the impaction area between the first molar and the second molar is close, and the measurement of the feeler is less than 60μm. Exclusion criteria: (1) patients have severe periodontal disease; (2) the impaction is not complete to the jaw; (3) the first or second molars have full crown or inlay restoration or have adjacent surface fillings; (4) there are obvious “steps” between the first molars and the second molars; and (5) between the upper and lower jaws is reverse occlusion, locking occlusion, and other states. The basic information of food impaction was obtained by a questionnaire. ### 2.2. Experimental Materials The following are the experimental materials: mint and wax dental floss (Qizhimei Commercial and Trading Co., Ltd), DMG Silagum silicone rubber impression material heavy body+light body (DMG Dental, Germany), DMG O-bite Bite record, silicone rubber (DMG dental, Germany), 3-shape TRIOS second-generation oral scanner, silicone rubber blending gun (DMG dental, Germany), and superanhydrite (Heraeus Kulzer). ### 2.3. Experimental Steps #### 2.3.1. Model Preparation and Perfusion Oral education was conducted to the volunteers, and the correct way of brushing teeth and using dental floss was instructed. During the process of modeling, subjects should relax and sit and look straight ahead, and the mandibular plane should be parallel to the ground plane. During the detection process, subjects should maintain a constant posture, and those who cannot cooperate well will be excluded. After checking the integrity of the impression, the saliva was cleaned and the dentistry model was injected with superanhydrite by the same molding worker. After demolding, the mold was repaired in accordance with the standard model. #### 2.3.2. Records of Occlusal Relationship Volunteers were instructed to clench in the intercuspal position. A silicone rubber Bite record (DMG O-bite Bite record, silicone rubber, Germany) was used to record the posterior occlusal relationship. #### 2.3.3. Establishment of Digital 3D Scanning Model A 3-shape scanner was used to scan the repaired upper and lower jaw plaster models and the correct occlusal relationship, and the data in STL format was obtained, as shown in Figure1.Figure 1 3D model data obtained after scanning: (a) maxillary 3D model data; (b) mandibular 3D model data; (c) left occlusal relationship; (d) right occlusal relationship. (a)(b)(c)(d) #### 2.3.4. Deep Learning Network Segment Teeth Firstly, the tooth STL model was directly transformed into the point cloud model. Due to the excessive number of point clouds and the amount of calculation, the point cloud was subsampled during the network training. 
Unlike the common point cloud preprocessing methods of random sampling and uniform sampling, the sampling here accounted for the relationship between the tooth impaction characteristics and tooth surface curvature: the higher the surface curvature, the higher the probability that a region contains impaction-related features, so curvature-aware sampling allows these features to be computed more accurately. In this experiment, geometric sampling was therefore used to sample points on the tooth surface, and, through a deformable kernel, the location changes of the sample points were learned on the basis of rigid sampling. The segmentation network adopted in this experiment is a model provided by the North University of China and constructed on KPConv [21] kernel convolution. The network is similar to the U-Net [22] model and consists of two parts, an encoder and a decoder, forming a symmetric semantic segmentation model. Finally, the segmented point cloud results were mapped back onto the STL model for visualization, and the teeth were marked with different colors, as shown in Figure 2. Only the first and second molars studied in the experiment were separated, to increase the accuracy of network training. Figure 2 Point cloud of first molars and second molars.

#### 2.3.5. Feature Measurement

The segmented tooth point cloud was projected horizontally, compressing the teeth into two-dimensional images to facilitate measurement of the adjacent line length and fitting of the dividing line. The two farthest points on the dividing line were taken as the two ends of the tooth adjacent line; that is, the adjacent line length was obtained by measuring the distance between these two ends, as shown in Figure 3. Figure 3 The length of adjacent lines measured by horizontal projection. The tongue and buccal abduction gap angles were also measured by horizontal projection of the point cloud images. The two ends of the secant line were extended outward by a fixed length, and a horizontal line was drawn from the extension line to both sides of the teeth to obtain the two points nearest to the extension line. The angles formed by these three points are the tongue abduction gap angle and the buccal abduction gap angle, as shown in Figure 4. Figure 4 The tongue abduction gap angle and buccal abduction gap angle measured by horizontal projection. By projecting the two segmented tooth point clouds in the vertical direction, the two-dimensional point cloud in the vertical direction of the teeth is obtained, and the occlusal abduction gap angle is calculated in the same way as the tongue abduction gap angle, as shown in Figure 5. At the same time, the tooth partition plane in the vertical direction was extracted to obtain the adjacent surface area, as shown in Figure 6. Figure 5 The angle of the abduction gap measured by vertical projection. Figure 6 The area obtained by cutting the plane.
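The projection-based measurements just described lend themselves to a simple geometric formulation. The sketch below is an assumed reimplementation for illustration only, not the study's code: the adjacent line length is taken as the largest pairwise distance among projected dividing-line points, and an abduction gap angle is computed from an apex point and the two nearest tooth-contour points; all coordinates are made up.

```python
# Illustrative sketch (assumed, not the authors' code) of the Section 2.3.5 measurements.
import numpy as np

def adjacent_line_length(boundary_xy: np.ndarray) -> float:
    """Distance between the two farthest points of the projected dividing line (N x 2 array)."""
    diffs = boundary_xy[:, None, :] - boundary_xy[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    return float(dists.max())

def gap_angle(apex_xy, point_a_xy, point_b_xy) -> float:
    """Angle (degrees) at `apex_xy` formed by rays toward the two nearest tooth contour points."""
    v1 = np.asarray(point_a_xy, float) - np.asarray(apex_xy, float)
    v2 = np.asarray(point_b_xy, float) - np.asarray(apex_xy, float)
    cos_t = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0))))

# Toy example with invented coordinates (mm):
dividing_line = np.array([[0.0, 0.0], [1.1, 0.4], [2.3, 0.9], [4.1, 1.2]])
print(adjacent_line_length(dividing_line))              # ~4.27 mm
print(gap_angle([0.0, 0.0], [1.0, 0.9], [1.0, -0.9]))   # ~84 degrees
```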
### 2.4. Statistical Analysis

Statistical analysis was performed using R software and SPSS 24.0. The Kolmogorov-Smirnov and Shapiro-Wilk methods were used to test the normal distribution of the characteristics of the impaction group. An independent-sample T-test was used to analyze statistical differences in features between the impaction and nonimpaction groups. Correlation analysis was used to test correlations between features. Principal component analysis (PCA) was used for dimensionality reduction of the characteristics, with an eigenvalue greater than 1 as the screening condition for PCA. Binary logistic regression analysis was used to analyze the effect of the characteristics on impaction. p<0.05 was considered statistically significant.
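For readers who prefer an open-source workflow, the following hedged sketch reproduces this pipeline in Python rather than the R and SPSS 24.0 tools actually used in the study; the file name `features.csv` and its column names are assumptions introduced only for illustration.

```python
# Hedged sketch of the Section 2.4 pipeline in Python (the study used R and SPSS 24.0).
import pandas as pd
from scipy import stats
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
import statsmodels.api as sm

df = pd.read_csv("features.csv")   # assumed columns: group (0/1), line_len, area, tongue, buccal, occlusal
feats = ["line_len", "area", "tongue", "buccal", "occlusal"]

# Normality (Shapiro-Wilk) per feature, then independent-samples t-test between groups
for f in feats:
    print(f, "Shapiro-Wilk p =", stats.shapiro(df[f]).pvalue)
    imp, non = df.loc[df.group == 1, f], df.loc[df.group == 0, f]
    print(f, "t-test p =", stats.ttest_ind(imp, non).pvalue)

# PCA on standardized features; retain components with eigenvalue > 1 (Kaiser criterion)
X = StandardScaler().fit_transform(df[feats])
pca = PCA().fit(X)
print("retained components:", (pca.explained_variance_ > 1).sum())

# Binary logistic regression of impaction status on the retained measurements
X_lr = sm.add_constant(df[["line_len", "area", "buccal", "occlusal"]])
model = sm.Logit(df["group"], X_lr).fit()
print(model.summary())
```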
## 3. Results

### 3.1. Normality Test

Normality tests were performed for adjacent line length, adjacent surface area, tongue abduction gap angle, buccal abduction gap angle, and occlusal abduction gap angle in the patients with and without food impaction. Table 1 displays that the sample size of research data was less than or equal to 50, so the S-W test was used. The results showed that the adjacent line length, adjacent surface area, tongue abduction gap angle, buccal abduction gap angle, and occlusal abduction gap angle all fitted with a normal distribution (p>0.05).

Table 1 Result of normality test.

| Item | Sample size | Mean value | Standard deviation | Skewness | Kurtosis | K-S D value | K-S p value | S-W W value | S-W p value |
|---|---|---|---|---|---|---|---|---|---|
| Adjacent line length | 500 | 4.272 | 0.959 | 0.074 | 0.396 | 0.108 | 0.157 | 0.978 | 0.476 |
| Adjacent surface area | 500 | 8.290 | 0.394 | 0.275 | 0.306 | 0.128 | 0.040∗ | 0.977 | 0.423 |
| Tongue abduction gap angle | 500 | 50.246 | 9.882 | 0.236 | 0.049 | 0.058 | 0.941 | 0.990 | 0.947 |
| Buccal abduction gap angle | 500 | 50.983 | 11.797 | 0.312 | 1.067 | 0.069 | 0.793 | 0.972 | 0.267 |
| Occlusal abduction gap angle | 500 | 58.916 | 29.429 | 0.235 | -0.280 | 0.084 | 0.516 | 0.973 | 0.317 |

Note: ∗p<0.05.

### 3.2. Differential Analysis of Characteristics between the Impaction and Nonimpaction Groups

A T-test was used to compare the characteristic differences between the impaction group and the nonimpaction group. The analysis results are shown in Table 2. Statistically significant differences between the two groups were presented in adjacent line length (p<0.001), adjacent surface area (p<0.001), and occlusal abduction gap angle (p<0.001). Additionally, the mean value of the tongue abduction gap angle (p=0.087) and buccal abduction gap angle (p=0.105) in the nonimpaction group was larger than those in the impaction group, but the difference was not statistically significant. Table 2 T-test for characteristics of impaction and nonimpaction patients.
Characteristics (mean (SD))NonimpactionImpactionp valueAdjacent line length3.304 (1.187)4.267 (0.937)<0.001Adjacent surface area7.763 (0.466)8.278 (0.390)<0.001Tongue abduction gap angle53.630 (10.275)50.236 (9.673)0.087Buccal abduction gap angle55.849 (15.047)51.462 (12.121)0.105Occlusal abduction gap angle92.006 (13.091)59.466 (29.288)<0.001Note:∗p<0.05. ### 3.3. Correlation Analysis of Tooth Characteristics in Food Impaction Patients It could be seen from Table3 that the area of the adjacent surface (ρ=0.317, p=0.025) and the length of the adjacent line (ρ=0.297,p=0.036) of food impaction were significantly positively correlated with the tongue abduction gap angle.Table 3 Correlation analysis of tooth characteristics in food impaction patients. Adjacent surface areaAdjacent line lengthTongue abduction gap angleCorrelation coefficient0.3170.297p value0.025∗0.036∗Buccal abduction gap angleCorrelation coefficient0.0770.171p value0.5940.236Occlusal abduction gap angleCorrelation coefficient0.0780.036p value0.5920.803Note:∗p<0.05; ∗∗p<0.01. ### 3.4. PCA As could be seen from Table4, for the degree of commonality, there was a total of 1 item involving the tongue abduction gap angle, indicating that the relationship between the principal components and the study item was very weak, and the principal components were unable to effectively extract the information of the study item. Therefore, this item should be deleted and analyzed again after deletion.Table 4 Principal component analysis of impaction and nonimpaction. ItemLoading coefficientCommunality (common factor variance)Principal component 1Principal component 2Adjacent line length0.7070.1110.513Adjacent surface area0.7190.0690.522Tongue abduction gap angle0.0510.5960.358Buccal abduction gap angle-0.168-0.7570.601Occlusal abduction gap angle-0.6500.4390.615Note: the numbers in the table are overstriking: bold means the absolute loading factor coefficient is greater than 0.4; otherwise is less than 0.4.After deleting the tongue abduction gap angle data, the main components of the patients with and without impaction were analyzed again. It could be seen from Table5 that the corresponding communality degree values of all the study items were higher than 0.4, indicating that there was a strong correlation between the study items and the principal components, and the principal components could effectively extract the information. After ensuring that the principal components could extract most of the information of the research item, the corresponding relationship between the principal components and the research item was analyzed (when the absolute value of the loading coefficient was greater than 0.4, it was indicated that the item had a corresponding relationship with the principal components).Table 5 Principal component analysis of impaction and nonimpaction (adjusted). ItemLoading coefficientCommunality (common factor variance)Principal component 1Principal component 2Adjacent line length0.705-0.1000.507Adjacent surface area0.719-0.1340.535Buccal abduction gap angle-0.1580.9090.851Occlusal abduction gap angle-0.657-0.4720.654Note: the numbers in the table are overstriking: bold means the absolute loading factor coefficient is greater than 0.4; otherwise is less than 0.4. ### 3.5. 
Binary Logistic Regression Analysis

The adjacent line length, adjacent surface area, buccal abduction gap angle, and occlusal abduction gap angle were used as independent variables, and impaction status was used as the dependent variable for binary logistic regression analysis. The fitted model is ln(p/(1−p)) = 0.889 × adjacent line length + 3.396 × adjacent surface area − 0.071 × buccal abduction gap angle − 0.089 × occlusal abduction gap angle − 19.797, where p is the probability of impaction and 1−p the probability of no impaction (units: adjacent line length in mm, adjacent surface area in mm2, angles in °). As displayed in Table 6, the adjacent line length, adjacent surface area, buccal abduction gap angle, and occlusal abduction gap angle all had significant effects on impaction (p<0.05). Specifically, adjacent line length had a positive effect on impaction, with a regression coefficient of 0.889 that passed the significance test; a 1-unit increase in adjacent line length multiplied the odds of impaction by 2.432. The adjacent surface area also had a positive effect on impaction (regression coefficient = 3.396, p<0.05); a 1-unit increase in adjacent surface area multiplied the odds of impaction by 29.835. In contrast, the buccal abduction gap angle had a significant negative effect on impaction (regression coefficient = −0.071), with each 1-unit increase multiplying the odds of impaction by 0.931, and the occlusal abduction gap angle likewise had a significant negative effect (regression coefficient = −0.089), with each 1-unit increase multiplying the odds of impaction by 0.915.

Table 6 Results of binary logistic regression analysis.

| Variable | Regression coefficient | Standard error | Wald χ2 | p | OR | OR 95% CI lower limit | OR 95% CI upper limit |
|---|---|---|---|---|---|---|---|
| Adjacent line length | 0.889 | 0.333 | 7.108 | 0.008 | 2.432 | 1.265 | 4.676 |
| Adjacent surface area | 3.396 | 0.933 | 13.240 | <0.001 | 29.835 | 4.790 | 185.819 |
| Buccal abduction gap angle | -0.071 | 0.031 | 5.119 | 0.024 | 0.931 | 0.876 | 0.991 |
| Occlusal abduction gap angle | -0.089 | 0.022 | 16.310 | <0.001 | 0.915 | 0.877 | 0.955 |
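As a usage note, the fitted equation above can be applied directly to new measurements to obtain a predicted probability of impaction. The short sketch below simply evaluates the reported coefficients; the example input values are illustrative (chosen near the impaction-group means in Table 2) and are not patient data.

```python
# Evaluate the reported logistic model; inputs are illustrative, not patient data.
import math

def impaction_probability(line_len_mm, area_mm2, buccal_deg, occlusal_deg):
    logit = (0.889 * line_len_mm + 3.396 * area_mm2
             - 0.071 * buccal_deg - 0.089 * occlusal_deg - 19.797)
    return 1.0 / (1.0 + math.exp(-logit))

# Example with values near the impaction-group means in Table 2:
print(round(impaction_probability(4.27, 8.28, 51.5, 59.5), 3))  # ~0.96
```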
## 4. Discussion

The occurrence of food impaction is usually caused by contact damage of the adjacent teeth, severe abrasion of the occlusal surface, and abnormal contact [23]. This study measured the adjacent line length and adjacent area, tongue abduction gap angle, buccal abduction gap angle, and occlusal abduction gap angle. Statistically significant differences were found in adjacent line length (p<0.001), adjacent surface area (p<0.001), and occlusal abduction gap angle (p<0.001) between the two groups. By increasing the contact surface after occlusal adjustment, the space between adjacent teeth is formed and the food overflow channel is enlarged, which is conducive to food expulsion and alleviating the symptoms of food impaction. In the process of occlusal adjustment, errors occurred when measuring the tongue abduction gap angle, buccal abduction gap angle, adjacent line length, and so on.
By applying the 3-shape scanner, we ensured the accuracy of the measurements and provided reliable data for occlusal adjustment: the factors affecting impaction were determined, the conditions before and after treatment could be compared, and a basis for clinical treatment was provided. Comparing results before and after treatment, the above data indicate that the incidence of food impaction decreased when the adjacent line length was 3.52±1.62 mm, the adjacent surface area was 6.21±2.31 mm2, the tongue abduction gap angle was 52.24±13.17°, the buccal abduction gap angle was 54.15±13.61°, and the occlusal abduction gap angle was 89.26±21.64°. Attrition is the main cause of severe dentition wear [24]. Owing to the increased contact area from tooth wear, alveolar bone growth, and mesial displacement of teeth, severe dentition wear may change maxillofacial height. This change makes it easy for food to accumulate at the alveolar ridge and produces many small, sharp filling cusps; during transverse transport of food, material readily accumulates on these cusps and forms a wedge-shaped extrusion, exerting an instantaneous mechanical wedging effect on the adjacent surface area. This study provides a new research method and quantitative standard for the management of food impaction with tight proximal contacts, and it points to a new role for deep learning and modern information technology in clinical dentistry. Through measurement of the tongue abduction gap angle, buccal abduction gap angle, adjacent line length, and cross-sectional area of the abduction gap channel, we provide a basis for occlusal treatment. These findings offer new ideas and directions for the treatment of food impaction, can guide the formulation of clinical protocols and the choice of treatment methods, and may substantially improve the clinical efficacy of treating food impaction. However, there are some limitations to this study. First, 250 patients with food impaction were studied and analyzed; because each patient's oral condition differs, there will be some error in the analyzed data. Second, we studied only the sites most prone to food impaction, the first and second molars; other tooth positions were not studied and should be addressed in future work. In conclusion, deep learning was used to study the characteristics of food impaction with tight proximal contacts. The main factor affecting food impaction with tight proximal contacts was the tightness of the proximal contact area between adjacent teeth. Our results provide a new research direction for the clinical treatment of food impaction and can guide treatment of food impaction with tight proximal contacts to improve its symptoms. --- *Source: 1000820-2021-11-05.xml*
--- ## Abstract Objective. Based on deep learning, the characteristics of food impaction with tight proximal contacts were studied to guide the subsequent clinical treatment of occlusal adjustment. At the same time, digital model building, software measurement, and statistical correlation analysis were used to explore the cause of tooth impaction and to provide evidence for clinical treatment. Methods. Volunteers with (n=250) and without (n=250) tooth impaction were recruited, respectively, to conduct a questionnaire survey. Meanwhile, models were made and perfused by skilled clinical physicians for these patients, and characteristics such as adjacent line length, adjacent surface area, tongue abduction gap angle, buccal abduction gap angle, and occlusal abduction gap angle were measured. A normality test, differential analysis, correlation analysis of pathological characteristics of the impaction group, principal component analysis (PCA), and binary logistic regression analysis were performed. Results. The adjacent line length, adjacent surface area, tongue abduction gap angle, buccal abduction gap angle, and occlusal abduction gap angle all met normal distribution. There were statistically significant differences in adjacent line length (p<0.001), adjacent surface area (p<0.001), and occlusal abduction gap angle (p<0.001) between the two groups. After dimensionality reduction by PCA on characteristics, adjacent line length, adjacent surface area, buccal abduction gap angle, and occlusal abduction gap angle had a strong correlation with the principal components. Binary logistic regression analysis showed that adjacent line length and adjacent surface area had positive effects on impaction. The buccal abduction gap angle and occlusal abduction gap angle had a significant negative influence on impaction. Conclusion. Adjacent line length, adjacent surface area, buccal abduction gap angle, and occlusal abduction gap angle are independent factors influencing food impaction. --- ## Body ## 1. Introduction In daily life, most people have food impaction. During chewing, food impaction occurs when food fragments or fibers wedge into the space between adjacent teeth due to the action of occlusal pressure or gingival retreat, which is called food impaction [1]. According to the different ways of the impaction, impaction can be divided into vertical impaction and horizontal impaction [2]. If it is not clear in time, impacted food will help bacterial reproduction, stimulate adjacent tissue, and cause a series of problems such as periodontal atrophy, gum papillitis, adjacent surface caries, halitosis, heavier periodontitis [3, 4]. As time goes by, oral health problems become more serious [5], which seriously influence people’s daily life.The main causes of occlusal food impaction are [6] caries, occlusal disorder, tooth wear, tooth defect, periodontal disease, alveolar bone atrophy, and other problems [4]. We based on the deep learning theory to study the influence factor for food impaction with tight proximal contacts [6]. We also illustrate problems in the process of food impaction through the analysis of adjacent line length, adjacent surface area, tongue abduction gap angle, buccal abduction gap angle, and occlusal abduction gap angle [7].In the treatment of food impaction, the cause of the impaction should be identified first, and the treatment should be carried out according to the cause. 
If there is caries on the adjacent surface, the corresponding filling or repair method should be selected according to the specific situation, and the normal contact relationship should be restored. Restoration treatment should be carried out as soon as possible after tooth loss [8]. Patients with periodontal inflammation should be treated after anti-inflammation [9]. Patients with dental malformations can undergo oral grinding adjustment [10]. As for the treatment of food impaction without anatomical structure destruction, a large number of domestic and foreign literature reports have taken the occlusal adjustment as the basic treatment [3]. It has the advantages of less grinding tissue, easy operation, and easy acceptance by patients. The methods for occlusal adjustment include adjusting the grinding and filling tooth tip, expanding the food overflow channel, deepening the food overflow groove, and adjusting the main functional area of occlusal. According to relevant literature reports, it can be systematically summarized into the following three methods: sequence adjustment [11], adjustment of the main functional area of the bite [12], and measurement of buccal tongue abduction gap angle [13]. The goal of therapy of food impaction with tight proximal contacts is to reshape the close adjacent surface contact [14] and maintain its stability.Deep learning is a branch of machine learning. It is an algorithm based on the artificial neural network to learn information representation. Deep learning is increasingly widely used in clinical practice. For example, Chung et al. [15] verified the feasibility of automatic segmentation based on deep learning in breast radiotherapy planning. Clinical applications of deep learning include medical image analysis, disease diagnosis, and other aspects [16, 17]. Along with the rapid development of three-dimensional (3D) acquisition technology, 3D data is increasingly applied in the medical field, including the 3D point cloud. Guo and his partners [18] analyzed several deep learning methods for processing 3D point clouds in their study. Based on the LCCP method, Wang et al. [19] segmented the 3D point cloud image of plants based on locally convex connected patches to measure the length, width, and area of leaves. Nevertheless, there is no relevant study using the 3D point cloud network for analysis and processing of tooth characteristic extraction.Incidence of food impaction with tight proximal contacts is on the rise clinically. In order to improve oral diagnosis and treatment technology and promote the process of treating food impaction with tight proximal contacts, this study is aimed at exploring the characteristics of food impaction with tight proximal contacts [20]. Based on deep learning theory, in recording the steps of tooth segmentation, a comparison method is used to collect and analyze differences of food impaction with tight proximal contacts in adjacent line length, adjacent surface area, tongue abduction gap angle, buccal abduction gap angle, and occlusal abduction gap angle, so as to provide the basis for clinical treatment of adjacency of close food. ## 2. Materials and Methods ### 2.1. Study Object A total of 500 volunteers (250 for food impaction and 250 for nonfood impaction) who are ages 25 to 50 were recruited according to the eligibility criteria for the study with no gender limitation. 
Inclusion criteria: (1) dental dentistry was complete, arranged basically neatly, and no adjacent surface caries was found between the first molar and the second molar by clinical examination and X-ray examination; (2) the first molars and the second molars are not loose; and (3) the impaction area between the first molar and the second molar is close, and the measurement of the feeler is less than 60μm. Exclusion criteria: (1) patients have severe periodontal disease; (2) the impaction is not complete to the jaw; (3) the first or second molars have full crown or inlay restoration or have adjacent surface fillings; (4) there are obvious “steps” between the first molars and the second molars; and (5) between the upper and lower jaws is reverse occlusion, locking occlusion, and other states. The basic information of food impaction was obtained by a questionnaire. ### 2.2. Experimental Materials The following are the experimental materials: mint and wax dental floss (Qizhimei Commercial and Trading Co., Ltd), DMG Silagum silicone rubber impression material heavy body+light body (DMG Dental, Germany), DMG O-bite Bite record, silicone rubber (DMG dental, Germany), 3-shape TRIOS second-generation oral scanner, silicone rubber blending gun (DMG dental, Germany), and superanhydrite (Heraeus Kulzer). ### 2.3. Experimental Steps #### 2.3.1. Model Preparation and Perfusion Oral education was conducted to the volunteers, and the correct way of brushing teeth and using dental floss was instructed. During the process of modeling, subjects should relax and sit and look straight ahead, and the mandibular plane should be parallel to the ground plane. During the detection process, subjects should maintain a constant posture, and those who cannot cooperate well will be excluded. After checking the integrity of the impression, the saliva was cleaned and the dentistry model was injected with superanhydrite by the same molding worker. After demolding, the mold was repaired in accordance with the standard model. #### 2.3.2. Records of Occlusal Relationship Volunteers were instructed to clench in the intercuspal position. A silicone rubber Bite record (DMG O-bite Bite record, silicone rubber, Germany) was used to record the posterior occlusal relationship. #### 2.3.3. Establishment of Digital 3D Scanning Model A 3-shape scanner was used to scan the repaired upper and lower jaw plaster models and the correct occlusal relationship, and the data in STL format was obtained, as shown in Figure1.Figure 1 3D model data obtained after scanning: (a) maxillary 3D model data; (b) mandibular 3D model data; (c) left occlusal relationship; (d) right occlusal relationship. (a)(b)(c)(d) #### 2.3.4. Deep Learning Network Segment Teeth Firstly, the tooth STL model was directly transformed into the point cloud model. Due to the excessive number of point clouds and the amount of calculation, the point cloud was subsampled during the network training. Different from the common point cloud preprocessing methods of random sampling and uniform sampling, in order to calculate the tooth impaction characteristics more accurately, and considering the relationship between the tooth impaction characteristics and the tooth surface curvature at the same time, the higher the tooth surface curvature, the higher the probability of containing the tooth impaction characteristics. In this experiment, geometric sampling was used to sample points on the tooth surface. 
Through a deformable kernel, the location changes of the sample points were learned on the basis of rigid sampling. The segmentation network adopted in this experiment was provided by the North University of China and is constructed from KPConv [21] kernel point convolutions. The network is similar to the U-Net [22] model and consists of two parts, an encoder and a decoder, forming a symmetric semantic segmentation model. Finally, the segmented point cloud results were mapped back onto the STL model for visualization, and the teeth were marked with different colors, as shown in Figure 2. Only the first and second molars studied in the experiment were separated, to increase the accuracy of network training.

Figure 2: Point cloud of first molars and second molars.

#### 2.3.5. Feature Measurement

The segmented tooth point cloud was projected horizontally, compressing the teeth into two-dimensional images to facilitate measurement of the adjacent line length and fitting of the dividing line. The two farthest points on the dividing line were taken as the two ends of the adjacent line; the adjacent line length was then obtained by measuring the distance between these two ends, as shown in Figure 3.

Figure 3: The length of adjacent lines measured by horizontal projection.

The tongue and buccal abduction gap angles were measured by horizontal projection of the point cloud images. The two ends of the secant line were extended outward by a fixed length, and a horizontal line was drawn from the extension line to both sides of the teeth to find the two points nearest to the extension line. The angles formed by the three points were the tongue abduction gap angle and the buccal abduction gap angle, as shown in Figure 4.

Figure 4: The tongue abduction gap angle and buccal abduction gap angle measured by horizontal projection.

By projecting the two segmented tooth point clouds in the vertical direction, the two-dimensional point cloud of the teeth in the vertical direction was obtained. Using the same method as for the tongue abduction gap angle, the occlusal abduction gap angle was obtained, as shown in Figure 5. At the same time, the tooth partition plane in the vertical direction was extracted to obtain the adjacent surface area, as shown in Figure 6.

Figure 5: The abduction gap angle measured by vertical projection.

Figure 6: The adjacent surface area obtained by the cutting plane.

### 2.4. Statistical Analysis

Statistical analysis was performed using R software and SPSS 24.0. The Kolmogorov-Smirnov and Shapiro-Wilk tests were used to assess the normality of the characteristics in the impaction group. An independent-sample T-test was used to analyze differences in features between the impaction and nonimpaction groups, and correlation analysis was used to test correlations between features. Principal component analysis (PCA) was used for dimensionality reduction of the characteristics, with an eigenvalue greater than 1 as the screening condition. Binary logistic regression analysis was used to analyze the effect of the characteristics on impaction. p < 0.05 was considered statistically significant.
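The statistical analysis was done in R and SPSS; as a rough illustration only, the Python sketch below mirrors the screening steps described in Section 2.4 (Shapiro-Wilk normality test, independent-sample T-test, and correlation between features) with SciPy. The data frame, its column names, and the use of Spearman's ρ (assumed because Table 3 reports ρ) are placeholders, not the study data or scripts.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Placeholder data: one row per subject, five measured features plus a 0/1 impaction label.
features = ["adjacent_line_length", "adjacent_surface_area",
            "tongue_angle", "buccal_angle", "occlusal_angle"]
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(500, 5)), columns=features)
df["impaction"] = rng.integers(0, 2, 500)

# 1. Normality of each feature in the impaction group (Shapiro-Wilk).
impacted = df[df["impaction"] == 1]
for f in features:
    w, p = stats.shapiro(impacted[f])
    print(f"Shapiro-Wilk {f}: W={w:.3f}, p={p:.3f}")

# 2. Independent-sample T-test between the impaction and nonimpaction groups.
for f in features:
    t, p = stats.ttest_ind(df.loc[df["impaction"] == 1, f],
                           df.loc[df["impaction"] == 0, f])
    print(f"T-test {f}: t={t:.2f}, p={p:.3f}")

# 3. Correlation between two features within the impaction group (Spearman's rho).
rho, p = stats.spearmanr(impacted["adjacent_surface_area"], impacted["tongue_angle"])
print(f"Spearman rho={rho:.3f}, p={p:.3f}")
```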
## 3. Results

### 3.1. Normality Test

Normality tests were performed for the adjacent line length, adjacent surface area, tongue abduction gap angle, buccal abduction gap angle, and occlusal abduction gap angle in the patients with and without food impaction. As Table 1 shows, the sample size of the research data was less than or equal to 50, so the S-W test was used. The results showed that the adjacent line length, adjacent surface area, tongue abduction gap angle, buccal abduction gap angle, and occlusal abduction gap angle all fitted a normal distribution (p > 0.05).

Table 1: Result of normality test.

| Item | Sample size | Mean | SD | Skewness | Kurtosis | K-S D | K-S p | S-W W | S-W p |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Adjacent line length | 500 | 4.272 | 0.959 | 0.074 | 0.396 | 0.108 | 0.157 | 0.978 | 0.476 |
| Adjacent surface area | 500 | 8.290 | 0.394 | 0.275 | 0.306 | 0.128 | 0.040∗ | 0.977 | 0.423 |
| Tongue abduction gap angle | 500 | 50.246 | 9.882 | 0.236 | 0.049 | 0.058 | 0.941 | 0.990 | 0.947 |
| Buccal abduction gap angle | 500 | 50.983 | 11.797 | 0.312 | 1.067 | 0.069 | 0.793 | 0.972 | 0.267 |
| Occlusal abduction gap angle | 500 | 58.916 | 29.429 | 0.235 | -0.280 | 0.084 | 0.516 | 0.973 | 0.317 |

Note: ∗p < 0.05.

### 3.2. Differential Analysis of Characteristics between the Impaction and Nonimpaction Groups

A T-test was used to compare the characteristic differences between the impaction group and the nonimpaction group; the results are shown in Table 2. Statistically significant differences between the two groups were found in adjacent line length (p < 0.001), adjacent surface area (p < 0.001), and occlusal abduction gap angle (p < 0.001). Additionally, the mean values of the tongue abduction gap angle (p = 0.087) and buccal abduction gap angle (p = 0.105) were larger in the nonimpaction group than in the impaction group, but the differences were not statistically significant.

Table 2: T-test for characteristics of impaction and nonimpaction patients.

| Characteristic (mean (SD)) | Nonimpaction | Impaction | p value |
| --- | --- | --- | --- |
| Adjacent line length | 3.304 (1.187) | 4.267 (0.937) | <0.001 |
| Adjacent surface area | 7.763 (0.466) | 8.278 (0.390) | <0.001 |
| Tongue abduction gap angle | 53.630 (10.275) | 50.236 (9.673) | 0.087 |
| Buccal abduction gap angle | 55.849 (15.047) | 51.462 (12.121) | 0.105 |
| Occlusal abduction gap angle | 92.006 (13.091) | 59.466 (29.288) | <0.001 |

Note: ∗p < 0.05.
### 3.3. Correlation Analysis of Tooth Characteristics in Food Impaction Patients

As can be seen from Table 3, in food impaction patients the adjacent surface area (ρ = 0.317, p = 0.025) and the adjacent line length (ρ = 0.297, p = 0.036) were significantly positively correlated with the tongue abduction gap angle.

Table 3: Correlation analysis of tooth characteristics in food impaction patients (correlation coefficient ρ, with p value in parentheses).

| | Adjacent surface area | Adjacent line length |
| --- | --- | --- |
| Tongue abduction gap angle | 0.317 (0.025∗) | 0.297 (0.036∗) |
| Buccal abduction gap angle | 0.077 (0.594) | 0.171 (0.236) |
| Occlusal abduction gap angle | 0.078 (0.592) | 0.036 (0.803) |

Note: ∗p < 0.05; ∗∗p < 0.01.

### 3.4. PCA

As can be seen from Table 4, one item, the tongue abduction gap angle, had a communality below 0.4, indicating that the relationship between this item and the principal components was very weak and that the principal components could not effectively extract its information. This item was therefore deleted and the analysis was repeated.

Table 4: Principal component analysis of impaction and nonimpaction.

| Item | Principal component 1 loading | Principal component 2 loading | Communality (common factor variance) |
| --- | --- | --- | --- |
| Adjacent line length | **0.707** | 0.111 | 0.513 |
| Adjacent surface area | **0.719** | 0.069 | 0.522 |
| Tongue abduction gap angle | 0.051 | **0.596** | 0.358 |
| Buccal abduction gap angle | -0.168 | **-0.757** | 0.601 |
| Occlusal abduction gap angle | **-0.650** | **0.439** | 0.615 |

Note: loading coefficients with an absolute value greater than 0.4 are shown in bold.

After deleting the tongue abduction gap angle data, the principal components of the patients with and without impaction were analyzed again. As shown in Table 5, the communality values of all remaining study items were higher than 0.4, indicating a strong correlation between the study items and the principal components, so the principal components could effectively extract their information. After confirming that the principal components extracted most of the information of the research items, the correspondence between the principal components and the items was analyzed (an item was considered to correspond to a principal component when the absolute value of its loading coefficient was greater than 0.4).

Table 5: Principal component analysis of impaction and nonimpaction (adjusted).

| Item | Principal component 1 loading | Principal component 2 loading | Communality (common factor variance) |
| --- | --- | --- | --- |
| Adjacent line length | **0.705** | -0.100 | 0.507 |
| Adjacent surface area | **0.719** | -0.134 | 0.535 |
| Buccal abduction gap angle | -0.158 | **0.909** | 0.851 |
| Occlusal abduction gap angle | **-0.657** | **-0.472** | 0.654 |

Note: loading coefficients with an absolute value greater than 0.4 are shown in bold.
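As an illustration of the screening rules applied above (retain principal components with eigenvalue greater than 1, treat loadings with absolute value above 0.4 as salient, and drop items whose communality falls below 0.4 before re-running), the following Python sketch computes loadings and communalities with scikit-learn. The placeholder data and the `communality_cutoff` variable are assumptions for illustration, not the study data.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_screen(X, names, communality_cutoff=0.4):
    """PCA on standardized data: keep eigenvalue>1 components, report loadings,
    communalities, and any items that would be dropped for low communality."""
    z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    pca = PCA().fit(z)
    keep = pca.explained_variance_ > 1                           # Kaiser criterion
    loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
    communality = (loadings[:, keep] ** 2).sum(axis=1)
    drop = [n for n, c in zip(names, communality) if c < communality_cutoff]
    return loadings[:, keep], communality, drop

# Placeholder data: 500 subjects x 5 features, named as in Table 4.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
names = ["adjacent line length", "adjacent surface area",
         "tongue abduction gap angle", "buccal abduction gap angle",
         "occlusal abduction gap angle"]
loadings, communality, drop = pca_screen(X, names)
print("communalities:", dict(zip(names, np.round(communality, 3))))
print("items to drop and re-run without:", drop)
```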
### 3.5. Binary Logistic Regression Analysis

The adjacent line length, adjacent surface area, buccal abduction gap angle, and occlusal abduction gap angle were used as independent variables, and impaction status was used as the dependent variable for binary logistic regression analysis. The fitted model is

$$\ln\frac{p}{1-p} = 0.889 \times \text{adjacent line length} + 3.396 \times \text{adjacent surface area} - 0.071 \times \text{buccal abduction gap angle} - 0.089 \times \text{occlusal abduction gap angle} - 19.797,$$

where p is the probability of impaction and 1 − p is the probability of no impaction; the units are mm for adjacent line length, mm² for adjacent surface area, and degrees for the angles.

As displayed in Table 6, the adjacent line length, adjacent surface area, buccal abduction gap angle, and occlusal abduction gap angle all had significant effects on impaction (p < 0.05). Specifically, adjacent line length had a positive effect on impaction, with a regression coefficient of 0.889 that passed the significance test; each 1-unit increase in adjacent line length multiplied the odds of impaction by 2.432. The adjacent surface area also had a positive effect (regression coefficient = 3.396, p < 0.05); each 1-unit increase multiplied the odds of impaction by 29.835. In contrast, the buccal abduction gap angle had a significant negative effect (regression coefficient = -0.071), each 1-unit increase multiplying the odds of impaction by 0.931, and the occlusal abduction gap angle likewise had a significant negative effect (regression coefficient = -0.089), each 1-unit increase multiplying the odds by 0.915.

Table 6: Results of the binary logistic regression analysis.

| Variable | Regression coefficient | Standard error | Wald χ² | p | OR | OR 95% CI lower limit | OR 95% CI upper limit |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Adjacent line length | 0.889 | 0.333 | 7.108 | 0.008 | 2.432 | 1.265 | 4.676 |
| Adjacent surface area | 3.396 | 0.933 | 13.240 | <0.001 | 29.835 | 4.790 | 185.819 |
| Buccal abduction gap angle | -0.071 | 0.031 | 5.119 | 0.024 | 0.931 | 0.876 | 0.991 |
| Occlusal abduction gap angle | -0.089 | 0.022 | 16.310 | <0.001 | 0.915 | 0.877 | 0.955 |
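To make the fitted model concrete, the short Python sketch below plugs the reported coefficients into the logistic model and shows how the odds ratios in Table 6 follow from them (OR = e^b). The example measurement values are invented purely for illustration and are not data from this study.

```python
import math

# Coefficients of the reported model (log-odds scale), intercept -19.797.
coefs = {"adjacent_line_length": 0.889, "adjacent_surface_area": 3.396,
         "buccal_angle": -0.071, "occlusal_angle": -0.089}
intercept = -19.797

# Odds ratios follow directly from the coefficients: OR = exp(b),
# e.g. exp(0.889) is about 2.43, matching the OR column of Table 6.
odds_ratios = {name: round(math.exp(b), 3) for name, b in coefs.items()}
print(odds_ratios)

def impaction_probability(x):
    """Predicted probability of impaction for one tooth pair: p = 1 / (1 + e^-logit)."""
    logit = intercept + sum(coefs[name] * value for name, value in x.items())
    return 1.0 / (1.0 + math.exp(-logit))

# Invented example: adjacent line 4.3 mm, adjacent area 8.3 mm^2, angles 51 and 60 degrees.
example = {"adjacent_line_length": 4.3, "adjacent_surface_area": 8.3,
           "buccal_angle": 51.0, "occlusal_angle": 60.0}
print(round(impaction_probability(example), 3))
```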
## 4. Discussion

Food impaction is usually caused by damaged contacts between adjacent teeth, severe abrasion of the occlusal surface, and abnormal contact relationships [23]. This study measured the adjacent line length, adjacent surface area, tongue abduction gap angle, buccal abduction gap angle, and occlusal abduction gap angle. Statistically significant differences between the two groups were found in adjacent line length (p < 0.001), adjacent surface area (p < 0.001), and occlusal abduction gap angle (p < 0.001). By increasing the contact surface after occlusal adjustment, a space between adjacent teeth is formed and the food overflow channel is enlarged, which promotes food expulsion and alleviates the symptoms of food impaction.

In the process of occlusal adjustment, errors can occur when measuring the tongue abduction gap angle, buccal abduction gap angle, adjacent line length, and related features. By applying a 3Shape scanner, we ensured the accuracy of the data and provided accurate measurements for occlusal adjustment. The factors affecting impaction were determined, the effects before and after treatment were compared, and a basis for clinical treatment was provided. The results before and after treatment were compared.
According to the above data, when the length of the adjacent line was3.52±1.62 mm, the area of the adjacent surface was 6.21±2.31 mm2, tongue abduction gap angle was 52.24±13.17°, buccal abduction gap angle was 54.15±13.61°, and occlusal abduction gap angle was 89.26±21.64°, the incidence of food impaction decreased.Wear is the main cause of severe dentition wear [24]. Due to the increased contact area of tooth wear, alveolar bone growth, and mesial displacement of teeth, severe dentition wear may lead to changes in maxillofacial height. This change makes it easy for food to accumulate in the alveolar ridge, forming a lot of small and sharp filled cusps. In the process of transverse transportation of food, it is easy to accumulate on the filled cusps and form a wedge-shaped extrusion, generating an instantaneous mechanical tooth effect on the adjacent surface area.This study provides a new research method and quantitative standard for the guidance of food impaction with tight proximal contacts. This paper points out a new idea of deep learning and modern information technology in the clinical oral cavity. Through measurement of the tongue abduction gap angle, buccal abduction gap angle, length of adjacent line, and cross-sectional area of the abductive gap channel, we provide a basis for occlusal treatment. These findings will provide new ideas and directions for the treatment of food impaction, guide the formulation of clinical protocols and the choice of treatment methods, and greatly improve the clinical efficacy of food impaction.However, there are some limitations to this study. Firstly, 250 patients with food impaction were studied and analyzed. Due to the differences in oral symptoms of each patient, there would be some errors in the data analyzed. Secondly, we have studied the main parts that cause food impaction, the first molar and the second molar. Other tooth positions have not been studied, and further improvement is needed in future studies.In conclusion, deep learning was used to study the characteristics of food impaction with tight proximal contacts. We learned that the main reason affecting food impaction with tight proximal contacts was the close relationship between the adjacent surface contact area of adjacent teeth. Our results will provide a new research direction for the clinical treatment of food impaction and guide the treatment of food impaction with tight proximal contacts to improve the symptoms of food impaction. --- *Source: 1000820-2021-11-05.xml*
2021
# Influence of Heat-Treated and Vibratory-Assisted Weld Joints on the Mechanical Properties of 304L SS Material **Authors:** Muvvala Chinnam Naidu; K. T. Balaram Padal; Girma Eshete **Journal:** Journal of Nanomaterials (2022) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2022/1000859 --- ## Abstract Tensile and impact strengths of 304L SS stainless steel weldment prepared at different levels of heat treatments and with vibratory assistance were studied and compared with the conventional process of welding. The results reveal that the microstructures of weld joints after heat treatment and vibratory welded joints attained a fine grain structure, compared with the joints prepared with the conventional process of welding. By increasing the temperature of quenching and vibrations during welding, the grain size is gradually improving. Improvement in the tensile and impact is observed in the heat-treated and vibration-welded specimens. Similarity, in the weld joint properties of post weld heat treatment (PWHT) and vibratory-assisted welding (VAW) are observed. With the VAW technique, high quality weldments are produced and are more suitable than PWHT due to its less cost and time. --- ## Body ## 1. Introduction Austenitic stainless steels are a potential material for future nuclear power reactors that will need to meet high structural integrity demands while operating under extreme conditions. Welded joints are crucial components in any installation because they are more likely to contain faults than the base metal, and their physical qualities can differ dramatically from those of a wrought material of nominally similar composition. Because of their superior deterioration resistance, weldability, and formability, austenitic stainless steels are frequently utilized in critical elements of chemical industries. Due to their exceptional robustness in high pressure and temperature sets, these are widely used in nuclear energy plants, gas turbines, and jet propellants [1–4].“Submerged arc welding” is commonly used to join thick-wall stainless steels in ship construction and pipes [5, 6]. A solid-type filler material is mechanically put into dispersed granular flux in SAW. The flux melts when it comes into contact with the welding arc, and slag is formed that protects and covers the fused metal. In most cases, efficient joining is accomplished by passing current through a large diameter wire. This offers benefit of increasing productivity by allowing for quick solidification and enhancing quality weld by controlling weld heat [7].Inhomogeneity of the microstructure, concentration of residual stress, brittleness, and worsening of toughness in the welded material are examples of microstructural and mechanical qualities [8]. PWHT (post weld heat treatment) is commonly used. Microstructural and mechanical qualities such as inhomogeneity of the microstructure, concentration of brittleness, deterioration and residual stress of toughness of the welded material are degraded as a result of the input heat [8]. To address such degradation, PWHT is commonly used. However, delta ferrite as a component in austenitic SS material is present, so embrittlement of the welded component may occur owing to post heat treatment transition into another brittle phase, such as the sigma phase. It is critical to study post heat treatment processes to regulate the content of delta ferrite and hazardous carbides in order to alleviate these difficulties [9]. 
By homogenizing the weld specimen structure and limiting the production of favorable and hazardous phases, appropriate PWHT can progress mechanical qualities [10–13].Similarly, the vibrational stress-relief procedure is adapted during welding, with nonresonant and resonant repeated loading process for mild steel of low alloy, and is considered to enhance mechanical characteristics like strength and hardness, but unfortunately, fatigue strength is reduced for the specimens welded with the cyclic loads of the nonresonant process, which shows a disadvantage of the technique [14]. Vibratory stress relief (VSR) is used on stainless-steel weld specimens that have a combined weight of over 34 tonnes and are being tested for cyclic stress and strain. At a resonance frequency of 47.83 Hz, the plate was treated for around 15 minutes. The efficiency of VSR is measured by the number of cycles and the level of dynamic stress. In terms of longitudinal residual stresses, the variations are around an 11% decrease [15]. In order to quantify the vibration energy influence on stress concentration in the heat-affected and weld zones, the “vibratory stress relief” approach is explained and compared to heat treatment. Vibrational stress relief can be used to reduce residual strains generated during the welding process, according to the findings of this study’s X-ray diffraction [16].The effect of residual stresses on the surface stress distribution was investigated using the vibrating stress alleviation method. The “VSR” technique is most typically used to reduce residual flaws in manufacturing processes, and changes in the internal structures are readily observable wherever the strengths of tensile, yield, and fatigue are increased, and the current technique is beneficial for high-end applications for appropriate outcomes [17]. SS plates are used as specimens for the VSR procedure in nuclear reactors. The results reveal that after using the VSR method, residual stress is reduced by roughly 56 percent for the hoist-machine and around 31 percent for the SS plate. It is a different method compared to heat treatment [18]. To evaluate the cyclic strain and stress, the VSR approach was simulated on 304L SS weld-work pieces under repeated loads. According to the experimental findings, cyclic creep physical characteristics are identified at dynamic stresses. Cyclic loading affects creep and its pace. The higher the loading, the faster the creep and the longer it takes for the strain to stabilize. The defects at the weld zone are determined with the X-ray diffraction process at a variety of cyclic amplitude stresses [19]. The result of inelastic body moment was investigated throughout vibratory welding at two frequencies (50 Hz and 500 Hz) to determine the differences in characteristics. With any trend at low frequency, the tensions for both directions (longitudinal and transverse) are decreased and increased. At peak frequencies, residual-stresses remained constant. As a result, the stiff body motion effect at higher frequencies is thought to be ineffective for reducing residual stress [20]. The diversity in residual stresses created during welding was assessed using a variety of shaft designs. 
The findings of this study reveal that shear stresses of torsional moment can be used to minimize shaft residual stresses by a moderate amount, that shear stress aids in the transfer of retained austenite to the following stages, and that it influences the amount of residual stresses induced by the effects of the interphase [21]. The vibration effect on the residual stresses induced by the welding process was examined. Vibrations were sent to them across a predetermined temperature range. Three batches are being looked into, each with a distinct temperature range. With no obvious movement, residual stresses were reported to be improved in all three batches. The actions appear to have undesired outcomes, and it is finished that excitations at 400°C and 320°C did not considerably influence the ultimate state of residual stresses, as these do not change significantly with the vibration curve [22].The impact of vibratory stress on the weld’s microstructure and stress distribution was examined. The results show that in the first 5 seconds of vibration, stress levels drop by roughly 75 MPa. The increase in vibrations did not result in a further drop in stress levels after post weld vibratory treatment, and optical microscopy reveals no alterations in the crystal structure. Because of the vibratory treatment, the crystals develop in a specified direction during welding. As a result of the grain refinement, the toughness of the weldment increased by 25% [23, 24]. The effects of transverse oscillation frequency and amplitude on mild-steel weldment characteristics are explored. The findings of excitations produced welds revealed that grain refinement, as seen in the micrograph, is the cause of improved mechanical qualities such as yield, breaking strength, and ultimate tensile strengths, among others.Vibratory weld conditioning has a great impact on the mechanical characterization of the aluminum alloy weld joints. Hardness and ultimate tensile strength behavior of aluminum weldment was greatly influenced by the vibratory TIG welding. Mechanical properties of aluminum weldment were tested under the impact of voltage input to the vibromotor and time at which it is vibrated. Apart from the several vibratory TIG welding parameters, vibration amplitude is the one which has a significant impact on the weldment properties. Authors observed that in vibratory TIG welding, vibration amplitude improves the properties of weldment [25, 26]. In both thermal stress relief and vibratory thermal stress relief, residual stresses in aluminum 7075 alloys were examined. The outcomes were contrasted with the finite element models. The decrease of stresses was shown to be significantly impacted by the authors’ discovery of the thermal vibratory stress alleviation approach. Heat vibratory stress-relieving techniques relieve tension at rates that are, respectively, 20.43 and 38.56 percent faster than those of thermal and vibratory methods.The impact of vibration stress reduction on the steel butt-welded connections was studied by Ebrahimi et al. The findings showed that whenever the maximum stress frequency reaches its resonance frequency, lengthwise residual may be lowered more drastically. 
Additionally, the finite element approach was used to compare experimental findings with modeled outcomes, and it was shown that the simulations are often similar to the experimental outcomes [27].The present study thus investigated the effect of PWHT and vibrational welding on the change of mechanical characteristics like ultimate tensile and impact strength of the welded part of 304L stainless steel produced through the SAW. The resultant effects on tensile and impact were discussed, and a comparison is made between conventional, PWHT, and vibratory-assisted welding. ## 2. Methodology and Materials ### 2.1. Post Weld Heat Treatment Process (PWHT) Arc welding was used to determine the hardness of 304L SS steels in this investigation. The shielded metal arc welding technique is used to attach two plates that are 20 mm wide, 200 mm long, and 5 mm thick.To determine the resistance against indentation of 304L SS weld joints, post weld heat treatment was used at temperatures ranging from 650°C to 1050°C with a 200°C gap, as shown in Figure1. The temperature variation is 60 min to 240 min, with a 60 min break between them. The created samples are quenched in cold water at each stage to determine the tensile and impact strength of the material at various temperatures and time breaks.Figure 1 Temperatures vs. quenching time (heat treatment). ### 2.2. Vibratory-Assisted Welding A table for placing the specimen was attached to four springs, one on each corner. The vibration table was equipped by adding a vibromotor to the vibratory table configuration. An ammeter, voltmeter, and dimmerstat were linked to the vibromotor to produce the vibrations. Figure2 depicts the experimental setup. To increase the weld joint mechanical characteristics by giving constructive modifications in the microstructure of the weldment region, a supplemental vibratory setup capable of delivering mechanical excitations to the weld pool for “manual metal arc welding” is designed. Different frequencies were applied at varied amplitudes over the length of the weld bead, merely straggling behind the weld torch to physically stimulate the weld pool and generate the desired microstructural outcomes.Figure 2 Experimental setup. ### 2.3. Vibration Parameters with respect to the Voltage of Vibromotor This configuration generates the necessary frequency in terms of volts. At 70, 150, and 230 V, Table1 shows the variations in amplitude and acceleration. Various vibration parameters with respect to the input voltage are shown in Figure 2.Table 1 Vibration factors in terms of vibromotor voltage. S. no.Vibromotor voltage (volts)Frequency(Hz)Acceleration(mm/sec2)Amplitude(mm)150607715.410.235260615916.900.240370624118.390.245480632319.880.250590640421.380.2556100648622.870.2607110656824.360.2658120664925.850.2709130673127.340.27510140681328.830.28011150689430.320.28512160697632.70.29013170705835.070.29514180713937.440.30015190722139.810.30516200730342.180.31017210738444.550.31418220746646.920.31919230754849.290.324Acceleration, frequency, and amplitude cyclically vary with respect to the different levels of voltage of vibration; the root mean square (rms) values of these parameters have been considered for better operating conditions. The frequency, amplitude, and acceleration of specimens have been measured by a vibration tester shown in Table1. Specifications of the vibration tester are 10 Hz to 10 kHz frequency range, 0.01 to 4 mm amplitude range, and 0.1 to 400 mm/s2 for measuring vibration parameters. ### 2.4. 
Materials and the Weld-Joint Preparation

The base material used in this investigation is 304L stainless steel with a thickness of 5 mm. Its composition is carbon (C) 0.03%, manganese (Mn) 2%, phosphorus (P) 0.045%, sulphur (S) 0.03%, silicon (Si) 0.75%, chromium (Cr) 18%, nickel (Ni) 8%, and nitrogen (N) 0.1%, as indicated in Table 2. To test the effect of vibrations, the weld joints were produced according to standards. 304L stainless steel is utilized in a variety of manufacturing applications due to its excellent formability, strength, and corrosion resistance. For weld joint preparation, samples were positioned on the flat surface of the vibration platform and the manual arc welding procedure was used. By adjusting the dimmerstat and voltmeter during the welding operation, the required levels of vibration were imparted to the specimens. The line diagram of specimen preparation before welding is shown in Figure 3. During the welding operation, vibrations were continuously transmitted to the molten pool.

Table 2: Chemical composition of 304L SS (weight %).

| Material grade | Fe | C | Si | Mn | P | S | Cr | Ni | N |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 304L | 71.045 | 0.03 | 0.75 | 2 | 0.045 | 0.03 | 18 | 8 | 0.1 |

Figure 3: Specimen dimensions for welding.

The vibrations were measured with a vibrometer, a dedicated instrument; the velocity, acceleration, and displacement of every vibration set were recorded. Table 1 lists the measured vibration parameters.

### 2.5. Tensile Test of Weldment

The tensile strength test was conducted on a universal tensile testing machine, taking into account variables such as the welding current, the vibration time applied to the fusion zone, and the DC motor voltage. The dimensions of the test specimens follow the ASTM D638 standard; Figure 4 depicts the specimen dimensions according to this standard.

Figure 4: Line diagram of a tensile test specimen.

Destructive tensile testing for ultimate and yield strength is used to determine the ductility of metallic materials and the force required to break the weldments. The engineering strain, i.e., the extent to which the specimen elongates up to the breaking point, is given by

$$\varepsilon = \frac{\Delta L}{L_0} = \frac{L - L_0}{L_0}, \quad (1)$$

where ΔL is the change in length of the specimen, L₀ is the initial length, and L is the final length of the specimen. The stress σ is computed from the force applied over a particular area:

$$\sigma = \frac{F_n}{A}, \quad (2)$$

where F_n is the applied force and A is the cross-sectional area over which it acts.

### 2.6. Impact Test

A Charpy impact testing machine is used to perform the impact test on the welded specimens. The specimens, prepared according to ASTM E23, are presented in Figure 5. A notched specimen is placed in the machine, and the heavy pendulum is released from a fixed height to strike it.

Figure 5: Test sample for impact test.
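As a small numeric illustration of Equations (1) and (2), the Python snippet below computes engineering strain and stress from a gauge length, an elongation, an applied force, and a cross-sectional area. The input numbers are invented for illustration and are not measurements from this study.

```python
def engineering_strain(l0_mm, l_mm):
    """Equation (1): strain = (L - L0) / L0 (dimensionless)."""
    return (l_mm - l0_mm) / l0_mm

def engineering_stress(force_n, area_mm2):
    """Equation (2): stress = F / A, in MPa because 1 N/mm^2 = 1 MPa."""
    return force_n / area_mm2

# Invented example: a 50 mm gauge length stretched to 62 mm at fracture,
# with a 26 kN load carried by a 10 mm x 5 mm cross-section.
strain = engineering_strain(50.0, 62.0)            # 0.24, i.e. 24 % elongation
stress = engineering_stress(26_000.0, 10.0 * 5.0)  # 520 MPa
print(f"strain = {strain:.2f}, stress = {stress:.0f} MPa")
```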
## 3. Results and Discussion

### 3.1. Tensile and Impact Strength of Heat-Treated Specimens

The samples were welded without vibrations and then subjected to post weld heat treatment. The treatment was carried out at temperatures ranging from 650°C to 1050°C at intervals of 200°C, with heat treatment times ranging from 1 hour to 4 hours at intervals of 1 hour. Comparing the weld specimens heat treated at 1050°C with those treated at 650°C and 850°C, the results demonstrate that heat treatment at 1050°C produces better results.

Figure 6 depicts the tensile strength of weld specimens that underwent the heat treatment process. Specimens were heat treated at 650°C, 850°C, and 1050°C, in each case for 60 min to 240 min at intervals of 60 min. Comparing the results at the various temperatures and times shows that the material's tensile strength is greatest at 1050°C and 4 hrs. PWHT tensile strength values at 650°C, 850°C, and 1050°C for time periods of (a) 1 hr, (b) 2 hrs, (c) 3 hrs, and (d) 4 hrs are shown in Figure 6.

Figure 6: Tensile strength of PWHT at different time periods: (a) PWHT at 1 hr; (b) PWHT at 2 hrs; (c) PWHT at 3 hrs; (d) PWHT at 4 hrs.

Compared with the shorter treatment times, the peak ultimate tensile strength and yield strength are attained at 1050°C and 4 hrs, at 508 MPa and 192 MPa, respectively. Compared with other metals, the HAZ of 304L stainless steels has reduced thermal diffusivity during the heat treatment process.
## 3. Results and Discussion

### 3.1. Tensile and Impact Strength of Heat-Treated Specimens

The samples were prepared in the absence of vibrations and subjected to post weld heat treatment. The treatment is carried out at temperatures ranging from 650°C to 1050°C at intervals of 200°C, with heat treatment times ranging from 1 hour to 4 hours at intervals of 1 hour. When the heat treatment of the weld specimen at 1050°C is compared with the heat treatments at 650°C and 850°C, the results demonstrate that the treatment at 1050°C produces better results.

Figure 6 depicts the tensile strength of weld specimens that have undergone the heat treatment process. Specimens were heat treated at 650°C, 850°C, and 1050°C for 60 min to 240 min at intervals of 60 min during the experiment. By comparing the results at the various temperatures and times, it has been determined that the material's tensile strength is greater at 1050°C and 4 hrs than at the other conditions. PWHT tensile strength values at 650°C, 850°C, and 1050°C for time periods of (a) 1 hr, (b) 2 hrs, (c) 3 hrs, and (d) 4 hrs are shown in Figure 6.

Figure 6 Tensile strength of PWHT at different time periods: (a) PWHT at 1 hr; (b) PWHT at 2 hrs; (c) PWHT at 3 hrs; (d) PWHT at 4 hrs.

When compared to the shorter holding times, the peak ultimate tensile strength and yield strength are attained at 1050°C and 4 hrs, at 508 MPa and 192 MPa, respectively. When compared to other metals, the HAZ of 304L stainless steel has reduced thermal diffusivity during the heat treatment process. As a result, the material changes from austenitic to martensitic, which weakens the HAZ of the metal.

Similarly, Figure 7 depicts the impact strength of weld specimens that have undergone the heat treatment process. Specimens were heat treated at 650°C, 850°C, and 1050°C for 60 min to 240 min at intervals of 60 min during the experiment. By comparing the outcomes at the various temperatures and times, it has been determined that the material's impact strength is greater at 1050°C and 4 hrs than at the other conditions. PWHT impact strength values at 650°C, 850°C, and 1050°C for time periods of (a) 1 hr, (b) 2 hrs, (c) 3 hrs, and (d) 4 hrs are shown in Figure 7.

Figure 7 Impact strength of PWHT at different time periods: (a) PWHT at 1 hr; (b) PWHT at 2 hrs; (c) PWHT at 3 hrs; (d) PWHT at 4 hrs.

Similarly, when compared to the shorter holding times, the peak impact strength is attained at 1050°C and 4 hrs, at 246 J. When compared to other metals, the HAZ of 304L SS has reduced thermal diffusivity during the heat treatment process. As a result, the material changes from austenitic to martensitic, which weakens the HAZ of the metal.

### 3.2. Tensile and Impact Strength of Vibratory-Assisted Weld Specimens

The samples were joined with vibrations by adjusting the voltage to the vibromotor and the welding current during the welding operation. The vibromotor voltage is varied from 60 V to 230 V with a 10 V interval, and the welding current is varied from 90 to 130 A with a 20 A interval. The specimens are first welded with zero voltage input, and the voltage is then gradually increased during the welding process. Vibromotor voltages from 60 V to 230 V in steps of 20 V are used at weld currents of 90 A, 110 A, and 130 A. The resistance to elongation, i.e., the tensile strength of the weldments, first dips and then increases steadily as the vibration level rises from 60 V to 180 V for 90–130 A, and then drops from 180 V to 230 V. For all welding currents from 90 to 130 A, the highest tensile strength is obtained at a vibromotor voltage of 180 V and a welding current of 110 A, as shown in Figure 8.

Figure 8 Tensile strength of vibratory-assisted weld specimens: (a) tensile strength at a welding current of 90 A; (b) tensile strength at a welding current of 110 A; (c) tensile strength at a welding current of 130 A; (d) comparison of tensile strength at different welding currents.

Similarly, for weld currents of 90–130 A, the highest impact strength is identified at a vibromotor voltage of 180 V and a weld current of 110 A, as shown in Figure 9. When the tensile and impact strength values of the post weld heat-treated specimens and of the specimens prepared with vibrations are compared, it is found that, under ideal working conditions, the results of vibratory-assisted welding (VAW) are marginally higher than those of the heat-treated specimens. At 1050°C and 4 hours, the maximum ultimate tensile strength and impact strength for the heat-treated joints were 508 MPa and 246 J, respectively. At 180 V of the vibration motor and 110 A of weld current, the maximum ultimate tensile strength and impact strength attained for the joints produced with vibrations are 520 MPa and 262 J, respectively.
However, conventionally formed weld joints have ultimate tensile and impact strength values of 460 MPa and 201 J, respectively, which are lower than those of the previous two procedures. Both PWHT and VAW are excellent at decreasing residual stresses and other weld flaws and enhancing the tensile and impact strengths of weld joints.

Figure 9 Impact strength of vibratory-assisted weld specimens: (a) impact strength at a 90 A welding current; (b) impact strength at a 110 A welding current; (c) impact strength at a 130 A welding current; (d) comparison of impact strength at different welding currents.
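A minimal arithmetic sketch of these comparisons is given below: it simply recomputes the relative gains in ultimate tensile strength and impact energy implied by the peak values reported above for the conventional, PWHT, and vibratory-assisted joints.

```python
# Reported peak values: (ultimate tensile strength in MPa, impact energy in J).
results = {
    "conventional": (460.0, 201.0),
    "PWHT (1050 C, 4 h)": (508.0, 246.0),
    "VAW (180 V, 110 A)": (520.0, 262.0),
}

base_uts, base_impact = results["conventional"]
for label, (uts, impact) in results.items():
    if label == "conventional":
        continue
    uts_gain = (uts - base_uts) / base_uts
    impact_gain = (impact - base_impact) / base_impact
    print(f"{label}: UTS +{uts_gain:.1%}, impact energy +{impact_gain:.1%}")
```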
## 4. Conclusion

Post weld heat treatment (PWHT) and vibratory-assisted welding (VAW) improve the tensile and impact strength of 304L stainless steel. Both procedures refine the grain structure, which reduces weld flaws and residual stresses. A comparison of PWHT and VAW shows that vibratory-assisted welding gives somewhat higher strength values. Although PWHT and VAW provide nearly identical outcomes, the PWHT technique is more time-consuming, costly, and labor-intensive. Post weld heat treatment increases the tensile and impact strengths of 304L stainless steel by 10% and 22%, respectively, when compared to normal welding. The vibratory-assisted welding approach increases the tensile and impact strength of 304L stainless steel by 13% and 23%, respectively.
According to the current study, vibratory-assisted welding is one of the best methods for eliminating weld flaws and improving the weld joint characteristics.

---

*Source: 1000859-2022-09-26.xml*
# A Fast Algorithm for Determining the Optimal Navigation Star for Responsive Launch Vehicles

**Authors:** Yi Zhao; Hongbo Zhang; Pengfei Li; Guojian Tang
**Journal:** International Journal of Aerospace Engineering (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1000865

---

## Abstract

Platform inertial-stellar composite guidance is a composite guidance method that supplements inertial navigation with stellar correction, which can effectively improve the accuracy of responsive launch vehicles. To solve the problem of rapidly determining the optimal navigation star in this system, this paper proposes an algorithm based on the equivalent information compression theory. First, this paper explains why the single-star scheme can achieve the same accuracy as the dual-star scheme. At the same time, the analytical expression of the optimal navigation star under significant initial error is derived. In addition, a strategy for determining the available optimal navigation star is designed according to the arrow-borne navigation star database. The proposed algorithm is evaluated by two representative responsive launch vehicle trajectory simulations. The simulation results demonstrate that the proposed algorithm can determine the optimal navigation star quickly, which greatly shortens the preparation time before the rapid launch of vehicles and improves the composite guidance accuracy.

---

## Body

## 1. Introduction

Inertial-stellar composite guidance is a composite guidance method based on inertial guidance supplemented by stellar guidance. It utilizes the inertial space azimuth datum provided by the star to calibrate the error angle between the platform coordinate system and the launch inertial coordinate system and corrects the impact point deviation caused by the platform pointing error [1]. The inertial-stellar composite guidance system corrects the drift error of the inertial platform according to the star sensor information, which can not only improve the guidance accuracy and rapid launch ability [2] but also reduce the cost. Moreover, the motion parameters of a spacecraft in space can be determined [3–5], and the system has strong environmental adaptability.

Inertial-stellar guidance is essentially a problem of determining attitude through vector observation. This problem was first proposed by Wahba [6], and various attitude determination algorithms were developed, such as TRIAD [7], QUEST [8, 9], SVD [10], FOAM [11], Euler-q [12], and the fast linear attitude estimator method [13–15]. To handle the case in which there are a large number of outliers, Yang and Carlone formulated the Wahba problem by truncated least squares [16]. Ghadiri et al. [17] proposed a robust multi-objective optimization method to overcome static attitude determination with bounded uncertainty. These algorithms need at least two vector observations to calculate the attitude. However, in some cases, long-term observation of one vector is enough [18]. Reference [19] proposed an attitude determination algorithm based on minimizing the sum of squares of the image point coordinate residuals. The algorithm can still determine the attitude when only one star is observed. Reference [20] derived the analytical attitude solution when only one sensor is used for observation. The analytical solution can be expressed as the combination of two limiting quaternions, and covariance and singularity analyses were carried out.
However, it did not determine the optimal attitude solution. Similarly, according to the number of observation vectors, inertial-stellar composite guidance can also be divided into single-vector observation and double-vector observation, that is, the single-star scheme and the double-star scheme. For the platform inertial navigation system, the star sensor is usually fixedly installed on the platform. Because the direction of the platform in inertial space cannot be adjusted after launch, the double-star scheme requires installing two star sensors on the platform, which greatly complicates the structure. It has been found that, by observing a star in a specific direction, the single-star scheme can achieve the same accuracy as the double-star scheme [21, 22]. Zhang et al. have proved this theoretically [23]. As far as is known, the only scheme in practical application is the single-star scheme, used for example in the American "Trident" submarine-launched long-range ballistic missile. However, the single-star scheme needs to determine the optimal navigation star before the vehicle launch. At present, the optimal navigation star is determined by numerical methods [24, 25], which increases the preparation time and limits wide application.

Motivated by the work of Zhang et al., this paper proposes a fast algorithm to determine the optimal navigation star for responsive launch vehicles. Firstly, the relationship equations between the initial error and the impact point deviation and the star sensor measurement are established. Then, the algorithm exploits the equivalent information compression theory [23] to explain why the single-star scheme can achieve the same accuracy as the double-star scheme and deduces the optimal navigation star under the condition of significant initial error. The deduced analytical solution can greatly shorten the prelaunch preparation time. On this basis, the local navigation star database is determined according to the deviation angle, and the available optimal navigation star can be determined.

The structure of this paper is as follows. Section 2 presents the definitions of the various coordinate systems and the derivations of the inertial platform system and star sensor models. Section 3 shows the analytical expression of the optimal navigation star. In Section 4, the available optimal navigation star is determined based on the arrow-borne navigation star database. The simulation results and conclusions are given in Sections 5 and 6. The contribution of this paper is to provide an analytical solution for the optimal navigation star, which shortens the prelaunch preparation time and enhances the performance of responsive launch vehicles.

## 2. Inertial Platform System and Star Sensor Modeling

### 2.1. Definitions of Various Coordinate Systems

#### 2.1.1. Geocentric Inertial Coordinate System $o_E\text{-}x_Iy_Iz_I$

The coordinate system origin $o_E$ is the earth centroid, and the basic plane is the J2000 earth equatorial plane. The $o_Ex_I$ axis points from the earth centroid to the J2000 mean equinox in the basic plane. The $o_Ez_I$ axis points to the north pole along the normal of the basic plane. The $o_Ey_I$ axis and the other two axes constitute a right-handed system. This coordinate system is abbreviated as the i-system.

#### 2.1.2. Launch Coordinate System $o\text{-}xyz$

The system mainly describes the motion of the responsive launch vehicle relative to the earth. The launch coordinate system is fixedly connected with the earth, and the origin is taken as the launch point $o$.
In the system, the $ox$ axis points to the launch aiming direction in the launch horizontal plane, the $oy$ axis points upward perpendicular to the launch point horizontal plane, and the $oz$ axis is perpendicular to the $xoy$ plane. The axes $ox$, $oy$, and $oz$ form a right-handed coordinate system. This coordinate system is abbreviated as the g-system (Figure 1).

Figure 1 Launch coordinate system.

#### 2.1.3. Launch Inertial Coordinate System $o_A\text{-}x_Ay_Az_A$

The launch inertial coordinate system coincides with the launch coordinate system at the launch time. But after the vehicle is launched, the origin and the direction of each axis remain stationary in inertial space. The coordinate system is used to establish the vehicle motion equations in inertial space. This coordinate system is abbreviated as the A-system.

#### 2.1.4. Ideal Inertial Platform Coordinate System $o_{P'}\text{-}x_{P'}y_{P'}z_{P'}$

The coordinate system origin $o_{P'}$ is located at the platform datum, and the coordinate axes are defined by the platform frame axes or the gyro-sensitive axes. After pre-launch alignment and leveling, each coordinate axis shall be parallel to the corresponding axis of the launch inertial coordinate system. This coordinate system is abbreviated as the p′-system.

#### 2.1.5. Inertial Platform Coordinate System $o_P\text{-}x_Py_Pz_P$

Due to the platform misalignment angle, there is a deviation between the inertial platform coordinate system and the ideal inertial platform coordinate system. This coordinate system is abbreviated as the p-system.

#### 2.1.6. Star Sensor Coordinate System $o_s\text{-}x_sy_sz_s$

The coordinate system mainly describes the star sensor measurement. In this system, the coordinate system origin $o_s$ is at the centre of the star sensor imaging device (charge-coupled device, complementary metal oxide semiconductor, etc.). The $o_sx_s$ axis is consistent with the axis of the optical lens, the $o_sy_s$ axis is vertical to the pixel readout direction, and the $o_sz_s$ axis is horizontal to the pixel readout direction. The $y_so_sz_s$ plane is consistent with the imaging device plane. The transformation matrix between the star sensor coordinate system and the vehicle body coordinate system is determined by the star sensor installation angle. This coordinate system is abbreviated as the s-system.

### 2.2. Relationship between Impact Point Deviation and Platform Misalignment Angle

The platform misalignment angle represents the inertial reference deviation, that is, the error angle between the inertial platform and the launch inertial coordinate system. It is mainly caused by various initial errors and inertial navigation errors and affects the landing point accuracy. Although the platform misalignment angle is affected by many factors, the initial error accounts for the main part under certain conditions. For a rapidly maneuvering launch vehicle, the accuracy of pre-launch orientation and alignment may not be very high, so the initial error makes up a significant portion of the platform misalignment angle. Therefore, this paper mainly studies the determination of the optimal navigation star under the significant initial error condition.

The platform inertial system and the launch inertial system can be made coincident with the help of the platform initial alignment. The initial alignment error will be caused by the inherent equipment error, the influence of external interference in the alignment process, and the method error. And the platform alignment error around the $y$-axis will be caused by the initial orientation error during launch.
Thus, the orientation error can be considered together with the initial alignment error.

The initial alignment and orientation errors can be expressed by the three-axis misalignment angles between the p′-system and the A-system, which are defined as $\begin{bmatrix}\varepsilon_{0x} & \varepsilon_{0y} & \varepsilon_{0z}\end{bmatrix}^T$. And there are two parts in $\varepsilon_{0y}$: orientation error and aiming error. It is assumed that the adjustment platform adopts the method of yaw first and then pitch; there is

$$\begin{bmatrix}\alpha_x\\ \alpha_y\\ \alpha_z\end{bmatrix}=\begin{bmatrix}\cos\varphi_r\cos\psi_r & \sin\varphi_r & -\cos\varphi_r\sin\psi_r\\ -\sin\varphi_r\cos\psi_r & \cos\varphi_r & \sin\varphi_r\sin\psi_r\\ \sin\psi_r & 0 & \cos\psi_r\end{bmatrix}\begin{bmatrix}\varepsilon_{0x}\\ \varepsilon_{0y}\\ \varepsilon_{0z}\end{bmatrix}=C_A^{P'}\begin{bmatrix}\varepsilon_{0x}\\ \varepsilon_{0y}\\ \varepsilon_{0z}\end{bmatrix} \tag{1}$$

where $\begin{bmatrix}\alpha_x & \alpha_y & \alpha_z\end{bmatrix}^T$ are the misalignment angles caused by the initial alignment and orientation errors, $\psi_r$ and $\varphi_r$ are the rotation angles around the $y$-axis and $z$-axis, respectively, and $C_A^{P'}$ is the transformation matrix from the A-system to the p′-system.

The inertial guidance accuracy meets the following relationship with the initial alignment error:

$$\begin{bmatrix}\Delta L\\ \Delta H\end{bmatrix}=\begin{bmatrix}n_{L1} & n_{L2} & n_{L3}\\ n_{H1} & n_{H2} & n_{H3}\end{bmatrix}\begin{bmatrix}\varepsilon_{0x}\\ \varepsilon_{0y}\\ \varepsilon_{0z}\end{bmatrix} \tag{2}$$

in which $n_{L1}$, $n_{L2}$, and $n_{L3}$ are the partial derivatives of the longitudinal impact point deviation with respect to the initial errors in the three directions, and $n_{H1}$, $n_{H2}$, and $n_{H3}$ are the partial derivatives of the lateral impact point deviation with respect to the initial errors in the three directions.

Combining Equation (1) and Equation (2) gives

$$\begin{bmatrix}\Delta L\\ \Delta H\end{bmatrix}=\begin{bmatrix}q_{11} & q_{12} & q_{13}\\ q_{21} & q_{22} & q_{23}\end{bmatrix}\begin{bmatrix}\alpha_x\\ \alpha_y\\ \alpha_z\end{bmatrix}=\begin{bmatrix}q_1^T\\ q_2^T\end{bmatrix}\begin{bmatrix}\alpha_x\\ \alpha_y\\ \alpha_z\end{bmatrix} \tag{3}$$

in which

$$\begin{aligned}q_{11}&=n_{L1}\cos\varphi_r\cos\psi_r+n_{L2}\sin\varphi_r-n_{L3}\cos\varphi_r\sin\psi_r,\\ q_{12}&=-n_{L1}\sin\varphi_r\cos\psi_r+n_{L2}\cos\varphi_r+n_{L3}\sin\varphi_r\sin\psi_r,\\ q_{13}&=n_{L1}\sin\psi_r+n_{L3}\cos\psi_r,\\ q_{21}&=n_{H1}\cos\varphi_r\cos\psi_r+n_{H2}\sin\varphi_r-n_{H3}\cos\varphi_r\sin\psi_r,\\ q_{22}&=-n_{H1}\sin\varphi_r\cos\psi_r+n_{H2}\cos\varphi_r+n_{H3}\sin\varphi_r\sin\psi_r,\\ q_{23}&=n_{H1}\sin\psi_r+n_{H3}\cos\psi_r.\end{aligned} \tag{4}$$
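To make Equations (1)–(3) concrete, the following Python sketch assembles the transformation matrix $C_A^{P'}$ and maps a set of initial errors to the impact point deviation. It is a schematic illustration only: the function names and all numerical values (rotation angles, partial derivatives, initial errors) are placeholders chosen here, not data from the paper.

```python
import numpy as np

def c_a_to_p(psi_r: float, phi_r: float) -> np.ndarray:
    """Transformation matrix C_A^{p'} of Equation (1): rotations psi_r about
    the y-axis and phi_r about the z-axis."""
    cphi, sphi = np.cos(phi_r), np.sin(phi_r)
    cpsi, spsi = np.cos(psi_r), np.sin(psi_r)
    return np.array([
        [ cphi * cpsi, sphi, -cphi * spsi],
        [-sphi * cpsi, cphi,  sphi * spsi],
        [ spsi,        0.0,   cpsi],
    ])

def impact_deviation(eps0: np.ndarray, n_l: np.ndarray, n_h: np.ndarray,
                     psi_r: float, phi_r: float) -> np.ndarray:
    """Equations (1)-(3): map initial errors eps0 to the impact point deviation [dL, dH]."""
    c = c_a_to_p(psi_r, phi_r)
    alpha = c @ eps0                    # Equation (1): platform misalignment angles
    q = np.vstack([n_l, n_h]) @ c.T     # rows q1^T, q2^T of Equations (3) and (4)
    return q @ alpha                    # identical to [n_L; n_H] @ eps0, Equation (2)

# Placeholder numbers purely for illustration (arcsecond-level initial errors,
# notional impact-point partial derivatives in m/rad).
eps0 = np.deg2rad(np.array([10.0, 20.0, 5.0]) / 3600.0)
n_l = np.array([1.2e5, 3.0e4, 8.0e3])
n_h = np.array([5.0e3, 9.0e4, 2.0e4])
print(impact_deviation(eps0, n_l, n_h, psi_r=np.deg2rad(30.0), phi_r=np.deg2rad(45.0)))
```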
$$
S_S-S_S'=C_{P}^{S}C_{S}^{P}S_S'\cdot a.
\tag{14}
$$

Therefore, the star sensor measurement equation can be expressed as

$$
\begin{bmatrix}\xi\\ \eta\end{bmatrix}
=\begin{bmatrix}
-\sin\psi_0 & 0 & \cos\psi_0\\
\sin\varphi_0\cos\psi_0 & -\cos\varphi_0 & \sin\varphi_0\sin\psi_0
\end{bmatrix}
\begin{bmatrix}\alpha_x\\ \alpha_y\\ \alpha_z\end{bmatrix}
=\begin{bmatrix}h_1^{T}\\ h_2^{T}\end{bmatrix}
\begin{bmatrix}\alpha_x\\ \alpha_y\\ \alpha_z\end{bmatrix},
\tag{15}
$$

where $\varphi_0$ and $\psi_0$ are the star sensor installation angles.
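As an illustration of Equations (1) and (15), the following sketch (a hypothetical NumPy example, not part of the original derivation) propagates assumed initial alignment errors into platform misalignment angles and then into the star sensor outputs; all numerical values are placeholders.

```python
import numpy as np

def misalignment_from_initial_errors(eps0, psi_r, phi_r):
    """Eq. (1): map initial alignment/orientation errors eps0 = [e0x, e0y, e0z]
    to platform misalignment angles alpha = [ax, ay, az]."""
    c_psi, s_psi = np.cos(psi_r), np.sin(psi_r)
    c_phi, s_phi = np.cos(phi_r), np.sin(phi_r)
    C_A_to_p = np.array([
        [ c_phi * c_psi,  s_phi, -c_phi * s_psi],
        [-s_phi * c_psi,  c_phi,  s_phi * s_psi],
        [ s_psi,          0.0,    c_psi        ],
    ])
    return C_A_to_p @ np.asarray(eps0)

def star_sensor_outputs(alpha, phi0, psi0):
    """Eq. (15): star sensor outputs (xi, eta) produced by misalignment alpha
    for installation angles (phi0, psi0)."""
    h = np.array([
        [-np.sin(psi0),                 0.0,           np.cos(psi0)],
        [ np.sin(phi0) * np.cos(psi0), -np.cos(phi0),  np.sin(phi0) * np.sin(psi0)],
    ])
    return h @ np.asarray(alpha)

if __name__ == "__main__":
    arcsec = np.deg2rad(1.0 / 3600.0)
    eps0 = np.array([100.0, 300.0, 100.0]) * arcsec      # placeholder initial errors
    alpha = misalignment_from_initial_errors(eps0, psi_r=0.0, phi_r=np.deg2rad(37.6))
    xi, eta = star_sensor_outputs(alpha, phi0=np.deg2rad(20.0), psi0=0.0)
    print("alpha [rad]:", alpha, " xi, eta:", xi, eta)
```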
## 3. Theoretical Optimal Navigation Star Determination Method

For the platform inertial-stellar composite guidance scheme, a single-star scheme that measures one special navigation star can achieve the same accuracy as a double-star scheme that measures two stars. This special star is called the optimal navigation star. In terms of the difficulty and cost of realization, the single-star scheme is clearly preferable to the double-star scheme. Therefore, the single-star scheme is adopted in practical engineering, which requires the determination of the optimal navigation star.

In this section, the equivalent information compression theory is first used to explain why the single-star scheme can achieve the same accuracy as the double-star scheme. Then, the optimal navigation star is determined based on this principle. Since it is not yet matched with a navigation star in the star library, it is also called the theoretical optimal navigation star.

### 3.1. Equivalent Information Compression Theory

The impact point deviation and the platform misalignment angle can be expressed in matrix form as

$$
p=q\cdot a,
\tag{16}
$$

where $p=[\Delta L\ \Delta H]^{T}$, $q=[q_1\ q_2]^{T}$, and $a=[\alpha_x\ \alpha_y\ \alpha_z]^{T}$. It can be seen from Equation (16) that the rank of $q$ is 2, so there is information compression in the mapping from $a$ to $p$. It is worth noting that $a$ cannot be uniquely determined by $p$, which indicates that Equation (16) has infinitely many solutions. Among them, there is a special solution $a_0$ that belongs to the subspace $q_s=\operatorname{span}\{q_1,q_2\}$ spanned by the row vectors of $q$. Therefore, $a_0$ can be expressed as

$$
a_0=\alpha_{10}q_1+\alpha_{20}q_2=q^{T}\cdot X.
\tag{17}
$$

Substituting Equation (17) into Equation (16) gives

$$
p=qq^{T}X.
\tag{18}
$$

From Equation (17) and Equation (18), we can get

$$
a_0=q^{T}\left(qq^{T}\right)^{-1}p.
\tag{19}
$$

It can be seen from the above equation that $a_0$ and $p$ correspond to each other one to one. If the inner product of two column vectors is defined as $a\cdot b=a^{T}\cdot b$, then Equation (16) can be expressed as

$$
p=\begin{bmatrix}q_1\cdot a & q_2\cdot a\end{bmatrix}^{T},
\tag{20}
$$

where $q_i\cdot a$ reflects the projection of $a$ in the $q_i$ direction. Since $q_1$ and $q_2$ are linearly independent, $p=q\cdot a$ reflects the projection $a_s$ of $a$ on the subspace $q_s$, while the projection information $a_s^{\perp}$ of $a$ on the orthogonal complement $q_s^{\perp}$ of $q_s$ is lost. Since $q_s$ is a complete subspace of the Hilbert space $R^{n}$, there is

$$
R^{n}=q_s+q_s^{\perp}.
\tag{21}
$$

According to the projection theorem, we can get

$$
a=a_s+a_s^{\perp}.
\tag{22}
$$

It can be seen from the relationship between the impact point deviation and the platform misalignment angle that $q$ is not of full rank.
So only the information $a_s$ in the subspace can be obtained from the impact point deviation, and the information $a_s^{\perp}$ in the orthogonal complement cannot be obtained.

The star sensor measurement equation can also be expressed in matrix form as

$$
Z=h\cdot a.
\tag{23}
$$

Assuming that $\{h_1,h_2\}$ is another set of bases of $q_s$, it can be seen from the above analysis that $Z$ also reflects all the information projected by $a$ on $q_s$, which can be expressed as

$$
a_0=h^{T}\left(hh^{T}\right)^{-1}Z.
\tag{24}
$$

Substituting Equation (24) into Equation (16), we can get

$$
p=q\cdot h^{T}\left(hh^{T}\right)^{-1}Z.
\tag{25}
$$

Therefore, from the perspective of information compression, $q$ and $h$ are equal compression maps; that is, the impact point deviation $p$ can be uniquely determined by the single-star observation $Z$.

The impact point deviation is only affected by the projection $a_s$ of the misalignment angle $a$ on the subspace $q_s=\operatorname{span}\{q_1,q_2\}$. Therefore, according to Equation (25), it is only necessary to select $h_1$ and $h_2$ so that $h$ and $q$ are equal information compression maps. Then, the observation information $Z$ contains all the useful information. The schematic diagram of determining the optimal navigation star is shown in Figure 3.

Figure 3 Schematic diagram of determining the optimal navigation star.

In the figure, all the information of the misalignment angle $a$ is composed of $a_s$ and $a_s^{\perp}$, but only $a_s$ affects the impact point deviation. If $h$ and $q$ are equal information compression maps, the single-star scheme can measure all the information of $a_s$. Although the double-star scheme can measure all the information of $a$, only the $a_s$ part is used in the correction, and $a_s^{\perp}$ is useless information, so the double-star scheme has the same accuracy as the single-star scheme. In more intuitive terms, there are only two indicators, $\Delta L$ and $\Delta H$, describing the impact point deviation, which reflect part of the misalignment angle $a$. When observing a single star, two measurements $\xi$ and $\eta$ can be obtained, which also reflect part of the misalignment angle $a$. By selecting the optimal navigation star, $\xi$ and $\eta$ can include all the information of the misalignment angle $a$ contained in $\Delta L$ and $\Delta H$. Therefore, the single-star scheme can achieve the same accuracy as the double-star scheme.

### 3.2. Determining the Optimal Navigation Star

According to the equivalent information compression theory, the optimal navigation star should satisfy

$$
h_1\times h_2=\frac{q_1\times q_2}{\left\|q_1\times q_2\right\|}
\quad\text{or}\quad
h_1\times h_2=-\frac{q_1\times q_2}{\left\|q_1\times q_2\right\|}.
\tag{26}
$$

For the left side of the above equation, it can be obtained according to Equation (15) that

$$
h_1\times h_2=\begin{bmatrix}\cos\varphi_0\cos\psi_0 & \sin\varphi_0 & \cos\varphi_0\sin\psi_0\end{bmatrix}^{T}.
\tag{27}
$$

It is assumed that the star sensor is installed on the xoy plane of the platform, and the platform is adjusted to aim at the navigation star by yawing first and then pitching. Substituting $\psi_0=0^{\circ}$ into Equation (27), we can get

$$
h_1\times h_2=\begin{bmatrix}\cos\varphi_0 & \sin\varphi_0 & 0\end{bmatrix}^{T}.
\tag{28}
$$

Define

$$
Q_c=\begin{bmatrix}Q_{c1} & Q_{c2} & Q_{c3}\end{bmatrix}^{T}=q_1\times q_2,
\tag{29}
$$

where

$$
\begin{aligned}
Q_{c1}&=\left(n_{H1}n_{L3}-n_{H3}n_{L1}\right)\sin\varphi_r+\left(n_{H3}n_{L2}-n_{H2}n_{L3}\right)\cos\varphi_r\cos\psi_r+\left(n_{H1}n_{L2}-n_{H2}n_{L1}\right)\cos\varphi_r\sin\psi_r,\\
Q_{c2}&=\left(n_{H1}n_{L3}-n_{H3}n_{L1}\right)\cos\varphi_r-\left(n_{H3}n_{L2}-n_{H2}n_{L3}\right)\sin\varphi_r\cos\psi_r-\left(n_{H1}n_{L2}-n_{H2}n_{L1}\right)\sin\varphi_r\sin\psi_r,\\
Q_{c3}&=-\left(n_{H1}n_{L2}-n_{H2}n_{L1}\right)\cos\psi_r+\left(n_{H3}n_{L2}-n_{H2}n_{L3}\right)\sin\psi_r.
\end{aligned}
\tag{30}
$$

According to Equations (26), (27), and (28), we can get

$$
\begin{aligned}
\psi_r&=\tan^{-1}\frac{n_{H1}n_{L2}-n_{H2}n_{L1}}{n_{H3}n_{L2}-n_{H2}n_{L3}},\\
\varphi_r&=-\tan^{-1}\frac{n_{H1}n_{L3}-n_{H3}n_{L1}}{\left(n_{H2}n_{L3}-n_{H3}n_{L2}\right)\cos\psi_r+\left(n_{H2}n_{L1}-n_{H1}n_{L2}\right)\sin\psi_r}-\varphi_0.
\end{aligned}
\tag{31}
$$

The optimal navigation star and the rotation angles satisfy the following relationship:

$$
\sigma_s=-\psi_r,\qquad e_s=\varphi_r+\varphi_0.
\tag{32}
$$

Therefore, the orientation of the optimal navigation star can be expressed as

$$
\begin{aligned}
\sigma_s&=-\tan^{-1}\frac{n_{H1}n_{L2}-n_{H2}n_{L1}}{n_{H3}n_{L2}-n_{H2}n_{L3}},\\
e_s&=-\tan^{-1}\frac{n_{H1}n_{L3}-n_{H3}n_{L1}}{\left(n_{H2}n_{L3}-n_{H3}n_{L2}\right)\cos\psi_r+\left(n_{H2}n_{L1}-n_{H1}n_{L2}\right)\sin\psi_r}.
\end{aligned}
\tag{33}
$$

According to Equation (26), there is another solution for the orientation of the optimal navigation star:

$$
\sigma_s'=\sigma_s-\pi,\qquad e_s'=-e_s.
\tag{34}
$$
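The closed-form result of Equations (31)–(34) can be evaluated directly once the partial derivatives of the impact point deviation are known. The sketch below is a minimal illustration with made-up values for nL1–nH3 (the real values come from the trajectory error model); it is not the paper's implementation.

```python
import numpy as np

def theoretical_optimal_star(nL, nH, phi0):
    """Eqs. (31)-(34): analytic orientation of the theoretical optimal navigation star.

    nL = (nL1, nL2, nL3), nH = (nH1, nH2, nH3): partial derivatives of the
    longitudinal / lateral impact point deviation w.r.t. the initial errors.
    phi0: star sensor installation elevation angle (psi0 = 0 assumed).
    """
    nL1, nL2, nL3 = nL
    nH1, nH2, nH3 = nH
    psi_r = np.arctan((nH1 * nL2 - nH2 * nL1) / (nH3 * nL2 - nH2 * nL3))            # Eq. (31)
    denom = (nH2 * nL3 - nH3 * nL2) * np.cos(psi_r) + (nH2 * nL1 - nH1 * nL2) * np.sin(psi_r)
    phi_r = -np.arctan((nH1 * nL3 - nH3 * nL1) / denom) - phi0                       # Eq. (31)
    sigma_s, e_s = -psi_r, phi_r + phi0                                              # Eq. (32)
    return (e_s, sigma_s), (-e_s, sigma_s - np.pi)                                   # Eq. (34)

if __name__ == "__main__":
    # Placeholder partial derivatives, for illustration only.
    nL, nH = (1.2e4, 2.5e4, 3.0e3), (2.0e3, 1.0e3, 1.5e4)
    (e_s, sigma_s), (e_s2, sigma_s2) = theoretical_optimal_star(nL, nH, phi0=np.deg2rad(20.0))
    print("solution 1 (deg):", np.rad2deg([e_s, sigma_s]))
    print("solution 2 (deg):", np.rad2deg([e_s2, sigma_s2]))
```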
## 4. Determining the Available Optimal Navigation Star Based on the Star Database

### 4.1. Angle Analysis of the Deviation from the Optimal Navigation Star

For the single-star platform inertial-stellar composite guidance scheme, only observing the optimal navigation star can achieve the same accuracy as the double-star guidance scheme. However, the star database does not necessarily contain a star in the optimal navigation star direction, and only one real star can be selected from the star library as the navigation star according to a certain principle. This star is called the available optimal navigation star. In this section, the angle by which the available navigation star deviates from the optimal navigation star is analyzed.

In the i-system, several groups of optimal navigation stars are randomly generated, in which the elevation angles are evenly distributed within [−90°, 90°] and the azimuth angles are evenly distributed within [−180°, 180°].
Each combination of elevation and azimuth angles $(e_{Ni},\sigma_{Ni})$ represents a possible optimal navigation star, and its direction vector in the i-system is

$$
V_{Ni}=\begin{bmatrix}\cos e_{Ni}\cos\sigma_{Ni} & \cos e_{Ni}\sin\sigma_{Ni} & \sin e_{Ni}\end{bmatrix}^{T}.
\tag{35}
$$

For any star brighter than 5.5 mag in the star database with elevation and azimuth angles $(e_{Sj},\sigma_{Sj})$, the direction vector in the i-system can be expressed as

$$
V_{Sj}=\begin{bmatrix}\cos e_{Sj}\cos\sigma_{Sj} & \cos e_{Sj}\sin\sigma_{Sj} & \sin e_{Sj}\end{bmatrix}^{T}.
\tag{36}
$$

The angle between the optimal navigation star and the available navigation star can be calculated according to the following equation:

$$
\alpha_{ij}=\arccos\left(V_{Ni}\cdot V_{Sj}\right).
\tag{37}
$$

By traversing $j$, the minimum angular distance between the optimal navigation star and the available navigation star can be obtained. A total of 100000 samples are drawn, and the results are shown in Figures 4 and 5.

Figure 4 Statistical histogram of the angles that the available navigation stars deviate from the optimal navigation star.

Figure 5 Probability density histogram of the angles that the available navigation stars deviate from the optimal navigation star.

Figures 4 and 5, respectively, show the statistical histogram and the probability density histogram of the angles by which the available navigation stars deviate from the optimal navigation star. Each bar represents 0.1°, and the total number of samples is 100000. In Figure 4, the angular deviations mostly lie between 0° and 6°, are concentrated in 1°–3°, and are relatively rare above 5° or below 1°. The shapes of the statistical histogram and the probability density histogram are basically the same. Figure 5 also shows the probability density function of the corresponding normal distribution; however, it is obvious that the distribution is not quite consistent with the normal distribution.

Tables 1 and 2 provide the corresponding numerical statistical results. It can be seen from Table 1 that the maximum deviation angle is 7.4221° and the mean deviation angle is 2.0949°. More detailed statistical results of the deviation of the available navigation star from the optimal navigation star are given in Table 2, which lists the single and cumulative probabilities of the deviation angle. The interval 2° > α ≧ 1° contains the most samples, accounting for 33.558%, while deviation angles greater than 7° account for only 0.014%.

Table 1 The basic analysis results of the available navigation star deviation from the optimal navigation star.

| Event | Maximum | Minimum | Mean | Square | 3σ range |
| --- | --- | --- | --- | --- | --- |
| Value (deg) | 7.4221 | 0.0041 | 2.0949 | 1.1139 | [-1.2467, 5.4365] |

Table 2 Statistical analysis results of the available navigation star deviation from the optimal navigation star.

| Deviation angle | Number | Probability (%) | Cumulative number | Cumulative probability (%) |
| --- | --- | --- | --- | --- |
| 1° > α ≧ 0° | 15449 | 15.449 | 15449 | 15.449 |
| 2° > α ≧ 1° | 33558 | 33.558 | 49007 | 49.007 |
| 3° > α ≧ 2° | 29392 | 29.392 | 78399 | 78.399 |
| 4° > α ≧ 3° | 14842 | 14.842 | 93241 | 93.241 |
| 5° > α ≧ 4° | 5216 | 5.216 | 98457 | 98.457 |
| 6° > α ≧ 5° | 1289 | 1.289 | 99746 | 99.746 |
| 7° > α ≧ 6° | 240 | 0.240 | 99986 | 99.986 |
| α ≧ 7° | 14 | 0.014 | 100000 | 100.000 |

Therefore, if the upper limit of the star sensor measurement magnitude is 5.5 mag, an available navigation star can be found within an angular distance of 7° from the optimal navigation star. In the above analysis, constraints such as occlusion by the sun, moon, and earth have not been taken into account; the deviation angle will be considerably larger after they are considered.
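The Monte Carlo experiment of Equations (35)–(37) can be reproduced in outline as below. This is a hypothetical sketch: it uses a randomly generated stand-in catalog rather than the real 5.5 mag star library, so the resulting statistics will not match Tables 1 and 2 exactly.

```python
import numpy as np

rng = np.random.default_rng(2019)

def i_system_vector(e, s):
    """Eqs. (35)-(36): i-system direction vector from elevation e and azimuth s."""
    return np.array([np.cos(e) * np.cos(s), np.cos(e) * np.sin(s), np.sin(e)])

# Stand-in catalog of unit vectors (placeholder for the real star database).
n_stars = 3000
e_cat = np.arcsin(rng.uniform(-1.0, 1.0, n_stars))        # roughly uniform on the sphere
s_cat = rng.uniform(-np.pi, np.pi, n_stars)
catalog = np.stack([i_system_vector(e, s) for e, s in zip(e_cat, s_cat)])

def min_deviation_angle(e_n, s_n):
    """Eq. (37): smallest angle between a candidate optimal star and any catalog star."""
    cosang = np.clip(catalog @ i_system_vector(e_n, s_n), -1.0, 1.0)
    return np.degrees(np.arccos(cosang).min())

# Candidate optimal stars with elevation uniform in [-90 deg, 90 deg] and
# azimuth uniform in [-180 deg, 180 deg], as in Section 4.1.
samples = [min_deviation_angle(rng.uniform(-np.pi / 2, np.pi / 2),
                               rng.uniform(-np.pi, np.pi))
           for _ in range(5000)]
print(f"mean = {np.mean(samples):.3f} deg, max = {np.max(samples):.3f} deg")
```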
### 4.2. Determining the Available Optimal Navigation Star Based on the Arrow-Borne Navigation Star Database

In practical application, the navigation star must be selected from the arrow-borne navigation star database. According to the above analysis, within a range of 7° from the theoretical optimal navigation star, the probability of finding a navigation star is 100%. Therefore, a method for determining the available optimal navigation star based on a local navigation star database is proposed to improve the efficiency of star selection.

#### 4.2.1. Determining the Local Navigation Star Database

Strong light sources should be avoided when determining the local navigation star database (this paper takes avoiding the sun as an example). The right ascension and declination of the sun obtained from the ephemeris table are defined as $\alpha_{sun}$ and $\delta_{sun}$, and the unit sun direction vector in the i-system can be expressed as

$$
i_I=\begin{bmatrix}\cos\delta_{sun}\cos\alpha_{sun} & \cos\delta_{sun}\sin\alpha_{sun} & \sin\delta_{sun}\end{bmatrix}^{T},
\tag{38}
$$

where $i_I$ is the unit sun direction vector in the i-system.

Then, the unit sun direction vector in the A-system can be further obtained:

$$
i_{sun}=C_{I}^{A}\cdot i_I,
\tag{39}
$$

where $i_{sun}$ is the unit sun direction vector in the A-system and $C_{I}^{A}$ is the transformation matrix from the i-system to the A-system.

Therefore, the angular distance $\theta_s$ between the theoretical optimal navigation star and the sun can be calculated as

$$
\theta_s=\arccos\left(S_I\cdot i_{sun}\right).
\tag{40}
$$

If $\theta_s$ is less than the sum of the solar avoidance angle $\alpha_{sun}$ and the deviation angle $\Delta\alpha$, the deviation angle is recalculated according to the following equation:

$$
\Delta\beta=-\frac{\alpha_{sun}}{\alpha_{sun}+\Delta\alpha}\theta_s+\alpha_{sun}+\Delta\alpha,
\tag{41}
$$

where $\Delta\beta$ is the recalculated deviation angle.

The orientation of a navigation star in the star database is defined as $(e_0,\sigma_0)$, and its unit vector in the A-system can be expressed as

$$
i_0=\begin{bmatrix}\cos e_0\cos\sigma_0 & \sin e_0 & \cos e_0\sin\sigma_0\end{bmatrix}^{T}.
\tag{42}
$$

Then, the angular distance $\theta_I$ between the navigation star and the theoretical optimal navigation star and the angular distance $\theta_0$ between the navigation star and the sun can be calculated, respectively, as

$$
\theta_I=\arccos\left(i_0\cdot S_I\right),\qquad
\theta_0=\arccos\left(i_0\cdot i_{sun}\right).
\tag{43}
$$

Therefore, $\theta_I$ can be compared with the deviation angle, and $\theta_0$ with $\alpha_{sun}$:

$$
\begin{cases}
\theta_I<\Delta\alpha, & \theta_s\ge\alpha_{sun}+\Delta\alpha,\\
\theta_I<\Delta\beta,\ \theta_0>\alpha_{sun}, & \theta_s<\alpha_{sun}+\Delta\alpha.
\end{cases}
\tag{44}
$$

If the above conditions hold, the navigation star lies within the deviation angle range of the optimal navigation star and outside the sun avoidance angle range, and it can be put into the local navigation star database. After all the navigation stars in the star database have been checked, the local star database for determining the available optimal navigation star is obtained.
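The screening logic of Section 4.2.1 can be summarized in a short routine. The example below is a hypothetical sketch: the sun direction, avoidance angle, and candidate star are placeholder values, and the grouping of Equation (41) follows the reconstruction given above.

```python
import numpy as np

def a_system_vector(e, s):
    """A-system unit vector from elevation e and azimuth s, as in Eqs. (5) and (42)."""
    return np.array([np.cos(e) * np.cos(s), np.sin(e), np.cos(e) * np.sin(s)])

def in_local_database(star_dir, opt_dir, sun_dir, alpha_sun, d_alpha):
    """Eq. (44): screening test deciding whether one candidate star enters the
    local navigation star database (sun avoidance only, as in the paper)."""
    theta_s = np.arccos(np.clip(opt_dir @ sun_dir, -1.0, 1.0))    # Eq. (40)
    theta_i = np.arccos(np.clip(star_dir @ opt_dir, -1.0, 1.0))   # Eq. (43)
    theta_0 = np.arccos(np.clip(star_dir @ sun_dir, -1.0, 1.0))   # Eq. (43)
    if theta_s >= alpha_sun + d_alpha:
        return theta_i < d_alpha
    # Optimal star close to the sun: recompute the deviation angle, Eq. (41).
    d_beta = -alpha_sun / (alpha_sun + d_alpha) * theta_s + alpha_sun + d_alpha
    return theta_i < d_beta and theta_0 > alpha_sun

if __name__ == "__main__":
    # Placeholder geometry purely for illustration.
    opt = a_system_vector(np.deg2rad(37.6), np.deg2rad(-0.04))
    sun = a_system_vector(np.deg2rad(10.0), np.deg2rad(60.0))
    cand = a_system_vector(np.deg2rad(36.4), np.deg2rad(-3.6))
    keep = in_local_database(cand, opt, sun,
                             alpha_sun=np.deg2rad(20.0), d_alpha=np.deg2rad(7.0))
    print("candidate accepted:", keep)
```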
#### 4.2.2. Determining the Available Optimal Navigation Star

Considering that the navigation star with the smallest angular distance from the theoretical optimal navigation star is not necessarily the available optimal navigation star, this paper combines the minimum angular distance and the minimum accuracy change to determine the available optimal navigation star. Firstly, the angular distance between each star in the local navigation star database and the optimal navigation star is calculated, and the star with the smallest angular distance is taken as the first available navigation star. Secondly, the accuracy change of each star in the local navigation star database is estimated, and the star with the smallest change is taken as the second available navigation star. The calculation method for estimating the accuracy change caused by the navigation star deviation is as follows.

The gradient can be calculated from the partial derivatives of the composite guidance accuracy at the optimal navigation star with respect to the navigation star orientation:

$$
d_{\nabla}=\frac{\partial CEP}{\partial e_s}\,\mathbf{i}+\frac{\partial CEP}{\partial\sigma_s}\,\mathbf{j},
\tag{45}
$$

where CEP is the circular error probable and $\partial CEP/\partial e_s$ and $\partial CEP/\partial\sigma_s$ are the partial derivatives of the composite guidance CEP with respect to the elevation and azimuth angles at the optimal navigation star. The direction perpendicular to the gradient is the direction along which the composite guidance accuracy changes most slowly:

$$
d_{\nabla}^{\perp}=-\frac{\partial CEP}{\partial\sigma_s}\,\mathbf{i}+\frac{\partial CEP}{\partial e_s}\,\mathbf{j}.
\tag{46}
$$

Therefore, for any star in the local navigation star database, the accuracy change ΔCEP can be estimated according to

$$
\Delta CEP=\sqrt{\left(\frac{\partial CEP}{\partial\sigma_s}\Delta e_s\right)^{2}+\left(\frac{\partial CEP}{\partial e_s}\Delta\sigma_s\right)^{2}},
\tag{47}
$$

where ΔCEP is the estimated accuracy change between the navigation star and the optimal navigation star, and $\Delta e_s$ and $\Delta\sigma_s$ are the differences in elevation angle and azimuth angle between the star in the local navigation star database and the optimal navigation star. The star with the smallest ΔCEP is selected as the second available navigation star. Of the first and the second available navigation stars, the one with the smaller CEP is the available optimal navigation star.
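A compact version of the two-candidate selection in Section 4.2.2 is sketched below. It is an illustrative outline only: the partial derivatives of the CEP and the cep_of evaluation stand in for the full trajectory and error simulation used in the paper, and Equation (47) is applied as reconstructed above.

```python
import numpy as np

def angular_distance(e1, s1, e2, s2):
    """Great-circle angle between two directions given by elevation/azimuth."""
    v = lambda e, s: np.array([np.cos(e) * np.cos(s), np.sin(e), np.cos(e) * np.sin(s)])
    return np.arccos(np.clip(v(e1, s1) @ v(e2, s2), -1.0, 1.0))

def accuracy_change(dcep_de, dcep_ds, d_e, d_sigma):
    """Eq. (47): estimated CEP change for an offset (d_e, d_sigma) from the optimal star."""
    return np.hypot(dcep_ds * d_e, dcep_de * d_sigma)

def available_optimal_star(stars, opt_e, opt_s, dcep_de, dcep_ds, cep_of):
    """Section 4.2.2: pick between the minimum-angular-distance candidate and the
    minimum-accuracy-change candidate by comparing their composite-guidance CEP."""
    cand1 = min(stars, key=lambda es: angular_distance(es[0], es[1], opt_e, opt_s))
    cand2 = min(stars, key=lambda es: accuracy_change(dcep_de, dcep_ds,
                                                      es[0] - opt_e, es[1] - opt_s))
    return min((cand1, cand2), key=cep_of)

if __name__ == "__main__":
    # Placeholder local database, partial derivatives, and CEP evaluator.
    stars = [(np.deg2rad(36.4), np.deg2rad(-3.6)), (np.deg2rad(40.6), np.deg2rad(-3.5))]
    fake_cep = {stars[0]: 67.0, stars[1]: 83.0}
    best = available_optimal_star(stars, np.deg2rad(37.6), np.deg2rad(-0.04),
                                  dcep_de=120.0, dcep_ds=80.0,
                                  cep_of=lambda st: fake_cep[st])
    print("available optimal star (deg):", np.rad2deg(best))
```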
## 5. Simulation Results

This section mainly includes two parts: (1) determining the theoretical optimal navigation star and (2) determining the available optimal navigation star based on the star database. The simulations are primarily aimed at verifying the effectiveness of the proposed method.

In the simulation, two representative responsive launch vehicle trajectories are adopted. The launch time is 00:00:00, 1 January 2019 (UTC). The first whole flight time is 1300 s, and the second whole flight time is 2300 s. The initial position is (0°N, 0°E). The star sensor works beyond the atmosphere, and the star sensor installation angles are $(\varphi_0,\psi_0)=(20^{\circ},0^{\circ})$. The simulation parameters for the initial alignment error and the star sensor error are listed in Table 3. Using two trajectories allows the effectiveness of the proposed method to be verified more thoroughly.

Table 3 The value of each error in the simulation.

| Error type | Error symbol | Value (3σ) | Units |
| --- | --- | --- | --- |
| Initial orientation (alignment) error | ε0x | 100 | ″ |
| | ε0y | 300 | ″ |
| | ε0z | 100 | ″ |
| Star sensor measurement error | εξ, εη | 0 | ″ |
| Star sensor installation error | Δφ0, Δψ0 | 0 | ″ |

### 5.1. Determining the Optimal Navigation Star

This section evaluates the effectiveness of the algorithm in Section 3. In the simulation, the optimal navigation star is determined by three methods: the traversal method, the simplex evolutionary method, and the analytical method proposed in this paper.

The traversal method searches the full space with −90° ≤ es ≤ 90° and −180° < σs ≤ 180° in steps of 1°. Taking the 6000 km trajectory (the first trajectory) as an example, when the optimal navigation star is determined by the traversal method, the composite guidance accuracy under different measurement orientations is shown in Figure 6, and the composite guidance CEP contour is shown in Figure 7.

Figure 6 Composite guidance CEP variation diagram for 6000 km.

Figure 7 Composite guidance CEP contour map for 6000 km.

Figure 6 shows the composite guidance CEP variation diagram for 6000 km. It can be observed that there are two minimum points in the composite guidance accuracy variation diagram corresponding to the single-star tuning platform; that is, there are two optimal navigation stars.
The two optimal navigation star directions lie approximately on the same line through the launch point, that is, $e_s'=-e_s$ and $\sigma_s'=\sigma_s-\pi$, which is consistent with the conclusion of Equation (34). Besides, according to Figure 7, the composite guidance accuracy is approximately symmetric with respect to this line.

In the simplex evolutionary method, the initial vertex is $X_0=[e_{s0}\ \sigma_{s0}]^{T}=[20^{\circ}\ 0^{\circ}]^{T}$. The distance between vertices is taken as Δd = 10° to construct the initial simplex, and the iteration termination condition is taken as ε = 0.1 m. The change of the simplex optimal vertex during the iteration is shown in Figure 8, and the convergence error is shown in Figure 9.

Figure 8 Simplex optimal vertex iterative change diagram with a range of 6000 km.

Figure 9 Simplex convergent error variation diagram with a range of 6000 km.

It can be seen from the above figures that when the simplex evolutionary method is used to determine the optimal navigation star, the simplex converges quickly, and the algorithm has a large search range. At the same time, high accuracy can be achieved by controlling the convergence domain.

Table 4 lists the optimal navigation stars determined by the three methods and the required computation time. In the table, CEPINS is the pure inertial guidance accuracy and CEPCOM is the composite guidance accuracy.

Table 4 Optimal navigation star at different ranges.

| Range | Method | es (deg) | σs (deg) | CEPINS (m) | CEPCOM (m) | t (s) |
| --- | --- | --- | --- | --- | --- | --- |
| 6000 km | Traversing | 38 | 0 | 2628.89 | 6.78 | 2963.05 |
| | Simplex | 37.6011 | -0.0411 | 2628.89 | 0.013 | 12.09 |
| | Analysis | 37.6011 | -0.0404 | 2628.89 | 0.001 | 0.001 |
| 12000 km | Traversing | 30 | 0 | 3198.36 | 4.02 | 2985.53 |
| | Simplex | 30.0226 | -0.1316 | 3198.36 | 0.015 | 13.22 |
| | Analysis | 30.0264 | -0.1318 | 3198.36 | 0.001 | 0.003 |

The simulation results show that, under the condition of only considering the initial alignment error, the results obtained by the analytical method are consistent with those obtained by the traversal method and the simplex evolutionary method, which verifies the effectiveness of the proposed method. At the same time, the azimuth angle of the navigation star is about 0°, which indicates that the optimal navigation star is near the shooting plane when the star sensor is installed on the xoy plane of the platform.

According to the optimal navigation stars and the corresponding composite guidance accuracy, the accuracy of the traversal method is limited because it searches with a fixed step, while the analytical method and the simplex evolutionary method have no such limitation; thus, the optimal navigation star azimuth can be obtained with high accuracy. The computation times of the three methods are obtained on a PC. The traversal method takes about 50 minutes, while the analytical method is completed almost instantly. The composite guidance accuracy corresponding to the optimal navigation star obtained by the analytical method is 99.99% (from 6.78 m to 0.001 m) and 92.31% (from 0.013 m to 0.001 m) better than that obtained by the traversal method and the simplex evolutionary method, respectively. Moreover, the time consumption of the traversal method depends on the traversal step size: the smaller the step size, the more time is consumed, but the more accurately the optimal star azimuth is determined. Therefore, under the condition of significant initial error, the method proposed in this paper can be used to determine the optimal navigation star quickly.
Since only the initial orientation error is considered in the simulation, the stellar guidance can correct all its effects, and the corrected accuracy is close to 0 m. Of course, this cannot be achieved when all error factors are considered.

Taking the responsive launch vehicle with a range of 6000 km as an example, the influence of the star sensor installation error and measurement error on the optimal navigation star is analyzed. In the simulation, the simplex evolutionary method and the analytical method are used to determine the optimal navigation star. Table 5 lists the optimal navigation stars when the star sensor installation error is considered, and Table 6 lists the optimal navigation stars when the star sensor measurement error is considered.

Table 5 Optimal navigation stars with different star sensor installation errors.

| Δφ0, Δψ0 (″, 3σ) | Method | es (deg) | σs (deg) |
| --- | --- | --- | --- |
| 0 | Simplex | 37.6011 | -0.0411 |
| | Analysis | 37.6011 | -0.0404 |
| 10 | Simplex | 37.5998 | -0.0417 |
| | Analysis | 37.6011 | -0.0404 |
| 20 | Simplex | 37.5991 | -0.0423 |
| | Analysis | 37.6011 | -0.0404 |
| 30 | Simplex | 37.5979 | -0.0440 |
| | Analysis | 37.6011 | -0.0404 |

Table 6 Optimal navigation star with different star sensor measurement errors.

| εξ, εη (″, 3σ) | Method | es (deg) | σs (deg) |
| --- | --- | --- | --- |
| 0 | Simplex | 37.6011 | -0.0411 |
| | Analysis | 37.6011 | -0.0404 |
| 10 | Simplex | 37.6011 | -0.0404 |
| | Analysis | 37.6011 | -0.0404 |
| 20 | Simplex | 37.6012 | -0.0403 |
| | Analysis | 37.6011 | -0.0404 |
| 30 | Simplex | 37.6012 | -0.0404 |
| | Analysis | 37.6011 | -0.0404 |

The star sensor installation error has a certain impact on the optimal navigation star, but the impact is small: the changes in elevation angle and azimuth angle are both within 0.01°. Comparing Tables 5 and 6 shows that the star sensor measurement error has an even smaller impact on the optimal navigation star. Therefore, the method proposed in this paper can determine the optimal navigation star effectively.

### 5.2. Determining the Optimal Available Navigation Star

Stars are basically evenly distributed in the celestial coordinate system, and the earth shielding range in the star sensor field of view is basically fixed. Because of the physical realization of the inertial platform frame angles, there are restrictions on the azimuth and elevation angles: it is assumed that the azimuth angle is limited to ±45° and the elevation angle to ±60°. At the same time, it is assumed that the sun's avoidance angle is 20°, the moon's avoidance angle is 10°, the horizon's additional avoidance angle is 5°, and the large planets' avoidance angle is 2°. The arrow-borne navigation star database is shown in Figure 10, and the local navigation star database generated according to Section 4.2.1 is shown in Table 7.

Figure 10 Arrow-borne navigation star database.

Table 7 Local navigation star database.

| Number | es (deg) | σs (deg) | CEPINS (m) | CEPCOM (m) |
| --- | --- | --- | --- | --- |
| 1 | 35.0459 | -7.3824 | 2628.89 | 139.409 |
| 2 | 36.4093 | -3.5698 | 2628.89 | 67.152 |
| 3 | 40.6013 | -3.4893 | 2628.89 | 83.301 |
| 4 | 32.8418 | 0.9406 | 2628.89 | 79.633 |
| 5 | 41.2134 | 1.4235 | 2628.89 | 69.357 |
| 6 | 31.5162 | 2.5029 | 2628.89 | 108.593 |
| 7 | 38.1545 | 6.3915 | 2628.89 | 117.695 |
| 8 | 39.9243 | 7.9122 | 2628.89 | 150.393 |

Figure 10 shows the arrow-borne navigation star database generated for a launch time of January 1, 2019. Because of the constraints, the final number of available navigation stars is 292. Compared with Figure 10, there are only 8 alternative navigation stars in the local navigation star database, indicating that most stars in the arrow-borne navigation star database can be excluded based on the maximum deviation angle from the theoretical optimal navigation star, thus shortening the time needed to determine the available optimal navigation star.
Tables 8 and 9 list the available optimal navigation stars for the 6000 km and 12000 km launch vehicles.

Table 8 The available optimal navigation stars for the 6000 km launch vehicle.

| | Date | 2019/1/1 | 2019/3/10 | 2019/8/20 |
| --- | --- | --- | --- | --- |
| Theoretical optimal navigation star | es (deg) | 37.6011 | 37.6011 | 37.6011 |
| | σs (deg) | -0.0404 | -0.0404 | -0.0404 |
| | CEPINS (m) | 2628.89 | 2628.89 | 2628.89 |
| | CEPCOM (m) | 0.001 | 0.001 | 0.001 |
| Available optimal navigation star (traversal method) | e′s (deg) | 36.4093 | 36.5827 | 37.0740 |
| | σ′s (deg) | -3.5698 | 1.0344 | -2.9698 |
| | CEPINS (m) | 2628.89 | 2628.89 | 2628.89 |
| | CEPCOM (m) | 67.152 | 26.320 | 54.120 |
| Available optimal navigation star (proposed method) | e′s (deg) | 36.4093 | 36.5827 | 37.0740 |
| | σ′s (deg) | -3.5698 | 1.0344 | -2.9698 |
| | CEPINS (m) | 2628.89 | 2628.89 | 2628.89 |
| | CEPCOM (m) | 67.152 | 26.320 | 54.120 |

Table 9 The available optimal navigation stars for the 12000 km launch vehicle.

| | Date | 2019/1/1 | 2019/3/10 | 2019/8/20 |
| --- | --- | --- | --- | --- |
| Theoretical optimal navigation star | es (deg) | 30.0264 | 30.0264 | 30.0264 |
| | σs (deg) | -0.1318 | -0.1318 | -0.1318 |
| | CEPINS (m) | 3198.36 | 3198.36 | 3198.36 |
| | CEPCOM (m) | 0.001 | 0.001 | 0.001 |
| Available optimal navigation star (traversal method) | e′s (deg) | 30.5718 | 32.2593 | 25.3675 |
| | σ′s (deg) | 0.7825 | 0.3548 | -0.4691 |
| | CEPINS (m) | 3198.36 | 3198.36 | 3198.36 |
| | CEPCOM (m) | 26.332 | 33.224 | 64.516 |
| Available optimal navigation star (proposed method) | e′s (deg) | 30.5718 | 32.2593 | 25.3675 |
| | σ′s (deg) | 0.7825 | 0.3548 | -0.4691 |
| | CEPINS (m) | 3198.36 | 3198.36 | 3198.36 |
| | CEPCOM (m) | 26.332 | 33.224 | 64.516 |

Tables 8 and 9 compare the available optimal navigation stars determined by the proposed method and by the traversal method. The traversal method here refers to traversing all stars in the local navigation star database, and its results can be regarded as exact. The proposed method refers to the available optimal navigation star determined according to Section 4.2.2. The above results are the navigation stars selected from the real local navigation star database after considering the various star selection constraints. It can be observed from the tables that the navigation star determined by the proposed method is the same as that determined by the traversal method, which proves that the method in this paper is effective. At the same time, the angular distance between the theoretical optimal navigation star and the available optimal navigation star is within 5°, and the change in composite guidance accuracy is less than 70 m, indicating that the available optimal navigation star still has a good correction effect.
And the two optimal navigation stars azimuth is approximately on the same line as the emission point, that is, e′s=−es and σ′s=σs−π. This is consistent with the analysis conclusion of Equation (33). Besides, the composite guidance accuracy is approximately symmetric with respect to the line according to Figure 7.In the simplex evolutionary method, the initial vertex isX0=es0σs0T=20°0°T. Take the distance between vertices Δd=10° to construct the initial simplex, and the iteration termination condition is taken as ε=0.1m. The change of the simplex optimal vertex in the iteration process is shown in Figure 8, and the convergence error is shown in Figure 9.Figure 8 Simplex optimal vertex iterative change diagram with a range of 6000 km.Figure 9 Simplex convergent error variation diagram with a range of 6000 km.It can be seen from the above figures that when utilizing the simplex evolutionary method to determine the optimal navigation star, the simplex converges quickly in the solution process, and the algorithm has a large search range. At the same time, it can achieve high accuracy by controlling the convergence domain.Table4 represents the required time and the optimal navigation stars determined by the three methods. In the table, CEPINS is the pure inertial guidance accuracy, and CEPCOM is the composite guidance accuracy.Table 4 Optimal navigation star at different range. RangeMethodes (deg)σs (deg)CEPINS (m)CEPCOM (m)t (s)6000kmTraversing3802628.896.782963.05Simplex37.6011-0.04112628.890.01312.09Analysis37.6011-0.04042628.890.0010.00112000kmTraversing3003198.364.022985.53Simplex30.0226-0.13163198.360.01513.22Analysis30.0264-0.13183198.360.0010.003The simulation results show that under the condition of only considering the initial alignment error, the results obtained by analytical method are consistent with those obtained by traversal method and simplex evolutionary method, which verify the effectiveness of the proposed method. At the same time, the azimuth angle of the navigation star is about 0°, which indicates that the optimal navigation star is near the shooting plane when the star sensor is installed on thexoy plane of the platform.According to the results of the optimal navigation stars and the corresponding composite guidance accuracy, the accuracy of the traversal method is limited because the traversal method is searched with a fixed step, while the analytic method and simplex evolutionary method have no such limitation. Thus, the optimal navigation star azimuth can achieve high accuracy. When comparing the calculation time of the three methods, the results are calculated on PC. By contrast, the traversal calendar takes about 50 minutes, while the analytic method can be completed in a very short time. And the composite guidance accuracy corresponding to the optimal navigation star obtained by the analytical method is 99.99% (from 6.78 m to 0.001 m) and 92.31% (from 0.013 m to 0.001 m) higher than that obtained by the traversal method and the simplex evolutionary method. Moreover, the time-consuming of the traversal method is related to the traversal step size. The smaller the step size, the more time-consuming, but the more accurate the optimal star azimuth is determined. Therefore, under the condition of significant initial error, the method proposed in this paper can be used to help determine the optimal navigation star quickly. 
Since only the initial orientation error is considered in the simulation, the stellar guidance can correct all the effects of this error, and the corrected accuracy is close to 0 m. Of course, this cannot be achieved when all error factors are considered.

Taking the responsive launch vehicle with a range of 6000 km as an example, the influence of the star sensor installation error and measurement error on the optimal navigation star is analyzed. In the simulation, the simplex evolutionary method and the analytical method are utilized to determine the optimal navigation star. Table 5 lists the optimal navigation stars when considering the star sensor installation error, and Table 6 lists the optimal navigation stars when considering the star sensor measurement error.

Table 5 Optimal navigation stars with different star sensor installation errors.

| Δφ0, Δψ0 (″, 3σ) | Method | es (deg) | σs (deg) |
|---|---|---|---|
| 0 | Simplex | 37.6011 | -0.0411 |
| 0 | Analysis | 37.6011 | -0.0404 |
| 10 | Simplex | 37.5998 | -0.0417 |
| 10 | Analysis | 37.6011 | -0.0404 |
| 20 | Simplex | 37.5991 | -0.0423 |
| 20 | Analysis | 37.6011 | -0.0404 |
| 30 | Simplex | 37.5979 | -0.0440 |
| 30 | Analysis | 37.6011 | -0.0404 |

Table 6 Optimal navigation star with different star sensor measurement errors.

| εξ, εη (″, 3σ) | Method | es (deg) | σs (deg) |
|---|---|---|---|
| 0 | Simplex | 37.6011 | -0.0411 |
| 0 | Analysis | 37.6011 | -0.0404 |
| 10 | Simplex | 37.6011 | -0.0404 |
| 10 | Analysis | 37.6011 | -0.0404 |
| 20 | Simplex | 37.6012 | -0.0403 |
| 20 | Analysis | 37.6011 | -0.0404 |
| 30 | Simplex | 37.6012 | -0.0404 |
| 30 | Analysis | 37.6011 | -0.0404 |

The star sensor installation error has a certain but small impact on the optimal navigation star: the changes in both the elevation and azimuth angles are within 0.01°. Comparing Tables 5 and 6, the star sensor measurement error has an even smaller impact on the optimal navigation star. Therefore, the method proposed in this paper determines the optimal navigation star effectively.

## 5.2. Determining the Optimal Available Navigation Star

Stars are distributed roughly evenly in the celestial coordinate system, and the earth shielding range in the star sensor field of view is essentially fixed. Due to the physical limits of the inertial platform frame angles, there are restrictions on the admissible azimuth and elevation angles. It is assumed that the azimuth angle is limited to ±45° and the elevation angle to ±60°. At the same time, it is assumed that the sun's avoidance angle is 20°, the moon's avoidance angle is 10°, the horizon's additional avoidance angle is 5°, and the large planets' avoidance angle is 2°. The arrow-borne navigation star database is shown in Figure 10, and the local navigation star database generated according to Section 4.2.1 is shown in Table 7.

Figure 10 Arrow-borne navigation star database.

Table 7 Local navigation star database.

| Number | es (deg) | σs (deg) | CEPINS (m) | CEPCOM (m) |
|---|---|---|---|---|
| 1 | 35.0459 | -7.3824 | 2628.89 | 139.409 |
| 2 | 36.4093 | -3.5698 | 2628.89 | 67.152 |
| 3 | 40.6013 | -3.4893 | 2628.89 | 83.301 |
| 4 | 32.8418 | 0.9406 | 2628.89 | 79.633 |
| 5 | 41.2134 | 1.4235 | 2628.89 | 69.357 |
| 6 | 31.5162 | 2.5029 | 2628.89 | 108.593 |
| 7 | 38.1545 | 6.3915 | 2628.89 | 117.695 |
| 8 | 39.9243 | 7.9122 | 2628.89 | 150.393 |

Figure 10 shows the arrow-borne navigation star database generated for a launch time of January 1, 2019. After applying the constraints, 292 navigation stars remain available. Compared with Figure 10, the local navigation star database contains only 8 candidate navigation stars, indicating that most stars in the arrow-borne navigation star database can be excluded based on the maximum deviation angle from the theoretical optimal navigation star, thus shortening the time needed to determine the available optimal navigation star.
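As a quick plausibility check on the selection step reported next, the following sketch screens the eight candidates of Table 7 against the theoretical optimal star of the 6000 km trajectory. It assumes only the tabulated values; both the angular-distance criterion of Equation (37) and the smallest CEPCOM point to star No. 2, which matches the available optimal navigation star reported in Table 8 for 2019/1/1.

```python
import numpy as np

# Local navigation star database of Table 7: (e_s deg, sigma_s deg, CEP_COM m)
local_db = [
    (35.0459, -7.3824, 139.409), (36.4093, -3.5698,  67.152),
    (40.6013, -3.4893,  83.301), (32.8418,  0.9406,  79.633),
    (41.2134,  1.4235,  69.357), (31.5162,  2.5029, 108.593),
    (38.1545,  6.3915, 117.695), (39.9243,  7.9122, 150.393),
]
opt = (37.6011, -0.0404)   # theoretical optimal star for the 6000 km trajectory

def unit(e_deg, s_deg):
    """Direction unit vector from elevation/azimuth, as in Eq. (5)."""
    e, s = np.radians([e_deg, s_deg])
    return np.array([np.cos(e) * np.cos(s), np.sin(e), np.cos(e) * np.sin(s)])

# Angular distance of every candidate from the theoretical optimal star, Eq. (37)
angles = [np.degrees(np.arccos(np.clip(unit(*opt) @ unit(e, s), -1.0, 1.0)))
          for e, s, _ in local_db]

i_near = int(np.argmin(angles))                                    # closest candidate
i_best = min(range(len(local_db)), key=lambda i: local_db[i][2])   # smallest CEP_COM
print(local_db[i_near], local_db[i_best])
# Both criteria select star No. 2 (36.4093, -3.5698), consistent with Table 8 (2019/1/1).
```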
Tables 8 and 9 present the available optimal navigation stars for the 6000 km and 12000 km launch vehicles.

Table 8 The available optimal navigation stars for the 6000 km launch vehicle.

| | Date | 2019/1/1 | 2019/3/10 | 2019/8/20 |
|---|---|---|---|---|
| Theoretical optimal navigation star | es (deg) | 37.6011 | 37.6011 | 37.6011 |
| | σs (deg) | -0.0404 | -0.0404 | -0.0404 |
| | CEPINS (m) | 2628.89 | 2628.89 | 2628.89 |
| | CEPCOM (m) | 0.001 | 0.001 | 0.001 |
| Available optimal navigation star (traversal method) | e′s (deg) | 36.4093 | 36.5827 | 37.0740 |
| | σ′s (deg) | -3.5698 | 1.0344 | -2.9698 |
| | CEPINS (m) | 2628.89 | 2628.89 | 2628.89 |
| | CEPCOM (m) | 67.152 | 26.320 | 54.120 |
| Available optimal navigation star (proposed method) | e′s (deg) | 36.4093 | 36.5827 | 37.0740 |
| | σ′s (deg) | -3.5698 | 1.0344 | -2.9698 |
| | CEPINS (m) | 2628.89 | 2628.89 | 2628.89 |
| | CEPCOM (m) | 67.152 | 26.320 | 54.120 |

Table 9 The available optimal navigation stars for the 12000 km launch vehicle.

| | Date | 2019/1/1 | 2019/3/10 | 2019/8/20 |
|---|---|---|---|---|
| Theoretical optimal navigation star | es (deg) | 30.0264 | 30.0264 | 30.0264 |
| | σs (deg) | -0.1318 | -0.1318 | -0.1318 |
| | CEPINS (m) | 3198.36 | 3198.36 | 3198.36 |
| | CEPCOM (m) | 0.001 | 0.001 | 0.001 |
| Available optimal navigation star (traversal method) | e′s (deg) | 30.5718 | 32.2593 | 25.3675 |
| | σ′s (deg) | 0.7825 | 0.3548 | -0.4691 |
| | CEPINS (m) | 3198.36 | 3198.36 | 3198.36 |
| | CEPCOM (m) | 26.332 | 33.224 | 64.516 |
| Available optimal navigation star (proposed method) | e′s (deg) | 30.5718 | 32.2593 | 25.3675 |
| | σ′s (deg) | 0.7825 | 0.3548 | -0.4691 |
| | CEPINS (m) | 3198.36 | 3198.36 | 3198.36 |
| | CEPCOM (m) | 26.332 | 33.224 | 64.516 |

Tables 8 and 9 compare the proposed method with the traversal method for determining the available optimal navigation star. The traversal method here refers to traversing all stars in the local navigation star database, so its result can be regarded as exact. The proposed method refers to the available optimal navigation star determined according to Section 4.2.2. The results above are navigation stars selected from the real local navigation star database after considering the various star selection constraints. It can be observed from the tables that the navigation star determined by the proposed method is the same as that determined by the traversal method, which proves that the method in this paper is effective. At the same time, the angular distance between the theoretical optimal navigation star and the available optimal navigation star is within 5°, and the change in composite guidance accuracy is less than 70 m, indicating that the available optimal navigation star still has a good correction effect.

## 6. Conclusion

Applying the single-star inertial-stellar guidance system to responsive launch vehicles requires determining the optimal navigation star quickly. However, current selection schemes determine the star numerically, which increases the preparation time before launch. This paper proposes a fast algorithm that deduces the optimal navigation star analytically, based on the equivalent information compression theory, under the condition of significant initial error. The analytical solution is far less time-consuming than the numerical solution and achieves the same accuracy, or even higher.

On the basis of determining the optimal navigation star, the available optimal navigation star should be further determined in combination with the arrow-borne navigation star database. There are certain deviations between the optimal navigation star and the navigation stars in the database. Therefore, the deviation angles between them, without considering constraints, are analyzed first.
Based on the deviation angle, candidate navigation stars are selected into the local navigation star database. Then, the available optimal navigation star can be determined according to certain criteria. The algorithm proposed in this paper can quickly determine both the optimal navigation star and the available optimal navigation star.

---
*Source: 1000865-2022-04-19.xml*
# A Fast Algorithm for Determining the Optimal Navigation Star for Responsive Launch Vehicles

**Authors:** Yi Zhao; Hongbo Zhang; Pengfei Li; Guojian Tang
**Journal:** International Journal of Aerospace Engineering (2022)
**Category:** Engineering & Technology
**Publisher:** Hindawi
**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)
**DOI:** 10.1155/2022/1000865
--- ## Abstract The platform inertial-stellar composite guidance is a composite guidance method supplemented by stellar correction on the basis of inertial navigation, which can effectively improve the accuracy of responsive launch vehicles. In order to solve the problem of rapid determining the optimal navigation star in the system, this paper proposes an algorithm based on the equivalent information compression theory. At first, this paper explains why the single-star scheme can achieve the same accuracy as the dual-star scheme. At the same time, the analytical expression of the optimal navigation star with significant initial error is derived. In addition, the available optimal navigation star determination strategy is also designed according to the arrow-borne navigation star database. The proposed algorithm is evaluated by two representative responsive launch vehicle trajectory simulations. The simulation results demonstrate that the proposed algorithm can determine the optimal navigation star quickly, which greatly shorten the preparation time before the rapid launch of vehicles and improve the composite guidance accuracy. --- ## Body ## 1. Introduction Inertial-stellar composite guidance is a composite guidance method based on inertial guidance supplemented by stellar guidance. It utilizes the inertial space azimuth datum provided by the star to calibrate the error angle between the platform coordinate system and the launch inertial coordinate system and corrects the impact point deviation caused by the platform pointing error [1]. Inertial-stellar composite guidance system corrects the drift error of inertial platform according to the star sensor information, which can not only improve the guidance accuracy and rapid launch ability [2] but also reduce the cost. Moreover, the motion parameters of spacecraft in space can be determined [3–5], and it has strong environmental adaptability.Inertial-stellar guidance is essentially a problem of determining attitude through vector observation. This problem was first proposed by Wahba [6], and various attitude determination algorithms were developed, such as TRIAD [7], QUEST [8, 9], SVD [10], FOAM [11], Euler-q [12], and fast linear attitude estimator method [13–15]. In order to solve the case that there are a large number of outliers, Yang and Carlone formulated the Wahba problem by truncated least squares [16]. Ghadiri et al. [17] proposed a robust multi-objective optimization method to overcome the static attitude determination with bounded uncertainty. These algorithms need at least two vector information to calculate the attitude. However, in some cases, long-term observation of one vector is enough [18]. Reference [19] proposed an attitude determination algorithm based on the minimum squares sum of image point coordinate residuals. The algorithm can still determine the attitude when only one star is observed. Reference [20] derived the attitude analytical solution when only one sensor is used for observation. The analytical solution can be expressed by the combination of two limiting quaternions, and the covariance and singularity analyses were carried out. However, it did not determine the optimal attitude solution. Similarly, according to the number of observation vectors, the inertial-stellar composite guidance can also be divided into single vector observation and double vectors observation, that is, single-star scheme and double-star scheme. 
For the platform inertial navigation system, the star sensor is usually fixedly installed on the platform. Because the direction of the platform in the inertial space cannot be adjusted after launch, the double-star scheme needs to install two star sensors on the platform, which will greatly complicate the structure. It is found that observing the specific direction star, the single-star scheme can achieve the same accuracy as the double-star scheme [21, 22]. Zhang et al. have proved it theoretically [23]. As it is known, the only practical application is the single-star scheme, such as the American “Trident” submarine long-range ballistic missile. However, the single-star scheme needs to determine the optimal navigation star before the vehicle launch. At present, the optimal navigation star is determined by numerical method [24, 25], which increases the preparation time and limits the wide application.Motivated by the work of Zhang et al., this paper proposes a fast algorithm to determine the optimal navigation star for responsive launch vehicles. Firstly, the relationship equations between the initial error and the impact point deviation and the star sensor measurement are established. Then, our algorithm exploits the equivalent information compression theory [23] to explain why the single-star scheme can achieve the accuracy as the double-star scheme and deduces the optimal navigation star under the condition of significant initial error. The deduced analytical solution can greatly shorten the prelaunch preparation time. On this basis, the local navigation star database is determined according to the deviation angle, and the available optimal navigation star can be determined.The structure of this paper is as follows. Section2 presents the definitions of various coordinate system and the derivations of inertial platform system and star sensor model. Section 3 shows the analytical expression of the optimal navigation star. In Section 4, the available optimal navigation star is determined based on the arrow-borne navigation star database. The simulation results and conclusions are given in Section 5 and Section 6. The contribution of this paper is to provide an analytical solution of optimal navigation star to shorten the prelaunch preparation time and enhance the performance for responsive launch vehicles. ## 2. Inertial Platform System and Star Sensor Modeling ### 2.1. Definitions of Various Coordinate System #### 2.1.1. Geocentric Inertial Coordinate SystemoE−xIyIzI The coordinate system originoE is the earth centroid, and the basic plane is the J2000 earth equatorial plane. The oExI axis points from the earth centroid to the J2000 mean equinox in the basic plane. The oEzI axis points to the north pole along the normal of the basic plane. The oEyI axis and the other two axes constitute the right hand system. This coordinate system is abbreviated as the i-system. #### 2.1.2. Launch Coordinate Systemo−xyz The system mainly describes the motion of responsive launch vehicle relative to the earth. The launch coordinate system is fixedly connected with the earth, and the origin is taken as the launch pointo. In the system, the ox axis points to the launch aiming direction in the launch horizontal plane, the oy axis points upward perpendicular to the launch point horizontal plane, and the oz axis is perpendicular to the xoy plane. The axes ox, oy, and oz form the right hand coordinate system. This coordinate system is abbreviated as the g-system (Figure 1).Figure 1 Launch coordinate system. 
#### 2.1.3. Launch Inertial Coordinate SystemoA−xAyAzA The launch inertial coordinate system coincides with the launch coordinate system at the launch time. But after launching the vehicle, the origin and the direction of each axis remain stationary in the inertial space. The coordinate system is used to establish the vehicle motion equation in inertial space. This coordinate system is abbreviated as theA-system. #### 2.1.4. Ideal Inertial Platform Coordinate SystemoP′−xP′yP′zP′ The coordinate system originop′ is located at the platform datum, and the coordinate axis is defined by the platform frame axis or the gyro-sensitive axis. After pre-launch alignment and leveling, each coordinate axis shall be parallel to each coordinate axis of the launch inertial coordinate system. This coordinate system is abbreviated as the p′-system. #### 2.1.5. Inertial Platform Coordinate SystemoP−xPyPzP Due to the platform misalignment angle, there is a deviation between the inertial platform coordinate system and the ideal inertial platform coordinate system. This coordinate system is abbreviated as thep-system. #### 2.1.6. Star Sensor Coordinate Systemos−xsyszs The coordinate system mainly describes the star sensor measurement. In the system, the coordinate system originos is at the centre of the star sensor imaging device (charge couple device, complementary metal oxide semiconductor, etc.). The osxs axis is consistent with the axis of the optical lens, the osys axis is the vertical to the pixel readout direction, and the oszs axis is the horizontal to the pixel readout direction. The ysoszs plane is consistent with the imaging device plane. The transformation matrix between the star sensor coordinate system and the vehicle body coordinate system is determined by the star sensor installation angle. This coordinate system is abbreviated as the s-system. ### 2.2. Relationship between Impact Point Deviation and Platform Misalignment Angle The platform misalignment angle represents the inertial reference deviation, that is, the error angle between the inertial platform and the launch inertial coordinate system. It is mainly caused by various initial errors and inertial navigation errors and affects the landing point accuracy. Although the platform misalignment angle is affected by many factors, the initial error accounts for the main part under certain conditions. For the rapid maneuvering launch vehicle, the accuracy of pre-launch orientation and alignment may not be very high, which leads to the significant portion of the initial error in the platform misalignment angle. Therefore, this paper mainly studies the determination of the optimal navigation star under the significant initial error condition.The platform inertial system and the launch inertial system can be coincident with the help of the platform initial alignment. The initial alignment error will be caused due to the equipment inherent error, the external interference influence in the alignment process, and the method error. And the platform alignment error around they-axis will be caused owing to the initial orientation error during launch. Thus, the orientation error can be considered together with the initial alignment error.The initial alignment and orientation errors can be expressed by the three axis misalignment angles between thep′-system and the A-system, which are defined asε0xε0yε0zT. And there are two parts in ε0y: orientation error and aiming error. 
It is assumed that the adjustment platform adopts the method of yaw first and then pitch; there is (1)αxαyαz=cosφrcosψrsinφr−cosφrsinψr−sinφrcosψrcosφrsinφrsinψrsinψr0cosψrε0xε0yε0z=CAP′ε0xε0yε0z,whereαxαyαzT are misalignment angles caused by initial alignment and orientation errors, ψr and φr are the rotation angles around the y-axis and z-axis, respectively, and CAP′ is the transformation matrix from the A-system to the p′- system.The inertial guidance accuracy meets the following relationship with the initial alignment error:(2)ΔLΔH=nL1nL2nL3nH1nH2nH3ε0xε0yε0z.In which,nL1, nL2, and nL3 are the partial derivatives of the longitudinal impact point deviation to the initial errors in three directions, respectively. nH1, nH2, and nH3 are the partial derivatives of the lateral impact point deviation to the initial errors in three directions, respectively.It can be obtained by combining Equation (1) and Equation (2). (3)ΔLΔH=q11q12q13q21q22q23αxαyαz=q1Tq2Tαxαyαz.In which,(4)q11=nL1cosφrcosψr+nL2sinφr−nL3cosφrsinψr,q12=−nL1sinφrcosψr+nL2cosφr+nL3sinφrsinψr,q13=nL1sinψr+nL3cosψr,q21=nH1cosφrcosψr+nH2sinφr−nH3cosφrsinψr,q22=−nH1sinφrcosψr+nH2cosφr+nH3sinφrsinψr,q23=nH1sinψr+nH3cosψr. ### 2.3. Acquisition of Star Sensor Measurement The elevation and azimuth angle of the optimal navigation star in theA-system are defined as es and σs, respectively. Thus, the stellar direction unit vector in the A-system can be expressed as (5)SA=cosescosσssinescosessinσsT.In thes-system, the osxs axis is the optical axis. The angle between the optical axis and the stellar vector is very small, and its directional cosine is approximately 1. The osys and oszs are the output axes. The stellar vector representation in the star sensor coordinate system is shown in Figure 2. It is assumed that the star sensor outputs are ξ and η; the stellar vector can be expressed as (6)SS=1−ξ−ηT.Figure 2 Representation of the stellar vector in the star sensor coordinate system.The ideal star sensor output should beSs′=100T; then, there is the following equation according to the coordinate transformation relationship: (7)SS′=CPSCAP′SA,whereCPS is the transformation matrix from the p-system to the s-system and CAP′ is the transformation matrix from the A-system to the p′-system. According to the stellar vector representation in the A-system and the s-system, the following equation can be obtained by the transformation matrix between different coordinate systems. (8)SS=CPSCP′PCAP′SA,whereCP′P is the transformation matrix from the p′-system to the p-system, which can be expressed as (9)CP′P=1−αzαyαz1−αx−αyαx1.The stellar vector representation in thep-system is defined as SP, and there is (10)SP=CSPSS.According to Equation (7), the stellar vector representation in the p′-system can be obtained as follows: (11)SP′=CSPSS′=CAP′SA.Then, the following equation can be obtained from Equations (10) and (11). (12)ΔSP=SP−SP′=CSPSS−SS′.AndΔSp can also be represented as (13)ΔSP=SP′⋅a=CSPSS′⋅a.It can be obtained from Equations (12) and (13). (14)Ss−Ss′=CPSCSPSS′⋅a.Therefore, the star sensor measurement equation can be expressed as(15)ξη=−sinψ00cosψ0sinφ0cosψ0−cosφ0sinφ0sinψ0αxαyαz=h1Th2Tαxαyαz,whereφ0 and ψ0 are the star sensor installation angles.
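As an illustration of the measurement model above, the following minimal Python sketch evaluates the stellar direction vector of Equation (5) and the linearised star sensor output of Equation (15). The numerical misalignment and installation angles are example values, not taken from the paper.

```python
import numpy as np

def star_unit_vector_A(e_s_deg, sigma_s_deg):
    """Stellar direction unit vector in the launch inertial (A) system, Eq. (5)."""
    e, s = np.radians([e_s_deg, sigma_s_deg])
    return np.array([np.cos(e) * np.cos(s), np.sin(e), np.cos(e) * np.sin(s)])

def star_sensor_outputs(alpha, phi0, psi0):
    """Linearised star sensor measurement [xi, eta] = H @ alpha, Eq. (15).

    alpha : platform misalignment angles (alpha_x, alpha_y, alpha_z) in rad
    phi0, psi0 : star sensor installation angles in rad
    """
    H = np.array([
        [-np.sin(psi0),                0.0,           np.cos(psi0)],
        [ np.sin(phi0) * np.cos(psi0), -np.cos(phi0), np.sin(phi0) * np.sin(psi0)],
    ])
    return H @ np.asarray(alpha)

# Example: 20 arcsec misalignment about each axis, sensor mounted in the xoy plane (psi0 = 0)
S_A = star_unit_vector_A(37.6011, -0.0404)
alpha = np.radians(np.full(3, 20.0 / 3600.0))
xi, eta = star_sensor_outputs(alpha, phi0=np.radians(37.6), psi0=0.0)
```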
## 3. Theoretical Optimal Navigation Star Determination Method For the platform inertial-stellar composite guidance scheme, the single-star scheme for measuring a special navigation star can achieve the same accuracy as the double-star scheme for measuring two stars. This special star is called the optimal navigation star. In terms of the difficulty and cost of realization, the single-star scheme is definitely better than the double-star scheme. Therefore, the single-star scheme is adopted in the practical engineering, which requires the determination of the optimal navigation star.In this section, the equivalent information compression theory is utilized to explain why the single-star scheme can achieve the same accuracy as the double-star scheme firstly. Then, the optimal navigation star is further determined based on the principle. Since it is not combined with the navigation star in the star library, it is also called the theoretical optimal navigation star. ### 3.1. Equivalent Information Compression Theory The impact point deviation and platform misalignment angle can be expressed in the matrix form(16)p=q⋅a,wherep=ΔLΔHT; q=q1q2T; and a=αxαyαzT. It can be seen from Equation (16) that the rank of q is 2, so there is information compression in the mapping from a to p. It is worth noting that a cannot be uniquely determined by p, which indicates that Equation (16) has numerous solutions. Although there are countless sets of solutions in Equation (16), there is a special solution a0, which belongs to the subspace qs=spanq1q2 formed by each row of vectors. Therefore, a0 can be expressed as (17)a0=α10q1+α20q2=qT⋅X.Substitute Equation (17) into Equation (16), and there is (18)p=qqTX.From Equation (17) and Equation (18), we can get (19)a0=qTqqT−1p.It can be seen from the above equation thata0 and p correspond to each other one by one. If the inner product of two column vectors is defined as a⋅b=aT⋅b, then Equation (16) can be expressed as (20)p=q1⋅aq2⋅aT,whereqi⋅a reflects the projection of a in the qi direction. q1 and q2 are linearly independent; therefore, p=q⋅a reflects the projection as of a on space qs, and the projection information as⊥ of a on the orthogonal complement qs⊥ of qs is lost. Since qs is a complete subspace on Hilbert space Rn, there is (21)Rn=qs+qs⊥.According to the projection theorem, we can get(22)a=as+as⊥.It can be seen from the relationship between the impact point deviation and the platform misalignment angle thatq is not full rank. So only the information as in the subspace can be obtained through the impact point deviation, and the information as⊥ in the orthogonal complement cannot be obtained.The star sensor measurement equation can also be expressed in the matrix form(23)Z=h⋅a.Assuming that another set of bases ofqs is h1h2, it can be seen from the above analysis that Z also reflects all the information projected by a on qs, which can be expressed as (24)a0=hThhT−1Z.Substitute Equation (24) into Equation (16), and we can get (25)p=q⋅hThhT−1Z.Therefore, from the perspective of information compression,q and h are equal compression maps; that is, the impact point deviation p can be uniquely determined by the single-star observation Z.The impact point deviation is only affected by the projectionas of the misalignment angle a on the subspace Qs=spanq1q2.
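The statement above can be checked numerically: if the measurement rows h1, h2 span the same subspace as q1, q2, the impact point deviation p is recovered exactly from the single-star observation Z via Equation (25). The sketch below uses randomly generated sensitivity rows purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sensitivity rows q1, q2 (impact-point partials) and a misalignment a
q = rng.normal(size=(2, 3))          # p = q @ a, Eq. (16)
a = rng.normal(size=3)
p = q @ a

# Choose h1, h2 spanning the same row space as q (here an invertible mixing of q's rows),
# i.e. an "equal information compression" of a
T = np.array([[1.0, 2.0], [-0.5, 1.0]])
h = T @ q
Z = h @ a                            # single-star observation, Eq. (23)

# Eq. (25): p is uniquely recovered from Z alone
p_from_Z = q @ h.T @ np.linalg.inv(h @ h.T) @ Z
assert np.allclose(p, p_from_Z)
```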
Therefore, according to Equation (25), it is only necessary to select h1 and h2, so that h and q are equal information compression. Then, the observation information Z contains all the useful information. The schematic diagram of determining the optimal navigation star is shown in Figure 3.Figure 3 Schematic diagram of determining the optimal navigation star.In the figure, all the information of misalignment anglea is composed of as and as⊥, but only as affects the impact point deviation. If h and q are equal information compression maps, the single-star scheme can measure all the information of as. Although the double-star scheme can measure all the information of a, only the as part is used in the correction, and as⊥ belongs to the useless information, so it has the same accuracy as the single-star scheme. In more popular terms, there are only two indicators ΔL and ΔH describing the impact point deviation, which are the reflection of part of the misalignment angle a. When observing a single star, two measurements ξ and η can be obtained, which are also the reflection of part of the misalignment angle a. By selecting the optimal navigation star, ξ and η can include all the information of misalignment angle a contained in ΔL and ΔH. Therefore, the single-star scheme can achieve the same accuracy as the dual-star scheme. ### 3.2. Determining the Optimal Navigation Star According to the equivalent information compression theory, the optimal navigation star should satisfy(26)h1×h2=q1×q2q1×q2orh1×h2=−q1×q2q1×q2.For the left side of the above equation, it can be obtained according to the Equation (15). (27)h1×h2=cosφ0cosψ0sinφ0cosφ0sinψ0T.It is assumed that the star sensor is installed on thexoy plane of the platform, and the platform is adjusted to aim at the navigation star by first yaw and then pitch. By substitutingψ0=0∘ into Equation (27), we can get (28)h1×h2=cosφ0sinφ00T.Define(29)Qc=Qc1Qc2Qc3=q1×q2,where,(30)Qc1=nH1nL3−nH3nL1sinφr+nH3nL2−nH2nL3cosφrcosψr+nH1nL2−nH2nL1cosφrsinψr,Qc2=nH1nL3−nH3nL1cosφr−nH3nL2−nH2nL3sinφrcosψr−nH1nL2−nH2nL1sinφrsinψr,Qc3=−nH1nL2−nH2nL1cosψr+nH3nL2−nH2nL3sinψr.According to Equations (26), (27), and (28), we can get (31)ψr=tan−1nH1nL2−nH2nL1nH3nL2−nH2nL3,φr=−tan−1nH1nL3−nH3nL1nH2nL3−nH3nL2cosψr+nH2nL1−nH1nL2sinψr−φ0.The optimal navigation star and the rotation angle satisfy the following relationship:(32)σs=−ψr,es=φr+φ0.Therefore, the orientation of the optimal navigation star can be expressed as(33)σs=−tan−1nH1nL2−nH2nL1nH3nL2−nH2nL3,es=−tan−1nH1nL3−nH3nL1nH2nL3−nH3nL2cosψr+nH2nL1−nH1nL2sinψr.According to Equation (26), there is another solution for the orientation of the optimal navigation star. (34)σ′s=σs−π,e′s=−es.
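A direct transcription of Equations (31)–(33) is sketched below in Python. The impact-point partial derivatives nL1…nH3 come from the trajectory analysis and are therefore inputs here (the example values at the bottom are purely hypothetical), and arctan2 is used in place of tan−1 for quadrant safety. The mirrored solution of Equation (34) follows from the returned pair as (−es, σs − 180°).

```python
import numpy as np

def optimal_navigation_star(nL, nH, phi0=0.0):
    """Analytic optimal navigation star (e_s, sigma_s) in degrees, Eqs. (31)-(33).

    nL, nH : partials of the longitudinal / lateral impact-point deviation with respect
             to the three initial error angles, i.e. (nL1, nL2, nL3) and (nH1, nH2, nH3)
    phi0   : star sensor installation elevation angle in rad (cancels out of e_s)
    """
    nL1, nL2, nL3 = nL
    nH1, nH2, nH3 = nH
    psi_r = np.arctan2(nH1 * nL2 - nH2 * nL1, nH3 * nL2 - nH2 * nL3)        # Eq. (31)
    phi_r = -np.arctan2(nH1 * nL3 - nH3 * nL1,
                        (nH2 * nL3 - nH3 * nL2) * np.cos(psi_r)
                        + (nH2 * nL1 - nH1 * nL2) * np.sin(psi_r)) - phi0    # Eq. (31)
    sigma_s = -psi_r                                                         # Eq. (32)
    e_s = phi_r + phi0
    return np.degrees(e_s), np.degrees(sigma_s)

# Hypothetical partials for illustration only (not values from the paper)
e_s, sigma_s = optimal_navigation_star(nL=(1.2e6, 3.5e5, -8.0e4),
                                       nH=(2.0e4, -6.0e5, 9.0e5),
                                       phi0=np.radians(37.6))
```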
## 4. Determining the Available Optimal Navigation Star Based on the Star Database ### 4.1. Angle Analysis of the Deviation from the Optimal Navigation Star For the single-star platform inertial-stellar composite guidance scheme, only observing the optimal navigation star can achieve the same accuracy as the double star guidance scheme. However, in star database, there are not necessarily stars in the optimal navigation star direction. And only one real star can be selected as the navigation star in the star library according to a certain principle. This star is called the available optimal navigation star. In this section, the angle that the available navigation star deviates from the optimal navigation star is analyzed.In thei-system, several groups of optimal navigation stars are randomly generated, in which the elevation angles are evenly distributed within −90°,90°, and azimuth angles are evenly distributed within −180°,180°. Each combination of elevation angles and azimuth angles eNi,σNi represents a group of possible optimal navigation star, and its direction vector in the i-system is (35)VNi=coseNicosσNicoseNisinσNisineNiT.For any star above 5.5 mag in the star database, its elevation angle and azimuth angle areeSj,σSj; then, the direction vector in the i-system can be expressed as (36)VSj=coseSjcosσSjcoseSjsinσSjsineSjT.The angle between the optimal navigation star and the available navigation star can be calculated according to the following equation:(37)αij=arccosVNi⋅VSj.By traversingj, the minimum angular distance between the optimal navigation star and the available navigation star can be obtained.100000 samples are sampled, and the results are shown in Figures4 and 5.Figure 4 Statistical histogram of the angles that the available navigation stars deviate from the optimal navigation star.Figure 5 Probability density histogram of the angles that the available navigation stars deviate from the optimal navigation star.Figures4 and 5, respectively, show the statistical histogram and probability density histogram of the angles that the available navigation stars deviate from the optimal navigation star. Here, each straight bar represents 0.1°, and the sum of all the sampling times is 100000. In Figure 4, the angular deviations are between 0 and 6° mostly, which mainly concentrated in 1°~3° and relatively few more than 5° or less than 1°.
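The sampling experiment described above can be approximated with the short Monte Carlo sketch below. The catalogue is mocked with about 3000 directions spread uniformly over the sphere, standing in for the real set of stars brighter than 5.5 mag, so the resulting statistics should only be compared with Table 1 in order of magnitude.

```python
import numpy as np

rng = np.random.default_rng(1)

def unit(e, s):
    """Direction vector of Eqs. (35)/(36) from elevation e and azimuth s (rad)."""
    return np.stack([np.cos(e) * np.cos(s), np.cos(e) * np.sin(s), np.sin(e)], axis=-1)

# Mock catalogue (~3000 stars brighter than 5.5 mag, placed uniformly on the sphere here)
cat = unit(np.arcsin(rng.uniform(-1.0, 1.0, 3000)), rng.uniform(-np.pi, np.pi, 3000))

# Candidate optimal directions: elevation uniform in [-90, 90] deg, azimuth in [-180, 180] deg
n = 100_000
opt = unit(rng.uniform(-np.pi / 2, np.pi / 2, n), rng.uniform(-np.pi, np.pi, n))

# Minimum angular distance to the nearest catalogue star, Eq. (37), computed in batches
min_dev = np.empty(n)
for i in range(0, n, 2000):
    cosang = np.clip(opt[i:i + 2000] @ cat.T, -1.0, 1.0)
    min_dev[i:i + 2000] = np.degrees(np.arccos(cosang.max(axis=1)))

print(min_dev.max(), min_dev.mean())   # same order of magnitude as Table 1 (7.42 deg, 2.09 deg)
```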
Compared with Figure 4, the shapes of the statistical histogram and the probability density histogram are basically the same. Figure 5 also shows the probability density function of the corresponding normal distribution; however, it is obvious that the empirical distribution is not quite consistent with the normal distribution.

Tables 1 and 2 provide the corresponding numerical statistical results. It can be seen from Table 1 that the maximum deviation angle is 7.4221° and the mean deviation angle is 2.0949°. More detailed statistics of the deviation of the available navigation star from the optimal navigation star are given in Table 2, which counts the single and cumulative probabilities of the deviation angle. The interval 2°>α≥1° is the most populated, accounting for 33.558%, while deviation angles greater than 7° account for only 0.014%.

Table 1 The basic analysis results of the available navigation star deviation from the optimal navigation star.

| Event | Maximum | Minimum | Mean | Standard deviation | 3σ range |
|---|---|---|---|---|---|
| Value (deg) | 7.4221 | 0.0041 | 2.0949 | 1.1139 | [-1.2467, 5.4365] |

Table 2 Statistical analysis results of the available navigation star deviation from the optimal navigation star.

| Deviation angle | Number | Probability (%) | Cumulative number | Cumulative probability (%) |
|---|---|---|---|---|
| 1°>α≥0° | 15449 | 15.449 | 15449 | 15.449 |
| 2°>α≥1° | 33558 | 33.558 | 49007 | 49.007 |
| 3°>α≥2° | 29392 | 29.392 | 78399 | 78.399 |
| 4°>α≥3° | 14842 | 14.842 | 93241 | 93.241 |
| 5°>α≥4° | 5216 | 5.216 | 98457 | 98.457 |
| 6°>α≥5° | 1289 | 1.289 | 99746 | 99.746 |
| 7°>α≥6° | 240 | 0.240 | 99986 | 99.986 |
| α≥7° | 14 | 0.014 | 100000 | 100.000 |

Therefore, if the limiting magnitude of the star sensor is 5.5 mag, an available navigation star can always be found within an angular distance of 7° from the optimal navigation star. In the above analysis, constraints such as occlusion by the sun, moon, and earth have not been taken into account; once they are considered, the deviation angle can be much larger.

### 4.2. Determining the Available Optimal Navigation Star Based on the Arrow-Borne Navigation Star Database

In practical application, the navigation star must be selected from the arrow-borne navigation star database. According to the above analysis, a navigation star can be found with probability 100% within 7° of the theoretical optimal navigation star. Therefore, a method for determining the available optimal navigation star based on a local navigation star database is proposed to improve the efficiency of star selection.

#### 4.2.1. Determining the Local Navigation Star Database

Strong light sources should be avoided when determining the local navigation star database (this paper takes avoiding the sun as an example).
The right ascension and declination of the sun obtained from the ephemeris table are defined asαsun and δsun, and the unit sun direction vector in the i-system can be expressed as (38)iI=cosδsuncosαsuncosδsunsinαsunsinαsun,whereiI is the unit sun direction vector in the i-system.Then, the unit sun direction vector in theA-system can be further obtained: (39)isun=CIA⋅iI,whereisun is the unit sun direction vector in the A-system and CIA is the transformation matrix from the i-system to the A-system.Therefore, the angular distanceθs between the theoretical optimal navigation star and the sun can be calculated as (40)θs=arccosSI⋅isun.Ifθs is less than the sum of the solar avoidance angle αsun and deviation angle Δα, the deviation angle can be recalculated according to the following equation: (41)Δβ=−αsunαsun+Δαθs+αsun+Δα,whereΔβ is the recalculated deviation angle.The navigation star orientation in the star database is defined ase0σ0, and its unit vector in the A-system can be expressed as (42)i0=cose0cosσ0sine0cose0sinσ0.Then, the angular distanceθI between the navigation star and the theoretical optical navigation star and the angular distance θ0 between the navigation star and the sun can be calculated, respectively. (43)θI=arccosi0⋅SI,θ0=arccosi0⋅isun.Therefore, the value ofθI and the deviation angle can be compared, so as θ0 and αsun. (44)θI<Δα,θs≥αsun+Δα,θI<Δβ,θs<αsun+Δα,θ0>αsun.If the above equation is valid, it means that the navigation star is within the deviation angle range of the optimal navigator star and outside the sun avoidance angle range. And the navigation star can be put into the local navigation star database. After calculating all the navigation stars in the star database, the local star database for determining the available optimal navigation star can be obtained. #### 4.2.2. Determining the Available Optical Navigation Star Considering that the navigation star with the smallest angular distance from the theoretical optimal navigation star is not necessarily the available optimal navigation star, this paper utilizes the combination of minimum angular distance and minimum accuracy change to determine the available optimal navigation star. Firstly, the angular distance between the stars in the local navigation star database and the optimal navigation star is calculated, and the one with the smallest angular distance is the first available navigation star. Secondly, estimate the accuracy variation of any star in the local navigation star database, and the smallest is the second available navigation star. The calculation method for estimating the accuracy change caused by navigation star deviation is as follows.The gradient can be calculated from the partial derivative of the composite guidance accuracy at the optimal navigation star varying with the navigation star orientation.(45)d∇=∂CEP∂esi+∂CEP∂σsj,whereCEP is the circular error probable and ∂CEP/∂es and ∂CEP/∂σs are the partial derivative of composite guidance CEP to elevation and azimuth angle at the optimal navigation star. The direction perpendicular to the gradient is the direction with the slowest change in the composite guidance accuracy. (46)d∇⊥=−∂CEP∂σsi+∂CEP∂esj.Therefore, for any star in the local navigation star database, the accuracy change ∆CEP can be estimated according to(47)ΔCEP=∂CEP∂σsΔes2+∂CEP∂esΔσs2,whereΔCEP is the estimated value of the accuracy change between the navigation star and the optimal navigation star. 
Δes and Δσs are the difference of elevation angle and azimuth angle between the star in the local navigation star database and the optimal navigation star. When the star is smallest, it is selected as the second available navigation star. For the first and the second available navigation star, the one with smaller CEP is the available optimal navigation star. ## 4.1. Angle Analysis of the Deviation from the Optimal Navigation Star For the single-star platform inertial-stellar composite guidance scheme, only observing the optimal navigation star can achieve the same accuracy as the double star guidance scheme. However, in star database, there are not necessarily stars in the optimal navigation star direction. And only one real star can be selected as the navigation star in the star library according to a certain principle. This star is called the available optimal navigation star. In this section, the angle that the available navigation star deviates from the optimal navigation star is analyzed.In thei-system, several groups of optimal navigation stars are randomly generated, in which the elevation angles are evenly distributed within −90°,90°, and azimuth angles are evenly distributed within −180°,180°. Each combination of elevation angles and azimuth angles eNi,σNi represents a group of possible optimal navigation star, and its direction vector in the i-system is (35)VNi=coseNicosσNicoseNisinσNisineNiT.For any star above 5.5 mag in the star database, its elevation angle and azimuth angle areeSj,σSj; then, the direction vector in the i-system can be expressed as (36)VSj=coseSjcosσSjcoseSjsinσSjsineSjT.The angle between the optimal navigation star and the available navigation star can be calculated according to the following equation:(37)αij=arccosVNi⋅VSj.By traversingj, the minimum angular distance between the optimal navigation star and the available navigation star can be obtained.100000 samples are sampled, and the results are shown in Figures4 and 5.Figure 4 Statistical histogram of the angles that the available navigation stars deviate from the optimal navigation star.Figure 5 Probability density histogram of the angles that the available navigation stars deviate from the optimal navigation star.Figures4 and 5, respectively, show the statistical histogram and probability density histogram of the angles that the available navigation stars deviate from the optimal navigation star. Here, each straight bar represents 0.1°, and the sum of all the sampling times is 100000. In Figure 4, the angular deviations are between 0 and 6° mostly, which mainly concentrated in 1°~3° and relatively few more than 5° or less than 1°. Compared with Figure 4, the shapes of the statistical histogram and probability density histogram are basically the same. Figure 5 also shows the probability density function diagram of the corresponding normal distribution. However, it is obvious that the distribution is not quite consistent with the normal distribution.Tables1 and 2 provide the corresponding numerical statistical results. It can be seen from Table 1 that the maximum deviation angle is 7.4221° and the mean deviation angle is 2.0949°. The more detailed statistical analysis results of the available navigation star deviation from the optimal navigation star are illustrated in Table 2. The table counts the single probability and cumulative probability of the deviation angle. 
It can be observed that 2°>α≧1° is the most, accounting for 33.558% and the deviation angle greater than 7° accounts for only 0.014%.Table 1 The basic analysis results of the available navigation star deviation from the optimal navigation star. EventMaximumMinimumMeanSquare3σ rangeValue (deg)7.42210.00412.09491.1139[-1.2467, 5.4365]Table 2 statistical analysis results of the available navigation star deviation from the optimal navigation star. Deviation angleNumberProbability (%)Cumulative numberCumulative probability (%)1°>α≧0°1544915.4491544915.4492°>α≧1°3355833.5584900749.0073°>α≧2°2939229.3927839978.3994°>α≧3°1484214.8429324193.2415°>α≧4°52165.2169845798.4576°>α≧5°12891.2899974699.7467°>α≧6°2400.2409998699.986α≧7°140.014100000100.000Therefore, if the upper limit of star-sensitive measurement magnitude is 5.5 mag, the available navigation star can be found within the angular distance range within 7° from the optimal navigation star.In the above analysis, constraints such as occlusion of the sun, moon, and earth have not been taken into account, and the deviation angle will be much larger after consideration. ## 4.2. Determining the Available Optimal Navigation Star Based on the Arrow-Borne Navigation Star Database In practical application, the navigation star must be selected in the arrow-borne navigation star database. According to the above analysis, within the range of 7° from the theoretical optimal navigation star, the probability of finding the navigation star is 100%. Therefore, a method determining the available optimal navigation star based on the local navigation star database is proposed to improve the efficiency of star selection. ### 4.2.1. Determining the Local Navigation Star Database Strong light sources should be avoided when determining the local navigation star database (this paper takes avoiding the sun as an example). The right ascension and declination of the sun obtained from the ephemeris table are defined asαsun and δsun, and the unit sun direction vector in the i-system can be expressed as (38)iI=cosδsuncosαsuncosδsunsinαsunsinαsun,whereiI is the unit sun direction vector in the i-system.Then, the unit sun direction vector in theA-system can be further obtained: (39)isun=CIA⋅iI,whereisun is the unit sun direction vector in the A-system and CIA is the transformation matrix from the i-system to the A-system.Therefore, the angular distanceθs between the theoretical optimal navigation star and the sun can be calculated as (40)θs=arccosSI⋅isun.Ifθs is less than the sum of the solar avoidance angle αsun and deviation angle Δα, the deviation angle can be recalculated according to the following equation: (41)Δβ=−αsunαsun+Δαθs+αsun+Δα,whereΔβ is the recalculated deviation angle.The navigation star orientation in the star database is defined ase0σ0, and its unit vector in the A-system can be expressed as (42)i0=cose0cosσ0sine0cose0sinσ0.Then, the angular distanceθI between the navigation star and the theoretical optical navigation star and the angular distance θ0 between the navigation star and the sun can be calculated, respectively. (43)θI=arccosi0⋅SI,θ0=arccosi0⋅isun.Therefore, the value ofθI and the deviation angle can be compared, so as θ0 and αsun. (44)θI<Δα,θs≥αsun+Δα,θI<Δβ,θs<αsun+Δα,θ0>αsun.If the above equation is valid, it means that the navigation star is within the deviation angle range of the optimal navigator star and outside the sun avoidance angle range. And the navigation star can be put into the local navigation star database. 
### 4.2.2. Determining the Available Optimal Navigation Star

Considering that the navigation star with the smallest angular distance from the theoretical optimal navigation star is not necessarily the available optimal navigation star, this paper combines the minimum angular distance and the minimum accuracy change to determine the available optimal navigation star. Firstly, the angular distance between each star in the local navigation star database and the optimal navigation star is calculated, and the star with the smallest angular distance is the first available navigation star. Secondly, the accuracy change is estimated for every star in the local navigation star database, and the star with the smallest change is the second available navigation star. The accuracy change caused by the navigation star deviation is estimated as follows.

The gradient is obtained from the partial derivatives of the composite guidance accuracy with respect to the navigation star orientation, evaluated at the optimal navigation star:

$$d_{\nabla} = \frac{\partial CEP}{\partial e_{s}}\,\mathbf{i} + \frac{\partial CEP}{\partial \sigma_{s}}\,\mathbf{j}, \tag{45}$$

where CEP is the circular error probable, and ∂CEP/∂e_s and ∂CEP/∂σ_s are the partial derivatives of the composite guidance CEP with respect to the elevation and azimuth angles at the optimal navigation star. The direction perpendicular to the gradient is the direction in which the composite guidance accuracy changes most slowly:

$$d_{\nabla\perp} = -\frac{\partial CEP}{\partial \sigma_{s}}\,\mathbf{i} + \frac{\partial CEP}{\partial e_{s}}\,\mathbf{j}. \tag{46}$$

Therefore, for any star in the local navigation star database, the accuracy change ΔCEP can be estimated according to

$$\Delta CEP = \sqrt{\left(\frac{\partial CEP}{\partial \sigma_{s}}\,\Delta e_{s}\right)^{2} + \left(\frac{\partial CEP}{\partial e_{s}}\,\Delta\sigma_{s}\right)^{2}}, \tag{47}$$

where ΔCEP is the estimated accuracy change between the navigation star and the optimal navigation star, and Δe_s and Δσ_s are the differences in elevation angle and azimuth angle between the star in the local navigation star database and the optimal navigation star. The star with the smallest ΔCEP is selected as the second available navigation star. Of the first and the second available navigation stars, the one with the smaller CEP is the available optimal navigation star.
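The two-candidate selection described above can be summarized in a short sketch. This is an illustration under assumptions, not the paper's code: the partial derivatives of the CEP at the optimal star and a `composite_cep(e_s, sigma_s)` evaluation function are taken as given, since they belong to the guidance accuracy model of the earlier sections; Equation (47) is used in the form reconstructed above.

```python
import numpy as np

def available_optimal_star(local_db, e_opt, s_opt, dcep_de, dcep_ds, composite_cep):
    """Pick the available optimal navigation star from the local database.

    local_db         : list of (e_s, sigma_s) pairs from the local star database
    e_opt, s_opt     : elevation/azimuth of the theoretical optimal navigation star
    dcep_de, dcep_ds : partial derivatives of CEP at the optimal star, Eq. (45)
    composite_cep    : function (e_s, sigma_s) -> composite guidance CEP
    """
    def to_vec(e, s):
        return np.array([np.cos(e) * np.cos(s), np.cos(e) * np.sin(s), np.sin(e)])

    def ang_dist(star):
        e, s = star
        return np.arccos(np.clip(np.dot(to_vec(e, s), to_vec(e_opt, s_opt)), -1.0, 1.0))

    def delta_cep(star):
        e, s = star
        return np.hypot(dcep_ds * (e - e_opt), dcep_de * (s - s_opt))  # Eq. (47)

    first = min(local_db, key=ang_dist)    # smallest angular distance
    second = min(local_db, key=delta_cep)  # smallest estimated accuracy change
    # The candidate with the smaller composite guidance CEP is the available optimal star.
    return min((first, second), key=lambda star: composite_cep(*star))
```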
## 5. Simulation Results

This section mainly includes two parts: (1) determining the theoretical optimal navigation star and (2) determining the available optimal navigation star based on the star database. The simulations are primarily aimed at verifying the effectiveness of the proposed method.

In the simulation, two representative responsive launch vehicle trajectories are adopted. The launch time is 00:00:00, 1 January 2019 (UTC). The whole flight time of the first trajectory is 1300 s, and that of the second trajectory is 2300 s. The initial position is (0°N, 0°E). The star sensor works beyond the atmosphere, and the star sensor installation angle is (φ_0, ψ_0) = (20°, 0°). The simulation parameters for the initial alignment error and the star sensor errors are listed in Table 3. Using two trajectories better verifies the effectiveness of the proposed method.

Table 3: The value of each error in the simulation.

| Error type | Error symbol | Value (3σ) | Units |
| --- | --- | --- | --- |
| Initial orientation (alignment) error | ε_0x | 100 | ″ |
| | ε_0y | 300 | ″ |
| | ε_0z | 100 | ″ |
| Star sensor measurement error | ε_ξ, ε_η | 0 | ″ |
| Star sensor installation error | Δφ_0, Δψ_0 | 0 | ″ |

### 5.1. Determining the Optimal Navigation Star

This section evaluates the effectiveness of the algorithm in Section 3. In the simulation, the optimal navigation star is determined by three methods: the traversal method, the simplex evolutionary method, and the analytical method proposed in this paper.

The traversal method searches the full orientation space with −90° ≤ e_s ≤ 90° and −180° < σ_s ≤ 180°, with a step of 1°. Taking the 6000 km trajectory (the first trajectory) as an example, when the optimal navigation star is determined by the traversal method, the composite guidance accuracy under different measurement orientations is shown in Figure 6, and the composite guidance CEP contour is shown in Figure 7.

Figure 6: Composite guidance CEP variation diagram for 6000 km.

Figure 7: Composite guidance CEP contour map for 6000 km.

Figure 6 shows the composite guidance CEP variation diagram for 6000 km. It can be observed that there are two minimum points in the composite guidance accuracy variation diagram corresponding to the single-star platform; that is, there are two optimal navigation stars, and the two optimal navigation star directions lie approximately on the same line through the emission point, that is, e′_s = −e_s and σ′_s = σ_s − π. This is consistent with the analysis conclusion of Equation (33). Besides, according to Figure 7, the composite guidance accuracy is approximately symmetric with respect to this line.
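The traversal (grid search) baseline can be outlined as follows. This is only an illustrative sketch: `composite_cep(e_s, sigma_s)` stands for the composite guidance CEP model of the preceding sections, which is not reproduced here.

```python
import numpy as np

def traversal_optimal_star(composite_cep, step_deg=1.0):
    """Exhaustive grid search over -90 <= e_s <= 90 and -180 < sigma_s <= 180 (deg),
    as in the traversal method, returning the orientation with the smallest CEP."""
    best_e, best_s, best_cep = None, None, np.inf
    for e_s in np.arange(-90.0, 90.0 + step_deg, step_deg):
        for sigma_s in np.arange(-180.0 + step_deg, 180.0 + step_deg, step_deg):
            cep = composite_cep(e_s, sigma_s)
            if cep < best_cep:
                best_e, best_s, best_cep = e_s, sigma_s, cep
    return best_e, best_s, best_cep
```

The number of CEP evaluations grows as the inverse square of the step size, which is consistent with the roughly 50-minute run time reported below for the 1° grid; the analytical method avoids this search entirely.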
In the simplex evolutionary method, the initial vertex is X_0 = [e_s0, σ_s0]^T = [20°, 0°]^T, the distance between vertices is taken as Δd = 10° to construct the initial simplex, and the iteration termination condition is ε = 0.1 m. The change of the simplex optimal vertex during the iteration process is shown in Figure 8, and the convergence error is shown in Figure 9.

Figure 8: Iterative change of the simplex optimal vertex for a range of 6000 km.

Figure 9: Simplex convergence error variation for a range of 6000 km.

It can be seen from these figures that when the simplex evolutionary method is used to determine the optimal navigation star, the simplex converges quickly, and the algorithm has a large search range. At the same time, high accuracy can be achieved by controlling the convergence domain.

Table 4 lists the required time and the optimal navigation stars determined by the three methods. In the table, CEP_INS is the pure inertial guidance accuracy and CEP_COM is the composite guidance accuracy.

Table 4: Optimal navigation star at different ranges.

| Range | Method | e_s (deg) | σ_s (deg) | CEP_INS (m) | CEP_COM (m) | t (s) |
| --- | --- | --- | --- | --- | --- | --- |
| 6000 km | Traversing | 38 | 0 | 2628.89 | 6.78 | 2963.05 |
| | Simplex | 37.6011 | −0.0411 | 2628.89 | 0.013 | 12.09 |
| | Analysis | 37.6011 | −0.0404 | 2628.89 | 0.001 | 0.001 |
| 12000 km | Traversing | 30 | 0 | 3198.36 | 4.02 | 2985.53 |
| | Simplex | 30.0226 | −0.1316 | 3198.36 | 0.015 | 13.22 |
| | Analysis | 30.0264 | −0.1318 | 3198.36 | 0.001 | 0.003 |

The simulation results show that, under the condition of considering only the initial alignment error, the results obtained by the analytical method are consistent with those obtained by the traversal method and the simplex evolutionary method, which verifies the effectiveness of the proposed method. At the same time, the azimuth angle of the navigation star is about 0°, which indicates that the optimal navigation star is near the shooting plane when the star sensor is installed on the xoy plane of the platform.

According to the results of the optimal navigation stars and the corresponding composite guidance accuracy, the accuracy of the traversal method is limited because it searches with a fixed step, whereas the analytical method and the simplex evolutionary method have no such limitation and can therefore locate the optimal navigation star orientation with high accuracy. Comparing the calculation times of the three methods (all computed on a PC), the traversal method takes about 50 minutes, while the analytical method is completed in a very short time. Moreover, the composite guidance accuracy corresponding to the optimal navigation star obtained by the analytical method is 99.99% better than that of the traversal method (from 6.78 m to 0.001 m) and 92.31% better than that of the simplex evolutionary method (from 0.013 m to 0.001 m). The run time of the traversal method also depends on the step size: the smaller the step, the longer the run time, but the more accurately the optimal star orientation is determined. Therefore, under the condition of significant initial error, the method proposed in this paper can be used to determine the optimal navigation star quickly. Since only the initial orientation error is considered in the simulation, the stellar guidance can correct all the effects of this error, and the corrected accuracy is close to 0 m; of course, this cannot be achieved when all error factors are considered.

Taking the responsive launch vehicle with a range of 6000 km as an example, the influence of the star sensor installation error and measurement error on the optimal navigation star is analyzed next. In this simulation, the simplex evolutionary method and the analytical method are used to determine the optimal navigation star.
Table 5 lists the optimal navigation stars when the star sensor installation error is considered, and Table 6 lists the optimal navigation stars when the star sensor measurement error is considered.

Table 5: Optimal navigation stars with different star sensor installation errors.

| Δφ_0, Δψ_0 (″, 3σ) | Method | e_s (deg) | σ_s (deg) |
| --- | --- | --- | --- |
| 0 | Simplex | 37.6011 | −0.0411 |
| | Analysis | 37.6011 | −0.0404 |
| 10 | Simplex | 37.5998 | −0.0417 |
| | Analysis | 37.6011 | −0.0404 |
| 20 | Simplex | 37.5991 | −0.0423 |
| | Analysis | 37.6011 | −0.0404 |
| 30 | Simplex | 37.5979 | −0.0440 |
| | Analysis | 37.6011 | −0.0404 |

Table 6: Optimal navigation stars with different star sensor measurement errors.

| ε_ξ, ε_η (″, 3σ) | Method | e_s (deg) | σ_s (deg) |
| --- | --- | --- | --- |
| 0 | Simplex | 37.6011 | −0.0411 |
| | Analysis | 37.6011 | −0.0404 |
| 10 | Simplex | 37.6011 | −0.0404 |
| | Analysis | 37.6011 | −0.0404 |
| 20 | Simplex | 37.6012 | −0.0403 |
| | Analysis | 37.6011 | −0.0404 |
| 30 | Simplex | 37.6012 | −0.0404 |
| | Analysis | 37.6011 | −0.0404 |

The star sensor installation error has a certain but small impact on the optimal navigation star: the changes in elevation angle and azimuth angle are both within 0.01°. Comparing Tables 5 and 6, the star sensor measurement error has an even smaller impact on the optimal navigation star. Therefore, the method proposed in this paper can determine the optimal navigation star effectively.

### 5.2. Determining the Optimal Available Navigation Star

Stars are basically evenly distributed in the celestial coordinate system, and the earth-shielding range in the star sensor field of view is basically fixed. Due to the physical realization of the inertial platform frame angles, there are some restrictions on the azimuth and elevation angles. It is assumed that the azimuth angle is limited to ±45° and the elevation angle to ±60°. At the same time, it is assumed that the sun avoidance angle is 20°, the moon avoidance angle is 10°, the additional horizon avoidance angle is 5°, and the large-planet avoidance angle is 2°. The arrow-borne navigation star database is shown in Figure 10, and the local navigation star database generated according to Section 4.2.1 is shown in Table 7.

Figure 10: Arrow-borne navigation star database.

Table 7: Local navigation star database.

| Number | e_s (deg) | σ_s (deg) | CEP_INS (m) | CEP_COM (m) |
| --- | --- | --- | --- | --- |
| 1 | 35.0459 | −7.3824 | 2628.89 | 139.409 |
| 2 | 36.4093 | −3.5698 | 2628.89 | 67.152 |
| 3 | 40.6013 | −3.4893 | 2628.89 | 83.301 |
| 4 | 32.8418 | 0.9406 | 2628.89 | 79.633 |
| 5 | 41.2134 | 1.4235 | 2628.89 | 69.357 |
| 6 | 31.5162 | 2.5029 | 2628.89 | 108.593 |
| 7 | 38.1545 | 6.3915 | 2628.89 | 117.695 |
| 8 | 39.9243 | 7.9122 | 2628.89 | 150.393 |

Figure 10 shows the generated arrow-borne navigation star database when the launch time is January 1, 2019. Owing to the constraints, the final number of available navigation stars is 292. In comparison, there are only 8 candidate navigation stars in the local navigation star database, indicating that most stars in the arrow-borne navigation star database can be excluded based on the maximum deviation angle from the theoretical optimal navigation star, thus shortening the time needed to determine the available optimal navigation star. Tables 8 and 9 list the available optimal navigation stars for the 6000 km and 12000 km launch vehicles.

Table 8: The available optimal navigation stars for the 6000 km launch vehicle.
| | Date | 2019/1/1 | 2019/3/10 | 2019/8/20 |
| --- | --- | --- | --- | --- |
| Theoretical optimal navigation star | e_s (deg) | 37.6011 | 37.6011 | 37.6011 |
| | σ_s (deg) | −0.0404 | −0.0404 | −0.0404 |
| | CEP_INS (m) | 2628.89 | 2628.89 | 2628.89 |
| | CEP_COM (m) | 0.001 | 0.001 | 0.001 |
| Available optimal navigation star (traversal method) | e′_s (deg) | 36.4093 | 36.5827 | 37.0740 |
| | σ′_s (deg) | −3.5698 | 1.0344 | −2.9698 |
| | CEP_INS (m) | 2628.89 | 2628.89 | 2628.89 |
| | CEP_COM (m) | 67.152 | 26.320 | 54.120 |
| Available optimal navigation star (proposed method) | e′_s (deg) | 36.4093 | 36.5827 | 37.0740 |
| | σ′_s (deg) | −3.5698 | 1.0344 | −2.9698 |
| | CEP_INS (m) | 2628.89 | 2628.89 | 2628.89 |
| | CEP_COM (m) | 67.152 | 26.320 | 54.120 |

Table 9: The available optimal navigation stars for the 12000 km launch vehicle.

| | Date | 2019/1/1 | 2019/3/10 | 2019/8/20 |
| --- | --- | --- | --- | --- |
| Theoretical optimal navigation star | e_s (deg) | 30.0264 | 30.0264 | 30.0264 |
| | σ_s (deg) | −0.1318 | −0.1318 | −0.1318 |
| | CEP_INS (m) | 3198.36 | 3198.36 | 3198.36 |
| | CEP_COM (m) | 0.001 | 0.001 | 0.001 |
| Available optimal navigation star (traversal method) | e′_s (deg) | 30.5718 | 32.2593 | 25.3675 |
| | σ′_s (deg) | 0.7825 | 0.3548 | −0.4691 |
| | CEP_INS (m) | 3198.36 | 3198.36 | 3198.36 |
| | CEP_COM (m) | 26.332 | 33.224 | 64.516 |
| Available optimal navigation star (proposed method) | e′_s (deg) | 30.5718 | 32.2593 | 25.3675 |
| | σ′_s (deg) | 0.7825 | 0.3548 | −0.4691 |
| | CEP_INS (m) | 3198.36 | 3198.36 | 3198.36 |
| | CEP_COM (m) | 26.332 | 33.224 | 64.516 |

Tables 8 and 9 compare the available optimal navigation stars determined by the proposed method and by the traversal method. The traversal method here refers to traversing all stars in the local navigation star database, and its results can be regarded as exact; the proposed method refers to the available optimal navigation star determined according to Section 4.2.2. These results are the navigation stars selected from the real local navigation star database after considering the various star selection constraints. It can be observed from the tables that the navigation star determined by the proposed method is the same as that determined by the traversal method, which proves that the method proposed in this paper is effective. At the same time, the angular distance between the theoretical optimal navigation star and the available optimal navigation star is within 5°, and the change in composite guidance accuracy is less than 70 m, indicating that the available optimal navigation star still has a good correction effect.
## 6. Conclusion

The application of a single-star inertial-stellar guidance system in responsive launch vehicles demands that the optimal navigation star be determined quickly. However, the current optimal navigation star selection schemes determine the star by numerical methods, which increases the preparation time before launch. This paper proposes a fast algorithm to determine the star. The key of this algorithm is to deduce the optimal navigation star analytically, based on the equivalent information compression theory, under the condition of significant initial error. The analytical solution is clearly less time-consuming than the numerical solution, and it can achieve the same accuracy as the numerical solution, or even higher.

On the basis of determining the optimal navigation star, the available optimal navigation star should be further determined in combination with the arrow-borne navigation star database. There are certain deviations between the optimal navigation star and the navigation stars in the database. Therefore, the deviation angles between them are first analyzed without considering constraints. Based on the deviation angle, candidate navigation stars are selected into the local navigation star database. Then, the available optimal navigation star is determined according to certain criteria.
The algorithm proposed in this paper can quickly determine the optimal navigation star and the available optimal navigation star. --- *Source: 1000865-2022-04-19.xml*
2022
# Therapeutic Effects of the Proximal Femoral Nail for the Treatment of Unstable Intertrochanteric Fractures **Authors:** Yuwei Cai; Wenjun Zhu; Nan Wang; Zhongxiang Yu; Yu Chen; Shengming Xu; Juntao Feng **Journal:** Evidence-Based Complementary and Alternative Medicine (2022) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2022/1001354 --- ## Abstract Objective. The aim of this study was to analyze the clinical effect of the proximal femoral nail on elderly patients with unstable intertrochanteric fracture and the effect of the proximal femoral nail on serum levels of matrix metalloproteinases (MMPs) and osteoprotegerin (OPG). Methods. The elderly patients with unstable intertrochanteric fracture of the femur admitted to our hospital from January 2017 to January 2021 were studied. 100 patients were randomly divided into two groups: the control group (n = 50) and the observation group (n = 50). The patients in the control group were treated with a proximal femoral locking compression plate. The patients in the observation group were treated with the proximal femoral antirotation intramedullary nail. The clinical therapeutic effects of the two groups and the changes in serum MMPs and OPG levels before and after treatment were analyzed. Results. Compared with the control group, the operation time, postoperative landing time, and fracture healing time of the observation group were significantly shortened, and intraoperative blood loss was significantly reduced (P < 0.05). Compared with the control group, the total effective rate of patients in the observation group was significantly higher (P < 0.05). After treatment, the levels of CRP, IL1β, IL2, MMP-2, MMP-6, TIMP-1, and RANKL decreased significantly in both groups (P < 0.05), while the levels of OPG increased significantly (P < 0.05). Compared with the control group, the changes in the above indexes were more obvious in the observation group (P < 0.05). Conclusion. The proximal femoral antirotation intramedullary nail has a better therapeutic effect on elderly patients with unstable intertrochanteric fracture, and the level of MMPs and OPG may be related to the treatment process. --- ## Body ## 1. Introduction Intertrochanteric fractures of the femur are common surgical fractures with high morbidity rates [1, 2]. In order to improve the mobility of the lower extremities and improve the prognosis of patients, patients with intertrochanteric fractures should receive immediate surgery and actively perform lower extremity functional exercises [3]. The proximal femoral anti-rotation intramedullary nail is a common method for the treatment of femoral intertrochanteric fractures, with the advantages of less trauma and easy operation [4]. Matrix metalloproteinases (MMPs) regulate the remodeling process of the extracellular matrix and are also widely involved in the process of bone tissue injury and healing [5]. The receptor activator of nuclear factor kappa B ligand/osteoprotegerin (RANKL/OPG) is a key signal transduction pathway of bone metabolism and is closely related to the pathological and physiological processes of bone tissue [6]. However, there is no relevant report on the level of MMPs during the treatment of elderly unstable intertrochanteric fractures with proximal femoral antirotation intramedullary nails. 
This study analyzed the clinical effect of the proximal femoral antirotation intramedullary nail on elderly patients with unstable intertrochanteric fractures and its effect on serum levels of MMPs and OPG, in order to provide a reference for clinical treatment.

## 2. Materials and Methods

### 2.1. Research Subjects

Elderly patients with unstable femoral intertrochanteric fractures admitted to our hospital from January 2017 to January 2021 were selected as the research subjects. Inclusion criteria were as follows: all patients met the diagnostic criteria for unstable intertrochanteric fractures [7] and were diagnosed by X-ray; there was a clear history of trauma; hip pain, swelling, and lower extremity dysfunction were present; the patient had severe lower extremity deformity and valgus with obvious local tenderness; and clinical data were complete. Exclusion criteria were as follows: severe bone disease; severe metabolic dysfunction; concurrent malignant tumors; drugs affecting bone metabolism taken within the past 3 months; incomplete clinical data or refusal to participate in this study. 100 patients were randomly divided into two groups according to the treatment method. Control group (n = 50): 24 males and 26 females; aged 61–72 years, mean 66.5 ± 8.5 years; 28 patients with Evans III fractures (intertrochanteric fractures combined with displaced greater trochanteric fractures, no posterolateral support, and comminuted posterior fracture); and 22 patients with Evans IV fractures (combined displaced lesser trochanter fracture with no medial support). Observation group (n = 50): 23 males and 27 females; aged 61–72 years, mean 66.8 ± 8.8 years; 29 patients with Evans III fractures; and 21 patients with Evans IV fractures. There was no statistical difference in general data such as gender and average age between the two groups (P > 0.05).

### 2.2. Treatment Methods

The patients in the control group were treated with the proximal femoral locking compression plate. The patient was placed in a supine position, continuous epidural anesthesia was administered, a soft pillow was placed under the affected buttock, and the surgical incision was made 2.9 ± 1 cm above the apex of the greater trochanter, extending laterally. The skin and subcutaneous tissue were separated, the fracture end was exposed, Kirschner wire fixation was applied after satisfactory reduction and traction, screws were inserted under C-arm fluoroscopy, the screws were locked at the distal end, and the affected limb was mobilized.

The patients in the observation group were treated with the proximal femoral antirotation intramedullary nail. The patient was placed in a supine position, continuous epidural anesthesia was administered, the affected limb was adducted about 15° in the neutral position, reduction was performed under C-arm fluoroscopy, and a longitudinal incision of about 5.5 ± 0.5 cm was made starting about 1 cm above the greater trochanter. The needle was placed at the apex of the tuberosity, and after reaming, the proximal antirotation intramedullary nail was inserted and the helical blade and distal locking screw were screwed in.

### 2.3. Observation Indicators and Methods

#### 2.3.1. Analysis of Clinical Treatment Effect of the Two Groups of Patients

In this study, the operation time, intraoperative blood loss, postoperative landing time, and fracture healing time of the two groups of patients were analyzed, and the total effective rate of the treatment was also analyzed. The treatment effect is divided into four categories.
Cure: at the follow-up 12 months after treatment, the Harris hip score is more than 90 points. Significant effect: the Harris hip score is more than 80 points. Valid: the Harris hip score is more than 70 points. Invalid: the Harris hip score is not more than 70 points. Total effective rate = (cured + significant effect + valid)/total number of cases × 100%.

#### 2.3.2. Biochemical Index Analysis

Before and after treatment, 5 ml of fasting venous blood was collected in the morning. Serum levels of MMPs (including MMP2, MMP6, and their inhibitor TIMP1), RANKL and OPG, CRP, interleukin (IL) 1β, and IL2 before and after treatment were analyzed by enzyme-linked immunosorbent assay. MMP2 and MMP6 detection kits were purchased from Cell Signaling. The TIMP1 detection kit was purchased from R&D Company. The RANKL detection kit was purchased from Santa Cruz Company. The OPG, CRP, IL1β, and IL2 detection kits were purchased from Abcam Company. All detection operations were performed in accordance with the kit instructions.

#### 2.3.3. Patient Follow-Up

All patients were followed up for more than 12 months. After discharge, the patients were contacted by telephone and clinic visits every 2 months, and complications during this period were counted.

### 2.4. Statistical Analysis

SPSS 20.0 statistical software was used to analyze the data. Measurement data were expressed as mean ± standard deviation (x̄ ± s) and compared between the two groups with the t-test; count data were expressed as percentages and compared between the two groups with the chi-square test. P < 0.05 means the difference is statistically significant.
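As an illustration of the between-group comparisons described above, the sketch below reproduces the analysis pattern (independent-samples t-test for measurement data, chi-square test for count data, and the total effective rate formula) in Python with scipy. The per-patient values are placeholders, not the study data, and SPSS 20.0 remains the software actually used in the study.

```python
import numpy as np
from scipy import stats

# Placeholder measurement data (e.g., operation time in minutes) for two groups;
# the real per-patient data are not reported in the paper.
control = np.random.default_rng(1).normal(78.4, 10.4, 50)
observation = np.random.default_rng(2).normal(49.6, 9.6, 50)

# Independent-samples t-test for measurement data (mean +/- SD comparisons).
t_stat, p_value = stats.ttest_ind(control, observation)
print(f"t = {t_stat:.3f}, P = {p_value:.4f}")

# Chi-square test for count data, e.g., effective vs. invalid outcomes per group;
# 45/5 and 49/1 correspond to the totals reported in Table 2. Note that scipy
# applies Yates' continuity correction to 2x2 tables by default, so the value
# may differ slightly from the chi-square reported in the paper.
table = np.array([[45, 5],
                  [49, 1]])
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.3f}, P = {p:.4f}")

# Total effective rate = (cured + significant effect + valid) / total cases x 100%,
# shown here with the observation-group counts from Table 2.
cured, significant, valid, total = 22, 14, 13, 50
print(f"total effective rate = {(cured + significant + valid) / total * 100:.1f}%")
```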
## 3. Results

### 3.1. Analysis of the Surgical Conditions of the Two Groups of Patients

Compared with the control group, the operation time, postoperative landing time, and fracture healing time of the observation group were significantly shortened, and intraoperative blood loss was significantly reduced; the differences between the two groups were statistically significant (P < 0.05), as shown in Table 1.

Table 1: Analysis of the surgical conditions of the two groups of patients.

| Group | n | Operation time (min) | Intraoperative blood loss (ml) | Postoperative landing time (d) | Fracture healing time (weeks) |
| --- | --- | --- | --- | --- | --- |
| Control group | 50 | 78.43 ± 10.44 | 116.56 ± 12.56 | 10.43 ± 1.43 | 13.34 ± 3.21 |
| Observation group | 50 | 49.56 ± 9.56^a | 97.21 ± 8.48^a | 6.68 ± 1.90^a | 11.19 ± 3.09^a |
| t | | 1.865 | 2.334 | 1.762 | 1.856 |
| P | | <0.05 | <0.05 | <0.05 | <0.05 |

Note. Compared with the control group, ^a P < 0.05.

### 3.2. Analysis of the Improvement Effect of Hip Joint Function in the Two Groups of Patients

Compared with the control group, the total effective rate of patients in the observation group was significantly higher, and the difference between the two groups was statistically significant (P < 0.05), as shown in Table 2.

Table 2: Analysis of the improvement effect of hip joint function in the two groups of patients.

| Group | n | Cured | Significant effect | Valid | Invalid | Total effective rate (%) |
| --- | --- | --- | --- | --- | --- | --- |
| Control group | 50 | 17 | 10 | 18 | 5 | 45 (90.0%) |
| Observation group | 50 | 22 | 14 | 13 | 1 | 49 (98.0%) |
| χ² | | | | | | 2.996 |
| P | | | | | | <0.05 |
### 3.3. Changes of Serum MMP Levels in the Two Groups of Patients before and after Treatment

There was no statistical difference in the levels of MMP2, MMP6, and TIMP1 between the two groups before treatment (P > 0.05). After treatment, the levels of MMP2, MMP6, and TIMP1 in both groups were lower than before treatment, and the decreases in the observation group were more significant than in the control group (P < 0.05), as shown in Table 3.

Table 3: Changes of serum MMP levels in the two groups of patients before and after treatment.

| Group | | MMP2 (mg/L) | MMP6 (mg/L) | TIMP1 (mg/L) |
| --- | --- | --- | --- | --- |
| Control group (n = 50) | Before treatment | 34.52 ± 9.23 | 24.76 ± 5.60 | 23.65 ± 7.12 |
| | After treatment | 29.45 ± 7.09^b | 19.78 ± 4.81^b | 19.60 ± 4.65^b |
| Observation group (n = 50) | Before treatment | 35.09 ± 10.11 | 25.01 ± 6.33 | 23.98 ± 8.33 |
| | After treatment | 25.53 ± 4.59^ab | 14.54 ± 4.62^ab | 14.36 ± 3.35^ab |

Note. Compared with the control group in the same period, ^a P < 0.05; compared with before treatment, ^b P < 0.05.

### 3.4. Changes of Serum RANKL/OPG Levels in the Two Groups of Patients before and after Treatment

There was no significant difference in RANKL/OPG between the two groups before treatment (P > 0.05). After treatment, the OPG level increased and the RANKL level decreased in both groups; the changes in the observation group were more significant than in the control group (P < 0.05), as shown in Table 4.

Table 4: Changes of serum RANKL/OPG levels in the two groups of patients before and after treatment.

| Group | | OPG (ng/L) | RANKL (ng/L) |
| --- | --- | --- | --- |
| Control group (n = 50) | Before treatment | 301.33 ± 32.76 | 14.47 ± 3.88 |
| | After treatment | 335.76 ± 27.89^b | 11.09 ± 4.21^b |
| Observation group (n = 50) | Before treatment | 300.98 ± 19.88 | 15.01 ± 4.09 |
| | After treatment | 387.95 ± 25.66^ab | 8.13 ± 1.44^ab |

Note. Compared with the control group in the same period, ^a P < 0.05; compared with before treatment, ^b P < 0.05.

### 3.5. Changes of Serum Interleukin and C-Reactive Protein Levels in the Two Groups of Patients before and after Treatment

There was no significant difference in the levels of CRP, IL1β, and IL2 between the two groups before treatment (P > 0.05); these indicators decreased significantly in both groups after treatment (P < 0.05), and the decreases in the observation group were more marked than in the control group (P < 0.05), as shown in Table 5.

Table 5: Changes of serum CRP, IL1β, and IL2 levels in the two groups of patients before and after treatment.

| Group | | CRP (mg/L) | IL1β (ng/L) | IL2 (ng/L) |
| --- | --- | --- | --- | --- |
| Control group (n = 50) | Before treatment | 54.54 ± 7.66 | 47.87 ± 9.44 | 51.43 ± 8.89 |
| | After treatment | 43.43 ± 6.89^b | 40.54 ± 5.21^b | 43.56 ± 7.33^b |
| Observation group (n = 50) | Before treatment | 54.09 ± 7.32 | 48.01 ± 8.33 | 52.01 ± 9.21 |
| | After treatment | 25.64 ± 4.22^ab | 33.13 ± 5.65^ab | 32.34 ± 6.66^ab |

Note. Compared with the control group in the same period, ^a P < 0.05; compared with before treatment, ^b P < 0.05.

### 3.6. Complications

No serious complications occurred in either group after treatment.
## 4. Discussion

Unstable femoral intertrochanteric fracture in the elderly is a common and frequently occurring clinical condition that places a heavy burden on patients and their families. Currently, surgery is usually used for the treatment of femoral intertrochanteric fractures. In this study, the clinical effect of the proximal femoral antirotation intramedullary nail was found to be better, suggesting that the proximal femoral antirotation intramedullary nail has a good therapeutic effect on elderly patients with unstable femoral intertrochanteric fracture and that the recovery of MMP and OPG/RANKL levels may be related to the treatment process. Although it is generally believed that proximal femoral antirotation intramedullary nails and locking compression plates affect the efficacy of unstable intertrochanteric fractures mainly through biomechanical rather than biological factors, this study indicates that biological factors, especially changes in the levels of MMPs and their inhibitors, may be involved in the above-mentioned treatment process.

The abnormal expression of MMPs and their inhibitors is involved in the pathological processes of a variety of bone tissues and is closely related to clinical treatment effects [8]. Many clinical medications for bone or joint diseases achieve their therapeutic effects by interfering with MMP levels. Both Polygonatum preparation and celecoxib can improve patients' joint function scores and knee joint function by reducing the serum content of MMP-13, and the reduction of MMP-13 is also related to reducing inflammatory responses and protecting chondrocytes [9, 10]. Drug treatment of knee osteoarthritis can effectively reduce the levels of inflammatory factors such as serum metalloproteinases, thereby improving knee joint mobility and quality of life [11]. When calcitriol is used in the treatment of knee osteoarthritis, it can inhibit MMPs by suppressing MMP-1, MMP-3, and MMP-13 gene expression, thereby reducing the severity of arthritis, relieving pain, and improving patients' living conditions [12]. This study also found that the levels of MMP2, MMP6, and their inhibitor TIMP1 were significantly reduced in both groups after treatment, and the changes in the observation group were more significant than in the control group, suggesting that the better therapeutic effect of the proximal femoral antirotation intramedullary nail in elderly patients with unstable intertrochanteric fractures may be related to the regulation of abnormal MMP levels. Therefore, in addition to biomechanical factors, biological factors, especially MMPs and their inhibitors, play a key role in the effect of the proximal femoral antirotation intramedullary nail and the locking compression plate on the efficacy of unstable intertrochanteric fractures.
However, the changes in the above cytokines were more obvious in patients treated with the proximal femoral antirotation intramedullary nail than in those treated with a locking compression plate. Although the changes of these factors were analyzed in this paper, the regulatory pathways upstream of them have not been effectively explored; future work will focus on the analysis of changes in upstream cytokines.

The occurrence, development, and treatment of many bone and joint diseases are related to MMPs, RANKL, and OPG. Yougui Pill can delay cartilage degeneration by inhibiting the activity of MMPs and the expression of inflammatory factors. The RANKL/OPG signaling pathway is also closely related to the process of bone metabolism [13, 14]; the effect of traditional Chinese medicine treatment on the levels of serum OPG and RANKL in patients with rheumatoid arthritis of the wind-cold-dampness type is key to clinical efficacy [15]. Serum RANKL and OPG levels in patients with ankylosing spondylitis (AS) are significantly correlated with enthesopathy, and they can be used as reliable indicators for predicting the presence of enthesopathy in AS patients, especially the presence of bone erosion [16]. At the same time, percutaneous vertebroplasty can effectively treat senile osteoporotic thoracolumbar fractures and can also significantly reduce the levels of OPG and RANKL and promote bone healing [17]. In this study, after treatment, the levels of RANKL in the two groups were significantly decreased and the level of OPG was significantly increased, indicating that the proximal femoral antirotation intramedullary nail has a good therapeutic effect in elderly patients with unstable femoral intertrochanteric fractures, and its effect may be related to the regulation of abnormal levels of MMPs and OPG.

Previous studies have also found that interleukins and C-reactive protein are closely related to the pathology and recovery process of fractures [18, 19]. Incision infection after calcaneal fracture affects the clinical treatment effect, and serum IL-2, IL-6, and CRP levels increase in patients with postoperative incision infection after bone fracture [20]. Closed negative-pressure drainage combined with astragalus injection irrigation in the treatment of traumatic suppurative osteomyelitis can effectively improve limb function, improve treatment efficiency, and reduce complications and hospitalization costs, which may be related to the inhibition of CRP and IL-6 secretion [21, 22]. At the same time, the abnormal recovery of interleukin and C-reactive protein levels correlates with the recovery of fractures [23–26]. Lugua polypeptide can increase bone mineral density and improve red blood cell-related, bone metabolism, and inflammatory indexes in patients with osteoporotic fractures [27]. The increase in serum IL-6 level is involved in the injury of elderly femoral neck fracture and acute trauma in the early stage of surgery, and the inflammatory response participates in the bone remodeling of postoperative fracture healing [28].
This study also confirmed that the levels of interleukins and CRP in both groups were significantly reduced after treatment, and that the changes were more obvious in the observation group, suggesting that the proximal femoral antirotation intramedullary nail is more effective in regulating the abnormally elevated inflammation level in elderly patients with unstable femoral intertrochanteric fractures, which may be related to the regulation of abnormal levels of MMPs and OPG. In future studies, we will further analyze whether MMPs and OPG are risk factors for unstable intertrochanteric fractures and examine the correlation between them, so as to provide a reference for the prevention and treatment of clinically relevant diseases.

In conclusion, the proximal femoral antirotation intramedullary nail has a good therapeutic effect in elderly patients with unstable intertrochanteric fractures, and the changes in MMP and OPG levels may be related to the treatment process.

--- *Source: 1001354-2022-09-02.xml*
# ALKBH5 Is Lowly Expressed in Esophageal Squamous Cell Carcinoma and Inhibits the Malignant Proliferation and Invasion of Tumor Cells

**Authors:** Jinqiu Li; Hongqiang Liu; Shanglin Dong; Yunbo Zhang; Xiao Li; Jing Wang
**Journal:** Computational and Mathematical Methods in Medicine (2021)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2021/1001446

---

## Abstract

Background. Modification of N6-methyladenosine (m6A) and RNA m6A regulatory factors is required in cancer advancement. The contribution of m6A and its alteration in esophageal squamous cell carcinoma (ESCC) is still unclear. Results. ALKBH5 was expressed at low levels in ESCC tissues, whereas the total m6A level was higher in ESCC tissue than in normal healthy tissue. The pcDNA3.1-ALKBH5 recombinant plasmid was transfected into KYSE-150 and Eca-109 cells. Overexpression of ALKBH5 significantly reduced the total m6A levels in Eca-109 and KYSE150 cells and inhibited their proliferation, migration, and invasion. Conclusions. ALKBH5, a demethylase, was expressed at low levels in ESCC and acts as a crucial component in ESCC progression.

---

## Body

## 1. Background

Esophageal cancer (EC) ranks seventh in global cancer incidence and sixth in cancer mortality worldwide [1]. The two leading pathological subtypes of EC are esophageal squamous cell carcinoma (ESCC) and esophageal adenocarcinoma (EAC), which have significantly different risk factors, geographic distributions, and treatment strategies. In China, EC accounts for the majority of cancer mortalities, among which ESCC is the leading subtype (80%) [2]. China's ESCC cases account for half of the global ESCC burden and more than 90% of Asia's [3]. Patients are generally diagnosed at an advanced stage [4]. At present, the survival rate of ESCC cases is still very low [5].

N6-Methyladenosine (m6A) is a widespread modification of RNA in mammals [6]. m6A is deposited by a methyltransferase complex ("writers," METTL3/METTL14/WTAP/RBM15/ZC3H13/KIAA1429) and removed by demethylases ("erasers," ALKBH5 and FTO). "Readers" (YTHDF1/2/3, YTHDC1/2, HNRNPA2/B1, HNRNPC, HNRNPG, eIF3, IGF2BP1/2/3, and Prrc2a) selectively recognize m6A-modified RNA and perform specific biological functions. m6A modification regulates RNA metabolism, including RNA degradation, transport, localization, and translation [7, 8].

Xu et al. constructed a prognostic signature composed of HNRNPC and ALKBH5 and showed that it can act as an independent prognostic marker [9]. Moreover, Guo et al. identified prognostic characteristics of EC patients that include ALKBH5 and HNRNPA2B1 [10]. In this research, we clarify the function of ALKBH5 in the malignant growth and invasion of ESCC cells in vitro.

## 2. Results

### 2.1. ALKBH5 Is Lowly Expressed in ESCC

First, the expression of ALKBH5 was measured in 23 ESCC tissue samples using qRT-PCR and IHC. Figure 1(a) shows that the relative mRNA level of ALKBH5 was low in ESCC tissues (P < 0.05). The results of IHC showed that ALKBH5 had abundant protein expression in normal esophageal tissues (Figure 1(b)). However, the expression of ALKBH5 protein was significantly reduced in ESCC tissues (Figures 1(b) and 1(c)).
The data statistics indicated that ALKBH5 was highly expressed in 78.26% (18/23) of normal, healthy esophageal tissues, but highly expressed in only 39.13% (9/23) of ESCC patients' tissues (Figure 1(c), P < 0.05). In addition, the total m6A levels in ESCC were detected by ELISA (Figure 1(d)). The total m6A level in ESCC was higher than in normal esophageal tissue (Figure 1(d), P < 0.05). In summary, there was a significant increase in total m6A levels in ESCC tissues, and the expression of ALKBH5 was significantly decreased compared with normal esophageal tissues.

Figure 1 ALKBH5 is lowly expressed in ESCC tissues. The mRNA (a) and protein (b) expression of ALKBH5 in 23 pairs of ESCC and adjacent normal tissue samples using qRT-PCR and IHC. (c) The data statistics of IHC. (d) The total m6A levels in ESCC and normal tissues by ELISA. (e) ALKBH5 protein was significantly overexpressed in Eca-109 and KYSE150 cells following transfection of the pcDNA3.1-ALKBH5 plasmid (OE). NC: negative control. P < 0.05. (a)(b)(c)(d)(e)

### 2.2. Proliferation Inhibition of ESCC Cells by ALKBH5

The pcDNA3.1-ALKBH5 plasmid (OE) was transfected into ESCC cells, and gain-of-function experiments were performed. As seen in Figure 1(e), ALKBH5 was markedly overexpressed in Eca-109 and KYSE150 cells following transfection of the pcDNA3.1-ALKBH5 plasmid (P < 0.05). In addition, the overexpression of ALKBH5 significantly reduced the total m6A levels (Figures 2(a) and 2(b)). Regarding ESCC cell proliferation, both CCK-8 (Figure 2(c)) and colony formation (Figure 2(d)) assays suggested that ALKBH5 inhibited ESCC cell proliferation and growth.

Figure 2 ALKBH5 inhibits ESCC cell proliferation. The total m6A levels in Eca-109 and KYSE150 cells transfected with the pcDNA3.1-ALKBH5 plasmid (OE) by dot blot assays (a) and ELISA (b). The proliferation and growth of Eca-109 and KYSE150 cells transfected with the pcDNA3.1-ALKBH5 plasmid (OE) by CCK-8 (c) and colony formation (d) assays. NC: negative control. P < 0.05. (a)(b)(c)(d)

### 2.3. Apoptosis Induction of ESCC Cells by ALKBH5

Apoptosis signals are responsible for regulating the proliferation of tumor cells. Therefore, the impact of ALKBH5 overexpression on apoptosis signals in Eca-109 and KYSE150 cells was examined. Flow cytometry indicated that after overexpression of ALKBH5, the apoptotic cell ratio increased significantly, Bax expression and cleaved caspase-3 increased, and Bcl2 expression decreased (Figures 3(a) and 3(b)). In summary, ALKBH5 overexpression induced apoptosis signals in ESCC cells.

Figure 3 ALKBH5 induces ESCC cell apoptosis. (a) Flow cytometry detection for apoptosis of Eca-109 and KYSE150 cells transfected with the pcDNA3.1-ALKBH5 plasmid (OE). (b) The expression of apoptosis-related proteins by western blot. NC: negative control. P < 0.05. (a)(b)

### 2.4. Motility Inhibition of ESCC Cells by ALKBH5

The migration and invasion of Eca-109 and KYSE150 cells transfected with the pcDNA3.1-ALKBH5 plasmid were then examined. As shown in Figure 4(a), the numbers of invading cells were obviously decreased in both cell lines transfected with the pcDNA3.1-ALKBH5 plasmid. Furthermore, the overexpression of ALKBH5 significantly reduced the number of migrating ESCC cells (Figures 4(b)–4(d)).

Figure 4 ALKBH5 inhibits ESCC cell migration and invasion. The invasion (a) and migration (b–d) of Eca-109 and KYSE150 cells transfected with the pcDNA3.1-ALKBH5 plasmid (OE) by transwell and wound healing assays. NC: negative control. P < 0.05. (a)(b)(c)(d)
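Before turning to the discussion, the group difference in ALKBH5-high IHC calls reported in Section 2.1 (18/23 normal tissues vs. 9/23 ESCC tissues) can be illustrated with an exact test on the 2×2 counts. This is a minimal Python/scipy sketch; the authors do not state which test they used for this comparison, so the choice of Fisher's exact test here is an assumption.

```python
from scipy.stats import fisher_exact

# 2x2 table of ALKBH5 IHC status (high vs. not high) in normal vs. ESCC tissue,
# built from the counts reported in Section 2.1 (18/23 and 9/23 high expression).
counts = [[18, 23 - 18],   # normal tissue: high, not high
          [9, 23 - 9]]     # ESCC tissue:   high, not high
odds_ratio, p_value = fisher_exact(counts)
print(f"Fisher's exact test: OR = {odds_ratio:.2f}, p = {p_value:.4f}")
```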
## 3. Discussion

m6A modification and RNA m6A regulatory factors contribute to cancer progression. Among them, certain m6A regulatory factors have been found to be significantly related to the progression and prognosis of ESCC. For example, METTL3 is highly expressed in ESCC and significantly relates to patient prognosis [11]. Furthermore, the overexpression of METTL3 promotes the proliferation and invasion of ESCC cells [12]. Zhang et al. examined the expression of m6A-related proteins in 348 ESCC tissues by ELISA and found that the expression of METTL3 and METTL14 in ESCC pathological tissues was significantly upregulated, while the expression of FTO and ALKBH5 was significantly downregulated, compared with normal tissues. This is consistent with our findings in 23 ESCC tissue samples examined using RT-qPCR and IHC. This study also measured the total m6A level in ESCC tissues by ELISA. Consistent with the expression levels of the m6A regulatory factors, the total m6A level in ESCC tissues was significantly increased.

However, Xu et al. analyzed the RNA transcriptome data of 161 EC samples and 11 normal tissue samples in the TCGA database and found no significant differential expression of FTO and ALKBH5 [9]. This result may be affected by the pathological subtype of the tumor tissue samples or the scarcity of normal tissue samples. Nevertheless, they still identified a prognostic signature consisting of HNRNPC and ALKBH5 based on this cohort. In addition, Liu et al. found significantly high expression of FTO in an ESCC tissue microarray [13]. ALKBH5, as a homologue of FTO [14], works with FTO to maintain the balance of m6A levels in the transcriptome [15]. This result seems to be contrary to our findings; however, our study did not examine the expression level or mechanism of FTO.

Genetic variation in m6A modifier genes is possibly linked to ESCC risk. The rs2416282 variant in the YTHDC2 promoter has been identified as significantly associated with ESCC risk [16]. This allele specifically affects the binding of transcription factors. In addition, knocking down YTHDC2 expression significantly inhibits the proliferation of ESCC cells [16]. However, Xu et al. analyzed the genetic mutations in the TCGA EC cohort through the cBioPortal database [9]. The results showed that the genetic variation frequency of YTHDF1, ZC3H13, and KIAA1429 is 7-8%, with amplification being the most common variation, while the frequency for the other regulatory factors is less than 3%. They therefore argue that the changes in the expression levels of these regulatory factors are not caused by genetic alterations.

In this study, we used the pcDNA3.1 plasmid to overexpress ALKBH5 and found that ALKBH5 contributes to suppressing cancer. Our study reports the contribution of ALKBH5 expression to altering the biological behavior of human ESCC cells in vitro. These outcomes indicate that ALKBH5 is involved in the malignant growth and aggressiveness of ESCC.
## 4. Conclusions

In conclusion, ALKBH5, a demethylase, is expressed at low levels in ESCC tissues. ALKBH5 regulates the total m6A level of ESCC cells and plays a vital role in ESCC progression. However, we have not explored the expression and role of the ALKBH5 homologue FTO, nor have we uncovered the molecular process that drives the downregulation of ALKBH5 expression in ESCC tissues.

## 5. Methods

### 5.1. Patients and Tissue Samples

This study recruited 23 ESCC patients who were diagnosed and underwent surgical resection without neoadjuvant/adjuvant treatment. The tissue samples were confirmed pathologically after resection.

### 5.2. Cell Culture and Transfection

The KYSE-150 and Eca-109 cell lines were used in this study; both were obtained from the cell bank of the Chinese Academy of Sciences. The cell lines were cultured in RPMI 1640 medium supplemented with 10% FBS at 37°C with 5% CO2. ALKBH5 cDNA was synthesized and cloned into the pcDNA3.1 expression vector. Subsequently, the recombinant plasmid was transfected into cells using Lipofectamine 2000 reagent.

### 5.3. RT-qPCR

Total RNA from tissues, or from cells transfected with the recombinant plasmid for 24 h, was extracted and reverse-transcribed into cDNA. GAPDH served as the internal reference, and the relative expression level of ALKBH5 was calculated using the 2^(-ΔΔCT) method (a short illustrative sketch of this calculation and of the IHC scoring is given after Section 5.6).

### 5.4. Immunohistochemistry Staining Analysis (IHC)

Tissue samples were formalin-fixed and paraffin-embedded, and the paraffin blocks were cut into tissue sections. The sections underwent deparaffinization, rehydration, and microwave treatment. After blocking, the sections were incubated with an ALKBH5 antibody (1 : 500, ab195377, rabbit monoclonal antibody, Abcam) and then with a biotin-labeled secondary antibody. Staining was performed with an enhanced DAB chromogenic kit, and the sections were dehydrated and fixed. The score of each section was the product of the positivity proportion of stained cells (R) and the staining intensity score (S). R scores were 0 (<5%, negative), 1 (5-25%, occasional), 2 (25-50%, focal), and 3 (>51%, diffuse). S scores were 0 (negative), 1 (weak), 2 (medium), and 3 (strong). A score higher than 3 was considered high expression of ALKBH5.

### 5.5. Total m6A Level Detection

Total m6A levels were measured in tissue samples or in cells transfected with the recombinant plasmid for 24 h. For ELISA, the EpiQuik m6A RNA methylation quantification ELISA kit (P-9005-96, Epigentek) was used. For the dot blot assay, total RNA was extracted with TRIzol reagent, and mRNA was isolated and enriched with the Polyattract® mRNA isolation system (A-Z5300, A&D Technology Corporation). The mRNA was cross-linked to an Amersham Hybond-N+ membrane by exposure to UV light for 7 min. After washing with PBST buffer, the membrane was blocked with 5% skimmed milk, incubated with an anti-m6A antibody, and stained with Immobilon Western Chemiluminescent HRP Substrate (Merck Millipore).

### 5.6. Western Blot

Protein was solubilized in RIPA buffer, separated by SDS-PAGE, and transferred to a PVDF membrane. The membrane was blocked with 5% skimmed milk and then incubated with the primary antibody followed by the secondary antibody. An enhanced chemiluminescent HRP substrate (Menlo Park, CA) was used for visualization.
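The relative quantification described in Section 5.3 and the composite IHC score described in Section 5.4 can both be written out in a few lines; the sketch below (referenced in Section 5.3) is illustrative only, and the Ct values and staining scores in the example calls are hypothetical rather than data from the study.

```python
def relative_expression_ddct(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    """Relative expression by the 2^(-ddCt) method, with GAPDH as the internal
    reference and the paired normal tissue (or NC cells) as the calibrator."""
    d_ct_sample = ct_target - ct_gapdh              # delta Ct in the sample of interest
    d_ct_control = ct_target_ctrl - ct_gapdh_ctrl   # delta Ct in the calibrator
    return 2.0 ** -(d_ct_sample - d_ct_control)

def ihc_composite_score(positivity_score_r, intensity_score_s):
    """IHC score = R x S (Section 5.4); a product greater than 3 is called
    'high' ALKBH5 expression, otherwise 'low'."""
    score = positivity_score_r * intensity_score_s
    return score, ("high" if score > 3 else "low")

# Hypothetical example values (not measurements from the study)
print(relative_expression_ddct(ct_target=28.1, ct_gapdh=18.3,
                               ct_target_ctrl=26.0, ct_gapdh_ctrl=18.1))
print(ihc_composite_score(positivity_score_r=2, intensity_score_s=3))  # (6, 'high')
```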
### 5.7. Cell Proliferation Assays

For the CCK-8 assay, 2000 cells/well were seeded in a 96-well plate. After culturing for the indicated time, the old medium was removed and fresh medium containing 10% CCK-8 reagent was added. The absorbance of each well at 450 nm was measured with a microplate reader. For the colony formation assay, 1000 transfected cells were seeded in each well of a 6-well plate and cultured for up to 2 weeks. The resulting colonies were stained with 0.1% crystal violet.

### 5.8. Flow Cytometry Detection

Cells were trypsinized without EDTA and resuspended to 1×10^6 cells/ml in binding buffer. FITC-Annexin V and PI were added to the cell suspension. After thorough mixing, detection was performed immediately on a FACScan flow cytometer (BD Biosciences, San Jose, California).

### 5.9. Mobility Assay

For the transwell assay, cells suspended in serum-free medium were added to the upper chamber. After 24 hours, the cells on the underside of the chamber membrane were fixed with 4% paraformaldehyde and stained with crystal violet. For the wound healing assay, a wound was created in the cell monolayer with a cell scraper, and sloughed cells were washed away with PBS. After culturing for a specified time, pictures were taken and the wound healing rate was calculated.

### 5.10. Statistics

The data were analyzed with SPSS 20.0. Student's t-test and one-way ANOVA were used to test differences between groups. P < 0.05 was considered statistically significant.
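A minimal sketch of the group comparisons described in Section 5.10 (Student's t-test for two groups, one-way ANOVA for more than two) is given below. The OD450 readings are hypothetical placeholder values, not measurements from the study.

```python
from scipy import stats

# Hypothetical CCK-8 OD450 readings (triplicate wells) at one time point.
nc_od = [1.32, 1.28, 1.35]   # negative-control (empty vector) cells
oe_od = [0.91, 0.95, 0.88]   # pcDNA3.1-ALKBH5 (OE) cells

# Two groups: Student's t-test.
t_stat, p_two = stats.ttest_ind(nc_od, oe_od)
print(f"NC vs. OE: t = {t_stat:.2f}, p = {p_two:.4f}")

# More than two groups (e.g., adding an untransfected control): one-way ANOVA.
untransfected = [1.30, 1.36, 1.29]
f_stat, p_anova = stats.f_oneway(untransfected, nc_od, oe_od)
print(f"One-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")
```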
--- *Source: 1001446-2021-11-28.xml*
# Deconstruction of the Prevention of Knee Osteoarthritis by Swimming Based on Data Mining Technology

**Authors:** Jianxia Yin; Qing Li; Yao Song
**Journal:** BioMed Research International (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1001686

---

## Abstract

With the continuous development of big data and the steady improvement of living standards, increasing attention is being paid to physical health. Swimming is an effective exercise for preventing arthritis. This paper analyzes the prevention of knee osteoarthritis by swimming. Traditional retrieval of the clinical literature on the treatment of knee osteoarthritis with traditional Chinese medicine yields a large amount of disordered data, and sorting and summarizing it requires considerable manpower and material resources; this is where data mining (DM) technology comes into play. In this paper, the relevant information from the literature that met the requirements was entered into an Excel database. Through sorting and analysis, the TCM syndrome types of knee osteoarthritis were summarized. DM technology was then used to carry out frequency and prescription statistics, to summarize the distribution characteristics of the TCM syndrome types of knee osteoarthritis and the weight of each syndrome type, and to make a preliminary discussion. Finally, it is concluded that traditional Chinese medicine research offers effective methods for the prevention of arthritis. DM technology has been applied to more and more aspects of traditional Chinese medicine; in this study it improved research efficiency by 38% and achieved good results, which will further promote research on TCM syndromes.

---

## Body

## 1. Introduction

In today's era of the rapidly developing knowledge economy, with the rapid growth of information industrialization and database technology, all walks of life face a rapid increase in the amount of data generated in practice. A great deal of important and useful information is often hidden behind this surge of data. There is an urgent need for technology that "removes the rough and keeps the fine" and "removes the false and keeps the true" to analyze these data systematically and comprehensively; DM technology arose accordingly. Big data platforms are built on distributed storage and distributed computing systems. A distributed system is composed of inexpensive, cost-effective machines and can be expanded dynamically by adding cheap PCs to the cluster, improving data storage capacity and processing efficiency while saving resources.
Therefore, using DM technology to analyze these data comprehensively, convert them into useful knowledge, explore the laws they contain, and accelerate the utilization and dissemination of TCM informatization has become key to the innovation and development of TCM.

Swimming has a preventive effect on arthritis. At the same time, a small proportion of patients fail to achieve satisfactory clinical outcomes after knee arthroplasty, which may suggest that existing postoperative rehabilitation models are not the most effective. Vadher et al. conducted the Post-Knee Replacement Community Rehabilitation Trial, which evaluated the effects of a new community-based rehabilitation program after knee replacement compared with usual care [1]. Wang et al. conducted several clinical studies to evaluate the effect of neuromuscular exercise therapy on joint stability in patients with knee osteoarthritis [2]. Li et al., aiming to systematically evaluate the effect of motor imagery on functional performance in patients with total knee arthroplasty, included randomized controlled trials evaluating motor imagery interventions [3]. These articles explain the importance of protecting the knee well, but they do not study swimming on the basis of DM and therefore have certain limitations.

DM technology is the process of extracting relatively hidden but potentially useful information from large amounts of data. Wang et al. mined knowledge about extraction parameters from historical data on the extraction process of traditional Chinese medicine and used it to guide technicians in selecting the influencing factors and levels for orthogonal tests [4]. Using 5G and association-analysis data mining as the theoretical basis, Li et al. designed a data model of tennis offensive tactics and association rules that can calculate the distribution rate of certain methods [5]. Many heuristics have been proposed to build near-optimal decision trees, but most of them can only reach local optima; to address this, Wang et al. proposed a new algorithm with a new splitting criterion and a new decision tree construction method [6]. Andrew recorded weather events and the resulting road surface conditions during preprocessing and subsequent events using visual assessments and limited road grip tester assessments, complemented by extensive laboratory research; the combined findings form a decision tree to aid operational planning and preprocessing [7]. Baneshi et al. proposed a tree-based model to assess the impact of different religious dimensions based on risk factors [8]. Although these studies have achieved practical results, the research remains narrowly targeted and needs further improvement.

In this paper, DM technology is used to analyze the clinical literature on knee osteoarthritis. Through this exploration, the common single and compound syndrome types seen in clinical practice were obtained, with compound syndrome types dominating. Most syndrome types of this disease include liver and kidney deficiency syndrome as a component, which provides a theoretical basis for syndrome differentiation and treatment and should help improve the clinical efficacy of treatment for knee osteoarthritis.
The novelty of the article is as follows: it tries to analyze and study the clinical syndromes of knee osteoarthritis by relying on the existing clinical literature, without the support of new clinical research, and strives to explore the practical value of that literature so that it can be applied more reasonably in clinical practice.

## 2. DM Technology

### 2.1. Similarity Calculation Method in KNN Algorithm

The KNN algorithm uses a similarity calculation, ranks the training objects by their similarity to the target object, extracts the first $K$ objects, uses the similarity between each of these $K$ objects and the target object as a weight, and computes the weighted sum. Finally, the result is normalized by the sum of the similarities between the top $K$ objects and the target object [9].

#### 2.1.1. Euclidean Distance Method

$$M(m_a, m_b) = \sqrt{\frac{1}{X}\sum_{k=1}^{X}\left(n_{ak}-n_{bk}\right)^{2}}. \quad (1)$$

In the formula, the feature vector of one text is $m_a$, the feature vector of another text is $m_b$, $X$ is the dimension of the feature vectors, and $n_{ak}$ is the $k$-th component of $m_a$. Because this measure treats the differences between different attributes of the samples as equal, it sometimes does not meet actual requirements and does not take into account the influence of the overall variation on the distance.

#### 2.1.2. Angle Cosine Method

$$\mathrm{Sim}(m_a, m_b) = \frac{\sum_{k} n_{ak}\, n_{bk}}{\sqrt{\sum_{k} n_{ak}^{2}}\,\sqrt{\sum_{k} n_{bk}^{2}}}. \quad (2)$$

The meaning of each parameter is consistent with the Euclidean distance method.

#### 2.1.3. Weight Calculation Formula

$$q(y, v_a) = \sum_{m_b \in \mathrm{KNN}(y)} \mathrm{Sim}(y, m_b)\, q(m_b, v_a). \quad (3)$$

The $K$ training texts contribute to the weight of each category in turn, and the test text is then assigned to the category with the largest weight. Here $\mathrm{Sim}(y, m_b)$ is the similarity between the text to be classified and a training text, and $q(m_b, v_a)$ is the category attribute function: when the text $m_b$ belongs to category $v_a$, $q(m_b, v_a)=1$; otherwise, $q(m_b, v_a)=0$. The traditional KNN algorithm still has many shortcomings when dealing with big data: first, all training samples need to be stored; second, the amount of calculation is large, so errors are prone to occur in the calculation process.

#### 2.1.4. Application of Density Cropping Based on Clustering Denoising in KNN Algorithm

To address these shortcomings, an improved KNN algorithm based on clustering, denoising, and density clipping is proposed here. As can be seen from Figure 1, even for samples of the same category, because of differences between samples and because each sample represents the category to a different degree, there are large differences in the similarity between samples.

Figure 1: Schematic diagram of removing noisy text from the training set during clustering.
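To make the weighted-voting scheme of Equations (2) and (3) concrete, here is a minimal sketch in Python (not from the original paper; the toy feature vectors and labels are invented for illustration):

```python
import numpy as np

def cosine_sim(a, b):
    # Equation (2): angle cosine between two feature vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def knn_predict(query, train_X, train_y, k=3):
    # Rank training texts by similarity to the query (Equation (2)),
    # then accumulate per-category weights as in Equation (3).
    sims = np.array([cosine_sim(query, x) for x in train_X])
    top = np.argsort(sims)[::-1][:k]
    weights = {}
    for i in top:
        weights[train_y[i]] = weights.get(train_y[i], 0.0) + sims[i]
    # Normalize by the total similarity of the top-K neighbours
    total = sum(sims[i] for i in top)
    scores = {c: w / total for c, w in weights.items()}
    return max(scores, key=scores.get), scores

# Toy example: three training "texts" in a 4-dimensional feature space
train_X = np.array([[1, 0, 2, 0], [0, 1, 0, 2], [1, 1, 1, 0]], dtype=float)
train_y = ["deficiency", "damp-heat", "deficiency"]
label, scores = knn_predict(np.array([1.0, 0.5, 1.5, 0.0]), train_X, train_y, k=2)
print(label, scores)
```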
### 2.2. Naive Bayes

$$P(Y \mid X) = \frac{P(Y)\,P(X \mid Y)}{P(X)}. \quad (4)$$

$P(Y \mid X)$ is the probability that event $Y$ occurs given that event $X$ has occurred, $P(X \mid Y)$ is the probability that event $X$ occurs given that event $Y$ has occurred, and $P(X)$ and $P(Y)$ are the probabilities of events $X$ and $Y$, respectively [10]. Assume that the training set $M$ contains $m$ classified samples, that a sample with attributes $N = (n_1, n_2, \dots, n_k)$ belongs to class $c$, and that $n_i$ is the $i$-th attribute of the sample. According to Bayes' theorem,

$$P(c \mid N) = \frac{P(N \mid c)\,P(c)}{P(N)}. \quad (5)$$

For an unknown sample $N$, the conditional probability of each class is calculated, and the class with the maximum probability is taken as the class of the sample. Using the attribute-independence assumption, $P(N \mid c)$ can be transformed into

$$P(N \mid c) = \prod_{i=1}^{k} P(n_i \mid c). \quad (6)$$

Therefore, the Bayesian classifier can be written as

$$h(N) = \arg\max_{c}\; P(c) \prod_{i=1}^{k} P(n_i \mid c). \quad (7)$$

### 2.3. Logistic Regression

The regression coefficient is a parameter that represents the influence of the independent variable $x$ on the dependent variable $y$ in the regression equation; the larger the regression coefficient, the greater the influence of $x$ on $y$. The core of logistic regression [11] is the sigmoid function:

$$d = \frac{1}{1 + e^{-x}}. \quad (8)$$

Figure 2 is a graph of the sigmoid function, which maps the value of $x$ to a value $d$ between 0 and 1. Here $x$ is a regression function: with regression coefficients $\delta$ and input $M$, $x = \delta^{T} M$, and substituting this into the formula above gives

$$d = \frac{1}{1 + e^{-\delta^{T} M}}. \quad (9)$$

Figure 2: Sigmoid function graph.

This can be transformed into

$$\ln\frac{d}{1-d} = \delta^{T} M. \quad (10)$$

If $d$ is the probability of a sample $M$, then $1-d$ is the probability of the opposite outcome, and $\ln\bigl(d/(1-d)\bigr)$ is the log-odds (relative probability) of $M$.
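A minimal numeric illustration of Equations (8)–(10), not taken from the paper (the coefficient vector and input sample are made up):

```python
import numpy as np

def sigmoid(x):
    # Equation (8): maps any real x to a value d in (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

delta = np.array([0.8, -0.4, 1.2])   # hypothetical regression coefficients
m = np.array([1.0, 2.0, 0.5])        # hypothetical input sample M

x = float(delta @ m)                 # x = delta^T M
d = sigmoid(x)                       # Equation (9)
log_odds = np.log(d / (1.0 - d))     # Equation (10): recovers delta^T M
print(f"x={x:.3f}  d={d:.3f}  log-odds={log_odds:.3f}")
```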
### 2.4. Support Vector Machines

Support vector machines can be used not only for classification but also for regression. For a binary classification problem, a hyperplane must be drawn between the two classes to separate them [12]. The hyperplane is described by

$$\omega^{T} x + a = 0. \quad (11)$$

$\omega = (\omega_1, \omega_2, \dots, \omega_k)$ is the normal vector of the hyperplane, representing its orientation, and $a$ is the displacement, determining the distance between the hyperplane and the origin. The distance from a sample point to the hyperplane is then

$$r = \frac{\lvert \omega^{T} x + a \rvert}{\lVert \omega \rVert}. \quad (12)$$

If the hyperplane separates the positive and negative samples, then

$$\omega^{T} x_i + a \ge +1 \ \text{for}\ y_i = +1, \qquad \omega^{T} x_i + a \le -1 \ \text{for}\ y_i = -1. \quad (13)$$

The sum of the distances from two support vectors of different classes to the hyperplane is

$$R = \frac{2}{\lVert \omega \rVert}, \quad (14)$$

known as the "margin" (interval). Finding the hyperplane that maximizes the margin means maximizing $R$ subject to Equation (13), that is,

$$\max_{\omega, a} \; \frac{2}{\lVert \omega \rVert} \quad \text{s.t. } y_i\left(\omega^{T} x_i + a\right) \ge 1, \; i = 1, 2, \dots, k. \quad (15)$$

Maximizing $1/\lVert \omega \rVert$ is equivalent to minimizing $\lVert \omega \rVert^{2}$, so the problem can be rewritten as

$$\min_{\omega, a} \; \frac{1}{2}\lVert \omega \rVert^{2} \quad \text{s.t. } y_i\left(\omega^{T} x_i + a\right) \ge 1, \; i = 1, 2, \dots, k. \quad (16)$$

### 2.5. Decision Tree

The decision tree algorithm is an instance-based inductive learning method; it builds a model with a tree structure from the root through the branches. ID3 is a classification algorithm for decision tree learning. The ID3 algorithm splits mainly according to the size of the information gain and then constructs a decision tree, which is suitable for discrete data. Which attribute is selected at each node is the core of the ID3 algorithm; its task is to minimize the number of nodes in the decision tree, and the smaller the number of nodes, the higher the recognition rate [13]. Assume that the training set is $M$, the proportion of samples of the $i$-th class is $p_i$, and the discrete attribute $b$ has $V$ values $b^{1}, b^{2}, \dots, b^{V}$, where the subset of training samples taking value $b^{v}$ on attribute $b$ is denoted $M^{v}$. The first concept is information entropy, an indicator of the purity of a sample set, defined as

$$\mathrm{Ent}(M) = -\sum_{i=1}^{k} p_i \log_2 p_i. \quad (17)$$

The smaller the information entropy, the higher the purity. The information gain obtained when attribute $b$ is used to split the sample set $M$ is

$$\mathrm{Gain}(M, b) = \mathrm{Ent}(M) - \sum_{v=1}^{V} \frac{\lvert M^{v} \rvert}{\lvert M \rvert}\, \mathrm{Ent}(M^{v}). \quad (18)$$

The gain ratio is defined as

$$\mathrm{Gain\_ratio}(M, b) = \frac{\mathrm{Gain}(M, b)}{\mathrm{IV}(b)}, \quad (19)$$

where

$$\mathrm{IV}(b) = -\sum_{v=1}^{V} \frac{\lvert M^{v} \rvert}{\lvert M \rvert} \log_2 \frac{\lvert M^{v} \rvert}{\lvert M \rvert}. \quad (20)$$

$\mathrm{IV}(b)$ is fixed for a given attribute; the more possible values the attribute has, the larger it is, which effectively avoids preferentially selecting attributes with many values.
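A small sketch of Equations (17), (18), and (20), written for this edit (the toy dataset is invented), showing how an ID3-style split would be scored:

```python
import math
from collections import Counter

def entropy(labels):
    # Equation (17): Ent(M) = -sum p_i * log2(p_i)
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr_index):
    # Equation (18): Ent(M) minus the weighted entropy of each subset M^v
    n = len(labels)
    subsets = {}
    for row, y in zip(rows, labels):
        subsets.setdefault(row[attr_index], []).append(y)
    weighted = sum(len(ys) / n * entropy(ys) for ys in subsets.values())
    return entropy(labels) - weighted

def intrinsic_value(rows, attr_index):
    # Equation (20): IV(b), larger for attributes with many distinct values
    n = len(rows)
    counts = Counter(row[attr_index] for row in rows)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Toy data: each row is (age group, gender); the label is a syndrome type
rows = [("45-60", "F"), ("45-60", "M"), ("over60", "F"), ("30-44", "F")]
labels = ["deficiency", "damp-heat", "deficiency", "damp-heat"]
for i, name in enumerate(["age group", "gender"]):
    g = information_gain(rows, labels, i)
    iv = intrinsic_value(rows, i)
    print(f"{name}: gain={g:.3f}, gain_ratio={g / iv:.3f}")
```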
## 3. Deconstruction of the Effect of Swimming on Knee Osteoarthritis Based on DM Technology

### 3.1. Overview

Knee osteoarthritis is a common chronic joint disease characterized by noninflammatory degeneration of the articular cartilage and bone hyperplasia at the joint margins. It is more common in middle-aged and elderly patients, and the prevalence in women is twice that in men. The main clinical manifestations are swelling and pain of the knee joint, morning stiffness, a locking sensation, and limited mobility. In later stages it may progress to muscle atrophy and even varus deformity of the knee joint, eventually leaving the patient's limbs disabled [14]. The disease develops insidiously, greatly harms people's health, and has become the leading cause of mobility limitation and chronic disability in middle-aged and elderly people. Traditional Chinese medicine offers many treatments for knee osteoarthritis, with few side effects and low medical expense, so it has the advantage of being broadly applicable to different types of people and to different syndrome types at different stages.
However, current research on the standardization of TCM syndromes for knee osteoarthritis shows that there is still no objective and unified standard for syndrome differentiation. This is because the current diagnostic criteria for the existing syndrome types are mainly derived from collective research and discussion by groups of experts who may not fully agree on the etiology and pathogenesis of knee osteoarthritis, and many differing opinions remain on syndrome differentiation of the disease: the criteria rest on personal experience reports, on the statistical results of questionnaires in particular regions, or on special prescriptions based on disease differentiation, resulting in an extremely confusing clinical syndrome classification. This makes it impossible to implement and promote many effective treatment methods and greatly limits the progress of traditional Chinese medicine in the treatment of knee osteoarthritis [15]. It is therefore essential to standardize the TCM syndrome types of knee osteoarthritis, which will provide a strong theoretical basis for treatment based on syndrome differentiation, help guide clinical medication and active prevention, and have an important and far-reaching impact on relieving patients' suffering, improving quality of life, and promoting social harmony.

### 3.2. Prevention of Arthritis by Swimming

When patients with knee osteoarthritis swim, the buoyancy and resistance of the water have a clear therapeutic effect on the joints and help relax them. At the same time, swimming reduces the pressure on the joints while the muscles are fully exercised, which relieves inflammation and promotes the recovery of function to a certain extent. Because synovial fluid nourishes the articular cartilage, the intermittent loading of the cartilage during exercise accelerates the circulation of synovial fluid, so the arthritic condition can be relieved. The horizontal position of the body during swimming significantly reduces the burden on the spine, relieves pain and inflammation, and acts as physical therapy. Dorsiflexing the head from time to time during freestyle and breaststroke stretches the spine of people who work with their heads down for long periods. For the elderly, moderate swimming can also prevent osteoporosis, reduce the risk of fractures, improve the function of organs such as the heart and lungs, and strengthen immunity.

### 3.3. Resolve Methods

(1) The clinical literature to be studied is determined according to the research scope, inclusion criteria, and exclusion criteria.
(2) From the valid clinical literature, general information such as titles, authors, research subjects, syndrome types, prescription names, treatment principles, and prescriptions is summarized and entered into an Excel sheet to establish a relevant information database.
(3) Data preprocessing is performed for studies in which the TCM syndrome type is not clearly stated. When the treatment principles of the prescription are clearly stated, the TCM syndrome type is deduced through the reverse syndrome-differentiation process of "testing syndromes by method": parts of a given model are extracted, the corresponding prescription is given, and after checking the indicators the differences before and after treatment and between groups are compared. At the same time, the TCM syndrome types are merged and standardized, so that a unified and standardized set of TCM syndrome types is finally formed.
(4) The software performs statistical analysis, using DM technology, on the general data (average age, gender) and the TCM syndrome types of the research subjects in the database. Frequency statistics are used to summarize the average age and gender of the subjects and the percentage of each common clinical TCM syndrome type in the total number of cases, and corresponding charts are made (a minimal sketch of this frequency analysis follows this list).
(5) From these statistical results, the average age and gender of patients, the distribution of common clinical syndrome types, and the weight of each syndrome type can be understood, and a preliminary discussion is made toward establishing a more normative, objective, and standard clinical syndrome classification system. The specific process is shown in Figure 3.

Figure 3: Flow diagram of the analysis process.
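As referenced in step (4) above, a minimal sketch of the frequency analysis in Python/pandas (the column names and values are hypothetical, standing in for the Excel database described in step (2)):

```python
import pandas as pd

# Hypothetical extract of the literature database described in step (2)
df = pd.DataFrame({
    "age_group": ["45-60", "45-60", "Over 60", "30-44", "45-60"],
    "gender": ["F", "F", "M", "F", "F"],
    "syndrome": ["Liver and kidney deficiency", "Damp-heat paralysis",
                 "Liver and kidney deficiency", "Wind-cold dampness",
                 "Blood stasis"],
})
# In practice the data would be read from the Excel database, e.g.:
# df = pd.read_excel("koa_literature.xlsx")   # hypothetical filename

# Frequency and percentage of each syndrome type (cf. Tables 2 and 3)
syndrome_counts = df["syndrome"].value_counts()
syndrome_pct = (syndrome_counts / len(df) * 100).round(2)
print(pd.DataFrame({"frequency": syndrome_counts, "percent": syndrome_pct}))

# Age-group and gender distributions (cf. Table 1 and Section 3.4.2)
print(df["age_group"].value_counts(normalize=True).round(4) * 100)
print(df["gender"].value_counts())
```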
### 3.4. Deconstruction

#### 3.4.1. Age

The mean age of subjects with knee osteoarthritis was tabulated from a database of 1270 valid articles. Mining and analyzing these data shows, as the following table indicates, that middle-aged and elderly people over 45 years old are the high-risk group for knee osteoarthritis. Table 1 gives the distribution.

Table 1: Relationship between patient age and knee osteoarthritis.

| Average age | Frequency | Percentage |
| --- | --- | --- |
| 30-44 | 28 | 2.28% |
| 45-60 | 782 | 61.57% |
| Over 60 | 460 | 36.15% |

#### 3.4.2. Gender

From the database of 1270 valid articles, 120 patients were extracted for analysis. Among them there were 20 male and 100 female patients, a male-to-female ratio of 1:5. Females therefore account for a larger proportion of arthritis cases.
#### 3.4.3. Frequency of Syndrome Types

First, the names of the 36 related clinical syndrome types were standardized by statistical analysis [16]. The 36 clinical syndrome names were then merged. For example, insufficiency of kidney yin, deficiency of kidney yang, deficiency of yin with hyperactive fire, deficiency of the liver and kidney, deficiency of kidney yuan, and deficiency of both kidney yin and yang were combined into liver and kidney deficiency syndrome; wind-cold-damp soaking, cold-damp obstruction, and rheumatic obstruction were combined into wind-cold-damp obstruction syndrome; collateral obstruction, blood stasis blocking the collaterals, meridian blockage, and unfavorable meridians were combined into tendon and meridian stasis syndrome; damp-heat soaking, rheumatic-heat internal accumulation, and damp-heat obstruction were combined into rheumatic-heat arthralgia syndrome; phlegm and blood stasis blocking the collaterals, cold-dampness with phlegm and blood stasis, phlegm-dampness blocking the collaterals, phlegm-dampness with blood stasis, phlegm-dampness cold coagulation, dampness evil blocking, and phlegm-damp coagulation were combined into phlegm and blood stasis syndrome; and deficiency of the liver and kidney with stagnation of the tendons and vessels, or with kidney deficiency and blood stasis, became liver and kidney deficiency with tendon and vessel stasis syndrome [17].

Knee osteoarthritis has a long disease course and a complex pathogenesis, often combining multiple patterns. Its syndrome classification is mainly divided into four types: damp-heat obstruction syndrome, blood stasis obstruction syndrome, liver-kidney yin deficiency syndrome, and wind-cold dampness syndrome. From the database of 1270 valid articles, 100 patients were extracted for analysis. The syndrome-differentiation analysis is shown in Table 2.

Table 2: Frequency table of TCM syndrome types.

| Syndromes | Frequency | Frequency (%) |
| --- | --- | --- |
| Damp-heat paralysis syndrome | 32 | 32% |
| Liver and kidney deficiency syndrome | 28 | 28% |
| Wind-cold dampness syndrome | 26 | 26% |
| Blood stasis syndrome | 14 | 14% |

Through descriptive frequency analysis of the DM results in Table 2, combined with relevant professional knowledge, the ten common clinical syndrome types in this study were further summarized and the single syndrome types were obtained. The compound syndrome types are listed in Table 3 from high to low frequency.

Table 3: Frequency of compound syndrome types.

| Syndromes | Frequency | Percentage (%) |
| --- | --- | --- |
| Liver and kidney deficiency syndrome and muscle and vessel stasis syndrome | 385 | 44.77% |
| Liver-kidney deficiency syndrome and wind-cold-dampness obstruction syndrome | 258 | 30% |
| Liver and kidney deficiency syndrome combined with phlegm and blood stasis syndrome | 115 | 13.37% |
| Liver and kidney deficiency syndrome and qi stagnation and blood stasis syndrome | 102 | 11.86% |

#### 3.4.4. Frequency of Symptoms

Table 4 lists the frequencies of the ten most common symptoms and signs among the patients.

Table 4: Symptom frequency table.

| Symptoms | Frequency | Frequency (%) | Symptoms | Frequency | Frequency (%) |
| --- | --- | --- | --- | --- | --- |
| Joint pain | 123 | 94.6 | Unfavorable flexion and extension | 90 | 69.2 |
| Aggravation after activity | 103 | 79.2 | Worse in rainy weather | 82 | 63.0 |
| Morning stiffness | 103 | 79.2 | Swollen joints | 49 | 37.6 |
| Bone rubbing | 100 | 76.9 | Heavy limbs | 46 | 35.3 |
| Joint tenderness | 98 | 75.3 | Dry throat | 40 | 30.7 |
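The merging rules described in Section 3.4.3 can be expressed as a simple lookup table applied before the frequency counts are taken. A minimal sketch (only a few of the raw names are shown, and the exact spellings are placeholders):

```python
# Map raw syndrome names from the literature onto the merged, standardized types
# described in Section 3.4.3 (abbreviated; placeholder spellings).
MERGE_MAP = {
    "insufficiency of kidney yin": "liver and kidney deficiency syndrome",
    "deficiency of kidney yang": "liver and kidney deficiency syndrome",
    "deficiency of the liver and kidney": "liver and kidney deficiency syndrome",
    "wind-cold dampness soaking": "wind-cold-damp obstruction syndrome",
    "cold-damp obstruction": "wind-cold-damp obstruction syndrome",
    "blood stasis blocking collaterals": "tendon and meridian stasis syndrome",
    "damp-heat soaking": "rheumatic-heat arthralgia syndrome",
    "phlegm-dampness blocking collaterals": "phlegm and blood stasis syndrome",
}

def standardize(raw_name: str) -> str:
    # Unknown names are kept as-is so they can be reviewed manually
    return MERGE_MAP.get(raw_name.strip().lower(), raw_name)

raw = ["Deficiency of kidney yang", "cold-damp obstruction", "damp-heat soaking"]
print([standardize(name) for name in raw])
```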
#### 3.4.4. Frequency of Symptoms

Table 4 lists the frequencies of the ten most common symptoms and signs among the patients.

Table 4 Symptom frequency table.

| Symptom | Frequency | Frequency (%) |
|---|---|---|
| Joint pain | 123 | 94.6 |
| Aggravation after activity | 103 | 79.2 |
| Morning stiffness | 103 | 79.2 |
| Bone rubbing | 100 | 76.9 |
| Joint tenderness | 98 | 75.3 |
| Unfavorable flexion and extension | 90 | 69.2 |
| Worse on rainy days | 82 | 63.0 |
| Swollen joints | 49 | 37.6 |
| Heavy limbs | 46 | 35.3 |
| Dry throat | 40 | 30.7 |

#### 3.4.5. Analysis of Drug Composition

A total of 129 traditional Chinese medicines were involved in the cases that met the inclusion criteria. They were analyzed in terms of the four qi, the five flavors, and the meridian tropism of each medicine. The treatment of knee osteoarthritis relies mainly on warm medicines, followed by neutral, cool, cold, and hot ones. "Four qi" refers to the four different medicinal natures a medicine may have, and "five flavors" to the five different medicinal flavors. Their frequencies of use are shown in Figure 4 [18].

Figure 4 Distribution frequency map of the four qi and five flavors of the drugs.

#### 3.4.6. Quantitative Analysis of Single Drugs

Angelica sinensis is an essential medicine for promoting blood circulation and can be combined with other blood-activating and wound-healing medicines. A study comparing an Angelica sinensis injection combined with achyranthes saponin against ibuprofen in rabbit knee arthritis showed that it can reduce chondrocyte apoptosis and improve the pathology of osteoarthritis to a certain extent. In clinical orthopedic use, licorice mostly plays the role of harmonizing the other medicines: its taste is sweet and its qi is moderate, so it can harmonize the flavors and moderate the properties of a formula. When drugs with strong or strongly biased properties are used, combining them with licorice reconciles and moderates the prescription.

The decoction or extract of Duhuo has sedative, analgesic, and hypnotic effects. Chuanxiong is known as the "qi medicine within the blood" and the "blood medicine within the qi," having the power of regulating both blood and qi; it is commonly used in orthopaedics for all kinds of acute and chronic injuries and for diseases with blood stasis and impeded qi. Its active ingredient ligustrazine can promote the secretion of anabolic factors in chondrocytes, stimulating cell proliferation and protein synthesis. The dosage comparison is shown in Figure 5.

Figure 5 Comparison of the dosage of Danggui and Duhuo.

Total glucosides of paeony can effectively improve the condition of patients with osteoarthritis and reduce the relevant serum indicators. Eucommia ulmoides strengthens the tendons and bones, dispels dampness, and relieves pain, and it is a good medicine for kidney deficiency with weak and painful waist and knees. Rehmannia glutinosa is sweet and slightly warm and enters the liver and kidney meridians, and it has a two-way regulating, blood-replenishing effect. Poria is sweet and mild in taste and neutral in nature; it is widely used clinically and is a good product for invigorating the spleen, draining dampness, and promoting water metabolism. The dosage comparison is shown in Figure 6.

Figure 6 Comparison of the dosage of white peony and tuckahoe.
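Figures 5 and 6 are simple dosage-comparison bar charts. A minimal matplotlib sketch of how such a chart could be drawn is shown below; the counts are made up for illustration and are not the study's data.

```python
# A minimal sketch of a dosage-comparison chart like Figures 5 and 6,
# using made-up values purely for illustration.
import matplotlib.pyplot as plt

herbs = ["Danggui (Angelica)", "Duhuo"]
dosage_counts = {"low dose": [12, 18], "medium dose": [30, 25], "high dose": [8, 5]}

x = range(len(herbs))
width = 0.25
for i, (label, counts) in enumerate(dosage_counts.items()):
    plt.bar([p + i * width for p in x], counts, width=width, label=label)

plt.xticks([p + width for p in x], herbs)
plt.ylabel("Number of prescriptions")
plt.legend()
plt.title("Dosage comparison (illustrative data)")
plt.show()
```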
#### 3.4.7. Medication and Prescription Rule Analysis Based on Association Rules

The association rule algorithm is a rule-based, unsupervised machine learning method that can discover interesting relationships in large databases by identifying rules that satisfy chosen metrics. The support count was set to 60 (a support of 40.3%), the confidence threshold was set to 0.90, and the 149 prescriptions were summarized. Eight drug pairs, involving six distinct drugs, were obtained [19]. The specific combinations are shown in Table 5.

Table 5 Analysis of medication patterns (frequency ≥ 60).

| Drug pattern | Frequency |
|---|---|
| Bull knee, Angelica | 90 |
| Bull knee, coix seed | 78 |
| Angelica, coix seed | 75 |
| Bull knee, licorice | 72 |
| Angelica, licorice | 70 |
| Coix seed, licorice | 66 |
| Bull knee, pawpaw | 63 |
| Bull knee, Tung leather | 60 |

"Rule analysis" is based on the association rules. Figure 7 compares the analysis results obtained with different support thresholds (32.6% in the left panel and 47.4% in the right panel).

Figure 7 Comparison of different support levels.
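As an illustration of the association-rule analysis in Section 3.4.7, the following sketch counts pair supports over prescriptions represented as sets of herbs and reports pairs that pass support and confidence thresholds. The prescriptions and thresholds shown are placeholders; the study's actual mining software is not reproduced here.

```python
# Pairwise association mining sketch: count how often each herb pair occurs
# together, then report rules whose support and confidence pass the thresholds.
from itertools import combinations
from collections import Counter

prescriptions = [
    {"Achyranthes", "Angelica", "Coix seed", "Licorice"},
    {"Achyranthes", "Coix seed", "Pawpaw"},
    {"Angelica", "Licorice", "Eucommia"},
]

MIN_SUPPORT_COUNT = 2      # the paper uses 60 (about 40.3% of 149 prescriptions)
MIN_CONFIDENCE = 0.90

item_counts = Counter(herb for p in prescriptions for herb in p)
pair_counts = Counter(frozenset(pair) for p in prescriptions
                      for pair in combinations(sorted(p), 2))

for pair, count in pair_counts.items():
    if count < MIN_SUPPORT_COUNT:
        continue
    a, b = tuple(pair)
    for lhs, rhs in ((a, b), (b, a)):
        confidence = count / item_counts[lhs]   # confidence of the rule lhs -> rhs
        if confidence >= MIN_CONFIDENCE:
            print(f"{lhs} -> {rhs}: support={count}, confidence={confidence:.2f}")
```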
#### 3.4.8. Analysis of Formula Composition Based on the Entropy Method

(1) Inter-Drug Correlation Analysis Based on the Improved Mutual Information Method. Entropy is a measure of uncertainty: the greater the amount of information, the smaller the uncertainty and the entropy; the smaller the amount of information, the greater the uncertainty and the entropy. After the correlation and penalty parameters were set, cluster analysis was started, yielding the correlations between the drugs; the five most correlated pairs are shown in Table 6.

Table 6 Inter-drug correlation analysis.

| Drug 1 | Drug 2 | Correlation coefficient |
|---|---|---|
| Vaccariae Semen | Chuanxiong Rhizoma | 0.1665 |
| Vaccariae Semen | Coix seed | 0.1618 |
| Pawpaw | Coix seed | 0.1168 |
| Vaccariae Semen | Astragalus | 0.1143 |
| Tuckahoe | Astragalus | 0.1045 |

Using the complex-system entropy clustering method, some core drug combinations can be obtained and displayed as a network, as shown in Figure 8.

Figure 8 Core combinations.

(2) New Formula Analysis Based on Unsupervised Entropy Hierarchical Clustering. Through unsupervised entropy hierarchical clustering, new candidate combinations are further mined on the basis of the core combinations, as shown in Figure 9.

Figure 9 New formula combinations.
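The inter-drug correlation analysis in Section 3.4.8 is based on an improved mutual-information method. As a rough approximation of the idea (not the exact algorithm used by the prescription-analysis software), mutual information between binary drug-occurrence vectors can be estimated as follows; the occurrence matrix is hypothetical.

```python
# Estimate pairwise mutual information between herbs, each represented as a
# 0/1 occurrence vector over prescriptions. Illustrative data only.
import numpy as np
from math import log2

def mutual_information(x: np.ndarray, y: np.ndarray) -> float:
    """Mutual information (bits) between two binary occurrence vectors."""
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            p_xy = np.mean((x == a) & (y == b))
            p_x, p_y = np.mean(x == a), np.mean(y == b)
            if p_xy > 0:
                mi += p_xy * log2(p_xy / (p_x * p_y))
    return mi

# Rows = prescriptions, columns = herbs (hypothetical occurrence matrix)
occurrence = np.array([[1, 1, 0],
                       [1, 0, 1],
                       [1, 1, 1],
                       [0, 1, 0]])
herbs = ["Vaccariae Semen", "Chuanxiong Rhizoma", "Coix seed"]

for i in range(len(herbs)):
    for j in range(i + 1, len(herbs)):
        mi = mutual_information(occurrence[:, i], occurrence[:, j])
        print(f"{herbs[i]} - {herbs[j]}: MI = {mi:.4f}")
```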
### 3.5. Results

TCM clinical syndrome types are the internal basis of a disease: they are the pathological generalization of the pathogenic factors, the pathological nature, the location of the lesion, and the pathological trend at a certain stage of the disease, and they are the theoretical basis of treatment based on syndrome differentiation. Clinically, only when the TCM syndrome type has been determined can syndrome differentiation and treatment be applied. However, it is difficult to unify syndrome-type standards in clinical practice, so it is essential to study TCM syndrome types in an objective, standardized, and normalized manner. In this study, DM technology improved the research efficiency by 38%.

According to the DM results, the statistical analysis shows that most patients have compound syndromes mixing deficiency and excess, and single syndromes occur less frequently than compound ones. Compound types that include liver-kidney deficiency syndrome account for the largest proportion, and most single syndrome types can combine with liver-kidney deficiency syndrome. A review of the relevant textbooks, literature, and diagnostic criteria for the clinical syndrome types of knee osteoarthritis found that most of them describe single syndrome types. Combined with the etiology, pathogenesis, and epidemiological characteristics of knee osteoarthritis, the statistical results of this study indicate that compound syndrome types are in fact the main types, which is consistent with actual clinical syndrome differentiation. The DM and statistical results further show that most syndrome types include liver and kidney deficiency syndrome, which is also consistent with clinical differentiation and treatment. A relatively standard and normalized characterization of these compound syndrome types is therefore of far-reaching significance for guiding clinical practice and improving both the theoretical level of syndrome research and clinical efficacy.

## 4. Conclusions

DM technology has become an important part of the technical innovation behind TCM modernization over the past decade and will greatly promote the progress of TCM modernization research and the improvement of its academic level. DM technology is the core of knowledge discovery: it is a process of extracting valuable knowledge from data.

This article applies DM technology to the clinical literature of the past decade on the prevention and treatment of knee osteoarthritis, sorting, summarizing, and analyzing it. Using the stated treatment principles, the classification of clinical syndrome types and the weight of each syndrome type were inferred from their frequency of occurrence. However, because the number of samples did not meet the ideal requirements, the randomness of the sample is insufficient, and, limited by time and funding, no large-scale data collection and sorting was performed. By using DM technology to summarize the experience of senior TCM practitioners, an objective and standardized clinical syndrome system can eventually be established. The results of this study are consistent with clinical syndrome differentiation and treatment and can guide clinical application. Nevertheless, the samples prepared in this work are not large enough, and the data processing and classification are not fully accurate. In subsequent research we will refine the experimental design, further expand the database, and incorporate patients' physical and laboratory examinations, so that the analysis results will be more scientific and credible.

---

*Source: 1001686-2022-08-16.xml*
# Deconstruction of the Prevention of Knee Osteoarthritis by Swimming Based on Data Mining Technology

**Authors:** Jianxia Yin; Qing Li; Yao Song
**Journal:** BioMed Research International (2022)
**Category:** Medical & Health Sciences
**Publisher:** Hindawi
**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)
**DOI:** 10.1155/2022/1001686
---

## Abstract

With the continuous development of big data and the steady improvement of living standards, increasing attention is being paid to physical health, and swimming is an effective sport for preventing arthritis. This paper analyzes the prevention of arthritis by swimming. Retrieving the clinical literature on the traditional Chinese medicine (TCM) internal treatment of knee osteoarthritis by traditional means yields a large amount of disordered data and requires considerable manpower and material resources to sort and summarize; this is where data mining (DM) technology comes into play. In this paper, the relevant information from the literature that meets the requirements is entered into an Excel database, and the TCM syndrome types of knee osteoarthritis are summarized through sorting and analysis. DM technology is then used to carry out frequency and prescription statistics, to summarize the distribution characteristics of the TCM syndrome types of knee osteoarthritis and the weight of each syndrome type, and to make a preliminary discussion. Finally, it is concluded that TCM research methods offer good approaches to arthritis prevention. DM technology has been applied increasingly to all aspects of TCM; in this study it improved the research efficiency by 38% and achieved good results, and it will play an even greater role in advancing research on TCM syndromes.

---

## Body

## 1. Introduction

In today's era of a rapidly developing knowledge economy, with the rapid advance of information industrialization and database technology, all walks of life face a rapid increase in the amount of data produced in practice, and much important and useful information is hidden behind this surge of data. There is an urgent need for techniques that "remove the rough and keep the fine" and "remove the false and keep the true" in order to analyze these data systematically and comprehensively, and DM technology arose from this need. A big data platform is built on a distributed storage system and a distributed computing system. The distributed system is composed of inexpensive, cost-effective machines, and the cluster can be expanded dynamically by adding cheap PCs, which improves data storage capacity and processing efficiency while saving resources.
Therefore, the use of DM technology to comprehensively analyze, convert it into useful knowledge, explore the laws contained therein, and accelerate the utilization and dissemination of TCM informatization has become the key to the innovation and development of TCM.Swimming has a preventive effect on arthritis, and a small proportion of patients fail to achieve satisfactory clinical outcomes after knee arthroplasty, which may suggest that existing postoperative rehabilitation models may not be the most effective. Vadher et al. conducted the Post-Knee Replacement Community Rehabilitation Trial, which then evaluated the effects of a new community-based rehabilitation program after knee replacement compared with usual care [1]. Wang et al. have conducted several clinical studies to evaluate the effect of neuromuscular exercise therapy on joint stability in patients with knee osteoarthritis [2]. Li et al., who aimed to systematically evaluate the effect of motor imagery on improvement in functional performance in patients with total knee arthroplasty, included randomized controlled trials evaluating the effect of motor imagery on motor imagery [3]. These articles have well explained the importance of protecting the knee, but they have not been studied on the basis of swimming movement under DM and have certain limitations.DM technology is a process of extracting some relatively secret but potentially useful information from a large amount of data. Wang et al. excavated the relevant knowledge of extraction parameters from the historical data of the extraction process of traditional Chinese medicine and used it to guide the technicians to select the influencing factors of the orthogonal test and the level of each factor [4]. Using the theoretical basis of 5G and association analysis data mining, Li et al. designed a data model of tennis technical offensive tactics and association rules, which can calculate the distribution rate of certain methods [5]. Many heuristics have been proposed before to build near-optimal decision trees. However, most of them have the disadvantage that only local optima can be obtained. To solve the above problems, Wang et al. proposed a new algorithm with a new segmentation criterion and a new decision tree construction method [6]. Andrew recorded weather events and the resulting road surface conditions during preprocessing and during subsequent events using visual assessments and limited road grip tester assessments. In addition, he conducted extensive laboratory research to complement fieldwork. The combined findings form a decision tree to aid in operational planning and preprocessing [7]. Baneshi et al. proposed a tree-based model to assess the impact of different religious dimensions based on risk factors [8]. Although the above scholars have achieved some practical results, there are still some targeted researches; so, it is necessary to further improve.In this paper, DM technology is used to analyze the clinical literature of knee osteoarthritis. Through bold innovation and exploration, common single and compound syndrome types in clinical practice were obtained in this study, but it is still dominated by compound syndrome types. Most of the syndrome types of this disease have the compound syndrome type of liver and kidney deficiency syndrome, which will provide a theoretical basis for syndrome differentiation and treatment, and improve the clinical efficacy of knee osteoarthritis. 
The novelty of this article is that it analyzes and studies the clinical syndromes of knee osteoarthritis by relying on the existing clinical literature, without the support of new clinical research, and strives to explore the practical value of that literature so that it can be applied more reasonably in clinical practice.

## 2. DM Technology

### 2.1. Similarity Calculation Method in the KNN Algorithm

The KNN algorithm computes the similarity between the target object and every training object, sorts the training objects by similarity, and extracts the first K of them. The similarities between these K objects and the target object are used as weights in a weighted sum over categories, and the result is finally normalized by the total similarity of the top K objects [9].

#### 2.1.1. Euclidean Distance Method

$$M(m_a, m_b) = \sqrt{\frac{1}{X}\sum_{k=1}^{X}\left(n_{ak}-n_{bk}\right)^{2}}. \quad (1)$$

In the formula, $m_a$ is the feature vector of one text, $m_b$ is the feature vector of another text, $X$ is the dimension of the feature vectors, and $n_{ak}$, $n_{bk}$ denote their $k$-th components. Because this measure treats the differences between different attributes of the samples as equal, it sometimes does not meet actual requirements, and it does not take the influence of the overall variation on the distance into account.

#### 2.1.2. Angle Cosine Method

$$\mathrm{Sim}(m_a, m_b) = \frac{\sum_{k=1}^{X} n_{ak} \times n_{bk}}{\sqrt{\sum_{k=1}^{X} n_{ak}^{2}}\,\sqrt{\sum_{k=1}^{X} n_{bk}^{2}}}. \quad (2)$$

The meaning of each parameter is the same as in the Euclidean distance method.

#### 2.1.3. Weight Calculation Formula

$$q(y, v_a) = \sum_{m_b \in \mathrm{KNN}(y)} \mathrm{Sim}(y, m_b)\, q(m_b, v_a). \quad (3)$$

The K nearest training texts contribute to the weight of each category in turn, and the test text is assigned to the category with the largest weight. Here $\mathrm{Sim}(y, m_b)$ is the similarity between the text to be classified and a training text, and $q(m_b, v_a)$ is the category attribute function: when the text $m_b$ belongs to category $v_a$, $q(m_b, v_a) = 1$; otherwise, $q(m_b, v_a) = 0$.

#### 2.1.4. Density Cropping Based on Clustering and Denoising in the KNN Algorithm

The traditional KNN algorithm still has many shortcomings when dealing with big data: all training samples must be stored, and the amount of calculation is large, so errors easily arise during calculation. Here, an improved KNN algorithm based on clustering, denoising, and density clipping is therefore proposed. As can be seen from Figure 1, even for samples of the same category, because of the differences between samples and because each sample represents the category to a different degree, the similarity between samples varies widely.

Figure 1 Schematic diagram of removing noisy text from the training set during clustering.
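A compact sketch of the weighted KNN vote in Equations (2) and (3), using cosine similarity, is given below. The toy vectors and labels are placeholders rather than real text features, and the clustering-based density clipping of Section 2.1.4 is not reproduced.

```python
# Weighted KNN vote following Eqs. (2)-(3): cosine similarity to every training
# text, keep the top K, and sum similarities per category. Toy data only.
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def knn_classify(y: np.ndarray, train_X: np.ndarray, train_labels: list, k: int = 3) -> str:
    sims = np.array([cosine_sim(y, x) for x in train_X])
    top_k = np.argsort(sims)[-k:]                 # indices of the K most similar texts
    weights = {}                                  # q(y, v_a) for every category v_a
    for idx in top_k:
        label = train_labels[idx]
        weights[label] = weights.get(label, 0.0) + sims[idx]
    total = sum(weights.values())                 # normalize by the total similarity
    return max(weights, key=lambda c: weights[c] / total)

train_X = np.array([[1.0, 0.2, 0.0], [0.9, 0.1, 0.1], [0.0, 0.8, 1.0], [0.1, 1.0, 0.9]])
train_labels = ["sports", "sports", "medicine", "medicine"]
print(knn_classify(np.array([0.2, 0.9, 1.0]), train_X, train_labels, k=3))
```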
### 2.2. Naive Bayes

$$P(Y \mid X) = \frac{P(Y)\,P(X \mid Y)}{P(X)}. \quad (4)$$

$P(Y \mid X)$ is the probability that event $Y$ occurs given that event $X$ has occurred, $P(X \mid Y)$ is the probability that event $X$ occurs given that event $Y$ has occurred, and $P(X)$ and $P(Y)$ are the probabilities of events $X$ and $Y$, respectively, where $X$ and $Y$ are treated as two independent events [10].

Assume that the training sample set $M$ contains $m$ classified samples, that a sample with attributes $N = (n_1, n_2, \dots, n_k)$ belongs to class $c$, and that $n_i$ is the $i$-th attribute of the sample. According to Bayes' theorem,

$$P(c \mid N) = \frac{P(N \mid c)\,P(c)}{P(N)}. \quad (5)$$

For an unknown sample $N$, the conditional probability of each class is calculated, and the class with the maximum probability is taken as the class of the sample. Using the attribute-independence assumption, $P(N \mid c)$ can be transformed into

$$P(N \mid c) = \prod_{i=1}^{k} P(n_i \mid c). \quad (6)$$

Therefore, the Bayesian classification rule can be written as

$$h(N) = \arg\max_{c} P(c) \prod_{i=1}^{k} P(n_i \mid c). \quad (7)$$
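A small sketch of the classification rule in Equation (7) for categorical attributes follows. Laplace smoothing is added for robustness (the smoothing is an addition for illustration, not part of the formula above), and the tiny training table is hypothetical.

```python
# Naive Bayes for categorical attributes following Eq. (7), with Laplace
# smoothing. Toy training data only.
from collections import Counter, defaultdict

def train_nb(samples, labels):
    class_counts = Counter(labels)
    attr_counts = defaultdict(Counter)        # (class, attribute index) -> value counts
    for x, c in zip(samples, labels):
        for i, value in enumerate(x):
            attr_counts[(c, i)][value] += 1
    return class_counts, attr_counts

def predict_nb(x, class_counts, attr_counts):
    total = sum(class_counts.values())
    best_class, best_score = None, 0.0
    for c, n_c in class_counts.items():
        score = n_c / total                   # P(c)
        for i, value in enumerate(x):         # product of P(n_i | c), smoothed
            counts = attr_counts[(c, i)]
            score *= (counts[value] + 1) / (n_c + len(counts) + 1)
        if best_class is None or score > best_score:
            best_class, best_score = c, score
    return best_class

samples = [("warm", "sweet"), ("warm", "bitter"), ("cold", "bitter"), ("cold", "sweet")]
labels = ["tonifying", "tonifying", "clearing", "clearing"]
model = train_nb(samples, labels)
print(predict_nb(("warm", "sweet"), *model))
```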
### 2.3. Logistic Regression

The regression coefficient is a parameter that represents the influence of the independent variable $x$ on the dependent variable $y$ in the regression equation; the larger the regression coefficient, the greater the influence of $x$ on $y$. The core of logistic regression [11] is the sigmoid function:

$$d = \frac{1}{1 + e^{-x}}. \quad (8)$$

Figure 2 shows the graph of the sigmoid function, which maps the value $x$ to a value $d$ between 0 and 1. Here $x$ is the regression function: with regression coefficients $\delta$ and input $M$, $x = \delta^{T} M$, and substituting into the formula above gives

$$d = \frac{1}{1 + e^{-\delta^{T} M}}. \quad (9)$$

Figure 2 Sigmoid function graph.

This can be transformed into

$$\ln\frac{d}{1 - d} = \delta^{T} M. \quad (10)$$

If $d$ is the probability that a sample is $M$, then $1 - d$ is the opposite probability, and $\ln\bigl(d/(1-d)\bigr)$ is the log-odds (relative probability) of $M$.
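A minimal numerical sketch of Equations (8) and (9): the linear score $\delta^{T}M$ is pushed through the sigmoid to obtain a probability, and samples are classified by thresholding at 0.5. The coefficients are made up for illustration.

```python
# Sigmoid and logistic-regression prediction following Eqs. (8)-(9).
import numpy as np

def sigmoid(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))

delta = np.array([0.8, -1.2, 0.3])            # hypothetical regression coefficients
M = np.array([[1.0, 0.5, 2.0],                # rows = samples, columns = features
              [0.2, 1.5, 0.1]])

scores = M @ delta                             # x = delta^T M for every sample
probabilities = sigmoid(scores)                # d in Eq. (9)
predictions = (probabilities >= 0.5).astype(int)
print(probabilities, predictions)
```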
### 2.4. Support Vector Machines

Support vector machines can be used not only for classification tasks but also for regression tasks. For a binary classification problem, a hyperplane must be found between the two classes that separates them [12]. The equation of the hyperplane is

$$\omega^{T} x + a = 0, \quad (11)$$

where $\omega = (\omega_1, \omega_2, \dots, \omega_k)$ is the normal vector of the hyperplane, representing its direction, and $a$ is the displacement, representing the distance between the hyperplane and the origin. The distance from a sample point to the hyperplane is then

$$r = \frac{\lvert \omega^{T} x + a \rvert}{\lVert \omega \rVert}. \quad (12)$$

If the hyperplane separates the positive and negative samples, then

$$\omega^{T} x_i + a \geq +1 \ \text{ for } y_i = +1, \qquad \omega^{T} x_i + a \leq -1 \ \text{ for } y_i = -1. \quad (13)$$

The sum of the distances from two support vectors of different classes to the hyperplane is

$$R = \frac{2}{\lVert \omega \rVert}, \quad (14)$$

which is known as the margin. Finding the hyperplane that maximizes the margin, that is, maximizing $R$ subject to condition (13), can be written as

$$\max_{\omega, a} \frac{2}{\lVert \omega \rVert} \quad \text{s.t. } y_i\left(\omega^{T} x_i + a\right) \geq 1, \ i = 1, 2, \dots, k. \quad (15)$$

Maximizing $1/\lVert \omega \rVert$ is equivalent to minimizing $\lVert \omega \rVert^{2}$, so the problem becomes

$$\min_{\omega, a} \frac{1}{2}\lVert \omega \rVert^{2} \quad \text{s.t. } y_i\left(\omega^{T} x_i + a\right) \geq 1, \ i = 1, 2, \dots, k. \quad (16)$$

### 2.5. Decision Tree

The decision tree algorithm is an instance-based inductive learning method; it builds a model with a tree structure from the root through the branches. ID3 is a classification algorithm for decision tree learning. The ID3 algorithm splits mainly according to the size of the information gain and then constructs the decision tree, and it is suitable for discrete data. Choosing which attribute to test at each node is the core of ID3; its task is to minimize the number of nodes in the tree, because the smaller the number of nodes, the higher the recognition rate [13].

Assume that the training set is $M$, that the proportion of the $k$-th class of samples is $p_k$, and that the discrete attribute $b$ has $V$ possible values $b^{1}, b^{2}, \dots, b^{V}$, where the subset of training samples taking value $b^{v}$ on attribute $b$ is denoted $M^{v}$. The first concept is information entropy, an indicator of the purity of a sample set, defined as

$$\mathrm{Ent}(M) = -\sum_{k} p_k \log_2 p_k. \quad (17)$$

The smaller the information entropy, the higher the purity. The information gain is the gain obtained when attribute $b$ is used to split the sample set $M$:

$$\mathrm{Gain}(M, b) = \mathrm{Ent}(M) - \sum_{v=1}^{V} \frac{\lvert M^{v} \rvert}{\lvert M \rvert}\,\mathrm{Ent}(M^{v}). \quad (18)$$

The gain ratio is defined as

$$\mathrm{Gain\_ratio}(M, b) = \frac{\mathrm{Gain}(M, b)}{\mathrm{IV}(b)}, \quad (19)$$

where

$$\mathrm{IV}(b) = -\sum_{v=1}^{V} \frac{\lvert M^{v} \rvert}{\lvert M \rvert} \log_2 \frac{\lvert M^{v} \rvert}{\lvert M \rvert}. \quad (20)$$

For a given attribute this is a fixed value, and it grows with the number of possible values of the attribute, which effectively avoids attributes with many values being preferentially selected.
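A short sketch of Equations (17) and (18): the entropy of a label set and the information gain of splitting on a single categorical attribute, which is the quantity ID3 maximizes when choosing a node. The toy attribute and outcomes are hypothetical.

```python
# Entropy and information gain following Eqs. (17)-(18). Toy data only.
from collections import Counter
from math import log2

def entropy(labels):
    total = len(labels)
    return -sum((n / total) * log2(n / total) for n in Counter(labels).values())

def information_gain(attribute_values, labels):
    total = len(labels)
    gain = entropy(labels)
    for value in set(attribute_values):
        subset = [lab for val, lab in zip(attribute_values, labels) if val == value]
        gain -= len(subset) / total * entropy(subset)
    return gain

# Hypothetical attribute "medicinal nature" against a binary outcome
nature = ["warm", "warm", "cold", "cool", "warm", "cold"]
outcome = ["effective", "effective", "ineffective", "ineffective", "effective", "effective"]
print(entropy(outcome), information_gain(nature, outcome))
```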
## 2.3. Logistic Regression

The regression coefficient is a parameter that represents the influence of the independent variable $x$ on the dependent variable $y$ in the regression equation; the larger the regression coefficient, the greater the influence of $x$ on $y$. The sigmoid function, the core of logistic regression [11], is calculated as follows:

(8) $d = \dfrac{1}{1 + e^{-x}}$.

Figure 2 is a graph of the sigmoid function, which maps the value of $x$ to a value of $d$ between 0 and 1. Here, $x$ is a regression function: with regression coefficients $\delta$ and input $M$, we have $x = \delta^{T}M$, and substituting this into the formula above gives

(9) $d = \dfrac{1}{1 + e^{-\delta^{T}M}}$.

Figure 2: Sigmoid function graph.

It can be transformed into

(10) $\ln\dfrac{d}{1 - d} = \delta^{T}M$.

If $d$ is the probability of a sample $M$, then $1 - d$ is the probability of the opposite outcome, and $\ln\bigl(d/(1-d)\bigr)$ is the relative probability (log odds) of $M$.

## 2.4. Support Vector Machines

Support vector machines can be used not only for classification tasks but also for regression tasks. For binary classification problems, it is necessary to find a hyperplane between the two classes that separates them [12]. The hyperplane is described by the equation

(11) $\omega^{T}x + a = 0$.

Here, $\omega = (\omega_1, \omega_2, \dots, \omega_k)$ is the normal vector of the hyperplane, representing the direction of the plane, and $a$ is the displacement, representing the distance between the hyperplane and the origin. The distance from a point in the sample to the hyperplane is then

(12) $r = \dfrac{\left|\omega^{T}x + a\right|}{\lVert\omega\rVert}$.

If the hyperplane can separate the positive and negative samples, then

(13) $\omega^{T}x_i + a \ge +1$ for $y_i = +1$, and $\omega^{T}x_i + a \le -1$ for $y_i = -1$.

The sum of the distances from two support vectors of different classes to the hyperplane is

(14) $R = \dfrac{2}{\lVert\omega\rVert}$,

which is known as the "margin." The goal is to find the hyperplane that maximizes the margin, that is, to maximize $R$ under the condition of satisfying Equation (13):

(15) $\max_{\omega, a} \dfrac{2}{\lVert\omega\rVert} \quad \text{s.t. } y_i\bigl(\omega^{T}x_i + a\bigr) \ge 1,\ i = 1, 2, \dots, k$.

Maximizing $1/\lVert\omega\rVert$ is equivalent to minimizing $\lVert\omega\rVert^{2}$, so the formula above can be changed to

(16) $\min_{\omega, a} \dfrac{1}{2}\lVert\omega\rVert^{2} \quad \text{s.t. } y_i\bigl(\omega^{T}x_i + a\bigr) \ge 1,\ i = 1, 2, \dots, k$.

## 2.5. Decision Tree

The decision tree algorithm is an instance-based inductive learning method. It is a modeling method that uses a tree structure from the root through the branches to the leaves. ID3 is a classification algorithm for decision tree learning. The ID3 algorithm splits mainly according to the size of the information gain and then constructs a decision tree, which is suitable for discrete data. Which attribute is selected at each node of the decision tree is the core of the ID3 algorithm. Its task is to minimize the number of nodes in the decision tree: the smaller the number of nodes, the higher the recognition rate [13].

Assume that the training set is $M$, the proportion of the $i$-th class of samples is $p_i$, and the discrete attribute $b$ has $V$ values $b_1, b_2, \dots, b_V$, where the subset of training samples taking the value $b_v$ on attribute $b$ is denoted $M^{v}$. The first concept is information entropy, an indicator that measures the purity of the sample set, defined as

(17) $\mathrm{Ent}(M) = -\sum_{i=1}^{k} p_i \log_2 p_i$.

The smaller the information entropy, the higher the purity. The information gain is the gain obtained when attribute $b$ is used to divide the sample set $M$, expressed as

(18) $\mathrm{Gain}(M, b) = \mathrm{Ent}(M) - \sum_{v=1}^{V} \dfrac{\lvert M^{v}\rvert}{\lvert M\rvert}\,\mathrm{Ent}(M^{v})$.

The gain ratio is defined as

(19) $\mathrm{Gain\_ratio}(M, b) = \dfrac{\mathrm{Gain}(M, b)}{\mathrm{IV}(b)}$,

where

(20) $\mathrm{IV}(b) = -\sum_{v=1}^{V} \dfrac{\lvert M^{v}\rvert}{\lvert M\rvert} \log_2 \dfrac{\lvert M^{v}\rvert}{\lvert M\rvert}$.

This is the intrinsic value of attribute $b$. Its characteristic is that the more possible values the attribute has, the larger this value becomes, which effectively avoids preferentially selecting attributes with more values.
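The entropy, information gain, and gain ratio of Equations (17)–(20) can be computed directly. The following Python sketch is illustrative only; the attribute values and labels are hypothetical.

```python
import math
from collections import Counter

def entropy(labels):
    # Equation (17): Ent(M) = -sum_i p_i * log2(p_i)
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gain_and_ratio(values, labels):
    # Equations (18)-(20): split the sample set M by attribute b, then compute
    # the information gain, the intrinsic value IV(b), and the gain ratio.
    n = len(labels)
    subsets = {}
    for v, y in zip(values, labels):
        subsets.setdefault(v, []).append(y)
    gain = entropy(labels) - sum(len(s) / n * entropy(s) for s in subsets.values())
    iv = -sum(len(s) / n * math.log2(len(s) / n) for s in subsets.values())
    return gain, (gain / iv if iv > 0 else 0.0)

# Hypothetical attribute "age group" against a binary syndrome label
ages = ["45-60", "45-60", "over 60", "over 60", "30-44", "30-44"]
labels = ["deficiency", "deficiency", "deficiency", "excess", "excess", "excess"]
print(gain_and_ratio(ages, labels))
```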
## 3. Deconstruction of the Effect of Swimming on Knee Osteoarthritis Based on DM Technology

### 3.1. Overview

Knee osteoarthritis is a relatively common chronic joint disease characterized by noninflammatory changes in the articular cartilage and bone hyperplasia at the joint margins. It is more common in middle-aged and elderly patients, and the prevalence in women is twice that in men. The main clinical manifestations are knee joint swelling and pain, morning stiffness, a locking sensation, and poor mobility. In the later stages it may progress to muscle atrophy and even varus deformity of the knee joint, eventually leaving the patient's limbs disabled [14]. The disease develops insidiously, greatly harms people's health, and has become the leading cause of motor impairment and chronic disability in the middle-aged and elderly.

Traditional Chinese medicine offers many methods for treating knee osteoarthritis, with few side effects and low medical expenses, so it has the advantage of being broadly applicable to different types of people and to different syndrome types at different stages. However, current research on the standardization of TCM syndromes for knee osteoarthritis shows that there is still no objective, unified standard for syndrome differentiation. This is because the diagnostic criteria for the existing syndrome types are mainly derived from collective research and discussion by groups of experts, who may not fully agree on the etiology and pathogenesis of knee osteoarthritis, and there are still many differing opinions on the syndrome differentiation of the disease: the criteria rest on personal experience reports, on the statistical results of questionnaires in particular regions, or on special prescriptions based on disease differentiation, resulting in an extremely confusing clinical syndrome classification. This makes the implementation and promotion of many effective treatment methods impossible, and the progress of traditional Chinese medicine in the treatment of knee osteoarthritis is also greatly limited [15].

It can be seen that it is essential to standardize the TCM syndrome types of knee osteoarthritis, which will provide a strong theoretical basis for treating knee osteoarthritis on the basis of syndrome differentiation. This helps guide clinical medication and active prevention and has an important and far-reaching impact on relieving patients' suffering, improving personal quality of life, and promoting social harmony.

### 3.2. Prevention of Arthritis by Swimming

When patients with knee osteoarthritis swim, the force of the water has a clear therapeutic effect on the joints and helps relax them. At the same time, swimming reduces the pressure on the joints, so the muscles are fully trained, which relieves inflammation and promotes the recovery of various functions to a certain extent. Because synovial fluid nourishes the articular cartilage, the intermittent loading of the cartilage during exercise accelerates the circulation of synovial fluid, so the condition of the arthritic joint can be relieved.

The horizontal position of the body during swimming significantly reduces the burden on the spine, relieves pain and inflammation, and has a physical-therapy effect. Dorsiflexing the head from time to time during freestyle and breaststroke stretches the spine of people who work with their heads down for long periods. For the elderly, moderate swimming can also prevent osteoporosis, reduce the risk of fractures, improve the function of multiple organs such as the heart and lungs, and improve immunity.

### 3.3. Resolve Methods

(1) This article determines the clinical literature to be studied according to the research scope, the inclusion criteria, and the exclusion criteria.

(2) From the valid clinical literature, general information such as literature titles, authors, research objects, syndrome types, prescription names, treatment principles, and prescriptions is summarized and entered into an Excel sheet in turn to establish a relevant information database.

(3) Data preprocessing is performed for TCM syndrome types that are not clearly stated in the valid clinical literature on knee osteoarthritis.
However, where the treatment principles of the prescriptions are clearly stated, the TCM syndrome types are deduced through the reverse syndrome differentiation process of "testing syndromes by method." Testing by method means extracting certain elements from a given model and then giving the corresponding prescription; after checking the indicators, the differences before and after treatment and between groups are compared according to a given method. At the same time, the TCM syndrome types are merged and standardized so that, finally, a unified and standardized set of TCM syndrome types is formed.

(4) The software is used to perform statistical analysis, with DM technology, on the general data (average age, gender) and the TCM syndrome types of the research objects in the database. The frequency statistical analysis method is used to summarize the average age and gender of the research subjects in the incidence of knee osteoarthritis and the percentage of common clinical TCM syndrome types in the total number of cases, and corresponding charts are made.

(5) According to the statistical results of the above data, it is possible to understand the average age and gender of patients with knee osteoarthritis, the distribution of common clinical syndrome types, and the weight of each syndrome type. A preliminary discussion is given on establishing a more normative, objective, and standard clinical syndrome classification system. The specific process is shown in Figure 3.

Figure 3: Wiring diagram.

### 3.4. Deconstruction

#### 3.4.1. Age

The mean ages of subjects with knee osteoarthritis were collated from a database of 1270 valid articles. Mining and analyzing these data, it can be concluded from the following table that middle-aged and elderly people over 45 years old are the high-risk group for knee osteoarthritis. Table 1 gives the distribution of the relevant information.

Table 1: Relationship between patient age and knee osteoarthritis.

| Average age | Frequency | Percentage |
| --- | --- | --- |
| 30-44 | 28 | 2.28% |
| 45-60 | 782 | 61.57% |
| Over 60 | 460 | 36.15% |

#### 3.4.2. Gender

From the database of 1270 valid articles, 120 patients were extracted for analysis. Among them, there were 20 male patients and 100 female patients, a male-to-female ratio of 1:5. It can be seen that females account for a larger proportion of arthritis cases.
#### 3.4.3. Frequency of Syndrome Types

First, the names of the 36 related clinical syndrome types are standardized by statistical analysis [16]. Statistical analysis is then used to merge the 36 clinical syndrome names, for example:

- insufficiency of kidney yin, deficiency of kidney yang, deficiency of yin and fire, deficiency of the liver and kidney, deficiency of kidney yuan, and deficiency of both yin and yang of the kidney are merged into liver-kidney deficiency syndrome;
- wind-cold-damp soaking, cold-damp obstruction, and rheumatic obstruction are merged into wind-cold-damp obstruction syndrome;
- collateral obstruction, blood stasis blocking the collaterals, meridian blockage, and unfavorable meridians are merged into tendon and meridian stasis syndrome;
- damp-heat soaking, rheumatic-heat internal accumulation, and damp-heat resistance are merged into rheumatic-heat arthralgia syndrome;
- phlegm and blood stasis blocking the collaterals, cold-dampness with phlegm and blood stasis, phlegm-dampness blocking the collaterals, phlegm-dampness and blood stasis, phlegm-dampness cold coagulation, dampness evil blocking, and phlegm-damp coagulation are merged into phlegm and blood stasis syndrome;
- deficiency of the liver and kidney with stagnation of the tendons and veins, or with kidney deficiency and blood stasis, becomes liver-kidney deficiency with tendon and vein stasis syndrome [17].

Knee osteoarthritis has a long course, and its pathogenesis is complex, often involving a combination of several patterns. Its syndrome differentiation is mainly divided into four types: damp-heat obstruction syndrome, blood stasis obstruction syndrome, liver-kidney yin deficiency syndrome, and wind-cold dampness syndrome. From the database of 1270 valid articles, 100 patients were extracted for analysis; the syndrome differentiation results are shown in Table 2.

Table 2: Frequency table of TCM syndrome types.

| Syndrome | Frequency | Frequency (%) |
| --- | --- | --- |
| Damp-heat paralysis syndrome | 32 | 32% |
| Liver and kidney deficiency syndrome | 28 | 28% |
| Wind-cold dampness syndrome | 26 | 26% |
| Blood stasis syndrome | 14 | 14% |

Through descriptive frequency analysis of the DM results in Table 2, combined with relevant professional knowledge, this study further summarizes the ten common clinical syndrome types found in this study, from which the single syndrome types can be obtained. The frequencies of the compound types are listed in Table 3 from high to low.

Table 3: Frequency of compound syndrome types.

| Syndromes | Frequency | Percentage (%) |
| --- | --- | --- |
| Liver and kidney deficiency syndrome and muscle and vessel stasis syndrome | 385 | 44.77% |
| Liver-kidney deficiency syndrome and wind-cold-dampness obstruction syndrome | 258 | 30% |
| Liver and kidney deficiency syndrome combined with phlegm and blood stasis syndrome | 115 | 13.37% |
| Liver and kidney deficiency syndrome and qi stagnation and blood stasis syndrome | 102 | 11.86% |

#### 3.4.4. Frequency of Symptoms

Table 4 lists the frequencies of the 10 most common symptoms and signs among the patients.

Table 4: Symptom frequency table.

| Symptom | Frequency | Frequency (%) |
| --- | --- | --- |
| Joint pain | 123 | 94.6 |
| Aggravation after activity | 103 | 79.2 |
| Morning stiffness | 103 | 79.2 |
| Bone rubbing | 100 | 76.9 |
| Joint tenderness | 98 | 75.3 |
| Unfavorable flexion and extension | 90 | 69.2 |
| Worse on rainy days | 82 | 63.0 |
| Swollen joints | 49 | 37.6 |
| Heavy limbs | 46 | 35.3 |
| Dry throat | 40 | 30.7 |
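The frequency statistics behind Tables 1–4 reduce to simple count-and-percentage computations. As a sketch (the code itself is not part of the original study), the syndrome-type counts reported in Table 2 can be turned into percentages as follows.

```python
from collections import Counter

# Syndrome-type counts as reported in Table 2 (100 extracted cases)
counts = Counter({
    "Damp-heat paralysis syndrome": 32,
    "Liver and kidney deficiency syndrome": 28,
    "Wind-cold dampness syndrome": 26,
    "Blood stasis syndrome": 14,
})

total = sum(counts.values())
for syndrome, n in counts.most_common():
    # Percentage of each syndrome type in the total number of cases
    print(f"{syndrome}: {n} ({n / total:.0%})")
```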
#### 3.4.5. Analysis of Drug Composition

A total of 129 traditional Chinese medicines were involved in the cases that met the inclusion criteria. They are analyzed in terms of the four qi, the five flavors, the meridian tropism, and the medicine itself. The treatment of knee osteoarthritis is mainly based on warm medicines, followed by neutral, cool, cold, and hot medicines. "Four qi" generally refers to the four different medicinal properties a medicine may have; "five flavors" refers to the five different medicinal flavors. Their frequency of use is shown in Figure 4 [18].

Figure 4: Distribution frequency map of the four qi and five flavors of the drugs.

#### 3.4.6. Quantitative Analysis of Single Drugs

Angelica sinensis is an essential medicine for promoting blood circulation and can be used together with other medicines for promoting blood circulation and wound healing. A study of the effects of an Angelica sinensis injection–Achyranthes saponin group and an ibuprofen group on rabbit knee arthritis showed that it can effectively reduce chondrocyte apoptosis and improve the pathology of osteoarthritis to a certain extent. In the clinical application of orthopedics, licorice mostly plays the role of reconciling medicinal properties: its taste is sweet and its qi is harmonious, so it can reconcile the medicinal flavors and moderate medicinal properties. In the clinical use of drugs with strong or biased medicinal properties, compatibility with licorice can play a reconciling and moderating role.

The decoction or extract of Duhuo has sedative, analgesic, and hypnotic effects. Chuanxiong is known as the "qi medicine in blood" and the "blood medicine in qi," having the power of regulating blood and moving qi. It is a medicine commonly used in orthopedics for all kinds of acute and chronic injuries and for diseases with stagnant blood stasis and poor qi. Its active ingredient, ligustrazine, can promote the secretion of anabolic factors in chondrocytes, stimulating cell proliferation and protein synthesis. The corresponding comparison chart is shown in Figure 5.

Figure 5: Comparison of the dosages of Danggui and Duhuo.

Total glucosides of paeony can effectively help improve the condition of patients with osteoarthritis and reduce serum levels. Eucommia ulmoides strengthens the tendons and bones, dispels dampness, and relieves pain; it is a good medicine for treating kidney deficiency with weakness and pain of the waist and knees. Rehmannia glutinosa is sweet and slightly warm and returns to the liver and kidney meridians, suggesting that it has a two-way regulating, blood-replenishing effect. Poria is sweet and mild in taste and flat in nature; it is widely used in various clinical diseases and is a good product for invigorating the spleen, draining dampness, and promoting water metabolism. The corresponding comparison chart is shown in Figure 6.

Figure 6: Comparison of the dosages of white peony and tuckahoe.
#### 3.4.7. Medication Rule and Prescription Rule Analysis Based on Association Rules

The association rule algorithm is a rule-based machine learning method that can discover interesting relationships in large databases. Its purpose is to identify the rules appearing in the database using certain metrics, and it belongs to the unsupervised machine learning methods. The support count was set to 60 (a support of 40.3%), the confidence level was set to 0.90, and the formulas of 149 prescriptions were summarized. There are 8 drug pairs in total, involving 6 drugs [19]. The specific drug combinations are shown in Table 5.

Table 5: Analysis of medication patterns (frequency ≥ 60).

| Drug pattern | Frequency |
| --- | --- |
| Bull knee, Angelica | 90 |
| Bull knee, coix seed | 78 |
| Angelica, coix seed | 75 |
| Bull knee, licorice | 72 |
| Angelica, licorice | 70 |
| Coix seed, licorice | 66 |
| Bull knee, pawpaw | 63 |
| Bull knee, Tung leather | 60 |

"Rule analysis" is based on the association rules. Figure 7 compares the analysis results of the association rules under different support levels (a support of 32.6% in the left panel and 47.4% in the right panel).

Figure 7: Comparison of different support levels.

#### 3.4.8. Analysis of the Law of Formula Composition Based on the Entropy Method

(1) Interdrug Correlation Analysis Based on the Improved Mutual Information Method. Entropy is a measure of uncertainty: the greater the amount of information, the smaller the uncertainty and the smaller the entropy; the smaller the amount of information, the greater the uncertainty and the greater the entropy. After setting the correlation and penalty parameters, the clustering analysis is started. The correlations between the drugs can then be obtained; the top five drug pairs are shown in Table 6.

Table 6: Interdrug correlation analysis.

| Drug 1 | Drug 2 | Correlation coefficient |
| --- | --- | --- |
| Vaccariae Semen | Chuanxiong Rhizoma | 0.1665 |
| Vaccariae Semen | Coix seed | 0.1618 |
| Pawpaw | Coix seed | 0.1168 |
| Vaccariae Semen | Astragalus | 0.1143 |
| Tuckahoe | Astragalus | 0.1045 |

Using the complex system entropy clustering method, some core combinations of drugs can be obtained and displayed in a network, as shown in Figure 8.

Figure 8: Core combinations.

(2) New Prescription Analysis Based on Unsupervised Entropy Hierarchical Clustering. Through the unsupervised entropy hierarchical clustering analysis method, new combinations are further mined on the basis of the core combinations, as shown in Figure 9.

Figure 9: New prescription combinations.
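As a rough sketch of how the pairwise medication patterns in Table 5 and the thresholds above (a support count of 60 out of 149 prescriptions and a confidence of 0.90) could be computed, the following Python fragment counts herb pairs across prescriptions. The prescription list and herb names here are hypothetical placeholders, and the actual analysis software used in the study may work differently.

```python
from itertools import combinations
from collections import Counter

def pair_rules(prescriptions, min_support_count=60, min_confidence=0.90):
    # Count single herbs and herb pairs across all prescriptions, keep pairs
    # meeting the support threshold, and report rules whose confidence
    # confidence(A -> B) = support(A, B) / support(A) meets the threshold.
    single, pair = Counter(), Counter()
    for p in prescriptions:
        items = set(p)
        single.update(items)
        pair.update(frozenset(c) for c in combinations(sorted(items), 2))
    rules = []
    for ab, s_ab in pair.items():
        if s_ab < min_support_count:
            continue
        a, b = tuple(ab)
        for x, y in ((a, b), (b, a)):
            conf = s_ab / single[x]
            if conf >= min_confidence:
                rules.append((x, y, s_ab, round(conf, 2)))
    return rules

# Hypothetical prescriptions; each prescription is a list of herb names
prescriptions = [["Achyranthes", "Angelica", "Coix seed"],
                 ["Achyranthes", "Angelica", "Licorice"]] * 75
print(pair_rules(prescriptions, min_support_count=60, min_confidence=0.90))
```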
### 3.5. Results

TCM clinical syndrome types are the internal basis of diseases; they are pathological generalizations of the pathogenic factors, pathological properties, lesion locations, and pathological trends at a given stage in the development of a disease, and they are the theoretical basis for treatment based on syndrome differentiation. Clinically, only by determining the TCM syndrome type can syndrome differentiation and treatment be applied. However, it is difficult to unify the standard of syndrome types in clinical practice, so it is essential to study the TCM syndrome types in an objective, standardized, and normalized manner. DM technology improved the efficiency of this research by 38%.

According to the results of this DM, the statistical analysis shows that most of the patients have compound syndromes with a mixture of deficiency and excess, and the frequency of single syndromes is lower than that of the compound types. It can be seen that liver-kidney deficiency syndrome combined with a single syndrome type accounts for the largest proportion; most of the single syndrome types can be combined with liver-kidney deficiency syndrome. A review of relevant textbooks, literature, and diagnostic criteria for the clinical syndrome types of knee osteoarthritis found that most of them list single syndrome types. Combined with the etiology, pathogenesis, and epidemiological characteristics of knee osteoarthritis and the statistical results of this study, the compound syndrome types are shown to be the main types, which is consistent with the syndrome types actually seen in clinical syndrome differentiation.

Through the results of DM and statistical analysis, it is further concluded that the syndrome types generally include liver and kidney deficiency syndrome, which is consistent with the actual situation of clinical syndrome differentiation and treatment. A relatively standard and standardized characterization of these compound syndrome types is therefore of great and far-reaching significance for guiding clinical practice and improving the theoretical level of syndrome differentiation and the clinical efficacy.
## 4. Conclusions

DM technology has become an important part of the technological innovation of TCM modernization in the past decade and will play a great role in promoting the progress of TCM modernization research and raising its academic level. The research shows that DM technology is the core of knowledge discovery and is a process of extracting valuable knowledge from data.

This article applies DM technology to the clinical literature of the past decade on the prevention of knee osteoarthritis, sorting, summarizing, and analyzing it. Using the treatment principles behind the classification results and their weights, the classification of clinical syndrome types and the weight of each syndrome type are inferred from their frequency of occurrence. However, because the number of samples did not meet the ideal requirements, the randomness of the samples is insufficient, and the study was limited by time and funding, no large-scale data collection and sorting was carried out. By using DM technology to summarize the experience of renowned senior TCM practitioners, an objective and standardized clinical syndrome system can finally be established. This research result is consistent with clinical syndrome differentiation and treatment and can guide clinical application. However, the experimental samples prepared in this paper are not large enough, and the data processing and classification are not very accurate. Therefore, in subsequent research, we will refine the experimental design, further expand the capacity of the database, and include the physical and chemical examinations of patients, so that the analysis results will be more scientific and credible.

---

*Source: 1001686-2022-08-16.xml*
2022
# Honokiol Provides Cardioprotection from Myocardial Ischemia/Reperfusion Injury (MI/RI) by Inhibiting Mitochondrial Apoptosis via the PI3K/AKT Signaling Pathway **Authors:** Linhua Lv; Qiuhuan Kong; Zhiying Li; Ying Zhang; Bijiao Chen; Lihua Lv; Yubi Zhang **Journal:** Cardiovascular Therapeutics (2022) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2022/1001692 --- ## Abstract Background. Myocardial injury refers to a major complication that occurs in myocardial ischemia/reperfusion injury (MI/RI). Honokiol is a well-recognized active compound extracted from the traditional Chinese herb known as Magnolia officinalis and is utilized in treating different vascular diseases. This research is aimed at examining whether Honokiol might alleviate myocardial injury in an MI/RI model. Methods. Seventy-eight male C57BL/6 mice were categorized randomly into three cohorts including the Sham operation (Sham) cohort, the MI/RI cohort (Con), and the Honokiol cohort (n=26 for each cohort). The mice in the Honokiol cohort were treated with Honokiol before MI/RI surgery (0.2 mg/kg/day for 14 days, intraperitoneal), while the mice in the Con cohort were given an intraperitoneal injection with an equivalent volume of vehicle (DMSO) daily in 14 days prior to exposure to MI/RI. After the surgery, creatine kinase- (CK-) MB and cardiac troponin T (cTnT) levels, as well as the infarct area, were measured to assess the degree of myocardial damage. Apoptotic levels were detected using terminal deoxynucleotidyl transferase dUTP nick-end labeling (TUNEL) staining. Electron microscopy was utilized to identify mitochondrial damage. Lastly, the expression levels of glyceraldehyde-3-phosphate dehydrogenase (GAPDH), cleaved caspase-9, cytochrome C (Cyt-C), B cell lymphoma/leukemia-2 (Bcl-2), B cell lymphoma/leukemia-2 associated X (Bax), AKT, p-AKT, PI3K, and p-PI3K were analyzed utilizing western blotting. Results. Honokiol can reduce the MI/RI-induced cTnT and CK-MB levels, apoptosis index, and mitochondrial swelling in cardiomyocytes via activating the PI3K/AKT signaling pathway. Conclusion. Honokiol provides cardiac protection from MI/RI by suppressing mitochondrial apoptosis through the PI3K/AKT signaling pathway. --- ## Body ## 1. Introduction Myocardial ischemia/reperfusion injury (MI/RI) is a high degree of organ injury that may result from restoring blood flow following cross-clamping in cardiopulmonary bypass (CPB) in the process of heart surgery and after revascularization therapy postmyocardial infarction (MI) [1]. MI/RI promotes the development of reactive oxygen species (ROS) and calcium overload, both of which may lead to the production of cytochrome C (Cyt-C) into the cytoplasm [2]. This is followed by the activation of caspases, subsequently resulting in mitochondrial dysfunction and swelling and, eventually, apoptosis [3–5]. Hence, discovering effective methods of pharmacological preconditioning to reduce mitochondrial damage and cell apoptosis may be an effective therapy to alleviate MI/RI [6–8].Honokiol has been known as an active compound extracted from the traditional Chinese herb commonly referred to asMagnolia officinalis, which is utilized in the treatment of different vascular diseases including heart disease, stroke, and ischemia [9–11]. Early investigations showed that Honokiol limits infarct size and has antiarrhythmic impacts in rats with acute MI [12, 13]. 
Additionally, Honokiol was found to enhance postischemic cardiac function, lessen infarct size, lower myocardial apoptosis, and reduce ROS levels in MI/RI in type 1 diabetes [14]. In addition, pretreatment with Honokiol substantially decreased the infarct size, as well as the levels of serum creatine kinase (CK), nuclear factor κB (NF-κB), interleukin- (IL-) 6, tumor necrosis factor- (TNF-) α, and lactate dehydrogenase (LDH), in an MI/RI rat model [15]. Currently, studies of the mechanisms underlying the cardioprotective effects of Honokiol on MI/RI mainly point to inflammation and oxidative stress.

In this research, we examined the function of Honokiol in preventing MI/RI, particularly its potential mechanism of decreasing mitochondrial damage in cardiomyocytes.

## 2. Methods

### 2.1. Mouse Care

Adult male C57BL/6 mice (aged between 8 and 12 weeks and weighing between 20 and 25 g) were procured from GemPharmatech Co. Ltd. The mice were kept at a steady temperature of 22±2°C and humidity of 45±5%, with a cycle of 12 hours of daylight and 12 hours of darkness and an unrestricted supply of water and food. All the experimentations were carried out as per the Guide for the Care and Use of Laboratory Animals by the National Academy of Sciences, published by the National Institutes of Health (NIH Publication No. 86-23, revised 1996), and certified for use by the Institutional Animal Care Committee of Shaoyang University.

### 2.2. Mouse Treatment and Surgical Procedure

The adult male C57BL/6 mice used in the experiment were categorized randomly into three cohorts: the Sham operation cohort (Sham, n=26), the MI/RI cohort (Con, n=26), and the Honokiol cohort (Honokiol, n=26). Animals in the Honokiol cohort were treated with Honokiol before MI/RI surgery (0.2 mg/kg/day for 14 days, i.p.), whereas the mice in the Con cohort were given an intraperitoneal injection of an equivalent volume of vehicle (dimethyl sulphoxide (DMSO)) each day for 14 days prior to the exposure to MI/RI. The dosage of Honokiol used in this research was based on a previous study [16].

After anesthesia with 2% Nembutal sodium (50 mg/kg), the mice underwent artificial ventilation via endotracheal intubation (110 breaths/min, 0.2 ml tidal volume) using a rodent ventilator. A left thoracotomy was then carried out in the 4th intercostal space, and the left anterior descending (LAD) coronary artery was occluded with a 10-0 PROLENE® suture for half an hour, followed by 2 hours of reperfusion after removal of the suture. The mice in the Sham cohort were subjected to the same operation, with the exception of ligation of the coronary artery. After reperfusion, the mice were sacrificed via exsanguination after treatment with 2% Nembutal sodium. After blood extraction, the plasma was isolated by centrifugation and kept at −80°C before use. Mouse hearts were harvested after euthanasia for further evaluation.
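The random allocation of the 78 mice into the three cohorts of 26 can be scripted. The sketch below is only illustrative and is not the authors' actual randomization procedure; the seed and animal identifiers are hypothetical.

```python
import random

random.seed(1)  # hypothetical seed, kept only so the allocation is reproducible
mice = [f"mouse_{i:02d}" for i in range(1, 79)]  # 78 male C57BL/6 mice
random.shuffle(mice)

# Split the shuffled list into three cohorts of 26 animals each
cohorts = {
    "Sham": mice[0:26],
    "Con": mice[26:52],
    "Honokiol": mice[52:78],
}
for name, members in cohorts.items():
    print(name, len(members))
```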
### 2.3. Measurement of cTnT and CK-MB Levels

The plasma levels of CK-MB and cTnT were measured and analyzed to assess the degree of myocardial damage. The cTnT levels were evaluated utilizing a high-sensitivity mouse cTnT enzyme-linked immunosorbent assay (ELISA) kit (Life Diagnostics, West Chester, PA, USA) as per the manufacturer's instructions. CK-MB levels were estimated utilizing an automated analyzer (Chemray 800, Rayto Life and Analytical Sciences, Shenzhen, China).

### 2.4. Estimation of the Myocardial Infarct Area

After the completion of the reperfusion procedure, the myocardial infarct size was examined utilizing a double-staining technique, 2,3,5-triphenyl tetrazolium chloride- (TTC-) Evans blue staining. In this method, sections stained blue signify the nonischemic region, red staining signifies the area at risk, and white regions represent the infarct. After reperfusion, the coronary artery was again occluded using the PROLENE® suture, and 1 ml of 2% Evans blue solution (Sigma) was retrogradely introduced into the aorta. The hearts were quickly removed, kept at −20°C for 20 min, and subsequently cut into 1 mm slices. The heart sections were incubated in a 1% TTC solution (Sigma) for 20 min. After staining, a stopping solution (ice-cold sterile saline) was added, and the slices were fixed in 10% neutral-buffered formaldehyde. An evaluator blinded to the identity of the specimens captured images of both sides of each slice and then analyzed the images utilizing the Image-Pro Plus 6.0 software (Media Cybernetics, Silver Spring, MD, USA).

### 2.5. Echocardiography

Heart function was assessed at the end of the 2-hour reperfusion utilizing transthoracic echocardiography (VisualSonics system, Toronto, Ontario, Canada). In this procedure, cardiac variables, such as cardiac output, ejection fraction, LV fractional shortening, wall thickness, and left ventricular (LV) end-diastolic dimension, were assessed via M-mode and two-dimensional echocardiography. The mice were anesthetized with 1.5% isoflurane/oxygen prior to the procedure.

### 2.6. TUNEL Staining of Apoptotic Cells

The hearts were fixed in 4% paraformaldehyde, embedded in paraffin, and sectioned at a thickness of 5 μm. Subsequently, in situ apoptosis was evaluated through the TUNEL technique utilizing an In Situ Cell Death Detection Kit, Fluorescein (Roche Applied Science, Mannheim, Germany). Briefly, the slices were washed 3 times with 10 mM phosphate-buffered saline (PBS), followed by permeabilization in proteinase K for a duration of 10 minutes. After washing another 3 times, the sections were incubated in TdT buffer at 37°C for 1 hour and then with antibodies under the same conditions. Afterward, the cell nuclei were stained with 4′,6-diamidino-2-phenylindole (DAPI). Apoptotic cardiomyocytes are represented by TUNEL/DAPI-positive cells. Sections were randomly selected, and five areas were randomly chosen from each section to estimate the percentage of apoptotic cells. The percentage of TUNEL-stained nuclei was computed as the proportion of TUNEL-positive nuclei to total nuclei. The aggregate number of apoptotic nuclei was quantified by tallying the overall number of TUNEL-positive nuclei in whole slices from seven distinct mouse hearts per cohort. The photographs were taken utilizing a Zeiss digital camera attached to a Zeiss VivaTome microscope (Zeiss, Thornwood, NY, USA), and the section images were analyzed utilizing the Image-Pro Plus 6.0 software (Media Cybernetics).
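The apoptotic index described above is the ratio of TUNEL-positive nuclei to total nuclei, averaged over the five randomly selected fields per section. A minimal sketch with hypothetical per-field counts is shown below; it only illustrates the arithmetic, not the image-analysis software used in the study.

```python
# Hypothetical per-field counts: (TUNEL-positive nuclei, total DAPI-stained nuclei)
fields = [(12, 310), (9, 280), (15, 330), (11, 295), (14, 320)]

# Percentage of TUNEL-positive nuclei in each field, then the per-section average
per_field = [pos / total * 100 for pos, total in fields]
apoptotic_index = sum(per_field) / len(per_field)
print(f"Apoptotic index: {apoptotic_index:.1f}% TUNEL-positive nuclei")
```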
### 2.7. Electron Microscopy

The specimens were immersed in 2.5% glutaraldehyde and postfixed in 2% osmium tetroxide in sodium phosphate buffer for 2 hours at 4°C. Next, the specimens were dehydrated in a graded series of ethanol and propylene oxide and embedded in Araldite. An ultramicrotome (Leica EM UC7; Leica, Nussloch, Germany) was used to cut 1 μm sections. The sections were stained with uranyl acetate and lead citrate and then viewed with a Hitachi transmission electron microscope (HT7700; Hitachi, Tokyo, Japan).

### 2.8. Western Blotting (WB)

The protocol was previously described [17–19]. A lysis buffer (Beyotime, Shanghai, China) containing a protease inhibitor cocktail (Millipore, Billerica, Massachusetts, USA) was used to isolate proteins from cardiac tissues. The proteins were then separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) following standard procedures and transferred to polyvinylidene difluoride (PVDF) membranes (Millipore) for WB analysis. Primary antibodies against Bcl-2 (Cell Signaling Technology (CST), Danvers, MA, USA), Bax (CST), Cyt-C (CST), caspase-9 (Abcam, Cambridge, MA, USA), PI3K (CST), p-PI3K (CST), AKT (CST), and p-AKT (CST) were used for protein detection. The membranes were then incubated with an HRP-conjugated secondary antibody (Thermo Fisher Scientific, Waltham, MA, USA) for 1 hour at room temperature, and antigen-antibody complexes were detected with a WB luminol reagent (Sigma-Aldrich). Glyceraldehyde-3-phosphate dehydrogenase (GAPDH) (Proteintech, Rosemont, IL, USA) served as the internal reference. The mean optical density of each band was analyzed with ImageJ software, and target protein expression was normalized to that of GAPDH.

### 2.9. Statistical Analysis

Statistical analysis was carried out using SPSS v. 23.0 (IBM Corp, Armonk, NY, USA). Student's t-test was used to evaluate differences between the treatment and sham cohorts. Comparisons among the three cohorts were performed using one-way ANOVA followed by Tukey's post hoc test. Data are expressed as mean values ± standard error of the mean (SEM). For all tests, a p value < 0.05 was deemed statistically significant.
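As a concrete illustration of this workflow, the sketch below shows how a two-cohort Student's t-test and a three-cohort one-way ANOVA with Tukey's post hoc test might be run in Python with SciPy and statsmodels; the numeric values are placeholders and the choice of libraries is ours, not something specified by the authors (who used SPSS).

```python
# Illustrative sketch of the statistical workflow in Section 2.9: Student's
# t-test for two-cohort contrasts and one-way ANOVA followed by Tukey's post
# hoc test across the three cohorts. All values below are placeholders.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

sham = np.array([0.90, 1.10, 1.00, 0.95, 1.05, 1.00])
con = np.array([2.10, 2.40, 2.20, 2.30, 2.50, 2.20])
honokiol = np.array([1.40, 1.50, 1.30, 1.45, 1.60, 1.50])

# Two-cohort comparison (e.g., Con vs. Sham)
t_stat, p_ttest = stats.ttest_ind(con, sham)

# Three-cohort comparison: one-way ANOVA, then Tukey's HSD
f_stat, p_anova = stats.f_oneway(sham, con, honokiol)
values = np.concatenate([sham, con, honokiol])
groups = ["Sham"] * 6 + ["Con"] * 6 + ["Honokiol"] * 6
tukey = pairwise_tukeyhsd(values, groups, alpha=0.05)

print(f"t-test p = {p_ttest:.4g}, ANOVA p = {p_anova:.4g}")
print(tukey.summary())
```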
## 3. Results

### 3.1. Honokiol Pretreatment Preserves Cardiac Function in MI/RI

The plasma levels of CK-MB and cTnT were considerably elevated in the Con cohort compared with the Sham cohort (both p < 0.0001, Figure 1). In contrast, CK-MB and cTnT levels were reduced in the Honokiol-treated cohort compared with the Con cohort (both p < 0.0001, Figure 1).

Figure 1: Plasma levels of myocardial injury markers. Expression levels of (a) cTnT and (b) CK-MB. Sham: sham operation; Con: MI/RI with vehicle; Honokiol: MI/RI with Honokiol. N = 10 per cohort. Data are expressed as mean ± SEM. ∗p < 0.05; ∗∗∗∗p < 0.0001.

Aside from measuring the plasma cTnT and CK-MB levels, the myocardial infarct size was also estimated through TTC-Evans blue staining. The severity of myocardial infarction (Figure 2(a)) was evaluated by estimating the proportion of the aggregate ischemic area and the proportion of necrotic areas according to the formulas below:

$$\%\,\text{total ischemic areas} = \frac{\text{infarct areas} + \text{at-risk areas}}{\text{total myocardial areas}}, \quad (1)$$

$$\%\,\text{necrotic areas} = \frac{\text{infarct areas}}{\text{infarct areas} + \text{at-risk areas}}. \quad (2)$$

Figure 2: Measurement of the myocardial infarct area. (a) TTC-Evans blue staining. (b) Assessment of the myocardial infarct area. Both the proportion of aggregate ischemic areas and the proportion of necrotic areas were elevated in the Con cohort compared with the Sham cohort and were considerably decreased by Honokiol treatment. N = 10 per cohort. Data are expressed as mean ± SEM. ∗∗∗∗p < 0.0001.

Both the % total ischemic areas and the % necrotic areas were increased in the Con cohort compared with the Sham cohort but were considerably reduced in the Honokiol cohort (all p < 0.0001, Figure 2(b)).
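As a worked illustration of Equations (1) and (2), the sketch below computes the two percentages from planimetry measurements; the area values (summed over all slices, in mm²) are hypothetical and serve only to show how the ratios are formed.

```python
# Illustrative sketch of Equations (1) and (2). The area values are
# hypothetical planimetry measurements, not data from the study.
def infarct_metrics(infarct_area, at_risk_area, total_myocardial_area):
    ischemic = infarct_area + at_risk_area                        # white + red regions
    pct_total_ischemic = 100.0 * ischemic / total_myocardial_area # Eq. (1)
    pct_necrotic = 100.0 * infarct_area / ischemic                # Eq. (2)
    return pct_total_ischemic, pct_necrotic

pct_ischemic, pct_necrotic = infarct_metrics(
    infarct_area=18.0, at_risk_area=22.0, total_myocardial_area=95.0
)
print(f"% total ischemic areas = {pct_ischemic:.1f}%")
print(f"% necrotic areas = {pct_necrotic:.1f}%")
```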
Mice subjected to MI/RI (Con cohort) showed significantly worse cardiac function compared with the Sham cohort, as assessed by the left ventricular ejection fraction (LVEF), fractional shortening (LVFS), and left ventricular end-systolic diameter (LVESD) (all p < 0.0001, Figures 3(a) and 3(b)). Both LVFS and LVEF were elevated, whereas LVESD was reduced, in the Honokiol cohort compared with the Con cohort (all p < 0.0001, Figures 3(a) and 3(b)). No difference was identified in the left ventricular end-diastolic diameter (LVEDD) among the cohorts (Figure 3(b)).

Figure 3: Echocardiography of MI/RI mice. (a) Echocardiograms of the mice. (b) Both LVFS and LVEF were elevated, while LVESD was lower in the Honokiol cohort compared with the Con cohort. No difference was observed in the LVEDD among the cohorts. N = 10 per cohort. Data are expressed as mean ± SEM. ∗∗∗p < 0.001; ∗∗∗∗p < 0.0001.

### 3.2. Honokiol Pretreatment Reduces Cardiomyocyte Apoptosis Caused by MI/RI

The influence of Honokiol on MI/RI-triggered apoptosis was examined using TUNEL staining. As illustrated in Figure 4, only a small number of apoptotic cardiomyocytes were identified in myocardial tissues from the Sham cohort, whereas a considerably higher number of TUNEL-positive cardiomyocytes were identified in the Con cohort (p < 0.0001). Pretreatment with Honokiol reduced the number of TUNEL-positive cells (p < 0.0001), indicating an antiapoptotic effect in MI/RI (Figure 4).

Figure 4: Assessment of cardiomyocyte apoptosis. The TUNEL assay was used to evaluate cardiomyocyte apoptosis in ventricular tissue (400×, n = 7 per cohort). Data are expressed as mean ± SEM. ∗∗∗∗p < 0.0001.

Electron microscopy of cardiomyocytes indicated severe mitochondrial swelling in the Con cohort but not in the Honokiol cohort (Figure 5), implying that the mitochondrial apoptosis pathway may be implicated in MI/RI.

Figure 5: Analysis of cardiomyocytes via electron microscopy. MI/RI resulted in damage to the myocardial ultrastructure and mitochondrial swelling, which was attenuated by Honokiol treatment (n = 3 per cohort). Black arrow: mitochondria. Original magnification: 15,000×.

In addition, mitochondrial apoptosis markers were detected by western blotting. Honokiol considerably lowered the levels of the proapoptotic proteins cleaved caspase-9, Bax, and Cyt-C, all of which are stimulated by MI/RI, but elevated the expression of the antiapoptotic protein Bcl-2 (all p < 0.05, Figure 6). These data suggest that Honokiol could act as a prospective antiapoptotic drug by modulating the mitochondrial pathway in MI/RI.

Figure 6: Western blot analysis of apoptosis-associated proteins. Western blotting was used to evaluate the protein levels of cleaved caspase-9, Cyt-C, Bcl-2, and Bax. GAPDH served as the internal reference (n = 6 per cohort). Data are expressed as mean ± SEM. ∗∗p < 0.01; ∗∗∗p < 0.001; ∗∗∗∗p < 0.0001.

### 3.3. Honokiol Activates the PI3K/AKT Signaling Pathway in MI/RI

The PI3K/AKT pathway can modulate cell apoptosis induced by the mitochondrial pathway [20–22]. Western blotting was performed to investigate whether Honokiol plays a role in PI3K/AKT-mediated regulation of mitochondrial apoptosis. The expression of p-AKT and p-PI3K in the Con cohort was significantly decreased, and Honokiol restored the expression levels of both proteins (Figure 7, all p < 0.0001). These results suggest that Honokiol may prevent mitochondrial apoptosis in cardiomyocytes through the PI3K/AKT signaling pathway.

Figure 7: Western blot analysis of PI3K/AKT pathway-related proteins. (a) AKT and (b) PI3K were detected by WB. Sham: sham operation; Con: MI/RI with vehicle; Honokiol: MI/RI with Honokiol. N = 6 per cohort. Data are expressed as mean ± SEM. ∗∗∗∗p < 0.0001.
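To make the band-quantification step behind Figures 6 and 7 concrete, the sketch below shows one common way to derive relative expression values from ImageJ optical densities, namely normalizing each target band to GAPDH and then to the Sham cohort. The density values are hypothetical, and this is an illustration of the general approach described in Section 2.8 rather than the authors' exact analysis.

```python
# Illustrative sketch of densitometry normalization for western blots:
# mean optical densities (as exported from ImageJ) are normalized to GAPDH
# and then expressed relative to the Sham cohort. All numbers are hypothetical.
densities = {
    "Sham":     {"Bax": 1200.0, "GAPDH": 5100.0},
    "Con":      {"Bax": 3900.0, "GAPDH": 5000.0},
    "Honokiol": {"Bax": 2100.0, "GAPDH": 5150.0},
}

normalized = {cohort: v["Bax"] / v["GAPDH"] for cohort, v in densities.items()}
relative_to_sham = {cohort: val / normalized["Sham"] for cohort, val in normalized.items()}

for cohort, rel in relative_to_sham.items():
    print(f"{cohort}: Bax/GAPDH relative to Sham = {rel:.2f}")
```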
## 4. Discussion

In this research, we demonstrated that Honokiol pretreatment can improve cardiac function during MI/RI. Honokiol considerably lowered the levels of the myocardial injury indicators (cTnT and CK-MB) induced by MI/RI. Furthermore, Honokiol also inhibited the cardiomyocyte apoptosis and mitochondrial swelling that resulted from MI/RI, possibly involving the PI3K/AKT signaling pathway. Taken together, these results indicate that pretreatment with Honokiol may protect the heart from MI/RI by inhibiting mitochondrial apoptosis through the PI3K/AKT signaling pathway.

Honokiol is a bioactive natural compound with potential therapeutic benefit because of its diverse pharmacological properties, including anti-inflammatory, anticancer, and antiarrhythmic activities, without appreciable toxicity [23–25]. This compound has also been considered a potential treatment for heart disease [26]. Honokiol has been shown to enhance myocardial contraction and cardiac contractile capacity and to reduce myocardial degeneration in a β1-AAB-induced myocardial dysfunction model [27]. Furthermore, previous studies showed that Honokiol decreases the intima-to-media area ratio, intimal hyperplasia, and collagen deposition in a rabbit carotid artery balloon injury model [28]. Honokiol was also found to have antihypertrophic effects and may block cardiac fibroblast proliferation and differentiation into myofibroblasts [16]. Moreover, Honokiol treatment was previously observed to provide cardiac protection from Dox cardiotoxicity by enhancing mitochondrial function, implying that Honokiol is a promising treatment for cancer patients receiving Dox [29]. Honokiol also performs a protective function against I/R in multiple organs, including the heart [15, 30–32]. Currently, there is only limited evidence about the effect of Honokiol on MI/RI. A previous study showed that Honokiol ameliorates MI/RI in rats with type 1 diabetes by decreasing apoptosis and oxidative stress [14].
In addition, pretreatment with Honokiol was observed to significantly reduce the infarct size and the levels of proinflammatory cytokines (TNF-α, NF-κB, and IL-6), oxidative stress indicators (myeloperoxidase, superoxide dismutase, catalase, and malondialdehyde), and myocardial injury indicators (cTnT and CK-MB) in MI/RI [15]. The data from this research are consistent with these earlier studies. Our results show that Honokiol pretreatment improves cardiac function during MI/RI and decreases the plasma levels of the myocardial injury indicators cTnT and CK-MB. Moreover, Honokiol pretreatment reduced the proapoptotic protein expression and mitochondrial swelling induced by MI/RI.

Multiple signaling pathways have been found to participate in the cardioprotective effects of Honokiol. Through regulation of the NF-κB pathway, Honokiol may reduce ROS levels and endothelial cell apoptosis under high-glucose-induced oxidative stress and in streptozocin- (STZ-) induced diabetes [33], attenuate the inflammatory response and apoptosis of human umbilical vein endothelial cells [34], and even suppress the proliferation and migration of rat aortic smooth muscle cells [35]. Honokiol may also affect cardiomyocyte autophagy via the AMPK/ULK pathway [36], as well as interstitial fibrosis and cardiac hypertrophy via the PI3K/AKT pathway [16]. As observed in previous studies, the NF-κB pathway and the SIRT1-Nrf2 signaling pathway are implicated in the protective function of Honokiol in MI/RI [14, 15]. Our data show that the PI3K/AKT pathway is altered in the Honokiol-pretreated cohort, suggesting its involvement in the cardioprotective effects of Honokiol in MI/RI. Activation of the PI3K/AKT pathway can reduce the levels of the proapoptotic protein Bax and increase the expression of the antiapoptotic protein Bcl-2 [37]. Moreover, caspase activation is accompanied by the release of Cyt-C and AIF, both of which facilitate mitochondrial apoptosis [38–40]. Our results also showed that Honokiol pretreatment decreased the levels of Cyt-C, Bax, and cleaved caspase-9, as well as mitochondrial swelling, in MI/RI, further suggesting that Honokiol may protect the heart from MI/RI by inhibiting mitochondrial apoptosis via the PI3K/AKT signaling pathway.

There are several potential limitations of this research that may be addressed in future studies. First, the study only investigated the protective effects of Honokiol in MI/RI without covering possible adverse effects. Second, the route of administration (intraperitoneal injection), as opposed to oral or rectal administration, may also influence the pharmacokinetics of the drug. When Houpo extract was administered rectally to Wistar rats at a dosage of 245 mg/kg (equivalent to 13.5 mg/kg of Honokiol), the bioavailability of Honokiol was roughly sixfold greater than after oral administration at a similar dose [41]. This implies that the administration route may influence the therapeutic effects of Honokiol. For improved clinical application, we will compare the effects of oral, rectal, and intraperitoneal administration on the bioavailability of Honokiol in MI/RI mouse models. Moreover, to further confirm the direct regulation of the PI3K/AKT pathway by Honokiol, knockout mice should be used for verification.

## 5. Conclusion

In conclusion, Honokiol significantly decreases the levels of the myocardial injury indicators cTnT and CK-MB induced by MI/RI.
Honokiol inhibits the cardiomyocyte apoptosis and mitochondrial swelling resulting from MI/RI, an effect that may involve the PI3K/AKT signaling pathway. Taken together, pretreatment with Honokiol provides cardiac protection from MI/RI by suppressing mitochondrial apoptosis through the PI3K/AKT signaling pathway.

---
*Source: 1001692-2022-03-27.xml*
--- ## Abstract Background. Myocardial injury refers to a major complication that occurs in myocardial ischemia/reperfusion injury (MI/RI). Honokiol is a well-recognized active compound extracted from the traditional Chinese herb known as Magnolia officinalis and is utilized in treating different vascular diseases. This research is aimed at examining whether Honokiol might alleviate myocardial injury in an MI/RI model. Methods. Seventy-eight male C57BL/6 mice were categorized randomly into three cohorts including the Sham operation (Sham) cohort, the MI/RI cohort (Con), and the Honokiol cohort (n=26 for each cohort). The mice in the Honokiol cohort were treated with Honokiol before MI/RI surgery (0.2 mg/kg/day for 14 days, intraperitoneal), while the mice in the Con cohort were given an intraperitoneal injection with an equivalent volume of vehicle (DMSO) daily in 14 days prior to exposure to MI/RI. After the surgery, creatine kinase- (CK-) MB and cardiac troponin T (cTnT) levels, as well as the infarct area, were measured to assess the degree of myocardial damage. Apoptotic levels were detected using terminal deoxynucleotidyl transferase dUTP nick-end labeling (TUNEL) staining. Electron microscopy was utilized to identify mitochondrial damage. Lastly, the expression levels of glyceraldehyde-3-phosphate dehydrogenase (GAPDH), cleaved caspase-9, cytochrome C (Cyt-C), B cell lymphoma/leukemia-2 (Bcl-2), B cell lymphoma/leukemia-2 associated X (Bax), AKT, p-AKT, PI3K, and p-PI3K were analyzed utilizing western blotting. Results. Honokiol can reduce the MI/RI-induced cTnT and CK-MB levels, apoptosis index, and mitochondrial swelling in cardiomyocytes via activating the PI3K/AKT signaling pathway. Conclusion. Honokiol provides cardiac protection from MI/RI by suppressing mitochondrial apoptosis through the PI3K/AKT signaling pathway. --- ## Body ## 1. Introduction Myocardial ischemia/reperfusion injury (MI/RI) is a high degree of organ injury that may result from restoring blood flow following cross-clamping in cardiopulmonary bypass (CPB) in the process of heart surgery and after revascularization therapy postmyocardial infarction (MI) [1]. MI/RI promotes the development of reactive oxygen species (ROS) and calcium overload, both of which may lead to the production of cytochrome C (Cyt-C) into the cytoplasm [2]. This is followed by the activation of caspases, subsequently resulting in mitochondrial dysfunction and swelling and, eventually, apoptosis [3–5]. Hence, discovering effective methods of pharmacological preconditioning to reduce mitochondrial damage and cell apoptosis may be an effective therapy to alleviate MI/RI [6–8].Honokiol has been known as an active compound extracted from the traditional Chinese herb commonly referred to asMagnolia officinalis, which is utilized in the treatment of different vascular diseases including heart disease, stroke, and ischemia [9–11]. Early investigations showed that Honokiol limits infarct size and has antiarrhythmic impacts in rats with acute MI [12, 13]. Additionally, Honokiol was found to perform a function in enhancing postischemic cardiac functions, lessening infarct size, lowering myocardial apoptosis, and shrinking ROS levels in MI/RI injury in type 1 diabetes [14]. 
In addition to this, pretreatment using Honokiol also substantially decreased the infarct size, as well as the levels of serum creatine kinase (CK), nuclear factor κB (NF-κB), interleukin- (IL-) 6, tumor necrosis factor- (TNF-) α, and lactate dehydrogenase (LDH), in an MI/RI rat model [15]. Currently, studies about the mechanisms of the cardioprotective effects of Honokiol on MI/RI mainly point to inflammation as well as oxidative stress.In this research, we examined the function of Honokiol in preventing MI/RI, particularly its potential mechanism of decreasing mitochondrial damage in cardiomyocytes. ## 2. Methods ### 2.1. Mouse Care Adult male C57BL/6 mice (aged between 8 and 12 weeks and weight ranging between 20 and 25 g) were procured from GemPharmatech Co. Ltd. The mice were kept under a steady temperature of22±2°C and humidity (45±5%), with a cycle of 12-hour daylight and 12-hour darkness, and an unrestricted supply of water and food. All the experimentations were carried out as per the Guide for the Care and Use of Laboratory Animals by the National Academy of Sciences, published by the National Institutes of Health (NIH Publication No. 86-23, revised 1996), and certified for use by the Institutional Animal Care Committee of Shaoyang University. ### 2.2. Mouse Treatment and Surgical Procedure Adult male C57BL/6 mice used in the experiment were categorized randomly into three cohorts, which include the Sham operation cohort (Sham,n=26), the MIRI cohort (Con, n=26), and the Honokiol cohort (Honokiol, n=26).Animals in the Honokiol cohort were treated with Honokiol before MI/RI surgery (0.2 mg/kg/day for 14 days, i.p.), whereas the mice in the Con cohort were given an intraperitoneal injection with an equivalent volume of vehicle (dimethyl sulphoxide (DMSO)) each day for 14 days prior to the exposure to MI/RI. The dosage of Honokiol used in this research was based on previous study [16]. An equivalent volume of DMSO was given to the Con cohort. After anesthetizing with 2% Nembutal sodium (50 mg/kg), the mice underwent artificial ventilation via endotracheal intubation (110 breaths/min, 0.2 ml tidal volume) using a rodent ventilator. Then, the left thoracotomy was carried out in the 4th intercostal space, and the left anterior descending (LAD) coronary artery was occluded by a 10-0 PROLENE® suture for half an hour ensued by 2 hours of reperfusion after removal. The mice in the Sham cohort were subjected to a similar operation, with the exception of ligation of the coronary artery. After reperfusion, the mice were sacrificed via exsanguination after treatment with 2% Nembutal sodium. After blood extraction, the plasma was isolated through centrifugation, and after isolation, the plasma was kept under a temperature of −80°C before being used. Mice hearts were harvested after euthanasia for further evaluation. ### 2.3. Measurement of cTnT and CK-MB Levels The plasma levels of CK-MB and cTnT were estimated and analyzed to assess the degree of myocardial damage. The cTnT levels were evaluated utilizing a high-sensitivity mouse cTnT enzyme-linked immunosorbent assay (ELISA) kit (Life Diagnostics, West Chester, PA, USA) as per the instructions stipulated by the manufacturer. On the other hand, CK-MB levels were estimated utilizing an automated analyzer (Chemray 800, Rayto Life and Analytical Sciences, Shenzhen, China). ### 2.4. 
Estimation of the Myocardial Infarct Area After the completion of the reperfusion procedure, the myocardial infarct size was examined utilizing a double-staining technique called 2,3,5-triphenyl tetrazolium chloride- (TTC-) Evans blue staining. In this method, sections stained with blue signifies the nonischemic section, the red staining signifies the risk section, and the white sections represent the infarct section. After reperfusion, the coronary artery was again occluded using the PROLENE® suture, and 1 ml of 2 percent Evans blue solution (Sigma) was reversely introduced into the aorta. The hearts were quickly removed, kept at a temperature of −20°C for 20 min, and subsequently sliced into slices of 1 mm. The heart sections were subjected to incubation using a solution of 1% TTC (Sigma) for 20 min. After staining, a stopping solution (ice-cold sterile saline) was added, followed by fixing the slices in 10% neutral-buffered formaldehyde. An evaluator blinded to the identity of the specimen captured the images of both sides of each slice and then analyzed the images utilizing the Image-Pro Plus 6.0 software (Media Cybernetics, Silver Spring, MD, USA). ### 2.5. Echocardiography Assessment of the heart function was done at the end of the 2-hour reperfusion utilizing transthoracic echocardiography (VisualSonics system, Toronto, Ontario, Canada). In this procedure, cardiac variables, such as cardiac output, ejection fraction, LV fractional shortening, wall thickness, and left ventricular (LV) end-diastolic dimension, were assessed via M-mode and two-dimensional echocardiography. Anesthetization of the mice was done using 1.5% isoflurane/oxygen prior to the procedure. ### 2.6. TUNEL Staining of Apoptotic Cells Fixing of the hearts was done in 4-percent paraformaldehyde, entrenched in paraffin, followed by sectioning at a width of 5μm. Subsequently, in situ apoptosis was evaluated through the TUNEL technique utilizing an In Situ Cell Death Detection Kit, Fluorescein (Roche Applied Science, Mannheim, Germany). Succinctly, the slices were subjected to washing 3 times with 10 mM phosphate-buffered solution (PBS) ensued by permeabilization in proteinase K for a duration of 10 minutes. After washing another 3 times, incubation of the sections was done in TdT buffer at the temperature of 37°C for 1 hour and then with antibodies using the same parameters. Afterward, staining of the cell nuclei was done with 4,6-diamino-2-phenylindole (DAPI). The apoptotic cardiomyocytes are represented by TUNEL/DAPI-positive cells. Sections were randomly selected, and five areas were randomly selected from each section to estimate the percentage of apoptotic cells. The percentage of TUNEL-stained nuclei was computed as the proportion of the amount of TUNEL-positive nuclei/total nuclei. Quantification of the aggregate amount of apoptotic nuclei was done through tallying the overall number of TUNEL-positive nuclei in whole slices from seven distinct mice hearts per cohort. The photographs were taken utilizing a Zeiss digital camera attached to a Zeiss VivaTome microscope (Zeiss, Thornwood, NY, USA). The section images were then analyzed utilizing the ImagePro Plus 6.0 software (Media Cybernetics). ### 2.7. Electron Microscopy The specimens were submerged in 2.5 percent glutaraldehyde followed by postfixing in 2 percent osmium tetroxide in sodium phosphate buffer for a duration of 2 hours at a temperature of 4°C. 
Next, dehydration of the specimens was done in a graded sequence of ethanol and propylene oxide and entrenched in araldite. An ultramicrotome (Leica EM UC7; Leica, Nussloch, Germany) was utilized to cut 1μm sections. The sections were subjected to staining utilizing uranyl acetate and lead citrate and then viewed utilizing a Hitachi transmission electron microscope (HT7700; Hitachi, Tokyo, Japan). ### 2.8. Western Blotting (WB) The protocol was previously described [17–19]. A lysis buffer (Beyotime, Shanghai, China) containing a protease inhibitor cocktail (Millipore, Billerica, Massachusetts, USA) was utilized to isolate proteins from cardiac tissues. Subsequently, the proteins were analyzed utilizing sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) in accordance with the basic instructions and then moved to polyvinylidene difluoride (PVDF) membranes to perform WB analysis (Millipore). Primary antibodies against Bcl-2 (Cell Signaling Technology (CST), Danvers, MA, USA), Bax (CST), Cyt-C (CST), caspase-9 (Abcam, Cambridge, MA, USA), PI3K (CST), p-PI3K (CST), AKT (CST), and p-AKT (CST) were used for protein detection. Later, incubation of the membranes was done using an HRP-conjugated secondary antibody (Thermo Fisher Scientific, Waltham, MA, USA) for 1 hour at ambient temperature, and detection of antigen-antibody complexes was done utilizing a WB luminol reagent (Sigma-Aldrich). For this experiment, glyceraldehyde-3-phosphate dehydrogenase (GAPDH) (Proteintech, Rosemont, IL, USA) acted as an internal reference. The mean light density for each band was analyzed utilizing the ImageJ software. The target genes expression was standardized to the expression of GAPDH. ### 2.9. Statistical Analysis Statistical analysis was carried out utilizing SPSS v. 23.0 (IBM Corp, Armonk, NY, USA). The Studentt-test was carried out to evaluate the variations between the treatment and the sham cohorts. Comparisons among the three cohorts were done utilizing one-way ANOVA ensued by Tukey’s post hoc test. Data are articulated as meanvalues±standarderror of the mean (SEM). For all the tests, a p value of <0.05 was deemed statistically significant. ## 2.1. Mouse Care Adult male C57BL/6 mice (aged between 8 and 12 weeks and weight ranging between 20 and 25 g) were procured from GemPharmatech Co. Ltd. The mice were kept under a steady temperature of22±2°C and humidity (45±5%), with a cycle of 12-hour daylight and 12-hour darkness, and an unrestricted supply of water and food. All the experimentations were carried out as per the Guide for the Care and Use of Laboratory Animals by the National Academy of Sciences, published by the National Institutes of Health (NIH Publication No. 86-23, revised 1996), and certified for use by the Institutional Animal Care Committee of Shaoyang University. ## 2.2. Mouse Treatment and Surgical Procedure Adult male C57BL/6 mice used in the experiment were categorized randomly into three cohorts, which include the Sham operation cohort (Sham,n=26), the MIRI cohort (Con, n=26), and the Honokiol cohort (Honokiol, n=26).Animals in the Honokiol cohort were treated with Honokiol before MI/RI surgery (0.2 mg/kg/day for 14 days, i.p.), whereas the mice in the Con cohort were given an intraperitoneal injection with an equivalent volume of vehicle (dimethyl sulphoxide (DMSO)) each day for 14 days prior to the exposure to MI/RI. The dosage of Honokiol used in this research was based on previous study [16]. An equivalent volume of DMSO was given to the Con cohort. 
After anesthetizing with 2% Nembutal sodium (50 mg/kg), the mice underwent artificial ventilation via endotracheal intubation (110 breaths/min, 0.2 ml tidal volume) using a rodent ventilator. Then, the left thoracotomy was carried out in the 4th intercostal space, and the left anterior descending (LAD) coronary artery was occluded by a 10-0 PROLENE® suture for half an hour ensued by 2 hours of reperfusion after removal. The mice in the Sham cohort were subjected to a similar operation, with the exception of ligation of the coronary artery. After reperfusion, the mice were sacrificed via exsanguination after treatment with 2% Nembutal sodium. After blood extraction, the plasma was isolated through centrifugation, and after isolation, the plasma was kept under a temperature of −80°C before being used. Mice hearts were harvested after euthanasia for further evaluation. ## 2.3. Measurement of cTnT and CK-MB Levels The plasma levels of CK-MB and cTnT were estimated and analyzed to assess the degree of myocardial damage. The cTnT levels were evaluated utilizing a high-sensitivity mouse cTnT enzyme-linked immunosorbent assay (ELISA) kit (Life Diagnostics, West Chester, PA, USA) as per the instructions stipulated by the manufacturer. On the other hand, CK-MB levels were estimated utilizing an automated analyzer (Chemray 800, Rayto Life and Analytical Sciences, Shenzhen, China). ## 2.4. Estimation of the Myocardial Infarct Area After the completion of the reperfusion procedure, the myocardial infarct size was examined utilizing a double-staining technique called 2,3,5-triphenyl tetrazolium chloride- (TTC-) Evans blue staining. In this method, sections stained with blue signifies the nonischemic section, the red staining signifies the risk section, and the white sections represent the infarct section. After reperfusion, the coronary artery was again occluded using the PROLENE® suture, and 1 ml of 2 percent Evans blue solution (Sigma) was reversely introduced into the aorta. The hearts were quickly removed, kept at a temperature of −20°C for 20 min, and subsequently sliced into slices of 1 mm. The heart sections were subjected to incubation using a solution of 1% TTC (Sigma) for 20 min. After staining, a stopping solution (ice-cold sterile saline) was added, followed by fixing the slices in 10% neutral-buffered formaldehyde. An evaluator blinded to the identity of the specimen captured the images of both sides of each slice and then analyzed the images utilizing the Image-Pro Plus 6.0 software (Media Cybernetics, Silver Spring, MD, USA). ## 2.5. Echocardiography Assessment of the heart function was done at the end of the 2-hour reperfusion utilizing transthoracic echocardiography (VisualSonics system, Toronto, Ontario, Canada). In this procedure, cardiac variables, such as cardiac output, ejection fraction, LV fractional shortening, wall thickness, and left ventricular (LV) end-diastolic dimension, were assessed via M-mode and two-dimensional echocardiography. Anesthetization of the mice was done using 1.5% isoflurane/oxygen prior to the procedure. ## 2.6. TUNEL Staining of Apoptotic Cells Fixing of the hearts was done in 4-percent paraformaldehyde, entrenched in paraffin, followed by sectioning at a width of 5μm. Subsequently, in situ apoptosis was evaluated through the TUNEL technique utilizing an In Situ Cell Death Detection Kit, Fluorescein (Roche Applied Science, Mannheim, Germany). 
Succinctly, the slices were subjected to washing 3 times with 10 mM phosphate-buffered solution (PBS) ensued by permeabilization in proteinase K for a duration of 10 minutes. After washing another 3 times, incubation of the sections was done in TdT buffer at the temperature of 37°C for 1 hour and then with antibodies using the same parameters. Afterward, staining of the cell nuclei was done with 4,6-diamino-2-phenylindole (DAPI). The apoptotic cardiomyocytes are represented by TUNEL/DAPI-positive cells. Sections were randomly selected, and five areas were randomly selected from each section to estimate the percentage of apoptotic cells. The percentage of TUNEL-stained nuclei was computed as the proportion of the amount of TUNEL-positive nuclei/total nuclei. Quantification of the aggregate amount of apoptotic nuclei was done through tallying the overall number of TUNEL-positive nuclei in whole slices from seven distinct mice hearts per cohort. The photographs were taken utilizing a Zeiss digital camera attached to a Zeiss VivaTome microscope (Zeiss, Thornwood, NY, USA). The section images were then analyzed utilizing the ImagePro Plus 6.0 software (Media Cybernetics). ## 2.7. Electron Microscopy The specimens were submerged in 2.5 percent glutaraldehyde followed by postfixing in 2 percent osmium tetroxide in sodium phosphate buffer for a duration of 2 hours at a temperature of 4°C. Next, dehydration of the specimens was done in a graded sequence of ethanol and propylene oxide and entrenched in araldite. An ultramicrotome (Leica EM UC7; Leica, Nussloch, Germany) was utilized to cut 1μm sections. The sections were subjected to staining utilizing uranyl acetate and lead citrate and then viewed utilizing a Hitachi transmission electron microscope (HT7700; Hitachi, Tokyo, Japan). ## 2.8. Western Blotting (WB) The protocol was previously described [17–19]. A lysis buffer (Beyotime, Shanghai, China) containing a protease inhibitor cocktail (Millipore, Billerica, Massachusetts, USA) was utilized to isolate proteins from cardiac tissues. Subsequently, the proteins were analyzed utilizing sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) in accordance with the basic instructions and then moved to polyvinylidene difluoride (PVDF) membranes to perform WB analysis (Millipore). Primary antibodies against Bcl-2 (Cell Signaling Technology (CST), Danvers, MA, USA), Bax (CST), Cyt-C (CST), caspase-9 (Abcam, Cambridge, MA, USA), PI3K (CST), p-PI3K (CST), AKT (CST), and p-AKT (CST) were used for protein detection. Later, incubation of the membranes was done using an HRP-conjugated secondary antibody (Thermo Fisher Scientific, Waltham, MA, USA) for 1 hour at ambient temperature, and detection of antigen-antibody complexes was done utilizing a WB luminol reagent (Sigma-Aldrich). For this experiment, glyceraldehyde-3-phosphate dehydrogenase (GAPDH) (Proteintech, Rosemont, IL, USA) acted as an internal reference. The mean light density for each band was analyzed utilizing the ImageJ software. The target genes expression was standardized to the expression of GAPDH. ## 2.9. Statistical Analysis Statistical analysis was carried out utilizing SPSS v. 23.0 (IBM Corp, Armonk, NY, USA). The Studentt-test was carried out to evaluate the variations between the treatment and the sham cohorts. Comparisons among the three cohorts were done utilizing one-way ANOVA ensued by Tukey’s post hoc test. Data are articulated as meanvalues±standarderror of the mean (SEM). 
For all the tests, a p value of <0.05 was deemed statistically significant. ## 3. Results ### 3.1. Honokiol Pretreatment Preserves Cardiac Function in MI/RI The plasma levels of CK-MB and cTnT were considerably elevated in the Con cohort as opposed to the Sham cohort (bothp<0.0001, Figure 1). On the other hand, CK-MB and cTnT levels were found to be reduced in the Honokiol-treated cohort (Honokiol cohort) as opposed to the Con cohort (both p<0.0001, Figure 1).Figure 1 Plasma levels of myocardial injury markers. Expression levels of (a) cTnT and (b) CK-MB. Sham: sham operation; Con: MI/RI with the vehicle; Honokiol: MI/RI with Honokiol.N=10 for each cohort. Data are articulated as mean±SEM. ∗p<0.05; ∗∗∗∗p<0.0001. (a)(b)Aside from measuring the plasma cTnT and CK-MB levels, the myocardial infarct size was also estimated through TTC-Evans blue staining. The myocardial infarction severity (Figure2(a)) was evaluated via the estimation of the proportion of the aggregate ischemic area and the proportion of necrotic areas premised on the algorithm below: (1)%totalischemicareas=infarctplusat‐riskareastotalmyocardialareas,(2)%necroticareas=infractareasinfractplusat‐riskareas.Figure 2 Measurement of the myocardial infarct area. (a) TTC-Evans blue staining. (b) Assessment of myocardial infarct area. Both the proportion of aggregate ischemic areas and necrotic areas were elevated in the Con cohort as opposed to the Sham cohort and were considerably decreased by Honokiol treatment.N=10 for each cohort. Data are articulated as mean±SEM. ∗∗∗∗p<0.0001. (a)(b)(c)Both of the %total ischemic areas and %necrotic areas were increased in the Con cohort as opposed to the Sham cohort but were considerably reduced in the Honokiol cohort (allp<0.0001, Figure 2(b)).Mice subjected to MI/RI (Con cohort) indicated significantly worse cardiac function as opposed to the Sham cohort, assessed by measuring the left ventricular ejection fraction (LVEF), fractional shortening (LVFS), and left ventricular end-systolic diameter (LVESD) (allp<0.0001, Figures 3(a) and 3(b)). The results illustrated that both LVFS and LVEF were elevated, but LVESD was reduced in the Honokiol cohort as opposed to the Con cohort (all p<0.0001, Figures 3(a) and 3(b)). No difference was identified in the left ventricular end-diastolic diameter (LVEDD) among all cohorts (Figure 3(b)).Figure 3 Echocardiography of MI/RI mice. (a) Echocardiograms of the mice. (b) Both LVFS and LVEF were elevated, while LVESD was lower in the Honokiol cohort as opposed to the Con cohort. No difference was observed in the LVEDD in all cohorts.N=10for each cohort. Data are articulated as mean±SEM. ∗∗∗p<0.001. ∗∗∗∗p<0.0001. (a)(b) ### 3.2. Honokiol Pretreatment Reduces Cardiomyocyte Apoptosis Caused by MI/RI The influence of Honokiol on MI/RI-triggered apoptosis was examined utilizing TUNEL staining. As illustrated in Figure4, only a small amount of apoptotic cardiomyocytes were identified in myocardial tissues from the Sham cohort, but a considerably higher amount of TUNEL-positive cardiomyocytes were identified in the Con cohort (p<0.0001). The pretreatment with Honokiol reduced the amount of TUNEL-positive cells (p<0.0001), indicating its antiapoptotic effect on MI/RI (Figure 4).Figure 4 Assessment of cardiomyocyte apoptosis. The TUNEL assay was utilized to evaluate cardiomyocyte apoptosis in ventricular tissue (400x,n=7 per cohort). Data are articulated as mean±SEM. ∗∗∗∗p<0.0001. 
(a)(b)The electron microscopy of cardiomyocytes indicated severe mitochondrial inflammation in the Con cohort but not in the Honokiol cohort (Figure5), implying that the mitochondrial apoptosis pathway may be implicated in MI/RI.Figure 5 Analysis of cardiomyocytes via electron microscopy. MI/RI resulted in the damage of the myocardial ultrastructure and mitochondrial inflammation, which was attenuated after melatonin treatment (n=3 for each cohort). Black arrow: mitochondria. Original magnification: 15000x.In addition to this, mitochondrial apoptosis markers were also detected and identified through western blotting. Honokiol considerably lowered the levels of the proapoptotic proteins cleaved caspase-9, Bax, and Cyt-C, all of which are stimulated by MI/RI, but elevated the antiapoptotic protein Bcl-2 expression (allp<0.05, Figure 6). These data propose that Honokiol could act as a prospective antiapoptotic drug by modulating the mitochondrial pathway for MI/RI.Figure 6 Western blot analysis of apoptosis-associated proteins. Western blotting was utilized to evaluate the protein levels of cleaved caspase-9, Cyt-C, Bcl-2, and Bax. GAPDH acted as the internal reference (n=6 for each cohort). Data are articulated as mean±SEM. ∗∗p<0.01; ∗∗∗p<0.001; ∗∗∗∗p<0.0001. (a)(b) ### 3.3. Honokiol Activates the PI3K/AKT Signaling Pathway in MI/RI The PI3K/AKT pathway can modulate cell apoptosis induced by the mitochondrial pathway [20–22]. Western blotting was performed to investigate whether Honokiol has a function in PI3K/AKT-induced mitochondrial apoptosis. It was found that the expression of p-AKT and p-PI3K in the Con cohort was significantly decreased and that Honokiol was able to restore the expression levels of the two proteins (Figure 7, all p<0.0001). These results suggest that Honokiol may be preventing mitochondrial apoptosis in the cardiomyocytes through the PI3K/AKT signaling pathway.Figure 7 Western blot analysis of PI3K/AKT pathway-related proteins. (a) AKT and (b) PI3K were identified through WB. Sham: sham operation; Con: MI/RI with vehicle; Honokiol: MI/RI with Honokiol.N=6 for each cohort. Data are articulated as mean±SEM. ∗∗∗∗p<0.0001. (a)(b) ## 3.1. Honokiol Pretreatment Preserves Cardiac Function in MI/RI The plasma levels of CK-MB and cTnT were considerably elevated in the Con cohort as opposed to the Sham cohort (bothp<0.0001, Figure 1). On the other hand, CK-MB and cTnT levels were found to be reduced in the Honokiol-treated cohort (Honokiol cohort) as opposed to the Con cohort (both p<0.0001, Figure 1).Figure 1 Plasma levels of myocardial injury markers. Expression levels of (a) cTnT and (b) CK-MB. Sham: sham operation; Con: MI/RI with the vehicle; Honokiol: MI/RI with Honokiol.N=10 for each cohort. Data are articulated as mean±SEM. ∗p<0.05; ∗∗∗∗p<0.0001. (a)(b)Aside from measuring the plasma cTnT and CK-MB levels, the myocardial infarct size was also estimated through TTC-Evans blue staining. The myocardial infarction severity (Figure2(a)) was evaluated via the estimation of the proportion of the aggregate ischemic area and the proportion of necrotic areas premised on the algorithm below: (1)%totalischemicareas=infarctplusat‐riskareastotalmyocardialareas,(2)%necroticareas=infractareasinfractplusat‐riskareas.Figure 2 Measurement of the myocardial infarct area. (a) TTC-Evans blue staining. (b) Assessment of myocardial infarct area. 
Both the proportion of aggregate ischemic areas and necrotic areas were elevated in the Con cohort as opposed to the Sham cohort and were considerably decreased by Honokiol treatment.N=10 for each cohort. Data are articulated as mean±SEM. ∗∗∗∗p<0.0001. (a)(b)(c)Both of the %total ischemic areas and %necrotic areas were increased in the Con cohort as opposed to the Sham cohort but were considerably reduced in the Honokiol cohort (allp<0.0001, Figure 2(b)).Mice subjected to MI/RI (Con cohort) indicated significantly worse cardiac function as opposed to the Sham cohort, assessed by measuring the left ventricular ejection fraction (LVEF), fractional shortening (LVFS), and left ventricular end-systolic diameter (LVESD) (allp<0.0001, Figures 3(a) and 3(b)). The results illustrated that both LVFS and LVEF were elevated, but LVESD was reduced in the Honokiol cohort as opposed to the Con cohort (all p<0.0001, Figures 3(a) and 3(b)). No difference was identified in the left ventricular end-diastolic diameter (LVEDD) among all cohorts (Figure 3(b)).Figure 3 Echocardiography of MI/RI mice. (a) Echocardiograms of the mice. (b) Both LVFS and LVEF were elevated, while LVESD was lower in the Honokiol cohort as opposed to the Con cohort. No difference was observed in the LVEDD in all cohorts.N=10for each cohort. Data are articulated as mean±SEM. ∗∗∗p<0.001. ∗∗∗∗p<0.0001. (a)(b) ## 3.2. Honokiol Pretreatment Reduces Cardiomyocyte Apoptosis Caused by MI/RI The influence of Honokiol on MI/RI-triggered apoptosis was examined utilizing TUNEL staining. As illustrated in Figure4, only a small amount of apoptotic cardiomyocytes were identified in myocardial tissues from the Sham cohort, but a considerably higher amount of TUNEL-positive cardiomyocytes were identified in the Con cohort (p<0.0001). The pretreatment with Honokiol reduced the amount of TUNEL-positive cells (p<0.0001), indicating its antiapoptotic effect on MI/RI (Figure 4).Figure 4 Assessment of cardiomyocyte apoptosis. The TUNEL assay was utilized to evaluate cardiomyocyte apoptosis in ventricular tissue (400x,n=7 per cohort). Data are articulated as mean±SEM. ∗∗∗∗p<0.0001. (a)(b)The electron microscopy of cardiomyocytes indicated severe mitochondrial inflammation in the Con cohort but not in the Honokiol cohort (Figure5), implying that the mitochondrial apoptosis pathway may be implicated in MI/RI.Figure 5 Analysis of cardiomyocytes via electron microscopy. MI/RI resulted in the damage of the myocardial ultrastructure and mitochondrial inflammation, which was attenuated after melatonin treatment (n=3 for each cohort). Black arrow: mitochondria. Original magnification: 15000x.In addition to this, mitochondrial apoptosis markers were also detected and identified through western blotting. Honokiol considerably lowered the levels of the proapoptotic proteins cleaved caspase-9, Bax, and Cyt-C, all of which are stimulated by MI/RI, but elevated the antiapoptotic protein Bcl-2 expression (allp<0.05, Figure 6). These data propose that Honokiol could act as a prospective antiapoptotic drug by modulating the mitochondrial pathway for MI/RI.Figure 6 Western blot analysis of apoptosis-associated proteins. Western blotting was utilized to evaluate the protein levels of cleaved caspase-9, Cyt-C, Bcl-2, and Bax. GAPDH acted as the internal reference (n=6 for each cohort). Data are articulated as mean±SEM. ∗∗p<0.01; ∗∗∗p<0.001; ∗∗∗∗p<0.0001. (a)(b) ## 3.3. 
Honokiol Activates the PI3K/AKT Signaling Pathway in MI/RI The PI3K/AKT pathway can modulate cell apoptosis induced by the mitochondrial pathway [20–22]. Western blotting was performed to investigate whether Honokiol has a function in PI3K/AKT-induced mitochondrial apoptosis. It was found that the expression of p-AKT and p-PI3K in the Con cohort was significantly decreased and that Honokiol was able to restore the expression levels of the two proteins (Figure 7, all p<0.0001). These results suggest that Honokiol may be preventing mitochondrial apoptosis in the cardiomyocytes through the PI3K/AKT signaling pathway.Figure 7 Western blot analysis of PI3K/AKT pathway-related proteins. (a) AKT and (b) PI3K were identified through WB. Sham: sham operation; Con: MI/RI with vehicle; Honokiol: MI/RI with Honokiol.N=6 for each cohort. Data are articulated as mean±SEM. ∗∗∗∗p<0.0001. (a)(b) ## 4. Discussion In this research, we have demonstrated that Honokiol pretreatment can improve cardiac function during MI/RI injury. Honokiol considerably lowered the levels of myocardial injury indicators (cTnT and CK-MB) induced by MI/RI. Furthermore, Honokiol also inhibited the cardiomyocyte apoptosis and mitochondrial swelling that resulted from MI/RI, possibly involving the PI3K/AKT signaling pathway. With everything considered, these results indicate that pretreatment with Honokiol may be protecting the heart from MI/RI by inhibiting mitochondrial apoptosis through the PI3K/AKT signaling pathway.Honokiol is a bioactive natural compound with a potential therapeutic benefit because of its diverse pharmacological properties, including antiinflammatory, anticancer, and antiarrhythmic, without appreciable toxicity [23–25]. This compound was also considered a potential treatment for heart disease [26]. Honokiol has been shown to enhance myocardial contraction and cardiac contractile capacity and reduce myocardial degeneration in the β1-AAB-induced myocardial dysfunction model [27]. Furthermore, it was also shown in previous studies that Honokiol decreases intima-to-media area ratio and intimal hyperplasia, as well as deposition of collagen in rabbit carotid artery balloon injury model [28]. Honokiol was also found to have antihypertrophic effects and may block cardiac fibroblast proliferation and differentiation to myofibroblasts [16]. Moreover, it was previously observed that Honokiol treatment provides cardiac protection from Dox-cardiotoxicity by enhancing mitochondrial function, implying that Honokiol is an auspicious treatment for cancer patients under Dox treatment [29]. Honokiol also performs a protective function against I/R in multiple organs, including the heart [15, 30–32]. Currently, there is only a little evidence about the effect of Honokiol on MI/RI. It was shown in a previous study that Honokiol ameliorates MI/RI in rats with type 1 diabetes by decreasing apoptosis and oxidative stress [14]. In addition, pretreatment with Honokiol was also observed to significantly reduce infarct size and the levels of proinflammatory cytokines (TNF-α, NF-κB, and IL-6), oxidative stress indicators (myeloperoxidase, superoxide dismutase, catalase, malondialdehyde), and myocardial injury indicators (cTnT and CK-MB) in MI/RI [15].The data from this research are in harmony with earlier studies. The results have shown that Honokiol pretreatment improves cardiac function during MI/RI and decreases the levels of myocardial injury indicators (cTnT and CK-MB) in the blood plasma. 
Moreover, Honokiol pretreatment led to the reduction of proapoptotic protein expression and mitochondrial swelling, both of which are induced by MI/RI. Multiple signaling pathways have been found to participate in the cardioprotective effects of Honokiol. Regulation of the NF-κB pathway by Honokiol may reduce ROS levels and endothelial cell apoptosis in high-glucose-stimulated oxidative stress and streptozocin- (STZ-) stimulated diabetes [33], the inflammatory response and apoptosis of human umbilical vein endothelial cells [34], and even the proliferation and migration of rat aortic smooth muscle cells [35]. Honokiol may also affect cardiomyocyte autophagy via the AMPK/ULK pathway [36], as well as interstitial fibrosis and cardiac hypertrophy via the PI3K/AKT pathway [16]. As observed in previous studies, the NF-κB pathway and the SIRT1-Nrf2 signaling pathway are implicated in the protective function of Honokiol in MI/RI [14, 15]. Our data suggest that the PI3K/AKT pathway is altered in the Honokiol-pretreated cohort, suggesting its involvement in the cardioprotective effects of Honokiol in MI/RI. PI3K/AKT pathway triggering could reduce the levels of the proapoptotic protein Bax and stimulate the release of the antiapoptotic protein Bcl-2 [37]. Moreover, the caspase activation stimulates the release of Cyt-C and AIF, both of which facilitate the modulation of mitochondrial apoptosis [38–40]. Our results also showed that Honokiol pretreatment resulted in the decrease in the levels of Cyt-C, Bax, and cleaved caspase-9, as well as mitochondrial swelling in MI/RI, further suggesting that Honokiol may be protecting the heart from MI/RI by inhibiting mitochondrial apoptosis via the PI3K/AKT signaling pathway.There are several potential limitations in this research that may be addressed in future related studies. First, the study only aimed to investigate the protective effects of Honokiol in MI/RI without covering possible adverse effects. Next, the method of administration (intraperitoneal injection), as opposed to oral or rectal administration, may also influence the pharmacokinetics of the drug. After the Houpo extract was administered rectally to Wistar rats at a dosage of 245 mg/kg (equal to 13.5 mg/kg of Honokiol), Honokiol's bioavailability was roughly sixfold greater than when orally administered at a similar dose [41]. This implies possible influences of the administration route on the therapeutic effects of Honokiol. For improved clinical applications, we will compare the effects of oral, rectal, and intraperitoneal administration on the bioavailability of Honokiol in MI/RI mouse models. Moreover, in order to further confirm the direct regulation of Honokiol on the PI3K/AKT pathway, knockout mice should be used for verification. ## 5. Conclusion In conclusion, Honokiol significantly decreases the levels of myocardial injury indicators (cTnT and CK-MB) induced by MI/RI. Honokiol inhibits cardiomyocyte apoptosis and mitochondrial swelling resulting from MI/RI and may involve the PI3K/AKT signaling pathway. Taken together, pretreatment with Honokiol provides cardiac protection from MI/RI by suppressing mitochondrial apoptosis through the PI3K/AKT signaling pathway. --- *Source: 1001692-2022-03-27.xml*
# The Correlation between the Use of the Proton Pump Inhibitor and the Clinical Efficacy of Immune Checkpoint Inhibitors in Non-Small Cell Lung Cancer **Authors:** Da-Hai Hu; Wan-Ching Wong; Jia-Xin Zhou; Ji Luo; Song-Wang Cai; Hong Zhou; Hui Tang **Journal:** Journal of Oncology (2022) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2022/1001796 --- ## Abstract Background. To determine if the use of the Proton Pump Inhibitors (PPI) impacts the clinical efficacy of Immune Checkpoint Inhibitors (ICIs) in Non-Small Cell Lung Cancer (NSCLC), a meta-analysis was conducted. Method. Eleven studies from PubMed, EMBASE, Cochrane Library, Web of Science, and other databases up to May 2022, were selected. The pertinent clinical outcomes were assessed by applying the Progression-free survival (PFS), Overall Survival (OS), Hazard Ratio (HR), and 95% Confidence Interval (CI). Result. This study included eleven articles containing 7,893 NSCLC patients. The result indicated that PPI use was dramatically related to poor OS (HR: 1.30 [1.10–1.54]), and poor PFS (HR: 1.25 [1.09–1.42]) in case of patients treated with ICIs. With regard to the subgroup analysis, PPI use was dramatically associated with poor OS (Europe: HR = 1.48 [1.26, 1.74], Worldwide: HR = 1.54 [1.24, 1.91]), and poor PFS (Europe: HR = 1.36 [1.18, 1.57], Worldwide: HR = 1.34 [1.16, 1.55]) in patients from Europe and multi-center studies across the world, poor OS in patients with age less than or equal to 65 (HR = 1.56 [1.14, 2.15]), poor PFS in patients aged more than 65 (HR = 1.36 [1.18, 1.57]), poor OS for patients receiving with PD-1 (HR = 1.37 [1.04, 1.79]), poor PFS for patients receiving with PD-L1 (HR = 1.33 [1.19, 1.49]), and poor OS (−30: HR = 1.89 [1.29, 2.78], ±30: HR = 1.44 [1.27, 1.64]) and poor PFS (−30: HR = 1.51 [1.11, 2.05], ±30: HR = 1.32 [1.20, 1.45]) for patients who received PPI at 30 days before and/or after starting the ICIs treatment. Conclusion. Our meta-analysis indicated that PPI combined with ICIs in the treatment of NSCLC patients could result in poor OS and PFS. PPI use should be extremely cautious in clinical practices to avoid the impact on the efficacy of the ICIs. --- ## Body ## 1. Introduction Non-small cell lung cancer (NSCLC) is one of the most common cancers, accounting for about 80% of all lung cancers. According to incomplete statistics, NSCLC kills 1.6 million people worldwide each year [1]. The main pathologic types of NSCLC include squamous cell carcinoma, adenocarcinoma, and large cell carcinoma [2]. Compared with small cell carcinoma, the growth and division rate of cancer cells in NSCLC is slow, and the diffusion and metastasis occur relatively late. However, after systematic treatment, the 5-year survival rate of some NSCLC patients is still not ideal [3]. Nowadays, for NSCLC, a single treatment plan, such as surgery, radiotherapy, or immunotherapy, may be difficult to achieve ideal results. Therefore, combination therapy has gradually come into people’s vision, such as chemotherapy or radiotherapy after surgery, and Proton Pump Inhibitor (PPI) combined with Immune Checkpoint Inhibitors (ICIs). 
However, the clinical efficacy of these combined therapies for patients is still unclear, and more relevant studies are needed to explore.As an effective inhibitor of gastric acid secretion, the PPI has been used widely to treat the hypersecretion of gastric acid and other related diseases around the world, for instance, Gastroesophageal Reflux Disease (GERD) and gastric ulcers [4]. Studies have indicated that use of PPI for a long time could increase the risk of related tissue histopathological changes, and could lead to the disorder of normal colonies in the gastrointestinal tract, greatly increasing the risk of gastrointestinal infection [5–7]. In recent years, since PPI could enhance the sensitivity of cancer patients towards chemotherapy, hence, it has gained prominence in the field of tumor treatment [8, 9]. Nevertheless, the efficacy and risk of the use of PPI are different for various cancer types [10]. However, the efficacy of the use of PPI in the treatment of cancer is not very clear, and needs further research.Presently, the cancer immunotherapy mainly includes the following three kinds, ICIs and adoptive cell therapy, operating the immunologic defense to differentiate, and attack tumor cells [11]. Among them, the ICIs have been used in a widespread manner in the neighborhood of tumor treatment, greatly improving the strategy of treating related cancer. Cytotoxic drug T lymphocyte associated antigen 4 (CTLA-4) inhibitor, programmed cell death ligand 1 (PD-L1), as well as programmed cell death 1 (PD-1) are the three kinds of ICIs widely used clinically [12, 13]. However, certain controversial aspects of cancer immunotherapy still remain. As the immune system could be over activated during immunotherapy, bring with it, it could bring serious side effects to the patients, and the adverse reactions in individual cases were serious and even life-threatening at times [14]. These illustrated that the clinical efficacy of ICIs was not very clear. Today, with the popularity of the combination therapy, the ICIs are often combined with the PPI. In one study, for NSCLC patients, the PPI combined with ICIs led to a negative result, nevertheless, in case of the melanoma patients, it produced a positive result [15]. Hence, it is controversial whether the clinical efficacy of ICIs in NSCLC is related to the use of the PPI.This study was intended to determine if there was any correlation between the clinical efficacy of the ICIs in NSCLC and the use of PPI. ## 2. Materials and Method ### 2.1. Search Strategy The literatures involved in this study were independently screened by two researchers (D. H. and W. W.) to determine whether they met the inclusion or exclusion criteria, and any differences would be resolved by consensus with third party researcher (J. Z.). Our search strategy is as illustrated in Figure1, searching the studies from PubMed, EMBASE, Cochrane Library, and Web of Science databases up to May 2022. The keywords searched were, “Non-small Cell Lung Cancer,” “Non-Small Cell Lung Cancer,” “carcinoma, non-small-cell lung” “Non-Small Cell Lung Carcinoma,” “Lung Carcinoma, Non-Small-Cell,” “programmed death-ligand 1 inhibitor,” “PD-L1 inhibitor,” “Immunotherapy,” “programmed death receptor 1 inhibitor,” “PD-1 inhibitor,” “cytotoxic T lymphocyte antigen-4 inhibitor,” “CTLA-4 inhibitor,” and “proton pump inhibitor.”Figure 1 The flow chart of study selection. ### 2.2. 
The Criteria for Inclusion and Exclusion The criteria for inclusion were: (1) the collected literature involved the usage of PPI and the clinical efficacy of the ICIs in NSCLC; (2) PPI was used before and/or after starting the ICIs treatment; (3) patients received the ICIs treatment alone or combined with PPI; (4) both non-PPI and PPI groups were included; and (5) the outcome of the study should contain the Overall Survival (OS) and/or Progression-Free Survival (PFS), Hazard Ratio (HR), and 95% Confidence Intervals (CIs). The exclusion criteria were as follows: (1) repetitive studies; (2) non-human studies; (3) the study report was not in English; and (4) reviews, meta-analyses, or case reports. ### 2.3. Data Extraction The result information, such as the 95% CI of OS and/or PFS, HR, duration of exposure to PPI, cancer type, PPI treatment, type of ICIs treatment, sample size, age, region, first author, and year of publication, was extracted from the studies that were included. To reduce the influence of confounding factors, the multivariate analysis was selected to calculate the HR values to the extent possible. ### 2.4. Quality Assessment The literature included in the study consisted of retrospective studies, and the quality evaluation of research referred to the Newcastle-Ottawa Quality Assessment Scale (NOS) [16]. The evaluation was scored with respect to three aspects: selection of topic, comparability, and evaluation of results. For the NOS system, a score of 6 or more was defined as high quality [17]. ### 2.5. Statistical Analysis The HR and 95% CI of OS and/or PFS were meta-analyzed by applying the Review Manager 5.4 software for Windows, while HR >1.0 was considered as poor OS or poor PFS in the outcomes. The funnel plot assessed the publication bias. The heterogeneity of the included studies was assessed by I2 statistics, and the sensitivity analysis, Begg's tests, and Egger's tests were evaluated with Stata 15 software for Windows. When I2 was greater than 50%, it was regarded that the research had great heterogeneity, and the random-effects model was adopted. The extracted data were analyzed with the dichotomous Mantel–Haenszel model. In this study, P values <0.05 were considered statistically significant.
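To make the pooling described above concrete, the sketch below combines log hazard ratios with inverse-variance weights and a DerSimonian-Laird random-effects estimator, using the same I2 logic for judging heterogeneity. This is an illustration under stated assumptions rather than the authors' actual Review Manager 5.4 workflow, and the example input is a small subset of the OS hazard ratios listed in Table 2.

```python
import math


def pool_hazard_ratios(hrs_with_ci, z=1.96):
    """DerSimonian-Laird random-effects pooling of hazard ratios.

    hrs_with_ci: list of (hr, ci_low, ci_high) tuples on the HR scale.
    Returns (pooled_hr, ci_low, ci_high, i_squared_percent).
    """
    y = [math.log(hr) for hr, lo, hi in hrs_with_ci]
    se = [(math.log(hi) - math.log(lo)) / (2 * z) for hr, lo, hi in hrs_with_ci]
    w = [1.0 / s ** 2 for s in se]                        # fixed-effect (inverse-variance) weights
    y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, y))
    df = len(y) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0       # between-study variance
    w_star = [1.0 / (s ** 2 + tau2) for s in se]          # random-effects weights
    y_pooled = sum(wi * yi for wi, yi in zip(w_star, y)) / sum(w_star)
    se_pooled = math.sqrt(1.0 / sum(w_star))
    return (math.exp(y_pooled),
            math.exp(y_pooled - z * se_pooled),
            math.exp(y_pooled + z * se_pooled),
            i2)


# Three of the OS hazard ratios from Table 2, used only to illustrate the call.
subset = [(1.45, 1.20, 1.75), (0.96, 0.89, 1.04), (1.51, 1.28, 1.80)]
print(pool_hazard_ratios(subset))
```

Review Manager applies the same general inverse-variance logic, so a sketch like this is mainly useful for sanity-checking extracted hazard ratios rather than for reproducing the published figures exactly.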
## 3. Results ### 3.1. Selection of Study Figure 1 illustrates the flow chart of the selection of studies. 52 studies were retrieved from the databases, together with 9 supplemental studies from other sources. After repetitive studies (n = 19) were deleted, 42 articles remained; studies of unrelated topics (n = 14) were then removed. Subsequently, according to the above inclusion and exclusion criteria, 17 articles were excluded, comprising reviews, meta-analyses, and case reports (n = 8), no results posted (n = 4), and no NSCLC (n = 5). Finally, 11 published articles in total were selected in our study up to May 2022. ### 3.2. Characteristics of the Studies Included As shown in Table 1, eleven published articles containing 7,893 patients were included in the study. Among the 11 studies, 3, 2, 2, 2, and 2 studies were performed in Asia, Worldwide, Europe, America, and Oceania, respectively. Most patients were treated with PPI before and/or shortly after the beginning of ICIs, and the type of ICI treatment was dominated by PD-(L)1. As shown in Table 2, the 11 studies included were retrospective studies with result information of OS and/or PFS.
The HR values were extracted from the univariate analysis of 5 studies and multivariate analysis of another 5 studies. The NOS score of all the included studies was greater than or equal to 6, which could be considered as high-quality articles.Table 1 Baseline characteristics of the included studies. AuthorYearAgeRegionCancer typeICI treatmentPPI treatmentNo. of PPIPatientsPPI exposureChalabi et al. [30]2020NAWorldwideNSCLCPD-L1Omeprazole, pantoprazole, lansoprazole, rabeprazole, esomeprazole, dexlansoprazole234757Prior, within (30 days)Hakozaki et al. [31]201967AsiaNSCLCPD-1NA4790Prior (30 days)Svaton et al. [32]202067EuropeNSCLCPD-1Omeprazole, pantoprazole, lansoprazole64224Prior, within (30 days)Zhao et al. [33]201962AsiaNSCLCPD-1, otherNA40109Prior, within (30 days)Stokes et al. [34]202169AmericaNSCLCPD-(L)1Omeprazole (majority)21593634Within (90 days)Miura et al. [35]202165AsiaNSCLCPD-1Lansoprazole, rabeprazole, Esomeprazole163300WithinCortellini et al. [36]202170.1EuropeNSCLCPD-L1NA474950Prior, within (30 days)Giordan et al. [29]202163.9WorldwideNSCLCPD-(L)1Pantoprazole, esomeprazole, lansoprazole, Rabeprazole, omeprazole47212Prior (30 days)Hopkins et al. [37]2022NAOceaniaNSCLCPD-(L)1NA4411202Prior, within (30 days)Hopkins et al. [38]2022NAOceaniaNSCLCPD-L1Omeprazole,pantoprazole, esomeprazole, lansoprazole, rabeprazole, dexlansoprazole, vanoprazan12254458WithinHusain et al. [39]2021NAAmericaNSCLCPD-(L)1NA149415WithinNSCLC, non-small cell lung cancer; PPI, proton pump inhibitor; PD-1, programmed cell death protein-1; PD-L1, programmed cell death ligand 1; NA, not available; ICI, immune checkpoint inhibitor.Table 2 Quality assessment and prognostic information of the included studies. AuthorYearMethodOutcomeHR (95% CI) for OSHR (95% CI) for PFSAnalysisNOS scoreChalabi et al. [30]2020REOS/PFS1.45 (1.20–1.75)1.30 (1.10–1.53)NA8Hakozaki et al. [31]2019REOS1.90 (0.80–4.51)NAM6Svaton et al. [32]2020REOS/PFS1.22 (0.72–2.05)1.36 (0.89–2.06)M8Zhao et al. [33]2019REOS/PFS0.68 (0.33–1.43)0.91 (0.54–1.54)U8Stokes et al. [34]2021REOS0.96 (0.89–1.04)NAM7Miura et al. [35]2021REOS1.36 (0.96–1.91)NAM7Cortellini et al. [36]2021REOS/PFS1.51 (1.28–1.80)1.36 (1.17–1.59)U8Giordan et al. [29]2021REOS/PFS1.89 (1.23–2.90)1.51 (1.11–2.05)M7Hopkins et al. [37]2022REOS/PFS1.53 (1.21–1.95)1.34 (1.12–1.61)U7Hopkins et al. [38]2022REOS/PFS1.00 (0.85–1.17)0.93 (0.76–1.13)U8Husain et al. [39]2021REOS1.43 (1.06–1.92)NAU6OS, overall survival; PFS, progression-free survival; HR, hazard ratio, NA, not available; U, univariate; M, multivariate; NOS, Newcastle-Ottawa Scale; RE, retrospective. ### 3.3. The Association between PPI Use and OS As indicated in Figure2(a), 11 studies with 7,893 NSCLC patients were selected to perform meta-analysis for OS. The result revealed that in patients who had received ICIs treatment, the use of PPI was found to be significantly associated with poor OS (HR: 1.30, 95% CI: 1.10–1.54, P=0.003). Nevertheless, significant heterogeneity existed in this analysis (I2 = 82%, P<0.001).Figure 2 The forest plots of the hazard ratios (HRs) and 95% CIs for overall survival (a) and progression-free survival (b). (a)(b) ### 3.4. The Association between PPI Use and PFS As shown in Figure2(b), 7 studies with 3,454 NSCLC patients were selected to perform meta-analysis for PFS. The results revealed that PPI use was significantly associated with poor PFS in the patients who had received ICIs treatment (HR: 1.25, 95% CI: 1.09–1.42, P=0.001), with significant heterogeneity (I2 = 56%, P=0.04). ### 3.5. 
Subgroup Analysis of OS To further assess the influence of PPI use in OS, the subgroup analyses were performed with regard to region, age, sample size, immunotherapy drugs, and duration of PPI exposure. As illustrated in Table3, in terms of the region subgroup, PPI use was significantly related to poor OS in patients from Europe (HR = 1.48 [1.26, 1.74], P<0.001) and worldwide multi-center studies (HR = 1.54 [1.24, 1.91], P<0.001). The PPI use in patients with age less than or equal to 65, was found to be significantly associated to poor OS in the subgroup analysis related to age (HR = 1.56 [1.14, 2.15], P=0.006). With regard to the subgroup of sample size, the usage of PPI was found to be significantly associated to poor OS in studies with sample sizes less than or equal to 300 (HR = 1.37 [1.02, 1.84], P=0.04), and more than 300 (HR = 1.27 [1.04, 1.56], P=0.02). In the analysis of the subgroup of immunotherapy drugs, the PPI use was found to be significantly related to poor OS in patients who had received PD-1 treatment (HR = 1.37 [1.04, 1.79], P=0.03). With regard to the duration of PPI exposure subgroup, the result indicated that PPI use was significantly related to poor OS in patients who had received PPI treatment at 30 days before ICIs initiation (−30: HR = 1.89 [1.29, 2.78], P=0.001), and 30 days before and after starting ICIs treatment (±30: HR = 1.44 [1.27, 1.64], P<0.001).Table 3 The subgroup analysis of the correlation between the use of PPI and clinical efficacy of ICIs for overall survival. SubgroupNo. of studiesOS hazard ratios (95% CI)PvalueHeterogeneityI2 (%)PvalueRegionWorldwide21.54 [1.24, 1.91]<0.00119.000.27Asia31.21 [0.74, 1.98]0.4447.000.15Europe21.48 [1.26, 1.74]<0.00100.45America21.14 [0.77, 1.68]0.5185.000.01Oceania21.23 [0.81, 1.86]0.3488.000.004Age≤6531.56 [1.14, 2.15]0.00627.000.24>6541.26 [0.89, 1.79]0.1988.00<0.001Sample size≤30051.37 [1.02, 1.84]0.0437.000.17>30061.27 [1.04, 1.56]0.0289.00<0.001Immunotherapy drugPD-L131.30 [0.99, 1.69]0.0686.00<0.001PD-131.37 [1.04, 1.79]0.0300.69PD-1, other10.68 [0.33, 1.42]0.3NANAPD-(L)141.37 [0.98, 1.92]0.0788.00<0.001PPI exposure−3021.89 [1.29, 2.78]0.00100.99±3051.44 [1.27, 1.64]<0.00119.000.3∞41.10 [0.93, 1.30]0.2769.000.02OS, overall survival; PD-1, programmed cell death protein-1; PD-L1, programmed cell death ligand 1; HR, hazard ratio; NA, not available; PPI: proton pump inhibitors. ### 3.6. Subgroup Analysis of PFS As in the subgroup analysis for OS, the PFS subgroup analyses were also performed with respect to region, age, sample size, immunotherapy drugs, and duration of PPI exposure, as shown in Table4. In term of region subgroup, the use of PPI was significantly related to poor PFS in patients from Europe (HR = 1.36 [1.18, 1.57], P<0.001) and worldwide multi-center studies (HR = 1.34 [1.16, 1.55], P<0.001). The PPI usage indicated significant association with poor PFS in patients having age above 65 years (HR = 1.36 [1.18, 1.57], P<0.001) in the age subgroup. In case of the sample size subgroup, the PPI use was found to be significantly related to poor PFS in studies with the sample size more than 300 (HR = 1.23 [1.04, 1.44], P=0.01). With regard to the subgroup analysis of immunotherapy drugs, PPI use was found to be significantly related to poor PFS in patients having received PD-L1 treatment (HR = 1.33 [1.19, 1.49], P<0.001). 
In the case of the subgroup regarding duration of PPI exposure, the result revealed that PPI use was significantly associated with poor PFS in patients treated with PPI at 30 days before starting the ICIs treatment (−30: HR = 1.51 [1.11, 2.05], P=0.008), and 30 days before and after starting the ICIs treatment (±30: HR = 1.32 [1.20, 1.45], P<0.001).

Table 4 The subgroup analysis of the correlation between the use of PPI and clinical efficacy of ICIs for progression-free survival.

| Subgroup | No. of studies | PFS hazard ratio (95% CI) | P value | Heterogeneity I² (%) | Heterogeneity P value |
|---|---|---|---|---|---|
| Region: Worldwide | 2 | 1.34 [1.16, 1.55] | <0.001 | 0 | 0.4 |
| Region: Asia | 1 | 0.91 [0.54, 1.54] | 0.72 | NA | NA |
| Region: Europe | 2 | 1.36 [1.18, 1.57] | <0.001 | 0 | 0.99 |
| Region: Oceania | 2 | 1.12 [0.78, 1.60] | 0.54 | 86.00 | 0.008 |
| Age: ≤65 | 2 | 1.23 [0.75, 2.00] | 0.41 | 63.00 | 0.1 |
| Age: >65 | 2 | 1.36 [1.18, 1.57] | <0.001 | 0 | 0.99 |
| Sample size: ≤300 | 3 | 1.31 [1.00, 1.71] | 0.05 | 25.00 | 0.26 |
| Sample size: >300 | 4 | 1.23 [1.04, 1.44] | 0.01 | 71.00 | 0.01 |
| Immunotherapy drug: PD-L1 | 2 | 1.33 [1.19, 1.49] | <0.001 | 0 | 0.69 |
| Immunotherapy drug: PD-1 | 1 | 1.36 [0.89, 2.07] | 0.15 | NA | NA |
| Immunotherapy drug: PD-1, other | 1 | 0.91 [0.54, 1.54] | 0.72 | NA | NA |
| Immunotherapy drug: PD-(L)1 | 3 | 1.17 [0.73, 1.88] | 0.52 | 85.00 | 0.009 |
| PPI exposure: −30 | 1 | 1.51 [1.11, 2.05] | 0.008 | NA | NA |
| PPI exposure: ±30 | 5 | 1.32 [1.20, 1.45] | <0.001 | 0 | 0.71 |
| PPI exposure: ∞ | 1 | 0.93 [0.76, 1.13] | 0.47 | NA | NA |

PFS, progression-free survival; PD-1, programmed cell death protein-1; PD-L1, programmed cell death ligand 1; HR, hazard ratio; NA, not available.

### 3.7. Publication Bias The funnel plots assisted in assessing the publication bias, and the results revealed that there was no significant asymmetry about HR (OS or PFS), suggesting that there was little possibility of publication bias (Figures 3(a) and 3(b)). In addition, Begg's tests and Egger's tests were also used to verify whether there is publication bias. As shown in Figures 3(c) and 3(d), there was no significant publication bias in the HR value of OS (Begg's test, P=0.640; Egger's test, P=0.059) and PFS (Begg's test, P=0.368; Egger's test, P=0.724).Figure 3 The publication bias. (a) Funnel plot analysis of overall survival (OS). (b) Funnel plot analysis of progression-free survival (PFS). (c) Begg's funnel plots for evaluating the publication bias of overall survival (OS). (d) Begg's funnel plots for evaluating the publication bias of progression-free survival (PFS). ### 3.8. Sensitivity Analysis We performed sensitivity analysis on the included literature. It is evident from the results that no single study had a great impact on the final combined HR value of OS and PFS. Therefore, we believe that the combined results of this study were reliable and robust (Figures 4(a) and 4(b)).Figure 4 The sensitivity analysis. (a) Sensitivity analysis for hazard ratio (HR) of overall survival (OS). (b) Sensitivity analysis for hazard ratio (HR) of progression-free survival (PFS).
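The leave-one-out procedure behind the sensitivity analysis in Section 3.8 can be sketched in a few lines: each study is dropped in turn, the remaining hazard ratios are re-pooled, and a stable pooled estimate across all omissions indicates that no single study drives the result. The sketch below uses a simplified fixed-effect (inverse-variance) pooling for brevity, whereas the paper switches to a random-effects model when I2 exceeds 50%; the input values are a subset of the OS hazard ratios from Table 2, used only to illustrate the call.

```python
import math


def pooled_hr_fixed(hrs_with_ci, z=1.96):
    """Inverse-variance fixed-effect pooled hazard ratio (simplified sketch;
    the meta-analysis itself used a random-effects model when I^2 > 50%)."""
    log_hrs = [math.log(hr) for hr, lo, hi in hrs_with_ci]
    # Weight = 1 / SE^2, with the SE recovered from the 95% CI on the log scale.
    weights = [(2 * z / (math.log(hi) - math.log(lo))) ** 2 for hr, lo, hi in hrs_with_ci]
    return math.exp(sum(w * y for w, y in zip(weights, log_hrs)) / sum(weights))


def leave_one_out(hrs_with_ci):
    """Drop each study in turn and re-pool; a large shift flags an influential study."""
    return [pooled_hr_fixed(hrs_with_ci[:i] + hrs_with_ci[i + 1:])
            for i in range(len(hrs_with_ci))]


# A subset of the OS hazard ratios (HR, 95% CI lower, 95% CI upper) from Table 2.
studies = [(1.45, 1.20, 1.75), (1.22, 0.72, 2.05), (0.68, 0.33, 1.43),
           (1.51, 1.28, 1.80), (1.53, 1.21, 1.95)]
for i, hr in enumerate(leave_one_out(studies)):
    print(f"pooled OS HR without study {i}: {hr:.2f}")
```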
## 4. Discussion The clinical efficacy or survival outcome of the use of PPI combined with ICIs in patients with NSCLC has not been known properly. Nevertheless, the following points deserve our attention. First, PPI has been proved to play a pivotal role in operating the immunologic defense to the treatment of tumors by regulating the activity or compositions of the gastrointestinal bacteria [18]. Second, for patients with gastric ulcers or gastrointestinal bleeding history, PPI could be prophylactically used to prevent the occurrence of stress ulcers [10].
Third, since PPI could greatly increase the sensitivity of cancer patients to chemotherapy, it was often used together with other types of anticancer drugs, such as ICIs, nonetheless, the clinical efficacy of the combination has remained unknown until today. Finally, whether the combination of PPI and ICIs would increase the adverse reactions related to the two drugs. For example, the long-term use of omeprazole would correspondingly increase the risk of liver failure and chronic kidney disease, thus affecting the efficacy of cancer treatment. A study revealed that the PPI use would significantly impact the composition of the gastrointestinal bacteria and greatly reduce the clinical efficacy of ICIs [19, 20]. In the study of Derosa et al. [21], it is shown that for advanced renal cell carcinoma and NSCLC patients, the use of antibiotics combined with ICIs would also lead to poor OS and PFS, which may be due to the fact that antibiotics greatly inhibit the diversity and abundance of intestinal flora, leading to the inability to fully mobilize the immune function. This may be similar to the reason why PPI combined with ICIs produced poor clinical efficacy. Other studies revealed that the PPI use would not have any influence on the clinical efficacy of ICIs [22]. Hence, to determine the correlation between the clinical efficacy of ICIs in NSCLC and the use of PPI, a meta-analysis was conducted.On the one hand, compared with the study of Wei et al., Li et al., Qin et al., and Sophia et al. [10, 15, 23, 24], this study included more studies on NSCLC (n = 11 vs. n = 6 for Wei et al., n = 7 for Li et al., n = 7 for Qin et al., n = 4 for Sophia et al.) and more patients (n = 7,893 vs. n = 5,114 for Wei et al., n = 1,428 for Li et al., n = 3,647 for Qin et al., n = 2,940 for Sophia et al.), especially including two articles published in 2022. On the other hand, based on more factors, like the duration of exposure to PPI, the PPI treatment, type of ICIs treatment, sample size, age, and region, the subgroup analysis was conducted. It would further help to understand the actual role of PPI in the combined use of ICIs drugs in cancer treatment. Hence, it is believed that our study has been very necessary and could provide certain basis for the rationale in the usage of PPI in clinical practices.In patients treated with ICIs, the result indicated that PPI use was associated significantly to poor PFS (HR: 1.25 [1.09–1.42]) and poor OS (HR: 1.30 [1.10–1.54]). Nonetheless, in patients who had received ICIs treatment, the PPI use was not found to be associated with PFS and OS according to the study of Li et al. and Meng et al. [15, 25] in 2020. This could be due to the fact that the PPI use would lead to greater changes in the activity and composition of the gastrointestinal microbiota, which would be related to the tolerance of the T-cells. Simultaneously, the PPI use would not only affect the microbiota of gastrointestinal tract, but also would have a certain impact on the growth, metastasis, and progression of the tumor. In the study of De Milito et al. and Bellone et al. [26, 27], it was proposed that PPI could impact the tumor growth and metastasis by regulating the acidic microenvironment of the tissues around tumors. Meanwhile, the use of PPI severely inhibited the hydrogen ion ATPase pump, thus reversing the pH gradient of acidic microenvironment [27]. Besides, the use of PPI greatly promoted the generation of M2-subtype macrophages and pro-inflammatory cytokines (such as interleukin 7) [28]. 
All of these will reduce the immunosuppressive ability of tumor microenvironment and greatly inhibit the activity of ICIs [29]. Moreover, PPI would also increase the sensitivity of patients to chemotherapy and immunotherapy [9]. Hence, based on the PPI use, predicting the clinical efficacy of ICIs in NSCLC patients is highly difficult. Besides, further basic and relevant clinical studies would be required.On factors like duration of PPI exposure, type of ICIs treatment, sample size, age, and region, a subgroup analysis was conducted to explore further the correlation between the clinical efficacy of ICIs and the use of PPI. PPI use was found to be significantly related to poor OS in case of the analysis on the region subgroup (Europe: HR = 1.48 [1.26, 1.74], Worldwide: HR = 1.54 [1.24, 1.91]) and poor PFS (Europe: HR = 1.36 [1.18, 1.57], Worldwide: HR = 1.34 [1.16, 1.55]) in patients from Europe and worldwide multi-center studies. It was suggested that the multi-center research projects need to be promoted between regions and countries. In term of sample size subgroup, PPI usage was found to be significantly related to poor OS in studies with the sample size less than or equal to 300 (HR = 1.37 [1.02, 1.84]), and more than 300 (HR = 1.27 [1.04, 1.56]), and poor PFS in studies with the sample size more than 300 (HR = 1.23 [1.04, 1.44]). In clarifying the correlation between clinical efficacy of ICIs and the PPI usage, the sample size played a crucial role, as confirmed from the results, and it has been recommended to include as many relevant samples as possible in clinical studies. For age subgroup, PPI use was significantly related to poor OS in patients with age less than or equal to 65 years (HR = 1.56 [1.14, 2.15]), and to poor PFS in patients with age more than 65 years (HR = 1.36 [1.18, 1.57]). People of different ages have different sensitivities to PPI. Hence, the clinical use of PPI needs to be cautious. With regard to the duration of PPI exposure subgroup, the PPI use was found to be significantly related to poor OS (−30: HR = 1.89 [1.29, 2.78], ±30: HR = 1.44 [1.27, 1.64]) and poor PFS (−30: HR = 1.51 [1.11, 2.05], ±30: HR = 1.32 [1.20, 1.45]) in patients treated with PPI drugs at 30 days before and/or after starting ICIs treatment. In the clinic, we need to stop the application of PPI immediately before and/or after starting the ICIs treatment, so as to provide the patients with good therapeutic effect. Finally, for type of ICIs treatment subgroup, PPI use displayed a poor prognosis for patients having received PD-L1 or PD-1 treatment, which possibly was related to the limited sample size included in this study. In addition, based on the above discussion, we suggest that when PPI is used in combination with ICIs in clinical practice, appropriate adjustment of the dysbiosis of organism and gastrointestinal bacterial colony disorder caused by the use of PPI may greatly improve the clinical efficacy and prognosis of relevant patients. Hence, to clarify the relationship between the clinical efficacy of ICIs in NSCLC and the PPI usage, more relevant research would be needed.Our study had certain limitation. First, some PFS data were missing in the included studies, and this study only extracted the HR value and 95% CI value of the included study rather than the initial data of the study, which could have a greater impact on our results. Second, the studies included were retrospective studies. 
In the process of extracting data, such as sample size, type of ICIs treatment, and duration of PPI exposure, region, and age, the detailed information of relevant data could not be known, which could lead to certain limitations in the overall and subgroup analysis of this study. Third, this study only included studies published in English, while those published in other languages, for instance, Chinese, were not included, which could indirectly lead to increased heterogeneity of this study. Finally, it was found that there was no direct study proving the association between the PPI use and the clinical efficacy of ICIs in NSCLC. Hence, more research related to this becomes imperative. ## 5. Conclusion In conclusion, our meta-analysis found that PPI combined with ICIs in the treatment of NSCLC patients possibly resulted in poor OS and PFS. In term of subgroup analysis, PPI use had a poor prognosis for patients having received PD-L1 or PD-1 treatment, or those who received PPI drugs at 30 days before and/or after ICIs initiation. Simultaneously, the effect of PPI on patients of different age groups was also different. Hence, in clinical practice, we need to be extremely cautious in the PPI use to avoid the influence in the efficacy of ICIs. Nevertheless, the concrete mechanism between the use of PPI and the efficacy of ICIs needs to be further studied, so as to further improve the clinical treatment level of the related tumors. --- *Source: 1001796-2022-07-09.xml*
--- ## Abstract Background. To determine if the use of the Proton Pump Inhibitors (PPI) impacts the clinical efficacy of Immune Checkpoint Inhibitors (ICIs) in Non-Small Cell Lung Cancer (NSCLC), a meta-analysis was conducted. Method. Eleven studies from PubMed, EMBASE, Cochrane Library, Web of Science, and other databases up to May 2022, were selected. The pertinent clinical outcomes were assessed by applying the Progression-free survival (PFS), Overall Survival (OS), Hazard Ratio (HR), and 95% Confidence Interval (CI). Result. This study included eleven articles containing 7,893 NSCLC patients. The result indicated that PPI use was dramatically related to poor OS (HR: 1.30 [1.10–1.54]), and poor PFS (HR: 1.25 [1.09–1.42]) in case of patients treated with ICIs. With regard to the subgroup analysis, PPI use was dramatically associated with poor OS (Europe: HR = 1.48 [1.26, 1.74], Worldwide: HR = 1.54 [1.24, 1.91]), and poor PFS (Europe: HR = 1.36 [1.18, 1.57], Worldwide: HR = 1.34 [1.16, 1.55]) in patients from Europe and multi-center studies across the world, poor OS in patients with age less than or equal to 65 (HR = 1.56 [1.14, 2.15]), poor PFS in patients aged more than 65 (HR = 1.36 [1.18, 1.57]), poor OS for patients receiving with PD-1 (HR = 1.37 [1.04, 1.79]), poor PFS for patients receiving with PD-L1 (HR = 1.33 [1.19, 1.49]), and poor OS (−30: HR = 1.89 [1.29, 2.78], ±30: HR = 1.44 [1.27, 1.64]) and poor PFS (−30: HR = 1.51 [1.11, 2.05], ±30: HR = 1.32 [1.20, 1.45]) for patients who received PPI at 30 days before and/or after starting the ICIs treatment. Conclusion. Our meta-analysis indicated that PPI combined with ICIs in the treatment of NSCLC patients could result in poor OS and PFS. PPI use should be extremely cautious in clinical practices to avoid the impact on the efficacy of the ICIs. --- ## Body ## 1. Introduction Non-small cell lung cancer (NSCLC) is one of the most common cancers, accounting for about 80% of all lung cancers. According to incomplete statistics, NSCLC kills 1.6 million people worldwide each year [1]. The main pathologic types of NSCLC include squamous cell carcinoma, adenocarcinoma, and large cell carcinoma [2]. Compared with small cell carcinoma, the growth and division rate of cancer cells in NSCLC is slow, and the diffusion and metastasis occur relatively late. However, after systematic treatment, the 5-year survival rate of some NSCLC patients is still not ideal [3]. Nowadays, for NSCLC, a single treatment plan, such as surgery, radiotherapy, or immunotherapy, may be difficult to achieve ideal results. Therefore, combination therapy has gradually come into people’s vision, such as chemotherapy or radiotherapy after surgery, and Proton Pump Inhibitor (PPI) combined with Immune Checkpoint Inhibitors (ICIs). However, the clinical efficacy of these combined therapies for patients is still unclear, and more relevant studies are needed to explore.As an effective inhibitor of gastric acid secretion, the PPI has been used widely to treat the hypersecretion of gastric acid and other related diseases around the world, for instance, Gastroesophageal Reflux Disease (GERD) and gastric ulcers [4]. Studies have indicated that use of PPI for a long time could increase the risk of related tissue histopathological changes, and could lead to the disorder of normal colonies in the gastrointestinal tract, greatly increasing the risk of gastrointestinal infection [5–7]. 
In recent years, since PPI could enhance the sensitivity of cancer patients towards chemotherapy, hence, it has gained prominence in the field of tumor treatment [8, 9]. Nevertheless, the efficacy and risk of the use of PPI are different for various cancer types [10]. However, the efficacy of the use of PPI in the treatment of cancer is not very clear, and needs further research.Presently, the cancer immunotherapy mainly includes the following three kinds, ICIs and adoptive cell therapy, operating the immunologic defense to differentiate, and attack tumor cells [11]. Among them, the ICIs have been used in a widespread manner in the neighborhood of tumor treatment, greatly improving the strategy of treating related cancer. Cytotoxic drug T lymphocyte associated antigen 4 (CTLA-4) inhibitor, programmed cell death ligand 1 (PD-L1), as well as programmed cell death 1 (PD-1) are the three kinds of ICIs widely used clinically [12, 13]. However, certain controversial aspects of cancer immunotherapy still remain. As the immune system could be over activated during immunotherapy, bring with it, it could bring serious side effects to the patients, and the adverse reactions in individual cases were serious and even life-threatening at times [14]. These illustrated that the clinical efficacy of ICIs was not very clear. Today, with the popularity of the combination therapy, the ICIs are often combined with the PPI. In one study, for NSCLC patients, the PPI combined with ICIs led to a negative result, nevertheless, in case of the melanoma patients, it produced a positive result [15]. Hence, it is controversial whether the clinical efficacy of ICIs in NSCLC is related to the use of the PPI.This study was intended to determine if there was any correlation between the clinical efficacy of the ICIs in NSCLC and the use of PPI. ## 2. Materials and Method ### 2.1. Search Strategy The literatures involved in this study were independently screened by two researchers (D. H. and W. W.) to determine whether they met the inclusion or exclusion criteria, and any differences would be resolved by consensus with third party researcher (J. Z.). Our search strategy is as illustrated in Figure1, searching the studies from PubMed, EMBASE, Cochrane Library, and Web of Science databases up to May 2022. The keywords searched were, “Non-small Cell Lung Cancer,” “Non-Small Cell Lung Cancer,” “carcinoma, non-small-cell lung” “Non-Small Cell Lung Carcinoma,” “Lung Carcinoma, Non-Small-Cell,” “programmed death-ligand 1 inhibitor,” “PD-L1 inhibitor,” “Immunotherapy,” “programmed death receptor 1 inhibitor,” “PD-1 inhibitor,” “cytotoxic T lymphocyte antigen-4 inhibitor,” “CTLA-4 inhibitor,” and “proton pump inhibitor.”Figure 1 The flow chart of study selection. ### 2.2. The Criteria for Inclusion and Exclusion The criteria for inclusion were: (1) The collected literature involving the usage of PPI and the clinical efficacy in NSCLC of the ICIs; (2) Use of PPI done before and/or after starting the ICIs treatment; (3) Patients received just the ICIs treatments or combined with PPI; (4) The inclusion of the non-using PPI and using PPI groups; and (5) The outcome of study should contain the Overall Survival (OS) and/or Progression-Free Survival (PFS), Hazard Ratio (HR), and 95% Confidence Intervals (CIs). The exclusion criteria were as under: (1) Repetitive studies; (2) Non-human studies; (3) The study report was not in English; and (4) The reviews and meta-analyses, or the case report. ### 2.3. 
Data Extracting/ The result information like 95% CI of OS and/or PFS, HR, duration of exposure to PPI, cancer type, PPI treatment, type of ICIs treatment, sample size, age, region, first author, and year of publication were extracted from the studies that were included. To reduce the influence of the confounding factors, the multivariate analysis was selected to calculate the HRs value to the extent possible. ### 2.4. Quality Assessment The literature included in the study were retrospective studies and the quality evaluation of research referred to the Newcastle—Ottawa Quality Assessment Scale (NOS) [16]. The evaluation was scored with respect to three aspects, selection of topic, comparability, and evaluation of results. For the NOS system, a score of 6 or more of studies was defined as high quality [17]. ### 2.5. Statistical Analysis The HR and 95% CI of OS and/or PFS were meta-analyzed by applying the Review Manager 5.4 software for Win, while HR >1.0 was considered as poor OS or poor PFS in the outcomes. The funnel plot assessed the publication bias. The heterogeneity of the studies included was assessed byI2 statistics, and the sensitivity analysis, Begg’s tests, and Egger’s tests of studies were evaluated by Stata 15 software for Win. When I2 was greater than 50%, it was regarded that the research had great heterogeneity, and the random effect model was adopted. The extracted data were analyzed by the dichotomous, Mantel–Haenszel method model. In this study, P values <0.05 were considered as statistically significant. ## 2.1. Search Strategy The literatures involved in this study were independently screened by two researchers (D. H. and W. W.) to determine whether they met the inclusion or exclusion criteria, and any differences would be resolved by consensus with third party researcher (J. Z.). Our search strategy is as illustrated in Figure1, searching the studies from PubMed, EMBASE, Cochrane Library, and Web of Science databases up to May 2022. The keywords searched were, “Non-small Cell Lung Cancer,” “Non-Small Cell Lung Cancer,” “carcinoma, non-small-cell lung” “Non-Small Cell Lung Carcinoma,” “Lung Carcinoma, Non-Small-Cell,” “programmed death-ligand 1 inhibitor,” “PD-L1 inhibitor,” “Immunotherapy,” “programmed death receptor 1 inhibitor,” “PD-1 inhibitor,” “cytotoxic T lymphocyte antigen-4 inhibitor,” “CTLA-4 inhibitor,” and “proton pump inhibitor.”Figure 1 The flow chart of study selection. ## 2.2. The Criteria for Inclusion and Exclusion The criteria for inclusion were: (1) The collected literature involving the usage of PPI and the clinical efficacy in NSCLC of the ICIs; (2) Use of PPI done before and/or after starting the ICIs treatment; (3) Patients received just the ICIs treatments or combined with PPI; (4) The inclusion of the non-using PPI and using PPI groups; and (5) The outcome of study should contain the Overall Survival (OS) and/or Progression-Free Survival (PFS), Hazard Ratio (HR), and 95% Confidence Intervals (CIs). The exclusion criteria were as under: (1) Repetitive studies; (2) Non-human studies; (3) The study report was not in English; and (4) The reviews and meta-analyses, or the case report. ## 2.3. Data Extracting/ The result information like 95% CI of OS and/or PFS, HR, duration of exposure to PPI, cancer type, PPI treatment, type of ICIs treatment, sample size, age, region, first author, and year of publication were extracted from the studies that were included. 
## 3. Result

### 3.1. Selection of Study

Figure 1 illustrates the flow chart of the selection of studies. 52 studies retrieved from the databases were considered, together with 9 supplemental studies from other sources. After repetitive studies (n = 19) were deleted, 42 articles remained, and studies on unrelated topics (n = 14) were then removed. Subsequently, according to the inclusion and exclusion criteria above, a further 17 articles were excluded, comprising reviews, meta-analyses, and case reports (n = 8), no results posted (n = 4), and no NSCLC (n = 5). Finally, 11 published articles were selected for our study up to May 2022.

### 3.2. Characteristics of the Studies Included

As shown in Table 1, eleven published articles containing 7,893 patients were included in the study. Among the 11 studies, 3, 2, 2, 2, and 2 studies were performed in Asia, Worldwide (multi-center), Europe, America, and Oceania, respectively. Most patients were treated with PPI before and/or shortly after the beginning of ICIs, and the type of ICI treatment was dominated by PD-(L)1. As shown in Table 2, the 11 included studies were retrospective studies with outcome information on OS and/or PFS. The HR values were extracted from the univariate analysis of 5 studies and the multivariate analysis of another 5 studies. The NOS score of all the included studies was greater than or equal to 6, so they could be considered high-quality articles.
Table 1 Baseline characteristics of the included studies.

| Author | Year | Age | Region | Cancer type | ICI treatment | PPI treatment | No. of PPI | Patients | PPI exposure |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Chalabi et al. [30] | 2020 | NA | Worldwide | NSCLC | PD-L1 | Omeprazole, pantoprazole, lansoprazole, rabeprazole, esomeprazole, dexlansoprazole | 234 | 757 | Prior, within (30 days) |
| Hakozaki et al. [31] | 2019 | 67 | Asia | NSCLC | PD-1 | NA | 47 | 90 | Prior (30 days) |
| Svaton et al. [32] | 2020 | 67 | Europe | NSCLC | PD-1 | Omeprazole, pantoprazole, lansoprazole | 64 | 224 | Prior, within (30 days) |
| Zhao et al. [33] | 2019 | 62 | Asia | NSCLC | PD-1, other | NA | 40 | 109 | Prior, within (30 days) |
| Stokes et al. [34] | 2021 | 69 | America | NSCLC | PD-(L)1 | Omeprazole (majority) | 2159 | 3634 | Within (90 days) |
| Miura et al. [35] | 2021 | 65 | Asia | NSCLC | PD-1 | Lansoprazole, rabeprazole, esomeprazole | 163 | 300 | Within |
| Cortellini et al. [36] | 2021 | 70.1 | Europe | NSCLC | PD-L1 | NA | 474 | 950 | Prior, within (30 days) |
| Giordan et al. [29] | 2021 | 63.9 | Worldwide | NSCLC | PD-(L)1 | Pantoprazole, esomeprazole, lansoprazole, rabeprazole, omeprazole | 47 | 212 | Prior (30 days) |
| Hopkins et al. [37] | 2022 | NA | Oceania | NSCLC | PD-(L)1 | NA | 441 | 1202 | Prior, within (30 days) |
| Hopkins et al. [38] | 2022 | NA | Oceania | NSCLC | PD-L1 | Omeprazole, pantoprazole, esomeprazole, lansoprazole, rabeprazole, dexlansoprazole, vonoprazan | 1225 | 4458 | Within |
| Husain et al. [39] | 2021 | NA | America | NSCLC | PD-(L)1 | NA | 149 | 415 | Within |

NSCLC, non-small cell lung cancer; PPI, proton pump inhibitor; PD-1, programmed cell death protein-1; PD-L1, programmed cell death ligand 1; NA, not available; ICI, immune checkpoint inhibitor.

Table 2 Quality assessment and prognostic information of the included studies.

| Author | Year | Method | Outcome | HR (95% CI) for OS | HR (95% CI) for PFS | Analysis | NOS score |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Chalabi et al. [30] | 2020 | RE | OS/PFS | 1.45 (1.20–1.75) | 1.30 (1.10–1.53) | NA | 8 |
| Hakozaki et al. [31] | 2019 | RE | OS | 1.90 (0.80–4.51) | NA | M | 6 |
| Svaton et al. [32] | 2020 | RE | OS/PFS | 1.22 (0.72–2.05) | 1.36 (0.89–2.06) | M | 8 |
| Zhao et al. [33] | 2019 | RE | OS/PFS | 0.68 (0.33–1.43) | 0.91 (0.54–1.54) | U | 8 |
| Stokes et al. [34] | 2021 | RE | OS | 0.96 (0.89–1.04) | NA | M | 7 |
| Miura et al. [35] | 2021 | RE | OS | 1.36 (0.96–1.91) | NA | M | 7 |
| Cortellini et al. [36] | 2021 | RE | OS/PFS | 1.51 (1.28–1.80) | 1.36 (1.17–1.59) | U | 8 |
| Giordan et al. [29] | 2021 | RE | OS/PFS | 1.89 (1.23–2.90) | 1.51 (1.11–2.05) | M | 7 |
| Hopkins et al. [37] | 2022 | RE | OS/PFS | 1.53 (1.21–1.95) | 1.34 (1.12–1.61) | U | 7 |
| Hopkins et al. [38] | 2022 | RE | OS/PFS | 1.00 (0.85–1.17) | 0.93 (0.76–1.13) | U | 8 |
| Husain et al. [39] | 2021 | RE | OS | 1.43 (1.06–1.92) | NA | U | 6 |

OS, overall survival; PFS, progression-free survival; HR, hazard ratio; NA, not available; U, univariate; M, multivariate; NOS, Newcastle-Ottawa Scale; RE, retrospective.

### 3.3. The Association between PPI Use and OS

As indicated in Figure 2(a), 11 studies with 7,893 NSCLC patients were included in the meta-analysis for OS. The result revealed that, in patients who had received ICIs treatment, the use of PPI was significantly associated with poor OS (HR: 1.30, 95% CI: 1.10–1.54, P=0.003). Nevertheless, significant heterogeneity existed in this analysis (I2 = 82%, P<0.001).

Figure 2 The forest plots of the hazard ratios (HRs) and 95% CIs for overall survival (a) and progression-free survival (b).

### 3.4. The Association between PPI Use and PFS

As shown in Figure 2(b), 7 studies with 3,454 NSCLC patients were included in the meta-analysis for PFS. The results revealed that PPI use was significantly associated with poor PFS in the patients who had received ICIs treatment (HR: 1.25, 95% CI: 1.09–1.42, P=0.001), with significant heterogeneity (I2 = 56%, P=0.04).

### 3.5. Subgroup Analysis of OS

To further assess the influence of PPI use on OS, subgroup analyses were performed with regard to region, age, sample size, immunotherapy drugs, and duration of PPI exposure. As illustrated in Table 3, in terms of the region subgroup, PPI use was significantly related to poor OS in patients from Europe (HR = 1.48 [1.26, 1.74], P<0.001) and in worldwide multi-center studies (HR = 1.54 [1.24, 1.91], P<0.001). In the age subgroup, PPI use in patients aged 65 years or younger was significantly associated with poor OS (HR = 1.56 [1.14, 2.15], P=0.006). With regard to the sample size subgroup, the usage of PPI was significantly associated with poor OS both in studies with sample sizes of 300 or fewer (HR = 1.37 [1.02, 1.84], P=0.04) and in studies with more than 300 patients (HR = 1.27 [1.04, 1.56], P=0.02).
In the analysis of the immunotherapy drug subgroup, PPI use was significantly related to poor OS in patients who had received PD-1 treatment (HR = 1.37 [1.04, 1.79], P=0.03). With regard to the duration of PPI exposure subgroup, the results indicated that PPI use was significantly related to poor OS in patients who had received PPI treatment within 30 days before ICIs initiation (−30: HR = 1.89 [1.29, 2.78], P=0.001) and within 30 days before or after starting ICIs treatment (±30: HR = 1.44 [1.27, 1.64], P<0.001).

Table 3 The subgroup analysis of the correlation between the use of PPI and clinical efficacy of ICIs for overall survival.

| Subgroup | No. of studies | OS hazard ratio (95% CI) | P value | I2 (%) | P value (heterogeneity) |
| --- | --- | --- | --- | --- | --- |
| Region: Worldwide | 2 | 1.54 [1.24, 1.91] | <0.001 | 19.00 | 0.27 |
| Region: Asia | 3 | 1.21 [0.74, 1.98] | 0.44 | 47.00 | 0.15 |
| Region: Europe | 2 | 1.48 [1.26, 1.74] | <0.001 | 0 | 0.45 |
| Region: America | 2 | 1.14 [0.77, 1.68] | 0.51 | 85.00 | 0.01 |
| Region: Oceania | 2 | 1.23 [0.81, 1.86] | 0.34 | 88.00 | 0.004 |
| Age: ≤65 | 3 | 1.56 [1.14, 2.15] | 0.006 | 27.00 | 0.24 |
| Age: >65 | 4 | 1.26 [0.89, 1.79] | 0.19 | 88.00 | <0.001 |
| Sample size: ≤300 | 5 | 1.37 [1.02, 1.84] | 0.04 | 37.00 | 0.17 |
| Sample size: >300 | 6 | 1.27 [1.04, 1.56] | 0.02 | 89.00 | <0.001 |
| Immunotherapy drug: PD-L1 | 3 | 1.30 [0.99, 1.69] | 0.06 | 86.00 | <0.001 |
| Immunotherapy drug: PD-1 | 3 | 1.37 [1.04, 1.79] | 0.03 | 0 | 0.69 |
| Immunotherapy drug: PD-1, other | 1 | 0.68 [0.33, 1.42] | 0.3 | NA | NA |
| Immunotherapy drug: PD-(L)1 | 4 | 1.37 [0.98, 1.92] | 0.07 | 88.00 | <0.001 |
| PPI exposure: −30 | 2 | 1.89 [1.29, 2.78] | 0.001 | 0 | 0.99 |
| PPI exposure: ±30 | 5 | 1.44 [1.27, 1.64] | <0.001 | 19.00 | 0.3 |
| PPI exposure: ∞ | 4 | 1.10 [0.93, 1.30] | 0.27 | 69.00 | 0.02 |

OS, overall survival; PD-1, programmed cell death protein-1; PD-L1, programmed cell death ligand 1; HR, hazard ratio; NA, not available; PPI, proton pump inhibitor.

### 3.6. Subgroup Analysis of PFS

As in the subgroup analysis for OS, the PFS subgroup analyses were also performed with respect to region, age, sample size, immunotherapy drugs, and duration of PPI exposure, as shown in Table 4. In terms of the region subgroup, the use of PPI was significantly related to poor PFS in patients from Europe (HR = 1.36 [1.18, 1.57], P<0.001) and in worldwide multi-center studies (HR = 1.34 [1.16, 1.55], P<0.001). In the age subgroup, PPI usage was significantly associated with poor PFS in patients older than 65 years (HR = 1.36 [1.18, 1.57], P<0.001). In the sample size subgroup, PPI use was significantly related to poor PFS in studies with more than 300 patients (HR = 1.23 [1.04, 1.44], P=0.01). With regard to the immunotherapy drug subgroup, PPI use was significantly related to poor PFS in patients who had received PD-L1 treatment (HR = 1.33 [1.19, 1.49], P<0.001). In the duration of PPI exposure subgroup, the results revealed that PPI use was significantly associated with poor PFS in patients treated with PPI within 30 days before starting the ICIs treatment (−30: HR = 1.51 [1.11, 2.05], P=0.008) and within 30 days before or after starting the ICIs treatment (±30: HR = 1.32 [1.20, 1.45], P<0.001).
Table 4 The subgroup analysis of the correlation between the use of PPI and clinical efficacy of ICIs for progression-free survival.

| Subgroup | No. of studies | PFS hazard ratio (95% CI) | P value | I2 (%) | P value (heterogeneity) |
| --- | --- | --- | --- | --- | --- |
| Region: Worldwide | 2 | 1.34 [1.16, 1.55] | <0.001 | 0 | 0.4 |
| Region: Asia | 1 | 0.91 [0.54, 1.54] | 0.72 | NA | NA |
| Region: Europe | 2 | 1.36 [1.18, 1.57] | <0.001 | 0 | 0.99 |
| Region: Oceania | 2 | 1.12 [0.78, 1.60] | 0.54 | 86.00 | 0.008 |
| Age: ≤65 | 2 | 1.23 [0.75, 2.00] | 0.41 | 63.00 | 0.1 |
| Age: >65 | 2 | 1.36 [1.18, 1.57] | <0.001 | 0 | 0.99 |
| Sample size: ≤300 | 3 | 1.31 [1.00, 1.71] | 0.05 | 25.00 | 0.26 |
| Sample size: >300 | 4 | 1.23 [1.04, 1.44] | 0.01 | 71.00 | 0.01 |
| Immunotherapy drug: PD-L1 | 2 | 1.33 [1.19, 1.49] | <0.001 | 0 | 0.69 |
| Immunotherapy drug: PD-1 | 1 | 1.36 [0.89, 2.07] | 0.15 | NA | NA |
| Immunotherapy drug: PD-1, other | 1 | 0.91 [0.54, 1.54] | 0.72 | NA | NA |
| Immunotherapy drug: PD-(L)1 | 3 | 1.17 [0.73, 1.88] | 0.52 | 85.00 | 0.009 |
| PPI exposure: −30 | 1 | 1.51 [1.11, 2.05] | 0.008 | NA | NA |
| PPI exposure: ±30 | 5 | 1.32 [1.20, 1.45] | <0.001 | 0 | 0.71 |
| PPI exposure: ∞ | 1 | 0.93 [0.76, 1.13] | 0.47 | NA | NA |

PFS, progression-free survival; PD-1, programmed cell death protein-1; PD-L1, programmed cell death ligand 1; HR, hazard ratio; NA, not available.

### 3.7. Publication Bias

Funnel plots were used to assess publication bias, and the results revealed no significant asymmetry in the HRs for OS or PFS, suggesting little possibility of publication bias (Figures 3(a) and 3(b)). In addition, Begg's tests and Egger's tests were also used to verify whether publication bias was present. As shown in Figures 3(c) and 3(d), there was no significant publication bias in the HR values for OS (Begg's test, P=0.640; Egger's test, P=0.059) or PFS (Begg's test, P=0.368; Egger's test, P=0.724).

Figure 3 The publication bias. (a) Funnel plot analysis of overall survival (OS). (b) Funnel plot analysis of progression-free survival (PFS). (c) Begg's funnel plots for evaluating the publication bias of overall survival (OS). (d) Begg's funnel plots for evaluating the publication bias of progression-free survival (PFS).

### 3.8. Sensitivity Analysis

We performed a sensitivity analysis on the included literature. The results showed that no single study had a great impact on the final combined HR values for OS and PFS. Therefore, we believe that the combined results of this study are reliable and robust (Figures 4(a) and 4(b)).

Figure 4 The sensitivity analysis. (a) Sensitivity analysis for the hazard ratio (HR) of overall survival (OS). (b) Sensitivity analysis for the hazard ratio (HR) of progression-free survival (PFS).
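As a companion to the funnel plots and Egger's tests reported in Section 3.7, the sketch below shows the textbook Egger regression (standardized effect regressed on precision; a non-zero intercept suggests funnel-plot asymmetry). It is a generic illustration, not the authors' Stata computation; the hazard ratios and confidence intervals fed into it are the OS estimates from the seven studies in Table 2 that also reported PFS, used here only as example inputs.

```python
import numpy as np
from scipy import stats

def egger_test(log_hr, se):
    """Egger regression test: regress (effect / SE) on (1 / SE) and t-test the intercept."""
    log_hr, se = np.asarray(log_hr, float), np.asarray(se, float)
    y = log_hr / se                      # standardized effect sizes
    x = 1.0 / se                         # precisions
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # [intercept, slope]
    resid = y - X @ beta
    dof = len(y) - 2
    sigma2 = resid @ resid / dof
    cov = sigma2 * np.linalg.inv(X.T @ X)          # OLS covariance of the coefficients
    t_intercept = beta[0] / np.sqrt(cov[0, 0])
    p_value = 2 * stats.t.sf(abs(t_intercept), dof)
    return beta[0], p_value

# Example inputs: OS HRs (point estimate, lower CI, upper CI) from Table 2.
hr = np.array([1.45, 1.22, 0.68, 1.51, 1.89, 1.53, 1.00])
lo = np.array([1.20, 0.72, 0.33, 1.28, 1.23, 1.21, 0.85])
hi = np.array([1.75, 2.05, 1.43, 1.80, 2.90, 1.95, 1.17])
log_hr = np.log(hr)
se = (np.log(hi) - np.log(lo)) / (2 * 1.96)

intercept, p = egger_test(log_hr, se)
print(f"Egger intercept = {intercept:.2f}, P = {p:.3f}")
```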
## 4. Discussion

The clinical efficacy and survival outcomes of PPI combined with ICIs in patients with NSCLC are not well established. Nevertheless, the following points deserve attention. First, PPI has been shown to play a pivotal role in modulating the immunologic defense against tumors by regulating the activity or composition of the gastrointestinal bacteria [18]. Second, for patients with a history of gastric ulcers or gastrointestinal bleeding, PPI can be used prophylactically to prevent the occurrence of stress ulcers [10]. Third, since PPI can greatly increase the sensitivity of cancer patients to chemotherapy, it is often used together with other types of anticancer drugs, such as ICIs; nonetheless, the clinical efficacy of this combination has remained uncertain.
Finally, it is still unknown whether the combination of PPI and ICIs increases the adverse reactions related to the two drugs; for example, the long-term use of omeprazole correspondingly increases the risk of liver failure and chronic kidney disease, which may in turn affect the efficacy of cancer treatment. Studies have revealed that PPI use significantly alters the composition of the gastrointestinal bacteria and can greatly reduce the clinical efficacy of ICIs [19, 20]. The study of Derosa et al. [21] showed that, for advanced renal cell carcinoma and NSCLC patients, the use of antibiotics combined with ICIs also led to poor OS and PFS, possibly because antibiotics greatly inhibit the diversity and abundance of the intestinal flora, so that the immune function cannot be fully mobilized. This may be similar to the reason why PPI combined with ICIs produced poor clinical efficacy. Other studies revealed that PPI use did not have any influence on the clinical efficacy of ICIs [22]. Hence, to determine the correlation between the clinical efficacy of ICIs in NSCLC and the use of PPI, a meta-analysis was conducted.

On the one hand, compared with the studies of Wei et al., Li et al., Qin et al., and Sophia et al. [10, 15, 23, 24], this study included more studies on NSCLC (n = 11 vs. n = 6 for Wei et al., n = 7 for Li et al., n = 7 for Qin et al., and n = 4 for Sophia et al.) and more patients (n = 7,893 vs. n = 5,114 for Wei et al., n = 1,428 for Li et al., n = 3,647 for Qin et al., and n = 2,940 for Sophia et al.), notably including two articles published in 2022. On the other hand, the subgroup analysis was conducted on more factors, such as the duration of exposure to PPI, the PPI treatment, the type of ICIs treatment, sample size, age, and region. This further helps to clarify the actual role of PPI when combined with ICI drugs in cancer treatment. Hence, we believe that our study was necessary and can provide a certain basis for the rational usage of PPI in clinical practice.

In patients treated with ICIs, our results indicated that PPI use was significantly associated with poor PFS (HR: 1.25 [1.09–1.42]) and poor OS (HR: 1.30 [1.10–1.54]). Nonetheless, according to the studies of Li et al. and Meng et al. [15, 25] in 2020, PPI use was not found to be associated with PFS and OS in patients who had received ICIs treatment. This discrepancy could be due to the fact that PPI use leads to substantial changes in the activity and composition of the gastrointestinal microbiota, which is related to the tolerance of T cells. At the same time, PPI use not only affects the gastrointestinal microbiota but also has a certain impact on the growth, metastasis, and progression of the tumor. The studies of De Milito et al. and Bellone et al. [26, 27] proposed that PPI could affect tumor growth and metastasis by regulating the acidic microenvironment of the tissues around tumors. Meanwhile, the use of PPI severely inhibits the hydrogen ion ATPase pump, thus reversing the pH gradient of the acidic microenvironment [27]. Besides, the use of PPI greatly promotes the generation of M2-subtype macrophages and pro-inflammatory cytokines (such as interleukin 7) [28]. All of these factors reduce the immunosuppressive ability of the tumor microenvironment and greatly inhibit the activity of ICIs [29]. Moreover, PPI may also increase the sensitivity of patients to chemotherapy and immunotherapy [9].
Hence, predicting the clinical efficacy of ICIs in NSCLC patients on the basis of PPI use is highly difficult, and further basic and clinical studies are required.

To further explore the correlation between the clinical efficacy of ICIs and the use of PPI, a subgroup analysis was conducted on factors such as duration of PPI exposure, type of ICIs treatment, sample size, age, and region. In the region subgroup, PPI use was significantly related to poor OS (Europe: HR = 1.48 [1.26, 1.74]; Worldwide: HR = 1.54 [1.24, 1.91]) and poor PFS (Europe: HR = 1.36 [1.18, 1.57]; Worldwide: HR = 1.34 [1.16, 1.55]) in patients from Europe and from worldwide multi-center studies. This suggests that multi-center research projects across regions and countries should be promoted. In the sample size subgroup, PPI usage was significantly related to poor OS in studies with sample sizes of 300 or fewer (HR = 1.37 [1.02, 1.84]) and more than 300 (HR = 1.27 [1.04, 1.56]), and to poor PFS in studies with sample sizes of more than 300 (HR = 1.23 [1.04, 1.44]). These results confirm that sample size plays a crucial role in clarifying the correlation between the clinical efficacy of ICIs and PPI usage, and it is recommended that clinical studies include as many relevant samples as possible. In the age subgroup, PPI use was significantly related to poor OS in patients aged 65 years or younger (HR = 1.56 [1.14, 2.15]) and to poor PFS in patients older than 65 years (HR = 1.36 [1.18, 1.57]). People of different ages appear to have different sensitivities to PPI, so the clinical use of PPI needs to be cautious. With regard to the duration of PPI exposure subgroup, PPI use was significantly related to poor OS (−30: HR = 1.89 [1.29, 2.78]; ±30: HR = 1.44 [1.27, 1.64]) and poor PFS (−30: HR = 1.51 [1.11, 2.05]; ±30: HR = 1.32 [1.20, 1.45]) in patients treated with PPI within 30 days before and/or after starting ICIs treatment. In the clinic, stopping the application of PPI shortly before and/or after starting the ICIs treatment may therefore provide patients with a better therapeutic effect. Finally, for the type of ICIs treatment subgroup, PPI use indicated a poor prognosis for patients who had received PD-L1 or PD-1 treatment, which was possibly related to the limited sample size included in this study. In addition, based on the above discussion, we suggest that when PPI is used in combination with ICIs in clinical practice, appropriate adjustment of the dysbiosis and gastrointestinal bacterial colony disorder caused by the use of PPI may greatly improve the clinical efficacy and prognosis of the relevant patients. Hence, to clarify the relationship between the clinical efficacy of ICIs in NSCLC and PPI usage, more relevant research is needed.

Our study had certain limitations. First, some PFS data were missing in the included studies, and this study extracted only the HR values and 95% CIs of the included studies rather than the original individual data, which could have an impact on our results. Second, the included studies were retrospective. In the process of extracting data such as sample size, type of ICIs treatment, duration of PPI exposure, region, and age, detailed information on the relevant data could not always be obtained, which could lead to certain limitations in the overall and subgroup analyses of this study.
Third, this study only included studies published in English; studies published in other languages, for instance Chinese, were not included, which could indirectly increase the heterogeneity of this study. Finally, no study was found that directly proves the association between PPI use and the clinical efficacy of ICIs in NSCLC. Hence, more research on this topic is imperative.

## 5. Conclusion

In conclusion, our meta-analysis found that PPI combined with ICIs in the treatment of NSCLC patients possibly results in poor OS and PFS. In the subgroup analyses, PPI use indicated a poor prognosis for patients who had received PD-L1 or PD-1 treatment, or for those who received PPI drugs within 30 days before and/or after ICIs initiation. At the same time, the effect of PPI on patients of different age groups also differed. Hence, in clinical practice, we need to be extremely cautious about PPI use to avoid influencing the efficacy of ICIs. Nevertheless, the concrete mechanism linking the use of PPI and the efficacy of ICIs needs to be studied further, so as to improve the clinical treatment of the related tumors.

--- *Source: 1001796-2022-07-09.xml*
2022
# Core-EP-Nilpotent Decomposition and Its Applications

**Authors:** Yang Zhang; Zongyang Jiang
**Journal:** Journal of Mathematics (2023)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2023/1001901

---

## Abstract

In this paper, we illustrate a new decomposition, the core-EP-nilpotent decomposition, which is based on the core-EP decomposition and the EP-nilpotent decomposition of some square matrices. Using the new decomposition, we give the definitions and characteristics of two new orders: the core-E-N partial order and the core-E-S partial order. We also illustrate relations between the two orders under some restricted conditions.

---

## Body

## 1. Introduction

First of all, some mathematical notations are introduced as follows: ℂm,n denotes the set of m×n matrices over the complex field. A∗ is the conjugate transpose of A, ℜ(A) denotes the range space (or column space), and rk(A) denotes the rank of A ∈ ℂm,n. In is the identity matrix of order n. The index of A ∈ ℂn,n, denoted by Ind A, is the smallest positive integer k satisfying rk(A^(k+1)) = rk(A^k). The symbol ℂnCM stands for the set of n×n matrices of index less than or equal to 1.

The unique matrix X ∈ ℂn,m called the Moore–Penrose inverse of A ∈ ℂm,n satisfies the equations

(1) AXA = A, XAX = X, (AX)∗ = AX, (XA)∗ = XA,

and it is then denoted by X = A†. Furthermore, we denote PA = AA†. The unique matrix X ∈ ℂn,n which is the group inverse of A ∈ ℂn,n satisfies the equations

(2) AXA = A, XAX = X, AX = XA,

and it is then denoted by X = A#.

A matrix A is said to be core invertible if there exists a matrix X (necessarily unique) such that

(3) AX = AA†, ℜ(X) ⊆ ℜ(A).

If equation (3) is satisfied, X is the core inverse of A, and we denote X = A㊮. In [1], it was proved that A ∈ ℂn,n is core invertible if and only if A ∈ ℂnCM.

We denote the set of EP matrices over ℂn,n by ℂnEP, where

(4) ℂnEP = {A ∈ ℂn,n | ℜ(A) = ℜ(A∗)}.

It is well known that ℂnEP contains many special types of matrices, such as Hermitian matrices, normal matrices, and nonsingular matrices.

Matrix decomposition has been a hot research direction in recent years, in which some special matrices occupy a central position. For example, a decomposition named the Toeplitz decomposition or Cartesian decomposition is introduced in [2, 3]. More details about other matrix decompositions can be found in [4–10]. Furthermore, there is research on systems of matrix equations using generalized inverses of matrices; see [11–13].

In this paper, we adopt the Schur upper-triangular matrix decomposition, construct a new matrix decomposition based on the known core-EP decomposition and EP-nilpotent decomposition, and investigate related properties of this new matrix decomposition.

## 2. Preliminary Results

In this section, we give some preliminary results (refer to [14], Theorem 5.4.1; [4], Theorem 2.1; [5], Theorem 2.1) which will be used in the next section.

Lemma 1 (Schur decomposition). Let A ∈ ℂn,n. Then, there exist a unitary matrix U ∈ ℂn,n and an upper-triangular matrix B ∈ ℂn,n such that

(5) A = UBU∗.

Lemma 2 (Core-EP decomposition). Let A ∈ ℂn,n with Ind A = k. Then, A can be written as the sum of matrices A1 and A2, i.e., A = A1 + A2, where (1) A1 ∈ ℂnCM; (2) A2^k = 0; (3) A1∗A2 = A2A1 = 0. Here, one or both of A1 and A2 can be null.

Lemma 3 (EP-nilpotent decomposition). Let A ∈ ℂn,n with Ind A = k, rk(A) = r, and rk(A^k) = s.
Then, A can be written as the sum of matrices A1 and A2, i.e., A = A1 + A2, where (1) A1 ∈ ℂnEP; (2) A2^(k+1) = 0; (3) A2A1 = 0. Here, one or both of A1 and A2 can be null.

## 3. Main Results

First, we give a lemma as follows.

Lemma 4. Let A ∈ ℂn,n. There exists a unitary matrix U ∈ ℂn,n such that

(6) A = U [T1 S1 S2; 0 T2 S3; 0 0 N] U∗,

and thus A can be written as A = A1 + A2 + A3, where

(7) A1 = U [T1 S1 0; 0 0 0; 0 0 0] U∗, A2 = U [0 0 0; 0 T2 0; 0 0 0] U∗, A3 = U [0 0 S2; 0 0 S3; 0 0 N] U∗,

where T1 and T2 are nonsingular and upper-triangular, the main diagonals of T1 and T2 consist of eigenvalues of A, the last column of T1 contains at least one nonzero element off the main diagonal, N is an upper-triangular matrix with zero elements on the main diagonal, and S1, S2, and S3 are arbitrary matrices.

Proof. According to Lemma 1 and [4] (Theorem 2.2), A ∈ ℂn,n can be written as

(8) A = U [T S; 0 N] U∗,

where T is nonsingular and upper-triangular, the main diagonal of T consists of eigenvalues of A, N is an upper-triangular matrix with zero elements on the main diagonal, and S is an arbitrary matrix. Moreover, using the block matrix method to decompose T, we can derive that A = U [T1 S1 S2; 0 T2 S3; 0 0 N] U∗. Then, the forms of A1, A2, and A3 are easily obtained.

Based on Lemma 4, we give a decomposition theorem for some square matrices below.

Theorem 1. Let A ∈ ℂn,n with Ind A = k and rk(A^k) = s. Then A has a unique decomposition A = A1 + A2 + A3, where the forms of A1, A2, and A3 are as written in Lemma 4, and where (i) A1 ∈ ℂnCM; (ii) A2 ∈ ℂnEP, or A2 ∈ ℂnEP with rk(A2) = l − 1; (iii) A3^(k+1) = 0; (iv) A1∗A2 = A2A1 = 0, A1A3∗ = A3A1 = 0, A2A3∗ = A3A2 = 0. Here A1, A2, and A3 can be null, and we denote by l (2 ≤ l < s) the number of columns of T1 and T2 that have a nonzero element off the main diagonal.

Proof. First, by direct calculation, the matrices A1, A2, and A3 clearly satisfy all four conditions of the theorem. Next, we show the uniqueness of the decomposition. It follows from equation (8) that

(9) A^k (A^k)† = U [Is 0; 0 0] U∗.

With Lemma 1, we can rewrite equation (9) as

(10) A^k (A^k)† = U [Is1 0 0; 0 Is2 0; 0 0 0] U∗,

where rk(T1) = s1, rk(T2) = s2, and s1 + s2 = s. Therefore, with equation (10) and Lemma 4, we obtain

(11) A1 + A2 = A A^k (A^k)† = (A1 + A2) A^k (A^k)†,  A3 = A − A A^k (A^k)†.

According to equation (11), A can be written as A1 + A2 and A3 uniquely. Moreover, with the restriction on T1 and A = U [T1 S1 S2; 0 T2 S3; 0 0 N] U∗ in Lemma 4, we can derive that when l = 1 the decomposition of A1 + A2 is unique, and when l ≥ 2 the condition rk(A2) = l − 1 always guarantees that the decomposition of A1 + A2 is unique. In conclusion, A can be uniquely written as A1, A2, and A3.

According to Theorem 1, we give a definition of the new decomposition as follows.

Definition 1. Let A ∈ ℂn,n with Ind A = k and rk(A^k) = s. If the matrix decomposition satisfies Theorem 1, we say it is the core-EP-nilpotent decomposition.
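As a quick numerical illustration of the splitting in equation (11), the sketch below computes A·A^k(A^k)† (the part carrying A1 + A2) and A3 = A − A·A^k(A^k)† for a small matrix with index k = 2, and checks that A3^(k+1) = 0. The example matrix is an illustrative choice of ours, not one taken from the paper.

```python
import numpy as np

# Illustrative matrix with index k = 2: rk(A) = 2, rk(A^2) = rk(A^3) = 1.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
k = 2

Ak = np.linalg.matrix_power(A, k)
P = Ak @ np.linalg.pinv(Ak)          # projector A^k (A^k)^dagger

core_ep_part = A @ P                 # equals A1 + A2 in equation (11)
nilpotent_part = A - A @ P           # equals A3 in equation (11)

assert np.allclose(core_ep_part + nilpotent_part, A)
assert np.allclose(np.linalg.matrix_power(nilpotent_part, k + 1), np.zeros_like(A))
print("A1 + A2 =\n", core_ep_part)
print("A3 =\n", nilpotent_part)
```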
A binary relation on a nonempty set is called a partial order if it satisfies reflexivity, transitivity, and antisymmetry. It is significant to establish partial orders by using matrix decompositions. Some well-known partial orders, such as the minus, sharp, star, core, E-N, and E-S partial orders, are defined as follows:

(a) A ≤− B: A, B ∈ ℂm,n, rk(B) − rk(A) = rk(B − A);
(b) A ≤# B: A, B ∈ ℂnCM, AA# = BA# and A#A = A#B;
(c) A ≤∗ B: A, B ∈ ℂm,n, AA∗ = BA∗ and A∗A = A∗B;
(d) A ≤㊮ B: A, B ∈ ℂnCM, A㊮A = A㊮B and AA㊮ = BA㊮;
(e) A ≤EN B: A, B ∈ ℂn,n, A1 ≤− B1 and A2 ≤− B2, in which A = A1 + A2 and B = B1 + B2 are the EP-nilpotent decompositions of A and B, respectively;
(f) A ≤ES B: A, B ∈ ℂn,n, A1 ≤# B1 and A2 ≤− B2, in which A = A1 + A2 and B = B1 + B2 are the EP-nilpotent decompositions of A and B, respectively.

Next, based on the E-N and E-S partial orders, we introduce two new partial order relations and describe some related properties of these two new partial orders.

Definition 2 (Core-E-N order). Let A, B ∈ ℂn,n, and let A = A1 + A2 + A3 and B = B1 + B2 + B3 be decomposed as in Definition 1, where A1 and B1 are core invertible, A2 and B2 are EP, and A3 and B3 are nilpotent. We consider the binary operation

(12) A ≤CEN B: A1 ≤㊮ B1, A2 ≤− B2, and A3 ≤− B3.

Theorem 2. Operation (12) is a partial order, called the core-E-N partial order.

Proof. Reflexivity of the relation is obvious. If A ≤CEN B and B ≤CEN A, then with equation (12) and the definitions of the core partial order and the minus partial order we easily obtain A1 = B1, A2 = B2, and A3 = B3, i.e., A = B, so the antisymmetry condition holds. Then, suppose A ≤CEN B and B ≤CEN C. Applying the decomposition of equation (12) and the definition of the core partial order, A1 ≤㊮ B1 and B1 ≤㊮ C1 imply A1 ≤㊮ C1. Similarly, with the definition of the minus partial order, we can derive A2 ≤− C2 and A3 ≤− C3. By (12), we have A ≤CEN C, so the transitivity condition holds. The proof is complete.

The constructional form in the following theorem is based on [5] and on calculation with Lemma 4.

Theorem 3. Let A, B ∈ ℂn,n. Then, A ≤CEN B if and only if there exists a unitary matrix U such that

(13) A = U [T1 T2 S1 S2; 0 T4 0 S3; 0 0 0 S4; 0 0 0 N1] U∗,  B = U [T1 T2 S1 S̃2; 0 T3+T4+D1T5D2 D1T5+S̃1 S̃3; 0 T5D2 T5 S̃4; 0 0 0 N2] U∗,

where T1, T3, T4, and T5 are nonsingular and upper-triangular, N1 and N2 are nilpotent and upper-triangular, D1 and D2 are arbitrary matrices, and [S2; S3; S4; N1]∗ ≤− [S̃2; S̃3; S̃4; N2]∗.

Proof. Let A ≤CEN B, where A and B are decomposed as in Definition 1. Then, A1 ≤㊮ B1, A2 ≤− B2, and A3 ≤− B3. Because A1, B1 ∈ ℂnCM with A1 ≤㊮ B1, and A2, B2 ∈ ℂnEP with A2 ≤− B2, we can derive

(14) A1 = U [T1 T2 S1 0; 0 0 0 0; 0 0 0 0; 0 0 0 0] U∗, A2 = U [0 0 0 0; 0 T4 0 0; 0 0 0 0; 0 0 0 0] U∗, B1 = U [T1 T2 S1 0; 0 T3 S̃1 0; 0 0 0 0; 0 0 0 0] U∗, B2 = U [0 0 0 0; 0 T4+D1T5D2 D1T5 0; 0 T5D2 T5 0; 0 0 0 0] U∗,

where T1, T3, T4, and T5 are nonsingular and upper-triangular. Applying Definition 2, we know that

(15) A3 = U [0 0 0 S2; 0 0 0 S3; 0 0 0 S4; 0 0 0 N1] U∗,  B3 = U [0 0 0 S̃2; 0 0 0 S̃3; 0 0 0 S̃4; 0 0 0 N2] U∗,

where N1 and N2 are nilpotent and upper-triangular. Applying Definition 2 again, it follows from A3 ≤− B3 that

(16) [S2; S3; S4; N1]∗ ≤− [S̃2; S̃3; S̃4; N2]∗.

The proof is complete.

It is worth noting that a few famous partial orders are not of the minus type. Therefore, we need to study whether the core-E-N partial order described in equation (12) is of the minus type or not; the following example illustrates this.

Example 1. Let

(17) A = [1 1 0 0 0; 0 1 0 1 1; 0 0 1 0 0; 0 0 0 0 0; 0 0 0 0 0],  B = [1 1 0 0 1; 0 1 0 1 0; 0 0 1 0 0; 0 0 0 0 0; 0 0 0 0 0],

in which A and B are decomposed as in Definition 1, where A1 and B1 are core invertible, A2 and B2 are EP, and A3 and B3 are nilpotent. Then,

(18) A1 = [1 1 0 0 0; 0 1 0 0 0; 0 0 0 0 0; 0 0 0 0 0; 0 0 0 0 0], A2 = [0 0 0 0 0; 0 0 0 0 0; 0 0 1 0 0; 0 0 0 0 0; 0 0 0 0 0], A3 = [0 0 0 0 0; 0 0 0 1 1; 0 0 0 0 0; 0 0 0 0 0; 0 0 0 0 0], B1 = [1 1 0 0 0; 0 1 0 0 0; 0 0 0 0 0; 0 0 0 0 0; 0 0 0 0 0], B2 = [0 0 0 0 0; 0 0 0 0 0; 0 0 1 0 0; 0 0 0 0 0; 0 0 0 0 0], B3 = [0 0 0 0 1; 0 0 0 1 0; 0 0 0 0 0; 0 0 0 0 0; 0 0 0 0 0].

We can easily check that A1 ≤㊮ B1, A2 ≤− B2, and A3 ≤− B3, i.e., A ≤CEN B. Since rk(B) − rk(A) = 0 ≠ rk(B − A) = 1, by condition (a) we get that A is not below B under the minus partial order, and we obtain the following corollary.

Corollary 1.
The core-E-N partial order is not of the minus type.

Next, we give another example to illustrate whether the minus partial order can lead to the core-E-N partial order or not.

Example 2. Let

(19) A = [1 1 0 1; 0 1 1 1; 0 0 0 0; 0 0 0 0],  B = [1 1 0 1; 0 1 1 1; 0 0 1 0; 0 0 0 0],

in which the decompositions of A and B and the definitions of their components are the same as in Example 1. Then,

(20) A1 = [1 1 0 0; 0 1 0 0; 0 0 0 0; 0 0 0 0], A2 = 0, A3 = [0 0 0 1; 0 0 1 1; 0 0 0 0; 0 0 0 0], B1 = [1 1 0 0; 0 1 1 0; 0 0 1 0; 0 0 0 0], B2 = 0, B3 = [0 0 0 1; 0 0 0 1; 0 0 0 0; 0 0 0 0].

By calculation, we have rk(B − A) = 1, rk(B) = 3, rk(A) = 2, and rk(B) − rk(A) = 1. With condition (a), we have A ≤− B. However, since rk(B3 − A3) = 1 ≠ rk(B3) − rk(A3) = −1, that is, A3 is not below B3 under the minus partial order, we derive that A is not below B under the core-E-N partial order. Therefore, the minus partial order cannot lead to the core-E-N partial order.

Next, we characterize and introduce another new partial order.

Definition 3 (Core-E-S order). Let A, B ∈ ℂn,n, and let them be decomposed as in Definition 1, where A1 and B1 are core invertible, A2 and B2 are EP, and A3 and B3 are nilpotent. We consider the binary operation

(21) A ≤CES B: A1 ≤㊮ B1, A2 ≤# B2, and A3 ≤− B3.

Theorem 4. Operation (21) is a partial order, called the core-E-S partial order.

Proof. Similar to the proof of Theorem 2; according to Definition 3, we can easily conclude that the binary relation is a partial order.

The following theorem works in a similar way to Theorem 3.

Theorem 5. Let A, B ∈ ℂn,n. Then, A ≤CES B if and only if there exists a unitary matrix U satisfying

(22) A = U [T1 T2 S1 S2; 0 T4 0 S3; 0 0 0 S4; 0 0 0 N1] U∗,  B = U [T1 T2 S1 S̃2; 0 T3+T4 S̃1 S̃3; 0 0 T5 S̃4; 0 0 0 N2] U∗,

where T1, T3, T4, and T5 are invertible and upper-triangular, N1 and N2 are nilpotent and upper-triangular, and [S2; S3; S4; N1]∗ ≤− [S̃2; S̃3; S̃4; N2]∗.

Proof. Let A ≤CES B, where A and B are decomposed as in Definition 1. Then, A1 ≤㊮ B1, A2 ≤# B2, and A3 ≤− B3. Because A1, B1 ∈ ℂnCM with A1 ≤㊮ B1, and A2, B2 ∈ ℂnEP with A2 ≤# B2, we obtain

(23) A1 = U [T1 T2 S1 0; 0 0 0 0; 0 0 0 0; 0 0 0 0] U∗, A2 = U [0 0 0 0; 0 T4 0 0; 0 0 0 0; 0 0 0 0] U∗, B1 = U [T1 T2 S1 0; 0 T3 S̃1 0; 0 0 0 0; 0 0 0 0] U∗, B2 = U [0 0 0 0; 0 T4 0 0; 0 0 T5 0; 0 0 0 0] U∗,

where T1, T3, T4, and T5 are invertible and upper-triangular. Because of Definition 3, we can derive that

(24) A3 = U [0 0 0 S2; 0 0 0 S3; 0 0 0 S4; 0 0 0 N1] U∗,  B3 = U [0 0 0 S̃2; 0 0 0 S̃3; 0 0 0 S̃4; 0 0 0 N2] U∗,

where N1 and N2 are nilpotent and upper-triangular. Since A3 ≤− B3, we have

(25) [S2; S3; S4; N1]∗ ≤− [S̃2; S̃3; S̃4; N2]∗.

The proof is complete.

Next, we investigate whether the core-E-S partial order is of the minus type or not.

Example 3. Assume A and B have the forms shown in Example 1. We can check by calculation that A1 ≤㊮ B1, A2 ≤# B2, and A3 ≤− B3, which implies A ≤CES B. However, since rk(B − A) = 1 ≠ rk(B) − rk(A) = 0, this contradicts condition (a), and we obtain the following corollary.

Corollary 2. The core-E-S partial order is not of the minus type.

After the above discussion, we know that both the core-E-N and core-E-S partial orders are not of the minus type. Therefore, we study the relationship between the core-E-N partial order and the core-E-S partial order under some conditions.

It is worth noting that, for A, B ∈ ℂnEP, if we take

(26) A = [1 0 0; 0 0 0; 0 0 0],  B = [2 1 1; 1 1 1; 1 1 1] ∈ ℂ3EP,

then by calculation we can easily check that A ≤− B and A ≤CEN B. However,

(27) AB = [2 1 1; 0 0 0; 0 0 0] ≠ BA = [2 0 0; 1 0 0; 1 0 0].

With the definition of (b), we get that A and B do not satisfy the sharp partial order relation, i.e., A ≤− B ⇏ A ≤# B. With Definitions 2 and 3, we derive that A ≤CEN B ⇏ A ≤CES B.

According to ([15], Remark 4.2.2), it has been proved that A ≤# B ⇒ A ≤− B, so we can draw a corollary as follows.

Corollary 3. Let A, B ∈ ℂnEP. Then A ≤CES B ⇒ A ≤CEN B.

On the basis of [16] (Theorem 2.1), if A, B ∈ ℂnEP, then A ≤# B ⇔ A ≤∗ B.
Moreover, we can obtain another way of defining the core-E-S partial order as follows:

(28) A ≤CES B: A1 ≤㊮ B1, A2 ≤∗ B2, and A3 ≤− B3,

where the decompositions of A and B are the core-EP-nilpotent decompositions.

--- *Source: 1001901-2023-01-23.xml*
2023
# Study on the Application of Improved Audio Recognition Technology Based on Deep Learning in Vocal Music Teaching

**Authors:** Nan Liu
**Journal:** Mathematical Problems in Engineering (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1002105

---

## Abstract

As one of the hotspots in music information extraction research, music recognition has received extensive attention from scholars in recent years. Most current research methods are based on traditional signal processing, and there is still considerable room for improvement in recognition accuracy and recognition efficiency; studies of music recognition based on deep neural networks remain scarce. This paper expounds the basic principles of deep learning and the basic structure and training methods of neural networks. For two commonly used deep networks, the convolutional neural network and the recurrent neural network, their typical structures, training methods, advantages, and disadvantages are analyzed. A variety of platforms and tools for training deep neural networks are also introduced and compared; TensorFlow and Keras are selected from among them and provide the foundation for the neural network training carried out in this work. Through the development and experimental demonstration of a prototype system, and through comparison with other work in the field of humming recognition, the results show that the deep-learning method can be applied to the humming recognition problem, effectively improving the accuracy of humming recognition and reducing the recognition time. A convolutional recurrent neural network is designed and implemented, combining the local feature extraction of the convolutional layers with the ability of the recurrent layers to summarize sequence features, so as to obtain audio features with a higher degree of abstraction and complexity. This strengthens the network's ability to learn the features of the humming signal and lays the foundation for an efficient and accurate humming recognition process.

---

## Body

## 1. Introduction

Traditional text-based retrieval techniques are still widely used in the field of music retrieval, but music retrieval based on text information has several unresolved problems [1]. Firstly, users need to know the name, singer, music style, and other information about the song they are looking for; without this information, they cannot find the music they are interested in. Secondly, in a text-based music retrieval system, the music in the library needs to be associated with various additional information. This labeling work is difficult to complete automatically by machine, requires a lot of manpower, and is costly [2].

Researching more efficient music retrieval technology is therefore work with great practical value. Content-based music recognition has been a research hotspot in the field of music retrieval in recent years. Compared with text retrieval methods, content-based music retrieval is more convenient, and music retrieval based on humming recognition can also be combined with traditional text retrieval technology to provide a more accurate and richer retrieval method. Humming recognition technologies thus have broad application prospects in practice.
At the same time, there are still few research methods for humming recognition based on deep learning, and there is still a lot of room to explore deep-learning-based humming recognition models. Therefore, this subject has good research prospects and is worthy of in-depth exploration [3].

Humming recognition has the advantages of user-friendly interaction and convenient use on mobile devices. The main technologies used in humming recognition research can be summarized into the following types [4]: recognition based on symbol matching, recognition based on melody matching, and recognition based on statistical models. Recognition based on symbol matching is developed from traditional string matching. It generally first extracts note information to obtain a note sequence; the note sequence is then regarded as a string, the similarity of note sequences is computed with string matching algorithms, and the recognition result is obtained accordingly [5]. Recognition based on melody matching first extracts the pitch of the humming audio, connects the pitch-over-time curves to form a melody curve, and then matches this melody feature against the melody features of the audio in the database to obtain the recognition result. Recognition based on statistical models uses the time-domain or frequency-domain features of the audio, models the songs in the database with statistical models such as hidden Markov models, computes the probability of the humming audio under each model, and returns the song with the maximum probability as the recognition result [3].

In 1995, Ghias et al. developed the first humming recognition system, which uses a typical string-matching-based recognition technique: the pitch change of the audio signal is encoded with the letters U (up), D (down), or S (same), the humming audio is represented as a string over these three characters, and a string matching algorithm is then used to calculate the matching probability of each song in the database [6]. The work of McNab et al. is also based on symbol matching [7]. They extracted the rhythm and pitch-change information of music, represented such audio features as strings, and verified the effectiveness of the method in a humming recognition system through experiments. In 1999, Kosugi et al. proposed measuring similarity with the Euclidean distance based on audio pitch and rhythm information. Clarisse et al. proposed a new method in 2002 that introduced an auditory model and achieved a good improvement in recognition accuracy on public datasets [8]. Shih et al. creatively introduced the hidden Markov model in their research, using audio pitch information as the input of the HMM and thus proving the feasibility of statistical models for humming recognition [9]. Downie et al. introduced a dynamic time-scaling algorithm to improve the robustness of the humming recognition system, which greatly improved its overall fault tolerance [10]. Pardo et al. combined a matching algorithm based on the hidden Markov model with a distance-based matching algorithm for similarity measurement in humming recognition [11].

There are two main problems with current humming recognition technology.
The first is that recognition accuracy still needs to be improved; in particular, when there is a partial deviation in the user's humming, it is difficult for existing music-feature-based methods to obtain satisfactory accuracy [12]. The other is that the processing time is too long when feature-matching algorithms are used for humming recognition [13]. Extracting and processing the relevant features from the user's humming and matching them against the songs in the database requires considerable computing time, which imposes a long wait on the user and is not user-friendly.

Deep learning is a new research direction in the field of machine learning [14]. It is mainly based on artificial neural networks and uses multilayer representations to model complex relationships between data. In addition to the ability of traditional machine learning methods to discover the relationship between data features and tasks, deep learning can also build more abstract and complex features on top of simple learned features [15]. In recent years, breakthrough achievements have been made in computer vision, image processing, natural language understanding, and other fields, which have attracted widespread attention.

The development of deep learning can be roughly divided into three stages. Early neural network models were similar to bionic machine learning, which tried to mimic the learning mechanism of the brain [16]. The earliest mathematical model of a neural network was proposed by Warren McCulloch and Walter Pitts in 1943. In order to allow the computer to set the weights more automatically and reasonably, Frank Rosenblatt proposed the perceptron model in 1958; the perceptron is the first model that can learn feature weights from sample data [14]. However, given the limited computing power of the time, these research results were not taken seriously. At the end of the 1980s, the second wave of neural network research came with the proposal of distributed knowledge representation and the neural network back-propagation algorithm. Neural network research in this period used multiple neurons to express knowledge and concepts in the real world, which greatly enhanced the expressive ability of the models and laid the foundation for later deep learning [17].

The third high-speed development stage of neural network research came with the improvement of computer performance and the development of cloud computing, GPUs, and other technologies [18]. With the solid foundation provided by these hardware resources, the amount of computation is no longer an obstacle to the development of neural networks. At the same time, with the popularization of the Internet and the development of search technology, a large amount of data and information can easily be obtained, which solves the long-standing problem of missing datasets in neural network training [19]. At this stage, deep learning has truly entered a development climax and has repeatedly achieved breakthrough results in many fields.

In the field of speech recognition, the traditional GMM-HMM speech recognition model encountered development bottlenecks after years of research, and the introduction of deep-learning technologies has significantly improved the accuracy of speech recognition [20].
Since the concept of deep learning was introduced into the field of speech recognition in 2009, in just a few years, deep-learning methods have reduced the error rate of the traditional Gaussian-mixture-model approach on the TIMIT dataset from 21.7% to 17.9%. Dahl et al. used a combination of DBNs and HMMs and achieved solid results on large-vocabulary continuous speech recognition (LVCSR) tasks [21].

In industry, most well-known Internet companies at home and abroad use deep-learning methods for speech recognition [22]. A fully automatic simultaneous interpretation system developed by Microsoft, based on deep-learning technology, can perform voice recognition, machine translation, and Chinese speech synthesis synchronously with the speaker, achieving an effect close to manual simultaneous interpretation [23]. Baidu applied deep neural networks to speech recognition research; on the basis of the VGGNet model, it integrated multilayer convolutional neural networks and long short-term memory network structures to develop an end-to-end speech recognition technology [24]. Experiments show that this system reduces the recognition error rate by more than 10%. The speech recognition model proposed by Microsoft achieved a historically low error rate of 6.3% on the industry-standard Switchboard speech recognition task [25]. iFLYTEK uses a feedforward sequential memory network in its speech recognition system, modeling the sentence-level speech signal through multiple convolutional layers and summarizing the long-term related information of the speech [26]; it is widely used in academia and industry, and its recognition rate improves on the best bidirectional recurrent neural network speech recognition systems by more than 15%.

In our study, we first investigate humming audio signal processing methods and compare the differences, advantages, and disadvantages of different approaches, including audio digitization, audio filtering, audio signal enhancement, note onset detection, audio spectrum analysis, and other related techniques, and finally form a humming audio signal processing pipeline that provides effective datasets for the training and testing of the deep-learning framework (Section 2). In the Results section, using open-source deep-learning platforms and tools, better humming recognition neural network model parameters are obtained through training, learning, and repeated testing on the dataset; through evaluation on the test set, the feasibility and effectiveness of the proposed neural network model are verified, and the performance of the model in terms of recognition accuracy, robustness, and training time is analyzed. Finally, based on the proposed deep-learning framework for humming recognition, a C/S architecture is adopted, and a humming recognition prototype system is designed and implemented using server-side and mobile-side development technologies.

## 2. Methods

### 2.1. Audio Signal Processing Flow

First of all, for the test humming data, a digitized audio signal needs to be obtained through a process of sampling and quantization. Then, certain preprocessing must be performed on the original humming data and test data, including filtering, pre-emphasis, windowing, and framing, to reduce the interference of audio signal noise and improve the saliency of features.
Secondly, in the process of training and recognition, it is also necessary to detect the starting point of each note. The dataset is intercepted from the note starting point to eliminate the interference of silent-segment noise and improve the validity of the data. The audio signal processing flow is shown in Figure 1.

Figure 1: The audio signal processing flow.

### 2.2. Sampling and Quantization

An important step in converting the original humming signal into a digital signal is sampling and quantization. After sampling and quantization, the analog signal becomes a digital audio signal that is discrete in time and amplitude. The sampling theorem points out that the sampling frequency must be greater than twice the bandwidth of the audio signal; under this condition the sampling operation does not lose information of the original audio signal, which can be restored from the sampled digital signal [27]. For human voice signals, the spectrum of the voiced signal is mainly concentrated in the low-frequency band below 4 kHz, while the spectrum of the unvoiced signal is very wide, extending to the high-frequency band above 10 kHz. In this paper, a sampling frequency of 16 kHz is uniformly used for the humming audio, so as to ensure that the humming information is not lost.

After sampling, the signal needs to be quantized. The quantization process converts the continuous amplitude values of the signal into discrete amplitude values [28]. The error generated in the quantization process is called quantization noise, which can be obtained by calculating the difference between the quantized discrete amplitude value and the original signal. In general, quantization noise has the following characteristics: (1) it is a stationary white noise; (2) it is uncorrelated with the input signal; (3) it is uniformly distributed within the quantization interval, that is, it has an equal probability density distribution. The power ratio between the original audio signal and the quantization noise is called the quantization signal-to-noise ratio and is often used to characterize audio quality. In general, the amplitude of the speech signal obeys a Laplace distribution, and the quantization signal-to-noise ratio can be expressed as

(1) S = 6.02B − 7.2,

where S is the signal-to-noise ratio in dB and B is the quantization word length in bits. Equation (1) shows that each bit of word length in the quantizer contributes about 6 dB of quantization signal-to-noise ratio. When the quantization signal-to-noise ratio reaches 35 dB and above, the audio quality meets the requirements of general communication systems, so the quantization word length should generally be greater than 7 bits. In practical applications, a word length of more than 12 bits is often used for quantization, because the variation range of the speech waveform can reach about 55 dB; to maintain a signal-to-noise ratio of 35 dB over this range, an additional word length of about 9 bits is used to compensate for a change of roughly 30 dB in the dynamic range of the speech waveform.

### 2.3. Humming Signal Preprocessing

For the humming signal input from the recording, quantization noise is generated during analog-to-digital conversion, and there is also power-frequency interference, aliasing interference, and so on.
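Equation (1) above is easy to check numerically. The snippet below is an illustrative addition: it prints the quantization signal-to-noise ratio for several word lengths and the smallest word length that reaches the 35 dB target mentioned in Section 2.2.

```python
import math

def quant_snr_db(bits: int) -> float:
    """Quantization SNR of Eq. (1) for Laplacian-distributed speech."""
    return 6.02 * bits - 7.2

for b in (7, 8, 12, 16):
    print(f"{b:2d} bits -> {quant_snr_db(b):5.1f} dB")

# Smallest word length whose SNR reaches the 35 dB target.
target = 35.0
min_bits = math.ceil((target + 7.2) / 6.02)
print("word length needed for 35 dB:", min_bits, "bits")
```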
In order to reduce these disturbances before analyzing the humming signal and extracting feature parameters, the humming signal to be processed is first filtered [29].

The prefiltering operation first detects the frequency of each frequency-domain component in the input signal and suppresses the components whose frequency exceeds half of the sampling frequency, to prevent aliasing interference. Then, the power-frequency interference of about 50 Hz is suppressed. In the experiments of this paper, a bandpass filter is used to prefilter the humming audio; the upper cut-off frequency is set to 3400 Hz and the lower cut-off frequency to 60 Hz to filter out power-frequency interference.

The humming signal is easily affected by two types of noise, glottal excitation and mouth-nose radiation, and its effective components are relatively weak in the high-frequency part. When computing the spectrum of the speech signal, the high-frequency part is therefore harder to obtain than the low-frequency part, and additional processing is required: the humming signal is first pre-emphasized to increase the proportion of its high-frequency content and flatten its spectrum, improving the high-frequency resolution of the humming and facilitating analysis in the frequency domain [30].

Pre-emphasis is generally performed by a pre-emphasis digital filter after the audio signal has been digitized. This type of filter boosts the high-frequency characteristics, and a first-order digital filter is often used. The pre-emphasis filter can be expressed as

(2) H(z) = 1 − εz⁻¹,

where ε is the pre-emphasis weight, taken as 0.94 in our study; applied to the incoming humming signal x(n), it yields the pre-emphasized signal x(n) − εx(n − 1).

The humming audio sequence is a one-dimensional signal on the time axis. In order to analyze it, the audio signal is regarded as stationary over short intervals at the millisecond level. On this basis, the audio signal is windowed and divided into frames to maintain the short-time stationarity of the speech signal, so that the subsequent speech feature vectors can be computed frame by frame, yielding a time series of feature vectors.

There are generally two segmentation methods for windowing and framing: continuous segmentation and overlapping segmentation [31]. Continuous segmentation means that there is no overlap between frames, so discrete frame feature vectors that do not interfere with each other are obtained. In this study, in order to make a smooth transition between frames and maintain the continuity of features, overlapping segmentation is adopted; the overlap between the previous frame and the next frame is usually taken as about 1/2 of the frame length. Concretely, a moving window of finite length is used to weight the audio signal to achieve framing.

Commonly used window functions include the rectangular window, the Hamming window, and the Hanning window. Their representations are as follows.

Rectangular window:

(3) $$w(n) = \begin{cases} 1, & 0 \le n \le N-1, \\ 0, & \text{otherwise.} \end{cases}$$

Hamming window:

(4) $$w(n) = \begin{cases} 0.54 - 0.46\cos\dfrac{2n\pi}{N-1}, & 0 \le n \le N-1, \\ 0, & \text{otherwise.} \end{cases}$$

Hanning window:

(5) $$w(n) = \begin{cases} 0.5\left(1 - \cos\dfrac{2n\pi}{N-1}\right), & 0 \le n \le N-1, \\ 0, & \text{otherwise,} \end{cases}$$

where N is the frame length. The choice of window shape has different effects on the audio signal.
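To make these preprocessing steps concrete, the following NumPy sketch is an illustrative addition; it uses the frame length of 5000 points, the overlap of 2600 points, and ε = 0.94 given in this section, applies pre-emphasis, defines the three windows of Eqs. (3)-(5), and performs overlapping Hamming-windowed framing.

```python
import numpy as np

def pre_emphasis(x: np.ndarray, eps: float = 0.94) -> np.ndarray:
    """First-order pre-emphasis, Eq. (2): y[n] = x[n] - eps * x[n-1]."""
    return np.append(x[0], x[1:] - eps * x[:-1])

def window(kind: str, N: int) -> np.ndarray:
    """Window functions of Eqs. (3)-(5)."""
    n = np.arange(N)
    if kind == "rectangular":
        return np.ones(N)
    if kind == "hamming":
        return 0.54 - 0.46 * np.cos(2 * np.pi * n / (N - 1))
    if kind == "hanning":
        return 0.5 * (1 - np.cos(2 * np.pi * n / (N - 1)))
    raise ValueError(kind)

def frame_signal(x: np.ndarray, frame_len: int = 5000, overlap: int = 2600) -> np.ndarray:
    """Overlapping segmentation with a Hamming window, as used in this paper."""
    hop = frame_len - overlap
    n_frames = 1 + max(0, (len(x) - frame_len) // hop)
    w = window("hamming", frame_len)
    return np.stack([x[i * hop : i * hop + frame_len] * w for i in range(n_frames)])

# Example: 1 s of a 440 Hz tone sampled at 16 kHz.
fs = 16000
t = np.arange(fs) / fs
frames = frame_signal(pre_emphasis(np.sin(2 * np.pi * 440 * t)))
print(frames.shape)   # (number of frames, 5000)
```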
Generally speaking, the spectrum obtained with the rectangular window is smoother, but in the high-frequency part it is easy to lose waveform details and thus miss some important information; the Hamming window can effectively overcome this loss of high-frequency information, but its spectrum is relatively less smooth; the Hanning window generally requires a larger bandwidth, about twice that of a rectangular window of the same width, and its attenuation is much stronger than that of the rectangular window.

The selection of the window length is also very important. The window length N, the sampling period Ts, and the frequency resolution Δf are related by

(6) Δf = 1/(N·Ts).

For a given sampling period, if the window width increases, the frequency resolution decreases accordingly, so that the details of audio changes cannot be reflected; conversely, if the window width N decreases, the frequency resolution increases, but the audio changes are tracked less smoothly. Therefore, it is necessary to weigh the speed of signal change against the level of detail to be reflected and to set an appropriate window width according to actual needs. Based on the above analysis, this paper uses a Hamming window when windowing and framing the humming audio signal in the experiments, with a window length of 5000 points and an overlap of 2600 points between frames.

### 2.4. Note Onset Detection

In this paper, a double-threshold method is used for onset detection. The double-threshold method first examines the short-time energy of the humming signal; the short-time average energy En of the speech signal at time n is

(7) $$E_n = \sum_{m=-\infty}^{\infty} h(m)^2 \, c(n-m),$$

where h is the input signal and c is the weighting window. Since voiced sounds with higher energy always appear after speech starts, one can refer to the average short-time energy of the humming audio, set a higher threshold Th to confirm that speech has started, and then take a threshold TL slightly lower than Th to determine the starting point N1 of the effective speech. To distinguish unvoiced segments from silence, the short-time zero-crossing rate of the humming signal is examined; the short-time zero-crossing rate Zn of the signal at time n is

(8) $$Z_n = \sum_{m=-\infty}^{\infty} \bigl|\operatorname{sgn}[h(m)] - \operatorname{sgn}[h(m-1)]\bigr| \, w(n-m),$$

where h is the input signal, w is the window weight, and sgn is the sign function. The double-threshold method also uses a lower threshold T1 as a reference when evaluating the zero-crossing rate of the signal. Generally speaking, the low-threshold zero-crossing rate of a noise or silent segment is significantly lower than that of a speech segment, so the interference of noise segments can be excluded.

### 2.5. Humming Signal Feature Representation

Sound is an analog signal, and its one-dimensional time-domain waveform only reflects the relationship of sound pressure with time; it does not characterize the audio signal well. Feature extraction means that the humming signal is analyzed and processed to remove irrelevant redundant information and to retain the important information that affects humming recognition. Therefore, in order to use the humming audio signal in the deep-learning framework, it is very important to choose a suitable representation of the audio signal features.
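The double-threshold onset detector of Section 2.4 can be summarized compactly. The sketch below is an illustrative addition: the frame length, hop size, and the way the thresholds are derived from the average short-time energy are assumptions, not values given in the paper.

```python
import numpy as np

def detect_onset(x, frame_len=512, hop=256):
    """Double-threshold onset detection sketch (Section 2.4).

    Energy thresholds Th/TL locate the start of effective speech; the
    zero-crossing rate (Eq. (8)) can additionally separate unvoiced
    frames from silence. Threshold heuristics here are assumptions.
    """
    n_frames = (len(x) - frame_len) // hop
    frames = np.stack([x[i * hop:i * hop + frame_len] for i in range(n_frames)])
    energy = np.sum(frames ** 2, axis=1)                                 # Eq. (7)
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) / 2, axis=1)  # Eq. (8)

    th = 0.5 * energy.mean()    # higher threshold Th (assumed heuristic)
    tl = 0.1 * energy.mean()    # lower threshold TL

    candidates = np.where(energy > th)[0]
    if len(candidates) == 0:
        return None
    start = candidates[0]
    while start > 0 and energy[start - 1] > tl:   # extend back down to TL
        start -= 1
    return start * hop, zcr[start]

# Example: 0.5 s of near-silence followed by a 440 Hz tone, 16 kHz sampling.
fs = 16000
sig = np.concatenate([0.001 * np.random.randn(fs // 2),
                      np.sin(2 * np.pi * 440 * np.arange(fs) / fs)])
onset, z = detect_onset(sig)
print("onset at %.2f s, zero-crossing rate %.3f" % (onset / fs, z))
```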
In the research of speech signal analysis, cepstral features contain more information than other features, characterize the speech signal better, and are therefore widely used. The commonly used cepstral features are linear prediction cepstral coefficients (LPCC) and Mel-frequency cepstral coefficients (MFCC) [32]. Since the high-frequency part of the humming signal is easily disturbed by noise, resulting in frequency shifts, most of the effective information for humming recognition is concentrated in the low-frequency part. By converting the linear frequency scale into the Mel frequency scale, the Mel-frequency cepstral coefficients highlight the low-frequency part of the humming signal, that is, the information that is more conducive to identification, and at the same time shield the audio from some environmental noise [33]. Therefore, Mel-frequency cepstral coefficients are used more commonly than linear prediction cepstral coefficients in most speech and acoustic pattern recognition problems. In the humming recognition problem studied in this paper, they are likewise used as the input feature of the deep-learning framework.

The Mel-frequency cepstral coefficient (MFCC) is an audio spectrum feature derived from the characteristics of human hearing; it has a nonlinear correspondence with frequency in Hertz. The pitch perceived by the human ear is not linearly related to the frequency of the sound, and the Mel frequency scale was proposed to address this; using the Mel scale matches the auditory characteristics of the human ear. In the Mel frequency domain, the human perception of pitch is linear: if two humming tones are a factor of two apart in Mel frequency, the human ear also perceives them as a factor of two apart. Mel-frequency cepstral coefficients can be thought of as folding the short-time Fourier transform along the frequency axis, reducing the size while preserving the most important perceptible information.

The relation between the Mel frequency fMel and the actual frequency f is

(9) fMel = 2595 lg(1 + f/700).

The input signal is filtered with a Mel bandpass filter bank. The effects of the frequency components in each band are superimposed in the human ear, so the energy in each filter band is summed; the logarithmic magnitude spectrum of all filters is then passed through a discrete cosine transform to obtain the Mel-frequency cepstral coefficients. The calculation proceeds as follows. Firstly, pre-emphasis, windowing, and framing are performed on the humming signal, and the spectrum is computed with the short-time Fourier transform. Secondly, a Mel filter bank of L channels is placed on the Mel frequency axis; the value of L is determined by the highest frequency of the signal and is generally 12-16. Thirdly, the linear magnitude spectrum of the signal is passed through the Mel filters to get the filter outputs Y(l):

(10) $$Y(l) = \sum_{k=o(l)}^{h(l)} w_l(k)\, x_n(k), \qquad l = 1, 2, \ldots, L,$$

where o(l) and h(l) are the lowest and highest frequency bins of the l-th filter, x_n(k) is the input signal spectrum, and w_l is the filter weight. Fourthly, the logarithm of the filter outputs is taken and the discrete cosine transform is applied:

(11) $$C_{\mathrm{MFCC}}(n) = \sum_{l=1}^{L} \lg Y(l) \cdot \cos\!\left(\frac{\pi (l - 0.5)\, n}{L}\right), \qquad n = 1, 2, \ldots, L.$$

### 2.6. Experiment Setup

The main hardware used in the experiments is an Intel(R) Core(TM) i7-7700K processor and an NVIDIA GeForce GTX 1070 Ti graphics card.
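The Mel-filterbank/MFCC computation of Eqs. (9)-(11) in Section 2.5 can be sketched in a few lines of NumPy. The following is an illustrative addition; the number of filters L, the FFT size, and the synthetic test frame are assumptions, not values taken from the paper.

```python
import numpy as np

def hz_to_mel(f):          # Eq. (9)
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc_frame(frame, fs=16000, n_filters=14, n_fft=8192):
    """MFCCs of one windowed frame, following Eqs. (9)-(11)."""
    spectrum = np.abs(np.fft.rfft(frame, n_fft))        # linear magnitude spectrum
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)

    # Triangular Mel filter bank: L filters equally spaced on the Mel scale.
    mel_edges = np.linspace(hz_to_mel(0.0), hz_to_mel(fs / 2.0), n_filters + 2)
    hz_edges = mel_to_hz(mel_edges)
    Y = np.zeros(n_filters)
    for l in range(n_filters):
        lo, center, hi = hz_edges[l], hz_edges[l + 1], hz_edges[l + 2]
        rising = (freqs - lo) / (center - lo)
        falling = (hi - freqs) / (hi - center)
        w = np.clip(np.minimum(rising, falling), 0.0, None)   # weights w_l(k)
        Y[l] = np.sum(w * spectrum)                            # Eq. (10)

    # Eq. (11): log filter outputs followed by a discrete cosine transform.
    l_idx = np.arange(1, n_filters + 1)
    return np.array([np.sum(np.log10(Y + 1e-12) *
                            np.cos(np.pi * (l_idx - 0.5) * k / n_filters))
                     for k in range(1, n_filters + 1)])

# Example on a synthetic Hamming-windowed frame (5000 samples at 16 kHz).
frame = np.hamming(5000) * np.sin(2 * np.pi * 440 * np.arange(5000) / 16000)
print(mfcc_frame(frame).round(2))
```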
In terms of software, the deep-learning neural network model is implemented with Keras, using TensorFlow as the Keras backend, and some algorithms are implemented with scikit-learn.

The dataset used in the experiments comes from the DSD100 and MedleyDB datasets; 50 songs are selected, covering a variety of music styles such as Rap, Country, Hip-Hop, and Rock, and each song is sung by one to three professional singers. For the humming audio file of each song, the note starting points are first detected, then starting points are selected at random and 180 ten-second segments are cut out, giving a total of 9,000 humming recordings. These are divided into a training set, a validation set, and a test set at a ratio of 4:1:1. Since this part of the test set is sung by professional singers, it is called the professional group test set. In addition, for 10 of the songs, 30 audio clips hummed by three students were recorded to form the nonprofessional group test set.

Three types of evaluation indicators are used in the experiments: the accuracy rate (ACC), the response time (TIME), and the mean reciprocal rank (MRR). Since the deep-learning framework for humming recognition returns multiple recognition results with their corresponding probabilities, the recognition accuracy alone cannot fully reflect the performance of the framework. For this reason, MRR is used as one of the evaluation indicators. MRR is widely used in the evaluation of problems that return multiple results and reflects the quality of the returned result set. Its formula is

(12) $$\mathrm{MRR} = \frac{1}{N}\sum_{i=1}^{N}\frac{1}{r_i},$$

where N is the number of test queries and r_i is the rank of the correct result in the i-th query.

Figure 2: Humming recognition deep-learning framework.
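The MRR of Eq. (12) is straightforward to compute from the ranked lists returned by the recognizer. The helper below is an illustrative addition; the example ranks are made up.

```python
def mean_reciprocal_rank(ranks):
    """Eq. (12): ranks[i] is the 1-based rank of the correct song in query i."""
    return sum(1.0 / r for r in ranks) / len(ranks)

# Example: the correct song came 1st, 1st, 3rd, and 2nd in four test queries.
print(mean_reciprocal_rank([1, 1, 3, 2]))  # 0.708...
```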
## 3. Results and Discussion

### 3.1. Humming Recognition Deep-Learning Framework and Design

The humming recognition deep-learning framework consists of the following parts:

(1) Humming Audio Database. This includes the humming recognition training dataset and test dataset. The vocal-track audio of the original (or professionally sung) version of each song constitutes the training dataset. In addition to a part of the above audio, the test dataset also contains some nonprofessional humming audio, used to compare and evaluate the generalization ability of the model.

(2) Preprocessing Module. The humming audio data is processed and analyzed to obtain the feature representation of the audio data; the main processing flow has been described in Section 2.

(3) Neural Network Training Module. The training dataset is fed into the humming recognition neural network and trained in batches; the validation set is used to compute the loss function value after each iteration, training stops once a certain accuracy requirement is reached, and the network weights with the smallest loss function value are output as the optimal parameters of the model.

(4) Neural Network Test Module. Based on a certain amount of test humming audio, appropriate evaluation indicators are used to test and evaluate the performance of the neural network, as the basis for repeatedly adjusting the training process and parameter selection.

(5) Humming Recognition System. The humming recognition system is based on the trained neural network model and accepts human humming as input.
The humming signal undergoes audio processing steps such as digitization, filtering, pre-emphasis, windowing, and calculation of the Mel cepstral coefficients to obtain the mel-spectrogram representation of the humming signal, which is fed into the neural network recognition model; the recognition result is obtained and returned to the user.

(6) Thanks to the modular design, the deep-learning framework for humming recognition has good scalability, and its modules have high cohesion and low coupling. For example, the audio database can be flexibly replaced or modified for neural network model training, so that training and evaluation can be performed on different datasets; it is also possible to take a set of model parameters produced by the neural network training module and use them to test network performance on different datasets and to test the prototype system end to end, as shown in Figure 2.

The humming recognition neural network model is the core part of the deep-learning framework, and its overall design is as follows:

(1) The input layer receives the mel-spectrogram of the humming audio signal as input.
(2) Several convolutional layers then learn the local features of the audio signal to obtain audio feature maps.
(3) Several recurrent layers then summarize and learn the sequence features of the audio signal over time.
(4) Finally, the probability distribution over candidate songs for the input audio signal is obtained through the Softmax activation function.

The humming recognition deep neural network model designed accordingly is shown in Figure 3.

Figure 3: The overall structure of the neural network.

The data received by the input layer is a two-dimensional mel-spectrogram representation of the audio signal. In the hidden layers, in order to fully extract the spectral features of the audio signal, four convolution-and-pooling stages are used to learn the local features of the signal, and gated recurrent units are used in the last two layers to learn and summarize the sequence features of the audio signal. Finally, the Softmax activation function is used in the output layer to express the result of the network as a probability distribution over the songs.

### 3.2. A Prototype System for Recognition of Humming Audio Scores

The humming audio score recognition prototype system adopts a C/S architecture: the server is based on the Python web framework Bottle, the client is implemented with the React Native framework, and the client and server communicate over HTTP. The overall interaction process of the system is shown in Figure 4.

Figure 4: Interaction process of the humming audio score recognition system.

The user inputs the humming signal through the client recording module; the client records the humming audio at a sampling frequency of 16000 Hz, performs a series of preprocessing steps, and sends an HTTPS POST request through the audio uploading module to upload the audio to the server for processing. The server-side audio preprocessing module performs note starting-point detection and audio segmentation on the humming audio and then feeds it into the humming recognition module.
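Before continuing with the recognition module, the layer stack described in Section 3.1 (four convolution-and-pooling stages followed by two GRU layers and a Softmax output, with sizes roughly following Table 3 and the hyperparameters of Table 2) can be written in Keras as a rough sketch. This is an illustrative addition under the assumption that the mel-spectrogram (64 Mel bands by 625 frames) is precomputed and fed directly to the network; it is not the authors' exact code.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_crnn(n_songs: int = 50, n_mels: int = 64, n_frames: int = 625) -> keras.Model:
    """Convolutional recurrent network sketch following Figure 3 / Table 3."""
    model = keras.Sequential()
    model.add(keras.Input(shape=(n_mels, n_frames, 1)))
    # Four convolution + batch-norm + ReLU + max-pooling stages (local features).
    for _ in range(4):
        model.add(layers.Conv2D(21, (4, 4), padding="same"))
        model.add(layers.BatchNormalization())
        model.add(layers.Activation("relu"))
        model.add(layers.MaxPooling2D((2, 2)))
    # Collapse the pooled feature map into a time sequence of 21-dim features.
    model.add(layers.Reshape((-1, 21)))
    # Two GRU layers summarize the sequence over time.
    model.add(layers.GRU(41, return_sequences=True))
    model.add(layers.GRU(41))
    model.add(layers.Dropout(0.4))
    # Softmax over the candidate songs.
    model.add(layers.Dense(n_songs, activation="softmax"))
    model.compile(optimizer=keras.optimizers.SGD(learning_rate=0.03, momentum=0.9),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model

model = build_crnn()
model.summary()
```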
The humming recognition module uses the neural network model to produce the recognition result and returns the corresponding score image for display on the client.

Considering that the server needs to compile the Keras model, read the trained neural network parameters, and recognize the humming audio, the server is written in Python. Bottle is a lightweight Python web framework that provides basic routing, encapsulation of request objects, template support, and so on; it allows rapid development of small web applications and meets the needs of the humming recognition prototype server. The React Native framework adopted for the mobile terminal is a cross-platform mobile application development framework launched by Facebook, with which mobile applications can be developed quickly on top of the React ecosystem.

### 3.3. Training the Neural Network

In the neural network training process, the humming audio is sampled at a frequency of 16000 Hz. For each training run, the network weights are initialized from a uniform distribution, the cross-entropy function is used as the loss function, and the stochastic gradient descent algorithm is used for learning.

For the overfitting problem, early termination of training is used in the experiments. During training, as the number of training iterations increases, the error on the training set generally keeps decreasing, while the error on the validation set first decreases and then increases, that is, the model begins to overfit; before that point there is a best number of training cycles. Therefore, if the accuracy on the validation set does not improve within 10 training epochs, the best number of training iterations is considered to have been reached, the training process is terminated early, and the network weights of the epoch with the best validation accuracy are kept as the result of training.

In addition, the Dropout method is used to alleviate overfitting. Dropout is an extremely effective and simple regularization technique that can significantly reduce overfitting of the network parameters during training. Dropout can be understood intuitively as performing subsampling in a fully connected neural network and only updating the weights of the sampled subnetwork during each training step. During model training, Dropout randomly selects some nodes in the network and sets them to an inactive state; the inactive nodes do not participate in the current training step and their weights are not changed. In the next training step the process is repeated, and nodes that were inactive last time may resume work. Randomly setting the output of some neurons to 0 during training means that no neuron depends completely on any other neuron, so richer feature expressions can be obtained.

In the experiments, the Dropout ratio was selected as p = 0.3. Table 1 shows the change in accuracy of the model on the training set and the validation set before and after applying the Dropout method, with the hyperparameters unchanged, over ten training runs.

Table 1: Suppressing overfitting using the Dropout method.
| Mean value of ACC over 10 runs | Before dropout | After dropout |
| --- | --- | --- |
| Training dataset | 0.9127 | 0.98200 |
| Test dataset | 0.9016 | 0.94321 |

It can be seen from Table 1 that after applying the Dropout method, although the accuracy on the training dataset decreases, the accuracy on the validation dataset increases. It can therefore be considered that adding Dropout improves the generalization ability of the model to a certain extent and mitigates overfitting.

In the humming recognition deep neural network model, several hyperparameters affect the model quality and training time, including the size of the convolution kernel, the size of the pooling kernel, the number of training cycles (epochs), the batch size, the learning rate, the momentum factor, and the Dropout ratio described above.

Among them, the size of the convolution kernel and the size of the pooling kernel are network structure parameters. A training cycle refers to the process of training on all sample data once; if the number of training cycles is too small the model may not converge, and if it is too large overfitting may occur. The batch size is the number of samples used in each stochastic gradient descent step. For relatively small datasets, the entire dataset can be fed into the network for training, and the resulting gradient descent direction then represents the overall characteristics of the dataset well. For the humming recognition dataset, however, the memory footprint of the audio data makes it infeasible to load all the data at once, so it is important to use a reasonable batch size that keeps the sample distribution in each iteration representative. At the same time, within a reasonable range, increasing the batch size reduces the number of training iterations and speeds up training. The learning rate is the weight of the negative gradient in the stochastic gradient descent algorithm: a larger learning rate speeds up training but easily overshoots the minimum and prevents the model from converging, while a smaller learning rate makes training slow and may get the model stuck in local minima. The momentum factor is a hyperparameter used in stochastic gradient descent to control the influence of the previous weight update on the current one, and it also has a great impact on training speed.

In the training of the humming recognition neural network model, grid search is used to find good values for the hyperparameters. The principle of grid search is simple: first, a small candidate set is defined for each hyperparameter, for example {0.01, 0.02, 0.05, 0.1} for the learning rate; then the Cartesian product of these candidate sets is taken to obtain multiple hyperparameter combinations, a training experiment is run automatically for each combination, and the combination with the smallest validation error is selected as the best choice. The final model hyperparameter values are shown in Table 2.

Table 2: Humming recognition neural network hyperparameters.

| Hyperparameter | Value |
| --- | --- |
| Convolution kernel size | (4, 4) |
| Pooling kernel size | (2, 2) |
| Number of training cycles | 60 |
| Batch size | 190 |
| Learning rate | 0.03 |
| Momentum factor | 0.9 |
| Dropout rate | 0.4 |

### 3.4. Performance Analysis of Humming Recognition Neural Network
### 3.4. Performance Analysis of the Humming Recognition Neural Network

Training produces the humming recognition neural network model with the best test results. The composition of each network layer, the output dimensions, and the number of trainable parameters are shown in Table 3.

Table 3: Humming recognition neural network model.

| Network layer | Output dimension (None indicates the number of audio clips) | Number of parameters |
| --- | --- | --- |
| Input_1 (input layer) | (None, 1, 160000) | 0 |
| Melspectrogram_1 (Mel spectrogram) | (None, 64, 625, 1) | 0 |
| Batch_normalization_1 | (None, 64, 625, 1) | 4 |
| Conv2d_1 (convolutional layer) | (None, 64, 625, 21) | 210 |
| Batch_normalization_2 | (None, 64, 625, 21) | 84 |
| Activation_1 (ReLU) | (None, 64, 625, 21) | 0 |
| Max_pooling2d_1 | (None, 32, 313, 21) | 0 |
| Conv2d_2 (convolutional layer) | (None, 32, 313, 21) | 3990 |
| Batch_normalization_3 | (None, 32, 313, 21) | 84 |
| Activation_2 (ReLU) | (None, 32, 313, 21) | 0 |
| Max_pooling2d_2 | (None, 16, 157, 21) | 0 |
| Conv2d_3 (convolutional layer) | (None, 16, 157, 21) | 3990 |
| Batch_normalization_4 | (None, 16, 157, 21) | 84 |
| Activation_3 (ReLU) | (None, 16, 157, 21) | 0 |
| Max_pooling2d_3 | (None, 8, 79, 21) | 0 |
| Conv2d_4 (convolutional layer) | (None, 8, 79, 21) | 3990 |
| Batch_normalization_5 | (None, 8, 79, 21) | 84 |
| Activation_4 (ReLU) | (None, 8, 79, 21) | 0 |
| Max_pooling2d_4 | (None, 4, 40, 21) | 0 |
| Reshape (reduce dimension) | (None, 160, 21) | 0 |
| Gru_1 (GRU) | (None, 160, 41) | 7749 |
| Gru_2 (GRU) | (None, 41) | 10209 |
| Dropout | (None, 41) | 0 |
| Dense_1 (Softmax) | (None, 50) | 2100 |

The accuracy (ACC) and loss function value (LOSS) of the network model on the training set and validation set are shown in Table 4.

Table 4: Humming recognition neural network training results.

| Dataset | ACC | LOSS |
| --- | --- | --- |
| Training dataset | 0.98213 | 0.1662 |
| Test dataset | 0.94125 | 0.3211 |

On both the training set and the validation set, the model achieves a recognition accuracy above 93% with a relatively small loss value, so the model fits the training data well and also performs well on the validation data. Next, the model is used to recognize the two test sets; the experimental results are shown in Table 5.

Table 5: Humming recognition test results.

| Test set | LOSS | ACC | MRR |
| --- | --- | --- | --- |
| Professional group | 0.3221 | 0.94120 | 0.97213 |
| Nonprofessional group | 1.1245 | 0.80215 | 0.8211 |

On the professional group test set, the humming recognition neural network model achieves excellent results, with a recognition accuracy of 0.9396 and a mean reciprocal rank of 0.9633. On the nonprofessional group test set, the accuracy drops to 0.7896, but the recognition results are still good. In terms of recognition efficiency, the average processing time per fragment is about 0.6 s. Overall, the proposed convolutional recurrent neural network model accomplishes the humming recognition task well.

The drop in accuracy on the nonprofessional group's test set is mainly caused by inaccuracies in that group's humming. In addition, since the accuracy on the training data set is clearly higher than on the validation data set and on the professional test data set, the model may still overfit to a certain degree, which also affects the recognition accuracy on the test sets.
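As a rough Keras reconstruction of the layer stack summarized in Table 3 (an assumption based on Tables 2 and 3, not the authors' released code), the sketch below stacks four Conv-BatchNorm-ReLU-MaxPool stages, a reshape, two GRU layers, Dropout, and a 50-way Softmax output. The mel-spectrogram (64 x 625) is assumed to be computed in preprocessing rather than inside the network, so the exact parameter counts differ slightly from the table.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_crnn(num_songs: int = 50) -> keras.Model:
    inputs = keras.Input(shape=(64, 625, 1))            # mel bands x frames x 1
    x = layers.BatchNormalization()(inputs)
    for _ in range(4):                                   # four conv/pool stages
        x = layers.Conv2D(21, (4, 4), padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation("relu")(x)
        x = layers.MaxPooling2D((2, 2), padding="same")(x)
    # (4, 40, 21) -> a 160-step sequence of 21-dimensional feature vectors
    x = layers.Reshape((4 * 40, 21))(x)
    x = layers.GRU(41, return_sequences=True)(x)
    x = layers.GRU(41)(x)
    x = layers.Dropout(0.4)(x)
    outputs = layers.Dense(num_songs, activation="softmax")(x)
    return keras.Model(inputs, outputs)

build_crnn().summary()   # layer shapes roughly follow Table 3
```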
In terms of recognition accuracy, the deep-learning framework for humming recognition improves on most existing humming recognition work. In terms of response time, the deep-learning-based humming recognition method proposed in this paper has a clear advantage over matching-based methods, because for a deep-learning model the recognition process only applies the trained model parameters in a series of matrix operations, which can be computed quickly when a GPU is used for acceleration; compared with matching algorithms, it has a natural speed advantage. Based on the above experimental results, the deep-learning-based humming recognition method proposed in this paper completes the humming recognition task well, is practical, and offers clear improvements over traditional matching-based humming recognition methods.
## 4. Conclusion

Focusing on the problem of automatic recognition of humming audio signals, this paper applies deep-learning methods to humming recognition and combines them with traditional audio signal processing to design a deep-learning framework for humming audio recognition, verify its feasibility, and finally realize a deep-learning-based humming audio score recognition system. The main conclusions are as follows.

(1) To use humming audio signals in a deep-learning framework, this paper studies several techniques in the field of audio analysis, including audio sampling, filtering, pre-emphasis, two-dimensional signal representation, and note onset detection, compares their differences, advantages, and disadvantages, and thereby provides the theoretical basis and processing methods for converting humming audio signals into deep neural network input vectors.

(2) On the basis of deep-learning principles and theory, a convolutional recurrent neural network model is designed and implemented for the recognition of humming audio signals. It combines the strength of convolutional neural networks in local feature extraction with the strength of recurrent neural networks in sequence data processing, and with reasonably chosen network components it yields a deep-learning framework for humming recognition.

(3) Using open-source deep-learning platforms and tools, experiments are carried out on the proposed model. By training, testing, and repeatedly adjusting the model, model parameters with good performance are obtained, and the evaluation on the test data set verifies the feasibility and effectiveness of the proposed neural network model and analyzes its performance.

(4) Based on the proposed deep-learning framework, a prototype system for humming audio score recognition is designed and implemented with server-side and mobile-side development technologies, including the server-side audio recognition service and the mobile-side audio recording, audio uploading, and other functional modules.

Limited by the available research time and by our understanding of and practical experience with deep-learning theory, this study still has deficiencies in several aspects that deserve more in-depth research, for example, developing a sentence-level onset detection method for audio signals to replace the onset detection method used in this paper, improving the feature representation of the humming signal, studying the neural network design for humming recognition in more depth, and enhancing the robustness of the deep-learning framework for humming recognition [34].

---

*Source: 1002105-2022-08-18.xml*
# Study on the Application of Improved Audio Recognition Technology Based on Deep Learning in Vocal Music Teaching

**Authors:** Nan Liu
**Journal:** Mathematical Problems in Engineering (2022)
**Category:** Engineering & Technology
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1002105
---

## Abstract

As one of the hotspots in music information extraction research, music recognition has received extensive attention from scholars in recent years. Most current methods are based on traditional signal processing, and there is still much room for improvement in recognition accuracy and efficiency; studies of music recognition based on deep neural networks remain few. This paper expounds the basic principles of deep learning and the basic structure and training methods of neural networks. For two commonly used deep networks, the convolutional neural network and the recurrent neural network, their typical structures, training methods, advantages, and disadvantages are analyzed. At the same time, a variety of platforms and tools for training deep neural networks are introduced and compared, and TensorFlow and Keras are selected from them, laying the foundation for the neural network training practice in this study. Results show that, through the development and experimental demonstration of the prototype system and comparison with other work in the field of humming recognition, the deep-learning method can be applied to the humming recognition problem and can effectively improve the accuracy of humming recognition and reduce the recognition time. A convolutional recurrent neural network is designed and implemented, combining the local feature extraction of convolutional layers with the ability of recurrent layers to summarize sequence features, to learn the features of the humming signal; this yields audio features of higher abstraction and complexity, improves the ability of the neural network to learn audio signal features, and lays the foundation for an efficient and accurate humming recognition process.

---

## Body

## 1. Introduction

Traditional text-based retrieval techniques are still widely used in the field of music retrieval, but text-based music retrieval has several unsolved problems [1]. Firstly, users need to know the name, singer, music style, and other information of the song they are looking for; without this information, they cannot find the music they are interested in. Secondly, in a text-based music retrieval system, the music in the library needs to be associated with various additional information. Such labeling is difficult to complete automatically by machines and requires a lot of manpower at high cost [2].

Researching more efficient music retrieval technology is therefore of great practical value. Content-based music recognition has become a research hotspot in the field of music retrieval in recent years. Compared with text retrieval, content-based music retrieval is more convenient, and music retrieval based on humming recognition can also be combined well with traditional text retrieval to provide a more accurate and richer retrieval method. Humming recognition technologies have broad application prospects and are useful in practical applications. At the same time, there are still few studies of humming recognition based on deep learning, so there remains much room for research on deep-learning-based humming recognition models.
Therefore, this subject has good research prospects and is a research subject worthy of in-depth exploration [3].Humming recognition has the advantages of user-friendly interaction and convenient use on mobile devices. The main technologies used in humming recognition research can be summarized into the following types [4]: recognition technology based on symbol matching, recognition technology based on melody matching, and recognition technology based on statistical models. The recognition technology based on symbol matching is developed from the traditional string matching method. It generally first extracts the note information to obtain the note sequence. Then the note sequence is regarded as a string, and the similarity of the note sequence is obtained by using the string matching correlation algorithm, and the recognition result is obtained accordingly [5]. The recognition technology based on melody matching first extracts the humming audio pitch signal, connects the curves of pitch changing with time to form a melody curve, and then analyzes and matches the melody feature with the audio melody feature in the database to obtain the recognition result. The recognition technology based on a statistical model utilizes the time-domain features or frequency-domain features of audio, adopts statistical models such as Hidden Markov, etc. to model the songs in the database, and then calculates the probability of humming audio predicted by the model, with the maximum probability. The songs are returned as the recognition result [3].In 1995, Ghias et al. developed and studied the first humming recognition system, which uses a typical string matching-based recognition technology, and uses the lettersU (up), D (down), or S for the pitch change of the audio signal (unchanged) to represent the humming audio signal using a string consisting of these three characters, and then use the string matching algorithm to calculate the matching probability of the song in the database [6]. The research from McNab et al. is also based on symbol matching technology [7]. They extracted the rhythm information and pitch change information of music, used the form of strings to represent such audio features, and verified the effectiveness of the method in the humming recognition system through experiments. In 1999, Kosugi et al. proposed a method to measure similarity by using Euclidean distance based on audio pitch and rhythm information. Scholars such as Calrisse proposed a new method in 2002, which introduced an auditory model and achieved good recognition accuracy improvement on public datasets [8]. Shih et al. creatively introduced the hidden Markov model in their research, using the audio pitch information as the input of the HMM, thus proving the feasibility of using the statistical model for humming recognition research [9]. Downie et al. introduced a dynamic time scaling algorithm for the robustness of the humming recognition system, which greatly improved the overall fault tolerance of the humming recognition system [10]. PardoB et al. comprehensively adopted a matching algorithm based on the hidden Markov model and a distance matching algorithm for similarity measurement in humming recognition [11].There are two main problems with the current humming recognition technology. 
The first is that the recognition accuracy still needs to be improved, especially when there is a partial deviation in the user’s humming, it is difficult for the existing music feature-based methods to obtain a satisfactory recognition accuracy [12]. On the other hand, there is a problem that the processing time is too long when using the feature matching correlation algorithm for humming recognition [13]. Extracting and processing relevant features from the user’s humming and matching with the songs in the database require a certain computing time, which brings a long waiting time to the user interaction process and is not user-friendly.Deep learning is a new research direction in the field of machine learning [14]. It is mainly based on artificial neural networks and uses multilayer representations to model complex relationships between data. In addition to the ability of traditional machine learning methods to discover the relationship between data features and tasks, deep learning can also summarize more abstract and complex features from simple feature learning [15]. In recent years, breakthrough achievements have been made in the fields of computer vision, image processing, natural language understanding, etc., which have attracted widespread attention.The development of deep learning can be roughly divided into three stages. Early neural network models were similar to bionic machine learning, which tried to mimic the learning mechanism of the brain [16]. The earliest neural network mathematical model was proposed by Professor Warren and Walter in 1943. In order to allow the computer to set the weights more automatically and reasonably, Professor Frank Rosenblatt proposed the perceptron model in 1958. The perceptron is the first model that can learn feature weights based on sample data [14]. However, under the limitation of computing power at that time, these research results were not taken seriously. At the end of the 1980s, the second wave of neural network research climax came with the proposal of distributed knowledge representation and neural network back-propagation algorithm. Neural network research in this period used multiple neurons to express knowledge and concepts in the real world, which greatly enhanced the expressive ability of the model and laid the foundation for later deep learning [17].The third high-speed development stage of neural network research comes with the improvement of computer performance and the development of cloud computing, GPU, and other technologies [18]. With the solid foundation provided by these hardware resources, the amount of computation is no longer an issue that hinders the development of neural networks. At the same time, with the popularization of the Internet and the development of search technology, people can easily obtain a large amount of data and information, which solves the long-standing problem of missing datasets in neural network training [19]. At this stage, deep learning has truly ushered in a development climax and has repeatedly achieved breakthrough results in many fields.In the field of speech recognition, the traditional GMM-HMM speech recognition model has encountered development bottlenecks after years of research, and the introduction of deep-learning-related technologies has significantly improved the accuracy of speech recognition [20]. 
Since the concept of deep learning was introduced into the field of speech recognition in 2009, in just a few years deep-learning methods have reduced the error rate of the traditional Gaussian mixture model on the TIMIT dataset from 21.7% to 17.9%. Dahl et al. combined DBNs with HMMs and achieved good results in large-vocabulary continuous speech recognition (LVCSR) [21]. In industry, most well-known Internet companies at home and abroad use deep-learning methods for speech recognition [22]. A fully automatic simultaneous interpretation system developed by Microsoft, based on deep-learning technology, can perform voice recognition, machine translation, and Chinese speech synthesis synchronously with the speaker, achieving an effect close to manual simultaneous interpretation [23]. Baidu applied deep neural networks to speech recognition research; on the basis of the VGGNet model, it integrated multilayer convolutional neural networks and long short-term memory network structures to develop an end-to-end speech recognition technology [24], and experiments show that the system reduces the recognition error rate by more than 10%. The speech recognition model proposed by Microsoft achieved a record-low error rate of 6.3% on the industry-standard Switchboard speech recognition task [25]. In its widely used speech recognition system, the domestic company iFLYTEK uses a feedforward sequential memory network that models the sentence speech signal through multilayer convolutional layers and summarizes the long-term dependencies of the speech [26]; its recognition rate improves on the best bidirectional recurrent neural network speech recognition systems by more than 15%.

In our study, we first study humming audio signal processing methods and compare the differences, advantages, and disadvantages of different techniques, including audio digitization, audio filtering, audio signal enhancement, note onset detection, and audio spectrum analysis, and we form a humming audio signal processing pipeline that provides effective datasets for training and testing the deep-learning framework in Section 2. In the Results section, using open-source deep-learning platforms and tools, good humming recognition neural network model parameters are obtained through training and repeated testing on the dataset, the feasibility and effectiveness of the proposed neural network model are verified on the test data set, and the performance of the model in terms of recognition accuracy, robustness, and training time is analyzed. Finally, based on the proposed deep-learning framework for humming recognition, a C/S architecture is adopted and a humming recognition prototype system is designed and implemented using server-side and mobile-side development technologies.

## 2. Methods

### 2.1. Audio Signal Processing Flow

First of all, for the test humming data, a digitized audio signal needs to be obtained through sampling and quantization. Then, certain preprocessing must be performed on the original humming data and test data, including filtering, pre-emphasis, windowing, and framing, to reduce the interference of audio signal noise and improve the saliency of features.
Secondly, in the process of training and recognition, it is also necessary to detect the starting point of each note; the data are intercepted from the note onset onward to eliminate the interference of silent-segment noise and improve the validity of the data. The audio signal processing flow is shown in Figure 1.

Figure 1: The audio signal processing flow.

### 2.2. Sampling and Quantization

An important step in converting the original humming signal into a digital signal is sampling and quantization. After sampling and quantization, the analog signal becomes a digital audio signal that is discrete in both time and amplitude. The sampling theorem states that the sampling frequency must be greater than twice the bandwidth of the audio signal; under this condition the sampling operation loses no information from the original audio signal, which can also be recovered from the sampled digital signal [27]. For human voice signals, the spectrum of the voiced signal is mainly concentrated in the low-frequency band below 4 kHz, while the spectrum of the unvoiced signal is very wide, extending above 10 kHz. In this paper, a sampling frequency of 16 kHz is used uniformly for the humming audio, so that the humming information is not lost.

After sampling, the signal needs to be quantized. Quantization converts the continuous amplitude values of the signal into discrete amplitude values [28]. The error generated in the quantization process is called quantization noise and is obtained as the difference between the quantized discrete amplitude and the original signal. In general, quantization noise has the following characteristics: (1) it is stationary white noise; (2) it is uncorrelated with the input signal; (3) it is uniformly distributed within the quantization interval, that is, it has an equal probability density distribution. The power ratio between the original audio signal and the quantization noise is called the quantization signal-to-noise ratio and is often used to characterize audio quality. In general, the amplitude of the speech signal obeys a Laplace distribution, and the quantization signal-to-noise ratio can be expressed as

$$S = 6.02B - 7.2, \quad (1)$$

where S is the signal-to-noise ratio in dB and B is the quantization word length in bits. Equation (1) shows that each bit of word length in the quantizer contributes about 6 dB of quantization signal-to-noise ratio. When the quantization signal-to-noise ratio reaches 35 dB or more, the audio quality meets the requirements of general communication systems, so the quantization word length should generally be greater than 7 bits. In practical applications, a word length of more than 12 bits is often used, because the variation range of the speech waveform can reach up to 55 dB; to maintain a signal-to-noise ratio of 35 dB while the dynamic range of the speech waveform varies by about 30 dB, an additional 9 bits of word length are used as compensation.
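A small illustration (not from the paper) of equation (1): the quantization signal-to-noise ratio grows by roughly 6 dB per bit of word length.

```python
def quantization_snr_db(word_length_bits: int) -> float:
    """S = 6.02 * B - 7.2, with B the quantizer word length in bits."""
    return 6.02 * word_length_bits - 7.2

for bits in (7, 8, 12, 16):
    print(f"{bits:2d} bits -> {quantization_snr_db(bits):5.1f} dB")
# About 8 bits are needed to exceed the ~35 dB communication-quality threshold,
# while 12-16 bits leave headroom for the large dynamic range of speech.
```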
### 2.3. Humming Signal Preprocessing

The humming signal captured by the recording is affected by quantization noise introduced during digitization as well as by power-frequency interference, aliasing interference, and so on. In order to reduce these disturbances before analyzing the humming signal and extracting feature parameters, the signal is first filtered [29]. The prefiltering operation first suppresses the frequency-domain components whose frequency exceeds half of the sampling frequency, to prevent aliasing interference, and then suppresses the power-frequency interference at about 50 Hz. In the experiments of this paper, a bandpass filter with an upper cut-off frequency of 3400 Hz and a lower cut-off frequency of 60 Hz is used to prefilter the humming audio and filter out the power-frequency interference.

The humming signal is easily affected by glottal excitation and mouth-nose radiation, and its effective components are relatively weak in the high-frequency part, so the high-frequency part of the spectrum is more difficult to obtain than the low-frequency part. The humming signal is therefore pre-emphasized first to increase the proportion of its high-frequency content, smooth its spectrum, and improve its high-frequency resolution, which facilitates analysis in the frequency domain [30]. Pre-emphasis is generally performed by a pre-emphasis digital filter, usually a first-order filter, after digitization. After pre-emphasis, the audio signal can be expressed as

$$y(n) = x(n) - \varepsilon\, x(n-1), \quad (2)$$

where x is the incoming humming signal and ε is a weight, taken as 0.94 in our study.

The humming audio sequence is a one-dimensional signal on the time axis. To analyze it, the audio signal is regarded as stationary over a short, millisecond-level interval. On this basis, the signal is windowed and divided into frames to maintain its short-term stationarity, so that subsequent feature vectors can be computed frame by frame to obtain a feature vector time series. There are generally two segmentation methods for windowing and framing: continuous segmentation and overlapping segmentation [31]. Continuous segmentation has no overlap between frames, which yields discrete frame feature vectors that do not interfere with each other. In this study, in order to obtain a smooth transition between frames and maintain the continuity of the features, overlapping segmentation is adopted, with the overlap between adjacent frames usually taken as about half of the frame length. Concretely, a moving window of finite length is used to weight the audio signal to achieve framing.

Commonly used window functions include the rectangular window, the Hamming window, and the Hanning window. Their definitions are as follows.

Rectangular window:

$$w(n) = \begin{cases} 1, & 0 \le n \le N-1 \\ 0, & \text{otherwise} \end{cases} \quad (3)$$

Hamming window:

$$w(n) = \begin{cases} 0.54 - 0.46\cos\dfrac{2\pi n}{N-1}, & 0 \le n \le N-1 \\ 0, & \text{otherwise} \end{cases} \quad (4)$$

Hanning window:

$$w(n) = \begin{cases} 0.5\left(1 - \cos\dfrac{2\pi n}{N-1}\right), & 0 \le n \le N-1 \\ 0, & \text{otherwise} \end{cases} \quad (5)$$

where N is the frame length.
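A numpy sketch (an assumption, not the authors' code) of the preprocessing steps above: pre-emphasis y[n] = x[n] - 0.94 x[n-1] followed by overlapping framing with a Hamming window, using the frame length and overlap quoted below (5000 samples with 2600 samples of overlap at 16 kHz).

```python
import numpy as np

def pre_emphasis(x: np.ndarray, eps: float = 0.94) -> np.ndarray:
    # y[n] = x[n] - eps * x[n-1], keeping the first sample unchanged
    return np.append(x[0], x[1:] - eps * x[:-1])

def frame_and_window(x: np.ndarray, frame_len: int = 5000, overlap: int = 2600) -> np.ndarray:
    hop = frame_len - overlap
    n_frames = 1 + max(0, (len(x) - frame_len) // hop)
    window = np.hamming(frame_len)               # 0.54 - 0.46*cos(2*pi*n/(N-1))
    frames = np.stack([x[i * hop : i * hop + frame_len] for i in range(n_frames)])
    return frames * window

signal = np.random.randn(16000 * 10)             # stand-in for a 10 s humming clip
frames = frame_and_window(pre_emphasis(signal))
print(frames.shape)                              # (n_frames, 5000)
```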
The choice of window shape affects the audio signal in different ways. Generally speaking, the spectrum obtained with the rectangular window is smoother, but waveform details in the high-frequency part are easily lost, so some important information is missed; the Hamming window effectively overcomes the loss of high-frequency information, but its spectrum is relatively less smooth; the Hanning window generally requires a larger bandwidth, about twice that of a rectangular window of the same width, and its attenuation is much larger than that of the rectangular window.

The selection of the window length is also important. The window length N, the sampling period T_s, and the frequency resolution Δf are related by

$$\Delta f = \frac{1}{N T_s}. \quad (6)$$

For a given sampling period, if the window width increases, the frequency resolution Δf decreases, so that details of the audio changes cannot be reflected; conversely, if the window width N decreases, the frequency resolution increases, but the representation of the audio changes becomes less smooth. It is therefore necessary to balance the rate of signal change against the level of detail required and to set an appropriate window width accordingly. Based on the above analysis, this paper uses a Hamming window for windowing and framing the humming audio signal in the experiment, with a window length of 5000 points and an overlap between frames of 2600 points.

### 2.4. Note Onset Detection

In this paper, a double-threshold method is used for onset detection. The double-threshold method first examines the short-term energy of the humming signal; the short-term average energy E_n of the speech signal at time n is

$$E_n = \sum_{m=-\infty}^{\infty} h(m)^2\, c(n-m), \quad (7)$$

where h is the input signal and c is the weight. Since voiced sounds with higher energy always appear after the speech starts, one can refer to the average short-term energy of the humming audio, set a higher threshold T_h to confirm that speech has started, and then use a threshold T_L slightly lower than T_h to determine the starting point N_1 of the effective speech. To distinguish unvoiced sound from silence, the short-term zero-crossing rate of the humming signal is examined; the short-term zero-crossing rate Z_n of the signal at time n is

$$Z_n = \sum_{m=-\infty}^{\infty} \left|\operatorname{sgn}\big(h(m)\big) - \operatorname{sgn}\big(h(m-1)\big)\right| w(n-m), \quad (8)$$

where h is the input signal, w is the weight, and sgn is the sign function. The double-threshold method uses a lower threshold T_1 as a reference for the zero-crossing rate of the signal. Generally, the low-threshold zero-crossing rate of a noise or silent segment is significantly lower than that of a speech segment, so the interference of noise segments can be excluded.
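A simplified sketch (assumed, not the paper's exact implementation) of the double-threshold idea in equations (7) and (8): frame-wise short-time energy with a high threshold Th to confirm voiced speech, a lower threshold TL to back-track to the effective starting frame, and the zero-crossing rate to help separate unvoiced frames from silence. The threshold factors are illustrative choices.

```python
import numpy as np

def short_time_energy(frames: np.ndarray) -> np.ndarray:
    return np.sum(frames ** 2, axis=1)

def zero_crossing_rate(frames: np.ndarray) -> np.ndarray:
    return 0.5 * np.sum(np.abs(np.diff(np.sign(frames), axis=1)), axis=1)

def detect_onset(frames: np.ndarray) -> int:
    energy = short_time_energy(frames)
    zcr = zero_crossing_rate(frames)
    th_high = 0.5 * energy.mean()          # Th: speech has definitely started
    th_low = 0.1 * energy.mean()           # TL: back-track toward the true onset
    zcr_floor = zcr.mean()                 # T1: reject low-ZCR silent frames
    start = int(np.argmax(energy > th_high))
    while start > 0 and (energy[start - 1] > th_low or zcr[start - 1] > zcr_floor):
        start -= 1
    return start                           # index N1 of the first effective frame

frames = np.random.randn(65, 5000) * np.linspace(0.01, 1.0, 65)[:, None]
print(detect_onset(frames))
```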
### 2.5. Humming Signal Feature Representation

Sound is an analog signal, and its one-dimensional time-domain waveform only reflects how sound pressure changes with time; it does not represent the characteristics of the audio signal well. Feature extraction analyzes and processes the humming signal to remove irrelevant, redundant information and retain the important information that affects humming recognition. Therefore, in order to use the humming audio signal in a deep-learning framework, choosing a suitable feature representation of the audio signal is very important.

In speech signal analysis, cepstral features contain more information than other features and characterize the speech signal better, so they are widely used; the common choices are linear prediction cepstral coefficients (LPCC) and Mel-frequency cepstral coefficients (MFCC) [32]. Since the high-frequency part of the humming signal is easily disturbed by noise, causing frequency shifts, most of the information useful for humming recognition is concentrated in the low-frequency part. By converting the linear frequency scale into the Mel frequency scale, the Mel-frequency cepstral coefficients emphasize the low-frequency part of the humming signal, that is, the information most useful for recognition, while shielding the audio from some environmental noise [33]. Mel-frequency cepstral coefficients are therefore more commonly used than linear prediction cepstral coefficients in most speech and acoustic pattern recognition problems, and they are also used as the input feature vector of the deep-learning framework in the humming recognition problem studied in this paper.

The Mel-frequency cepstral coefficient (MFCC) is an audio spectrum feature proposed on the basis of human hearing characteristics and has a nonlinear correspondence with the Hertz frequency. The perceived pitch is not linearly related to the frequency of the sound; the Mel frequency scale was proposed to solve this problem and conforms to the auditory characteristics of the human ear. In the Mel frequency domain, human perception of pitch is linear: if two humming tones are twice as far apart in Mel frequency, the human ear also perceives them as twice as far apart. Mel-frequency cepstral coefficients can be thought of as folding the short-time Fourier transform along the frequency axis, reducing its size while preserving the most important perceptible information.

The relation between the Mel frequency f_Mel and the actual frequency f is

$$f_{\text{Mel}} = 2595 \lg\left(1 + \frac{f}{700}\right). \quad (9)$$

The input signal is filtered with a Mel bandpass filter bank. Because the effect of each frequency-band component is superimposed in the human ear, the energy within each filter band is accumulated, and the logarithmic magnitude spectrum of all filters is then transformed with a discrete cosine transform to obtain the Mel-frequency cepstral coefficients. The calculation proceeds as follows. Firstly, pre-emphasis, windowing, and framing are performed on the humming signal, and the spectrum is computed with the short-time Fourier transform. Secondly, a Mel filter bank of L channels is placed on the Mel frequency axis, where L is determined by the highest frequency of the signal and is generally taken as 12-16. Thirdly, the linear magnitude spectrum of the signal is passed through the Mel filters to obtain the filter outputs Y(l):

$$Y(l) = \sum_{k=o(l)}^{h(l)} w_l(k)\, X_n(k), \quad l = 1, 2, \ldots, L, \quad (10)$$

where o(l) and h(l) are the lowest and highest frequency bins of the l-th filter, X_n is the magnitude spectrum of the input signal, and w_l is the filter weight. Fourthly, the logarithm of the filter outputs is taken and the discrete cosine transform is applied:

$$C_{\text{MFCC}}(n) = \sum_{l=1}^{L} \lg Y(l)\, \cos\frac{\pi (l - 0.5) n}{L}, \quad n = 1, 2, \ldots, L. \quad (11)$$
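A compact sketch (assumptions, not the authors' code) of equations (9)-(11): map Hertz to the Mel scale, accumulate the magnitude spectrum over L Mel filters, and apply a log plus discrete cosine transform to obtain the MFCCs. The random spectrum and filter weights are toy stand-ins.

```python
import numpy as np

def hz_to_mel(f_hz: np.ndarray) -> np.ndarray:
    return 2595.0 * np.log10(1.0 + f_hz / 700.0)         # equation (9)

def mfcc_from_spectrum(spectrum: np.ndarray, mel_filters: np.ndarray) -> np.ndarray:
    """spectrum: (n_bins,) magnitude spectrum; mel_filters: (L, n_bins) filter weights
    (zero outside each filter's band, so the sum matches equation (10))."""
    filter_out = mel_filters @ spectrum                   # equation (10): Y(l)
    log_energy = np.log10(np.maximum(filter_out, 1e-10))  # lg Y(l)
    L = mel_filters.shape[0]
    n = np.arange(1, L + 1)
    dct_basis = np.cos(np.pi * np.outer(np.arange(1, L + 1) - 0.5, n) / L)
    return log_energy @ dct_basis                         # equation (11): C_MFCC(n)

rng = np.random.default_rng(0)
mfcc = mfcc_from_spectrum(rng.random(2049), rng.random((16, 2049)))
print(hz_to_mel(np.array([700.0])), mfcc.shape)
```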
### 2.6. Experiment Setup

The main hardware used in the experiment is an Intel(R) Core(TM) i7-7700K processor and an NVIDIA GeForce GTX 1070 Ti graphics card. In terms of software, the deep-learning neural network model is implemented with Keras, using TensorFlow as the Keras backend, and some algorithms are implemented with scikit-learn.

The data set used in the experiment comes from the DSD100 and MedleyDB data sets; 50 songs are selected, covering a variety of music styles such as Rap, Country, Hip-Hop, and Rock, each sung by 1-3 professional singers. For the humming audio file of each song, the note onsets are first detected, then onsets are selected at random as starting points to cut out 180 ten-second segments, giving a total of 9,000 humming recordings. They are divided into a training set, a validation set, and a test set with a ratio of 4:1:1. Since this part of the test set is sung by professional singers, it is called the professional group test set. In addition, for 10 of the songs, 30 audio clips hummed by three students were recorded to form the nonprofessional group test set.

Three evaluation indicators are used in the experiment: accuracy (ACC), response time (TIME), and mean reciprocal rank (MRR). Since the deep-learning framework for humming recognition can return multiple recognition results with their corresponding probabilities, the recognition accuracy alone cannot fully reflect the performance of the framework; for this reason MRR, which is widely used to evaluate tasks that return multiple results and reflects the quality of the returned result set, is used as one of the indicators. Its formula is

$$\mathrm{MRR} = \frac{1}{N} \sum_{i=1}^{N} \frac{1}{r_i}, \quad (12)$$

where N is the number of test queries and r_i is the rank of the correct result in the i-th experiment.

Figure 2: Humming recognition deep-learning framework.
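A small helper (illustrative, not from the paper) for equation (12): the mean reciprocal rank over N queries, where each rank is the position of the correct song in that query's returned result list.

```python
def mean_reciprocal_rank(ranks: list[int]) -> float:
    """ranks[i] is the 1-based rank of the correct song for the i-th test hum."""
    return sum(1.0 / r for r in ranks) / len(ranks)

# e.g. four test hums whose correct songs were returned at positions 1, 1, 2, 5
print(mean_reciprocal_rank([1, 1, 2, 5]))   # 0.675
```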
For human voice signals, the frequency spectrum of the voiced signal is mainly concentrated in the low-frequency band below 4 kHz, and the frequency spectrum of the unvoiced signal is very wide, extending to the high-frequency band above 10 kHz. In the research of this paper, the sampling frequency of 16 kHz is uniformly used for the sampling of humming audio, so as to ensure that the humming information will not be lost.After sampling, the signal needs to be quantized. The quantization process can convert the continuous amplitude value of the signal on the time axis into discrete amplitude values [28]. The error generated in the quantization process is called quantization noise, which can be obtained by calculating the difference between the quantized discrete amplitude value and the original signal. In general, quantization noise has the following characteristics (1) it is a stationary white noise; (2) the quantization noise is irrelevant to the input signal; (3) the quantization noise is uniformly distributed within the quantization interval, that is, it has the characteristic of equal probability density distribution. The power ratio between the original audio signal and the quantization noise is called the quantization signal-to-noise ratio and is often used to characterize audio quality. In general, the amplitude of the speech signal obeys the Laplace distribution, and the quantized signal-to-noise ratio can be expressed by (1)S=6.02B−7.2,where S is the signal-to-noise ratio and B is the quantized word length. (1) shows that the word length of each bit in the quantizer corresponds to a quantized signal-to-noise ratio of about 6 dB. When the quantized signal-to-noise ratio reaches 35 dB and above, the audio quality can meet the requirements of general communication systems, so generally, the quantization word length should be greater than 7 dB bits. In practical applications, a word length of more than 12 bits is often used for quantization, because the variation range of the speech waveform can reach up to 55 dB. In order to maintain a signal-to-noise ratio of 35 dB within this range, an additional 9 bit word length is used for compensation. The dynamic range of the speech waveform around 30 dB changes. ## 2.3. Humming Signal Preprocessing For the humming signal input from the recording, quantization noise will be generated when it is converted from quantization to digitalization, and there will also be power frequency interference, aliasing interference, etc. In order to reduce these noises, the analysis and feature parameter extraction of the humming signal requires the interference generated, first of all, to filter the humming signal to be processed [29].The prefiltering operation first detects the frequency of each frequency domain component in the input signal and suppresses the components whose frequency value exceeds half of the sampling frequency to prevent aliasing interference. Then, suppress the power frequency interference of about 50 Hz. In the experiment of this paper, a bandpass filter is used to prefilter the humming audio, the upper cut-off frequency is set to 3400 Hz, and the lower cut-off frequency is set to 60 Hz to filter out the interference of the power frequency.The humming signal is easily affected by two types of noises: glottal excitation and mouth-nose radiation, and its effective components are relatively small in the high-frequency part. 
When computing the spectrum of the speech signal, the high-frequency part is harder to obtain than the low-frequency part, so additional processing is needed: the humming signal is first pre-emphasized to increase the proportion of its high-frequency components and flatten its spectrum, improving the high-frequency resolution and facilitating analysis in the frequency domain [30].

Pre-emphasis is generally performed by a digital pre-emphasis filter after the audio signal has been digitized. Such a filter boosts the high-frequency characteristics, and a first-order digital filter with transfer function $H(z) = 1 - \varepsilon z^{-1}$ is commonly used. After pre-emphasis, the audio signal can be expressed as

$$y(n) = x(n) - \varepsilon\, x(n-1), \tag{2}$$

where x is the incoming humming signal and ε is a weighting coefficient, taken as 0.94 in our study.

The humming audio sequence is a one-dimensional signal along the time axis. To analyze it, the audio signal is regarded as stationary over short, millisecond-scale intervals. On this basis, the signal is windowed and divided into frames to preserve its short-term stationary characteristics, so that the subsequent feature vectors can be computed frame by frame, yielding a time series of feature vectors.

There are generally two segmentation methods for windowing and framing: continuous segmentation and overlapping segmentation [31]. Continuous segmentation means there is no overlap between frames, giving discrete frame feature vectors that do not interfere with each other. In this study, overlapping segmentation is adopted to make the transition between frames smooth and maintain the continuity of the features; the overlap between consecutive frames is usually taken as about 1/2 of the frame length. Concretely, a moving window of finite length is used to weight the audio signal to achieve framing.

Commonly used window functions include the rectangular window, the Hamming window, and the Hanning window. Their definitions are as follows.

Rectangular window:

$$w(n) = \begin{cases} 1, & 0 \le n \le N-1, \\ 0, & \text{otherwise}. \end{cases} \tag{3}$$

Hamming window:

$$w(n) = \begin{cases} 0.54 - 0.46\cos\dfrac{2\pi n}{N-1}, & 0 \le n \le N-1, \\ 0, & \text{otherwise}. \end{cases} \tag{4}$$

Hanning window:

$$w(n) = \begin{cases} 0.5\left(1 - \cos\dfrac{2\pi n}{N-1}\right), & 0 \le n \le N-1, \\ 0, & \text{otherwise}, \end{cases} \tag{5}$$

where N is the frame length. The choice of window shape affects the audio signal in different ways. Generally speaking, the spectrum obtained with the rectangular window is smoother, but waveform details in the high-frequency part are easily lost, so some important information may be missed; the Hamming window effectively overcomes this loss of high-frequency information, but its spectrum is less smooth; the Hanning window requires a larger main-lobe bandwidth, about twice that of a rectangular window of the same width, and its attenuation is much greater than that of the rectangular window.

The selection of the window length is also very important when adding a window.
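Before turning to the window length, the pre-emphasis of equation (2) and the overlapping Hamming-window framing just described can be sketched as follows (illustrative NumPy code, not the paper's implementation; the frame length and hop size used here are placeholder values):

```python
import numpy as np

def pre_emphasis(x, eps=0.94):
    """y(n) = x(n) - eps * x(n-1): boosts the high-frequency part of the signal."""
    return np.append(x[0], x[1:] - eps * x[:-1])

def frame_and_window(x, frame_len=1024, hop=512):
    """Split the signal into ~50%-overlapping frames and apply a Hamming window."""
    n_frames = 1 + max(0, (len(x) - frame_len) // hop)
    window = np.hamming(frame_len)
    frames = np.stack([x[i * hop: i * hop + frame_len] for i in range(n_frames)])
    return frames * window

x = np.random.randn(16000)              # stand-in for one second of humming audio at 16 kHz
frames = frame_and_window(pre_emphasis(x))
print(frames.shape)                      # (number of frames, frame_len)
```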
There is the following relationship between the window length N, the sampling period $T_s$, and the frequency resolution $\Delta f$:

$$\Delta f = \frac{1}{N T_s}. \tag{6}$$

For a given sampling period, increasing the window width N lowers $\Delta f$ and refines the frequency resolution, but the analysis can then no longer follow rapid changes in the audio; conversely, decreasing N raises $\Delta f$, coarsening the frequency resolution, so the resulting spectrum is less smooth. The window width must therefore be set according to how quickly the signal changes and the level of detail that needs to be captured. Based on the above analysis, a Hamming window is used in the experiments when windowing and framing the humming audio signal, with the window length taken as 5000 points and the overlap between frames taken as 2600 points.

## 2.4. Note Onset Detection

In this paper, a double-threshold method is used for note onset detection. The method first examines the short-term energy of the humming signal; the short-term average energy $E_n$ of the speech signal at time n is

$$E_n = \sum_{m=-\infty}^{\infty} h(m)^2\, c(n-m), \tag{7}$$

where h is the input signal and c is the weighting (window) sequence. Since voiced sounds with higher energy always appear once speech has started, a higher threshold $T_h$, chosen with reference to the average short-term energy of the humming audio, can be used to confirm that speech has started, and a threshold $T_L$ slightly lower than $T_h$ is then used to determine the starting point $N_1$ of the effective speech. To distinguish unvoiced segments from silence, the short-term zero-crossing rate of the humming signal is examined; the short-term zero-crossing rate $Z_n$ of the signal at time n is

$$Z_n = \sum_{m=-\infty}^{\infty} \left|\operatorname{sgn}[h(m)] - \operatorname{sgn}[h(m-1)]\right| w(n-m), \tag{8}$$

where h is the input signal, w is the weighting (window) sequence, and sgn(·) is the sign function. The double-threshold method additionally uses a lower threshold $T_1$ and computes the zero-crossing rate of the signal with respect to this threshold. Generally speaking, the low-threshold zero-crossing rate of a noise or silent segment is significantly lower than that of a speech segment, so the interference of noise segments can be excluded.

## 2.5. Humming Signal Feature Representation

Sound is an analog signal, and its one-dimensional time-domain waveform only reflects how sound pressure varies with time; it cannot represent the characteristics of the audio signal well. Feature extraction means analyzing and processing the humming signal to remove irrelevant, redundant information and retain the important information that affects humming recognition. Therefore, to use the humming audio signal in the deep-learning framework, it is essential to choose a suitable representation of the audio features. In speech signal analysis, cepstral features are widely used because they carry more information than other features and characterize the speech signal better; the most commonly used are linear prediction cepstral coefficients (LPCC) and Mel-frequency cepstral coefficients (MFCC) [32]. Since the high-frequency part of the humming signal is easily disturbed by noise, causing frequency shifts, most of the information useful for humming recognition is concentrated in the low-frequency part.
By converting the linear frequency scale into the Mel frequency scale, Mel-frequency cepstral coefficients emphasize the low-frequency part of the humming signal, that is, the information most useful for recognition, while shielding the audio from some environmental noise [33]. Mel-frequency cepstral coefficients are therefore used more widely than linear prediction cepstral coefficients in most speech and acoustic pattern recognition problems, and they are also used as the input feature representation of the deep-learning framework in the humming recognition study of this paper.

The Mel-frequency cepstral coefficient (MFCC) is a spectral feature of audio signals derived from the characteristics of human hearing, and it has a nonlinear correspondence with frequency in Hertz. The pitch perceived by the human ear is not linearly related to the physical frequency of the sound, and the Mel frequency scale was proposed to address this: the Mel scale matches the auditory characteristics of the human ear, so that perceived pitch is linearly related to Mel frequency; if two hummed tones differ by a factor of two on the Mel scale, the difference perceived by the listener is also a factor of two. Mel-frequency cepstral coefficients can be thought of as folding the short-time Fourier transform along the frequency axis, reducing its size while preserving the most important perceptible information.

The relationship between the Mel frequency $f_{\mathrm{Mel}}$ and the actual frequency f is

$$f_{\mathrm{Mel}} = 2595\,\lg\!\left(1 + \frac{f}{700}\right). \tag{9}$$

The input signal is filtered with a bank of Mel band-pass filters. Since the effects of the frequency components within each band are superimposed in the human ear, the energy within each filter band is summed; the logarithmic magnitude spectrum of all filters is then passed through a discrete cosine transform to obtain the Mel-frequency cepstral coefficients. The calculation proceeds as follows. First, pre-emphasis, windowing, and framing are applied to the humming signal, and the spectrum is computed with the short-time Fourier transform. Second, a Mel filter bank with L channels is placed on the Mel frequency axis, where L is determined by the highest frequency of the signal and is generally taken as 12–16. Third, the linear magnitude spectrum of the signal is passed through the Mel filters to obtain the filter outputs Y(l):

$$Y(l) = \sum_{k=o(l)}^{h(l)} w_l(k)\, x_n(k), \quad l = 1, 2, \ldots, L, \tag{10}$$

where o(l) and h(l) are the lowest and highest frequencies of the l-th filter, $x_n(k)$ is the magnitude spectrum of frame n, and $w_l(k)$ is the filter weight. Fourth, the logarithm of the filter outputs is taken and the discrete cosine transform is applied:

$$C_{\mathrm{MFCC}}(n) = \sum_{l=1}^{L} \lg Y(l)\,\cos\!\left(\frac{\pi (l-0.5)\, n}{L}\right), \quad n = 1, 2, \ldots, L. \tag{11}$$

## 2.6. Experiment Setup

The main hardware used in the experiments is an Intel(R) Core(TM) i7-7700K processor and an NVIDIA GeForce GTX 1070 Ti graphics card. In terms of software, the deep-learning neural network model is implemented with Keras, using TensorFlow as the Keras backend, and some algorithms are implemented with scikit-learn. The data used in the experiments come from the DSD100 and MedleyDB datasets; 50 songs are selected, covering a variety of music styles such as rap, country, hip-hop, and rock, and each song is sung by 1–3 professional singers.
For the humming audio file of each song, the note onset points are first detected; segments are then cut starting from randomly selected onset points, yielding 180 ten-second segments per song and a total of 9,000 humming recordings. These are divided into a training set, a validation set, and a test set in a ratio of 4:1:1. Because this part of the test set is sung by professional singers, it is called the professional-group test set. In addition, for 10 of the songs, 30 audio clips hummed by three students were recorded to form the nonprofessional-group test set.

Three evaluation indicators are used in the experiments: accuracy (ACC), response time (TIME), and mean reciprocal rank (MRR). Since the deep-learning framework for humming recognition returns multiple recognition results with their corresponding probabilities, evaluating with the recognition accuracy alone cannot fully reflect the performance of the framework; for this reason, MRR is used as one of the evaluation indicators. MRR is widely used to evaluate problems that return multiple results and reflects the quality of the returned result set. Its formula is

$$\mathrm{MRR} = \frac{1}{N}\sum_{i=1}^{N}\frac{1}{r_i}, \tag{12}$$

where N is the number of test queries and $r_i$ is the rank of the correct result in the i-th query.

Figure 2 Humming recognition deep-learning framework.

## 3. Results and Discussion

### 3.1. Humming Recognition Deep-Learning Framework and Design

The humming recognition deep-learning framework consists of the following parts:

(1) Humming Audio Database. This includes the training dataset and the test dataset for humming recognition. The vocal-track audio of the original (or professionally sung) performance of each song constitutes the training dataset. In addition to a portion of this audio, the test dataset also contains nonprofessional humming audio, used to compare and evaluate the generalization ability of the model;

(2) Preprocessing Module. The humming audio data are processed and analyzed to obtain the feature representation of the audio; the main processing flow has been described in Section 2;

(3) Neural Network Training Module. The training dataset is fed into the humming recognition neural network and trained in batches; the validation set is used to compute the loss function value after each iteration, training stops once a given accuracy requirement is reached, and the network weights with the smallest loss function value are output as the optimal parameters of the model;

(4) Neural Network Test Module. Using a certain amount of test humming audio, appropriate evaluation indicators are applied to test and evaluate the performance of the neural network, serving as the basis for repeatedly adjusting the training process and the parameter selection;

(5) Humming Recognition System. The humming recognition system is based on the trained neural network model and accepts human humming as input.
The humming signal goes through audio processing steps such as digitization, filtering, pre-emphasis, windowing, and calculation of Mel cepstral coefficients to obtain its mel-spectrogram representation, which is fed into the neural network recognition model; the recognition result is then obtained and returned to the user;

(6) Thanks to its modular design, the deep-learning framework for humming recognition is easily extensible and consists of modules with high cohesion and low coupling. For example, the audio database can be flexibly replaced or modified for neural network training, so that training and evaluation can be performed on different datasets; a set of model parameters produced by the neural network training module can likewise be reused for testing network performance and for end-to-end testing of the prototype system. The framework is shown in Figure 2.

The humming recognition neural network model is the core of the deep-learning framework, and its overall design is as follows:

(1) The input layer receives the mel-spectrogram of the humming audio signal as input;

(2) Several convolutional layers then learn the local features of the audio signal to obtain feature maps;

(3) Several recurrent layers then learn and summarize the sequence features of the audio signal over time;

(4) Finally, the probability distribution over candidate songs for the input audio signal is obtained through the Softmax activation function.

The humming recognition deep neural network model designed along these lines is shown in Figure 3.

Figure 3 The overall structure of the neural network.

The data received by the input layer are the two-dimensional mel-spectrogram representation of the audio signal. In the hidden layers, four convolution-and-pooling blocks are used to fully extract the spectral features of the signal and learn its local features, and gated recurrent units are used in the last two layers to learn and summarize the sequence features of the audio signal. Finally, the Softmax activation function in the output layer turns the network's result into a probability distribution over the candidate songs.

### 3.2. A Prototype System for Recognition of Humming Audio Scores

The humming audio score recognition prototype system adopts a client/server (C/S) architecture: the server is based on the Python web framework Bottle, the client is implemented with the React Native framework, and client and server communicate over the HTTP protocol. The overall interaction flow of the system is shown in Figure 4.

Figure 4 Interaction process of the humming audio score recognition system.

The user inputs the humming signal through the client recording module; the client records the humming audio at a sampling frequency of 16,000 Hz, performs a series of preprocessing steps, and sends an HTTPS POST request through the audio uploading module to upload the audio to the server for processing. The server-side audio preprocessing module performs note onset detection and audio segmentation on the humming audio and then passes the result to the humming recognition module.
This module uses the humming recognition neural network model to produce the recognition result and returns the corresponding score image for display on the client.

Because the server needs to compile the Keras model, load the trained neural network parameters, and recognize the humming audio, it is written in Python. Bottle is a lightweight Python web framework that provides basic routing, encapsulation of request objects, template support, and so on; it allows rapid development of small web applications and meets the needs of the humming recognition prototype server. React Native, adopted for the mobile client, is a cross-platform mobile application development framework launched by Facebook that enables rapid development of mobile applications based on the React ecosystem.

### 3.3. Training the Neural Network

In the neural network training process, the humming audio is sampled at a frequency of 16,000 Hz. For each training run, the network weights are initialized from a uniform distribution, the cross-entropy function is used as the loss function, and the stochastic gradient descent algorithm is used for learning.

To address overfitting, early termination of training is used in the experiments. Generally speaking, as training proceeds the error on the training set keeps decreasing, while the error on the validation set first decreases and then increases, that is, the model starts to overfit; the best number of training cycles lies before that point. Therefore, if the accuracy on the validation set does not improve within 10 epochs, the best number of training iterations is considered to have been reached, training is terminated early, and the network weights from the epoch with the best validation accuracy are kept as the result of training. In addition, the Dropout method is used to alleviate overfitting. Dropout is a simple and extremely effective regularization technique that significantly reduces overfitting of the network parameters during training. It can be intuitively understood as subsampling a fully connected neural network and updating only the weights and parameters of the sampled subnetwork during training.

During model training, Dropout randomly selects some nodes in the network and deactivates them: the inactive nodes do not participate in the computation of that training step, and their weights are not updated. The process is repeated at the next step, and nodes that were inactive last time may resume work. Because the output of some neurons is randomly set to 0, no neuron can depend entirely on other neurons, and richer feature representations are obtained.

In this experiment, the Dropout ratio was set to p = 0.3. Table 1 shows how the mean accuracy of the model on the training set and the validation set changes before and after applying the Dropout method, with the other hyperparameters unchanged, over ten training runs.

Table 1 Suppressing overfitting using the Dropout method.
| Mean ACC over 10 runs | Before Dropout | After Dropout |
| --- | --- | --- |
| Training dataset | 0.9127 | 0.98200 |
| Test dataset | 0.9016 | 0.94321 |

It can be seen from Table 1 that, after applying the Dropout method, although the accuracy on the training dataset decreased, the accuracy on the validation dataset increased. It can therefore be considered that adding Dropout improves the generalization ability of the model to a certain extent and mitigates overfitting.

In the humming recognition deep neural network model, several hyperparameters affect the model quality and the training time, including the convolution kernel size, the pooling kernel size, the number of training cycles (epochs), the batch size, the learning rate, the momentum factor, and the Dropout ratio described above.

Among these, the convolution kernel size and the pooling kernel size are network structure parameters. A training cycle is one pass over all the sample data: if the number of cycles is too small the model may not converge, and if it is too large overfitting may occur. The batch size is the number of samples selected for each stochastic gradient descent step. For relatively small datasets the entire dataset can usually be fed into the network at once, and the resulting gradient direction then better represents the overall characteristics of the dataset.

For the humming recognition dataset, the memory footprint of the audio data makes it infeasible to load all the data at once, so a reasonable batch size is important to keep the sample distribution in each iteration representative. At the same time, within a reasonable range, increasing the batch size reduces the number of training iterations and speeds up training. The learning rate is the weight applied to the negative gradient in the stochastic gradient descent algorithm: a larger learning rate speeds up training but can overshoot the minimum and prevent convergence, while a smaller learning rate makes training slow and can leave the model stuck in local minima. The momentum factor is a stochastic gradient descent hyperparameter that controls how much the previous weight update influences the current one, and it also has a large impact on training speed.

In training the humming recognition neural network model, grid search is used to help find good hyperparameter values. The principle of grid search is simple: a small candidate set is defined for each hyperparameter, for example {0.01, 0.02, 0.05, 0.1} for the learning rate; grid search then takes the Cartesian product of these candidate sets to obtain multiple hyperparameter combinations, automatically runs a training experiment with each combination, and selects the combination with the smallest validation error as the best choice. The final model hyperparameter values are shown in Table 2.

Table 2 Humming recognition neural network hyperparameters.

| Hyperparameter | Value |
| --- | --- |
| Convolution kernel size | (4, 4) |
| Pooling kernel size | (2, 2) |
| Number of training cycles | 60 |
| Batch size | 190 |
| Learning rate | 0.03 |
| Momentum factor | 0.9 |
| Dropout rate | 0.4 |
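As an illustration of the training procedure described in this section, the following minimal Keras sketch (a reconstruction under stated assumptions, not the authors' code; the network body is deliberately reduced to a single convolutional block and placeholder data) combines SGD with momentum, Dropout, and early stopping on the validation loss, using the learning rate, momentum factor, batch size, epoch count, and Dropout rate from Table 2:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

num_songs = 50  # size of the candidate song set

# Placeholder mel-spectrogram inputs of shape (64, 625, 1) and random labels.
x_train = np.random.rand(60, 64, 625, 1).astype("float32")
y_train = keras.utils.to_categorical(np.random.randint(num_songs, size=60), num_songs)
x_val = np.random.rand(20, 64, 625, 1).astype("float32")
y_val = keras.utils.to_categorical(np.random.randint(num_songs, size=20), num_songs)

model = keras.Sequential([
    keras.Input(shape=(64, 625, 1)),
    layers.Conv2D(21, (3, 3), padding="same", activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.4),                                   # Dropout rate from Table 2
    layers.Dense(num_songs, activation="softmax"),
])

model.compile(
    optimizer=keras.optimizers.SGD(learning_rate=0.03, momentum=0.9),  # Table 2
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)

# Early termination: stop if the validation metric does not improve for
# 10 consecutive epochs, keeping the best weights, as described in Section 3.3.
early_stop = keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=10, restore_best_weights=True
)

model.fit(x_train, y_train, validation_data=(x_val, y_val),
          epochs=60, batch_size=190, callbacks=[early_stop])
```

In practice the hyperparameter values themselves would come from a search such as the grid search described above; scikit-learn's `ParameterGrid`, for example, can enumerate the Cartesian product of the candidate value sets.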
### 3.4. Performance Analysis of Humming Recognition Neural Network

Training yields a humming recognition neural network model with good test performance. The composition of each network layer, its output dimensions, and the number of trainable parameters are shown in Table 3.

Table 3 Humming recognition neural network model.

| Network layer | Output dimension (None denotes the number of audio clips) | Number of parameters |
| --- | --- | --- |
| Input_1 (input layer) | (None, 1, 160000) | 0 |
| Melspectrogram_1 (Mel spectrogram) | (None, 64, 625, 1) | 0 |
| Batch_normalization_1 (batch normalization) | (None, 64, 625, 1) | 4 |
| Conv2d_1 (convolutional layer) | (None, 64, 625, 21) | 210 |
| Batch_normalization_2 (batch normalization) | (None, 64, 625, 21) | 84 |
| Activation_1 (ReLU) | (None, 64, 625, 21) | 0 |
| Max_pooling2d_1 | (None, 32, 313, 21) | 0 |
| Conv2d_2 (convolutional layer) | (None, 32, 313, 21) | 3990 |
| Batch_normalization_3 (batch normalization) | (None, 32, 313, 21) | 84 |
| Activation_2 (ReLU) | (None, 32, 313, 21) | 0 |
| Max_pooling2d_2 | (None, 16, 157, 21) | 0 |
| Conv2d_3 (convolutional layer) | (None, 16, 157, 21) | 3990 |
| Batch_normalization_4 | (None, 16, 157, 21) | 84 |
| Activation_3 | (None, 16, 157, 21) | 0 |
| Max_pooling2d_3 | (None, 8, 79, 21) | 0 |
| Conv2d_4 (convolutional layer) | (None, 8, 79, 21) | 3990 |
| Batch_normalization_5 | (None, 8, 79, 21) | 84 |
| Activation_4 (ReLU) | (None, 8, 79, 21) | 0 |
| Max_pooling2d_4 | (None, 4, 40, 21) | 0 |
| Reshape (reduce dimension) | (None, 160, 21) | 0 |
| Gru1 (GRU) | (None, 160, 41) | 7749 |
| Gru2 (GRU) | (None, 41) | 10209 |
| Dropout | (None, 41) | 0 |
| Dense_1 (Softmax) | (None, 50) | 2100 |

The accuracy and loss function value (LOSS) of the network model on the training set and the validation set are shown in Table 4.

Table 4 Humming recognition neural network training results.

| Dataset | ACC | LOSS |
| --- | --- | --- |
| Training dataset | 0.98213 | 0.1662 |
| Test dataset | 0.94125 | 0.3211 |

On both the training set and the validation set, the model achieves a high recognition accuracy of more than 93%, and the loss function value is relatively small. The model has therefore fitted the training data well and also performs well on the validation data. Next, the model is used to recognize the two test sets; the experimental results are shown in Table 5.

Table 5 Humming recognition test experimental results.

|  | LOSS | ACC | MRR |
| --- | --- | --- | --- |
| Professional group | 0.3221 | 0.94120 | 0.97213 |
| Non-professional group | 1.1245 | 0.80215 | 0.8211 |

On the professional-group test set, the humming recognition neural network model achieves excellent recognition results, with a recognition accuracy of 0.9396 and a mean reciprocal rank of 0.9633. On the nonprofessional-group test set, the accuracy drops to 0.7896, but the recognition results are still good. In terms of recognition efficiency, the average processing time per segment is about 0.6 s. On the whole, the proposed convolutional recurrent neural network model accomplishes the humming recognition task well.

The drop in accuracy on the nonprofessional group's test set is mainly because this group's humming may contain some inaccuracies. In addition, since the accuracy on the training dataset is noticeably higher than on the validation dataset and the professional-group test dataset, the model may still suffer from a certain degree of overfitting, which also affects the recognition accuracy on the test sets.

In terms of recognition accuracy, the deep-learning framework for humming recognition improves on most existing humming recognition work. In terms of response time, the deep-learning-based humming recognition method proposed in this paper has obvious advantages over matching-based methods.
This is because, for the deep-learning model, the recognition process only carries out a series of matrix operations with the trained model parameters, so the results can be obtained quickly when a GPU is used for acceleration, and the computation speed has a natural advantage over matching algorithms. Based on the above experimental results, the deep-learning-based humming recognition method proposed in this paper completes the humming recognition task well, is practical to use, and offers clear improvements over the traditional matching-based humming recognition method.
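As a worked illustration of the evaluation metrics reported above (illustrative code, not from the paper), the accuracy and the mean reciprocal rank of equation (12) can be computed directly from the model's per-song probability outputs:

```python
import numpy as np

def accuracy_and_mrr(probs, true_ids):
    """probs: (n_queries, n_songs) softmax outputs; true_ids: index of the correct song per query."""
    top1 = probs.argmax(axis=1)
    acc = float(np.mean(top1 == true_ids))
    # Rank (1 = best) of the correct song in each query's descending-probability ordering.
    order = np.argsort(-probs, axis=1)
    ranks = np.array([np.where(order[i] == true_ids[i])[0][0] + 1
                      for i in range(len(true_ids))])
    mrr = float(np.mean(1.0 / ranks))
    return acc, mrr

probs = np.array([[0.7, 0.2, 0.1],     # correct song (index 0) ranked 1st
                  [0.3, 0.5, 0.2]])    # correct song (index 0) ranked 2nd
print(accuracy_and_mrr(probs, np.array([0, 0])))  # (0.5, 0.75): MRR = (1/1 + 1/2) / 2
```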
## 4. Conclusion

Focusing on the problem of automatic recognition of humming audio signals, this paper applies deep-learning methods to humming recognition and, in combination with traditional audio signal processing methods, designs a deep-learning framework for humming audio recognition; experiments verify the feasibility of the humming recognition model, and a deep-learning-based humming audio score recognition system is finally realized. The main conclusions are as follows.

(1) To use humming audio signals in a deep-learning framework, this paper studies several techniques from the field of audio analysis, including audio sampling, filtering, pre-emphasis, two-dimensional representation of signals, and note onset detection, comparing their differences and respective advantages and disadvantages, which provides the theoretical basis and processing methods for converting humming audio signals into input vectors for deep neural networks.

(2) On the basis of deep-learning principles and theory, a convolutional recurrent neural network model is designed and implemented for the recognition of humming audio signals. It combines the strengths of convolutional neural networks in local feature extraction with the strengths of recurrent neural networks in sequence data processing and, through a reasonable choice of network components, yields a deep-learning framework for humming recognition.

(3) Using open-source deep-learning platforms and tools, experiments are carried out on the proposed deep-learning model. By repeatedly training, testing, and adjusting the model, parameters with good performance are obtained, and the evaluation on the test dataset verifies the feasibility and effectiveness of the proposed neural network model and analyzes its performance.

(4) Based on the proposed deep-learning framework and using server-side and mobile development technologies, a prototype system for humming audio score recognition is designed and implemented, including a server-side audio recognition service and mobile modules for audio recording, audio uploading, and other functions.

Limited by the available research time and by our understanding of and practical experience with deep-learning theory, this study still has shortcomings in several aspects that deserve further research: for example, developing a method that detects onset points at the phrase level of the audio signal instead of the note onset detection method used in this paper, improving the feature representation of the humming signal, studying the neural network design for humming recognition in more depth, and enhancing the robustness of the deep-learning framework for humming recognition [34].

---

*Source: 1002105-2022-08-18.xml*
2022
# Accurate Diagnosis and Treatment of Painful Temporomandibular Disorders: A Literature Review Supplemented by Own Clinical Experience **Authors:** Adam Andrzej Garstka; Lidia Kozowska; Konrad Kijak; Monika Brzózka; Helena Gronwald; Piotr Skomro; Danuta Lietz-Kijak **Journal:** Pain Research and Management (2023) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2023/1002235 --- ## Abstract Introduction. Temporomandibular disorders (TMD) is a multifactorial group of musculoskeletal disorders often with combined etiologies that demand different treatment plans. While pain is the most common reason why patients decide to seek help, TMD is not always painful. Pain is often described by patients as a headache, prompting patients to seek the help of neurologists, surgeons, and ultimately dentists. Due to the unique characteristics of this anatomical area, appropriate diagnostic tools are needed, as well as therapeutic regimens to alleviate and/or eliminate the pain experienced by patients. Aim of the Study. The aim of this study is to collect and organize information on the diagnosis and treatment of pain in TMD, through a review of the literature supplemented by our own clinical experience. Material and Methods. The study was conducted by searching scientific databases PubMed, Scopus, and Google Scholar for documents published from 2002–2022. The following keywords were used to build the full list of references: TMD, pain, temporomandibular joint (TMJ), TMJ disorders, occlusal splint, relaxing splints, physiotherapy TMD, pharmacology TMD, natural therapy TMD, diagnostic criteria for TMD, and DC/TMD. The literature review included 168 selected manuscripts, the content of which was important for pain diagnosis and clinical treatment of TMD. Results. An accurate diagnosis of TMD is the foundation of appropriate treatment. The most commonly described treatments include physiotherapy, occlusal splints therapy, and pharmacological treatment tailored to the type of TMD. Conclusions. Based on the literature review and their own experience, the authors concluded that there is no single ideal form of pain therapy for TMD. Treatment of TMD should be based on a thorough diagnostic process, including the DC/TMD examination protocol, psychological evaluation, and cone beam computer tomography (CBCT) imaging. Following the diagnostic process, once a diagnosis is established, a treatment plan can be constructed to address the patient’s complaints. --- ## Body ## 1. Introduction Temporomandibular disorders (TMD) can present with pain, prompting patients to seek help from various specialists [1–4]. TMD is most frequently seen in people aged between 20 and 40 years [5–8] and is more common in women due to hormonal changes and greater influence of psychosocial factors [9–12]. Thus, it can be concluded that TMD is a civilization problem, which may escalate due to the increasing pace of life, omnipresent stress, and improper use of the masticatory system [13–24]. 
One unquestionable causative factor is stress, which has a destructive effect on all masticatory structures, and if chronic, it may expose or aggravate temporomandibular disorders [25–32].Pain in temporomandibular disorders may have a diverse etiology, i.e., central or peripheral, as demonstrated by the 2020 study by Yin et al., finding that TMD is accompanied by functional and structural changes in the primary somatosensory cortex, prefrontal cortex, and basal ganglia of the brain, which should inform treatment decisions [33]. Temporomandibular disorders (TMD) are characterized by abnormalities in the temporomandibular joint, masticatory muscles, and other adjacent structures, often described by patients as a headache [34–40]. According to research findings, typical TMD symptoms are more common in patients with migraine or tension headaches. It has also been shown that patients with diagnosed TMD are more likely to experience migraines, and the coexistence of both problems exacerbates the symptoms of each [41–47]. This unique anatomical region does not lend itself easily to diagnosis and treatment. It is not uncommon for patients to be referred to neurologists, otolaryngologists, surgeons, and dentists. Undoubtedly, the involvement of many specialists in the problems affecting this area may be beneficial in the classification and differentiation of disorders [48–51].Masticatory dysfunction can be diagnosed when at least three of the following symptoms are reported: pain and acoustic symptoms during mandibular movements, limited mandibular mobility, difficulty with jaw opening, and occlusal or nonocclusal parafunction. The modern diagnosis of TMD should be based on the DC/TMD examination protocol because only with the correct diagnosis is the correct treatment possible [52–54]. ## 2. Aim of the Study The aim of this study is to collect and organize information on the accurate diagnosis and treatment of pain in TMD through a review of the literature supplemented by our own clinical experience. ## 3. Materials and Methods The study was conducted by searching scientific databases PubMed, Scopus, and Google Scholar for documents published from 2002–2022. The literature review included 168 selected manuscripts, the content of which was important for pain diagnosis and clinical treatment of TMD. These aspects mentioned previously were the criteria for the inclusion of the manuscripts in the review. The following keywords were used to build the full list of references: TMD, pain, TMJ disorders, occlusal splints, relaxing splints, physiotherapy TMD, pharmacology TMD, and natural therapy TMD. ## 4. The Essence of the Matter ### 4.1. TMD Pain Diagnosis #### 4.1.1. Myalgia Myalgia (muscle pain) can be caused by mandibular movements, parafunctions, and excessive muscle tension due to the increased activity of masticatory muscles. Pain occurs upon provocation testing. The patient’s history may include pain in the jaw, temple, ear, or in front of the ear. Pain may be modified with jaw movement, function, or parafunction.Upon physical examination of the patient, the physician is able to confirm the location of pain in the temporalis or masseter muscle, additionally using muscle palpation and maximum unassisted or assisted jaw opening [55–75]. #### 4.1.2. Myofascial Pain Myofascial pain can be local or referred to and is experienced by the patient as deep and dull. 
Unlike myalgia, this pain spreads beyond the palpated area, remaining inside the boundary of the examined muscle or in the case of referred myofascial pain–beyond the area of the examined muscle. Myofascial trigger points may also be felt during palpation [76–80]. #### 4.1.3. Arthralgia The term arthralgia refers to pain in the temporomandibular joint without signs of joint inflammation. The onset of pain is associated with mandibular movement, function, and parafunction. Pain is also triggered during provocation testing. The patient’s history includes pain in the jaw, temple, ear, or in front of the ear. On physical examination, the physician confirms pain in the TMJ area, especially the lateral region, and examines the maximum range of jaw opening with and without assistance [81–83]. #### 4.1.4. TMD-Attributed Headache Headache attributed to temporomandibular dysfunction is characterized by a history of temporal pain of any nature. The pain can be modified by mandibular movement, function, and parafunction. Upon physical examination, pain in the temporalis region can also be observed in provocative tests. Pain may occur during palpation and when testing jaw opening [84]. #### 4.1.5. Disc Displacement with Reduction or with Intermittent Locking An intracapsular disorder involving the condyle-disc complex. To make the diagnosis, it is necessary to determine the closed mouth position according to the protocol. At the Department of Propaedeutics, Physical Diagnostics, and Dental Physiotherapy, we ask the patient to assume their habitual occlusion and then relax the mandible. In this way, we are able to assess the actual intraarticular status, which is confirmed by palpation, joint sound inspection (with a stethoscope), and diagnostic imaging (CBCT). When performing diagnostic imaging, it is essential to perform the examination under the same conditions, without the bite stick.In this disorder, the disc is positioned anteriorly relative to the condylar head and reduces with mouth opening movements. In some cases, medial and lateral displacement of the articular disc can be observed, as well as noises such as clicking, crackling, or popping [85–93]. Please note that if the patient has a history of joint locking and chewing problems, this diagnosis is ruled out.To make the diagnosis, the patient is asked to report all TMJ noises that have occurred in the last 30 days during mandibular movements, and additionally, the patient should report any noises during the examination:(i) Clicking, popping, and/or snapping noise during both opening and closing movements, detected with palpation during at least one of three repetitions of jaw opening and closing movements; or(ii) Clicking, popping, and/or snapping noise detected with palpation during at least one of three repetitions of opening or closing movement(s);(iii) Clicking, popping, and/or snapping noise detected with palpation during at least one of three repetitions of right or left lateral, or protrusive movement(s) [94–100].When discussing this disorder, it should be stated that imaging should be the reference standard for this diagnosis [101–103]. #### 4.1.6. Disc Displacement without Reduction with Limited Opening In this intracapsular disorder, in the closed mouth position, the disc is positioned anteriorly relative to the condylar head and does not reduce in size with the opening of the mouth. 
Characteristically, the disorder is associated with persistent limited mandibular opening, sometimes referred to as a closed lock, which is not resolved by a manipulative manoeuvre performed by the physician.Patient history includes a locked jaw, limited movement, and eating difficulties. In physical examination, during assisted jaw opening, the distance between the upper and lower incisors is less than 40 mm. Passive movements may be accompanied by noise [104–107]. #### 4.1.7. Osteoarthritis of the Temporomandibular Joint This disorder involves joint tissue deterioration with concomitant osseous changes in the condylar head and/or articular eminence.In history, the patient reports noise when chewing or opening the mouth in the last 30 days, and these phenomena may also appear during the examination. On physical examination, the physician detects snapping, popping sounds in the joint during the abduction, adduction, and lateral or protrusive movements. Imaging is required, as CBCT may help visualize subchondral cysts, erosions, generalized sclerosis/calcification, or osteophytes [108–111].. #### 4.1.8. Subluxation A hypermobility disorder involving the disc-condyle complex and the articular eminence. In the open mouth position, the disc is anterior to the articular eminence and the normally closed mouth position cannot be restored without a manipulative manoeuvre. The difference between subluxation and luxation is that in the former the patient is able to reduce the dislocation on their own, whereas the latter requires professional intervention. Patient history includes jaw locking upon abduction movement in the last 30 days. These locks may have been incidental and temporary, resulting in an inability to close the mouth [112, 113].The RDC/TMD and DC/TMD protocols make it possible to establish a diagnosis but do not shed any light on the etiology of the disorder, and elimination of the cause or an attempt to create the optimal conditions will be crucial in the treatment process.At the Department of Propaedeutics, Physical diagnostics, and Dental Physiotherapy, the treatment team consists of an orthodontist, a physician dealing with dental prosthetics and restorative dentistry, a physiotherapist, and a dentist who coordinates the work of the whole team [114]. One of the most common signs of a disease process within the TMJ are sounds emitted by the articular structures, such as popping, clicking, humming, grinding, or crunching [114].Egermark et al., after examining 320 children aged 7, 11, and 15 years, reported that acoustic symptoms were more common in those with malocclusion (24%), with a predominance of transverse malocclusion. In their conclusions, they noted that there were no significant differences in the prevalence of masticatory dysfunction in the studied population between patients with malocclusion and those with a normal bite [115].Research findings provide no clear-cut conclusion as to how temporomandibular joint disorders are affected by a malocclusion. The consequences of malocclusion in terms of TMD development may be manifold and are undoubtedly related to age, gender, as well as the severity of the disorder.A fairly significant problem reported and observed in patients is nocturnal bruxism, which affects 8% of the population, and awake bruxism, the prevalence of which is estimated at 20%. 
At present, bruxism is defined not as a disorder but as a physiological stress-coping mechanism [116–121]. Based on our own experience, we would like to note the relatively frequent coexistence of TMD with orthodontic disorders, as well as temporomandibular disorders in post-orthodontic patients, in whom the teeth were often well aligned in the arches while the condylar heads were displaced posteriorly with a reduced joint space [122]. In addition, it is important to consider that dental arches are somatic sites where excessive emotional tension can be diffused and reduced [123]. Research into the associations between malocclusion and TMD, as well as the influence of malocclusion treatments on TMD, should be conducted in large study samples.

### 4.2. TMD Pain Therapy

#### 4.2.1. Natural Methods

Acupuncture is the best-known method of traditional Chinese medicine and is often used, including in Poland, in the treatment of chronic pain. Acupuncture points often coincide with so-called trigger points and correspond to sites of increased density of A-δ and C fibre nerve endings that conduct pain sensations. Warm compress therapy is used for chronic inflammation and muscle strains. Ideally, a warm compress at 35–40°C should be applied for 20–30 minutes. Cold compresses, on the other hand, are suitable for acute inflammation with pain and swelling [124, 125].

#### 4.2.2. Psychological and Behavioural Methods

Psychological and behavioural programmes are effective in alleviating psychological distress, allowing the patient to change their perception of pain, and improving functioning in patients with chronic pain. The therapeutic effect is not affected by the duration of the programme or by whether the treatment is delivered in an individual or group setting. Behavioural approaches aim to reduce the frequency of pain-promoting behaviours and increase the frequency of health-promoting behaviours. They include:

(i) improving physical fitness

(ii) social and employment activation

(iii) reducing the amount of medication

(iv) reducing overuse of health services

Psychological methods include the following:

(i) modifying ways of thinking about pain (misconceptions about pain) that cause prolonged suffering and disability

(ii) replacing a sense of helplessness with a sense of control over pain and one’s own life

(iii) developing strategies for adequate and effective pain management

(iv) returning to work and promoting an active lifestyle [126, 127]

It must be remembered that effective pain control requires a multidimensional approach, aiming to reduce the pain but also to improve the patient’s quality of life.

#### 4.2.3. Interventional Methods: Splint Therapy

Occlusal splint therapy can be used in all TMD disorders; however, it is vitally important to use the right splint for the patient’s unique situation. An occlusal splint is an appliance that affects the mutual relationship of the upper and lower teeth and, consequently, the relationship of the condylar process to the mandibular fossa and articular eminence within the TMJ. The purpose of splints is to stabilize occlusion or to protect teeth from excessive abrasion [128, 129]. According to numerous studies, the use of splints has a significant effect on alleviating or even eliminating the patient’s pain symptoms.
In cases of disc displacement, repositioning splints are used to stabilize the mandible in centric relation, and in cases of masticatory muscle disorders, relaxation splints are used to prevent the effects of parafunction [130, 131]. Splints are most commonly made by obtaining dental impressions and making a bite registration with wax or silicone material. An intraoral scanner and electronic bite registration can also be used. The technique recommended by our team for making occlusal splints is 3D printing using a special resin, which makes it possible to avoid the mistakes common in the conventional hand-made process. On the basis of our own experience, research findings, and patient feedback, we use two types of splints in the Department of Propaedeutics, Physical Diagnostics, and Dental Physiotherapy: the Michigan-type relaxation splint and the maxillomandibular repositioning splint [132, 133]. The Michigan-type relaxation splint with canine guidance is used in cases involving myalgia, myofascial pain, and TMD-attributed headache. The relaxation splint is made from hard resin and is always applied to a single arch, with the upper arch usually being the arch of choice, unless there are missing posterior teeth. Importantly, in the case of missing teeth, the design of the splint should allow for retention elements. The hard repositioning splint, fabricated to an interocclusal record taken in the correct construction bite relationship, is used in the following situations: arthralgia, disc displacement with reduction, disc displacement with reduction with intermittent locking, disc displacement without reduction with limited opening, disc displacement without reduction without limited opening, osteoarthritis of the temporomandibular joint, and subluxation.
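The splint indications described above amount to a simple lookup from diagnosis to appliance. The following sketch is only an illustration of that mapping as stated in the text; the shortened diagnosis labels and the helper function are ours and do not replace the clinical decision process described above.

```python
# Illustrative mapping of the diagnoses listed above to the two splint types
# used in the authors' department; labels are shortened by us for readability.
SPLINT_BY_DIAGNOSIS = {
    "myalgia": "Michigan-type relaxation splint with canine guidance",
    "myofascial pain": "Michigan-type relaxation splint with canine guidance",
    "TMD-attributed headache": "Michigan-type relaxation splint with canine guidance",
    "arthralgia": "maxillomandibular repositioning splint",
    "disc displacement with reduction": "maxillomandibular repositioning splint",
    "disc displacement with reduction with intermittent locking": "maxillomandibular repositioning splint",
    "disc displacement without reduction with limited opening": "maxillomandibular repositioning splint",
    "disc displacement without reduction without limited opening": "maxillomandibular repositioning splint",
    "osteoarthritis of the temporomandibular joint": "maxillomandibular repositioning splint",
    "subluxation": "maxillomandibular repositioning splint",
}

def suggested_splint(diagnosis: str) -> str:
    """Case-insensitive lookup of the splint type listed in the text for a given diagnosis."""
    for name, splint in SPLINT_BY_DIAGNOSIS.items():
        if name.lower() == diagnosis.strip().lower():
            return splint
    return "no splint indication given in the text"

print(suggested_splint("Myalgia"))  # Michigan-type relaxation splint with canine guidance
```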
#### 4.2.4. Physiotherapy

Physiotherapy is a discipline of health science that aims to eliminate, alleviate, and prevent various ailments, as well as restore functional ability through movement and various physical agents. Physiotherapists are part of the treatment process in the case of dysfunctions involving the neuromuscular, musculoskeletal, and other systems [134]. In their work, physiotherapists use kinesiotherapy and physical therapy techniques.

(i) Self-therapy and muscle training. The patient is taught how to perform correct opening, closing, lateral, and protrusive movements of the mandible, as well as how to deal with sudden pain. Exercises should be performed daily in front of a mirror, and if the treatment includes a splint, it should also be worn during exercises. The purpose of the exercises is to shorten the overstretched muscles and relax them, which may help improve symmetry and regulate muscle tone [135].

(ii) Manual therapy makes use of trigger points. For disc displacement, a joint mobilization technique is applied, in which the physiotherapist performs traction and gliding movements with low velocity but increasing amplitude. These movements are performed parallel and perpendicular to the joint surface. If the mandibular range of motion is limited, muscle energy techniques (MET) can be used. Treatments using MET involve the repetition of three steps: in step one, the muscle is stretched to the point of tissue resistance; in step two, the patient gently contracts the muscle for about 10 seconds, trying to resist the force applied by the physiotherapist; in the last step, the patient relaxes the muscle [136].

(iii) Massage is used for myofascial pain in order to achieve pain relief, improve muscle length and flexibility, and loosen the fascia [137, 138]. Massage sessions should last 30 minutes and take place twice a week. With subsequent visits, the treatment should be applied with increasing force.

(iv) Physical therapy, such as ultrasound and transcutaneous electrical nerve stimulation (TENS), can be used for pain of muscular origin. Therapeutic ultrasound can be applied in three modalities: continuous waves, short bursts (pulsed ultrasound), and ultrasound combined with electrical stimulation, the latter of which has proven to be the most effective. TENS relieves pain and relaxes masticatory muscles in symptomatic patients with TMD [139–142]. For pain of intracapsular origin, positive results have been observed after the application of a magnetic field combined with LED light therapy. The Solux infrared lamp can be used in cases of arthropathy and rheumatic diseases. The beneficial effects of heat therapy include the alleviation of pain.

(i) The Kinesio Taping method is used for TMJ stabilization. It should be applied bilaterally. The tapes work by reducing tension in the masticatory muscles, as well as in adjacent structures such as the muscles of the neck, shoulders, and spine [143–146]. The application of tapes also stimulates lymphatic drainage, which has a beneficial effect on inflammation accompanied by tissue swelling.

(ii) Iontophoresis is the use of direct electrical current to accelerate the transdermal delivery of nonsteroidal anti-inflammatory drugs (NSAIDs), corticosteroids, and analgesics. While it is not associated with pain relief, a significant improvement in the range of motion of the joint has been observed [147].

#### 4.2.5. Pharmacotherapy

The decision on the use of medications in temporomandibular disorders should be preceded by a thorough analysis of the risks and benefits of the drug [148–152]. Medications used to treat TMD include analgesics, nonsteroidal anti-inflammatory drugs, anticonvulsants, muscle relaxants, and benzodiazepines [153, 154].

#### 4.2.6. Nonsteroidal Anti-Inflammatory Drugs (NSAIDs)

NSAIDs are beneficial for patients with acute temporomandibular arthritis resulting from sudden disc displacement. Treatment should continue for a minimum of two weeks, and it is important to combine NSAIDs with gastroprotective agents. Among NSAIDs, ibuprofen appears to be the safest for the gastrointestinal tract [155]. It should also be noted that taking NSAIDs for more than 5 days may reduce the efficacy of antihypertensive drugs, such as diuretics, beta-blockers, and ACE inhibitors [154, 155]. In addition, NSAIDs used with anticoagulants such as warfarin or acenocoumarol may increase the risk of bleeding.

#### 4.2.7. Opioids

Due to the interactions of NSAIDs with anticoagulants, as well as the risk of gastritis, physicians sometimes choose to administer oral opioids, such as codeine and oxycodone. The intraarticular delivery route has been studied, but the findings are conflicting [156].
It is essential to bear in mind the side effects of opioid use, which include dizziness, excessive sedation, nausea, vomiting, constipation, physical dependence and addiction, and respiratory depression. For these reasons, the use of opioids for the management of TMD should be discouraged [157–159].

#### 4.2.8. Corticosteroids

Corticosteroids are helpful in the treatment of moderate to severe TMD. They can be administered by intraarticular injection or by the oral route. They have an anti-inflammatory effect, which can help relieve pain. For intraarticular injections, it is advisable to combine the corticosteroid preparation with a local anaesthetic, such as lidocaine. According to research findings, this approach provides a significant reduction in pain, lasting 4 to 6 weeks, and a reduced risk of complications. Corticosteroids should be used with caution or discontinued in patients with hypertension, adrenal disease, or electrolyte problems. On day 4 after injection, it is recommended to introduce NSAIDs [160–163].

#### 4.2.9. Myorelaxants

Muscle relaxants are used to reduce skeletal muscle tone and, therefore, may be helpful in the management of TMD of muscular origin and chronic orofacial pain [164]. The most common myorelaxants include cyclobenzaprine, metaxalone, methocarbamol, and carisoprodol. Based on numerous studies, cyclobenzaprine is considered the drug of choice because it relieves pain of muscular origin and improves sleep quality [165]. Caution should be exercised when using this type of medication due to its potential to induce significant sedation. These drugs are contraindicated in patients with hyperthyroidism, heart failure, or heart rhythm disorders, and after myocardial infarction. The recommended dose is 10 mg at bedtime for 30 days, followed by a 2-week washout period and a medical follow-up. In the course of the therapy, the patient should always remain under medical supervision.

#### 4.2.10. Anticonvulsants

When discussing anticonvulsants, it is worth noting gabapentin, a GABA analogue. Gabapentin is thought to inhibit neurotransmitter release and reduce postsynaptic excitability [166]. The use of gabapentin reduces pain of muscular origin, particularly from the temporalis and masseter muscles. The drug is generally well tolerated and is associated with transient and mild side effects, including dizziness, drowsiness, dry mouth, weight gain, and impaired concentration [167].

#### 4.2.11. Benzodiazepines

Benzodiazepines facilitate transmission in the GABAergic system. They have been found to produce anxiolytic, sedative, hypnotic, anticonvulsant, and myorelaxant effects. Due to the risk of tolerance and dependence, as well as side effects including confusion, amnesia, and impaired motor coordination, these drugs are not recommended for the treatment of TMD [168].
## 5. Summary

Based on the literature review, the authors concluded that there is no single, ideal form of pain therapy for TMD. Treatment of TMD should be based on a thorough diagnostic process, including the DC/TMD examination protocol, psychological evaluation, and CBCT imaging. Following the diagnostic process, once a diagnosis is established, a treatment plan can be constructed to address the patient’s complaints. The treatment of temporomandibular dysfunctions requires a thorough diagnostic process, taking into account the etiology of the disorder. Having reviewed the relevant literature, the authors emphasize the need to combine multiple methods.
For severe pain, pharmacotherapy may be used, while in other cases, it will be more appropriate to apply a combination of splint therapy and physiotherapy. While waiting for a custom-tailored occlusal splint, the patient can take advantage of behavioural and psychological methods, which should be continued after they have been fitted with the splint, as well as during physiotherapy treatments. Follow-up visits are an essential part of the TMD treatment process. The first follow-up visit should take place after one month of therapy and the next after three months. In the meantime, the patient should keep a diary describing their symptoms, pain levels, sleep quality, and wellbeing upon awakening and at bedtime. These observations, which should be reviewed at the follow-up visit, help build a full picture of the effects of the splint and other treatments, as well as inform the psychological assessment of the patient. An accurate diagnosis of TMD is the foundation of appropriate treatment. The most commonly described treatments include physiotherapy, occlusal splint therapy, and pharmacological treatment tailored to the type of TMD. --- *Source: 1002235-2023-01-31.xml*
--- ## Abstract Introduction. Temporomandibular disorders (TMD) is a multifactorial group of musculoskeletal disorders often with combined etiologies that demand different treatment plans. While pain is the most common reason why patients decide to seek help, TMD is not always painful. Pain is often described by patients as a headache, prompting patients to seek the help of neurologists, surgeons, and ultimately dentists. Due to the unique characteristics of this anatomical area, appropriate diagnostic tools are needed, as well as therapeutic regimens to alleviate and/or eliminate the pain experienced by patients. Aim of the Study. The aim of this study is to collect and organize information on the diagnosis and treatment of pain in TMD, through a review of the literature supplemented by our own clinical experience. Material and Methods. The study was conducted by searching scientific databases PubMed, Scopus, and Google Scholar for documents published from 2002–2022. The following keywords were used to build the full list of references: TMD, pain, temporomandibular joint (TMJ), TMJ disorders, occlusal splint, relaxing splints, physiotherapy TMD, pharmacology TMD, natural therapy TMD, diagnostic criteria for TMD, and DC/TMD. The literature review included 168 selected manuscripts, the content of which was important for pain diagnosis and clinical treatment of TMD. Results. An accurate diagnosis of TMD is the foundation of appropriate treatment. The most commonly described treatments include physiotherapy, occlusal splints therapy, and pharmacological treatment tailored to the type of TMD. Conclusions. Based on the literature review and their own experience, the authors concluded that there is no single ideal form of pain therapy for TMD. Treatment of TMD should be based on a thorough diagnostic process, including the DC/TMD examination protocol, psychological evaluation, and cone beam computer tomography (CBCT) imaging. Following the diagnostic process, once a diagnosis is established, a treatment plan can be constructed to address the patient’s complaints. --- ## Body ## 1. Introduction Temporomandibular disorders (TMD) can present with pain, prompting patients to seek help from various specialists [1–4]. TMD is most frequently seen in people aged between 20 and 40 years [5–8] and is more common in women due to hormonal changes and greater influence of psychosocial factors [9–12]. Thus, it can be concluded that TMD is a civilization problem, which may escalate due to the increasing pace of life, omnipresent stress, and improper use of the masticatory system [13–24]. One unquestionable causative factor is stress, which has a destructive effect on all masticatory structures, and if chronic, it may expose or aggravate temporomandibular disorders [25–32].Pain in temporomandibular disorders may have a diverse etiology, i.e., central or peripheral, as demonstrated by the 2020 study by Yin et al., finding that TMD is accompanied by functional and structural changes in the primary somatosensory cortex, prefrontal cortex, and basal ganglia of the brain, which should inform treatment decisions [33]. Temporomandibular disorders (TMD) are characterized by abnormalities in the temporomandibular joint, masticatory muscles, and other adjacent structures, often described by patients as a headache [34–40]. According to research findings, typical TMD symptoms are more common in patients with migraine or tension headaches. 
It has also been shown that patients with diagnosed TMD are more likely to experience migraines, and the coexistence of both problems exacerbates the symptoms of each [41–47]. This unique anatomical region does not lend itself easily to diagnosis and treatment. It is not uncommon for patients to be referred to neurologists, otolaryngologists, surgeons, and dentists. Undoubtedly, the involvement of many specialists in the problems affecting this area may be beneficial in the classification and differentiation of disorders [48–51]. Masticatory dysfunction can be diagnosed when at least three of the following symptoms are reported: pain and acoustic symptoms during mandibular movements, limited mandibular mobility, difficulty with jaw opening, and occlusal or nonocclusal parafunction. The modern diagnosis of TMD should be based on the DC/TMD examination protocol, because only with the correct diagnosis is the correct treatment possible [52–54]. ## 2. Aim of the Study The aim of this study is to collect and organize information on the accurate diagnosis and treatment of pain in TMD through a review of the literature supplemented by our own clinical experience. ## 3. Materials and Methods The study was conducted by searching the scientific databases PubMed, Scopus, and Google Scholar for documents published from 2002 to 2022. The literature review included 168 selected manuscripts, the content of which was important for pain diagnosis and clinical treatment of TMD. These aspects constituted the inclusion criteria for the review. The following keywords were used to build the full list of references: TMD, pain, TMJ disorders, occlusal splints, relaxing splints, physiotherapy TMD, pharmacology TMD, and natural therapy TMD. ## 4. The Essence of the Matter ### 4.1. TMD Pain Diagnosis #### 4.1.1. Myalgia Myalgia (muscle pain) can be caused by mandibular movements, parafunctions, and excessive muscle tension due to the increased activity of masticatory muscles. Pain occurs upon provocation testing. The patient’s history may include pain in the jaw, temple, ear, or in front of the ear. Pain may be modified by jaw movement, function, or parafunction. Upon physical examination of the patient, the physician is able to confirm the location of pain in the temporalis or masseter muscle, additionally using muscle palpation and maximum unassisted or assisted jaw opening [55–75]. #### 4.1.2. Myofascial Pain Myofascial pain can be local or referred and is experienced by the patient as deep and dull. Unlike myalgia, this pain spreads beyond the palpated site, remaining within the boundary of the examined muscle or, in the case of referred myofascial pain, extending beyond the boundary of the examined muscle. Myofascial trigger points may also be felt during palpation [76–80]. #### 4.1.3. Arthralgia The term arthralgia refers to pain in the temporomandibular joint without signs of joint inflammation. The onset of pain is associated with mandibular movement, function, and parafunction. Pain is also triggered during provocation testing. The patient’s history includes pain in the jaw, temple, ear, or in front of the ear. On physical examination, the physician confirms pain in the TMJ area, especially the lateral region, and examines the maximum range of jaw opening with and without assistance [81–83]. #### 4.1.4. TMD-Attributed Headache Headache attributed to temporomandibular dysfunction is characterized by a history of temporal pain of any nature. 
The pain can be modified by mandibular movement, function, and parafunction. Upon physical examination, pain in the temporalis region can also be observed in provocation tests. Pain may occur during palpation and when testing jaw opening [84]. #### 4.1.5. Disc Displacement with Reduction or with Intermittent Locking This is an intracapsular disorder involving the condyle-disc complex. To make the diagnosis, it is necessary to determine the closed mouth position according to the protocol. At the Department of Propaedeutics, Physical Diagnostics, and Dental Physiotherapy, we ask the patient to assume their habitual occlusion and then relax the mandible. In this way, we are able to assess the actual intraarticular status, which is confirmed by palpation, joint sound inspection (with a stethoscope), and diagnostic imaging (CBCT). When performing diagnostic imaging, it is essential to perform the examination under the same conditions, without the bite stick. In this disorder, the disc is positioned anteriorly relative to the condylar head and reduces with mouth opening movements. In some cases, medial and lateral displacement of the articular disc can be observed, as well as noises such as clicking, crackling, or popping [85–93]. Please note that if the patient has a history of joint locking and chewing problems, this diagnosis is ruled out. To make the diagnosis, the patient is asked to report all TMJ noises that have occurred in the last 30 days during mandibular movements, and additionally, the patient should report any noises during the examination: (i) Clicking, popping, and/or snapping noise during both opening and closing movements, detected with palpation during at least one of three repetitions of jaw opening and closing movements; or (ii) Clicking, popping, and/or snapping noise detected with palpation during at least one of three repetitions of opening or closing movement(s); (iii) Clicking, popping, and/or snapping noise detected with palpation during at least one of three repetitions of right or left lateral, or protrusive movement(s) [94–100]. When discussing this disorder, it should be stated that imaging is the reference standard for this diagnosis [101–103]. #### 4.1.6. Disc Displacement without Reduction with Limited Opening In this intracapsular disorder, in the closed mouth position, the disc is positioned anteriorly relative to the condylar head and does not reduce with mouth opening. Characteristically, the disorder is associated with persistent limited mandibular opening, sometimes referred to as a closed lock, which is not resolved by a manipulative manoeuvre performed by the physician. Patient history includes a locked jaw, limited movement, and eating difficulties. On physical examination, during assisted jaw opening, the distance between the upper and lower incisors is less than 40 mm. Passive movements may be accompanied by noise [104–107]. #### 4.1.7. Osteoarthritis of the Temporomandibular Joint This disorder involves joint tissue deterioration with concomitant osseous changes in the condylar head and/or articular eminence. In the history, the patient reports noise when chewing or opening the mouth in the last 30 days, and these phenomena may also appear during the examination. On physical examination, the physician detects snapping or popping sounds in the joint during abduction, adduction, and lateral or protrusive movements. 
Imaging is required, as CBCT may help visualize subchondral cysts, erosions, generalized sclerosis/calcification, or osteophytes [108–111]. #### 4.1.8. Subluxation This is a hypermobility disorder involving the disc-condyle complex and the articular eminence. In the open mouth position, the disc is anterior to the articular eminence, and the normally closed mouth position cannot be restored without a manipulative manoeuvre. The difference between subluxation and luxation is that in the former the patient is able to reduce the dislocation on their own, whereas the latter requires professional intervention. Patient history includes jaw locking upon abduction movement in the last 30 days. These locks may have been incidental and temporary, resulting in an inability to close the mouth [112, 113]. The RDC/TMD and DC/TMD protocols make it possible to establish a diagnosis but do not shed any light on the etiology of the disorder; elimination of the cause, or an attempt to create optimal conditions, is crucial in the treatment process. At the Department of Propaedeutics, Physical Diagnostics, and Dental Physiotherapy, the treatment team consists of an orthodontist, a physician dealing with dental prosthetics and restorative dentistry, a physiotherapist, and a dentist who coordinates the work of the whole team [114]. One of the most common signs of a disease process within the TMJ is the sounds emitted by the articular structures, such as popping, clicking, humming, grinding, or crunching [114]. Egermark et al., after examining 320 children aged 7, 11, and 15 years, reported that acoustic symptoms were more common in those with malocclusion (24%), with a predominance of transverse malocclusion. In their conclusions, they noted that there were no significant differences in the prevalence of masticatory dysfunction between patients with malocclusion and those with a normal bite [115]. Research findings provide no clear-cut conclusion as to how temporomandibular joint disorders are affected by malocclusion. The consequences of malocclusion in terms of TMD development may be manifold and are undoubtedly related to age, gender, and the severity of the disorder. A fairly significant problem reported and observed in patients is nocturnal bruxism, which affects 8% of the population, and awake bruxism, the prevalence of which is estimated at 20%. At present, bruxism is defined not as a disorder but as a physiological stress-coping mechanism [116–121]. Based on our own experience, we would like to note the relatively frequent coexistence of TMD with orthodontic problems, as well as the occurrence of temporomandibular disorders in post-orthodontic patients, in whom the teeth were often aligned in the arches while the condylar heads were displaced posteriorly with reduced joint space [122]. In addition, it is important to consider that dental arches are somatic sites where excessive emotional tension can be diffused and reduced [123]. Research into the associations between malocclusion and TMD, as well as the influence of malocclusion treatment on TMD, should be conducted in large study samples. ### 4.2. TMD Pain Therapy #### 4.2.1. Natural Methods Acupuncture, the best-known method of traditional Chinese medicine, is often used, including in Poland, in the treatment of chronic pain. Acupuncture points often coincide with so-called trigger points and correspond to sites of increased density of A-δ and C fibre nerve endings that conduct pain sensations. 
Warm compress therapy is used for chronic inflammation and muscle strains. Ideally, a warm compress at 35–40°C should be applied for 20–30 minutes. Cold compresses, on the other hand, are indicated for acute inflammation accompanied by pain and swelling [124, 125]. #### 4.2.2. Psychological and Behavioural Methods Psychological and behavioural programmes are effective in alleviating psychological crisis, allowing the patient to change their perception of pain and improving functioning in patients with chronic pain. The therapeutic effect is not affected by the duration of the programme or by whether the treatment is delivered in an individual or group setting. Behavioural approaches aim to reduce the frequency of pain-promoting behaviours and increase the frequency of health-promoting behaviours. They include: (i) improving physical fitness; (ii) social and employment activation; (iii) reducing the amount of medication; (iv) reducing overuse of health services. Psychological methods include the following: (i) modifying ways of thinking about pain (misconceptions about pain) that cause prolonged suffering and disability; (ii) replacing a sense of helplessness with a sense of control over pain and one’s own life; (iii) developing strategies for adequate and effective pain management; (iv) returning to work and promoting an active lifestyle [126, 127]. It must be remembered that effective pain control requires a multidimensional approach, aiming not only to reduce the pain but also to improve the patient’s quality of life. #### 4.2.3. Interventional Methods: Splint Therapy Occlusal splint therapy can be used in all TMD disorders; however, it is vitally important to use the right splint for the patient’s unique situation. An occlusal splint is an appliance that affects the mutual relationship of the upper and lower teeth and, consequently, the relationship of the condylar process to the mandibular fossa and articular eminence within the TMJ. The purpose of splints is to stabilize occlusion or to protect teeth from excessive abrasion [128, 129]. According to numerous studies, the use of splints has a significant effect on alleviating or even eliminating the patient’s pain symptoms. In cases of disc displacement, repositioning splints are used to stabilize the mandible in centric relation, and in cases of masticatory muscle disorders, relaxation splints are used to prevent parafunctional effects [130, 131]. Splints are most commonly made by obtaining dental impressions and making a bite registration with wax or silicone material. An intraoral scanner and electronic bite registration can also be used. The technique recommended by our team for making occlusal splints is 3D printing using a dedicated resin, which makes it possible to avoid the mistakes common in the conventional hand-made process. On the basis of our own experience, research findings, and patient feedback, we use two types of splints in the Department of Propaedeutics, Physical Diagnostics, and Dental Physiotherapy: the Michigan-type relaxation splint and the maxillomandibular repositioning splint [132, 133]. The Michigan-type relaxation splint with canine guidance is used in cases of myalgia, myofascial pain, and TMD-attributed headache. The relaxation splint is made from hard resin and is always applied to a single arch, with the upper arch usually being the arch of choice, unless there are missing posterior teeth. 
Importantly, in the case of missing teeth, the design of the splint should allow for retention elements. The hard repositioning splint, joined interocclusally in the correct construction bite relationship, is used in the following situations: arthralgia, disc displacement with reduction, disc displacement with reduction with intermittent locking, disc displacement without reduction with limited opening, disc displacement without reduction and without limited opening, osteoarthritis of the temporomandibular joint, and subluxation. #### 4.2.4. Physiotherapy Physiotherapy is a discipline of health science that aims to eliminate, alleviate, and prevent various ailments, as well as restore functional ability through movement and various physical agents. Physiotherapists are part of the treatment process in the case of dysfunctions involving the neuromuscular, musculoskeletal, and other systems [134]. In their work, physiotherapists use kinesiotherapy and physical therapy techniques. (i) Self-therapy and muscle training. The patient is taught how to perform correct opening, closing, lateral, and protrusive movements of the mandible, as well as how to deal with sudden pain. Exercises should be performed daily in front of a mirror, and if the treatment includes a splint, it should also be worn during exercises. The purpose of the exercises is to shorten the overstretched muscles and relax them, which may help improve symmetry and regulate muscle tone [135]. (ii) Manual therapy makes use of trigger points. For disc displacement, a joint mobilization technique is applied, which involves the physiotherapist performing traction and gliding movements with low velocity but increasing amplitude. These movements are performed parallel and perpendicular to the joint surface. If the mandibular range of motion is limited, muscle energy techniques (MET) can be used. Treatments using MET involve the repetition of three steps: in step one, the muscle is stretched to the point of tissue resistance; in step two, the patient slightly contracts the muscles for about 10 seconds, trying to resist the force generated by the physiotherapist; in the last step, the patient relaxes the muscles [136]. (iii) Massage is used for myofascial pain in order to achieve pain relief, improve muscle length and flexibility, and loosen fascia [137, 138]. Massage sessions should last 30 minutes and take place twice a week. With subsequent visits, the treatment should be applied with increasing force. (iv) Physical therapy, such as ultrasound and transcutaneous electrical nerve stimulation (TENS), can be used for pain of muscular origin. Therapeutic ultrasound can be applied in three modalities: continuous waves, short bursts (pulsed ultrasound), and ultrasound combined with electrical stimulation, the latter of which has proven to be the most effective. TENS relieves pain and relaxes masticatory muscles in symptomatic patients with TMD [139–142]. In pain of intracapsular origin, positive results have been observed after the application of a magnetic field combined with LED light therapy. The Solux infrared lamp can be used in cases of arthropathy and rheumatic diseases. The beneficial effects of heat therapy include the alleviation of pain. (i) The Kinesio Taping method is used for TMJ stabilization. It should be applied bilaterally. The tapes work by reducing the tension in the masticatory muscles, as well as in adjacent structures such as the muscles of the neck, shoulders, and spine [143–146]. 
The application of tapes also stimulates lymphatic drainage, which has a beneficial effect on inflammation accompanied by tissue swelling. (ii) Iontophoresis is the use of direct electrical current to accelerate the transdermal delivery of nonsteroidal anti-inflammatory drugs (NSAIDs), corticosteroids, and analgesics. While it is not associated with pain relief, a significant improvement in the range of motion of the joint has been observed [147]. #### 4.2.5. Pharmacotherapy The decision on the use of medications in temporomandibular disorders should be preceded by a thorough analysis of the risks and benefits of the drug [148–152]. Medications used to treat TMD include analgesics, nonsteroidal anti-inflammatory drugs, anticonvulsants, muscle relaxants, and benzodiazepines [153, 154]. #### 4.2.6. Nonsteroidal Anti-Inflammatory Drugs (NSAIDs) NSAIDs are beneficial for patients with acute temporomandibular arthritis resulting from sudden disc displacement. Treatment should continue for a minimum of two weeks, and it is important to combine NSAIDs with gastroprotective agents. Among NSAIDs, ibuprofen appears to be the safest for the gastrointestinal tract [155]. It should also be noted that taking NSAIDs for more than 5 days may reduce the efficacy of antihypertensive drugs, such as diuretics, beta-blockers, and ACE inhibitors [154, 155]. In addition, NSAIDs used with anticoagulants such as warfarin or acenocoumarol may increase the risk of bleeding. #### 4.2.7. Opioids Due to the interactions of NSAIDs with anticoagulants, as well as the risk of gastritis, physicians sometimes choose to administer oral opioids, such as codeine and oxycodone. The intraarticular delivery route has been studied, but the findings are conflicting [156]. It is essential to bear in mind the side effects of opioid use, which include dizziness, excessive sedation, nausea, vomiting, constipation, physical dependence and addiction, and respiratory depression. For these reasons, the use of opioids for the management of TMD should be discouraged [157–159]. #### 4.2.8. Corticosteroids Corticosteroids are helpful in the treatment of moderate to severe TMD. They can be administered by intraarticular injection or by the oral route. They have an anti-inflammatory effect, which can help relieve pain. For intraarticular injections, it is advisable to combine the corticosteroid preparation with a local anaesthetic, such as lidocaine. According to research findings, this approach provides a significant reduction in pain, lasting 4 to 6 weeks, and a reduced risk of complications. Corticosteroids should be used with caution or discontinued in patients with hypertension, adrenal disease, or electrolyte problems. On day 4 after injection, it is recommended to introduce NSAIDs [160–163]. #### 4.2.9. Myorelaxants Muscle relaxants are used to reduce skeletal muscle tone and, therefore, may be helpful in the management of TMD of muscular origin and chronic orofacial pain [164]. The most common myorelaxants include cyclobenzaprine, metaxalone, methocarbamol, and carisoprodol. Based on numerous studies, cyclobenzaprine is considered to be the drug of choice, as it relieves pain of muscular origin and improves sleep quality [165]. Caution should be exercised when using this type of medication due to its potential to induce significant sedation. These drugs are contraindicated in patients with hyperthyroidism, heart failure, heart rhythm disorders, and after myocardial infarction. 
The recommended dose is 10 mg at bedtime for 30 days, followed by a 2-week washout period and a medical follow-up. In the course of the therapy, the patient should always remain under medical supervision. #### 4.2.10. Anticonvulsants When discussing anticonvulsants, it is worth noting gabapentin, a GABA analogue. Gabapentin is thought to inhibit neurotransmitter release and reduce postsynaptic excitability [166]. The use of gabapentin reduces pain of muscular origin, particularly from the temporal and masseter muscles. The drug is generally well tolerated and is associated with transient and mild side effects, including dizziness, drowsiness, dry mouth, weight gain, and impaired concentration [167]. #### 4.2.11. Benzodiazepines Benzodiazepines facilitate transmission in the GABAergic system. They have been found to produce anxiolytic, sedative, hypnotic, anticonvulsant, and myorelaxant effects. Due to the risk of tolerance and dependence, as well as side effects including confusion, amnesia, and impaired motor coordination, these drugs are not recommended for the treatment of TMD [168]. 
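Before moving to the summary, it may help to make explicit the two simple numerical criteria quoted earlier in this review: the screening rule from the Introduction (masticatory dysfunction can be suspected when at least three of the four listed symptoms are reported) and the assisted-opening threshold from Section 4.1.6 (an inter-incisal distance below 40 mm). The following Python sketch is purely illustrative; the data structure, field names, and function names are our own hypothetical shorthand and are not part of the DC/TMD instrument.

```python
from dataclasses import dataclass

# Illustrative sketch only. Field and function names are hypothetical and do not
# reproduce the DC/TMD examination protocol; they merely encode the two counting
# and threshold rules mentioned in this review.
@dataclass
class ScreeningFindings:
    pain_or_sounds_on_movement: bool   # pain and acoustic symptoms during mandibular movements
    limited_mandibular_mobility: bool
    difficulty_with_jaw_opening: bool
    parafunction: bool                 # occlusal or nonocclusal parafunction
    assisted_opening_mm: float         # inter-incisal distance on assisted jaw opening

def masticatory_dysfunction_suspected(f: ScreeningFindings) -> bool:
    """True when at least three of the four listed symptoms are reported."""
    symptoms = (
        f.pain_or_sounds_on_movement,
        f.limited_mandibular_mobility,
        f.difficulty_with_jaw_opening,
        f.parafunction,
    )
    return sum(symptoms) >= 3

def limited_opening(f: ScreeningFindings, threshold_mm: float = 40.0) -> bool:
    """True when assisted opening is below the 40 mm threshold cited in Section 4.1.6."""
    return f.assisted_opening_mm < threshold_mm

# Example: pain with clicking, limited mobility, and difficulty opening, with 35 mm assisted opening.
patient = ScreeningFindings(True, True, True, False, 35.0)
print(masticatory_dysfunction_suspected(patient))  # True (3 of 4 symptoms reported)
print(limited_opening(patient))                    # True (35 mm < 40 mm)
```

Such a check can at most flag a patient for a full work-up; as stressed throughout this review, the diagnosis itself rests on the complete DC/TMD examination protocol, psychological evaluation, and CBCT imaging.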
In this way, we are able to assess the actual intraarticular status, which is confirmed by palpation, joint sound inspection (with a stethoscope), and diagnostic imaging (CBCT). When performing diagnostic imaging, it is essential to perform the examination under the same conditions, without the bite stick.In this disorder, the disc is positioned anteriorly relative to the condylar head and reduces with mouth opening movements. In some cases, medial and lateral displacement of the articular disc can be observed, as well as noises such as clicking, crackling, or popping [85–93]. Please note that if the patient has a history of joint locking and chewing problems, this diagnosis is ruled out.To make the diagnosis, the patient is asked to report all TMJ noises that have occurred in the last 30 days during mandibular movements, and additionally, the patient should report any noises during the examination:(i) Clicking, popping, and/or snapping noise during both opening and closing movements, detected with palpation during at least one of three repetitions of jaw opening and closing movements; or(ii) Clicking, popping, and/or snapping noise detected with palpation during at least one of three repetitions of opening or closing movement(s);(iii) Clicking, popping, and/or snapping noise detected with palpation during at least one of three repetitions of right or left lateral, or protrusive movement(s) [94–100].When discussing this disorder, it should be stated that imaging should be the reference standard for this diagnosis [101–103]. ### 4.1.6. Disc Displacement without Reduction with Limited Opening In this intracapsular disorder, in the closed mouth position, the disc is positioned anteriorly relative to the condylar head and does not reduce in size with the opening of the mouth. Characteristically, the disorder is associated with persistent limited mandibular opening, sometimes referred to as a closed lock, which is not resolved by a manipulative manoeuvre performed by the physician.Patient history includes a locked jaw, limited movement, and eating difficulties. In physical examination, during assisted jaw opening, the distance between the upper and lower incisors is less than 40 mm. Passive movements may be accompanied by noise [104–107]. ### 4.1.7. Osteoarthritis of the Temporomandibular Joint This disorder involves joint tissue deterioration with concomitant osseous changes in the condylar head and/or articular eminence.In history, the patient reports noise when chewing or opening the mouth in the last 30 days, and these phenomena may also appear during the examination. On physical examination, the physician detects snapping, popping sounds in the joint during the abduction, adduction, and lateral or protrusive movements. Imaging is required, as CBCT may help visualize subchondral cysts, erosions, generalized sclerosis/calcification, or osteophytes [108–111].. ### 4.1.8. Subluxation A hypermobility disorder involving the disc-condyle complex and the articular eminence. In the open mouth position, the disc is anterior to the articular eminence and the normally closed mouth position cannot be restored without a manipulative manoeuvre. The difference between subluxation and luxation is that in the former the patient is able to reduce the dislocation on their own, whereas the latter requires professional intervention. Patient history includes jaw locking upon abduction movement in the last 30 days. 
These locks may have been incidental and temporary, resulting in an inability to close the mouth [112, 113].The RDC/TMD and DC/TMD protocols make it possible to establish a diagnosis but do not shed any light on the etiology of the disorder, and elimination of the cause or an attempt to create the optimal conditions will be crucial in the treatment process.At the Department of Propaedeutics, Physical diagnostics, and Dental Physiotherapy, the treatment team consists of an orthodontist, a physician dealing with dental prosthetics and restorative dentistry, a physiotherapist, and a dentist who coordinates the work of the whole team [114]. One of the most common signs of a disease process within the TMJ are sounds emitted by the articular structures, such as popping, clicking, humming, grinding, or crunching [114].Egermark et al., after examining 320 children aged 7, 11, and 15 years, reported that acoustic symptoms were more common in those with malocclusion (24%), with a predominance of transverse malocclusion. In their conclusions, they noted that there were no significant differences in the prevalence of masticatory dysfunction in the studied population between patients with malocclusion and those with a normal bite [115].Research findings provide no clear-cut conclusion as to how temporomandibular joint disorders are affected by a malocclusion. The consequences of malocclusion in terms of TMD development may be manifold and are undoubtedly related to age, gender, as well as the severity of the disorder.A fairly significant problem reported and observed in patients is nocturnal bruxism, which affects 8% of the population, and awake bruxism, the prevalence of which is estimated at 20%. At present, bruxism is defined not as a disorder but as a physiological stress-coping mechanism [116–121].Based on our own experience, we would like to note the relatively frequent coexistence of TMD with orthodontic disorders and temporomandibular disorders in post-orthodontic patients, where the teeth were often aligned in arches while the condylar heads were displaced posteriorly with reduced joint space [122]. In addition, it is important to consider that dental arches are somatic sites where excessive emotional tension can be diffused and reduced [123].Research into the associations between malocclusion and TMD, as well as the influence of malocclusion treatments on TMD should be conducted in large study samples.. ## 4.1.1. Myalgia Myalgia (muscle pain) can be caused by mandibular movements, parafunctions, and excessive muscle tension due to the increased activity of masticatory muscles. Pain occurs upon provocation testing. The patient’s history may include pain in the jaw, temple, ear, or in front of the ear. Pain may be modified with jaw movement, function, or parafunction.Upon physical examination of the patient, the physician is able to confirm the location of pain in the temporalis or masseter muscle, additionally using muscle palpation and maximum unassisted or assisted jaw opening [55–75]. ## 4.1.2. Myofascial Pain Myofascial pain can be local or referred to and is experienced by the patient as deep and dull. Unlike myalgia, this pain spreads beyond the palpated area, remaining inside the boundary of the examined muscle or in the case of referred myofascial pain–beyond the area of the examined muscle. Myofascial trigger points may also be felt during palpation [76–80]. ## 4.1.3. Arthralgia The term arthralgia refers to pain in the temporomandibular joint without signs of joint inflammation. 
The onset of pain is associated with mandibular movement, function, and parafunction. Pain is also triggered during provocation testing. The patient’s history includes pain in the jaw, temple, ear, or in front of the ear. On physical examination, the physician confirms pain in the TMJ area, especially the lateral region, and examines the maximum range of jaw opening with and without assistance [81–83]. ## 4.1.4. TMD-Attributed Headache Headache attributed to temporomandibular dysfunction is characterized by a history of temporal pain of any nature. The pain can be modified by mandibular movement, function, and parafunction. Upon physical examination, pain in the temporalis region can also be observed in provocative tests. Pain may occur during palpation and when testing jaw opening [84]. ## 4.1.5. Disc Displacement with Reduction or with Intermittent Locking An intracapsular disorder involving the condyle-disc complex. To make the diagnosis, it is necessary to determine the closed mouth position according to the protocol. At the Department of Propaedeutics, Physical Diagnostics, and Dental Physiotherapy, we ask the patient to assume their habitual occlusion and then relax the mandible. In this way, we are able to assess the actual intraarticular status, which is confirmed by palpation, joint sound inspection (with a stethoscope), and diagnostic imaging (CBCT). When performing diagnostic imaging, it is essential to perform the examination under the same conditions, without the bite stick.In this disorder, the disc is positioned anteriorly relative to the condylar head and reduces with mouth opening movements. In some cases, medial and lateral displacement of the articular disc can be observed, as well as noises such as clicking, crackling, or popping [85–93]. Please note that if the patient has a history of joint locking and chewing problems, this diagnosis is ruled out.To make the diagnosis, the patient is asked to report all TMJ noises that have occurred in the last 30 days during mandibular movements, and additionally, the patient should report any noises during the examination:(i) Clicking, popping, and/or snapping noise during both opening and closing movements, detected with palpation during at least one of three repetitions of jaw opening and closing movements; or(ii) Clicking, popping, and/or snapping noise detected with palpation during at least one of three repetitions of opening or closing movement(s);(iii) Clicking, popping, and/or snapping noise detected with palpation during at least one of three repetitions of right or left lateral, or protrusive movement(s) [94–100].When discussing this disorder, it should be stated that imaging should be the reference standard for this diagnosis [101–103]. ## 4.1.6. Disc Displacement without Reduction with Limited Opening In this intracapsular disorder, in the closed mouth position, the disc is positioned anteriorly relative to the condylar head and does not reduce in size with the opening of the mouth. Characteristically, the disorder is associated with persistent limited mandibular opening, sometimes referred to as a closed lock, which is not resolved by a manipulative manoeuvre performed by the physician.Patient history includes a locked jaw, limited movement, and eating difficulties. In physical examination, during assisted jaw opening, the distance between the upper and lower incisors is less than 40 mm. Passive movements may be accompanied by noise [104–107]. ## 4.1.7. 
Osteoarthritis of the Temporomandibular Joint This disorder involves joint tissue deterioration with concomitant osseous changes in the condylar head and/or articular eminence.In history, the patient reports noise when chewing or opening the mouth in the last 30 days, and these phenomena may also appear during the examination. On physical examination, the physician detects snapping, popping sounds in the joint during the abduction, adduction, and lateral or protrusive movements. Imaging is required, as CBCT may help visualize subchondral cysts, erosions, generalized sclerosis/calcification, or osteophytes [108–111].. ## 4.1.8. Subluxation A hypermobility disorder involving the disc-condyle complex and the articular eminence. In the open mouth position, the disc is anterior to the articular eminence and the normally closed mouth position cannot be restored without a manipulative manoeuvre. The difference between subluxation and luxation is that in the former the patient is able to reduce the dislocation on their own, whereas the latter requires professional intervention. Patient history includes jaw locking upon abduction movement in the last 30 days. These locks may have been incidental and temporary, resulting in an inability to close the mouth [112, 113].The RDC/TMD and DC/TMD protocols make it possible to establish a diagnosis but do not shed any light on the etiology of the disorder, and elimination of the cause or an attempt to create the optimal conditions will be crucial in the treatment process.At the Department of Propaedeutics, Physical diagnostics, and Dental Physiotherapy, the treatment team consists of an orthodontist, a physician dealing with dental prosthetics and restorative dentistry, a physiotherapist, and a dentist who coordinates the work of the whole team [114]. One of the most common signs of a disease process within the TMJ are sounds emitted by the articular structures, such as popping, clicking, humming, grinding, or crunching [114].Egermark et al., after examining 320 children aged 7, 11, and 15 years, reported that acoustic symptoms were more common in those with malocclusion (24%), with a predominance of transverse malocclusion. In their conclusions, they noted that there were no significant differences in the prevalence of masticatory dysfunction in the studied population between patients with malocclusion and those with a normal bite [115].Research findings provide no clear-cut conclusion as to how temporomandibular joint disorders are affected by a malocclusion. The consequences of malocclusion in terms of TMD development may be manifold and are undoubtedly related to age, gender, as well as the severity of the disorder.A fairly significant problem reported and observed in patients is nocturnal bruxism, which affects 8% of the population, and awake bruxism, the prevalence of which is estimated at 20%. At present, bruxism is defined not as a disorder but as a physiological stress-coping mechanism [116–121].Based on our own experience, we would like to note the relatively frequent coexistence of TMD with orthodontic disorders and temporomandibular disorders in post-orthodontic patients, where the teeth were often aligned in arches while the condylar heads were displaced posteriorly with reduced joint space [122]. 
In addition, it is important to consider that dental arches are somatic sites where excessive emotional tension can be diffused and reduced [123].Research into the associations between malocclusion and TMD, as well as the influence of malocclusion treatments on TMD should be conducted in large study samples.. ## 4.2. TMD Pain Therapy ### 4.2.1. Natural Methods Acupuncture is the best-known method of traditional Chinese medicine that is often used, also in Poland, in the treatment of chronic pain. Acupuncture points often coincide with so-called trigger points and correspond to sites of increased density ofA-δ and C fibre nerve endings that conduct pain sensations. Warm compress therapy is used for chronic inflammation and muscle strains. Ideally, a warm compress at 35–40 degrees C should be applied for 20–30 minutes. Cold compresses, on the other hand, are good for acute inflammation with pain and swelling [124, 125]. ### 4.2.2. Psychological and Behavioural Methods Psychological and behavioural programmes are effective in alleviating the psychological crisis, allowing the patient to change their perception of pain and improving functioning in patients with chronic pain. The therapeutic effect is not affected by the duration of the programme or by whether the treatment is delivered in an individual or group setting.Behavioural approaches aim to reduce the frequency of pain-promoting behaviours and increase the frequency of health-promoting behaviours. They include:(i) improving physical fitness(ii) social and employment activation(iii) reducing the amount of medication(iv) reducing overuse of health servicesPsychological methods include the following:(i) modifying ways of thinking about pain (misconceptions about pain) that cause prolonged suffering and disability(ii) replacing a sense of helplessness with a sense of control over pain and one’s own life(iii) developing strategies for adequate and effective pain management(iv) returning to work and promoting an active lifestyle [126, 127]It must be remembered that effective pain control requires a multidimensional approach, aiming to reduce the pain but also to improve the patient’s quality of life. ### 4.2.3. Interventional Methods-Splint Therapy Occlusal splint therapy can be used in all TMD disorders; however, it is vitally important to use the right splint for the patient’s unique situation.An occlusal splint is an appliance that affects the mutual relationship of the upper and lower teeth and, consequently, the relationship of the condylar process to the mandibular fossa and articular eminence within the TMJ. The purpose of splints is to stabilize occlusion or to protect teeth from excessive abrasion [128, 129].According to numerous studies, the use of splints has a significant effect on alleviating or even eliminating the patient’s pain symptoms. In cases of disc displacement, repositioning splints are used to stabilize the mandible in the centric relation, and in cases of masticatory muscle disorders, relaxation splints are used to prevent parafunctional effects [130, 131].Splints are most commonly made by obtaining dental impressions and making a bite registration with wax or silicone mass. An intraoral scanner and electronic bite registration can also be used.The technique recommended by our team for making occlusal splints is 3D printing using special resin, which makes it possible to avoid the mistakes common in the conventional hand-made process. 
On the basis of our own experience, research findings, and patient feedback, we use two types of splints in the Department of Propaedeutics, Physical diagnostics, and Dental Physiotherapy: the Michigan-type relaxation splint and the maxillomandibular repositioning splint [132, 133].The Michigan-type relaxation splint with canine guidance is used in cases involving: myalgia, myofascial pain, and TMD-attributed headache.The relaxation splint is made from hard resin and always applied to a single arch, with the upper usually being the arch of choice–unless there are missing teeth in the back. Importantly, in the case of missing teeth, the design of the splint should allow for retention elements.The hard repositioning splint joined interocclusal in the correct construction bite relationship is used in the following situations: arthralgia, disc displacement with reduction, disc displacement with reduction with intermittent locking, disc displacement without reduction with a limited opening, disc displacement without reduction and without limited opening, osteoarthritis of the temporomandibular joint, subluxation. ### 4.2.4. Physiotherapy Physiotherapy is a discipline of health science that aims to eliminate, alleviate, and prevent various ailments, as well as restore functional ability through movement and various physical agents. Physiotherapists are part of the treatment process in the case of dysfunctions involving the neuromuscular, musculoskeletal, and other systems [134].In their work, physiotherapists use kinesiotherapy and physical therapy techniques.(i) Self-therapy and muscle training. The patient is taught how to perform the correct opening, closing, lateral and protrusive movements of the mandible, as well as how to deal with sudden pain. Exercises should be performed daily in front of a mirror, and if the treatment includes a splint, it should also be used during exercises. The purpose of the exercises is to shorten the overstretched muscles and relax them, which may help improve symmetry and regulate muscle tone [135].(ii) Manual therapy makes use of trigger points. For disc displacement, a joint mobilization technique is applied, which involves the physiotherapist performing traction and gliding movements with low velocity but increasing amplitude. These movements are performed parallel and perpendicular to the joint surface. If the mandibular range of motion is limited, muscle energy techniques (MET) can be used. Treatments using the MET involve the repetition of three steps: in step one, the muscle is stretched to the point of resistance of the tissues; in step two, the patient slightly contracts muscles for about 10 seconds trying to resist the force generated by the physiotherapist; in the last step, the patient relaxes the muscles [136].(iii) Massage is used for myofascial pain in order to achieve pain relief and improve muscle length and flexibility, as well as loosen fascia [137, 138]. The frequency of massage sessions should be 30 minutes twice a week. With subsequent visits, the treatment should be applied with increasing force.(iv) Physical therapy, such as ultrasound and transcutaneous electrical nerve stimulation (TENS) can be used for pain of muscular origin. 
Therapeutic ultrasound can be applied in three modalities: using continuous waves, short bursts (pulsed ultrasound), and ultrasound combined with electrical stimulation, the latter of which has proven to be the most effective.TENS relieves pain and relaxes masticatory muscles in symptomatic patients with TMD [139–142]. In the pain of intracapsular origin, positive results have been observed after the application of a magnetic field combined with LED light therapy. The Solux infrared lamp can be used in cases of arthropathy and rheumatic diseases. The beneficial effects of heat therapy include the alleviation of pain.(i) The Kinesio Taping method is used for TMJ stabilization. It should be applied bilaterally. The tapes work by reducing the tension in the masticatory muscles, as well as the adjacent structures such as the muscles of the neck, shoulders, and spine [143–146]. The application of tapes also stimulates lymphatic drainage, which has a beneficial effect on inflammation accompanied by tissue swelling.(ii) Iontophoresis is the use of direct electrical current to accelerate the transdermal delivery of nonsteroidal anti-inflammatory drugs (NSAIDs), corticosteroids, and analgesics. While it is not associated with pain relief, a significant improvement in the range of motion in the joint has been observed [147]. ### 4.2.5. Pharmacotherapy The decision on the use of medications in temporomandibular disorders should be preceded by a thorough analysis of the risks and benefits of the drug [148–152]. Medications used to treat TMD include analgesics, nonsteroidal anti-inflammatory drugs, anticonvulsants, muscle relaxants, and benzodiazepines [153, 154]. ### 4.2.6. Nonsteroidal Anti-Inflammatory Drugs (NSAID) NSAIDs are beneficial for patients with acute temporomandibular arthritis resulting from sudden disc displacement. Treatment should continue for a minimum of two weeks, and it is important to combine NSAIDs with gastroprotective agents. Among NSAIDs, ibuprofen appears to be the safest for the gastrointestinal tract [155].It should also be noted that taking NSAIDs for more than 5 days may reduce the efficacy of antihypertensive drugs, such as diuretics, beta-blockers, and ACE inhibitors [154, 155]. In addition, NSAIDs used with anticoagulants such as warfarin or acenocoumarol may increase the risk of bleeding. ### 4.2.7. Opioids Due to the interactions of NSAIDs with anticoagulants, as well as the risk of gastritis, physicians sometimes choose to administer oral opioids, such as codeine and oxycodone. The intraarticular delivery route has been studied, but the findings are conflicting [156]. It is essential to bear in mind the side effects of opioid use, which include: dizziness, excessive sedation, nausea, vomiting, constipation, physical dependence and addiction, and respiratory depression. Because of the mentioned reasons, the use of opioids for the management of TMD should be discouraged [157–159]. ### 4.2.8. Corticosteroids Corticosteroids are helpful in the treatment of moderate to severe TMD. They can be administered by intraarticular injection or by oral route. They have an anti-inflammatory effect which can help relieve pain.For intraarticular injections, it is a good idea to combine corticosteroid preparation with a local anaesthetic, such as lidocaine. 
According to research findings, this approach provides for a significant reduction in pain, lasting 4 to 6 weeks, and a reduced risk of complications.Corticosteroids should be used with caution or discontinued in patients with hypertension, adrenal disease, or electrolyte problems. On day 4 after injection, it is recommended to introduce NSAIDs [160–163]. ### 4.2.9. Myorelaxants Muscle relaxants are used to reduce skeletal muscle tone and, therefore, may be helpful in the management of TMD of muscular origin and chronic orofacial pain [164]. The most common myorelaxants include cyclobenzaprine, metaxalone, methocarbamol, and carisoprodol. Based on numerous studies, cyclobenzaprine is considered to be the drug of choice due to relieving the pain of muscular origin and improving sleep quality [165].Caution should be exercised when using this type of medication due to its potential to induce significant sedation. These drugs are contraindicated in patients with hyperthyroidism, heart failure, after myocardial infarction, and heart rhythm disorders. The recommended dose is 10 mg at bedtime for 30 days, followed by a 2-week period to flush the drug out of the system and a medical follow-up. In the course of the therapy, the patient should always remain under medical supervision. ### 4.2.10. Anticonvulsants When discussing anticonvulsants, it is worth noting gabapentin, a GABA analogue. Gabapentin is thought to inhibit neurotransmitter release and reduce postsynaptic excitability [166].The use of gabapentin reduces the pain of muscular origin, particularly from the temporal and masseter muscles. The drug is generally well tolerated and is associated with transient and mild side effects, including dizziness, drowsiness, dry mouth, weight gain, and impaired concentration [167]. ### 4.2.11. Benzodiazepines Benzodiazepines facilitate transmission in the GABAergic system. They have been found to produce anxiolytic, sedative, hypnotic, anticonvulsant, and myorelaxant effects. Due to the risk of tolerance and dependence, as well as side effects including confusion, amnesia, and impaired motor coordination, these drugs are not recommended for the treatment of TMD [168]. ## 4.2.1. Natural Methods Acupuncture is the best-known method of traditional Chinese medicine that is often used, also in Poland, in the treatment of chronic pain. Acupuncture points often coincide with so-called trigger points and correspond to sites of increased density ofA-δ and C fibre nerve endings that conduct pain sensations. Warm compress therapy is used for chronic inflammation and muscle strains. Ideally, a warm compress at 35–40 degrees C should be applied for 20–30 minutes. Cold compresses, on the other hand, are good for acute inflammation with pain and swelling [124, 125]. ## 4.2.2. Psychological and Behavioural Methods Psychological and behavioural programmes are effective in alleviating the psychological crisis, allowing the patient to change their perception of pain and improving functioning in patients with chronic pain. The therapeutic effect is not affected by the duration of the programme or by whether the treatment is delivered in an individual or group setting.Behavioural approaches aim to reduce the frequency of pain-promoting behaviours and increase the frequency of health-promoting behaviours. 
They include:(i) improving physical fitness(ii) social and employment activation(iii) reducing the amount of medication(iv) reducing overuse of health servicesPsychological methods include the following:(i) modifying ways of thinking about pain (misconceptions about pain) that cause prolonged suffering and disability(ii) replacing a sense of helplessness with a sense of control over pain and one’s own life(iii) developing strategies for adequate and effective pain management(iv) returning to work and promoting an active lifestyle [126, 127]It must be remembered that effective pain control requires a multidimensional approach, aiming to reduce the pain but also to improve the patient’s quality of life. ## 4.2.3. Interventional Methods-Splint Therapy Occlusal splint therapy can be used in all TMD disorders; however, it is vitally important to use the right splint for the patient’s unique situation.An occlusal splint is an appliance that affects the mutual relationship of the upper and lower teeth and, consequently, the relationship of the condylar process to the mandibular fossa and articular eminence within the TMJ. The purpose of splints is to stabilize occlusion or to protect teeth from excessive abrasion [128, 129].According to numerous studies, the use of splints has a significant effect on alleviating or even eliminating the patient’s pain symptoms. In cases of disc displacement, repositioning splints are used to stabilize the mandible in the centric relation, and in cases of masticatory muscle disorders, relaxation splints are used to prevent parafunctional effects [130, 131].Splints are most commonly made by obtaining dental impressions and making a bite registration with wax or silicone mass. An intraoral scanner and electronic bite registration can also be used.The technique recommended by our team for making occlusal splints is 3D printing using special resin, which makes it possible to avoid the mistakes common in the conventional hand-made process. On the basis of our own experience, research findings, and patient feedback, we use two types of splints in the Department of Propaedeutics, Physical diagnostics, and Dental Physiotherapy: the Michigan-type relaxation splint and the maxillomandibular repositioning splint [132, 133].The Michigan-type relaxation splint with canine guidance is used in cases involving: myalgia, myofascial pain, and TMD-attributed headache.The relaxation splint is made from hard resin and always applied to a single arch, with the upper usually being the arch of choice–unless there are missing teeth in the back. Importantly, in the case of missing teeth, the design of the splint should allow for retention elements.The hard repositioning splint joined interocclusal in the correct construction bite relationship is used in the following situations: arthralgia, disc displacement with reduction, disc displacement with reduction with intermittent locking, disc displacement without reduction with a limited opening, disc displacement without reduction and without limited opening, osteoarthritis of the temporomandibular joint, subluxation. ## 4.2.4. Physiotherapy Physiotherapy is a discipline of health science that aims to eliminate, alleviate, and prevent various ailments, as well as restore functional ability through movement and various physical agents. 
Physiotherapists are part of the treatment process in the case of dysfunctions involving the neuromuscular, musculoskeletal, and other systems [134].In their work, physiotherapists use kinesiotherapy and physical therapy techniques.(i) Self-therapy and muscle training. The patient is taught how to perform the correct opening, closing, lateral and protrusive movements of the mandible, as well as how to deal with sudden pain. Exercises should be performed daily in front of a mirror, and if the treatment includes a splint, it should also be used during exercises. The purpose of the exercises is to shorten the overstretched muscles and relax them, which may help improve symmetry and regulate muscle tone [135].(ii) Manual therapy makes use of trigger points. For disc displacement, a joint mobilization technique is applied, which involves the physiotherapist performing traction and gliding movements with low velocity but increasing amplitude. These movements are performed parallel and perpendicular to the joint surface. If the mandibular range of motion is limited, muscle energy techniques (MET) can be used. Treatments using the MET involve the repetition of three steps: in step one, the muscle is stretched to the point of resistance of the tissues; in step two, the patient slightly contracts muscles for about 10 seconds trying to resist the force generated by the physiotherapist; in the last step, the patient relaxes the muscles [136].(iii) Massage is used for myofascial pain in order to achieve pain relief and improve muscle length and flexibility, as well as loosen fascia [137, 138]. The frequency of massage sessions should be 30 minutes twice a week. With subsequent visits, the treatment should be applied with increasing force.(iv) Physical therapy, such as ultrasound and transcutaneous electrical nerve stimulation (TENS) can be used for pain of muscular origin. Therapeutic ultrasound can be applied in three modalities: using continuous waves, short bursts (pulsed ultrasound), and ultrasound combined with electrical stimulation, the latter of which has proven to be the most effective.TENS relieves pain and relaxes masticatory muscles in symptomatic patients with TMD [139–142]. In the pain of intracapsular origin, positive results have been observed after the application of a magnetic field combined with LED light therapy. The Solux infrared lamp can be used in cases of arthropathy and rheumatic diseases. The beneficial effects of heat therapy include the alleviation of pain.(i) The Kinesio Taping method is used for TMJ stabilization. It should be applied bilaterally. The tapes work by reducing the tension in the masticatory muscles, as well as the adjacent structures such as the muscles of the neck, shoulders, and spine [143–146]. The application of tapes also stimulates lymphatic drainage, which has a beneficial effect on inflammation accompanied by tissue swelling.(ii) Iontophoresis is the use of direct electrical current to accelerate the transdermal delivery of nonsteroidal anti-inflammatory drugs (NSAIDs), corticosteroids, and analgesics. While it is not associated with pain relief, a significant improvement in the range of motion in the joint has been observed [147]. ## 4.2.5. Pharmacotherapy The decision on the use of medications in temporomandibular disorders should be preceded by a thorough analysis of the risks and benefits of the drug [148–152]. 
Medications used to treat TMD include analgesics, nonsteroidal anti-inflammatory drugs, anticonvulsants, muscle relaxants, and benzodiazepines [153, 154]. ## 4.2.6. Nonsteroidal Anti-Inflammatory Drugs (NSAIDs) NSAIDs are beneficial for patients with acute temporomandibular arthritis resulting from sudden disc displacement. Treatment should continue for a minimum of two weeks, and it is important to combine NSAIDs with gastroprotective agents. Among NSAIDs, ibuprofen appears to be the safest for the gastrointestinal tract [155]. It should also be noted that taking NSAIDs for more than 5 days may reduce the efficacy of antihypertensive drugs, such as diuretics, beta-blockers, and ACE inhibitors [154, 155]. In addition, NSAIDs used with anticoagulants such as warfarin or acenocoumarol may increase the risk of bleeding. ## 4.2.7. Opioids Due to the interactions of NSAIDs with anticoagulants, as well as the risk of gastritis, physicians sometimes choose to administer oral opioids, such as codeine and oxycodone. The intraarticular delivery route has been studied, but the findings are conflicting [156]. It is essential to bear in mind the side effects of opioid use, which include dizziness, excessive sedation, nausea, vomiting, constipation, physical dependence and addiction, and respiratory depression. For these reasons, the use of opioids for the management of TMD should be discouraged [157–159]. ## 4.2.8. Corticosteroids Corticosteroids are helpful in the treatment of moderate to severe TMD. They can be administered by intraarticular injection or by the oral route. They have an anti-inflammatory effect, which can help relieve pain. For intraarticular injections, it is advisable to combine the corticosteroid preparation with a local anaesthetic, such as lidocaine. According to research findings, this approach provides a significant reduction in pain lasting 4 to 6 weeks and a reduced risk of complications. Corticosteroids should be used with caution or discontinued in patients with hypertension, adrenal disease, or electrolyte problems. On day 4 after injection, it is recommended to introduce NSAIDs [160–163]. ## 4.2.9. Myorelaxants Muscle relaxants are used to reduce skeletal muscle tone and, therefore, may be helpful in the management of TMD of muscular origin and chronic orofacial pain [164]. The most common myorelaxants include cyclobenzaprine, metaxalone, methocarbamol, and carisoprodol. Based on numerous studies, cyclobenzaprine is considered to be the drug of choice because it relieves pain of muscular origin and improves sleep quality [165]. Caution should be exercised when using this type of medication due to its potential to induce significant sedation. These drugs are contraindicated in patients with hyperthyroidism, heart failure, a history of myocardial infarction, or heart rhythm disorders. The recommended dose is 10 mg at bedtime for 30 days, followed by a 2-week period to flush the drug out of the system and a medical follow-up. In the course of the therapy, the patient should always remain under medical supervision. ## 4.2.10. Anticonvulsants When discussing anticonvulsants, it is worth noting gabapentin, a GABA analogue. Gabapentin is thought to inhibit neurotransmitter release and reduce postsynaptic excitability [166]. The use of gabapentin reduces pain of muscular origin, particularly from the temporal and masseter muscles.
The drug is generally well tolerated and is associated with transient and mild side effects, including dizziness, drowsiness, dry mouth, weight gain, and impaired concentration [167]. ## 4.2.11. Benzodiazepines Benzodiazepines facilitate transmission in the GABAergic system. They have been found to produce anxiolytic, sedative, hypnotic, anticonvulsant, and myorelaxant effects. Due to the risk of tolerance and dependence, as well as side effects including confusion, amnesia, and impaired motor coordination, these drugs are not recommended for the treatment of TMD [168]. ## 5. Summary Based on the literature review, the authors concluded that there is no single, ideal form of pain therapy for TMD. Treatment of TMD should be based on a thorough diagnostic process, including the DC/TMD examination protocol, psychological evaluation, and CBCT imaging. Following the diagnostic process, once a diagnosis is established, a treatment plan can be constructed to address the patient’s complaints.The treatment of temporomandibular dysfunctions requires a thorough diagnostic process, taking into account the etiology of the disorder. Having reviewed the relevant literature, the authors emphasize the need to combine multiple methods. For severe pain, pharmacotherapy may be used, while in other cases, it will be more appropriate to apply a combination of splint therapy and physiotherapy. While waiting for a custom-tailored occlusal splint, the patient can take advantage of behavioural and psychological methods, which should be continued after they have been fitted with the splint, as well as during physiotherapy treatments. Follow-up visits are an essential part of the TMD treatment process. The first follow-up visit should take place after one month of therapy and the next after three months. In the meantime, the patient should keep a diary describing their symptoms, pain levels, sleep quality, and wellbeing upon awakening and at bedtime. These observations, which should be reviewed at the follow-up visit, help build a full picture of the effects of the splint and other treatments, as well as inform the psychological assessment of the patient. An accurate diagnosis of TMD is the foundation of appropriate treatment. The most commonly described treatments include physiotherapy, occlusal splint therapy, and pharmacological treatment tailored to the type of TMD. --- *Source: 1002235-2023-01-31.xml*
# Analysis of Persuasive Design Mechanism Based on Unconscious Calculation **Authors:** Hongtao Zheng; Shuo Li **Journal:** Scientific Programming (2022) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2022/1002517 --- ## Abstract In order to make users’ behavior more standardized, a persuasive design mechanism analysis method based on unconscious calculation is proposed. Taking the concept of persuasion and the goal of the persuasive function as the theoretical basis, user behavior data is obtained through correlation calculation, and the ant colony algorithm is used to classify the behavior data. According to the results of data processing, unconscious behavior and its characteristics are analyzed, and the persuasion mechanism is obtained through unconscious calculation, the persuasion model, and the persuasion model design. The experimental results show that the method in this paper achieves higher acceptance of behavioral persuasion and higher satisfaction among the persuaded, indicating that the method has strong practical applicability. --- ## Body ## 1. Introduction The aim is to guide users’ behavior so that it is more in line with norms, improve users’ motivation and ability to complete the behavior, and enable them to develop good behavior habits so as to provide them with better services. Today, with the increasing popularity of Internet technology, persuasion technology can be used to change the behavior and attitude of users and to guide and persuade users’ behavior [1–3]. At present, the application research of persuasion technology mainly involves the field of health, and its persuasion strategies have also been proposed for that field. Such research is rather one-sided and only takes meeting the needs of users as its goal, which cannot achieve behavior persuasion in the real sense. Therefore, we should induce and persuade users to develop or change their behavior through design intervention so as to achieve a purpose other than the design itself [4–6]. Based on this, how to intervene with the target users, persuade them, and improve users’ execution is a topic that needs to be studied. The application of persuasive technology in various fields needs to be realized through design. As a mode of thinking, persuasive design has been widely used in cross-disciplinary research between design and other disciplines, including education, medical care, health, sports, games, advertising, e-commerce, and other fields. Persuasion is the achievement of psychological and behavioral guidance through nonmandatory means. There are various ways of persuasion, mainly including direct persuasion, impulse persuasion, and retrograde persuasion. Direct persuasion is mainly for the purpose of informing. There is no specific persuasion object and no targeted behavior to persuade. It is only a general persuasion method for understanding things from a cognitive perspective and is therefore not strongly persuasive. Impulse persuasion is a special persuasion method. It has specific persuasion objects and targeted target behaviors, and its final aim is to clearly change the deep-rooted views and opinions of the persuasion objects. Retrograde persuasion refers to stimulating a change of behavior or attitude from the opposite point of view.
Although the above persuasion methods can achieve behavior persuasion to a certain extent, in practical application the acceptance of behavior persuasion is not high, there is a certain gap between the persuasion effect and the expected effect, and the satisfaction of the persuaded with these methods is not high. In view of the problems existing in the above persuasion methods, this paper proposes an analysis method of the persuasion design mechanism based on unconscious computing. Unconsciousness refers to mental activity that goes unnoticed and is not consciously registered. Unconscious thinking refers to the thinking process below the level of consciousness. Unconscious thinking plays a positive role in problem solving. Therefore, in the design of the persuasion design mechanism, unconscious thinking calculation is introduced to further improve the persuasion effect. ## 2. Persuasion Concept and Functional Objective Analysis ### 2.1. Persuasion Concept Persuasion is a concept in psychology, which refers to allowing the persuaded to accept content that is purposeful under noncompulsory circumstances. Persuasive design refers to the use of persuasive psychological methods in design to allow users to change their attitudes toward a product or their behavior in using the product. Its main purpose is to guide users to perform purposeful operations. Based on the theory of persuasive design, a persuasive design model is proposed. This model contains three elements, namely, motivation, ability, and promotion point. #### 2.1.1. Motivation Motivation refers to the user’s internal reasons when performing operations or using behaviors. It can be divided into three categories: fun and pain, hope and fear, and social identification and social rejection. Among them, fun and pain are derived from human instinct, hope and fear are a result of human behavior, and social identification and social rejection are feedback-level content after the behavior is over. #### 2.1.2. Ability Ability refers to the capacity of a user to complete a certain behavior. In the persuasive design theoretical model, the most important principle is simplicity. For example, in product design, the higher the ease of use of the product, the lower the requirements for the user’s ability to complete the use of the product and the higher the user’s sense of pleasure. Therefore, designers should try their best to reduce the demands products place on users’ abilities in interaction design and make products more usable and easy to use. This principle is also applicable in other research fields. #### 2.1.3. Promoting Point The promotion point refers to a clue or metaphor provided to the user so that the user can complete the persuasive operation behavior. The promotion point can be related to motivation, reinforcing the user’s motivational elements. The promotion point can also be related to ability, making a certain behavior achievable within the user’s existing abilities. Finally, the promotion point can also be a reminder that prompts the user to perform certain operation behaviors. In the process of persuasive design, according to the three elements of the persuasive design model, the guidance of user behavior is realized through the control of user motivation, ability, and promotion point. Only by skillfully balancing the relationship between users’ motivation, users’ ability, and the promotion point can an effective persuasion mechanism be designed. ### 2.2.
Persuasive Functional Goals Based on the description of the concept of persuasion, the persuasive design model, and its elements, persuasion technology uses information as the external trigger element of behavior change to enhance the user’s intrinsic motivation and behavioral ability so as to achieve the purpose of persuading users to change their behavior. This process can be abstracted and becomes the basis for the realization of the persuasion mechanism. Based on this functional goal, relevant technology is used to perceive and collect the user’s behavior-state data and the environmental data contained in the context of the target behavior to be persuaded, and to convert them into persuasion information that can be perceived by the user and that generates behavioral ability and behavioral motivation. The purpose of persuading the target behavior can then be achieved by selecting, in a timely way, an appropriate carrier to convey this information to the user.
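To make the three-element model of Section 2.1 and the functional goal above a little more tangible, the following minimal Python sketch treats motivation and ability as scores in [0, 1] and the promotion point as a boolean prompt. The class name, the multiplicative activation rule, and the 0.5 thresholds are illustrative assumptions of this sketch, not something specified in the paper.

```python
from dataclasses import dataclass

@dataclass
class BehaviorContext:
    """Snapshot of a user's state for one candidate target behavior."""
    motivation: float      # 0.0 (none) .. 1.0 (strong)
    ability: float         # 0.0 (behavior is hard) .. 1.0 (behavior is trivial)
    prompt_present: bool   # a promotion point (cue/reminder) is currently shown

def behavior_likely(ctx: BehaviorContext, activation_threshold: float = 0.5) -> bool:
    """Illustrative activation rule: the target behavior is expected only when a
    promotion point is present and the combined motivation/ability score clears a
    threshold; the product form lets strength in one factor offset a deficit in
    the other."""
    if not ctx.prompt_present:
        return False
    return ctx.motivation * ctx.ability >= activation_threshold

def choose_intervention(ctx: BehaviorContext) -> str:
    """Map the (motivation, ability) quadrant to the persuasion lever suggested by
    the model: add a cue, raise ability, or raise motivation."""
    if ctx.motivation >= 0.5 and ctx.ability >= 0.5:
        return "add a timely promotion point (cue or reminder)"
    if ctx.motivation >= 0.5:
        return "simplify the task to raise ability"
    if ctx.ability >= 0.5:
        return "strengthen motivation (hope, fun, social identification)"
    return "reconsider the target behavior: both motivation and ability are low"

if __name__ == "__main__":
    ctx = BehaviorContext(motivation=0.8, ability=0.4, prompt_present=True)
    print(behavior_likely(ctx), "->", choose_intervention(ctx))
```

The quadrant logic mirrors the balancing idea at the end of Section 2.1: when one factor is weak, the design lever targets that factor rather than the prompt.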
## 3. User Behavior Data Processing According to the goal analysis of the persuasive function, persuasion technology takes information as the external trigger element of behavior change. The information here refers specifically to user behavior data. Therefore, before the design of the persuasion mechanism, the user behavior data is first processed in two steps, data mining [7, 8] and data classification [9, 10], which provides the necessary trigger elements for the design of the persuasion mechanism. ### 3.1.
Behavioral Data Mining Assuming that the user behavior data lies in a data area, the data in the entire area is described by the undirected traversal graph $H = (A, B, C)$, where $A$ represents the node set, $B$ represents the link set, and $C$ represents the number of users in the area. Assuming that the data in the regional environment is composed of $K$ regions, represented in set form as $K = \{k_1, k_2, \ldots, k_n\}$, where $n$ represents the number of data regions, the data flow density of the data region [11] can be expressed by formula (1):

$$\rho = 1 - \frac{d_c\,\alpha_c\,(x-1)}{2 + x}, \tag{1}$$

where $\alpha_c$ represents the useful information in the data stream, $d_c$ represents the community to which the data stream belongs, and $x$ represents the node where the data stream is located. If $W_i$ represents the amount of behavior data of a user $i$ in the data area and $G_w$ is the corresponding feature set, where $w$ represents the user behavior feature, then the degree of association between user $i$ and behavior feature $w$ is

$$D_{iw} = \sum_{i=1}^{N} w_i \log_{10}\!\frac{w_i}{\beta_i} + D\,\lambda_i\,\varphi_i, \tag{2}$$

where $\beta_i$ represents the correlation factor, which is related to the amount of data in the data area; $N$ represents the number of users; $\lambda_i$ represents the salient features of user $i$; and $\varphi_i$ represents the insignificant features of user $i$ [12, 13]. Since the persuasion mechanism is designed to persuade users’ behaviors, persuasion is mainly oriented to the salient features of user behavior, and the insignificant features that have little impact can be ignored. Considering the above factors, if $\mu_{il}$ represents the payload length of the salient features of user behavior in the data area environment and $D$ represents the total length, formula (2) is optimized to obtain an improved correlation calculation expression:

$$\mu_{il} = \frac{\sum_{i=1}^{N}\sum_{j=1}^{N} D_i\,\mu_{ij}}{\sum_{i=1}^{N}\sum_{l=1}^{N} D_l\,\mu_{ij} + \varpi_{ij}}, \tag{3}$$

where $D_i$ represents the set of salient features of user $i$, and $\mu_{ij}$ and $\varpi_{ij}$ represent the payload length and total length of the salient features of user behavior, respectively. Combining formula (2) and formula (3), the salient-feature data of user behavior can be obtained, that is, user behavior data mining is achieved. ### 3.2. Classification of Behavioral Data Based on the results of user behavior data mining, and in order to avoid behavior deviations in the persuasion process and improve the persuasion effect, further classification of the behavior data can not only reduce behavior deviations but also improve the efficiency of persuasion. The traditional approach mainly uses the support vector machine (SVM) method to classify data. This method is mainly suitable for static data and has certain limitations for the dynamic data of user behavior [14–16]; therefore, this paper adopts the ant colony algorithm [17–19] to optimize it. The ant colony algorithm is an intelligent bionic algorithm through which the traditional SVM method is optimized and applied to user behavior data classification to improve the accuracy of the persuasion results [20, 21]. The specific operation process is given as follows. Step 1. Initialize the position and pheromone of the ant colony. Determine the initial pheromone size of ant $z$ through the SVM parameter range; the calculation formula is

$$E_z = e^{\,z_i z_j + z_i' z_j'}, \tag{4}$$

where $z_i$ represents the initial pheromone concentration, $z_j$ represents the initial search speed, and $z_i'$ and $z_j'$ both represent the direction guidance vector.
In order to prevent the ant colony from converging prematurely, a fitness function $X_t$ [22] is set and then modified:

$$X_t = \max_{q = 1, 2, \ldots, Q} \rho\,(S_q, S_p), \tag{5}$$

where $S_q$ and $S_p$ both represent genetic operators. Step 2. Ant colony transfer. Select the individual with the maximum pheromone concentration in the ant colony as the target individual, denoted by $Y_{uf}$:

$$Y_{uf} = \begin{cases} Y_{\mathrm{best}}, & y \le y', \\ Y', & \text{otherwise}, \end{cases} \tag{6}$$

where $Y_{\mathrm{best}}$ represents the optimal solution obtained in the iterative process, that is, the maximum value of the pheromone concentration [23, 24]. The moving direction of ant $z$’s position is determined by formula (7):

$$R_z = \sum_{z=1}^{N_p} \theta_z \times Y_{uf}(t), \tag{7}$$

where $\theta_z$ represents the expected moving direction of ant $z$. To perform a local search for the ant in the dominant position in the data field,

$$H_z(A, B, C) = H_{zA}^2 + H_{zB}^2 + H_{zC}^2, \tag{8}$$

where $H_{zA}^2$, $H_{zB}^2$, and $H_{zC}^2$ represent the relevant pheromone of the nodes, links, and users in the data area. Step 3. Pheromone update [25, 26]. After completing Step 1 and Step 2, update the different pheromones. The specific update rule is

$$H_{abc}(z) = \sum_{c=1}^{N_p} z_\theta\,\omega_z \cap \alpha, \tag{9}$$

where $\alpha$ represents the volatilization coefficient of the pheromone. Through the above steps, using the ant colony algorithm to optimize the data classification of the traditional support vector machine method yields more accurate classification results [27, 28] and provides a user behavior data basis for the design of the persuasion mechanism.
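As a deliberately simplified illustration of Steps 1-3, the Python sketch below keeps a pheromone table over a small grid of SVM hyperparameters: each ant samples a cell in proportion to its pheromone, cross-validated accuracy plays the role of the fitness function, and pheromone is evaporated and then deposited according to that fitness. The synthetic data, the parameter grids, the evaporation rate, and the use of scikit-learn's SVC as the "traditional SVM method" are all assumptions made for the example; the paper does not specify these details.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Hypothetical stand-in for the mined salient-feature data of Section 3.1:
# X holds per-user behavior feature vectors, y holds behavior class labels.
X, y = make_classification(n_samples=300, n_features=8, n_informative=5, random_state=0)

# Discrete candidate values for the two SVM hyperparameters the ants search over.
C_grid = [0.1, 1.0, 10.0, 100.0]
gamma_grid = [0.001, 0.01, 0.1, 1.0]

pheromone = np.ones((len(C_grid), len(gamma_grid)))  # Step 1: initial pheromone
rng = np.random.default_rng(0)
evaporation = 0.3            # plays the role of the volatilization coefficient
n_ants, n_iterations = 6, 10
best_score, best_params = -np.inf, None

def fitness(c, g):
    """Cross-validated accuracy stands in for the fitness function X_t."""
    return cross_val_score(SVC(C=c, gamma=g), X, y, cv=3).mean()

for _ in range(n_iterations):
    pheromone *= (1.0 - evaporation)       # evaporation, once per iteration (Step 3)
    for _ in range(n_ants):
        # Step 2: an ant moves to a (C, gamma) cell with probability
        # proportional to that cell's pheromone concentration.
        probs = pheromone.ravel() / pheromone.sum()
        idx = rng.choice(pheromone.size, p=probs)
        i, j = np.unravel_index(idx, pheromone.shape)
        score = fitness(C_grid[i], gamma_grid[j])
        pheromone[i, j] += score           # Step 3: deposit pheromone in proportion to fitness
        if score > best_score:
            best_score, best_params = score, (C_grid[i], gamma_grid[j])

print("best CV accuracy %.3f with C=%s, gamma=%s" % (best_score, *best_params))
```

The same loop structure carries over if the grid is replaced by whatever SVM parameter range is actually available for the deployed classifier.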
## 4. Analysis Method of Persuasive Design Mechanism Based on Unconscious Calculation Through the above analysis, the results of user behavior data processing are obtained; the design of the persuasion mechanism is analyzed in detail in the following. Unconscious behavior and its characteristics are analyzed, and an application case of the persuasion mechanism is given. On this basis, the persuasion design patterns are presented, the persuasion model is designed, and the design of the persuasion mechanism is completed. ### 4.1. Unconscious Calculation Analysis #### 4.1.1.
Analysis of Unconscious Behavior and Its Characteristics Unconscious behavior is instinctive behavior performed without subjective analysis and judgment, such as reflexes and stress responses, and the characteristics of unconscious behavior can be exploited in persuasive design [29, 30]. For example, if the entry and exit links of a security inspection device are designed so that the flow keeps moving continuously, passengers will unconsciously keep moving with it as a stress response; this drives passengers forward so that they pass the security inspection device quickly and leave quickly after the check, thereby improving the efficiency of the security check passage. Through the analysis of people’s unconscious behavior, it can be found that unconscious behavior has the characteristics of universality, richness, and concealment. Almost everyone exhibits unconscious behaviors; they are common in daily life and gradually become integrated into daily habits. Unconscious behaviors are widely used in design. Integrating unconscious behaviors into related designs can bring new design concepts and provide users with a natural user experience. A large number of unconscious behaviors run through people’s increasingly frequent operation of mobile terminals [31, 32]. Introducing unconscious design into the persuasion mechanism design can therefore enhance the user experience. #### 4.1.2. Unconscious Calculation Based on unconscious behavior and its characteristics, unconscious computing has been studied. Unconscious computing has gone through several stages of development and has produced a variety of specific methods. Among these methods, the process dissociation procedure (PDP) remains the best method: it separates conscious retrieval and automatic retrieval in simple recognition tasks by means of inclusion tests and exclusion tests. This paper intends to use the process dissociation procedure to separate the implicit and explicit components of unconscious behavior so as to realize unconscious computing [33, 34]. In order to explore the complex relationship between consciousness and unconsciousness, a 5 (age group: upper primary, third-year junior high school, university, middle-aged, and elderly) × 2 (contribution source: conscious and unconscious) design was adopted. The contributions of consciousness and unconsciousness were calculated through the inclusion and exclusion tests of the process dissociation procedure (PDP). The specific process is as follows. The subjects were divided into five age groups with 23 people in each group. All subjects were in good health and had normal or corrected-to-normal vision. The elderly group was 60–71 years old, the middle-aged group was between 30 and 55 years old, the university group was between 18 and 25 years old, the third-year junior high school group was between 14 and 15 years old, and the average age of the upper primary group was 11 years. In the experiment, 50 specific pictures from a study of the best age of recognition ability were used as experimental materials. The whole experiment was divided into two stages: learning and testing. All subjects were tested separately. Ten of the 50 pictures were randomly selected as learning materials. In the learning phase, the presentation time for each material was 2 seconds.
In the test stage, each learned material was grouped with 4 unlearned materials, and the resulting 10 groups of materials were subjected to an inclusion test and an exclusion test in turn. In the inclusion test, the subjects were asked to identify which of the five pictures had been presented earlier, and if they could not recognize it, to pick the first picture that came to mind. In the exclusion test, the subjects were asked to identify which of the five pictures had been presented earlier, but to choose not that picture but another possible one. According to the results of the inclusion and exclusion tests of the process dissociation procedure (PDP), the conscious and unconscious contributions of each age group to recognizing specific pictures were calculated by using the PDP formula (see Table 1).

Table 1: Conscious and unconscious contributions of each age group.

| Contribution | Upper primary | Third-year junior | University | Middle age | Elderly |
| --- | --- | --- | --- | --- | --- |
| Conscious contribution | 0.569 | 0.625 | 0.584 | 0.387 | 0.273 |
| Unconscious contribution | 0.202 | 0.190 | 0.241 | 0.253 | 0.305 |

The results show that (1) there is a developmental dissociation between conscious and unconscious contributions, and (2) the unconscious contribution of the elderly group was higher than its conscious contribution, but the difference did not reach a significant level. In the other four groups, the conscious contribution was highly significantly greater than the unconscious contribution. Based on the above analysis, this paper uses the process dissociation procedure to separate the implicit and explicit components of unconscious behavior and thereby realizes unconscious computing. ### 4.2. Persuade Design Patterns Based on unconscious computing theory, this paper determines five persuasion behavior components, constructs the persuasion behavior process mechanism, grasps the psychological mechanism of persuasion behavior, analyzes the change mechanism of the five components and their overall impact on users’ attitude and behavior, and takes these as the breakthrough point of persuasion design. The structure of the persuasion behavior components is shown in Figure 1. (1) Clue reminder: in order to increase the user’s sense of immersion, design familiar, clear, and attractive scene themes and report on user behavior in real time. (2) Behavior plan: in order to improve the user’s execution ability, introduce environmental variables to stimulate behavior, design a reasonable plan, and provide a heuristic path [35, 36]. (3) Execution plan: in order to maintain the persistence of behavior, create a clear task process and give positive encouragement and guidance. (4) Social relevance: in order to enhance social recognition and improve behavior motivation, carry out social sharing, self-expression, and peer comparison. (5) Self-management: in order to conduct self-management scientifically and rationally, manage the user behavior execution process based on monitoring, comparison, and evaluation. Figure 1: Persuasion behavior component structure. In actual persuasion, the brain relies on intuitive response, active psychological tendency, or deliberate evaluation to make decisions, depending on whether the scene is familiar or unfamiliar, so the behavior threshold and trigger conditions of persuasion differ. Therefore, according to the levels of motivation and ability, this paper combines them into four quadrants and divides persuasion behavior into three types of modes. These three types of persuasion modes all contain the above five persuasion design components, but each has its own emphasis.
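Before turning to the three persuasion modes (Classes A, B, and C described next), it may help to make the PDP estimates behind Table 1 concrete. The paper does not print the formula it applies; the sketch below uses the standard process-dissociation estimates, with conscious contribution C = I - E and unconscious contribution U = E / (1 - C), and the inclusion/exclusion rates in the example are invented rather than taken from the study.

```python
def pdp_estimates(p_include: float, p_exclude: float) -> tuple[float, float]:
    """Standard process-dissociation estimates: conscious contribution
    C = I - E and unconscious (automatic) contribution U = E / (1 - C),
    where I and E are the probabilities of choosing a studied picture in the
    inclusion and exclusion tests, respectively."""
    conscious = p_include - p_exclude
    unconscious = p_exclude / (1.0 - conscious) if conscious < 1.0 else float("nan")
    return conscious, unconscious

# Illustrative rates only (not the study's raw data): one hypothetical group.
c, u = pdp_estimates(p_include=0.70, p_exclude=0.13)
print(f"conscious contribution = {c:.3f}, unconscious contribution = {u:.3f}")
```

With these invented rates the estimates come out near 0.57 and 0.30, the same order of magnitude as the group-level values reported in Table 1.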
Class A represents repetitive habitual behavior with high motivation and high ability, which refers to already formed behavior. It focuses on the cues and stimuli that create the behavior environment, including explicit objects or scene cues (interface cues) and implicit habits or experience connections (cognitive cues). Class B represents assisted autonomous behavior in which one of the two factors, motivation or ability, is low; it refers to changing and transforming behavior habits. It focuses on activating behavior parameters and reducing obstacles to user behavior with the help of visible and measurable goals and feasible path plans. Class C represents heuristically induced behavior with low motivation and low ability. It refers to creating or using situations shared with the persuasion target, with the help of narrative, metaphor, empathy, and other rhetorical communication means (such as those used in cognitive therapy), designing a behavior route in line with the user’s cognition and gradually inducing the user. ### 4.3. Persuade Model Design The persuasion context in the persuasion model includes intention (the initiator of the intention to change behavior and attitude), event (clarifying the use context of the persuasion technology, the user context, and the technical context), and strategy (information content, form, and dissemination path accurately targeted at the intended users to achieve persuasion). Figure 2 is a schematic diagram of the persuasion model. Figure 2: Schematic diagram of the persuasion model. #### 4.3.1. The Internal Function Stage of the Persuasion Model Combining the behavioral data processing and unconscious calculation described above, the functional phase process inside the persuasion model is obtained. From data information input to persuasion information output, the entire functional stage is divided into three functional modules that affect the persuasion function: data information acquisition, data information transformation, and persuasion information transmission. The data information acquisition module perceives the original data information and passes it into the persuasion product; the data information transformation module then processes the original data information into persuasive information that can be used to improve the user’s behavior motivation and behavior ability; finally, the persuasion information transmission module transmits the converted persuasion information to the user. Figure 3 shows the internal functional phase process of the whole persuasion model. Figure 3: The internal function stage process of the persuasion model. Through the establishment of the functional modules of the above persuasion model, it can be seen that the persuasion model is divided into three functional modules: data information acquisition, data information transformation, and persuasion information transmission; a minimal pipeline sketch of these three modules is given after the design steps in Section 4.3.2. #### 4.3.2. Persuasion Mechanism Design The application of the persuasion mechanism in various fields needs to be realized through design. According to the above analysis, the specific process of persuasion mechanism design is as follows: (1) Determine the target behavior. First, choose a simple and specific target behavior, and develop it step by step into a larger series of target behaviors. (2) Determine the user object. First choose a target user group that is willing, and then gradually expand to those who are unwilling or inclined to resist. (3) Analyze the reasons why users do not adopt the target behavior.
Find out whether the reason preventing the user behavior is a lack of motivation or a lack of ability. If both motivation and ability are lacking, it is necessary to go back to the previous two steps and consider whether the previously determined target behavior and target user are appropriate. (4) Choose a persuasion mechanism or persuasion strategy that meets the application conditions. The choice of persuasion mechanism is mainly considered from three aspects: the target behavior, the target user, and the reasons preventing the behavior from occurring. (5) Investigate application cases of the persuasion mechanism. To determine whether the persuasion mechanism is appropriate, we need to find three cases with similar target behaviors, three cases with similar target users, and three cases using the same persuasion mechanism. (6) Follow successful cases. Drawing on previous studies of application cases of the persuasion mechanism, we can find more successful cases from similar cases to imitate. This imitation is not just a hard copy; the point is to discover the essence of the persuasive effect from the successful cases. (7) Rapidly iterate the prototype design. In persuasive design, the persuasion mechanism must be iterated quickly. As related technologies develop, the persuasion mechanism must also be continuously developed. This is more important than prolonged deliberation about how to persuade with the best possible mechanism. (8) Expand the trial scope to verify the effectiveness of the function. As mentioned earlier, a small and easy-to-achieve goal should be selected when determining the goal. With the success of the small goal, the scale is then expanded. After the effectiveness of the persuasion mechanism is verified, a new target behavior can be established to achieve iterative and gradual development.
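As promised above, the following minimal Python sketch wires the three internal functional modules of the persuasion model of Section 4.3.1 (data information acquisition, data information transformation, and persuasion information transmission) into a pipeline. The step-count sensor, the transformation rule, and the print channel are placeholder assumptions made for illustration; the paper defines the modules, not their concrete content.

```python
from typing import Any, Callable, Dict

def acquire(sensor: Callable[[], Dict[str, Any]]) -> Dict[str, Any]:
    """Data information acquisition: perceive raw behavior and context data."""
    return sensor()

def transform(raw: Dict[str, Any]) -> str:
    """Data information transformation: turn raw data into a persuasion message
    aimed at the user's motivation and ability (here, a trivial rule)."""
    remaining = raw.get("daily_goal", 8000) - raw.get("steps_today", 0)
    if remaining > 0:
        return f"Only {remaining} steps left to reach today's goal - a short walk is enough."
    return "Goal reached - nice work, keep the streak going."

def transmit(message: str, channel: Callable[[str], None]) -> None:
    """Persuasion information transmission: deliver the message to the user."""
    channel(message)

if __name__ == "__main__":
    sensor = lambda: {"steps_today": 6200, "daily_goal": 8000}
    transmit(transform(acquire(sensor)), channel=print)
```

In a real system the sensor and channel would be replaced by whatever acquisition hardware and delivery carrier the persuasion product actually uses, while the transformation rule encodes the chosen persuasion strategy.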
Unconscious computing has developed several times and has formed a variety of specific methods. Among these methods, the processing separation program (PDP) is still the best method. It separates conscious extraction and automatic extraction in simple recognition tasks by including tests and elimination tests. This paper intends to use the process separation program to separate the implicit and explicit components of unconscious behavior so as to realize unconscious computing [33, 34].In order to explore the complex relationship between consciousness and unconsciousness, 5 (age: elderly, middle-aged, college students, junior high school, and junior high school) were adopted × 2 (contribution source: consciousness and unconsciousness). The contribution of consciousness and unconsciousness was calculated through the inclusion and exclusion test of the processing separation program (PDP). The specific process is as follows.The subjects were divided into five age groups with 23 people in each group. All subjects were in good health and had a normal corrected vision. The age of the elderly group was 60–71 years. The age of the middle-aged group was between 30 and 55 years. The subjects in the university group were between 18 and 25 years old. The age of the subjects in the third group of junior middle school is between 14 and 15 years. The average age of the subjects in the high primary group was 11 years.In the experiment, 50 specific pictures in the study of the best age of recognition ability were used as experimental materials. The whole experiment was divided into two stages: learning and testing. All subjects were tested separately. Ten of the 50 pictures were randomly selected as learning materials. In the learning phase, the learning time of each material is 2 seconds. In the test stage, each learned material and 4 unlearned materials are grouped into a group, and then the 10 groups of materials are subjected to inclusion test and exclusion test successively. In the inclusion test, the subjects were told to recognize the materials just presented in the five pictures, and if they cannot recognize the pictures just presented, pick out the first picture they think of. In the exclusion test, the subjects were told to recognize the material just presented in the five pictures, but not this one, but another possible picture.According to the results of the inclusion test and exclusion test in the processing separation program (PDP), the conscious and unconscious contributions of each age group to recognizing specific pictures are calculated by using the formula (see Table1).Table 1 Conscious and unconscious contributions of each age group. High primaryThird juniorUniversityMiddle ageElderlyConsciousness contribution0.5690.6250.5840.3870.273Unconscious contribution0.2020.1900.2410.2530.305The results show that (1) there is a developmental separation between conscious and unconscious contributions; (2) the unconscious contribution of the elderly group was higher than that of the conscious contribution but did not reach a significant level. The other four groups showed that the level of conscious contribution was extremely significantly higher than that of the unconscious contribution.According to the above analysis, this paper uses the processing separation program to separate the implicit and explicit components of unconscious behavior and realizes unconscious computing. ## 4.1.1. 
Analysis of Unconscious Behavior and Its Characteristics Unconscious behavior is an instinctive behavior made without subjective analysis and judgment, such as reflection and stress response, which can be designed persuasively by using the characteristics of unconscious behavior [29, 30]. For example, when passengers enter the security inspection device and leave the security inspection device, setting the two links to move continuously will make passengers unconsciously avoid light due to stress response, so as to achieve the effect of driving passengers, so that passengers can quickly pass the security inspection device. After the security check, leave quickly, thereby improving the efficiency of passengers’ security check passage. Through the analysis of people’s unconscious behavior, it can be found that unconscious behavior has the characteristics of universality, richness, and concealment. Almost everyone has unconscious behaviors in their behaviors. Unconscious behaviors are common in daily life and gradually integrated into their daily habits.Unconscious behaviors are widely used in the design. Integrating unconscious behaviors into related designs can bring new design concepts to design and provide users with a natural user experience. A large number of unconscious behaviors run through people’s increasingly frequent operation of mobile terminals [31, 32]. Introducing unconscious design into the persuasion mechanism design can enhance the user experience. ## 4.1.2. Unconscious Calculation Based on unconscious behavior and its characteristics, unconscious computing has been studied. Unconscious computing has developed several times and has formed a variety of specific methods. Among these methods, the processing separation program (PDP) is still the best method. It separates conscious extraction and automatic extraction in simple recognition tasks by including tests and elimination tests. This paper intends to use the process separation program to separate the implicit and explicit components of unconscious behavior so as to realize unconscious computing [33, 34].In order to explore the complex relationship between consciousness and unconsciousness, 5 (age: elderly, middle-aged, college students, junior high school, and junior high school) were adopted × 2 (contribution source: consciousness and unconsciousness). The contribution of consciousness and unconsciousness was calculated through the inclusion and exclusion test of the processing separation program (PDP). The specific process is as follows.The subjects were divided into five age groups with 23 people in each group. All subjects were in good health and had a normal corrected vision. The age of the elderly group was 60–71 years. The age of the middle-aged group was between 30 and 55 years. The subjects in the university group were between 18 and 25 years old. The age of the subjects in the third group of junior middle school is between 14 and 15 years. The average age of the subjects in the high primary group was 11 years.In the experiment, 50 specific pictures in the study of the best age of recognition ability were used as experimental materials. The whole experiment was divided into two stages: learning and testing. All subjects were tested separately. Ten of the 50 pictures were randomly selected as learning materials. In the learning phase, the learning time of each material is 2 seconds. 
In the test stage, each learned material and 4 unlearned materials are grouped into a group, and then the 10 groups of materials are subjected to inclusion test and exclusion test successively. In the inclusion test, the subjects were told to recognize the materials just presented in the five pictures, and if they cannot recognize the pictures just presented, pick out the first picture they think of. In the exclusion test, the subjects were told to recognize the material just presented in the five pictures, but not this one, but another possible picture.According to the results of the inclusion test and exclusion test in the processing separation program (PDP), the conscious and unconscious contributions of each age group to recognizing specific pictures are calculated by using the formula (see Table1).Table 1 Conscious and unconscious contributions of each age group. High primaryThird juniorUniversityMiddle ageElderlyConsciousness contribution0.5690.6250.5840.3870.273Unconscious contribution0.2020.1900.2410.2530.305The results show that (1) there is a developmental separation between conscious and unconscious contributions; (2) the unconscious contribution of the elderly group was higher than that of the conscious contribution but did not reach a significant level. The other four groups showed that the level of conscious contribution was extremely significantly higher than that of the unconscious contribution.According to the above analysis, this paper uses the processing separation program to separate the implicit and explicit components of unconscious behavior and realizes unconscious computing. ## 4.2. Persuade Design Patterns Based on the unconscious computing theory, this paper determines five persuasion behavior components, constructs the persuasion behavior process mechanism, grasps the persuasion behavior psychological mechanism, analyzes the change mechanism of the five components and their overall impact on users’ attitude and behavior, and takes it as a breakthrough in persuasion design. The structure of the persuasion behavior component is shown in Figure1.(1) Clue reminder: in order to increase the degree of user substitution, design familiar, clear, and attractive scene themes and report on user behavior in real time(2) Behavior plan: in order to improve the user’s execution ability, introduce environmental variables to stimulate behavior, design a reasonable plan, and provide a heuristic path [35, 36](3) Execution plan: in order to maintain the persistence of behavior, create a clear task process and give positive encouragement and guidance(4) Social relevance: in order to enhance social recognition and improve behavior motivation, social sharing, self-expression, and peer comparison are carried out(5) Self-management: in order to conduct self-management scientifically and rationally, the user behavior execution process is managed based on monitoring, comparison and evaluationFigure 1 Persuasion behavior component structure.In actual persuasion, the brain will take intuitive response, active psychological tendency, and mental cooperation evaluation to make decisions according to similar or unfamiliar scenes. The behavior threshold and trigger conditions of persuasion are different. Therefore, according to the size of motivation and ability, this paper combines them into four quadrants and divides persuasion behavior into three types of modes. These three types of persuasion modes all have the above five persuasion design components, but each has its own emphasis. 
Class A represents repetitive habitual behavior with high motivation and ability, which refers to the formed behavior. It focuses on the cues and stimuli of creating a behavior environment, including explicit things or scene cues (interface cues), implicit habits, or experience connections (cognitive cues). Class B represents the assisted autonomous behavior with a low factor in motivation and ability, which refers to changing and transforming behavior habits. It focuses on activating behavior parameters and reducing obstacles to user behavior with the help of visible and measurable goals and feasible path plans. Class C represents the heuristic induced behavior with low motivation and ability. It refers to creating or using the situation shared by some persuasion behavior with the help of narrative, metaphor, empathy, and other rhetorical communication means such as cognitive therapy, design behavior route in line with the user’s cognition, and gradually inducing the user. ## 4.3. Persuade Model Design The persuasion context in the persuasion model includes intention (initiator of the intention to change behavior and attitude), event (clear the use context of persuasion technology, user context, and technical context), and strategy (information content, form, and dissemination path are accurately targeted to target users to achieve persuasion) aspects. Figure2 is a schematic diagram of the persuasion model.Figure 2 Schematic diagram of persuasion model. ### 4.3.1. The Internal Function Stage of the Persuasion Model Combining with the behavioral data processing and unconscious calculations mentioned above, the functional phase process inside the persuasion model is obtained. From data information input to persuasion information output, the entire functional stage is divided into three functional modules that affect the persuasion function: data information acquisition, data information transformation, and persuasion information transmission. The data information acquisition module is used to perceive the original data information and pass this information into the persuasion product, and then the data information conversion module will process the original data information into persuasive information that can be used to improve the user’s behavior motivation and behavior ability. Finally, the persuasion information transmission module transmits the converted persuasion information to the user. Figure3 shows the internal functional phase process of the whole persuasion model.Figure 3 The internal function stage process of the persuasion model.Through the establishment of the functional modules of the above persuasion model, it can be seen that the persuasion model is divided into three functional modules: data information acquisition, data information transformation, and persuasion information transmission. ### 4.3.2. Persuasion Mechanism Design The application of persuasion mechanism in various fields needs to be realized through design. According to the above analysis, the specific process of persuasion mechanism design is as follows:(1) Determine the target behavior. First, choose a simple and specific target behavior, and develop it into a large series of target behaviors step by step in a step-by-step manner.(2) Determine the user object. Must first choose a willing target user group, and then step by step to expand those who are not willing or have the willingness to oppose the boycott.(3) Analyze the reasons why users do not adopt the target behavior. 
## 4.3. Persuade Model Design

The persuasion context in the persuasion model covers three aspects: intention (the initiator of the intention to change behavior and attitude), event (clarifying the use context of the persuasion technology, the user context, and the technical context), and strategy (targeting the information content, form, and dissemination path precisely at the target users to achieve persuasion). Figure 2 is a schematic diagram of the persuasion model.

Figure 2: Schematic diagram of the persuasion model.

### 4.3.1. The Internal Function Stage of the Persuasion Model

Combining the behavioral data processing and unconscious calculation described above yields the functional stage process inside the persuasion model. From data information input to persuasion information output, the whole functional stage is divided into three functional modules: data information acquisition, data information transformation, and persuasion information transmission. The data information acquisition module perceives the original data and passes it into the persuasion product; the data information transformation module then processes the original data into persuasive information that can be used to improve the user's behavioral motivation and ability; finally, the persuasion information transmission module delivers the converted persuasion information to the user. Figure 3 shows the internal functional stage process of the whole persuasion model.

Figure 3: The internal function stage process of the persuasion model.
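The three modules in Figure 3 form an acquisition-transformation-transmission pipeline. The sketch below is a hypothetical illustration of that flow; the function names, data shapes, and message format are assumptions, since the paper specifies only the roles of the modules.

```python
# Hypothetical sketch of the three internal functional modules:
# acquisition -> transformation -> transmission.

from dataclasses import dataclass

@dataclass
class PersuasionMessage:
    text: str            # message shown to the user
    targets: list[str]   # which factor it addresses: "motivation" / "ability"

def acquire(raw_events: list[dict]) -> list[dict]:
    """Data information acquisition: perceive raw behavior/environment data."""
    return [e for e in raw_events if "behavior" in e]

def transform(events: list[dict]) -> list[PersuasionMessage]:
    """Data information transformation: turn raw data into persuasive information."""
    return [PersuasionMessage(text=f"Reminder about: {e['behavior']}",
                              targets=["motivation", "ability"]) for e in events]

def transmit(messages: list[PersuasionMessage]) -> None:
    """Persuasion information transmission: deliver the messages to the user."""
    for m in messages:
        print(m.text)

transmit(transform(acquire([{"behavior": "disorderly parking", "time": "18:05"}])))
```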
### 4.3.2. Persuasion Mechanism Design

The application of a persuasion mechanism in any field has to be realized through design. Based on the above analysis, the persuasion mechanism is designed as follows:

(1) Determine the target behavior. Start with a simple, specific target behavior and expand it step by step into a larger series of target behaviors.
(2) Determine the user object. First choose a willing target user group, and then gradually extend the design to users who are unwilling or actively resistant.
(3) Analyze the reasons why users do not adopt the target behavior. Determine whether the obstacle is a lack of motivation or a lack of ability. If both are lacking, return to the previous two steps and reconsider whether the chosen target behavior and target users are appropriate (a sketch of this check follows the list).
(4) Choose a persuasion mechanism or strategy that fits the application conditions. The choice is made mainly from three aspects: the target behavior, the target user, and the reasons preventing the behavior from occurring.
(5) Investigate application cases of the persuasion mechanism. To judge whether the mechanism is appropriate, find three cases with similar target behaviors, three cases with similar target users, and three cases that use the same persuasion mechanism.
(6) Follow successful cases. From previous studies of application cases, identify further successful cases among similar ones and imitate them. This imitation is not a hard copy; the point is to extract from successful cases the essence of their persuasive effect.
(7) Iterate the prototype design rapidly. In persuasive design, the persuasion mechanism must be iterated quickly and must continue to evolve as related technologies develop; this matters more than prolonged deliberation about the single best mechanism.
(8) Expand the trial scope to verify effectiveness. As mentioned earlier, a small, easily achievable goal should be selected first. Once that small goal succeeds, the scale is expanded; after the effectiveness of the persuasion mechanism is verified, a new target behavior can be established to achieve iterative, gradual development.
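The loop implied by steps (1)-(3) can be sketched as a small selection routine: if the analysis in step (3) finds both motivation and ability lacking, the target behavior and user group from steps (1)-(2) are reconsidered. The candidate lists and the `analyze()` results below are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of the step (1)-(3) check described above.

CANDIDATE_BEHAVIORS = ["park in a marked spot", "keep the vehicle interior clean"]
CANDIDATE_USERS = ["willing shared-EV users", "occasional users"]

def select_target(analyze):
    for behavior in CANDIDATE_BEHAVIORS:                                # step (1)
        for users in CANDIDATE_USERS:                                   # step (2)
            lacks_motivation, lacks_ability = analyze(behavior, users)  # step (3)
            if not (lacks_motivation and lacks_ability):
                return behavior, users                                  # go on to steps (4)-(8)
    raise ValueError("no suitable target behavior / user group found")

print(select_target(lambda b, u: (False, True)))
```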
## 5. Application Case Analysis

To verify the effectiveness of the proposed method, the field of shared electric vehicles is taken as an example to further examine its feasibility.

### 5.1. Scenario Design and Research Purpose

This paper attempts to introduce a persuasion mechanism to address users' bad behavior. Therefore, after a systematic understanding of the persuasion mechanism, it analyzes the use of persuasion technology in the field of shared electric vehicles. Refining the application methods and modes of persuasive technology in the shared electric vehicle service system provides theoretical hypotheses and clearer research goals for further empirical research.

In recent years, the rent-rather-than-own mode of the sharing economy has matured, and shared electric vehicles, which combine this mode with electric mobility, have become a hot spot in the sharing field. Shared electric vehicles not only fill a gap in public transportation but also make daily travel more convenient and efficient and avoid the environmental problems caused by exhaust emissions. In the long run, they offer a good approach to alleviating the energy crisis, air pollution, traffic congestion, and other problems. Even so, shared electric vehicles still meet many obstacles in their early development: vehicle bodies are damaged, the in-vehicle environment deteriorates, vehicles are parked disorderly, and users' nonstandard operation can even cause serious traffic accidents. For shared electric vehicles to develop in a healthy and sustainable way, users' behavior needs to be guided toward the norms, their motivation and ability to complete the desired behavior improved, and good behavior habits formed, so that shared electric vehicles can serve users better. Therefore, in this case analysis, persuasion technology is used to change users' behavior and attitudes and thereby guide their behavior habits.

By investigating users and their behavior when using shared electric vehicles, one can directly identify the bad behaviors that arise during use, analyze the factors influencing these behaviors and attitudes, and accurately characterize users' travel behavior paths and key scenarios so as to find opportunities for behavioral persuasion. At the same time, building on the original persuasive design, the study investigates whether users' behavior is affected or changed and how much users recognize and trust this kind of behavioral persuasion.

### 5.2. Basic Information of Survey Objects

#### 5.2.1. Target User Attributes

Users of shared electric vehicles are the main research objects. Their age, gender, occupation, driving experience, purpose of use, and other information need to be studied to characterize users in the shared electric vehicle field, understand their background factors, and locate their behavior paths more accurately, thereby guiding the construction of key scenes. Table 2 shows the contents collected as basic information about the survey objects.

Table 2: Basic information collection form of survey objects.

| Category | Contents collected |
| --- | --- |
| Basic situation | Age, gender, occupation, driving experience, and ownership of a private car |
| Travel situation | The selected shared travel brand; the reason and purpose for choosing shared travel; shared travel frequency and duration |
| Behavioral awareness | Degree of connection with and perception of shared electric vehicles; the status shared travel holds in users' minds; general feelings and difficulties encountered during use; uncivilized and irregular behaviors users have noticed in themselves |

The first part concerns the basic situation of shared electric vehicle users, collecting the user's age, gender, occupation, driving experience, and whether they own a private car. It quantitatively characterizes the target users in the shared electric vehicle field and provides data for building the user model.

The second part studies the travel situation of shared electric vehicle users, mainly covering the chosen shared travel brand, the reason and purpose for choosing shared travel, and the frequency and duration of shared travel. It serves on the one hand to supplement the user model and prepare the behavior path for subsequent on-site observation of users, and on the other hand to analyze some of the reasons behind users' behavior.

The third part studies the behavioral cognition of shared electric vehicle users, mainly covering users' understanding of and views on shared electric vehicles, the status of shared travel in their minds, overall feelings and difficulties encountered during use, and the uncivilized and irregular behaviors they have noticed in themselves. On the one hand, this part examines users' psychological attitudes so that the implementation of behavioral persuasion in shared electric vehicles can be assessed; on the other hand, it reveals which factors may affect users' behavior and how important those factors are, so that they can be studied in depth in the next stage.
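The three-part collection form in Table 2 can be represented as a simple record type for the survey data. The sketch below is an assumption about how such a record might be structured; the field names paraphrase the table entries and the sample values are invented for illustration.

```python
# Sketch of the Table 2 collection form as a record type (fields are paraphrases).

from dataclasses import dataclass, field

@dataclass
class SurveyRecord:
    # Part 1: basic situation
    age: int
    gender: str
    occupation: str
    driving_experience_years: int
    owns_private_car: bool
    # Part 2: travel situation
    shared_travel_brand: str
    reason_and_purpose: str
    trips_per_month: int
    avg_trip_duration_min: float
    # Part 3: behavioral awareness
    perception_of_shared_ev: str
    difficulties: list[str] = field(default_factory=list)
    self_reported_bad_behaviors: list[str] = field(default_factory=list)

record = SurveyRecord(28, "female", "engineer", 5, False,
                      "brand X", "commuting", 12, 35.0, "convenient")
print(record)
```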
#### 5.2.2. Travel Situation of Users Using Shared Electric Vehicles

The preceding part studies the attributes of the target users; their actual use of shared electric vehicles for travel then needs to be studied and analyzed, which provides direction for the behavioral persuasion design. The travel situation study covers the average number of times users use a car per month and the average duration of each use; see Figures 4 and 5 for details.

Figure 4: Average number of times users use a car per month.
Figure 5: Average duration of each car use.

Based on the above survey results, the effectiveness of the proposed method is verified with respect to acceptance of behavioral persuasion and satisfaction of the persuaded. To highlight the advantages of the proposed method, direct persuasion and impact persuasion are used as comparison methods, and the application effects of the different persuasion methods are analyzed.

### 5.3. Analysis of Experimental Results

#### 5.3.1. Satisfaction of the Persuaded

Taking the satisfaction of the persuaded as the experimental index, the direct persuasion method, the impact persuasion method, and the method in this paper are compared. Ten users were randomly selected, and the comparison was carried out through scoring on the interval [0, 100]; the larger the value, the higher the satisfaction. The results are shown in Table 3.

Table 3: Comparison results of the persuaded users' satisfaction.

| User | Method of this paper | Direct persuasion | Impact persuasion |
| --- | --- | --- | --- |
| 1 | 94.3 | 78.4 | 82.5 |
| 2 | 95.6 | 77.1 | 83.6 |
| 3 | 92.0 | 75.3 | 85.1 |
| 4 | 97.1 | 76.2 | 84.7 |
| 5 | 93.3 | 75.0 | 79.8 |
| 6 | 96.0 | 79.4 | 78.9 |
| 7 | 94.2 | 80.1 | 76.3 |
| 8 | 95.7 | 81.7 | 78.4 |
| 9 | 91.7 | 80.9 | 78.0 |
| 10 | 93.4 | 79.9 | 79.0 |

The data in Table 3 show that, across the scores given by the 10 users, satisfaction with the proposed method is higher than with the direct persuasion and impact persuasion methods. User 4 gave the proposed method its highest score of 97.1, whereas the highest scores for direct persuasion and impact persuasion were 81.7 and 85.1, respectively, both markedly lower. This indicates that users accept the persuasion mechanism designed by the proposed method more readily. Satisfaction also reflects users' willingness to act: the higher the satisfaction, the stronger the users' execution and the better the persuasion effect. The application effect of the proposed method is therefore better.
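As a quick check on the per-user comparison, the mean score of each method can be computed directly from the Table 3 data. The sketch below copies those values verbatim; only the averaging is added.

```python
# Per-user scores copied from Table 3; the means summarize the comparison.

scores = {
    "method of this paper": [94.3, 95.6, 92.0, 97.1, 93.3, 96.0, 94.2, 95.7, 91.7, 93.4],
    "direct persuasion":    [78.4, 77.1, 75.3, 76.2, 75.0, 79.4, 80.1, 81.7, 80.9, 79.9],
    "impact persuasion":    [82.5, 83.6, 85.1, 84.7, 79.8, 78.9, 76.3, 78.4, 78.0, 79.0],
}

for method, vals in scores.items():
    print(f"{method}: mean satisfaction = {sum(vals) / len(vals):.1f}")
# -> roughly 94.3 vs. 78.4 vs. 80.6, consistent with the conclusion in the text.
```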
#### 5.3.2. Acceptance of Behavioral Persuasion

Taking the acceptance of behavioral persuasion as the experimental index, the direct persuasion method, the impact persuasion method, and the method in this paper are compared; the results are shown in Figure 6. Acceptance is expressed numerically on the scale 0.1–1.0; the higher the value, the higher the acceptance.

Figure 6: Comparison results of acceptance of behavioral persuasion.

Figure 6 shows that, as the number of iterations increases, the acceptance of behavioral persuasion declines continuously for all methods, and this decline is more pronounced for the comparison methods than for the method in this paper. At the same time, the acceptance achieved by the method in this paper is significantly greater than that of the direct persuasion and impact persuasion methods, which indicates that the proposed method is more effective.

#### 5.3.3. Persuasion Duration

Taking the persuasion duration as the experimental index, the direct persuasion method, the impact persuasion method, and the method in this paper are compared; the results are shown in Figure 7. The persuasion time is recorded with a clock; the higher the value, the longer the persuasion takes.

Figure 7: Comparison results of persuasion duration.

Figure 7 shows that the persuasion duration of all methods declines continuously as the number of iterations increases. The persuasion duration of the proposed method stays below 1.5 h, whereas the direct persuasion and impact persuasion methods take longer, up to 1.8 h and 1.7 h, respectively. The persuasion efficiency of the proposed method is thus significantly higher than that of the direct persuasion and impact persuasion methods, which again indicates that the proposed method is more effective.
## 6. Conclusion

To improve the acceptance of behavioral persuasion, this paper proposes an analysis method for the persuasion design mechanism based on unconscious calculation, taking the concept of persuasion and the goal of the persuasion function as its theoretical basis. The analysis of the experimental results shows that the proposed method achieves higher acceptance of behavioral persuasion and higher satisfaction of the persuaded, which fully validates its advantages.

---
*Source: 1002517-2022-02-22.xml*
--- ## Abstract In order to make users’ behavior more standardized, a persuasive design mechanism analysis method based on unconscious calculation is proposed. Taking the concept of persuasion and the goal of persuading function as the theoretical basis, the user behavior data is obtained in the form of correlation calculation, and the ant colony algorithm is used to classify the behavior data. According to the results of data processing, analyze unconscious behavior and its characteristics, and obtain the persuasion mechanism through unconscious calculation, persuasion model, and persuasion model design. The experimental results show that the method in this paper has a higher acceptance of behavioral persuasion and higher satisfaction of the persuaded, indicating that the method has strong practical applicability. --- ## Body ## 1. Introduction Guide the user’s behavior to be more in line with the norms, improve the user’s motivation and ability to complete the behavior, and enable them to develop good behavior habits so as to provide users with better services. Today, with the increasing popularity of Internet technology, persuasion technology can be used to change the behavior and attitude of users and to guide and persuade users’ behavior [1–3]. At present, the application research of persuasion technology mainly involves the field of health, and its persuasion strategy is also proposed for the field of health. The research is more one-sided and only takes meeting the needs of users as the research goal, which cannot achieve behavior persuasion in the real sense. Therefore, we should induce and persuade users to develop or change their behavior through design intervention so as to achieve a purpose other than the design itself [4–6]. Based on this, how to intervene with the target users, persuade them, and improve the execution of the users is a topic that needs to be studied.The application of persuasive technology in various fields needs to be realized through design. As a mode of thinking, persuasive design has been widely used in the cross-research of design and other disciplines, including education, medical care, health, sports, games, advertising, e-commerce, and other fields. Persuasion is to achieve psychological and behavioral guidance through nonmandatory means. There are various ways of persuasion, mainly including direct diarrhea persuasion, impact persuasion, and retrograde persuasion. Direct persuasion is mainly for the purpose of informing. There is no specific persuasion object and no targeted behavior to persuade. It is only a popular persuasion method to understand things from the perspective of cognition, which is not persuasive. Impulse persuasion is a special persuasion method. It has specific persuasion objects and targeted target behaviors. Finally, it is necessary to clearly change the deep-rooted views and opinions of the persuasion objects. Retrograde persuasion refers to stimulating the change of behavior or attitude from the opposite point of view. Although the above persuasion methods can achieve behavior persuasion to a certain extent, in practical application, the acceptance of behavior persuasion is not high, there is a certain gap between the persuasion effect and the expected effect, and the persuader’s satisfaction with the method is not high.In view of the problems existing in the above persuasion methods, this paper proposes an analysis method of persuasion design mechanism based on unconscious computing. 
Unconsciousness refers to a kind of consciousness that is unnoticed and unconscious. Unconscious thinking refers to the thinking process below the level of consciousness. Unconscious thinking plays a positive role in problem solving. Therefore, in the design of persuasion design mechanism, unconscious thinking calculation is introduced to further improve the persuasion effect. ## 2. Persuasion Concept and Functional Objective Analysis ### 2.1. Persuasion Concept Persuasion is a concept in psychology, which refers to allowing the persuaded to accept content that is purposeful under noncompulsory circumstances. Persuasive design refers to the use of persuasive psychological methods in the design to allow users to change their attitudes toward the product or their behavior in using the product. Its main purpose is to guide users to perform purposeful operations. Based on the theory of persuasive design, a persuasive design model is proposed. This model contains three elements, namely, motivation, ability, and motivation point. #### 2.1.1. Motivation Motivation refers to the user’s internal reasons when performing operations or using behaviors. It can be divided into three categories: fun and pain, hope and fear, and social identification and social rejection. Among them, fun and pain are derived from human instinct, hope and fear are a result of human behavior, and social identification and social rejection are feedback level content after the behavior is over. #### 2.1.2. Ability Capability refers to the ability of a user to complete a certain behavior. In the persuasive design theoretical model, the most important principle is simplicity. For example, in product design, the higher the ease of use of the product, the lower the requirements for the user’s ability to complete the use of the product and the higher the user’s sense of pleasure. Therefore, designers should try their best to reduce the requirements of products to users’ ability in interactive design and make products more useable and easy to use. This principle is also applicable in other research fields. #### 2.1.3. Promoting Point The promotion point refers to the clue provided to the user or a metaphor so that the user can complete the persuasive operation behavior. The promotion point can be related to motivation and promotion of the motivation elements of users. The promotion point can also be related to the ability to complete a certain behavior under the existing abilities of the user. Finally, the promotion point can also be a reminder behavior point to remind users of some operation behaviors.In the process of persuasive design, according to the three elements of the persuasive design model, the guidance of user behavior is realized through the control of user motivation, ability and promotion point. Only skillfully balancing the relationship between users’ motivation, users’ ability, and promotion point can design an effective persuasion mechanism. ### 2.2. Persuasive Functional Goals Based on the description of the concept of persuasion, the persuasive design model, and its elements, the persuasion technology uses information as the external trigger element of behavior change to enhance the user’s intrinsic motivation and behavior ability so as to achieve the purpose of persuading users to change their behavior. This process can be abstracted and become the basis for the realization of the persuasion mechanism. 
Based on this functional goal, through relevant technology to perceive, collect user behavior state data and environmental data included in the user persuasion target behavior context, and convert it into persuasion information that can be perceived by the user and generate behavioral ability and behavioral motivation. The purpose of persuading the target behavior can be achieved by timely selecting the appropriate carrier to convey information to user. ## 2.1. Persuasion Concept Persuasion is a concept in psychology, which refers to allowing the persuaded to accept content that is purposeful under noncompulsory circumstances. Persuasive design refers to the use of persuasive psychological methods in the design to allow users to change their attitudes toward the product or their behavior in using the product. Its main purpose is to guide users to perform purposeful operations. Based on the theory of persuasive design, a persuasive design model is proposed. This model contains three elements, namely, motivation, ability, and motivation point. ### 2.1.1. Motivation Motivation refers to the user’s internal reasons when performing operations or using behaviors. It can be divided into three categories: fun and pain, hope and fear, and social identification and social rejection. Among them, fun and pain are derived from human instinct, hope and fear are a result of human behavior, and social identification and social rejection are feedback level content after the behavior is over. ### 2.1.2. Ability Capability refers to the ability of a user to complete a certain behavior. In the persuasive design theoretical model, the most important principle is simplicity. For example, in product design, the higher the ease of use of the product, the lower the requirements for the user’s ability to complete the use of the product and the higher the user’s sense of pleasure. Therefore, designers should try their best to reduce the requirements of products to users’ ability in interactive design and make products more useable and easy to use. This principle is also applicable in other research fields. ### 2.1.3. Promoting Point The promotion point refers to the clue provided to the user or a metaphor so that the user can complete the persuasive operation behavior. The promotion point can be related to motivation and promotion of the motivation elements of users. The promotion point can also be related to the ability to complete a certain behavior under the existing abilities of the user. Finally, the promotion point can also be a reminder behavior point to remind users of some operation behaviors.In the process of persuasive design, according to the three elements of the persuasive design model, the guidance of user behavior is realized through the control of user motivation, ability and promotion point. Only skillfully balancing the relationship between users’ motivation, users’ ability, and promotion point can design an effective persuasion mechanism. ## 2.1.1. Motivation Motivation refers to the user’s internal reasons when performing operations or using behaviors. It can be divided into three categories: fun and pain, hope and fear, and social identification and social rejection. Among them, fun and pain are derived from human instinct, hope and fear are a result of human behavior, and social identification and social rejection are feedback level content after the behavior is over. ## 2.1.2. Ability Capability refers to the ability of a user to complete a certain behavior. 
In the persuasive design theoretical model, the most important principle is simplicity. For example, in product design, the higher the ease of use of the product, the lower the requirements for the user’s ability to complete the use of the product and the higher the user’s sense of pleasure. Therefore, designers should try their best to reduce the requirements of products to users’ ability in interactive design and make products more useable and easy to use. This principle is also applicable in other research fields. ## 2.1.3. Promoting Point The promotion point refers to the clue provided to the user or a metaphor so that the user can complete the persuasive operation behavior. The promotion point can be related to motivation and promotion of the motivation elements of users. The promotion point can also be related to the ability to complete a certain behavior under the existing abilities of the user. Finally, the promotion point can also be a reminder behavior point to remind users of some operation behaviors.In the process of persuasive design, according to the three elements of the persuasive design model, the guidance of user behavior is realized through the control of user motivation, ability and promotion point. Only skillfully balancing the relationship between users’ motivation, users’ ability, and promotion point can design an effective persuasion mechanism. ## 2.2. Persuasive Functional Goals Based on the description of the concept of persuasion, the persuasive design model, and its elements, the persuasion technology uses information as the external trigger element of behavior change to enhance the user’s intrinsic motivation and behavior ability so as to achieve the purpose of persuading users to change their behavior. This process can be abstracted and become the basis for the realization of the persuasion mechanism. Based on this functional goal, through relevant technology to perceive, collect user behavior state data and environmental data included in the user persuasion target behavior context, and convert it into persuasion information that can be perceived by the user and generate behavioral ability and behavioral motivation. The purpose of persuading the target behavior can be achieved by timely selecting the appropriate carrier to convey information to user. ## 3. User Behavior Data Processing According to the goal analysis of persuasion function, persuasion technology takes information as the external trigger element of behavior change. The information here specifically refers to user behavior data. Therefore, before the design of the persuasion mechanism, first process the user behavior data and obtain the user behavior data processing results through two steps of data mining [7, 8] and data classification [9, 10], and provide the necessary trigger elements for the design of persuasion mechanism. ### 3.1. Behavioral Data Mining Assuming that the user behavior data is in a data area, the data in the entire area is described through the undirected traversal graphH=A,B,C, where A represents the node set, B represents the link set, and C represents the number of users in the area. 
Assuming that the data in the regional environment is composed of K regions, it is represented by a set form, specifically K=k1,k2,…,kn, where n represents the number of data regions, and then the data flow density of the data region [11] can be expressed by formula (1):(1)ρ=1−dcαcx−12+x,whereαc represents the useful information in the data stream; dc represents the community to which the data stream belongs; x represents the node where the data stream is located.IfWi represents the amount of behavior data of a user i in the data area and Gw is the corresponding feature set, where w represents the user behavior feature, then the expression for the degree of association between user i and behavior feature w is(2)Diw=∑i=1Nwilog10wiβi+Dλiφi,whereβi represents the correlation factor, which is related to the amount of data in the data area; N represents the number of users; λi represents the salient features of user i; φi represents the insignificant features of user i [12, 13].Since the persuasion mechanism is designed to persuade users’ behaviors, persuasion is mainly oriented to the salient features of user behaviors, and the insignificant features that have a little impact can be ignored. Considering the above factors, ifμil represents the payload length of the salient features of user behavior in the data area environment and D represents the total length, formula (2) is optimized to obtain an improved correlation calculation expression:(3)μil=∑i=1N∑j=1NDiμij∑i=1N∑l=1NDlμij+ϖij,whereDi represents the set of salient features of user i; μij and ϖij represent the payload length and total length of salient features of user behavior, respectively. Combining formula (2) and formula (3), we can obtain the data of salient features of user behavior, that is, to achieve user behavior data mining. ### 3.2. Classification of Behavioral Data Based on the results of user behavior data mining, in order to avoid behavior deviations in the persuasion process and improve the persuasion effect, further classification of behavior data can not only reduce behavior deviations but also improve the efficiency of persuasion. The traditional method mainly uses the support vector machine (SVM) method to classify data. This method is mainly suitable for static data. There are certain limitations to the dynamic data of user behavior data [14–16]; therefore, this paper adopts the ant colony algorithm [17–19] to optimize it.Ant colony algorithm is an intelligent bionic algorithm through which the traditional SVM method is optimized and applied to user behavior data classification to achieve the purpose of improving the accuracy of persuasion results [20, 21]. The specific operation process is given as follows.Step 1. Initialize the position and pheromone of the ant colony. Determine the initial pheromone size of antz through the SVM parameter range; the calculation formula is(4)Ez=ezizj+zi′zj′, wherezi represents the initial pheromone concentration; zj represents the initial search speed; zi′ and zj′ both represent the direction guidance vector. In order to prevent the ant colony from accelerating the convergence, a fitness functionXt [22] is set, and the fitness function Xt is modified; then(5)Xt=maxq=1,2,…,QρSq,Sp, whereSq and Sp both represent genetic operators.Step 2. Ant colony transfer. 
Select the maximum pheromone concentration in the individual ant colony as the target individual, denoted byYuf.(6)Yuf=Ybest,y≤y′,Y′,otherwise, whereYbest represents the optimal solution obtained in the iterative process, that is, the maximum value of the pheromone concentration [23, 24]. Determine the moving direction of antz ’s position by formula (7):(7)Rz=∑z=1Npθz×Yuft, whereθz represents the expected moving direction of ant z. To perform a local search for the ant in the dominant position in the data field, there are(8)HzA,B,C=HzA2+HzB2+HzC2, whereHzA2, HzB2, and HzC2 represent the relevant pheromone of nodes, links, and users in the data area.Step 3. Pheromone update [25, 26]. After completing Step 1 and Step 2, update different pheromones. The specific update rules are as follows:(9)Habcz=∑c=1Npzθωz∩α, whereα represents the volatilization coefficient of the pheromone. Through the above steps, it can be seen that the use of the ant colony algorithm to optimize the data classification effect of the traditional support vector machine method can obtain more accurate data classification results [27, 28] and provide a user behavior data basis for the persuasion mechanism design. ## 3.1. Behavioral Data Mining Assuming that the user behavior data is in a data area, the data in the entire area is described through the undirected traversal graphH=A,B,C, where A represents the node set, B represents the link set, and C represents the number of users in the area. Assuming that the data in the regional environment is composed of K regions, it is represented by a set form, specifically K=k1,k2,…,kn, where n represents the number of data regions, and then the data flow density of the data region [11] can be expressed by formula (1):(1)ρ=1−dcαcx−12+x,whereαc represents the useful information in the data stream; dc represents the community to which the data stream belongs; x represents the node where the data stream is located.IfWi represents the amount of behavior data of a user i in the data area and Gw is the corresponding feature set, where w represents the user behavior feature, then the expression for the degree of association between user i and behavior feature w is(2)Diw=∑i=1Nwilog10wiβi+Dλiφi,whereβi represents the correlation factor, which is related to the amount of data in the data area; N represents the number of users; λi represents the salient features of user i; φi represents the insignificant features of user i [12, 13].Since the persuasion mechanism is designed to persuade users’ behaviors, persuasion is mainly oriented to the salient features of user behaviors, and the insignificant features that have a little impact can be ignored. Considering the above factors, ifμil represents the payload length of the salient features of user behavior in the data area environment and D represents the total length, formula (2) is optimized to obtain an improved correlation calculation expression:(3)μil=∑i=1N∑j=1NDiμij∑i=1N∑l=1NDlμij+ϖij,whereDi represents the set of salient features of user i; μij and ϖij represent the payload length and total length of salient features of user behavior, respectively. Combining formula (2) and formula (3), we can obtain the data of salient features of user behavior, that is, to achieve user behavior data mining. ## 3.2. 
Classification of Behavioral Data Based on the results of user behavior data mining, in order to avoid behavior deviations in the persuasion process and improve the persuasion effect, further classification of behavior data can not only reduce behavior deviations but also improve the efficiency of persuasion. The traditional method mainly uses the support vector machine (SVM) method to classify data. This method is mainly suitable for static data. There are certain limitations to the dynamic data of user behavior data [14–16]; therefore, this paper adopts the ant colony algorithm [17–19] to optimize it.Ant colony algorithm is an intelligent bionic algorithm through which the traditional SVM method is optimized and applied to user behavior data classification to achieve the purpose of improving the accuracy of persuasion results [20, 21]. The specific operation process is given as follows.Step 1. Initialize the position and pheromone of the ant colony. Determine the initial pheromone size of antz through the SVM parameter range; the calculation formula is(4)Ez=ezizj+zi′zj′, wherezi represents the initial pheromone concentration; zj represents the initial search speed; zi′ and zj′ both represent the direction guidance vector. In order to prevent the ant colony from accelerating the convergence, a fitness functionXt [22] is set, and the fitness function Xt is modified; then(5)Xt=maxq=1,2,…,QρSq,Sp, whereSq and Sp both represent genetic operators.Step 2. Ant colony transfer. Select the maximum pheromone concentration in the individual ant colony as the target individual, denoted byYuf.(6)Yuf=Ybest,y≤y′,Y′,otherwise, whereYbest represents the optimal solution obtained in the iterative process, that is, the maximum value of the pheromone concentration [23, 24]. Determine the moving direction of antz ’s position by formula (7):(7)Rz=∑z=1Npθz×Yuft, whereθz represents the expected moving direction of ant z. To perform a local search for the ant in the dominant position in the data field, there are(8)HzA,B,C=HzA2+HzB2+HzC2, whereHzA2, HzB2, and HzC2 represent the relevant pheromone of nodes, links, and users in the data area.Step 3. Pheromone update [25, 26]. After completing Step 1 and Step 2, update different pheromones. The specific update rules are as follows:(9)Habcz=∑c=1Npzθωz∩α, whereα represents the volatilization coefficient of the pheromone. Through the above steps, it can be seen that the use of the ant colony algorithm to optimize the data classification effect of the traditional support vector machine method can obtain more accurate data classification results [27, 28] and provide a user behavior data basis for the persuasion mechanism design. ## 4. Analysis Method of Persuasive Design Mechanism Based on Unconscious Calculation Through the above analysis, the results of user behavior data processing are obtained, and the design and design of the persuasion mechanism will be analyzed in detail in the following. Analyze unconscious behavior and its characteristics, and give the application case of the persuasion mechanism. On this basis, give the persuasion design model, design the persuasion model, and complete the design of the persuasion mechanism. ### 4.1. Unconscious Calculation Analysis #### 4.1.1. Analysis of Unconscious Behavior and Its Characteristics Unconscious behavior is an instinctive behavior made without subjective analysis and judgment, such as reflection and stress response, which can be designed persuasively by using the characteristics of unconscious behavior [29, 30]. 
## 4. Analysis Method of Persuasive Design Mechanism Based on Unconscious Calculation

Through the above analysis, the results of user behavior data processing are obtained; the design of the persuasion mechanism is analyzed in detail below. Unconscious behavior and its characteristics are analyzed, and an application case of the persuasion mechanism is given. On this basis, the persuasion design patterns and the persuasion model are presented to complete the design of the persuasion mechanism.

### 4.1. Unconscious Calculation Analysis

#### 4.1.1. Analysis of Unconscious Behavior and Its Characteristics

Unconscious behavior is instinctive behavior performed without subjective analysis and judgment, such as reflexes and stress responses, and its characteristics can be exploited in persuasive design [29, 30]. For example, if the entry and exit links of a security inspection device are set to move continuously, passengers will unconsciously avoid the moving light because of the stress response; this drives them to pass through the device quickly and to leave promptly after the check, thereby improving the efficiency of the security check passage.

Analysis of people's unconscious behavior shows that it is universal, rich, and concealed. Almost everyone exhibits unconscious behaviors; they are common in daily life and gradually become part of daily habits. Unconscious behaviors are also widely used in design: integrating them into related designs can bring new design concepts and provide users with a natural user experience. A large number of unconscious behaviors run through people's increasingly frequent operation of mobile terminals [31, 32]. Introducing unconscious design into the persuasion mechanism design can therefore enhance the user experience.

#### 4.1.2. Unconscious Calculation

Unconscious computing has been studied on the basis of unconscious behavior and its characteristics. It has developed through several stages and has produced a variety of specific methods; among them, the process dissociation procedure (PDP) remains the best available method. PDP separates conscious retrieval from automatic retrieval in simple recognition tasks by means of inclusion and exclusion tests. This paper uses the process dissociation procedure to separate the implicit and explicit components of unconscious behavior and thus realize unconscious computing [33, 34].

To explore the relationship between consciousness and unconsciousness, a 5 (age group: elderly, middle-aged, university students, third-year junior middle school students, and upper primary school students) × 2 (contribution source: conscious and unconscious) design was adopted. The conscious and unconscious contributions were calculated through the inclusion and exclusion tests of the PDP. The specific procedure is as follows.

The subjects were divided into five age groups with 23 people in each group. All subjects were in good health and had normal or corrected-to-normal vision. The elderly group was aged 60–71 years, the middle-aged group 30–55 years, the university group 18–25 years, and the third-year junior middle school group 14–15 years; the average age of the upper primary group was 11 years.

Fifty specific pictures from a study of the optimal age of recognition ability were used as experimental materials. The experiment comprised a learning stage and a test stage, and all subjects were tested individually. Ten of the 50 pictures were randomly selected as learning materials; in the learning stage, each material was presented for 2 seconds. In the test stage, each learned material was grouped with 4 unlearned materials, and the 10 groups of materials were then subjected to the inclusion test and the exclusion test in turn. In the inclusion test, the subjects were told to pick out, from the five pictures, the one that had just been presented and, if they could not recognize it, to pick the first picture that came to mind.
In the exclusion test, the subjects were told to identify the picture that had just been presented and then to choose not that one but another possible picture.

According to the results of the inclusion and exclusion tests in the process dissociation procedure (PDP), the conscious and unconscious contributions of each age group to recognizing specific pictures were calculated using the PDP formula (see Table 1).

Table 1: Conscious and unconscious contributions of each age group.

| | High primary | Third junior | University | Middle age | Elderly |
|---|---|---|---|---|---|
| Consciousness contribution | 0.569 | 0.625 | 0.584 | 0.387 | 0.273 |
| Unconscious contribution | 0.202 | 0.190 | 0.241 | 0.253 | 0.305 |

The results show that (1) there is a developmental dissociation between conscious and unconscious contributions and (2) in the elderly group the unconscious contribution was higher than the conscious contribution, although the difference did not reach a significant level, whereas in the other four groups the conscious contribution was very significantly higher than the unconscious contribution.

According to the above analysis, this paper uses the process dissociation procedure to separate the implicit and explicit components of unconscious behavior and thereby realizes unconscious computing.
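The text refers to "the formula" of the PDP without stating it explicitly. Assuming the study follows the standard process-dissociation estimates (the paper does not reproduce its exact formula, so this is offered only as the likely referenced computation), the conscious contribution $C$ and the unconscious (automatic) contribution $U$ for each group are obtained from the inclusion and exclusion hit rates as

$$C = P_{\text{inclusion}} - P_{\text{exclusion}}, \qquad U = \frac{P_{\text{exclusion}}}{1 - C},$$

where $P_{\text{inclusion}}$ and $P_{\text{exclusion}}$ denote the probability of selecting a previously learned picture in the inclusion and exclusion tests, respectively. Read this way, the values in Table 1 are per-group estimates of $C$ and $U$.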
### 4.2. Persuasion Design Patterns

Based on unconscious computing theory, this paper determines five persuasion behavior components, constructs the persuasion behavior process mechanism, grasps the psychological mechanism of persuasion behavior, and analyzes how the five components change and how they jointly affect users' attitudes and behavior, taking this as the breakthrough point of persuasion design. The structure of the persuasion behavior components is shown in Figure 1.

(1) Clue reminder: to increase the degree of user substitution, design familiar, clear, and attractive scene themes and report on user behavior in real time.
(2) Behavior plan: to improve the user's ability to execute, introduce environmental variables that stimulate behavior, design a reasonable plan, and provide a heuristic path [35, 36].
(3) Execution plan: to maintain the persistence of behavior, create a clear task process and give positive encouragement and guidance.
(4) Social relevance: to enhance social recognition and improve behavior motivation, support social sharing, self-expression, and peer comparison.
(5) Self-management: to support scientific and rational self-management, manage the execution of user behavior through monitoring, comparison, and evaluation.

Figure 1: Persuasion behavior component structure.

In actual persuasion, the brain makes decisions through intuitive responses, active psychological tendencies, or deliberate cooperative evaluation, depending on whether the scene is familiar or unfamiliar, so the behavior threshold and trigger conditions of persuasion differ. Therefore, according to the levels of motivation and ability, this paper combines them into four quadrants and divides persuasion behavior into three types of modes. All three modes contain the five persuasion design components above, but each has its own emphasis. Class A represents repetitive habitual behavior with high motivation and high ability, that is, behavior that has already formed; it focuses on the cues and stimuli that create the behavior environment, including explicit object or scene cues (interface cues) and implicit habit or experience connections (cognitive cues). Class B represents assisted autonomous behavior in which one of the two factors, motivation or ability, is low; it refers to changing and transforming behavior habits and focuses on activating behavior parameters and reducing obstacles to user behavior with the help of visible, measurable goals and feasible path plans. Class C represents heuristic induced behavior with low motivation and low ability; it creates or uses situations shared with the persuasion behavior and, with the help of narrative, metaphor, empathy, and other rhetorical communication means such as cognitive therapy, designs a behavior route in line with the user's cognition and gradually induces the user.

### 4.3. Persuasion Model Design

The persuasion context in the persuasion model covers three aspects: intention (the initiator of the intention to change behavior and attitude), event (clarifying the use context of the persuasion technology, the user context, and the technical context), and strategy (information content, form, and dissemination path precisely targeted to the intended users to achieve persuasion). Figure 2 is a schematic diagram of the persuasion model.

Figure 2: Schematic diagram of the persuasion model.

#### 4.3.1. The Internal Function Stage of the Persuasion Model

Combining the behavioral data processing and unconscious calculation described above, the internal functional stages of the persuasion model are obtained. From data information input to persuasion information output, the whole functional stage is divided into three functional modules that affect persuasion: data information acquisition, data information transformation, and persuasion information transmission. The data information acquisition module perceives the original data information and passes it into the persuasion product; the data information transformation module then processes the original data into persuasive information that can improve the user's behavior motivation and behavior ability; finally, the persuasion information transmission module delivers the converted persuasion information to the user. Figure 3 shows the internal functional stage process of the whole persuasion model.

Figure 3: The internal function stage process of the persuasion model.

Through the establishment of these functional modules, the persuasion model is thus organized into data information acquisition, data information transformation, and persuasion information transmission. A minimal pipeline sketch of these three modules is given below.
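The sketch below chains the three modules of Figure 3 into a single pipeline, assuming a simple event-list input; the function names, the `PersuasionMessage` fields, and the example events are illustrative assumptions rather than the paper's actual interfaces.

```python
# Illustrative three-module pipeline: acquisition -> transformation -> transmission.
from dataclasses import dataclass

@dataclass
class PersuasionMessage:
    cue: str        # clue reminder shown to the user
    plan: str       # suggested behavior plan
    feedback: str   # encouragement / self-management feedback

def acquire_data(raw_events: list[dict]) -> list[dict]:
    """Data information acquisition: perceive raw behavior events."""
    return [e for e in raw_events if "action" in e]

def transform_data(events: list[dict]) -> PersuasionMessage:
    """Data information transformation: turn events into persuasive content."""
    n_bad = sum(1 for e in events if e.get("compliant") is False)
    return PersuasionMessage(
        cue=f"{n_bad} non-compliant action(s) detected this week",
        plan="Park inside the marked area before ending the trip",
        feedback="Good progress - keep it up to earn credit points",
    )

def transmit(message: PersuasionMessage) -> None:
    """Persuasion information transmission: deliver the message to the user."""
    print(message.cue, "|", message.plan, "|", message.feedback)

transmit(transform_data(acquire_data([
    {"action": "end_trip", "compliant": False},
    {"action": "end_trip", "compliant": True},
])))
```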
#### 4.3.2. Persuasion Mechanism Design

The application of the persuasion mechanism in various fields must be realized through design. According to the above analysis, the specific process of persuasion mechanism design is as follows; a small sketch of how steps (3) and (4) map motivation and ability onto the persuasion modes is given after the list.

(1) Determine the target behavior. First choose a simple, specific target behavior and then gradually develop it into a larger series of target behaviors.
(2) Determine the user object. First choose a target user group that is willing to participate, and then gradually expand to those who are unwilling or inclined to resist.
(3) Analyze the reasons why users do not adopt the target behavior. Find out whether the behavior is being prevented by a lack of motivation or a lack of ability. If both motivation and ability are lacking, return to the previous two steps and reconsider whether the chosen target behavior and target users are appropriate.
(4) Choose a persuasion mechanism or persuasion strategy that fits the application conditions. The choice is made mainly from three aspects: the target behavior, the target user, and the reasons preventing the behavior from occurring.
(5) Investigate application cases of the persuasion mechanism. To judge whether the mechanism is appropriate, find three cases with similar target behaviors, three cases with similar target users, and three cases using the same persuasion mechanism.
(6) Follow successful cases. Drawing on previous studies of persuasion mechanism applications, find more successful cases to imitate. This imitation is not a rigid copy but a way of discovering the essence of the persuasive effect in successful cases.
(7) Iterate the prototype design rapidly. In persuasive design, the persuasion mechanism must be iterated quickly and developed continuously as related technologies evolve; this matters more than prolonged deliberation about the single best mechanism.
(8) Expand the trial scope to verify effectiveness. As mentioned earlier, a small, easily achievable goal should be selected first; once it succeeds, the scale is expanded. After the effectiveness of the persuasion mechanism is verified, a new target behavior can be established to achieve iterative, gradual development.
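To make steps (3) and (4) concrete, the sketch below maps a user's motivation and ability onto the three persuasion modes of Section 4.2. The normalized score scale and the 0.5 threshold are illustrative assumptions, not values taken from the paper.

```python
# Illustrative mapping from the motivation/ability quadrants to persuasion modes A/B/C.
def persuasion_mode(motivation: float, ability: float, threshold: float = 0.5) -> str:
    """Return the persuasion mode suggested by the motivation/ability quadrant."""
    if motivation >= threshold and ability >= threshold:
        return "A: habitual behavior - reinforce interface and cognitive cues"
    if motivation >= threshold or ability >= threshold:
        return "B: assisted autonomous behavior - set measurable goals, remove obstacles"
    return "C: heuristic induced behavior - use narrative, metaphor, step-by-step induction"

print(persuasion_mode(0.8, 0.7))   # both high          -> mode A
print(persuasion_mode(0.3, 0.8))   # one factor low     -> mode B
print(persuasion_mode(0.2, 0.1))   # both low           -> mode C
```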
## 5. Application Case Analysis

To verify the effectiveness of the proposed method, the field of shared electric vehicles is taken as an example to further examine its feasibility.

### 5.1. Scenario Design and Research Purpose

This paper attempts to introduce a persuasion mechanism to address users' bad behavior. Therefore, after a systematic review of the persuasion mechanism, the application of persuasion technology in the field of shared electric vehicles is analyzed. Refining the application methods and modes of persuasive technology in the shared electric vehicle service system provides theoretical hypotheses and clearer research goals for further empirical research.

In recent years, the rent-for-sale mode of the sharing economy has matured, and shared electric vehicles, which combine shared mobility with electric vehicles, have become a hot spot in the sharing field. Shared electric vehicles not only fill a gap in public transportation but also make daily travel more convenient and efficient while avoiding the environmental problems caused by exhaust emissions. In the long run, they offer a good scheme for alleviating the energy crisis, air pollution, traffic congestion, and other problems.
Even though shared electric vehicles have many advantages, they still encounter many obstacles in their early development. The main problems are damage to the vehicle body, deterioration of the in-vehicle environment, disorderly parking, and even serious traffic accidents caused by users' non-standard operation. For shared electric vehicles to develop in a healthy and sustainable way, it is necessary to guide users toward more standardized behavior, improve their motivation and ability to complete the desired behavior, and help them form good behavior habits so that shared electric vehicles can serve users better. Therefore, in this case analysis, persuasion technology is used to change users' behavior and attitudes and thereby guide their behavior habits.

By investigating users and their behaviors when using shared electric vehicles, the users' bad behavior problems can be observed directly and the factors influencing their bad behaviors and attitudes can be analyzed. The user's travel behavior path and key scenarios are analyzed precisely to identify opportunities for behavior persuasion. At the same time, on the basis of the original persuasive design, it is investigated whether the user's behavior is affected or changed and how much the user recognizes and trusts such behavior persuasion.

### 5.2. Basic Information of Survey Objects

#### 5.2.1. Target User Attributes

Users of shared electric vehicles are the main research objects. Their age, gender, occupation, driving experience, purpose, and other information are studied to analyze the attributes of users in the shared electric vehicle field, understand their background factors, and locate their behavior paths more accurately, thereby guiding the establishment of key scenes. Table 2 shows the contents used in collecting the basic information of the survey objects.

Table 2: Basic information collection form of survey objects.

| Category | Contents |
|---|---|
| Basic situation | Age, gender, occupation, driving experience, and ownership of a private car |
| Travel situation | The selected shared travel brand; the reason and purpose for choosing shared travel; shared travel frequency and duration |
| Behavioral awareness | Degree of connection with and perception of shared electric vehicles; the status of shared travel in the minds of users; general feelings and difficulties encountered during use; uncivilized and irregular behavior that users are aware of in themselves |

The first part covers the basic situation of shared electric vehicle users, collecting information on age, gender, occupation, driving experience, and whether the user owns a private car. It quantitatively characterizes the target users in the shared electric vehicle field and collects data for building user models. The second part studies the travel situation of shared electric vehicle users, mainly including the selected shared travel brand, the reason and purpose for choosing shared travel, and the frequency and duration of shared travel.
The purpose of this part is, on the one hand, to supplement the user model and provide the behavior path for subsequent on-site observation of users and, on the other hand, to analyze some of the reasons why user behaviors arise. The third part studies the behavioral cognition of shared electric vehicle users, mainly including their understanding of and views on shared electric vehicles, the status of shared travel in their minds, overall feelings and difficulties encountered during use, and the uncivilized and irregular behaviors they are aware of in themselves. On the one hand, this part examines users' psychological attitudes so as to assess how behavior persuasion can be implemented in shared electric vehicles; on the other hand, it identifies which factors may affect the behavior of shared electric vehicle users and how important these factors are, so that they can be studied in depth in the next stage.

#### 5.2.2. Travel Situation of Users Using Shared Electric Vehicles

The above mainly studies the attributes of the target users; the next step is to study and analyze how such users travel with shared electric vehicles, which provides directions for behavioral persuasion design. The travel situation study covers the average number of times users use a car per month and the average duration of each use; see Figures 4 and 5 for details.

Figure 4: Average number of car uses per user per month.
Figure 5: Average duration of each car use.

Based on the above survey results, the effectiveness of the proposed method is verified in terms of behavior persuasion acceptance and persuasion satisfaction. To highlight its advantages, direct persuasion and impact persuasion are used as comparison methods, and the application effects of the different persuasion methods are analyzed.

### 5.3. Analysis of Experimental Results

#### 5.3.1. Satisfaction of the Persuaded

Taking the satisfaction of the persuaded as the experimental index, the direct persuasion method, the impact persuasion method, and the method in this paper are compared. Ten users were randomly selected, and the comparison was carried out through scoring. The score interval is [0, 100]; the larger the value, the higher the satisfaction. The results are shown in Table 3.

Table 3: Comparison results of persuaded persons' satisfaction.

| User | Method of this paper | Direct persuasion | Impact persuasion |
|---|---|---|---|
| 1 | 94.3 | 78.4 | 82.5 |
| 2 | 95.6 | 77.1 | 83.6 |
| 3 | 92.0 | 75.3 | 85.1 |
| 4 | 97.1 | 76.2 | 84.7 |
| 5 | 93.3 | 75.0 | 79.8 |
| 6 | 96.0 | 79.4 | 78.9 |
| 7 | 94.2 | 80.1 | 76.3 |
| 8 | 95.7 | 81.7 | 78.4 |
| 9 | 91.7 | 80.9 | 78.0 |
| 10 | 93.4 | 79.9 | 79.0 |

Analysis of the data in Table 3 shows that, in the scores given by the 10 users, satisfaction with this method is higher than with the direct persuasion method and the impact persuasion method. User 4 gave the method of this paper its highest score of 97.1, while the highest scores for the direct persuasion method and the impact persuasion method were 81.7 and 85.1, respectively, both markedly lower. This indicates that users accept the persuasion mechanism designed by this method more readily. Satisfaction reflects users' execution of the target behavior: the higher the satisfaction, the stronger the execution and the better the persuasion effect. Therefore, the application effect of this method is better.
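As a quick arithmetic check of Table 3, the snippet below computes the mean satisfaction score per method from the table values; it gives roughly 94.3 for the proposed method versus 78.4 for direct persuasion and 80.6 for impact persuasion, consistent with the comparison drawn above.

```python
# Mean satisfaction per persuasion method, using the ten scores from Table 3.
scores = {
    "proposed method":   [94.3, 95.6, 92.0, 97.1, 93.3, 96.0, 94.2, 95.7, 91.7, 93.4],
    "direct persuasion": [78.4, 77.1, 75.3, 76.2, 75.0, 79.4, 80.1, 81.7, 80.9, 79.9],
    "impact persuasion": [82.5, 83.6, 85.1, 84.7, 79.8, 78.9, 76.3, 78.4, 78.0, 79.0],
}
for method, vals in scores.items():
    print(f"{method}: mean = {sum(vals) / len(vals):.2f}")
```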
#### 5.3.2. Acceptance of Behavioral Persuasion

Taking the acceptance of behavioral persuasion as the experimental index, the direct persuasion method, the impact persuasion method, and the method in this paper are compared; the results are shown in Figure 6. Acceptance is expressed as a value between 0.1 and 1.0, with higher values indicating higher acceptance.

Figure 6: Comparison results of acceptance of behavioral persuasion.

Analysis of Figure 6 shows that, as the number of iterations increases, the acceptance of behavioral persuasion declines continuously for all methods, and this downward trend is more pronounced for the comparison methods than for the method in this paper. At the same time, the acceptance obtained by the method in this paper is significantly greater than that of the direct persuasion method and the impact persuasion method, which shows that the proposed method is more effective.

#### 5.3.3. Persuasion Duration

Taking the persuasion duration as the experimental index, the direct persuasion method, the impact persuasion method, and the method in this paper are compared; the results are shown in Figure 7. The persuasion time is recorded with a clock, and a higher value means the persuasion takes longer.

Figure 7: Comparison results of persuasion duration.

According to Figure 7, the persuasion duration of all methods decreases continuously as the number of iterations increases. The persuasion duration of this method stays below 1.5 h, while the durations of the direct persuasion method and the impact persuasion method are higher, reaching up to 1.8 h and 1.7 h, respectively. The persuasion efficiency of this method is therefore significantly higher than that of the direct persuasion method and the impact persuasion method, which again shows that the proposed method is more effective.
The change trend is more obvious than the method in this paper. At the same time, analyzing the results of the acceptance of behavioral persuasion shows that the method in this paper is significantly greater than the direct persuasion method and the impact persuasion method, which shows that the method of this paper is more effective. ## 5.3.3. Persuasion Duration Taking the persuasion duration as the experimental index, the direct diarrhea persuasion method, the impact persuasion method, and the method in this paper are compared. The results are shown in Figure7. Among them, the persuasion time is recorded through the clock. The higher the value, the longer the persuasion time.Figure 7 Comparison results of persuasion duration.According to the analysis of Figure7, the persuasion duration of different methods shows a continuous downward trend with the increase of the number of iterations. Among them, the persuasion duration of this method is less than 1.5 h, while the persuasion duration of the direct persuasion method and impact persuasion method is higher than that of this method, up to 1.8 h and 1.7 h. At the same time, by analyzing the persuasion time, it can be seen that the persuasion efficiency of this method is significantly higher than that of the direct diarrhea persuasion method and impact persuasion method, which shows that this method is more effective. ## 6. Conclusion For the purpose of improving the acceptance of behavioral persuasion, an analysis method of persuasion design mechanism based on unconscious calculation is proposed. Taking the concept of persuasion and the goal of persuasion as the theoretical basis, the analysis of the experimental results shows that the method in this paper has a higher acceptance of behavioral persuasion and can obtain a higher satisfaction from the persuaded, which fully validates the advantages of the method in this paper. --- *Source: 1002517-2022-02-22.xml*
# The Modulation of Interferon Regulatory Factor-1 via Caspase-1-Mediated Alveolar Macrophage Pyroptosis in Ventilator-Induced Lung Injury

**Authors:** Minhui Dai; Qian Li; Pinhua Pan
**Journal:** Mediators of Inflammation (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1002582

---

## Abstract

Background. To examine the role of interferon regulatory factor-1 (IRF-1) and to explore the potential molecular mechanism in ventilator-induced lung injury. Methods. Wild-type C57BL/6 mice and IRF-1 gene knockout mice/caspase-1 knockout mice were mechanically ventilated with a high tidal volume to establish a ventilator-related lung injury model. The supernatant of the alveolar lavage fluid and the lung tissues of these mice were collected. The degree of lung injury was examined by hematoxylin and eosin staining. The protein and mRNA expression levels of IRF-1, caspase-1 (p10), and interleukin (IL)-1β (p17) in lung tissues were measured by western blot and quantitative real-time polymerase chain reaction, respectively. Pyroptosis of alveolar macrophages was detected by flow cytometry and western blotting for active caspase-1 and cleaved GSDMD. An enzyme-linked immunosorbent assay was used to measure the levels of IL-1β, IL-18, IL-6, TNF-α, and high mobility group box protein 1 (HMGB-1) in alveolar lavage fluid. Results. IRF-1 expression and caspase-1-dependent pyroptosis in lung tissues of wild-type mice were significantly upregulated after mechanical ventilation with a high tidal volume. Ventilator-related lung injury was significantly milder in IRF-1 gene knockout mice and caspase-1 knockout mice than in wild-type mice, and the levels of GSDMD, IL-1β, IL-18, IL-6, and HMGB-1 in alveolar lavage fluid were significantly reduced (P<0.05). The expression levels of caspase-1 (p10), cleaved GSDMD, and IL-1β (p17) proteins in lung tissues of IRF-1 knockout mice with ventilator-related lung injury were significantly lower than those of wild-type mice, and the level of pyroptosis of macrophages in alveolar lavage fluid was significantly reduced. Conclusions. IRF-1 may aggravate ventilator-induced lung injury by regulating caspase-1 activation and pyroptosis of alveolar macrophages.

---

## Body

## 1. Introduction

Ventilator-induced lung injury (VILI) has been reported in various experimental and clinical settings to potentially cause acute respiratory distress syndrome (ARDS) [1, 2]. Mechanical forces may cause excessive deformation of peripheral lung cells, followed by the release of inflammatory mediators either directly (from injured cells) or indirectly (through mechanical forces transduced into the initiation of cell signaling pathways), eventually leading to VILI [3–5]. Resident alveolar macrophages (AMs) account for 5% of peripheral lung cells and >90% of leukocytes in bronchoalveolar lavage fluid (BALF) [6] under normal circumstances. Previous literature has described AM activation during mechanical ventilation, accompanied by air-blood barrier dysfunction and VILI. Depletion of AMs in rats attenuated VILI, indicating that AMs may participate in the pathogenesis of VILI [7].

Pyroptosis, a type of programmed cell death, is the process of inflammasome activation and caspase-1/3/4/5/11-dependent cell death [8–11]. In the canonical pathway, following activation and oligomerization of the inflammasome, the caspase-1 zymogen in the inflammasome is cleaved and self-activated.
Activated caspase-1 cleaves interleukin (IL)-1 and IL-18 precursors into IL-1β and IL-18 and processes gasdermin D (GSDMD) into GSDMD-N and GSDMD-C, resulting in rapid plasma membrane swelling and the release of intracellular proinflammatory contents. In pyroptosis, GSDMD is cleaved by caspase-1/4/5/11 at the FLTD site (residues 272–275); GSDMD-C acts as the inhibitory domain, whereas GSDMD-N is the active domain that forms membrane pores and causes membrane lysis [12–14]. Macrophages are major cellular contributors to the release of proinflammatory cytokines during VILI [15]. Meanwhile, translocation of high-mobility group box 1 (HMGB1), which can itself induce pyroptosis [16], from macrophages contributes to danger signaling that mediates inflammasome activation and cell death in VILI [17]. However, there is a significant gap in our knowledge concerning the role of AM pyroptosis in VILI.

Interferon regulatory factor-1 (IRF-1) belongs to a family of highly conserved transcription factors that regulate the expression of specific innate and acquired immune-related genes [18]. IRF-1 has been found to play an essential role in lung injury by modulating the expression of inflammatory mediators [19, 20]. It is also related to the release of inflammatory mediators and pyroptosis of AMs in LPS-related acute lung injury (ALI) [18–20]. The action and the underlying mechanism of IRF-1-mediated pyroptosis in VILI remain poorly understood.

In this study, we established a VILI mouse model to confirm whether IRF-1 and pyroptosis of AMs were involved in the pathogenesis of VILI. We also explored whether IRF-1 could modulate AM pyroptosis via caspase-1 activation during injurious ventilation.

## 2. Materials and Methods

### 2.1. Animals

Wild-type (C57BL/6J) mice were purchased from SJA Laboratory Animal Co. (Changsha, China), caspase-1 knockout (caspase-1-/-) mice were obtained from the Model Animal Research Center of Nanjing University (Nanjing, China), and IRF-1 knockout (IRF-1-/-) mice were obtained from The Jackson Laboratory (Bar Harbor, ME, USA). All the mice in this study were male, aged 6–8 weeks, and maintained in the laboratory animal center of Central South University under specific pathogen-free conditions. The housing environment had a controlled temperature, independent ventilation, and a 12-hour light/dark cycle. All procedures were approved by the Laboratory Animal Ethics Committee of Central South University. All surgeries were performed under a mixture of xylazine and ketamine anesthesia, and all measures were taken to minimize suffering.

### 2.2. VILI Model

The modeling and grouping were performed as described previously and are briefly explained below. Mice were anesthetized by intraperitoneal (i.p.) injection of ketamine (87.5 mg/kg) and xylazine (12.5 mg/kg) and kept in a prone position on a thermostatic blanket to maintain a temperature of 35 ± 1°C. The anterior neck skin and soft tissue were cut under sterile conditions to expose the trachea and allow the airway to be observed. Orotracheal intubation was then performed with a 20-gauge intravenous catheter (Becton, Dickinson and Company, Piscataway, NJ, USA). The catheter was connected to a ventilator (VentElite; Harvard Apparatus, Holliston, MA, USA) with a fraction of inspired oxygen (FiO2) of 0.2 and a volume-controlled setting.
Parameters for the low-tidal-volume ventilation and the high-tidal-volume ventilation for 4 h were set as follows: tidal volume of 8 ml/kg body weight with 160 breaths/min and deep inflation with 27 cmH2O for 1 s in every 5 min or 34 ml/kg with 70 breaths/min. Spontaneous efforts were terminated using rocuronium bromide (Esmeron, 0.02 ml/h, i.p., 10 mg/ml) during mechanical ventilation. The sham mice underwent the same surgery and LTV ventilation for 10 min as control mice. ### 2.3. Lung Injury Assessment in Mice The lung wet-to-dry weight ratio was used as an indicator for the evaluation of pulmonary edema. After the right lower lobe was excised and rinsed quickly in saline, the excess water was drained off the lobe and weighed to determine the wet weight after the mice were killed. The dry weight was determined by weighing the lobe again after drying in an oven at 65°C for 48 h.The level of protein in BALF was used as an indicator for the evaluation of dysfunction of the alveolar barrier. The protein level in the BALF was evaluated using a BCA protein assay kit (Biomiga, USA) according to the manufacturer’s instructions.For lung histology, a portion of the left lung was fixed with 4% buffered paraformaldehyde and embedded in paraffin, and 6-μm sections were sliced and stained with hematoxylin and eosin. Pathologists blinded to the experimental protocol evaluated and scored the stained sections. The severity of lung injury was scored according to the following indicators: alveolar edema, hemorrhage, alveolar exudates, and leukocyte infiltration. ### 2.4. Isolation of AMs from BALF After mechanical ventilation/spontaneous breathing, AMs were isolated from the mouse lungs as previously described. In brief, mouse lung was lavaged with 1 mL of sterile saline containing 2% bovine serum albumin and 10 nM ethylenediaminetetraacetic acid disodium through orotracheal intubation, and a total of 10 ml of BALF was collected from each mouse. Leukocytes in the BALF were precipitated by centrifugation at 200×g for 10 min at 4°C. AMs in these leukocytes were separated by negative magnetic bead sorting. Magnetic nanoparticle-conjugated antibodies such as antimouse Gr-1, CD4, CD8, and CD45R/B220 antibodies (BD Biosciences Pharmingen, San Diego, CA, USA) were used to label and remove neutrophils and lymphocytes in the immunomagnetic separation system (BD Biosciences Pharmingen). Residual cells were stained and examined by Wright’s staining, and the purity of AMs was >95%. ### 2.5. Flow Cytometry Purified AMs were incubated with Fc block before staining with a fluorescently labeled inhibitor of caspase-1 (FLICA Caspase Assay Kit; ImmunoChemistry Technology, USA) and propidium iodide (ImmunoChemistry Technology) according to the manufacturer’s instructions. Flow cytometry analysis was conducted using a FACSVerse BD flow cytometer (BD Biosciences, Sparks, MD, USA). Raw data were analyzed using FlowJo software (TreeStar Corporation, USA). Fluorescently labeled active caspase-1- and propidium iodide-positive cells indicated pyroptosis. ### 2.6. Immunohistochemistry IRF-1 and cleaved caspase-1 were immunohistochemically stained in paraffin-embedded tissue sections by standard immunohistochemical protocol as described previously [21]. Briefly, pathology slides of lung tissues were incubated with antimouse IRF-1 and caspase-1 P10 (Santa Cruz, CA, USA) at a 1 : 100 dilution. The results were measured by positive cell counts in the field using Leica digital microscopy. 
All counts were performed by two independent observers to reduce counting bias.

### 2.7. Quantitative Real-Time Polymerase Chain Reaction

The quantification of IRF-1 and caspase-1 was performed as described previously. Trizol reagent (Invitrogen, Carlsbad, CA, USA) was applied to isolate total RNA from AMs. The RNA was reverse transcribed into cDNA using an all-in-one first-strand cDNA synthesis kit (GeneCopoeia, MD, USA). Quantitative real-time PCR (qRT-PCR) was carried out with All-in-One qPCR Mix (GeneCopoeia, MD, USA). The reaction system (10 μL) was programmed as follows: 95°C for 10 min followed by 40 cycles at 95°C for 10 s, 60°C for 20 s, and 72°C for 40 s. GAPDH was used as the reference gene. The sequences of primers were as follows: IRF-1 forward: 5′-CTCACCAGGAACCAGAGGAA-3′, reverse: 5′-TGAGTGGTGTAACTGCTGTGG-3′; caspase-1 forward: 5′-ACAAGGCACGGGACCTATG-3′, reverse: 5′-TCCCAGTCAGTCCTGGAAATG-3′; GAPDH forward: 5′-TGCACCACCAACTGCTTAGC-3′, reverse: 5′-GGCATGGACTGTGGTCATGAG-3′.

### 2.8. Protein Extraction and Western Blotting

Cellular and nuclear proteins were extracted from AMs as described previously [22]. Total cellular protein was extracted with a cytoplasmic extraction reagent (Vazyme, China) and a protease inhibitor mix. Nuclear protein was isolated using a nuclear extraction reagent (Vazyme, Nanjing, China) with a protease inhibitor mix. Protein concentration was assessed with a BCA kit (Biyuntian, Shanghai, China). For western blotting, 50 μg of protein per sample was mixed with 5× loading buffer (one part buffer to four parts sample) and boiled for 8 min. Protein samples were electrophoresed in sodium dodecyl sulfate-polyacrylamide gels and then transferred to polyvinylidene fluoride (PVDF) membranes (Bio-Rad Laboratories, Berkeley, USA). The PVDF membranes were then incubated overnight at 4°C with primary antibodies, including antimouse IRF-1 and caspase-1 P10 (Santa Cruz Biotechnology, USA), antimouse GSDMD and histone H3 (Abcam, England), GAPDH (ImmunoWay Biotechnology, USA), and HSP90 (Aifang, China), after blocking with 5% skimmed milk for 1 h. After three washes with Tris-buffered solution containing 0.1% Tween-20, the membranes were incubated with horseradish peroxidase-conjugated secondary antibody (Sigma-Aldrich, USA) for 1 h at room temperature. Signals were detected with a ClarityMax Western ECL Substrate kit (Bio-Rad Laboratories) and quantified using ImageJ software (Rawak Software Inc., Stuttgart, Germany).

### 2.9. Enzyme-Linked Immunosorbent Assay

The levels of IL-1β, IL-6, TNF-α, and HMGB-1 in the BALF and serum were measured using commercially available mouse ELISA kits from eBioscience (San Diego, CA, USA). The experimental procedures were performed according to the manufacturer's instructions.

### 2.10. Statistical Analysis

Variables are presented as mean ± standard deviation. Student's t-test was used for comparisons between two groups, and one-way analysis of variance was used for comparisons among more than two groups. Multiple comparisons were corrected using the Bonferroni post hoc test. Correlations between data were assessed using Pearson's correlation analysis. The difference was considered statistically significant when p was less than 0.05. All experimental results were repeated at least three times (unless otherwise indicated), and representative results are shown. The sample sizes (n) are indicated in the figures. Statistical analyses were conducted using GraphPad 8 software (GraphPad Software, USA).
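To make the comparison workflow in Section 2.10 concrete, the short Python sketch below runs the same kinds of tests (Student's t-test, one-way ANOVA with Bonferroni-corrected pairwise comparisons, and Pearson correlation) using SciPy. All group values are made-up placeholders, not data from this study, and the variable names are ours.

```python
# Illustrative only: placeholder numbers, not measurements from this study.
import numpy as np
from scipy import stats

sham    = np.array([1.0, 1.2, 0.9, 1.1, 1.0])   # e.g., relative IL-1beta in BALF
low_vt  = np.array([1.1, 1.3, 1.0, 1.2, 1.1])
high_vt = np.array([2.4, 2.9, 2.6, 3.1, 2.7])

# Two groups: Student's t-test
t_stat, p_two_group = stats.ttest_ind(low_vt, high_vt)

# More than two groups: one-way ANOVA, then Bonferroni-corrected pairwise t-tests
f_stat, p_anova = stats.f_oneway(sham, low_vt, high_vt)
pairs = [(sham, low_vt), (sham, high_vt), (low_vt, high_vt)]
p_raw = [stats.ttest_ind(a, b).pvalue for a, b in pairs]
p_bonferroni = [min(p * len(pairs), 1.0) for p in p_raw]   # Bonferroni correction

# Correlation between two continuous readouts (placeholder second variable)
r, p_corr = stats.pearsonr(high_vt, np.array([310.0, 380.0, 350.0, 420.0, 360.0]))

print(p_two_group, p_anova, p_bonferroni, r)
```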
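The paper reports IRF-1 and caspase-1 mRNA relative to GAPDH (Section 2.7) but does not spell out the quantification formula; a common choice for this kind of reference-gene setup is the 2^(-ΔΔCt) method, sketched below. The function name and all Ct values are hypothetical illustrations, not data from the study.

```python
# Hypothetical Ct values for illustration; the study does not report raw Ct data.
def relative_expression(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    """Relative mRNA level by the 2^(-ddCt) method, normalized to GAPDH."""
    d_ct_sample  = ct_target - ct_gapdh            # normalize sample to reference gene
    d_ct_control = ct_target_ctrl - ct_gapdh_ctrl  # normalize control (e.g., sham) sample
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# Example: IRF-1 in a high VT sample versus a sham sample (made-up numbers)
fold_change = relative_expression(ct_target=24.1, ct_gapdh=18.0,
                                  ct_target_ctrl=26.5, ct_gapdh_ctrl=18.2)
print(f"IRF-1 fold change vs. sham: {fold_change:.2f}")
```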
## 3. Results

### 3.1. Ventilation with a High Tidal Volume Induces Elevated Caspase-1-Dependent Pyroptosis in AMs

Previously, we demonstrated that lung injury occurred during high-tidal-volume ventilation [23]. To further investigate whether AM pyroptosis had occurred, we randomized mice into three groups: a spontaneous breathing control group, a protective ventilation/low-tidal-volume ventilation (low VT) group, and an injurious ventilation/high-tidal-volume ventilation (high VT) group. Caspase-1 is a biomarker of canonical pyroptosis; therefore, we measured the proportion of active caspase-1-positive and PI-positive cells to quantify caspase-1-related pyroptosis [24]. As illustrated in Figure 1(a), the flow cytometry results show that the percentage of caspase-1-induced pyroptosis was significantly increased in the high VT group, whereas there was no difference between the control group and the low VT group at 4 h after ventilation onset. The same results were verified by western blot, as shown in Figure 1(b). The cleaved form of GSDMD, a biomarker of pyroptosis, increased markedly in the high VT group but not in the low VT group, and the trend of activated caspase-1 was consistent with that of cleaved GSDMD at the protein level.

Figure 1: Ventilation with a high tidal volume induces elevated caspase-1-dependent pyroptosis in AMs. Caspase-1-related cell death detected by flow cytometry (a) and the protein levels of GSDMD, including full-length and cleaved forms (b), and mature IL-1β (c) in alveolar macrophages were increased in the high VT group compared with the control and low VT groups. The release of IL-1β in BALF (d) and serum (e) was significantly higher after high VT ventilation than in the low VT and control groups.
Results are representative of three independent experiments; the results of one representative experiment are shown (n = 5/group, ∗p < 0.05, ∗∗p < 0.01).

In addition, pyroptosis contributes to the maturation and release of the proinflammatory cytokine IL-1β. Mature IL-1β expression was detected by western blotting, and IL-1β release was measured by ELISA in BALF and serum; both were increased in the high VT group compared with the low VT group (Figures 1(c)–1(e)). These results suggest that ventilation with a high tidal volume resulted in elevated caspase-1-dependent pyroptosis in AMs in VILI.

### 3.2. Caspase-1 Deletion Abolishes VILI and Cytokine Release in Mice

To investigate whether alveolar pyroptosis contributes to VILI, caspase-1-/- mice were ventilated with a high tidal volume. As shown in Figure 2(a), caspase-1-/- mice that underwent high-tidal-volume ventilation showed barely any caspase-1-induced AM pyroptosis and a drastic reduction in the proportion of dead cells. The alteration in GSDMD expression provides supporting evidence for AM pyroptosis (Figure 2(b)): the protein level of cleaved GSDMD was significantly reduced after caspase-1 knockout. When assessing pyroptosis-related inflammatory factors, we found that IL-1β and HMGB-1 in BALF were significantly increased in the high VT group but decreased sharply upon caspase-1 knockout (Figures 2(c)–2(e)).

Figure 2: Caspase-1 deletion abolishes VILI and cytokine release in mice. (a) Flow cytometry showed that caspase-1 deletion abolished cell death in alveolar macrophages. The protein levels of GSDMD, including full-length and cleaved forms (b), and mature IL-1β (c) in alveolar macrophages were significantly decreased after caspase-1 knockout. Caspase-1 deletion attenuated pyroptosis-related cytokines in BALF under high VT, including IL-1β (d) and HMGB-1 (e). Results are representative of three independent experiments; the results of one representative experiment are shown (n = 5/group, ∗p < 0.05, ∗∗p < 0.01).

As shown in Figures 3(a)–3(d), high-tidal-volume ventilation caused significant lung inflammation, alveolar congestion, alveolar septal thickening, and perivascular infiltration of inflammatory cells, whereas lung lesions showed significantly reduced inflammatory cell infiltration in the caspase-1-/- mice. Genetic caspase-1 deficiency significantly lowered the wet weight/dry weight ratio and reduced the total protein in the BALF under high VT (Figures 3(e) and 3(f)), which was consistent with our histopathological analysis. To further assess lung injury, we evaluated the levels of IL-6 and TNF-α in BALF, as shown in Figures 3(g) and 3(h). These cytokines increased dramatically in the wild-type mice that received high-tidal-volume ventilation (high VT group) but were partially reduced in the caspase-1-/- mice. These findings indicate that genetic caspase-1 deficiency decreases lung damage in VILI in mice.

Figure 3: Caspase-1 deletion abolishes VILI and cytokine release in mice. Caspase-1 deletion alleviated high-tidal-volume ventilation-induced lung injury as measured by lung injury scores (d). (a) Lung gross pathology is shown. Representative histologic sections for lung pathology are shown in (b) (magnification, 20×) and (c) (magnification, 400×). Caspase-1 deletion alleviated lung pathology in gross pathology (a) and HE-stained micrographs (b and c). Caspase-1 deletion reduced the wet/dry (W/D) ratio (e) and BALF protein concentration (f) in the high VT group.
Caspase-1 deletion attenuated the release of IL-6 (g) and TNF-α (h) in BALF under high VT. Results are representative of three independent experiments; the results of one representative experiment are shown (n = 5/group, ∗p < 0.05, ∗∗p < 0.01).

### 3.3. IRF-1 Deletion Attenuates VILI and Cytokine Release in Mice

We previously identified that IRF-1 is implicated in regulating the inflammatory response in ALI [21, 22]. To examine the functions of IRF-1 in VILI, we first examined its mRNA and protein content in the low VT, high VT, and sham groups (Figures 4(a) and 4(b)). The expression of IRF-1 in lung homogenates was significantly increased in the high VT group compared with the sham and low VT groups.

Figure 4: IRF-1 deletion attenuates VILI and cytokine release in mice. The intranuclear protein level (a) and mRNA (b) of IRF-1 in alveolar macrophages were increased in the high VT group compared with the control group and the low VT group. IRF-1 deletion alleviated the lung histopathologic damage induced by high-tidal-volume ventilation, shown by gross pathology (e) and HE-stained micrographs at 20× magnification (f) and 400× magnification (g), and assessed using lung injury scores (h). IRF-1 deletion alleviated the wet/dry (W/D) ratio (c) and BALF protein concentration (d) in the high VT group. The concentrations of IL-6 (i) and TNF-α (j) in BALF were attenuated after high VT with IRF-1 deletion compared with wild-type mice. Results are representative of three independent experiments; the results of one representative experiment are shown (n = 5/group, ∗p < 0.05, ∗∗p < 0.01).

Next, IRF-1-/- mice were used to investigate whether IRF-1 mediates VILI and cytokine release. As shown in Figures 4(e)–4(h), lung lesions showed significantly reduced inflammatory cell infiltration in IRF-1-/- mice. Genetic IRF-1 deficiency significantly lowered the wet weight/dry weight ratio and reduced the total protein in the BALF (Figures 4(c) and 4(d)), which was consistent with our histopathological analysis. To further assess lung injury, we evaluated the levels of IL-6 and TNF-α in BALF. Both cytokines increased dramatically in the wild-type mice that received high-tidal-volume ventilation (high VT group) but were partially reduced in the IRF-1-/- mice (Figures 4(i) and 4(j)). These findings indicate that genetic IRF-1 deficiency decreases lung damage in VILI in mice and that IRF-1 plays an important role in the pathogenesis of VILI.

### 3.4. IRF-1 Was Required for Caspase-1 Activation in AMs

Having shown that VILI is associated with pyroptosis of AMs and IRF-1 expression, we then investigated whether IRF-1 deletion enhances protection by inhibiting AM pyroptosis. As shown in Figure 5(a), IRF-1-/- mice that underwent high-tidal-volume ventilation showed very little caspase-1-induced pyroptosis. The levels of activated caspase-1, cleaved GSDMD, and IL-1β were detected by western blot analysis; indeed, reduced expression of cleaved caspase-1, cleaved GSDMD, and IL-1β was observed in AMs of IRF-1-/- mice with high-tidal-volume ventilation compared with the control group (Figures 5(b) and 5(c)). The levels of IRF-1 were detected by western blot analysis, as shown in Figure 5(d); reduced expression of IRF-1 was observed in AMs of caspase-1-/- mice with high-tidal-volume ventilation compared with those of the control group.
The concentrations of IL-1β and HMGB-1 in BALF were attenuated after high VT with IRF-1 deletion compared with wild-type mice (Figures 5(e) and 5(f)). These data indicate that IRF-1 is essential for caspase-1 activation and thereby contributes to the pathogenesis of VILI.

Figure 5: IRF-1 was required for caspase-1 activation in AMs. Flow cytometry showed that IRF-1 deletion attenuated alveolar macrophage pyroptosis under high VT (a). Western blotting analysis of the protein expression of caspase-1 p10 (b), IL-1β (b), and GSDMD, including full-length and cleaved forms (c), in alveolar macrophages. Analysis of IRF-1 protein levels (d) in the nucleus of alveolar macrophages indicated that caspase-1 deletion did not affect the expression of IRF-1. The concentrations of IL-1β (e) and HMGB-1 (f) in BALF were attenuated after high VT with IRF-1 deletion compared with wild-type mice. Results are representative of three independent experiments; the results of one representative experiment are shown (n = 5/group, ∗p < 0.05, ∗∗p < 0.01).
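The pyroptosis percentages compared above (Sections 2.5 and 3.1) come down to counting FLICA-positive, PI-positive events in each flow cytometry file. The study used FlowJo for this; the Python sketch below applies the same kind of double-positive gate to a plain array of per-cell fluorescence intensities. The cutoff values and the simulated cells are hypothetical placeholders, not gates or data from the paper.

```python
import numpy as np

def percent_pyroptotic(flica, pi, flica_cutoff=1000.0, pi_cutoff=500.0):
    """Percent of cells positive for both active caspase-1 (FLICA) and PI.

    flica, pi: per-cell fluorescence intensities (same length).
    Cutoffs are placeholder gate positions, normally set from unstained controls.
    """
    flica = np.asarray(flica)
    pi = np.asarray(pi)
    double_positive = (flica > flica_cutoff) & (pi > pi_cutoff)
    return 100.0 * double_positive.mean()

# Toy example: 5,000 simulated cells with a small double-positive population.
rng = np.random.default_rng(0)
flica = np.concatenate([rng.normal(300, 80, 4600), rng.normal(2500, 400, 400)])
pi = np.concatenate([rng.normal(150, 40, 4600), rng.normal(1500, 300, 400)])
print(f"pyroptotic AMs: {percent_pyroptotic(flica, pi):.1f}%")
```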
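The edema and histology readouts behind Figures 3(e) and 4(c) reduce to simple arithmetic on the measurements described in Section 2.3. The helper below illustrates the wet-to-dry weight ratio and a composite injury score; the 0–4 grading per indicator is an assumed convention for illustration, since the paper does not state its exact rubric.

```python
def wet_to_dry_ratio(wet_mg: float, dry_mg: float) -> float:
    """Lung wet-to-dry weight ratio, an index of pulmonary edema (Section 2.3)."""
    return wet_mg / dry_mg

def lung_injury_score(edema: int, hemorrhage: int, exudate: int, infiltration: int) -> int:
    """Composite histology score; each indicator assumed graded 0 (none) to 4 (severe)."""
    for grade in (edema, hemorrhage, exudate, infiltration):
        if not 0 <= grade <= 4:
            raise ValueError("each indicator grade must be between 0 and 4")
    return edema + hemorrhage + exudate + infiltration

# Hypothetical example: one high VT lung versus one sham lung (made-up weights and grades).
print(wet_to_dry_ratio(wet_mg=182.0, dry_mg=31.5))   # about 5.8, consistent with edema
print(lung_injury_score(3, 2, 3, 3))                  # 11 out of a maximum of 16
```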
## 4. Discussion

It has been shown that inhibition or knockout of caspase-1 or IRF-1 has a protective effect against many inflammatory diseases [25–27]. In this study, we demonstrated that caspase-1-related pyroptosis may be an important mechanism in the pathogenesis of experimental VILI. Moreover, IRF-1 may positively regulate caspase-1-dependent pyroptosis and the release of inflammatory factors in mechanical lung injury. Therefore, our study adds to the accumulating evidence for links between IRF-1 and pyroptosis-related molecules [28].

Clinically, mechanical ventilation is the dominant treatment strategy for ARDS. Ventilator-associated pneumonia (VAP) is one of the most common complications in patients with severe pneumonia and ARDS, and alveolar damage caused by mechanical force can further complicate the condition and prognosis of ALI/ARDS. In our study, injurious mechanical ventilation alone, in the absence of infection, was sufficient to induce caspase-1-associated AM pyroptosis. From our findings, caspase-1-dependent pyroptosis potentiates the inflammatory response in VILI.

Prior studies have identified the pivotal role of IRF-1 in the mechanisms of ALI/ARDS. As a transcription factor involved in tumor-related signaling pathways, IRF-1 is often elevated in patients with ARDS [29]. In addition, we found that IRF-1 deletion in LPS-induced ALI mice could significantly alleviate lung injury [21, 22]. These studies proposed that IRF-1 plays a critical role in mediating the cytokine storm of ALI/ARDS. No IRF-1-related signaling pathway contributing to VAP or VILI had been studied before. In our study, IRF-1 was significantly upregulated in AMs in high VT-induced lung injury. Moreover, caspase-1-induced pyroptosis of AMs and inflammation were impaired after IRF-1 knockout. IRF-1 therefore appears to act upstream of caspase-1 and pyroptosis-related molecules; in other words, IRF-1 is a potential transcription factor implicated in caspase-1-related pyroptotic cell death. Previous studies showed that the caspase-1 gene might be regulated by IRF-1 via a CRE site, and caspase-1 upregulation could not be observed in oligodendrocyte progenitor cells after IFN stimulation in the absence of IRF-1 [30, 31].

In mechanical ventilation-induced lung injury, the levels of inflammatory factors in serum and BALF were lower than in LPS-induced ALI, which amplifies the inflammatory response from the beginning of onset. Nonetheless, caspase-1-induced pyroptosis and the release of related inflammatory factors, including IL-1β, IL-18, and HMGB-1, were partially responsible for the pulmonary pathology in VILI. It has been confirmed that the myeloid differentiation factor 88 (MyD88) adapter protein can recruit members of the IRF family of transcription factors downstream of toll-like receptor (TLR) signaling to induce certain genes [32, 33]. In our study, IRF-1 knockout markedly reduced the effects mentioned above.
It appears that IRF-1 regulates pyroptosis-associated cytokines.

In conclusion, our study highlights the important role of caspase-1 and the promoting effect of IRF-1 in the pathogenesis of VILI. IRF-1 and pyroptosis-related inflammatory factors are promising therapeutic targets or early warning signals in patients undergoing mechanical ventilation. However, our animal findings should be further verified by a prospective clinical study.

---

*Source: 1002582-2022-04-15.xml*
--- ## Abstract Background. To examine the role of interferon regulatory factor-1 (IRF-1) and to explore the potential molecular mechanism in ventilator-induced lung injury. Methods. Wild-type C57BL/6 mice and IRF-1 gene knockout mice/caspase-1 knockout mice were mechanically ventilated with a high tidal volume to establish a ventilator-related lung injury model. The supernatant of the alveolar lavage solution and the lung tissues of these mice were collected. The degree of lung injury was examined by hematoxylin and eosin staining. The protein and mRNA expression levels of IRF-1, caspase-1 (p10), and interleukin (IL)-1β (p17) in lung tissues were measured by western blot and quantitative real-time polymerase chain reaction, respectively. Pyroptosis of alveolar macrophages was detected by flow cytometry and western blotting for active caspase-1 and cleaved GSDMD. An enzyme-linked immunosorbent assay was used to measure the levels of IL-1β, IL-18, IL-6, TNF-α, and high mobility group box protein 1 (HMGB-1) in alveolar lavage fluid. Results. IRF-1 expression and caspase-1-dependent pyroptosis in lung tissues of wild-type mice were significantly upregulated after mechanical ventilation with a high tidal volume. The degree of ventilator-related lung injury in IRF-1 gene knockout mice and caspase-1 knockout mice was significantly improved compared to that in wild-type mice, and the levels of GSDMD, IL-1β, IL-18, IL-6, and HMGB-1 in alveolar lavage solution were significantly reduced (P<0.05). The expression levels of caspase-1 (p10), cleaved GSDMD, and IL-1β (p17) proteins in lung tissues of IRF-1 knockout mice with ventilator-related lung injury were significantly lower than those of wild-type mice, and the level of pyroptosis of macrophages in alveolar lavage solution was significantly reduced. Conclusions. IRF-1 may aggravate ventilator-induced lung injury by regulating the activation of caspase-1 and the focal death of alveolar macrophages. --- ## Body ## 1. Introduction Ventilator-induced lung injury (VILI) has been reported in various experimental and clinical settings to potentially cause acute respiratory distress syndrome (ARDS) [1, 2]. Mechanical forces may result in excessive deformation of peripheral lung cells, following inflammatory mediators either directly (released by injured cells) or indirectly (mechanical forces transduced into the initiation of cell signaling pathways), eventually leading to VILI [3–5]. Resident alveolar macrophages (AMs) account for 5% of peripheral lung cells and >90% of leukocytes in bronchoalveolar lavage fluid (BALF) [6] under normal circumstances. Previous literature has described AM activation during mechanical ventilation, accompanied by air-blood barrier dysfunction and VILI. Depletion of AMs in rats attenuated VILI, indicating that AMs may participate in the pathogenesis of VILI [7].Pyroptosis, a type of programmed cell death, is the process of inflammasome activation and caspase-1/3/4/5/11-dependent cell death [8–11]. In the canonical pathway, following activation and oligomerization of the inflammasome, the caspase-1 zymogen in the inflammasome is cleaved and self-activated. Activated caspase-1 cleaves interleukin (IL)-1 and IL-18 precursors into IL-1β and IL-18 and processes gasdermin D (GSDMD) into GSDMD-N and GSDMD-C, resulting in rapid plasma membrane swelling and the release of intracellular proinflammatory contents. 
With GSDMD-C as inhibitory domain and GSDMD-N as active domain to cause pore-forming and membrane lysis, GSDMD cleaved by caspase-1/4/5/11 at the 272FLTD275 site in pyroptosis [12–14]. Macrophages are major cellular contributors to releasing of proinflammatory cytokines during VILI [15]. Meanwhile, high-mobility group box 1 (HMGB1) translocation, which could induce pyroptosis [16], from macrophages contributes to danger signaling in mediating inflammasome activation and cell death in VILI [17]. However, there is a significant gap in our knowledge concerning the role of AMs pyroptosis in VILI.Interferon regulatory factor-1 (IRF-1) belongs to a family of highly conserved transcription factors that regulate the expression of specific innate and acquired immune-related genes [18]. IRF-1 has been found to play an essential role in lung injury by modulating the expression of inflammatory mediators [19, 20]. It is also related to the release of inflammatory mediators and pyroptosis of AMs in LPS-related acute lung injury (ALI) [18–20]. The action and the underlying mechanism of IRF-1-mediated pyroptosis in VILI are poorly understood yet.In this study, we established a VILI mouse model to confirm whether IRF-1 and pyroptosis of AMs were involved in the pathogenesis of VILI. We also explored whether IRF-1 could modulate AM pyroptosis via caspase-1 activation during injurious ventilation. ## 2. Materials and Methods ### 2.1. Animals Wild-type (C57BL/6J) mice were purchased from SJA Laboratory Animal Co. (Changsha, China), caspase-1 knockout (caspase-1-/-) mice were obtained from the Model Animal Research Center of Nanjing University (Nanjing, China), and IRF-1 knockout (IRF-1-/-) mice were obtained from The Jackson Laboratory (Bar Harbor, ME, USA). All the mice in this study were male, aged 6–8 weeks, and maintained in the laboratory animal center of the Central South University under specific pathogen-free conditions. The environment has controlled temperature, independent ventilation, and a 12-hour light/dark cycle. All procedures were approved by the Laboratory Animal Ethics Committee of Central South University. All surgeries were performed under a mixture of xylazine and ketamine anesthesia, and all measures were taken to minimize suffering. ### 2.2. VILI Model The modeling and grouping were performed as described previously, but are briefly explained below. Mice were anesthetized by intraperitoneal (i.p.) injection of ketamine (87.5 mg/kg) and xylazine (12.5 mg/kg) and kept in a prone position on a thermostatic blanket to maintain a temperature of 35 ± 1°C. The anterior neck skin and soft tissue were cut under sterile conditions to expose the trachea to observe the condition of the airway. Orotracheal intubation was then performed with a 20-gauge intravenous catheter (Becton, Dickinson and Company, Piscataway, NJ, USA). The catheter was connected to a ventilator (VentElite; Harvard Apparatus, Holliston, MA, USA) with a fraction of inspired oxygen (FiO2) of 0.2 and a volume-controlled setting. Parameters for the low-tidal-volume ventilation and the high-tidal-volume ventilation for 4 h were set as follows: tidal volume of 8 ml/kg body weight with 160 breaths/min and deep inflation with 27 cmH2O for 1 s in every 5 min or 34 ml/kg with 70 breaths/min. Spontaneous efforts were terminated using rocuronium bromide (Esmeron, 0.02 ml/h, i.p., 10 mg/ml) during mechanical ventilation. The sham mice underwent the same surgery and LTV ventilation for 10 min as control mice. ### 2.3. 
Lung Injury Assessment in Mice The lung wet-to-dry weight ratio was used as an indicator for the evaluation of pulmonary edema. After the right lower lobe was excised and rinsed quickly in saline, the excess water was drained off the lobe and weighed to determine the wet weight after the mice were killed. The dry weight was determined by weighing the lobe again after drying in an oven at 65°C for 48 h.The level of protein in BALF was used as an indicator for the evaluation of dysfunction of the alveolar barrier. The protein level in the BALF was evaluated using a BCA protein assay kit (Biomiga, USA) according to the manufacturer’s instructions.For lung histology, a portion of the left lung was fixed with 4% buffered paraformaldehyde and embedded in paraffin, and 6-μm sections were sliced and stained with hematoxylin and eosin. Pathologists blinded to the experimental protocol evaluated and scored the stained sections. The severity of lung injury was scored according to the following indicators: alveolar edema, hemorrhage, alveolar exudates, and leukocyte infiltration. ### 2.4. Isolation of AMs from BALF After mechanical ventilation/spontaneous breathing, AMs were isolated from the mouse lungs as previously described. In brief, mouse lung was lavaged with 1 mL of sterile saline containing 2% bovine serum albumin and 10 nM ethylenediaminetetraacetic acid disodium through orotracheal intubation, and a total of 10 ml of BALF was collected from each mouse. Leukocytes in the BALF were precipitated by centrifugation at 200×g for 10 min at 4°C. AMs in these leukocytes were separated by negative magnetic bead sorting. Magnetic nanoparticle-conjugated antibodies such as antimouse Gr-1, CD4, CD8, and CD45R/B220 antibodies (BD Biosciences Pharmingen, San Diego, CA, USA) were used to label and remove neutrophils and lymphocytes in the immunomagnetic separation system (BD Biosciences Pharmingen). Residual cells were stained and examined by Wright’s staining, and the purity of AMs was >95%. ### 2.5. Flow Cytometry Purified AMs were incubated with Fc block before staining with a fluorescently labeled inhibitor of caspase-1 (FLICA Caspase Assay Kit; ImmunoChemistry Technology, USA) and propidium iodide (ImmunoChemistry Technology) according to the manufacturer’s instructions. Flow cytometry analysis was conducted using a FACSVerse BD flow cytometer (BD Biosciences, Sparks, MD, USA). Raw data were analyzed using FlowJo software (TreeStar Corporation, USA). Fluorescently labeled active caspase-1- and propidium iodide-positive cells indicated pyroptosis. ### 2.6. Immunohistochemistry IRF-1 and cleaved caspase-1 were immunohistochemically stained in paraffin-embedded tissue sections by standard immunohistochemical protocol as described previously [21]. Briefly, pathology slides of lung tissues were incubated with antimouse IRF-1 and caspase-1 P10 (Santa Cruz, CA, USA) at a 1 : 100 dilution. The results were measured by positive cell counts in the field using Leica digital microscopy. All counts were performed by two independent observers to reduce counting bias. ### 2.7. Quantitative Real-Time Polymerase Chain Reaction The quantification of IRF-1 and caspase-1 was performed as described previously. Trizol reagent (Invitrogen, Carlsbad, CA, USA) was applied to isolate total RNA from AMs. The RNA was converted into reverse transcript (cDNA) using the all-in-one first-stand cDNA synthesis kit (CeneCopoeia, MD, US). 
The quantitative real-time PCR (qRT-PCR) reaction was carried out with All-in-One qPCR Mix (GeneCopoeia, MD, USA). The 10 μL reaction was programmed as follows: 95°C for 10 min, followed by 40 cycles of 95°C for 10 s, 60°C for 20 s, and 72°C for 40 s. GAPDH was used as the reference gene. The primer sequences were as follows: IRF-1 forward: 5′-CTCACCAGGAACCAGAGGAA-3′, reverse: 5′-TGAGTGGTGTAACTGCTGTGG-3′; caspase-1 forward: 5′-ACAAGGCACGGGACCTATG-3′, reverse: 5′-TCCCAGTCAGTCCTGGAAATG-3′; GAPDH forward: 5′-TGCACCACCAACTGCTTAGC-3′, reverse: 5′-GGCATGGACTGTGGTCATGAG-3′.

### 2.8. Protein Extraction and Western Blotting

Cellular and nuclear proteins were extracted from AMs as described previously [22]. Total cellular protein was extracted with a cytoplasmic extraction reagent (Vazyme, China) and a protease inhibitor mix. Nuclear protein was isolated using a nuclear extraction reagent (Vazyme, Nanjing, China) with a protease inhibitor mix. Protein concentration was determined with a BCA kit (Biyuntian, Shanghai, China). For western blotting, 50 μg of protein per sample was mixed with 5× loading buffer and boiled for 8 min. Protein samples were electrophoresed in sodium dodecyl sulfate-polyacrylamide gels and then transferred to polyvinylidene fluoride (PVDF) membranes (Bio-Rad Laboratories, Berkeley, USA). After blocking with 5% skimmed milk for 1 h, the PVDF membranes were incubated overnight at 4°C with primary antibodies, including antimouse IRF-1 and caspase-1 P10 (Santa Cruz Biotechnology, USA), antimouse GSDMD and histone H3 (Abcam, England), GAPDH (ImmunoWay Biotechnology, USA), and HSP90 (Aifang, China). After three washes with Tris-buffered saline containing 0.1% Tween-20, the membranes were incubated with a horseradish peroxidase-conjugated secondary antibody (Sigma-Aldrich, USA) for 1 h at room temperature. Signals were detected with a ClarityMax Western ECL Substrate kit (Bio-Rad Laboratories) and quantified using ImageJ software (Rawak Software Inc., Stuttgart, Germany).

### 2.9. Enzyme-Linked Immunosorbent Assay

The levels of IL-1β, IL-6, TNF-α, and HMGB-1 in the BALF and serum were measured using commercially available mouse ELISA kits from eBioscience (San Diego, CA, USA). The experimental procedures were performed according to the manufacturer’s instructions.

### 2.10. Statistical Analysis

Variables are presented as mean ± standard deviation. Student’s t-test was used for comparisons between two groups, and one-way analysis of variance was used for comparisons among three or more groups. Multiple comparisons were corrected using the Bonferroni post hoc test. Correlations between data were assessed using Pearson’s correlation analysis. Differences were considered statistically significant when p was less than 0.05. All experiments were repeated at least three times (unless otherwise indicated), and representative results are shown. The sample sizes (n) are indicated in the figures. Statistical analyses were conducted using GraphPad 8 software (GraphPad Software, USA).
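The statistical workflow in Section 2.10 (Student's t-test for two groups, one-way ANOVA with Bonferroni-corrected pairwise comparisons for three or more groups, and Pearson correlation) can be sketched in a few lines of SciPy. The group arrays below are placeholder values, not data from this study, and the snippet is only an illustration of the tests named in the text, not the authors' GraphPad analysis.

```python
# Minimal sketch of the statistics described in Section 2.10 (SciPy only).
# The three group arrays are placeholder values, not data from this study.
from itertools import combinations
from scipy import stats

sham    = [1.0, 1.2, 0.9, 1.1, 1.0]
low_vt  = [1.1, 1.3, 1.0, 1.2, 1.1]
high_vt = [2.4, 2.9, 2.6, 3.1, 2.7]
groups = {"sham": sham, "low VT": low_vt, "high VT": high_vt}

# Two-group comparison: Student's t-test.
t, p = stats.ttest_ind(sham, high_vt)
print(f"t-test sham vs high VT: p = {p:.4g}")

# Three groups: one-way ANOVA, then Bonferroni-corrected pairwise t-tests.
f, p_anova = stats.f_oneway(sham, low_vt, high_vt)
print(f"one-way ANOVA: p = {p_anova:.4g}")
pairs = list(combinations(groups, 2))
for a, b in pairs:
    _, p_pair = stats.ttest_ind(groups[a], groups[b])
    p_bonf = min(p_pair * len(pairs), 1.0)  # Bonferroni correction
    print(f"{a} vs {b}: corrected p = {p_bonf:.4g}")

# Correlation between two continuous variables (placeholder values).
r, p_corr = stats.pearsonr(high_vt, [10, 13, 11, 14, 12])
print(f"Pearson r = {r:.2f}, p = {p_corr:.4g}")
```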
## 3. Results

### 3.1. Ventilation with a High Tidal Volume Induces Elevated Caspase-1-Dependent Pyroptosis in AMs

Previously, we demonstrated that lung injury occurred during high-tidal-volume ventilation [23]. To further investigate whether AM pyroptosis had occurred, we randomized mice into three groups: a spontaneous breathing control group, a protective/low-tidal-volume ventilation (low VT) group, and an injurious/high-tidal-volume ventilation (high VT) group. Caspase-1 is a biomarker of canonical pyroptosis; therefore, we measured the proportion of cells positive for both active caspase-1 and propidium iodide (PI) to quantify caspase-1-related pyroptosis [24]. As illustrated in Figure 1(a), flow cytometry showed that the percentage of caspase-1-induced pyroptosis was significantly increased in the high VT group, whereas there was no difference between the control group and the low VT group at 4 h after ventilation onset. These results were confirmed by western blotting (Figure 1(b)): the cleaved form of GSDMD, a biomarker of pyroptosis, increased markedly in the high VT group but not in the low VT group, and the trend of activated caspase-1 was consistent with that of cleaved GSDMD at the protein level.

Figure 1: Ventilation with a high tidal volume induces elevated caspase-1-dependent pyroptosis in AMs. Caspase-1-related cell death detected by flow cytometry (a) and the protein levels of GSDMD, including full-length and cleaved forms (b), and mature IL-1β (c) in alveolar macrophages were increased in the high VT group compared with the control and low VT groups. The release of IL-1β into BALF (d) and serum (e) was significantly higher after high VT ventilation than in the low VT and control groups. Results are representative of three independent experiments; the results of one representative experiment are shown (n=5/group, ∗p<0.05, ∗∗p<0.01).
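The pyroptosis readout in Figure 1(a) is the fraction of AMs positive for both active caspase-1 (FLICA) and PI. A minimal sketch of that gating arithmetic is shown below; the fluorescence arrays and gate thresholds are hypothetical values for illustration, not events exported from this experiment.

```python
# Sketch of quantifying double-positive (FLICA+ / PI+) events from per-cell
# fluorescence intensities. All values and gate thresholds are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
flica = rng.lognormal(mean=2.0, sigma=1.0, size=10_000)  # active caspase-1 signal
pi    = rng.lognormal(mean=1.5, sigma=1.0, size=10_000)  # propidium iodide signal

FLICA_GATE, PI_GATE = 50.0, 30.0          # hypothetical gate positions
double_positive = (flica > FLICA_GATE) & (pi > PI_GATE)
percent_pyroptotic = 100.0 * double_positive.mean()
print(f"caspase-1+/PI+ events: {percent_pyroptotic:.2f}%")
```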
In addition, pyroptosis contributes to the maturation and release of the proinflammatory cytokine IL-1β. Mature IL-1β was detected by western blotting, and IL-1β release was measured by ELISA in BALF and serum; both were increased in the high VT group compared with the low VT group (Figures 1(c)–1(e)). These results suggest that ventilation with a high tidal volume results in elevated caspase-1-dependent pyroptosis in AMs in VILI.

### 3.2. Caspase-1 Deletion Abolishes VILI and Cytokine Release in Mice

To investigate whether alveolar macrophage pyroptosis contributes to VILI, caspase-1-/- mice were ventilated with a high tidal volume. As shown in Figure 2(a), caspase-1-/- mice that underwent high-tidal-volume ventilation had barely any caspase-1-induced AM pyroptosis and showed a drastic reduction in the proportion of dead cells. The change in GSDMD expression provides supporting evidence for AM pyroptosis (Figure 2(b)); the protein level of cleaved GSDMD was significantly reduced after caspase-1 knockout. With respect to pyroptosis-related inflammatory factors, IL-1β and HMGB-1 in BALF were significantly increased in the high VT group but decreased sharply upon caspase-1 knockout (Figures 2(c)–2(e)).

Figure 2: Caspase-1 deletion abolishes VILI and cytokine release in mice. (a) Flow cytometry showed that caspase-1 deletion abolished pyroptotic cell death in alveolar macrophages. The protein levels of GSDMD, including full-length and cleaved forms (b), and mature IL-1β (c) in alveolar macrophages were significantly decreased after caspase-1 knockout. Caspase-1 deletion attenuated pyroptosis-related cytokines in the BALF of high VT mice, including IL-1β (d) and HMGB-1 (e). Results are representative of three independent experiments; the results of one representative experiment are shown (n=5/group, ∗p<0.05, ∗∗p<0.01).

As shown in Figures 3(a)–3(d), high-tidal-volume ventilation caused significant lung inflammation, alveolar congestion, alveolar septal thickening, and perivascular infiltration of inflammatory cells, whereas the lungs of caspase-1-/- mice showed significantly reduced inflammatory cell infiltration. Genetic caspase-1 deficiency significantly lowered the wet weight/dry weight ratio and reduced the total protein in the BALF under high VT (Figures 3(e) and 3(f)), consistent with our histopathological analysis. To further assess lung injury, we evaluated the levels of IL-6 and TNF-α in BALF (Figures 3(g) and 3(h)). These cytokines increased dramatically in wild-type mice that received high-tidal-volume ventilation (high VT group) but were partially reduced in caspase-1-/- mice. These findings indicate that genetic caspase-1 deficiency decreases lung damage in VILI in mice.

Figure 3: Caspase-1 deletion abolishes VILI and cytokine release in mice. Caspase-1 deletion alleviated high-tidal-volume ventilation-induced lung injury as measured by lung injury scores (d). Lung gross pathology is shown in (a), and representative histologic sections are shown at 20× (b) and 400× (c) magnification. Caspase-1 deletion alleviated lung pathology in gross pathology (a) and HE-stained micrographs (b and c), reduced the wet/dry (W/D) ratio (e) and BALF protein concentration (f) in the high VT group, and attenuated the release of IL-6 (g) and TNF-α (h) into the BALF of high VT mice. Results are representative of three independent experiments; the results of one representative experiment are shown (n=5/group, ∗p<0.05, ∗∗p<0.01).
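The cytokine concentrations reported in Figures 1–3 come from ELISA absorbance readings interpolated against a standard curve. The snippet below sketches one common way to do this, a four-parameter logistic (4PL) fit with SciPy; the standard concentrations and absorbances are invented for illustration, and the eBioscience kit's actual curve-fitting procedure is not specified in the text.

```python
# Sketch of interpolating ELISA sample concentrations from a standard curve
# with a four-parameter logistic (4PL) fit. Standards and absorbances below
# are invented for illustration; they are not values from this study.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """4PL: a = lower asymptote, d = upper asymptote, c = midpoint, b = slope."""
    return d + (a - d) / (1.0 + (x / c) ** b)

std_conc = np.array([7.8, 15.6, 31.25, 62.5, 125, 250, 500, 1000])  # pg/ml
std_od   = np.array([0.08, 0.13, 0.22, 0.38, 0.66, 1.05, 1.55, 2.05])

params, _ = curve_fit(four_pl, std_conc, std_od,
                      p0=[0.05, 1.0, 200.0, 2.5], maxfev=10_000)

def od_to_conc(od, a, b, c, d):
    """Invert the 4PL to recover concentration from an absorbance reading."""
    return c * ((a - d) / (od - d) - 1.0) ** (1.0 / b)

sample_od = 0.52  # hypothetical BALF well reading
print(f"IL-1beta ~ {od_to_conc(sample_od, *params):.1f} pg/ml")
```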
### 3.3. IRF-1 Deletion Attenuates VILI and Cytokine Release in Mice

We previously identified that IRF-1 is implicated in the regulation of the ALI-induced inflammatory response [21, 22]. To examine the functions of IRF-1 in VILI, we first examined its mRNA and protein levels in the sham, low VT, and high VT groups (Figures 4(a) and 4(b)). The expression of IRF-1 in lung homogenates was significantly increased in the high VT group compared with the sham and low VT groups.

Figure 4: IRF-1 deletion attenuates VILI and cytokine release in mice. The intranuclear protein level (a) and mRNA level (b) of IRF-1 in alveolar macrophages were increased in the high VT group compared with the control and low VT groups. IRF-1 deletion alleviated lung histopathologic damage induced by high-tidal-volume ventilation, shown by gross pathology (e) and HE-stained micrographs at 20× (f) and 400× (g) magnification and assessed using lung injury scores (h). IRF-1 deletion also alleviated the wet/dry (W/D) ratio (c) and BALF protein concentration (d) in the high VT group. The concentrations of IL-6 (i) and TNF-α (j) in BALF after high VT were lower in IRF-1-deficient mice than in wild-type mice. Results are representative of three independent experiments; the results of one representative experiment are shown (n=5/group, ∗p<0.05, ∗∗p<0.01).

Next, IRF-1-/- mice were used to investigate whether IRF-1 mediates VILI and cytokine release. As shown in Figures 4(e)–4(h), lung lesions showed significantly reduced inflammatory cell infiltration in IRF-1-/- mice. Genetic IRF-1 deficiency significantly lowered the wet weight/dry weight ratio and reduced the total protein in the BALF (Figures 4(c) and 4(d)), consistent with our histopathological analysis. To further assess lung injury, we evaluated the levels of IL-6 and TNF-α in BALF. These cytokines increased dramatically in wild-type mice that received high-tidal-volume ventilation (high VT group) but were partially reduced in IRF-1-/- mice (Figures 4(i) and 4(j)). These findings indicate that genetic IRF-1 deficiency decreases lung damage in VILI in mice and that IRF-1 plays an important role in the pathogenesis of VILI.

### 3.4. IRF-1 Was Required for Caspase-1 Activation in AMs

Having shown that VILI is associated with pyroptosis of AMs and IRF-1 expression, we then investigated whether IRF-1 deletion confers protection by inhibiting AM pyroptosis. As shown in Figure 5(a), IRF-1-/- mice that underwent high-tidal-volume ventilation had very little caspase-1-induced pyroptosis. The levels of activated caspase-1, cleaved GSDMD, and IL-1β were examined by western blot analysis; reduced expression of cleaved caspase-1, cleaved GSDMD, and IL-1β was observed in AMs of IRF-1-/- mice subjected to high-tidal-volume ventilation compared with the control group (Figures 5(b) and 5(c)). IRF-1 levels were also assessed by western blot analysis (Figure 5(d)); reduced expression of IRF-1 was observed in AMs of caspase-1-/- mice subjected to high-tidal-volume ventilation compared with the control group. The concentrations of IL-1β and HMGB-1 in BALF after high VT were lower in IRF-1-deficient mice than in wild-type mice (Figures 5(e) and 5(f)). These data indicate that IRF-1 is essential for caspase-1 activation and thereby promotes the pathogenesis of VILI.

Figure 5: IRF-1 was required for caspase-1 activation in AMs.
Flow cytometry showed that IRF-1 deletion attenuated alveolar macrophage pyroptosis under high VT (a). Western blot analysis of the protein expression of caspase-1 p10 (b), IL-1β (b), and GSDMD, including full-length and cleaved forms (c), in alveolar macrophages. Analysis of IRF-1 protein levels (d) in the nucleus of alveolar macrophages indicated that caspase-1 deletion did not affect the expression of IRF-1. The concentrations of IL-1β (e) and HMGB-1 (f) in BALF after high VT were lower in IRF-1-deficient mice than in wild-type mice. Results are representative of three independent experiments; the results of one representative experiment are shown (n=5/group, ∗p<0.05, ∗∗p<0.01).
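The western blot results in Figures 1–5 rest on band intensities quantified with ImageJ (Section 2.8). A common downstream step, sketched here with assumed numbers rather than the study's measurements, is to normalize each target band to its loading control and express the result as fold change over the control group.

```python
# Sketch of densitometry normalization for western blot bands quantified in
# ImageJ. All intensity values are assumed numbers, not measurements from
# this study; GAPDH serves as the loading control as in Section 2.8.
cleaved_gsdmd = {"sham": 1200.0, "low VT": 1350.0, "high VT": 4100.0}
gapdh         = {"sham": 9800.0, "low VT": 9650.0, "high VT": 9900.0}

normalized = {g: cleaved_gsdmd[g] / gapdh[g] for g in cleaved_gsdmd}
fold_change = {g: normalized[g] / normalized["sham"] for g in normalized}

for group, fc in fold_change.items():
    print(f"{group}: {fc:.2f}-fold vs sham")
```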
## 4. Discussion
It has been identified that inhibition or knockout of caspase-1 or IRF-1 has a protective effect against many inflammatory diseases [25–27]. In this study, we demonstrated that caspase-1-related pyroptosis may be an important mechanism in the pathogenesis of experimental VILI. Moreover, IRF-1 may positively regulate caspase-1-dependent pyroptosis and the release of inflammatory factors in mechanical lung injury. Our study therefore adds to the accumulating evidence for links between IRF-1 and pyroptosis-related molecules [28].

Clinically, mechanical ventilation is the dominant treatment strategy for ARDS. Ventilator-associated pneumonia (VAP) is one of the most common complications in patients with severe pneumonia and ARDS, and alveolar damage caused by mechanical force can further complicate the condition and prognosis of ALI/ARDS. In our study, injurious mechanical ventilation alone, in the absence of infection, induced AM pyroptosis in a caspase-1-associated manner. Our findings indicate that caspase-1-dependent pyroptosis potentiates the inflammatory response in VILI.

Prior studies have identified a pivotal role for IRF-1 in the mechanisms of ALI/ARDS. As a transcription factor that is also involved in tumor-related signaling pathways, IRF-1 is often elevated in patients with ARDS [29]. In addition, we found that IRF-1 deletion significantly alleviated lung injury in a mouse model of LPS-induced ALI [21, 22]. These studies propose that IRF-1 plays a critical role in mediating the cytokine storm of ALI/ARDS. However, no IRF-1-related signaling pathway contributing to VAP or VILI had been studied before. In our study, IRF-1 was significantly upregulated in AMs in high VT-induced lung injury, and caspase-1-induced pyroptosis of AMs and the accompanying inflammation were impaired after IRF-1 knockout. IRF-1 therefore appears to act upstream of caspase-1 and pyroptosis-related molecules; in other words, IRF-1 is a potential transcription factor implicated in caspase-1-related pyroptotic cell death. Previous studies showed that the caspase-1 gene might be regulated by IRF-1 via a CRE site; additionally, caspase-1 upregulation could not be observed in oligodendrocyte progenitor cells after IFN stimulation in the absence of IRF-1 [30, 31].

In mechanical ventilation-induced lung injury, there were fewer inflammatory factors in serum and BALF, which is distinct from the pathophysiological process of LPS-induced ALI, in which the inflammatory response is amplified from the onset. Nonetheless, caspase-1-induced pyroptosis and the release of related inflammatory factors, including IL-1β, IL-18, and HMGB-1, were partially responsible for the pulmonary pathology in VILI. It has been confirmed that the adapter protein myeloid differentiation factor 88 (MyD88) can recruit members of the IRF family of transcription factors downstream of toll-like receptor (TLR) signaling to induce specific genes [32, 33]. In our study, IRF-1 knockout markedly reduced the effects described above, so IRF-1 appears to regulate pyroptosis-associated cytokines.

In conclusion, our study highlights the important role of caspase-1 and the promoting effect of IRF-1 in the pathogenesis of VILI. IRF-1 and pyroptosis-related inflammatory factors may prove to be therapeutic targets or early warning signals in patients undergoing mechanical ventilation. However, our animal findings should be further verified in a prospective clinical study.

---

*Source: 1002582-2022-04-15.xml*
2022
# Genetic Relationship and Evolution Analysis among Malus Mill Plant Populations Based on SCoT Molecular Markers

**Authors:** Yuan Yao

**Journal:** Computational and Mathematical Methods in Medicine (2022)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2022/1002624

---

## Abstract

The Malus Mill genotype is highly heterozygous, and many theoretical problems, such as the genetic relationships and evolutionary processes among germplasm, are difficult to resolve with traditional analysis methods. The development of SCoT (start codon targeted polymorphism) molecular markers suitable for apples is therefore of great significance for studying the origin, evolution, genetic relationships, and genetic diversity of Malus Mill germplasm resources. In this paper, the genetic relationships and evolution of 15 materials were analyzed with SCoT molecular markers. The results showed that the gene differentiation coefficient values of four Malus Mill species at the species level were 0.423, 0.439, 0.428, and 0.460, respectively, indicating obvious genetic differentiation among the populations of these four species, with some differences among the populations of the different species. The gene differentiation coefficient of coextensive populations with different geographical distributions varied from 0.086 to 0.177 (average 0.138), indicating that species in coextensive composite populations have high genetic similarity and close genetic relationships. These results show that SCoT molecular markers can be used effectively for analyzing intraspecific genetic relationships and identifying intraspecific strains of Malus Mill plants.

---

## Body

## 1. Introduction

Malus Mill is a genus of Rosaceae with high economic value. Malus Mill plants such as M. halliana Koehne, M. baccata (L.) Borkh., M. hupehensis (Pamp.) Rehd., and M. micromalus are important garden trees, and many species are used as rootstocks in apple production [1–3]. The traditional classification of Malus Mill is based on botanical characters, such as the state of the leaves in buds, whether the leaves are entire or shallowly lobed, the number of styles and the color of the stamens, whether the sepals are shed or persistent, and the presence or absence of stone cells in the pulp, together with experimental taxonomic evidence such as chromosomes, plant chemical components, and isozymes. With the development of molecular technology in recent years, it has become mainstream to use molecular marker technology to reveal the genetic relationships between apple germplasm resources and materials, analyze the differences among materials at the genome level, and thereby assist breeders in choosing suitable combinations of apple hybrid parents [4–6]. However, the genomic sites revealed by different markers differ, and the sequence information underlying those sites may not be transcribed and expressed.

A genetic marker is an easily recognizable manifestation of a biological genotype. As the understanding of genes has progressed from phenomenon to essence, genetic markers have gradually developed from simple morphological, cytological, and biochemical markers to DNA molecular markers, which directly reflect genetic polymorphism at the DNA level [7]. The SCoT (start codon targeted polymorphism) marker is a marker method created by Collard and colleagues in rice. This marker is not only highly polymorphic but also convenient to use, and it has already been applied in research on many fruit trees and other plants.
In addition, there are markers that use specific primer pairs to amplify and analyze the polymorphism of specific DNA regions, mainly including AFLP (amplified fragment length polymorphism), SCAR (sequence characterized amplified regions), and STS (sequence tagged site), among which AFLP markers are widely used. The SCoT marker has been successfully applied to citrus [8], Dendrobium candidum, and other crops [9, 10], and much progress has been made in research on genetic diversity among species and varieties and in the genetic analysis of hybrid offspring. As more and more evidence is introduced into the systematic study of this genus, for example, whether the calyx of the fruit persists, the growth state of the leaves in buds, and whether the fruit has stone cells, together with experimental classification systems based on chromosomes, plant chemical components, enzymes, and molecular biology, classification has become more scientific and better reflects the genetic relationships and evolutionary paths among plant groups in this genus.

The emergence and development of DNA molecular markers have made molecular systematics one of the newest experimental classification methods, and SSR (simple sequence repeat) markers, with their unique advantages, have been widely used in apple research [11, 12]. However, the development of traditional genomic SSR markers requires complicated steps, such as constructing a genomic DNA library, probe hybridization, cloning, and sequencing [13], which is time-consuming, labor-intensive, and costly, so the application of SSR markers is greatly limited. As a newer marker system, the SCoT marker has been applied in many plants, such as grape, sugarcane, wheat, barley, soybean, and rice, for purposes such as genetic diversity analysis, genetic map construction, and studies of systematic evolution [14, 15]. Research on the systematic classification and genetic relationships of apple cultivars based on SCoT technology has not been reported. In this study, making full use of database resources and developing new SCoT markers from the Malus Mill EST database will not only further enrich the number of markers but also provide a new way to study the genetic relationships of this genus.

## 2. Related Work

The SCoT marker can not only target genes closely related to traits but also track those traits. It has the advantages of simple operation, high polymorphism, abundant genetic information, low cost, and strong primer universality, of which the most prominent are its high polymorphism detection efficiency and abundant genetic information. At present, it has been widely used in research on the genetic diversity, population genetic structure, germplasm identification, differential gene expression, and molecular genetic maps of plant germplasm resources. Literature [16] constructed DNA fingerprints of cultivated Hemarthria spp. plants using EST-SSR and SCoT markers. Literature [17] applied SCoT molecular markers to the genetic analysis of Nicotiana plants and the identification of interspecific hybrids and found that SCoT primers can be used to analyze the genetic relationships between tobacco species and to identify distant hybrids. Literature [18] used SCoT diversity markers to characterize the difference in genetic diversity between wild and cultivated accessions of Boehmeria nivea (L.) Gaudich.
Literature [19] similarly applied SCoT markers to the genetic analysis of Nicotiana plants and the identification of interspecific hybrids. At present, the application of this technology in medicinal plants has also been reported.

Genetic diversity refers to the sum of the genetic variation among different groups or among individuals within a group, and it is the basis and an important component of biodiversity. Evaluating the genetic diversity of natural plant populations and artificially cultivated strains is the basis for effectively protecting, developing, and utilizing germplasm resources. In literature [20], 100 pairs of SSR primers were developed and designed from the citrus EST database to detect the genetic diversity and heterozygosity of two different citrus genera, Citrus sinensis (L.) Osbeck and Poncirus trifoliata (L.) Raf. In literature [21], sixteen pairs of SSR primers were selected and designed from grape ESTs to detect the genetic diversity of seven grape varieties, among which 10 pairs of primers amplified polymorphic bands in the tested materials. In literature [22], cluster analysis showed that Chinese varieties and other varieties from around the world clustered into two separate groups, while the hybrids of peach and apricot were genetically closer to peach and clustered together as an outer group. Literature [23] analyzed the genetic relationships among wild apples, cultivated apples, and ornamental crabapples originating in Belgium and Germany using AFLP and SSR markers. Literature [24] also used SSR and SRAP markers to analyze the genetic relationships of apples; in that study, the two molecular marker systems AFLP and SSR were used to study the genetic relationships among 41 Malus Mill plant types, and the differences between the two techniques were discussed, providing a reference for the application of AFLP and SSR in the genetic relationship analysis of apples. In literature [25], RAPD (random amplified polymorphic DNA) technology was applied to study the genetic relationships of Malus Mill plants, and the results were basically consistent with previous studies; however, only 10 primers were screened in that study, so for Malus Mill plants with a complex origin and evolution the results have reference value only and cannot fundamentally resolve our questions. In literature [26], AFLP technology was applied to the study of the relationships between Malus Mill plants, and most of the classification results agreed with previous studies; although this method is accurate, it is still only a reference and needs to be combined with other research results. Literature [27] expounds the practical value of M. hupehensis (Pamp.) Rehd.: compared with several other Malus Mill plants with apomixis, M. hupehensis (Pamp.) Rehd. has the strongest apomictic ability. The SSR molecular marker results for M. baccata (L.) Borkh. in literature [28] showed that all materials had high genetic diversity, among which M. baccata (L.) Borkh. from Hebei had the highest.

Malus Mill, a classically "difficult genus", is characterized by diverse plant forms and complex variation of characters, and the overlap of characters among many species makes them difficult to classify.
Due to widespread interspecific hybridization, many species have complicated classification problems and are difficult to identify. Literature [29] holds that M. hupehensis (Pamp.) Rehd. should be tied to M. baccata (L.) Borkh., while literature [30] holds that M. hupehensis (Pamp.) Rehd. should be incorporated into M. baccata (L.) Borkh. as a species; more powerful evidence is still needed to settle this. M. hupehensis Rehd. var. taiensis and M. hupehensis (Pamp.) Rehd. var. mengshanensis are two varieties of M. hupehensis (Pamp.) Rehd., and strong evidence is likewise needed to clarify their relationship with each other and with M. hupehensis (Pamp.) Rehd. Literature [31] holds that M. hupehensis (Pamp.) Rehd. has a multi-point origin, but the origin mechanism has not been determined. The wide application of molecular marker technology has brought hope for solving this problem; compared with other technologies, molecular markers are expressed at the DNA level, show higher polymorphism, and are not restricted by the environment. However, using primers to amplify only a few small DNA fragments for cluster analysis can reveal genetic diversity but cannot provide strong evidence for judging the genetic relationships between species, and a large number of studies also show that such approaches alone cannot fundamentally solve this problem.

Gene mapping is an important condition for obtaining excellent genes. When new genes appear in genetic breeding work, it is necessary to locate them on specific chromosomes and to measure their arrangement order and distances on the chromosomes so that genes can be located quickly and accurately. The key to gene mapping is to determine the parental combinations of the populations used for mapping. Molecular markers play an important role in fruit tree germplasm resources; they have many advantages, such as greatly improving the efficiency of breeding animal and plant varieties, helping to develop and identify new varieties, and reducing breeding costs through marker-assisted breeding. Developing more new molecular markers is the future direction of molecular marker technology and of researchers' efforts, and in-depth research on fruit tree germplasm resources with molecular markers benefits all related fields, a trend that has become irreversible.

## 3. Research Method

### 3.1. Materials and Methods

#### 3.1.1. Material

The materials used were M. hupehensis (Pamp.) Rehd. var. mengshanensis, M. hupehensis (Pamp.) Rehd., M. hupehensis Rehd. var. taiensis, M. toringo De Vriese, M. toringoides (Rehd.) Hughes, M. sikkimensis (Wenz.) Koehne, Riwa crabapple, and M. rockii Rehder. The collected Malus Mill fruits were mashed and washed with water to separate the seeds from the pulp and dried to remove impurities, and the complete, full seeds were selected and stored in a refrigerator at 4°C for later use. The treated seeds were placed in a refrigerator at 4°C to break dormancy at low temperature, and after 21 days the seeds were transferred to an incubator at 28°C. Cross experiments were conducted on the time required to break dormancy and the optimum temperature for seed germination after dormancy was broken.
It was found that seed dormancy could essentially be broken in about 21 days and that 28°C was the optimum temperature for seed germination, with the time needed to break dormancy differing slightly among species.

The main instruments were a ProFlex™ PCR System thermal cycler, a Multifuge X1R high-speed refrigerated centrifuge, a Gel Doc™ EZ gel imaging system, a JY600C electrophoresis tank, and a Thermo Scientific NanoDrop 2000 spectrophotometer.

Among the published SCoT primers for Malus Mill plants, the primers with high polymorphism and good stability were selected for the experiment. Three fluorescent labels, FAM (6-carboxyfluorescein), HEX (hexachlorofluorescein), and TAMRA (carboxytetramethylrhodamine), were added to the 5′ ends of the SCoT primers.

### 3.2. Method

#### 3.2.1. Extraction and Concentration Determination of DNA

The total DNA of apple leaves was extracted by an improved CTAB (cetyltrimethylammonium bromide) method (Figure 1):
(1) Put an appropriate amount of apple leaves in a mortar, add a small amount of quartz sand, grind into powder, and divide into 1.5 mL Eppendorf tubes.
(2) Lyse the cells for 30–60 min in a 65°C constant-temperature water bath, inverting the centrifuge tube 2–3 times during the incubation.
(3) Collect the supernatant, add 1/10 volume of 3 mol·L−1 NaAc and 2–2.5 volumes of cold absolute ethanol, and mix carefully.
(4) Place the tube in a refrigerator at −20°C for 30 min, gently pick out the flocculent precipitate, and transfer it to a new centrifuge tube (or centrifuge at 12,000 rpm for 15 min and pour off the supernatant).
(5) Dry the DNA at room temperature, add 5 μL of 1× TE to dissolve the DNA in the EP tube, add 1 μL RNase, and keep at 37°C for 1 h or at −4°C overnight.
(6) Store the extracted DNA in a refrigerator at −20°C.

Figure 1: Process of extracting total DNA from apple leaves.

The extracted DNA was checked by 1% agarose gel electrophoresis. DNA showing high brightness and a clear main band without smearing was diluted and measured in the ultraviolet spectrophotometer; the OD260/OD280 ratio was between 1.7 and 1.8, indicating high purity. The DNA concentration was calculated from the absorbance reading as concentration (ng·μL−1) = OD260 × 50 × dilution factor.
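The concentration formula above rests on the standard conversion that an A260 of 1.0 corresponds to roughly 50 ng/μL of double-stranded DNA. A small sketch of that arithmetic is given below; the absorbance readings and dilution factor are assumed example values, not measurements from this study, and the purity check simply uses the A260/A280 window of 1.7–1.8 quoted in the text.

```python
# Sketch of DNA quantification from spectrophotometer readings.
# Uses the standard conversion A260 of 1.0 ~ 50 ng/uL dsDNA; the readings
# below are assumed example values, not measurements from this study.
def dsdna_concentration_ng_per_ul(a260, dilution_factor=1.0):
    return a260 * 50.0 * dilution_factor

def purity_ok(a260, a280, low=1.7, high=1.8):
    """Acceptable purity window used in the text (A260/A280 of 1.7-1.8)."""
    ratio = a260 / a280
    return low <= ratio <= high, ratio

a260, a280, dilution = 0.35, 0.20, 10.0   # assumed example readings
conc = dsdna_concentration_ng_per_ul(a260, dilution)
ok, ratio = purity_ok(a260, a280)
print(f"concentration: {conc:.0f} ng/uL, A260/A280 = {ratio:.2f}, purity ok: {ok}")
```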
#### 3.2.2. SCoT-PCR Amplification Procedure and Product Detection

Each organism has a specific chromosome number, so it is particularly important to choose appropriate materials for counting chromosomes. Literature [26] holds that the development trend of angiosperm karyotypes is from symmetry to asymmetry; that is, plants occupying a relatively primitive taxonomic position in systematic evolution mostly have symmetrical karyotypes, whereas more evolved plants mainly have asymmetrical karyotypes.

The following scientific problems remain to be solved:
(1) The level of genetic diversity in Malus Mill plants and the reasons for its formation
(2) The degree and direction of interspecific hybridization and introgression among coextensive species and their influence on species formation and differentiation
(3) The origin of hybridization in Malus Mill plants, the revelation of the "identity" of the hybrids, and the reasons for the differences among Malus Mill plant populations distributed in foreign countries

The above research will provide favorable evidence for revealing the mechanism of species formation and evolution in Malus Mill. The technical route is shown in Figure 2.

Figure 2: Technology roadmap.

The PCR amplification reaction system was improved on the basis of the optimized system obtained in reference [22], and the final reaction system (20 μL in total) was as follows: Premix Taq™ 10 μL, DNA template 2 μL, primer 2 μL, and ddH2O to volume. The reaction was then run in a PCR instrument with the following program: pre-denaturation at 94°C for 3 min; 33 cycles of denaturation at 94°C for 30 s, annealing at 55°C for 30 s, and extension at 72°C for 1 min; and a final extension at 72°C for 5 min. After amplification was completed, 5–8 μL of the products was loaded onto a 1% agarose gel containing a nucleic acid dye and electrophoresed in 1× TAE electrode buffer for 30 min, and images were then collected and saved with the gel imaging system.

#### 3.2.3. Data Processing

Image Lab (Bio-Rad) was used to analyze the gel images, and the number of DNA bands amplified in each treatment was counted. After manual correction, the D2000 DNA marker was used as the reference standard to estimate the molecular weight of the amplified bands, and bands within 5 bp of each other were regarded as occupying the same position. Statistics were then compiled according to the size and position of the amplified fragments of each sample: a band present at a given position was scored as "1", and an absent band, or a weak band that could not be reliably distinguished, was scored as "0". The similarity coefficients were then calculated with the Qualitative data program in the NTSYS-pc 2.10e software to obtain the similarity coefficient matrix, cluster analysis was carried out with the SHAN program and the UPGMA method, and the cluster diagram was generated with the Treeplot module.
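Section 3.2.3 scores each amplified band as present (1) or absent (0), builds a similarity matrix, and clusters the accessions by UPGMA. The sketch below reproduces that workflow in Python with SciPy; the toy 0/1 matrix is invented, and the choice of the Dice (Nei–Li) coefficient is an assumption, since NTSYS-pc offers several similarity measures and the text does not name one.

```python
# Sketch of the scoring-and-clustering workflow from Section 3.2.3:
# a presence/absence (1/0) band matrix -> pairwise similarity -> UPGMA tree.
# The 0/1 matrix is invented; the Dice (Nei-Li) coefficient is an assumption.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, dendrogram

samples = ["M. hupehensis", "M. baccata", "M. toringo", "M. sikkimensis"]
bands = np.array([  # rows = accessions, columns = scored SCoT bands
    [1, 1, 0, 1, 0, 1, 1, 0],
    [1, 1, 0, 1, 1, 1, 0, 0],
    [0, 1, 1, 0, 1, 0, 1, 1],
    [0, 1, 1, 0, 1, 1, 1, 1],
], dtype=bool)

dice_distance = pdist(bands, metric="dice")      # 1 - Dice similarity
similarity = 1.0 - squareform(dice_distance)     # pairwise similarity matrix
print(np.round(similarity, 2))

tree = linkage(dice_distance, method="average")  # UPGMA = average linkage
dendrogram(tree, labels=samples, no_plot=True)   # or plot with matplotlib
print(tree)
```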
That is to say, the karyotypes of plants that are in a relatively primitive taxonomic position in systematic evolution are mostly symmetrical; The karyotype of relatively more evolved plants is mainly asymmetric.There are still the following scientific problems to be solved in research:(1) The level of genetic diversity in Malus Mill plants and its formation reasons(2) The degree and direction of interspecific hybridization and infiltration of coextensive species and its influence on species formation and differentiation(3) The origin of Malus Mill plant hybridization and the revelation of the “identity” of hybridization, and the reasons for the differences of Malus Mill plant populations distributed in foreign countriesThe above research will provide favorable evidence for revealing the mechanism of species formation and evolution of Malus Mill. The technical route is shown in Figure2:Figure 2 Technology roadmap.The PCR amplification reaction system was improved on the basis of the optimized system obtained in reference [22], and the final reaction system was determined as follows: Premix Tap™10 μL, DNA template 2 μL, primer 2 μL, and finally ddH2O was added, with a total volume of 20 μL L.Then, it was placed in a PCR instrument for amplification reaction, and the amplification procedure was pre-denaturation at 94°C for 3 min. Denaturation at 94°C for 30 s, annealing at 55°C for 30 s, renaturation at 72°C for 1 min, 33 cycles; Finally, 72°C for 5 min.After the PCR amplification procedure is completed, 5 ~ 8μL of amplification products are slowly dropped into 1% agarose gel with nucleic acid dye, then placed in electrophoresis tank without electrophoresis in electrode buffer (1 × TAE) for 30 min, and then images are collected and saved by gel imaging system. ### 3.2.3. Data Processing Image Lab(BIO-RAD) is used to analyze the film, and the number of DNA bands amplified by each treatment is counted. After manual correction, D2000 DNA Marker is used as the reference standard molecular weight to predict the molecular weight of amplified bands, and the molecular weight is regarded as the same position within the range of 5 BP. Then, statistics are made according to the size and site of amplified fragments of each sample. The number of bands at the same position is “1”, and the number of bands with no bands or weak bands that are difficult to distinguish at the same position is “0”.Then, the similarity coefficient is calculated by the Qualitative date program in NTSYS-pc2.10e data software, and the similarity coefficient matrix is obtained. The SHAN program and UPGMA method are used for cluster analysis, and then the cluster diagram is generated by Treeplot module. ## 3.2.1. 
Extraction and Concentration Determination of DNA The total DNA of apple leaves was extracted by improved CTAB (cetyltrimethylamine bromide) (Figure1): (1) Put an appropriate amount of apple leaves in a mortar, add a small amount of quartz sand, grind into powder, and subpackage into 1.5 mLeppendorf tubes(2) Cells were lysed for 30 min~60 min in the warm bath of 65°C constant temperature water bath pot, during which the centrifugal tube was turned upside down for 2 ~ 3 times(3) Suck the supernatant, add 1/10 times the volume of 3mm⋅L‐1Na Ac and 2 ~ 2.5 times the volume of frozen absolute ethanol, and mix carefully(4) Put it in the refrigerator at-20°C for 30 min, gently pick out the flocculent precipitate in the tube, and put it into a new centrifuge tube (or centrifuge at 12000 rpm for 15 min, and pour out the supernatant).(5) Dry DNA at room temperature, add 1 × TE 5μL dissolved DNA in EP tube, add 1 μL RNase, and keep at 37°C for 1 h or -4°C for one night(6) The extracted DNA was stored in a refrigerator at -20°CFigure 1 Process of extracting total DNA from apple leaves.The extracted DNA was detected by 1% agarose gel electrophoresis. The DNA with high brightness, clear main band and no dispersion band was diluted and put into the ultraviolet spectrophotometer for detection. The ratio of OD260/280 was between 1.7 and 1.8, and the purity was high. The DNA concentration was calculated by probe labeling, and the concentration of DNA sample (μg⋅μL‐1) = OD260×50× dilution multiple×1000. ## 3.2.2. SCoT-PCR Amplification Procedure and Product Detection Each organism has a specific chromosome number, so it is particularly important to choose appropriate materials to count the chromosome number. Literature [26] holds that the development trend of angiosperm karyotype is from symmetry to asymmetry. That is to say, the karyotypes of plants that are in a relatively primitive taxonomic position in systematic evolution are mostly symmetrical; The karyotype of relatively more evolved plants is mainly asymmetric.There are still the following scientific problems to be solved in research:(1) The level of genetic diversity in Malus Mill plants and its formation reasons(2) The degree and direction of interspecific hybridization and infiltration of coextensive species and its influence on species formation and differentiation(3) The origin of Malus Mill plant hybridization and the revelation of the “identity” of hybridization, and the reasons for the differences of Malus Mill plant populations distributed in foreign countriesThe above research will provide favorable evidence for revealing the mechanism of species formation and evolution of Malus Mill. The technical route is shown in Figure2:Figure 2 Technology roadmap.The PCR amplification reaction system was improved on the basis of the optimized system obtained in reference [22], and the final reaction system was determined as follows: Premix Tap™10 μL, DNA template 2 μL, primer 2 μL, and finally ddH2O was added, with a total volume of 20 μL L.Then, it was placed in a PCR instrument for amplification reaction, and the amplification procedure was pre-denaturation at 94°C for 3 min. 
## 4. Results Analysis and Discussion

### 4.1. Genetic Relationship and Its Evolution Results

Successful DNA preparation without partial degradation is the basis and key to SCoT analysis, because the quality of the template, that is, whether it is contaminated with polysaccharides, polyphenols, proteins, quinones, or pigments, affects the subsequent PCR amplification. In this experiment, the improved CTAB method was used to extract apple genomic DNA, and the OD260/OD280 values of the tested materials were all between 1.8 and 2.0. Electrophoresis on a 0.8% agarose gel showed a clear main band with no degradation and high purity, meeting the requirements of the subsequent SCoT analysis.

Among the 9 SCoT primers, SCoT12 amplified the most bands (1,344) and SCoT9 the fewest (688), with an average of 948 bands per primer. The polymorphism ratio of the 9 primers ranged from 72.5% to 100%, with SCoT33 the highest and SCoT12 the lowest, and the average polymorphism ratio (Np) of the 9 SCoT primers was 85.34%. The polymorphism ratio of each primer is shown in Figure 3.

Figure 3: SCoT polymorphism analysis.
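As a small illustration of how the per-primer statistic above is obtained, the sketch below computes the polymorphism ratio from a scored band matrix: a band position is polymorphic if the samples do not all share the same score. The matrix is invented for demonstration and is not the study's data.

```python
import numpy as np

# Hypothetical 0/1 band matrix for one primer: rows = samples, columns = band positions.
bands = np.array([
    [1, 0, 1, 1, 0, 1],
    [1, 1, 1, 0, 0, 1],
    [0, 1, 0, 1, 1, 0],
])

total_bands = bands.shape[1]
col_sums = bands.sum(axis=0)
# A position is monomorphic if every sample scores the same (all 0 or all 1); otherwise polymorphic.
polymorphic = int(np.sum((col_sums > 0) & (col_sums < bands.shape[0])))
np_percent = 100.0 * polymorphic / total_bands
print(f"Polymorphic bands: {polymorphic}/{total_bands} ({np_percent:.1f}%)")
```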
Malus Mill seeds are dormant, so working out the conditions for breaking seed dormancy is the most important step in obtaining experimental material. The fifteen Malus Mill materials studied in this experiment include eleven wild taxa (M. hupehensis (Pamp.) Rehd. var. mengshanensis, M. hupehensis Rehd. var. taiensis, M. hupehensis (Pamp.) Rehd, M. baccata (L.) Borkh, M. rockii Rehder, M. doumeri (Bois.) Chev, M. toringo De Vriese, M. toringoides (Rehd.) Hughes, M. sikkimensis (Wenz.) Koehne, Riwa crabapple, and M. ombrophila Hand.-Mazz) and four cultivated materials (M. ×robusta Rehd, M. micromalus Makino, M. purpurea, and M. 'Donghongguo').

In terms of chromosomal evolution, most of the Malus Mill taxa in this study have metacentric and submetacentric chromosomes with a 2B karyotype, a relatively primitive type. The karyotype of M. ombrophila Hand.-Mazz is 1A, the most primitive, and that of M. toringo De Vriese is 3B, the most derived. Karyotype parameters of the fifteen Malus Mill taxa are shown in Figure 4. The number and proportion of hybridized chromosomes between each probe and the test materials are shown in Figure 5 (panels A, B, and C correspond to M. hupehensis (Pamp.) Rehd. var. mengshanensis, M. hupehensis (Pamp.) Rehd, and M. hupehensis Rehd. var. taiensis used as probes, respectively).

Figure 4: Karyotype parameters of fifteen Malus Mill taxa.

Figure 5: Experimental results of genomic in situ hybridization.

Judging by the number of hybridized chromosomes, the chromosome homology between these taxa is lower than that between M. toringo De Vriese and M. toringoides (Rehd.) Hughes; that is, their genetic relationship is not close. In follow-up work, the range of candidate parents screened for M. hupehensis Rehd. var. taiensis should be expanded.

### 4.2. Genetic Structure of Malus Mill Natural Population

According to Nei's gene diversity analysis, the gene differentiation coefficients at the species level for the four Malus Mill species M. baccata (L.) Borkh, M. rockii Rehder, M. doumeri (Bois.) Chev., and M. toringo De Vriese were 0.423, 0.439, 0.428, and 0.460, respectively, indicating obvious genetic differentiation among the populations of these four species, although the degree of population differentiation differs somewhat among species. The main source of intraspecific genetic variation in the four species is variation among individuals within populations, and the intraspecific genetic differentiation reached a highly significant level (P < 0.001), as shown in Table 1.

Table 1: Genetic structure variance analysis.

| Species | Source of variation | Degree of freedom | Variance | Mean square deviation | Variance component | Percentage variation (%) | P-value |
| --- | --- | --- | --- | --- | --- | --- | --- |
| M. baccata (L.) Borkh | Intergroup | 10 | 1436.22 | 133.02 | 17.26 | 30 | <0.001 |
| | Within population | 44 | 2088.17 | 44.17 | 40.06 | 70 | |
| M. rockii Rehder | Intergroup | 5 | 566.13 | 146.39 | 20.36 | 33 | <0.001 |
| | Within population | 20 | 876.91 | 30.35 | 40.01 | 67 | |
| M. doumeri (Bois.) Chev. | Intergroup | 9 | 1066.87 | 128.16 | 41.33 | 20 | <0.001 |
| | Within population | 41 | 2869.32 | 40.27 | 27.69 | 80 | |
| M. toringo De Vriese | Intergroup | 10 | 1102.75 | 133.69 | 16.87 | 46 | <0.001 |
| | Within population | 44 | 2126.82 | 46.75 | 33.21 | 54 | |

To further explore the relationship between genetic distance and geographical distance, Mantel tests were carried out on the genetic and geographical distances of the four tested species (Figure 6). The results showed that genetic distance and geographical distance were significantly correlated in all four Malus Mill species.

Figure 6: Mantel test of the correlation between geographical distance and genetic distance.
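The Mantel test used above compares two distance matrices by correlating their entries and judging significance with permutations. The sketch below is a minimal, self-contained version of that procedure (a simple permutation Mantel test with a Pearson statistic); the two small matrices are invented for illustration and do not reproduce the study's data.

```python
import numpy as np

def mantel_test(dist_a, dist_b, permutations=999, seed=0):
    """Permutation Mantel test between two symmetric distance matrices."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(dist_a, k=1)           # upper-triangle entries only
    r_obs = np.corrcoef(dist_a[iu], dist_b[iu])[0, 1]
    count = 0
    n = dist_a.shape[0]
    for _ in range(permutations):
        perm = rng.permutation(n)
        permuted = dist_b[np.ix_(perm, perm)]         # permute rows and columns together
        if abs(np.corrcoef(dist_a[iu], permuted[iu])[0, 1]) >= abs(r_obs):
            count += 1
    p_value = (count + 1) / (permutations + 1)
    return r_obs, p_value

# Tiny invented example: genetic vs. geographical distances among four populations.
genetic = np.array([[0.0, 0.1, 0.4, 0.5],
                    [0.1, 0.0, 0.3, 0.4],
                    [0.4, 0.3, 0.0, 0.2],
                    [0.5, 0.4, 0.2, 0.0]])
geographic = np.array([[0, 10, 40, 55],
                       [10, 0, 35, 45],
                       [40, 35, 0, 15],
                       [55, 45, 15, 0]], dtype=float)

r, p = mantel_test(genetic, geographic)
print(f"Mantel r = {r:.3f}, p = {p:.3f}")
```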
The clustering of the 15 materials by principal coordinate analysis is shown in Figure 7.

Figure 7: Principal coordinate cluster analysis of 15 Malus Mill and Pyrus materials.

The 15 test materials fell into three groups. The three Pyrus species form the first group. M. doumeri (Bois.) Chev, grassland crabapple, and narrow-leaf crabapple form the second group, with grassland crabapple and narrow-leaf crabapple most closely related. The third group is divided into four subgroups: the first subgroup includes Hawthorn crabapple, M. ombrophila Hand.-Mazz, and Dianchi crabapple, with M. ombrophila Hand.-Mazz closely related to Dianchi crabapple; the second subgroup includes M. sikkimensis (Wenz.) Koehne, M. toringoides (Rehd.) Hughes, and crabapple; and the fourth subgroup includes M. halliana Koehne, Xinjiang wild apple, betel seed, catalpa seed, flat-edge crabapple, Chinese apple, Oriental apple, forest apple, and brown crabapple.

Population structure analysis of the 15 tested materials with Structure 2.3.1 showed that the number of allele-frequency characteristic types K increased continuously. At K = 3, judging from the distribution of the maximum covariant Q value of the 15 materials, 13 materials had Q ≥ 0.6, accounting for 69.7% of the tested materials, indicating that the genetic background of these materials is relatively unmixed (Figure 8).

Figure 8: Distribution of the maximum Q value of each group in the mathematical model.

The effective multiplex ratio and efficacy index evaluate the efficiency of polymorphic marker sites and the degree of site polymorphism (Figure 9), where ANEA is the average effective allele number and AEHL is the average expected heterozygosity per locus.

Figure 9: Marker efficiency analysis.

Between the two marker systems, the effective multiplex ratio and efficacy index of SCoT are higher than those of EST-SSR, so the SCoT marker has the higher polymorphism detection efficiency. The marker index, which combines the average expected heterozygosity per locus and the effective multiplex ratio, is 2.337 for SCoT, higher than the 1.199 obtained for EST-SSR.

### 4.3. The Interspecific Genetic Relationship of Sympatric Malus Populations

To study whether the population genetic structures of sympatrically distributed Malus species influence one another, the genetic structure of the apple species within four sympatrically distributed Malus populations, designated A, B, C, and D, was analyzed (Table 2).

Table 2: Population genetic distances within the four sympatric groups.

Group A
| Species | M. baccata (L.) Borkh | M. rockii Rehder | M. doumeri (Bois.) Chev |
| --- | --- | --- | --- |
| M. baccata (L.) Borkh | — | | |
| M. rockii Rehder | 0.096 | — | |
| M. doumeri (Bois.) Chev | 0.052 | 0.055 | — |

Group B
| Species | M. baccata (L.) Borkh | M. rockii Rehder | M. doumeri (Bois.) Chev | M. toringo De Vriese |
| --- | --- | --- | --- | --- |
| M. baccata (L.) Borkh | — | | | |
| M. rockii Rehder | 0.093 | — | | |
| M. doumeri (Bois.) Chev | 0.046 | 0.042 | — | |
| M. toringo De Vriese | 0.033 | 0.049 | 0.013 | — |

Group C
| Species | M. baccata (L.) Borkh | M. rockii Rehder | M. doumeri (Bois.) Chev |
| --- | --- | --- | --- |
| M. baccata (L.) Borkh | — | | |
| M. rockii Rehder | 0.022 | — | |
| M. doumeri (Bois.) Chev | 0.026 | 0.021 | — |

Group D
| Species | M. baccata (L.) Borkh | M. rockii Rehder | M. doumeri (Bois.) Chev | M. toringo De Vriese |
| --- | --- | --- | --- | --- |
| M. baccata (L.) Borkh | — | | | |
| M. rockii Rehder | 0.103 | — | | |
| M. doumeri (Bois.) Chev | 0.168 | 0.027 | — | |
| M. toringo De Vriese | 0.144 | 0.026 | 0.026 | — |

The gene differentiation coefficients of the sympatric populations from different geographical distributions ranged from 0.086 to 0.177 (average 0.138), indicating that the species within these sympatric composite populations are genetically very similar and closely related. The average gene flow among individuals within a sympatric population is 3.362, indicating frequent gene exchange among the sympatrically distributed Malus Mill species. However, the degree of interspecific differentiation differs among sympatric populations from different regions; differentiation among species is highest in group A, showing that the extent of interspecific gene exchange differs among regions. The more frequent the gene exchange, the higher the genetic similarity among species.
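For readers unfamiliar with the statistics quoted above, the sketch below shows the textbook relationships behind Nei's gene differentiation coefficient and the gene-flow estimate: Gst = (Ht − Hs) / Ht, and Nm ≈ (1 − Gst) / (4·Gst) for nuclear markers. The allele-frequency arrays are invented for illustration and are not the study's data, and dedicated programs such as POPGENE apply further sample-size corrections.

```python
import numpy as np

def expected_heterozygosity(freqs):
    """Nei's expected heterozygosity for one locus, H = 1 - sum(p_i^2)."""
    freqs = np.asarray(freqs, dtype=float)
    return 1.0 - np.sum(freqs ** 2)

# Invented allele frequencies at one locus for three subpopulations (illustration only).
subpop_freqs = [
    [0.70, 0.30],
    [0.55, 0.45],
    [0.20, 0.80],
]

# Hs: mean within-subpopulation heterozygosity; Ht: heterozygosity of the pooled gene pool.
hs = np.mean([expected_heterozygosity(p) for p in subpop_freqs])
ht = expected_heterozygosity(np.mean(subpop_freqs, axis=0))

gst = (ht - hs) / ht               # gene differentiation coefficient
nm = (1.0 - gst) / (4.0 * gst)     # approximate gene flow among subpopulations

print(f"Hs = {hs:.3f}, Ht = {ht:.3f}, Gst = {gst:.3f}, Nm = {nm:.2f}")
```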
### 4.4. Discussion

Natural hybridization occurs among Malus Mill plants [10]. The presence of hybrids in the wild and of different types of variation within species makes classification and identification based on morphology alone complicated [20]. The fruit shape of some wild apples is transitional between two or more species, or leans toward one of them, as in the Malus Mill species complex, which makes species difficult to classify and identify morphologically, so molecular markers are needed for classification and identification.

Of the 35 primer pairs designed, 16 pairs amplified polymorphic products. Possible reasons why some primers failed to amplify products are [10]: the primer sequence spans two exons; a long intron lies between the two primers, so no product can be amplified; incorrect sequence information was used in primer design; when multiple alleles are present in the analyzed materials, their coding regions and functions may be the same while the DNA sequences are not identical, so the amplified products can differ in size from expectation; or the specificity of the primer is not high enough to amplify only sequences homologous to the primer.

The results of the population structure analysis are also consistent with those of the cluster analysis and the principal coordinate analysis. SCoT, IRAP, and SSR all divide the apple materials into three groups at K = 3: Malus Mill materials from North and Northwest China, materials from East and South China, and materials from Southwest China. Their structure plots show that most apple accessions exchange genes, and Malus Mill plants from nearby regions are more likely to do so, indicating that whether Malus Mill populations hybridize is related to their regional distribution.

These results agree with the AMOVA analysis (Table 1), in which only a small part of the variation of the four Malus species comes from among populations. According to the traditional concept of gene flow, gene flow can prevent population differentiation caused by local adaptation or genetic drift. The gene flow among the four Malus species is less than 1, and geographical isolation may pose some obstacles to gene exchange among populations. According to the gene differentiation coefficients, genetic similarity among the four sympatric populations is high, and gene flow among sympatric populations is higher than that among populations within species. In addition, analysis of the genetic distances within the sympatric composite populations (Table 2) shows that the genetic distances among species within a population are small and that individuals always cluster first with sympatric species, indicating that the gene flow among sympatric species results from hybridization.
In addition, during the continuing interspecific hybridization and backcrossing within these populations, new individuals with homozygous genetic backgrounds have also appeared, such as the M. doumeri (Bois.) Chev individuals with homozygous genetic backgrounds in the subgroup, which suggests that interspecific hybridization in Malus may lead to the formation of new species.

Karyotype analysis and genomic in situ hybridization both show that the classification system of Malus Mill still has imperfections. M. hupehensis Rehd. var. taiensis and M. hupehensis (Pamp.) Rehd. var. mengshanensis are both varieties of M. hupehensis (Pamp.) Rehd, yet the number of their chromosomes carrying hybridization signals is lower than in M. toringo De Vriese and M. toringoides (Rehd.) Hughes, and the number and proportion of chromosomes with hybridization signals in M. sikkimensis (Wenz.) Koehne, M. toringoides (Rehd.) Hughes, and M. toringo De Vriese are also higher than in M. hupehensis Rehd. var. taiensis, so the genetic relationships among Malus Mill plants need further study.

In this experiment, the results of cluster analysis, principal coordinate analysis, and population structure analysis are broadly consistent, and all three methods effectively reveal the genetic relationships and genetic diversity among the Malus Mill materials. However, there are obvious differences in the grouping order of groups and accessions, which may be related to the fact that the Malus Mill materials include both wild and cultivated species and many interspecific hybrids. Genetic relationship analysis at the molecular level may differ from morphological analysis based on selected characters, and different molecular marker techniques may also give somewhat different results. Literature [26] studied the genetic diversity of apple male resources by cluster analysis and principal coordinate analysis and effectively distinguished apple interspecific resources from different regions. Literature [13] performed population structure analysis and principal coordinate analysis on 146 Malus Mill materials and revealed their genetic relationships and diversity well. Thus, principal coordinate analysis, cluster analysis, and population structure analysis are effective and reliable for identifying the genetic relationships of Malus Mill plants.

In summary, in neighboring and sympatric distribution areas of Malus Mill, the transitional hybrid zones formed by interspecific hybridization provide, on the one hand, a source of variation for species evolution; on the other hand, the Malus Mill plants in these hybrid zones act as interspecific transition groups that mediate gene flow and accelerate gene exchange among Malus Mill plants. It is inferred that during the evolution of Malus Mill species, incomplete reproductive isolation has allowed a certain amount of interspecific gene exchange. The evolution and formation of Malus Mill plants are related to parent species with neighboring and sympatric distributions, which fits a parapatric and sympatric model better [16].
## 5. Conclusion

In this study, SCoT molecular markers were used to examine the genetic relationships among Malus populations, the influence of interspecific hybridization and introgression on species formation and differentiation was discussed, and the formation and evolutionary trend of hybrid Malus plants was analyzed. The main conclusions are as follows.

SCoT markers are highly polymorphic and informative; they can reveal the level of genetic diversity and the genetic relationships among and within Malus Mill species, and they are an effective molecular marker for studying the genetic structure of natural Malus Mill populations, interspecific gene introgression, and related questions.

The results indicate that M. sikkimensis (Wenz.) Koehne and M. toringoides (Rehd.) Hughes are closely related to the M. baccata (L.) Borkh group. M. hupehensis Rehd. var. taiensis and M. hupehensis (Pamp.) Rehd. var. mengshanensis are varieties of the same species, M. hupehensis (Pamp.) Rehd; although their chromosome homology is not high, their morphology is similar. Riwa crabapple differs considerably from M. sikkimensis (Wenz.) Koehne, which supports treating it as an independent species and supports the view that M. hupehensis (Pamp.) Rehd has a multi-point origin.

The diversity level of these Malus plants is higher than that of the other apple species distributed in the same areas, and they have certain advantages in numbers and adaptability, which promotes gene exchange among Malus plants.

The classification system and genetic relationships of Malus Mill plants are becoming clearer, but the intra-genus classification system still needs improvement, and the relationships among and within species require further experimental evidence; the classification of Malus Mill plants therefore still needs further study.

---

*Source: 1002624-2022-06-17.xml*
--- ## Abstract Malus Mill genotype is highly heterozygous, and many theoretical problems such as genetic relationship and evolution process among germplasm are difficult to be solved by traditional analysis methods. The development of SCoT(start codon targeted polymorphism) molecular markers suitable for apples is of great significance for studying the origin, evolution, genetic relationship and genetic diversity of Malus Mill germplasm resources. In this paper, the genetic relationship and evolution of 15 materials were analyzed by SCoT molecular marker. The results showed that the gene differentiation coefficient values of four Malus Mill plants at the species level were 0.423, 0.439, 0.428 and 0.460, respectively, which indicated that there was obvious genetic differentiation among the populations of these four Malus Mill plants, but there were some differences among the populations of different Malus Mill plants. The gene differentiation coefficient of coextensive populations with different geographical distribution varied from 0.177 to 0.086 (average 0.138), which indicated that the genetic similarity of species in coextensive composite populations was high and there was a close genetic relationship among species. This indicates that SCoT molecular markers can be effectively used in the analysis of intraspecific genetic relationship and identification of intraspecific strains of Malus Mill plants. --- ## Body ## 1. Introduction Malus Mill is a genus of Rosaceae with high economic value. Malus Mill plants such as M.hallianaKoehne, M.baccata (L.)Borkh, M.hupehensis (Pamp.)Rehd, M.micromalus are important garden trees.Many species are used as rootstocks in apple production [1–3]. The traditional Malus Mill classification is based on botanical characters such as the state of leaves in buds, the whole or shallow split of leaves, the number of styles and the color of stamens, the shedding or persistent sepals, the presence or absence of stone cells in pulp, and experimental taxonomic evidence such as chromosomes, plant chemical components and isozymes. Based on the development of molecular technology in recent years, it has become the mainstream to use molecular marker technology to reveal the genetic relationship between apple germplasm resources and materials, analyze the differences among materials from the genome level, and then assist breeders to choose suitable combinations of apple hybrid parents [4–6] However, the genomic sites revealed by different markers are different, and the sequence information that can reveal the different sites may not be transcribed and expressed.Genetic marker is an easily recognizable manifestation of biological genotype. With the understanding of gene from phenomenon to essence, genetic marker has gradually developed from simple morphological marker, cytological marker and biochemical marker to DNA molecular marker which can directly reflect genetic polymorphism at the DNA level [7].Scot (start codon targeted polymorphism) marker is a new marker method created by Collard and others in rice. This marker is not only highly polymorphic, but also convenient to operate. At present, it has been applied to the research of many fruit trees and plants. In addition, there are markers that use specific double primers to amplify and analyze the polymorphism of specific DNA regions, mainly including AFLP (amplified fragment length polymorphism), SCAR (sequence characterized amplified regions) and STS(Sequence Tag Site), among which AFLP markers are widely used. 
SCoT marker has been successfully applied to citrus [8], Dendrobium candidum and other crops [9, 10], and much progress has been made in the research of genetic diversity among species and varieties and genetic analysis of hybrid offspring. With more and more evidences being introduced into the systematic study of this genus, for example, whether the calyx of fruit persists, the growth state of leaves in buds, whether the fruit has stone cells or not, and the experimental new classification system such as chromosomes, plant chemical components, enzymes, molecular biology, etc., are more scientific, which can better reflect the genetic relationship and evolution path among plant groups in this genus.The emergence and development of DNA molecular markers make molecular systematics one of the latest experimental classification methods, and SSR (simple sequence repeats) markers with unique advantages have been widely used in Apple research [11, 12].However, the development of traditional genomic SSR markers needs complicated steps such as constructing genomic DNA library, probe hybridization, cloning and sequencing [13], which is time-consuming, labor-intensive and costly. The application of SSR markers is greatly limited. As a new SSR marker, SCoT marker has been applied in many plants such as grape, sugar cane, wheat, barley, soybean, rice and other species such as genetic diversity analysis, genetic map construction and systematic evolution research [14, 15]. Research on systematic classification and genetic relationship of apple cultivars based on SCoT technology has not been reported. In this study, making full use of database resources and developing new SCoT markers from Malus MillEST database will not only further enrich the number of markers, but also provide a new way for the study of genetic relationship of this genus. ## 2. Related Work SCoT marker can not only obtain the target genes closely related to traits, but also track traits. It has the advantages of simple operation, high polymorphism, abundant genetic information, low cost and strong universality of primers, among which the most prominent ones are high polymorphism detection efficiency and abundant genetic information. At present, it has been widely used in the research of genetic diversity, population genetic structure, germplasm identification, gene differential expression and molecular genetic map of plant germplasm resources. Literature [16] Construction of DNA fingerprint of Hemarthria spp cultivated plants by EST-SSR and SCoT markers; Literature [17] The application of SCoT molecular marker correspondence in the genetic analysis of Nicotiana plants and the identification of interspecific hybrids found that SCoT-labeled primers can be used for the analysis of the genetic relationship between tobacco species and the identification of distant hybrids. Literature [18] used SCoT diversity marker to mark the difference of genetic diversity between wild and cultivated species of Boehmerianivea L. Gaudich; Literature [19] The application of SCoT molecular marker correspondence in the genetic analysis of Nicotiana plants and the identification of interspecific hybrids found that SCoT-labeled primers can be used for the analysis of the genetic relationship between tobacco species and the identification of distant hybrids. 
At present, the application of this technology in medicinal plants has been reported.Genetic diversity refers to the sum total of genetic variation among different groups or individuals within a group, which is the basis and important component of biodiversity. The evaluation of genetic diversity of plant natural populations and artificially cultivated strains is the basis of effective protection, development and utilization of germplasm resources. Literature [20] 100 pairs of SSR primers were developed and designed from EST database of citrus to detect the genetic diversity and heterozygosity of two different citrus genera, Citrus sinensis(L.)Osbesk and Poncirus trifoliate(L.)Raf. Literature [21] Sixteen pairs of SSR primers were selected and designed from EST of grape to detect the genetic diversity of seven grape varieties, among which 10 pairs of primers could amplify polymorphic bands in these tested materials. Literature [22] Through cluster analysis, it is shown that Chinese varieties and other varieties in the world are clustered into two separate groups, respectively, while the hybrids of two peaches and apricots are closer to peaches in genetic relationship, and they are clustered into an outer group together. Literature [23] The genetic relationship among wild apples, cultivated apples and ornamental begonia originated in Belgium and Germany was analyzed by AFLP and SSR. Literature [24] also used SSR and SRAP markers to analyze the genetic relationship of apples. In this study, AFLP, SSR and two molecular markers were used to study the genetic relationship among 41 Malus Mill plant types, and the differences between the two techniques were discussed, which provided reference for the application of AFLP and SSR in the genetic relationship analysis of apples. Literature [25] RAPD (Random Amplified Polymorphic DNA) technology was applied to study the genetic relationship of Malus Mill plants, and the results were basically consistent with previous studies. However, in this study, only 10 primers were screened. For Malus Mill plants with complex origin and evolution, the results only have reference value, and cannot fundamentally solve our questions. Literature [26] AFLP technology has been applied to the study of the relationship between Malus Mill plants, and most of the classification results are the same as those of previous studies. Although this method is accurate, it is only for reference, and it still needs to be combined with other research results. Literature [27] expounds the practical value of M.hupehensis (Pamp.)Rehd. Compared with several other Malus Mill plants with apomixis, M.hupehensis (Pamp.)Rehd has the strongest apomixis ability. The results of SSR molecular markers of M.baccata (L.)Borkh in literature [28] showed that all materials had high genetic diversity, among which M.baccata (L.)Borkh from Hebei had the highest genetic diversity.Malus Mill, as a classified “difficult genus”, is characterized by diverse plant forms, complex variation of characters, and the intersection of characters among many species is difficult to classify. Due to the widespread interspecific hybridization, many species have complicated classification problems, which are difficult to identify. Literature [29] holds that M.hupehensis (Pamp.)Rehd should be tied to M.baccata (L.)Borkh, while literature [30] holds that M.hupehensis (Pamp.)Rehd should be incorporated into M.baccata (L.)Borkh as a species. More powerful evidence is still needed to prove this. M.hupehensis Rehd. 
var.taiensis and M.hupehensis (Pamp.)Rehd.var. Mengshanens are two varieties of M. Hupehensis (Pamp.) Rehd, and they also need a strong evidence to prove their relationship with each other and with M. Hupehensis (Pamp.) Rehd.var. Mengshanens. Literature [31] holds that M.hupehensis (Pamp.)Rehd is a multi-point origin, but its origin mechanism has not been determined. The wide application of molecular marker technology has brought us hope to solve the problem. Compared with other technologies, this technology is expressed in the form of DNA, which has higher polymorphism and is not restricted by the environment. However, only using primers to amplify some small fragments of DNA for cluster analysis can only find the genetic diversity, and cannot provide strong evidence for judging the genetic relationship between species. A large number of studies also show that this technology can not fundamentally solve this problem.Gene mapping is an important condition for obtaining excellent genes. When new genes appear in genetic breeding work, it is necessary to locate them on specific chromosomes, and measure their arrangement order and distance on chromosomes, so as to quickly and accurately locate genes. The key of gene mapping is to determine the parent combination between the populations used for mapping. Molecular markers play an important role in the germplasm resources of fruit trees. Molecular markers have many advantages, such as greatly improving the efficiency of animal and plant varieties, helping to develop and identify new varieties, and reducing breeding costs through marker-assisted breeding. Developing more new molecular markers is the new direction of molecular marker technology development in the future, and it is also the effort direction of researchers. In-depth research on fruit tree germplasm resources by molecular markers is beneficial to all fields, which has become an irreversible trend. ## 3. Research Method ### 3.1. Materials and Methods #### 3.1.1. Material The materials used are M.hupehensis (Pamp.)Rehd.var. Mengshanens, M. Hupehensis (Pamp.) Rehd, M.hupehensis Rehd. var.taiensis, M.toringo De Vriese, M.toringoides(Rehd.) Hughes, M.sikkimensis (Wenz.)Koehne, Riwacrabapple, M.rockii Rehder. The collected Malus Mill fruits are mashed and washed with water to separate the seeds from the pulp, dried to remove impurities, and the complete and full seeds are selected and stored in a refrigerator at 4°C for later use.The treated seeds were put into a refrigerator at 4°C to break dormancy at low temperature, and after 21 days, the seeds were put into an incubator at 28°C. Cross experiments were conducted on the time of breaking dormancy and the optimum temperature for seed germination after breaking dormancy. It was found that the dormancy of seeds could be basically broken in about 21 days, and 28°C was the optimum temperature for seed germination, and the time of breaking dormancy of different species was slightly different.ProFlex™ PCR System PCR thermal cycler, MultifugeX1R high-speed freezing centrifuge, Gel Doc™ EZ gel imaging system, JY600C electrophoresis tank, Thermo Scientific Nano-Drop 2000 Spectrophotometers spectrophotometer.Among the published SCoT primers of Malus Mill plants, the primers with high polymorphism and good stability were selected for the experiment. 
Among the SCoT-labeled primers, three fluorescent markers FAM (6-carboxy-fluoroscein,), hex (hexachlorofluoroscein) and Tamra (carboxy tetramethylrhodamine) were added to the 5’ end of each pair of primers. ### 3.2. Method #### 3.2.1. Extraction and Concentration Determination of DNA The total DNA of apple leaves was extracted by improved CTAB (cetyltrimethylamine bromide) (Figure1): (1) Put an appropriate amount of apple leaves in a mortar, add a small amount of quartz sand, grind into powder, and subpackage into 1.5 mLeppendorf tubes(2) Cells were lysed for 30 min~60 min in the warm bath of 65°C constant temperature water bath pot, during which the centrifugal tube was turned upside down for 2 ~ 3 times(3) Suck the supernatant, add 1/10 times the volume of 3mm⋅L‐1Na Ac and 2 ~ 2.5 times the volume of frozen absolute ethanol, and mix carefully(4) Put it in the refrigerator at-20°C for 30 min, gently pick out the flocculent precipitate in the tube, and put it into a new centrifuge tube (or centrifuge at 12000 rpm for 15 min, and pour out the supernatant).(5) Dry DNA at room temperature, add 1 × TE 5μL dissolved DNA in EP tube, add 1 μL RNase, and keep at 37°C for 1 h or -4°C for one night(6) The extracted DNA was stored in a refrigerator at -20°CFigure 1 Process of extracting total DNA from apple leaves.The extracted DNA was detected by 1% agarose gel electrophoresis. The DNA with high brightness, clear main band and no dispersion band was diluted and put into the ultraviolet spectrophotometer for detection. The ratio of OD260/280 was between 1.7 and 1.8, and the purity was high. The DNA concentration was calculated by probe labeling, and the concentration of DNA sample (μg⋅μL‐1) = OD260×50× dilution multiple×1000. #### 3.2.2. SCoT-PCR Amplification Procedure and Product Detection Each organism has a specific chromosome number, so it is particularly important to choose appropriate materials to count the chromosome number. Literature [26] holds that the development trend of angiosperm karyotype is from symmetry to asymmetry. That is to say, the karyotypes of plants that are in a relatively primitive taxonomic position in systematic evolution are mostly symmetrical; The karyotype of relatively more evolved plants is mainly asymmetric.There are still the following scientific problems to be solved in research:(1) The level of genetic diversity in Malus Mill plants and its formation reasons(2) The degree and direction of interspecific hybridization and infiltration of coextensive species and its influence on species formation and differentiation(3) The origin of Malus Mill plant hybridization and the revelation of the “identity” of hybridization, and the reasons for the differences of Malus Mill plant populations distributed in foreign countriesThe above research will provide favorable evidence for revealing the mechanism of species formation and evolution of Malus Mill. The technical route is shown in Figure2:Figure 2 Technology roadmap.The PCR amplification reaction system was improved on the basis of the optimized system obtained in reference [22], and the final reaction system was determined as follows: Premix Tap™10 μL, DNA template 2 μL, primer 2 μL, and finally ddH2O was added, with a total volume of 20 μL L.Then, it was placed in a PCR instrument for amplification reaction, and the amplification procedure was pre-denaturation at 94°C for 3 min. 
This was followed by 33 cycles of denaturation at 94°C for 30 s, annealing at 55°C for 30 s, and extension at 72°C for 1 min, with a final extension at 72°C for 5 min. After the PCR amplification program was completed, 5–8 μL of the amplification products was loaded onto a 1% agarose gel containing nucleic acid dye and electrophoresed in electrode buffer (1× TAE) for 30 min, after which images were collected and saved with the gel imaging system.

#### 3.2.3. Data Processing

Image Lab (Bio-Rad) was used to analyze the gel images, and the number of DNA bands amplified in each treatment was counted. After manual correction, the D2000 DNA Marker was used as the reference molecular weight standard to estimate the molecular weight of the amplified bands; bands within 5 bp of each other were treated as the same position. Statistics were then compiled according to the size and position of the amplified fragments of each sample: a band present at a given position was scored "1", and an absent band, or a weak band difficult to distinguish at that position, was scored "0". The similarity coefficients were then calculated with the Qualitative data program in NTSYS-pc 2.10e to obtain the similarity coefficient matrix, the SHAN procedure and UPGMA method were used for cluster analysis, and the dendrogram was generated with the Treeplot module.
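The band-scoring and clustering workflow of Section 3.2.3 was carried out in NTSYS-pc 2.10e. Purely as an illustration, the same steps (a 0/1 band matrix, a similarity matrix, and UPGMA clustering) can be reproduced in Python with SciPy; the sample names and band scores below are hypothetical, and the simple matching coefficient is used as one possible similarity measure, since the paper does not name the coefficient it applied.

```python
# Illustrative re-implementation of the Section 3.2.3 workflow. The paper used
# NTSYS-pc 2.10e (SHAN/UPGMA); this sketch uses SciPy instead, and the sample
# names and 0/1 band scores below are hypothetical.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

samples = ["M. hupehensis", "var. taiensis", "var. Mengshanens", "M. toringo"]
# Rows = samples, columns = band positions; 1 = band present, 0 = absent or weak.
bands = np.array([
    [1, 1, 0, 1, 1, 0, 1],
    [1, 1, 0, 1, 0, 0, 1],
    [1, 1, 1, 1, 0, 0, 1],
    [0, 1, 1, 0, 1, 1, 0],
])

def simple_matching(a, b):
    """Simple matching similarity: shared presences and absences / total bands."""
    return np.mean(a == b)

n = len(samples)
sim = np.array([[simple_matching(bands[i], bands[j]) for j in range(n)] for i in range(n)])
dist = 1.0 - sim                                  # convert similarity to distance
condensed = squareform(dist, checks=False)        # condensed form required by linkage
tree = linkage(condensed, method="average")       # "average" linkage = UPGMA
dendrogram(tree, labels=samples, no_plot=True)    # set no_plot=False with matplotlib to draw
print(np.round(sim, 2))                           # similarity coefficient matrix
```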
## 4. Results Analysis and Discussion

### 4.1. Genetic Relationship and Its Evolution Results

Successful DNA preparation without partial degradation is the basis of and key to successful SCoT analysis, because the quality of the template, that is, whether it is contaminated with polysaccharides, polyphenols, proteins, quinones, or pigments, affects the subsequent PCR amplification. In this experiment, the improved CTAB method was used to extract apple genomic DNA, and the OD260/OD280 values of the tested materials were all between 1.8 and 2.0. Electrophoresis on a 0.8% agarose gel showed a clear main band with no degradation and high purity, which met the requirements of SCoT analysis. Among the nine SCoT primers, SCoT12 amplified the most bands (1,344) and SCoT9 the fewest (688); the average number of amplified bands per primer was 948. The polymorphism ratio of the nine primers ranged from 72.5% to 100%, with SCoT33 the highest and SCoT12 the lowest, and the average polymorphism ratio (Np) of the nine SCoT primers was 85.34%. The polymorphism ratio of each primer is shown in Figure 3.

Figure 3: SCoT polymorphism analysis.

Malus Mill. seeds exhibit dormancy, so determining the conditions for breaking seed dormancy is the most important step in obtaining experimental material. The fifteen Malus Mill. experimental materials studied here comprise eleven wild taxa (M.hupehensis (Pamp.) Rehd. var. Mengshanens, M.hupehensis Rehd. var. taiensis, M.hupehensis (Pamp.) Rehd, M.baccata (L.) Borkh, M.rockii Rehder, M.dumeri (Bois.) Chev, M.toringo De Vriese, M.toringoides (Rehd.) Hughes, M.sikkimensis (Wenz.) Koehne, Riwa crabapple, and M. ombrophila Hand.-Mazz) and four cultivated materials (M. ×robusta Rehd, M. micromalus Makino, M. purpurea, and M. 'Donghongguo'). Judged by the degree of chromosomal evolution, most of the Malus Mill. plants in this study have metacentric and near-metacentric chromosomes with a 2B karyotype, which is a relatively primitive type.
The karyotype of M. ombrophila Hand.-Mazz is 1A, the most primitive, and that of M.toringo De Vriese is 3B, the most derived. Karyotype parameters of the fifteen Malus Mill. taxa are shown in Figure 4. The number and proportion of chromosomes showing hybridization signals between each probe and the tested materials are shown in Figure 5 (panels A, B, and C use M.hupehensis (Pamp.) Rehd. var. Mengshanens, M. hupehensis (Pamp.) Rehd, and M.hupehensis Rehd. var. taiensis as the probe, respectively).

Figure 4: Karyotype parameters of fifteen Malus Mill. taxa.

Figure 5: Experimental results of genomic in situ hybridization.

Judging from the number of hybridized chromosomes, the chromosome homology between these taxa is lower than that between M.toringo De Vriese and M.toringoides (Rehd.) Hughes; that is, their genetic relationship is not close. In future work, the range of candidate parents screened for M.hupehensis Rehd. var. taiensis should be expanded.

### 4.2. Genetic Structure of Malus Mill Natural Population

According to Nei's gene diversity analysis, the gene differentiation coefficients at the species level for M. baccata (L.) Borkh, M.rockii Rehder, M.doumeri (Bois.) Chev., and M.toringo De Vriese were 0.423, 0.439, 0.428, and 0.460, respectively, indicating clear genetic differentiation among the populations of these four Malus Mill. species, although the degree of population differentiation differs somewhat among them. That is, the main source of intraspecific genetic variation in the four species is variation among individuals within populations, and the intraspecific genetic differentiation is highly significant (P < 0.001), as shown in Table 1.

Table 1: Genetic structure variance analysis.

| Species | Source of variation | Degrees of freedom | Variance | Mean square deviation | Variance component | Percentage variation (%) | P value |
|---|---|---|---|---|---|---|---|
| M. baccata (L.) Borkh | Intergroup | 10 | 1436.22 | 133.02 | 17.26 | 30 | <0.001 |
| | Within population | 44 | 2088.17 | 44.17 | 40.06 | 70 | |
| M. rockii Rehder | Intergroup | 5 | 566.13 | 146.39 | 20.36 | 33 | <0.001 |
| | Within population | 20 | 876.91 | 30.35 | 40.01 | 67 | |
| M. doumeri (Bois.) Chev. | Intergroup | 9 | 1066.87 | 128.16 | 41.33 | 20 | <0.001 |
| | Within population | 41 | 2869.32 | 40.27 | 27.69 | 80 | |
| M. toringo De Vriese | Intergroup | 10 | 1102.75 | 133.69 | 16.87 | 46 | <0.001 |
| | Within population | 44 | 2126.82 | 46.75 | 33.21 | 54 | |

To further explore the relationship between genetic distance and geographical distance, Mantel tests were carried out on the genetic and geographical distances of the four tested species (Figure 6). The results showed that genetic distance and geographical distance were significantly correlated in all four Malus Mill. species.

Figure 6: Mantel test of the correlation between geographical distance and genetic distance.

The clustering of the 15 materials by principal coordinate analysis is shown in Figure 7.

Figure 7: Principal coordinate cluster analysis of 15 Malus Mill and Pyrus materials.

The 15 test materials fell into three groups. The three Pyrus species form the first group. M. doumeri (Bois.) Chev. of Malus Mill., grassland crabapple, and narrow-leaf crabapple form the second group, with grassland crabapple and narrow-leaf crabapple the most closely related. The third group is divided into four subgroups: the first subgroup includes hawthorn crabapple, M. ombrophila Hand.-Mazz, and Dianchi crabapple, with M. ombrophila Hand.-Mazz closely related to Dianchi crabapple; the second subgroup includes M. sikkimensis (Wenz.) Koehne, M.toringoides (Rehd.) Hughes, and crabapple; the fourth subgroup includes M.
halliana Koehne, Xinjiang wild apple, betel seed, catalpa seed, flat-edge crabapple, Chinese apple, Oriental apple, forest apple, and brown crabapple. Population structure analysis of the 15 tested materials with Structure 2.3.1 showed that the number of allele-frequency characteristic types K in the samples increased continuously. At K = 3, the distribution of the maximum ancestry coefficient Q over the 15 materials shows that 13 materials have Q ≥ 0.6, accounting for 69.7% of the tested materials, indicating a relatively uniform genetic background among these materials (Figure 8).

Figure 8: Distribution of the maximum Q value of the groups in the mathematical model.

The effective multiplex ratio and marker index can be used to evaluate the efficiency of polymorphic marker sites and of site polymorphism (Figure 9), where ANEA is the average number of effective alleles and AEHL is the average expected heterozygosity per locus.

Figure 9: Marker efficiency analysis.

Between the two marker systems, the effective multiplex ratio and marker index of SCoT are higher than those of EST-SSR, so the SCoT markers have the higher polymorphism detection efficiency. The marker index jointly reflects the average expected heterozygosity and effective multiplex ratio per locus; it is 2.337 for the SCoT markers, higher than the 1.199 of the EST-SSR markers.

### 4.3. The Interspecific Genetic Relationship of Apple Genus Coextensive Population

To study whether the population genetic structures of sympatrically distributed Malus species influence one another, this experiment analyzed the genetic structure of the apple species within four sympatric Malus populations, namely populations A, B, C, and D (Table 2).

Table 2: Population genetic distance.

Group A

| Species | M. baccata (L.) Borkh | M. rockii Rehder | M. doumeri (Bois.) Chev |
|---|---|---|---|
| M. baccata (L.) Borkh | — | — | — |
| M. rockii Rehder | 0.096 | — | — |
| M. doumeri (Bois.) Chev | 0.052 | 0.055 | — |

Group B

| Species | M. baccata (L.) Borkh | M. rockii Rehder | M. doumeri (Bois.) Chev | M. toringo De Vriese |
|---|---|---|---|---|
| M. baccata (L.) Borkh | — | — | — | — |
| M. rockii Rehder | 0.093 | — | — | — |
| M. doumeri (Bois.) Chev | 0.046 | 0.042 | — | — |
| M. toringo De Vriese | 0.033 | 0.049 | 0.013 | — |

Group C

| Species | M. baccata (L.) Borkh | M. rockii Rehder | M. doumeri (Bois.) Chev |
|---|---|---|---|
| M. baccata (L.) Borkh | — | — | — |
| M. rockii Rehder | 0.022 | — | — |
| M. doumeri (Bois.) Chev | 0.026 | 0.021 | — |

Group D

| Species | M. baccata (L.) Borkh | M. rockii Rehder | M. doumeri (Bois.) Chev | M. toringo De Vriese |
|---|---|---|---|---|
| M. baccata (L.) Borkh | — | — | — | — |
| M. rockii Rehder | 0.103 | — | — | — |
| M. doumeri (Bois.) Chev | 0.168 | 0.027 | — | — |
| M. toringo De Vriese | 0.144 | 0.026 | 0.026 | — |

The gene differentiation coefficient of coextensive populations in different geographical regions varied from 0.086 to 0.177 (average 0.138), indicating that the genetic similarity of species within these coextensive composite populations is high and that the species are closely related. The average gene flow among individuals within a sympatrically distributed population is 3.362, indicating frequent gene exchange among the coextensive Malus Mill. species. However, the degree of interspecific differentiation differs among coextensive populations in different regions, with the species in group A showing the highest genetic differentiation, which indicates that the extent of gene exchange among species differs among regions. The more frequent the gene exchange, the higher the genetic similarity among species.
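Section 4.2 reports Mantel tests between the genetic and geographical distance matrices of the four species (Figure 6). The sketch below is a minimal permutation-based Mantel test in Python; the genetic matrix reuses the Group B distances from Table 2 purely as a stand-in, and the geographical distances are hypothetical, since the paper does not publish its raw distance matrices or name the software used for this test.

```python
# Minimal permutation-based Mantel test, illustrating the analysis reported in
# Section 4.2 (Figure 6). Both input matrices below are illustrative stand-ins.
import numpy as np

def mantel(dist_a, dist_b, permutations=999, seed=0):
    """Correlate two symmetric distance matrices; p-value by joint row/column permutation."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(dist_a, k=1)               # upper triangle, no diagonal
    r_obs = np.corrcoef(dist_a[iu], dist_b[iu])[0, 1]
    count = 0
    n = dist_a.shape[0]
    for _ in range(permutations):
        perm = rng.permutation(n)
        permuted = dist_a[np.ix_(perm, perm)]             # permute rows and columns together
        if np.corrcoef(permuted[iu], dist_b[iu])[0, 1] >= r_obs:
            count += 1
    return r_obs, (count + 1) / (permutations + 1)

# Genetic distances: Group B values from Table 2, used here only as an example.
genetic = np.array([[0.000, 0.093, 0.046, 0.033],
                    [0.093, 0.000, 0.042, 0.049],
                    [0.046, 0.042, 0.000, 0.013],
                    [0.033, 0.049, 0.013, 0.000]])
# Geographical distances (km): hypothetical values for four sampling sites.
geographic = np.array([[0, 120, 80, 60],
                       [120, 0, 95, 140],
                       [80, 95, 0, 30],
                       [60, 140, 30, 0]], dtype=float)

r, p = mantel(genetic, geographic)
print(f"Mantel r = {r:.3f}, one-tailed p = {p:.3f}")
```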
### 4.4. Discussion

Natural hybridization occurs in Malus Mill. plants [10], and the existence of hybrids in the wild, together with the different types of variation within species, complicates classification and identification based on morphology [20]. The fruit shape of some wild apples is transitional between two or more taxa, or leans toward one of them, as in the Malus Mill. group studied here, which makes species difficult to classify and identify; it is therefore necessary to classify and identify species with molecular markers.

Of the 35 designed primer pairs, 16 pairs amplified polymorphic products. The reasons why some primers fail to amplify products may be the following [10]: the primer sequence spans two exons; a long intron lies between the two primers, so no product can be amplified; wrong sequence information was used in primer design; when multiple alleles are present in the analyzed materials, their coding regions and functions are the same but their DNA sequences are not identical, so the amplified products may contain fragments of unexpected sizes; or the specificity of the primer itself is too low to amplify only the sequence homologous to it.

The results of the population structure analysis are also consistent with the cluster analysis and principal coordinate analysis. At K = 3, SCoT, IRAP, and SSR all divide the apple materials into three groups: Malus Mill. materials from North and Northwest China, materials from East and South China, and materials from Southwest China. The structure plots show that most apple varieties exchange genes, and Malus Mill. plants from nearby regions are more likely to do so, indicating that whether Malus Mill. populations hybridize is related to their regional distribution.

These results agree with the AMOVA analysis (Table 1), in which only a small part of the variation of the four Malus species comes from among populations. According to the traditional concept of gene flow, gene flow can prevent population differentiation caused by local adaptation or genetic drift. The gene flow among populations of the four Malus species is less than 1, and geographical isolation may therefore present some obstacles to gene exchange among populations. According to the gene differentiation coefficients, the genetic similarity among the four coextensive populations is high, and gene flow among coextensive populations is higher than that among intraspecific populations. In addition, the genetic distances among species within the coextensive composite populations (Table 2) are small, and individuals of coextensive species always cluster together first, which indicates that the gene flow among coextensive species is caused by hybridization.
In addition, during continued interspecific hybridization and backcrossing within this population, new individuals with homozygous genetic backgrounds also appeared, such as the M.doumeri (Bois.) Chev individuals with a homozygous genetic background in the subgroup, which also implies that interspecific hybridization in Malus may lead to the formation of new species.

Karyotype analysis and genomic in situ hybridization both showed that the classification system of Malus Mill. plants still has some imperfections. M.hupehensis Rehd. var. taiensis and M.hupehensis (Pamp.) Rehd. var. mengshanens are both varieties of M. hupehensis (Pamp.) Rehd. The number of chromosomes with hybridization signals between them is lower than that between M.toringo De Vriese and M.toringoides (Rehd.) Hughes, and the number and proportion of chromosomes with hybridization signals in M.sikkimensis (Wenz.) Koehne, M.toringoides (Rehd.) Hughes, and M.toringo De Vriese are also higher than those in M.hupehensis Rehd. var. taiensis, so the genetic relationships among Malus Mill. plants need further study.

In this experiment, the results obtained by cluster analysis, principal coordinate analysis, and population structure analysis are roughly the same, and all three methods effectively reveal the genetic relationships and genetic diversity among the Malus Mill. materials. However, there are obvious differences in the classification order of groups and accessions, which may be related to the fact that Malus Mill. includes both wild and cultivated species as well as many interspecific hybrids. There may be differences between genetic relationship analysis at the molecular level and morphological analysis based on selected characters, and there may also be differences among molecular marker techniques. Literature [26] studied the genetic diversity of apple male resources by cluster analysis and principal coordinate analysis, effectively distinguishing interspecific apple resources from different regions.
Literature [13] performed population structure analysis and principal coordinate analysis on 146 Malus Mill. materials and revealed their genetic relationships and genetic diversity well. It can be seen that principal coordinate analysis, cluster analysis, and population structure analysis are effective and reliable for identifying the genetic relationships of Malus Mill. plants.

To sum up, in the neighboring and overlapping distribution areas of Malus Mill., the transitional hybrid zone formed by interspecific hybridization provides, on the one hand, a source of variation for species evolution; on the other hand, the Malus Mill. plants in the hybrid zone, as an interspecific transition group, become the medium of gene flow among Malus Mill. plants and accelerate gene exchange among them. It is speculated that, because reproductive isolation is imperfect, a certain amount of interspecific gene exchange has occurred during the evolution of Malus Mill. species. The evolution and formation of Malus Mill. plants are related to parent species with neighboring and overlapping distributions, which is more consistent with a parapatric and sympatric model [16].

## 5. Conclusion

In this study, SCoT molecular markers were used to study the genetic relationships among Malus populations, the influence of interspecific hybridization and introgression on species formation and differentiation was discussed, and the formation and evolutionary trend of hybrid Malus plants was analyzed. The main conclusions are as follows:

(1) SCoT markers are highly polymorphic and informative and can reveal the level of genetic diversity and the genetic relationships among and within Malus Mill. species. They are effective molecular markers for studying the genetic structure of natural Malus Mill. populations, interspecific gene introgression, and related questions.
(2) M.sikkimensis (Wenz.) Koehne and M.toringoides (Rehd.) Hughes are closely related to the M.baccata (L.) Borkh group. M.hupehensis Rehd. var. taiensis and M.hupehensis (Pamp.) Rehd. var. mengshanens are both varieties of M. hupehensis (Pamp.) Rehd; although their chromosome homology is not high, their morphology is similar. Riwa crabapple is quite different from M.sikkimensis (Wenz.) Koehne, which supports treating it as an independent species and supports the view that M.hupehensis (Pamp.) Rehd has a multi-point origin.
(3) The diversity of Malus plants is higher than that of the other apple species distributed in the same areas, and their advantages in numbers and adaptability promote gene exchange among Malus plants.
(4) The classification system and genetic relationships of Malus Mill. plants are becoming clearer, but the intra-genus classification system needs further improvement, and the interspecific and intraspecific relationships need further experimental confirmation; the classification of Malus Mill. plants therefore still requires further study.

---

*Source: 1002624-2022-06-17.xml*
2022
# A Novel and Robust Approach to Detect Tuberculosis Using Transfer Learning

**Authors:** Omar Faruk; Eshan Ahmed; Sakil Ahmed; Anika Tabassum; Tahia Tazin; Sami Bourouis; Mohammad Monirujjaman Khan
**Journal:** Journal of Healthcare Engineering (2021)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2021/1002799

---

## Abstract

Deep learning has emerged as a promising technique for a variety of elements of infectious disease monitoring and detection, including tuberculosis. We built a deep convolutional neural network (CNN) model to assess the generalizability of the deep learning model using a publicly accessible tuberculosis dataset. This study was able to reliably detect tuberculosis (TB) from chest X-ray images by utilizing image preprocessing, data augmentation, and deep learning classification techniques. Four distinct deep CNNs (Xception, InceptionV3, InceptionResNetV2, and MobileNetV2) were trained, validated, and evaluated for the classification of tuberculosis and nontuberculosis cases using transfer learning from their pretrained starting weights. With an F1-score of 99 percent, InceptionResNetV2 had the highest accuracy. This research is more accurate than earlier published work. Additionally, it outperforms all other models in terms of reliability. The suggested approach, with its state-of-the-art performance, may be helpful for computer-assisted rapid TB detection.

---

## Body

## 1. Introduction

TB is the world's second most lethal infectious disease, trailing only human immunodeficiency virus (HIV), with an estimated 1.4 million deaths in 2019 [1]. Although it is most often associated with the lungs, it may also affect other organs such as the stomach (abdomen), glands, bones, and the nervous system. The thirty countries with the highest tuberculosis burden accounted for 87% of tuberculosis cases in 2019 [2]. Two-thirds of the total came from eight countries, with India leading the way, followed by Indonesia, China, the Philippines, Pakistan, Nigeria, Bangladesh, and South Africa. In 2019, an estimated 10 million people worldwide developed TB: 5.6 million men, 3.2 million women, and 1.2 million children. Tuberculosis can be cured if diagnosed early and treated appropriately [3]; almost always, tuberculosis is curable with therapy, and antibiotics are often given for a six-month period [4].

Chest X-ray screening for TB in the lungs is the simplest and most frequently used technique of tuberculosis detection. The alternative, having chest radiographs examined by a physician, is a time-consuming clinical procedure [5]. On CXR images, tuberculosis is often misclassified as other illnesses with similar radiographic patterns, resulting in ineffective treatment and deteriorating clinical conditions [6]. In this context, a transfer learning approach based on convolutional neural networks may be critical. CXR images were chosen as the sample dataset in this research because they are cost-effective and time-efficient, as well as compact and readily available in nearly every clinic. As a consequence, poorer nations will also profit from this study. The main motivation for this study is to diagnose tuberculosis without delay. This method will aid in the fast diagnosis of tuberculosis via the use of CXR images. The problem of false results can be reduced if a model is designed with a high degree of precision.
If this test were adopted, the system would be more robust and would allow a greater number of individuals to be evaluated in a shorter amount of time, significantly decreasing the spread.

### 1.1. Existing Work

Several research groups have used CXR images to distinguish tuberculosis (TB) patients from normal patients with standard machine learning methods. The objective of this article, however, is to gain a better understanding of the issue. We reviewed current papers and articles and considered strategies for improving the accuracy of our deep learning model; to compare our efforts, we utilized an existing dataset and examined the corresponding models. Using a deep learning approach, Hooda et al. classified CXR images into tuberculosis and non-TB groups with an accuracy of 82.09 percent. Evangelista and Guedes developed a computer-assisted technique based on intelligent pattern recognition [7]. By modifying the settings of deep-layered CNNs, it has been shown that deep machine learning techniques may be used to diagnose TB. Transfer learning was used in the context of deep learning to identify TB by utilizing pretrained models and their ensembles [8]. Pasa et al. proposed a deep network architecture with an accuracy of 86.82 percent for TB screening and additionally demonstrated an interactive visualization application for patients with TB [9]. Chhikara et al. investigated whether CXR images might be used to detect pneumonia; they used preprocessing methods such as filtering and gamma correction to evaluate the performance of pretrained models (ResNet, ImageNet, Xception, and Inception) [10]. The paper "Reliable TB Detection Using Chest X-ray with Deep Learning, Segmentation, and Visualization" was authored by Tawsifur Rahman, who used deep convolutional neural networks with ResNet18, ResNet50, ResNet101, ChexNet, InceptionV3, Vgg19, DenseNet201, SqueezeNet, and MobileNet to differentiate between tuberculosis and normal images. In the identification of tuberculosis from X-ray images, the top-performing model, ChexNet, had accuracy, precision, sensitivity, F1-score, and specificity of 96.47 percent, 96.62 percent, 96.47 percent, 96.47 percent, and 96.51 percent, respectively [11]. Stefanus Kieu Tao Hwa demonstrated deep learning for TB diagnosis using chest X-rays; the suggested ensemble method achieved the highest accuracy of 89.77 percent, sensitivity of 90.91 percent, and specificity of 88.64 percent [12]. Priya Ebenezer et al. have extended earlier TB detection proposals by designing a new method for identifying overlapping TB bacilli. To determine the boundaries between the single-bacterium area, the overlapping bacilli zone, and the nonbacilli region, shape characteristics such as eccentricity, compactness, circularity, and tortuosity were examined. A novel treatment of overlapping bacilli regions was proposed based on concavities in the region: because concavities imply overlap, the optimum separation line is determined by the deepest concavity point. This provides an additional advantage: when overlapping bacilli are separated, the overall count of tuberculosis bacilli is much more precise [13]. Vishnu Makkapati et al. were the first to diagnose TB using the shape characteristics of Mycobacterium tuberculosis bacteria. They proposed a method based on hue color components for segmenting bacilli through adaptive hue range selection.
The existence of a beaded structure inside the bacilli, together with the thread length and breadth parameters, indicates whether a candidate bacillus is valid or invalid [14]. Sadaphal et al. developed a method in 2008 that incorporated (1) Bayesian segmentation, which relied on prior knowledge of ZN stain colors to estimate the likelihood of a pixel belonging to a "TB object," and (2) shape/size analysis [15].

In the majority of studies, researchers reported around 90% accuracy. The major contribution of this research is that several pretrained models were utilized: InceptionV3 achieved 96 percent test accuracy and 97.57 percent validation accuracy, MobileNetV2 achieved 98 percent and 97.93 percent, and InceptionResNetV2 achieved 99 percent and 99.36 percent. This study presents a novel method for detecting tuberculosis-infected individuals using deep learning. A CNN (convolutional neural network) is a technique well suited to this kind of problem, and this method will aid in the rapid detection of tuberculosis from chest X-ray images.

The remainder of the article is organized as follows: Section 2 describes the approach and methodology, and Sections 3 and 4 present the analysis of the results and the conclusion, respectively.
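Since the studies above, and the results reported later, are compared in terms of accuracy, precision, sensitivity, F1-score, and specificity, the following minimal sketch shows how these metrics are derived from a binary (TB vs. normal) confusion matrix. The counts used are hypothetical and are not taken from any cited work.

```python
# How the quoted metrics (accuracy, precision, sensitivity/recall, specificity,
# F1-score) follow from a binary confusion matrix. Hypothetical counts only.
def binary_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)          # also called recall
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"accuracy": accuracy, "precision": precision,
            "sensitivity": sensitivity, "specificity": specificity, "f1": f1}

# Hypothetical test set of 700 chest X-rays (350 TB, 350 normal).
print({k: round(v, 4) for k, v in binary_metrics(tp=346, fp=3, tn=347, fn=4).items()})
```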
## 2. Method and Materials

The dataset used in this study was obtained from the open-source Kaggle platform and contains both patients with tuberculosis and patients without the disease. A CNN is used for feature extraction. The model includes four Conv2D layers, three MaxPooling2D layers, a flatten layer, and two dense layers with a ReLU activation function; the final, fully connected layer uses a SoftMax activation. Transfer learning is also utilized in this study to compare the accuracy of the custom model with that of pretrained models. With a few changes in the final layers, MobileNetV2, InceptionResNetV2, Xception, and InceptionV3 were used as pretrained models, with layers such as average pooling, flatten, dense, and dropout added to create bespoke outputs. The CNN model works effectively for extracting visual details: it learns and distinguishes between images by extracting characteristics from the input images. Figure 1 shows the workflow diagram of TB and normal image detection.

Figure 1 Workflow diagram of the TB or normal image detection.

Python is well suited to this kind of data analysis; because of Python's extensive library ecosystem, deep learning problems can be handled very effectively in the Python programming language.
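As a concrete illustration of the custom CNN just described, the following Keras sketch stacks four Conv2D layers, three MaxPooling2D layers, a flatten layer, and two dense layers ending in a softmax classifier. The filter counts, kernel sizes, and dense-layer width are illustrative assumptions, since the paper specifies only the layer types.

```python
# Minimal sketch of the custom CNN described in Section 2 (assumed hyperparameters).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_custom_cnn(input_shape=(256, 256, 3), num_classes=2):
    model = models.Sequential([
        # Four Conv2D layers and three MaxPooling2D layers, as stated in the text
        layers.Conv2D(32, (3, 3), activation="relu", padding="same",
                      input_shape=input_shape),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), activation="relu", padding="same"),
        layers.Flatten(),                                 # flatten layer
        layers.Dense(128, activation="relu"),             # first dense layer (ReLU)
        layers.Dense(num_classes, activation="softmax"),  # final dense layer (SoftMax)
    ])
    return model

model = build_custom_cnn()
model.summary()
```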
Anaconda Navigator and Jupyter Notebook on a personal GPU were used for dataset preparation, and Google Colab was used for handling large datasets and online model training.

### 2.1. Dataset

This system's dataset is made up of 3500 TB and 3500 normal images. For this study, the Tuberculosis (TB) Chest X-ray Database has been used [16]. Samples from this dataset are shown in Figures 2 and 3.

Figure 2 Non-TB X-ray images.

Figure 3 Tuberculosis X-ray images.

Figure 2 depicts healthy chest X-rays, whereas Figure 3 depicts chest X-rays of disease caused by tuberculosis. The pictures in the collection have varying original heights and widths; they are brought to a fixed shape before being fed to the model. The Tuberculosis (TB) Chest X-ray Database is a balanced medical dataset: the numbers of tuberculosis and nontuberculosis cases are equal (3500 each). Figure 4 displays the total number of TB and non-TB records in this dataset.

Figure 4 Total number of TB and non-TB records.

### 2.2. Block Diagram

A dataset with two subsections is provided as input in the block design shown in Figure 5. The system performs some preprocessing before fitting the model, such as importing pictures of a fixed size, dividing the dataset, and applying data augmentation methods. Better accuracy was achieved after fitting and fine-tuning the model. Plotting a confusion matrix and the loss and accuracy curves made it possible to see how loss and accuracy evolve over time. As a final step, the classification result section shows how well the model distinguishes between TB images and images not associated with the disease.

Figure 5 Block diagram of the proposed system.

The block diagram presents the entire system in the simplest possible manner. In this research, the decision-making component of the system plays a very important role: an enormous quantity of data is used to train the model, which then uses what it has learned to reach a conclusion.

### 2.3. Preprocessing

The preprocessing phase occurs before training and testing and consists of four preparation stages: image dimensions are resized, images are transformed into arrays, the input is preprocessed using the MobileNetV2 preprocessing function, and labels are one-hot encoded. Picture scaling is an important preprocessing step in computer vision because it affects the efficiency of the training model; the smaller the image, the more smoothly training runs. In this research, each image was resized to 256 by 256 pixels. Next, all of the images in the dataset are converted into arrays inside a loop. The input is then preprocessed with the MobileNetV2 preprocessing routine. The last step is one-hot encoding of the labels, since many machine learning algorithms cannot operate directly on textual labels; all input and output variables must be numerical, so the tagged data are transformed into numerical labels in order to be interpreted and analyzed. Following the preprocessing step, the data are divided into three sets: 70 percent training data, 20 percent validation data, and the remaining 10 percent testing data. Each set contains both normal and TB images.

Several augmentations were applied to add variety to the original images. Because lung X-rays are generally symmetrical apart from a few minor characteristics, augmentations such as vertical flips were used. The main goal, however, was for TB and associated signs to occur in either lung and be detectable by our models in both.
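The preprocessing and splitting steps described above can be sketched as follows with Keras and scikit-learn; the directory layout, file extension, and random seed are assumptions rather than details taken from the paper, and the augmentation described next is applied on top of this pipeline.

```python
# Minimal sketch of Section 2.3: resize, array conversion, MobileNetV2-style
# preprocessing, one-hot labels, and a 70/20/10 train/validation/test split.
import numpy as np
from pathlib import Path
from tensorflow.keras.preprocessing.image import load_img, img_to_array
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input
from tensorflow.keras.utils import to_categorical
from sklearn.model_selection import train_test_split

IMG_SIZE = (256, 256)
CLASSES = {"Normal": 0, "Tuberculosis": 1}   # assumed folder names of the Kaggle dataset

images, labels = [], []
for class_name, class_idx in CLASSES.items():
    for path in Path("TB_Chest_Radiography_Database", class_name).glob("*.png"):
        img = load_img(str(path), target_size=IMG_SIZE)       # resize to 256 x 256
        images.append(preprocess_input(img_to_array(img)))    # to array + MobileNetV2 preprocessing
        labels.append(class_idx)

X = np.array(images, dtype="float32")
y = np.array(labels)

# 70% train, 20% validation, 10% test, stratified so each split stays balanced
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.30,
                                                  stratify=y, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=1/3,
                                                stratify=y_tmp, random_state=42)

# one-hot encode the labels for each split
y_train, y_val, y_test = (to_categorical(a, num_classes=2)
                          for a in (y_train, y_val, y_test))
```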
To provide more variety, these augmented images were also gently rotated and brightened or dimmed.

### 2.4. Convolutional Neural Network and Transfer Learning

In artificial intelligence, CNNs (convolutional neural networks) are often employed for image categorization [17]. Before the input reaches the fully connected part of the network, the data pass through convolution, max pooling, and flattening operations. The network works by assigning different weights to different inputs. Once the data have passed through the hidden layers, the weights are computed and assessed, and following feedback from the cost function the network goes through a backpropagation phase [18]. During this procedure, the input-layer weights are readjusted, and the process is repeated until an optimal weight configuration is found. Epochs indicate how many times this cycle has been repeated. A major disadvantage is that training a model with neural networks from scratch takes a long time. To get around this, we utilize transfer learning, another active topic in computer vision research [19]. Transfer learning makes use of a pretrained model to learn from a dataset; it saves a great deal of training time and takes care of many important details at once. Over time, the networks can be fine-tuned for improved accuracy and simplicity.

Transfer learning preserves knowledge gained in one area and applies it to another. Training from scratch takes a long time because the model parameters are all initialized from a random Gaussian distribution and convergence can require at least 30 epochs over a large number of images. A further problem is that large, well-annotated image sets may be difficult to obtain in the medical profession. Due to this paucity of medical data, it is sometimes difficult to build models that predict correctly; the shortage of medical data and datasets is one of the most difficult problems for medical researchers, since data are an essential ingredient of deep learning methods and data processing and labelling are both time-consuming and costly. The advantage of transfer learning is that it does not require vast datasets, and the computations become easier and less costly. Transfer learning is a technique in which the knowledge from a pretrained model that was trained on a large dataset is transferred to a new model that needs to be trained on comparatively little new data. For a specific task, this method starts CNN training on a small dataset from weights that were previously learned on a large-scale dataset by the pretrained models.

### 2.5. Overview of the Proposed Model

Four CNN-based pretrained models were used in this research to classify chest X-ray pictures: Xception, InceptionV3, MobileNetV2, and InceptionResNetV2. There are two types of chest X-ray pictures: one is unaffected by tuberculosis, whereas the other is affected. This study also utilized a transfer learning technique that can perform well with sparse data by leveraging ImageNet data and that is efficient in terms of training time. The symmetric system architecture of the transfer learning method is shown in Figure 6.

Figure 6 System architecture with InceptionResNetV2.

InceptionResNetV2 was developed by merging two of the most well-known deep convolutional neural networks, Inception [20] and ResNet [21], and using batch normalization rather than summation for the traditional layers.
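A minimal sketch of how such an InceptionResNetV2-based transfer-learning classifier can be assembled in Keras is shown below. The frozen ImageNet base, flatten, batch-normalization, dropout (0.5), and sigmoid output follow the description given in the next paragraph and the compile settings listed in Table 1 below; the 512-unit dense layer and the exact layer ordering are assumptions.

```python
# Sketch of the transfer-learning setup: pretrained InceptionResNetV2 base
# frozen, with a small custom classification head on top (assumed layout).
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionResNetV2

base = InceptionResNetV2(weights="imagenet",      # ImageNet-pretrained weights
                         include_top=False,       # drop the original classifier
                         input_shape=(224, 224, 3))
base.trainable = False                            # freeze pretrained layers during training

model = models.Sequential([
    base,
    layers.Flatten(),                             # flatten conv features into one vector
    layers.BatchNormalization(),                  # recenter and rescale layer inputs
    layers.Dense(512, activation="relu"),         # assumed width of the custom dense layer
    layers.Dropout(0.5),                          # 50% dropout to reduce overfitting
    layers.Dense(1, activation="sigmoid"),        # binary TB / normal output
])

model.compile(
    # Table 1 settings; the 1e-3/epoch decay can be added with a LearningRateScheduler
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)

# history = model.fit(train_data, validation_data=val_data, epochs=25, batch_size=32)
```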
The popular transfer learning model InceptionResNetV2 was trained on ImageNet data drawn from various sources and classes, and it has attracted considerable attention. Setting include_top = "False" simply indicates that the fully connected top layer is not included, even though the input shape, 224 × 224 × 3, is specified. Setting training = "False" means that the weights in a particular layer are not changed during training. The dropout layer, which helps prevent overfitting, randomly sets input units to 0 at a given rate at each step during training; Dropout(0.5) represents a dropout rate of 50 percent. Flattening is the process of converting data into a one-dimensional array for use in the next layer: the output of the convolutional layers is flattened to generate a single long feature vector, which is then linked to the final classification model, forming what is known as a fully connected layer. Batch normalization is a method for improving the speed and stability of artificial neural networks by recentering and rescaling the inputs of the layers. When building the pretrained models, the batch size is 32, the maximum number of epochs is 25, and the "binary cross-entropy" loss function is used, as indicated in Table 1.

Table 1 Parameters used for compiling various models.

| Parameter | Value |
| --- | --- |
| Batch size | 32 |
| Shuffling | Each epoch |
| Optimizer | Adam |
| Learning rate | 1e−3 |
| Decay | 1e−3/epoch |
| Loss | Binary_crossentropy |
| Epochs | 25 |
| Execution environment | GPU |

The sigmoid function is an activation function that helps a neuron make a decision. It produces an output between 0 and 1 and uses probability to generate a binary output [22]; the outcome is determined by the class with the highest probability value. This function outperforms a simple threshold function, is more useful for classification, and is often applied to the last dense block. The equation for the sigmoid function is

(1) $X = \frac{1}{1 + e^{-Y}}$.

ReLU is an activation function that operates on the concept of rectification. Its output stays at 0 from the start until a specified point; after crossing that point, the output changes and continues to rise as the input increases [23]. Because it is only activated when there is a sufficiently large input to the neuron, this function works extremely well.

### 2.6. Evaluation Criteria

Following the completion of the training phase, all models were evaluated on the test dataset. The performance of these systems was evaluated using accuracy, precision, recall, F1-score, and AUC. This study's performance measures are all defined below. True positives (TP) are tuberculosis images that were correctly identified as such; true negatives (TN) are normal images that were correctly identified as such; false positives (FP) are normal images that were incorrectly identified as tuberculosis images; and false negatives (FN) are tuberculosis images that were incorrectly identified as normal.

Accuracy simply indicates how close the predicted results are to the actual results [24]. It is represented as a percentage and is determined by adding the true positives and true negatives and dividing by the overall number of outcomes:

(2) $\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$.

Precision is a measure of how close predicted outcomes are to each other [25].
It is obtained by dividing the number of true positives by the sum of true and false positives:

(3) $\text{Precision} = \frac{TP}{TP + FP}$.

Recall is determined by dividing the number of true positives by the total number of true positives and false negatives:

(4) $\text{Recall} = \frac{TP}{TP + FN}$.

The F1-score combines a classifier's precision and recall into a single measure by calculating their harmonic mean. It is most commonly used to compare the results of two different classifiers. Assume classifier A has greater recall and classifier B has greater precision; in this case, the F1-scores of both classifiers may be used to determine which performs better. The F1-score of a classification model is computed as follows:

(5) $F1 = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}$.

The F1-score evaluates the correctness of the model on the dataset. While accuracy alone is not always informative, the F1-score becomes more useful in situations of uneven class distribution, and it is used with many machine learning models. It is preferred when false negatives and false positives matter more in the dataset than true positives and true negatives, and it gives a more informative picture when data are wrongly categorized.

The confusion matrix displays the total numbers of correct and erroneous outcomes, making it possible to see all true-positive, false-positive, true-negative, and false-negative counts [26]. The greater the frequency of true-positive and true-negative outcomes, the greater the accuracy.
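To make these definitions concrete, the sketch below computes the same quantities with scikit-learn on the held-out test set. It assumes the model, X_test, and y_test objects from the earlier sketches and a single sigmoid output unit, so it is an illustration of the evaluation criteria rather than the exact script used in the study.

```python
# Sketch of Section 2.6: confusion matrix and the metrics defined in Eqs. (2)-(5).
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, confusion_matrix)

# collapse one-hot test labels to 0/1 if necessary (0 = normal, 1 = TB)
y_true = y_test.argmax(axis=1) if y_test.ndim > 1 else y_test.astype(int)

y_prob = model.predict(X_test).ravel()        # sigmoid probability of the TB class
y_pred = (y_prob >= 0.5).astype(int)          # threshold at 0.5: 1 = TB, 0 = normal

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("TN, FP, FN, TP:", tn, fp, fn, tp)
print("Accuracy :", accuracy_score(y_true, y_pred))    # (TP + TN) / (TP + TN + FP + FN)
print("Precision:", precision_score(y_true, y_pred))   # TP / (TP + FP)
print("Recall   :", recall_score(y_true, y_pred))      # TP / (TP + FN)
print("F1-score :", f1_score(y_true, y_pred))          # harmonic mean of precision and recall
```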
## 3. Result and Analysis

In classifying normal and TB images, we experimented with a variety of models and methods in order to assess their utility and efficacy. Four pretrained CNN models were employed to classify the chest X-ray pictures: MobileNetV2, Xception, InceptionResNetV2, and InceptionV3. There are two kinds of chest X-ray pictures, TB and normal. This study also used ImageNet data in a transfer learning method that keeps training times reasonable when data are insufficient. Several network designs, including Xception, InceptionV3, InceptionResNetV2, and MobileNetV2, were tested before deciding on a network architecture. A bespoke 19-layer ConvNet was also tried, but it performed badly. InceptionResNetV2 performed the best of all networks, and the findings based on that design are included. Table 2 shows the accuracy and loss history of the four models tried.

Table 2 Comparison of pretrained models.
| Model | Train accuracy | Val accuracy | Train loss | Val loss | Images | Precision | Recall | F1-score |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Xception | 0.9596 | 0.9543 | 0.1155 | 0.1213 | Normal | 1.00 | 0.90 | 0.95 |
|  |  |  |  |  | TB | 0.91 | 1.00 | 0.95 |
| InceptionV3 | 0.9800 | 0.9757 | 0.1160 | 0.1243 | Normal | 0.95 | 0.97 | 0.96 |
|  |  |  |  |  | TB | 0.97 | 0.95 | 0.96 |
| MobileNetV2 | 0.9930 | 0.9793 | 0.0220 | 0.0548 | Normal | 0.96 | 1.00 | 0.98 |
|  |  |  |  |  | TB | 1.00 | 0.96 | 0.98 |
| InceptionResNetV2 | 0.9912 | 0.9936 | 0.0340 | 0.0237 | Normal | 0.99 | 0.98 | 0.99 |
|  |  |  |  |  | TB | 0.98 | 0.99 | 0.99 |

According to Table 2, the InceptionResNetV2 model achieves a validation or testing accuracy of 99.36 percent, with a validation loss of 2.37 percent. Normal images have 99 percent precision, 98 percent recall, and a 99 percent F1-score, whereas TB images have 98 percent precision, 99 percent recall, and a 99 percent F1-score.

Figures 7 and 8 indicate that the training accuracy was quite low in the first few epochs. At epoch 1, the training accuracy is 57.20 percent, the training loss is 69.01 percent, the validation accuracy is 74.79 percent, and the validation loss is also huge (905.03 percent), indicating that the model learns very slowly at first. The training accuracy improves as the number of epochs increases, and the loss function begins to decrease. The model evaluates the results at the end of 25 epochs; InceptionResNetV2 has a 99.12 percent training accuracy and a 99.643 percent validation accuracy, with a training loss of 3.40 percent and a validation loss of 1.71 percent.

Figure 7 Training and validation accuracy.

Figure 8 Training and validation loss.

In identifying normal and tuberculosis (TB) images, Figure 9 illustrates the true-positive, true-negative, false-positive, and false-negative scenarios.

Figure 9 Confusion matrix.

According to the results, the InceptionResNetV2 model correctly identifies normal images as "normal" 98 percent of the time and incorrectly labels normal images as "TB" 2 percent of the time. Additionally, the confusion matrix shows that the algorithm accurately classifies 99 percent of TB images as "TB," while predicting 1 percent of them as "normal."

Additionally, this research included real testing, in which individual chest X-ray images were provided to the model as input (a minimal sketch of this step is given at the end of this section). When training is complete, a file with the .hdf5 extension containing the produced model is created; four .hdf5 files representing the four different models were created for this study. Following that, a new test notebook with the .ipynb extension was created. The four models were loaded in this test file, and individual chest X-ray images were then provided as input. Figures 10 and 11 show the prediction results in real time.

Figure 10 Normal prediction.

Figure 11 TB prediction.

The result shown in Figure 10 is normal: the input image was normal, and the model correctly predicted it. Figure 11 shows a tuberculosis chest X-ray image as the input; the model returned a valid result, indicating that the input picture was of a TB chest X-ray. In Table 3, the classification results are compared with the reference papers discussed above.

Table 3 Model comparison with other research.

| This paper (model name) | Accuracy (%) | Reference paper (model name) | Accuracy (%) |
| --- | --- | --- | --- |
| MobileNet (validation or testing) | 97.93 | Ref [11] MobileNet | 94.33 |
| InceptionV3 | 98.00 | Ref [12] InceptionV3 | 83.57 |
|  |  | Ref [11] InceptionV3 | 98.54 |
| Xception | 95.96 | Ref model VGG16 | 87.71 |
| InceptionResNetV2 (validation or testing) | 99.36 | Ref [12] model ChexNet | 96.47 |
|  |  | Ref [12] model DenseNet201 | 98.6 |
| MobileNet | 97.93 | Ref [27] GoogleNet | 89.6 |

In this article, using MobileNet, we achieved 97.93 percent accuracy, but in [11], the authors achieved 94.33 percent accuracy using the same algorithm.
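Returning to the real-time testing step described earlier in this section, the following sketch loads one of the saved .hdf5 model files and classifies a single chest X-ray. The file names, input size, and 0.5 decision threshold are illustrative assumptions rather than details from the paper.

```python
# Sketch of the real-testing step: load a saved model and predict one image.
import numpy as np
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing.image import load_img, img_to_array
from tensorflow.keras.applications.inception_resnet_v2 import preprocess_input

model = load_model("inception_resnet_v2_tb.hdf5")          # assumed file name

img = load_img("sample_chest_xray.png", target_size=(224, 224))
x = preprocess_input(img_to_array(img))[np.newaxis, ...]   # shape (1, 224, 224, 3)

prob_tb = float(model.predict(x)[0][0])                    # sigmoid output: probability of TB
print("TB" if prob_tb >= 0.5 else "Normal", f"(p = {prob_tb:.3f})")
```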
The difference between their work and ours, however, is that we fine-tuned the model to increase its accuracy, whereas they used the general MobileNet algorithm. Using InceptionV3, they achieved higher accuracy than our model, but overall in this article InceptionResNetV2 achieved the highest accuracy of 99.36 percent, which is greater than any previously reported work.

## 4. Conclusion

A deep learning analysis framework may be very beneficial for individuals who do not receive regular screening or checkups in countries with insufficient healthcare facilities. The value of deep learning in clinical imaging is particularly apparent in early analyses that may recommend preventive therapy. Due to a shortage of radiologists in resource-limited areas, technology-assisted tuberculosis diagnosis is needed to help reduce the time and effort spent on tuberculosis detection. Medical analysis powered by deep learning is still not as reliable as experts would like; this study suggests that we may approach that level of accuracy by integrating many recent breakthroughs in deep learning and applying them to suitable situations. In this research, we utilized the pretrained model InceptionResNetV2 to develop an automated tuberculosis detection method. With a validation accuracy of 99.36 percent, the InceptionResNetV2 model reaches a new state of the art in identifying tuberculosis illness. Given this, an automated TB diagnosis system would be very helpful in poor countries where trained pulmonologists are in limited supply. These approaches would improve access to health care by reducing both the patient's and the pulmonologist's time and screening expenses, and this advancement will have a profound effect on the medical profession. Tuberculosis patients may be readily identified with this method, and diagnostics will benefit from this kind of approach in the future. Numerous deep learning methods may be used to improve the parameters and generate a trustworthy model that benefits mankind. Additionally, we may experiment with a number of image processing techniques in the future to help the model learn the picture patterns more accurately and provide improved performance.

--- *Source: 1002799-2021-11-25.xml*
--- ## Abstract Deep learning has emerged as a promising technique for a variety of elements of infectious disease monitoring and detection, includingtuberculosis. We built a deep convolutional neural network (CNN) model to assess the generalizability of the deep learning model using a publicly accessible tuberculosis dataset. This study was able to reliably detect tuberculosis (TB) from chest X-ray images by utilizing image preprocessing, data augmentation, and deep learning classification techniques. Four distinct deep CNNs (Xception, InceptionV3, InceptionResNetV2, and MobileNetV2) were trained, validated, and evaluated for the classification of tuberculosis and nontuberculosis cases using transfer learning from their pretrained starting weights. With an F1-score of 99 percent, InceptionResNetV2 had the highest accuracy. This research is more accurate than earlier published work. Additionally, it outperforms all other models in terms of reliability. The suggested approach, with its state-of-the-art performance, may be helpful for computer-assisted rapid TB detection. --- ## Body ## 1. Introduction TB is the world's second most lethal infectious disease, trailing only human immunodeficiency virus (HIV), with an estimated 1.4 million deaths in 2019 [1]. Although it is most often associated with the lungs, it may also affect other organs such as the stomach (abdomen), glands, bones, and the neurological system. The top thirty tuberculosis-burdening countries accounted for 87% of tuberculosis cases in 2019 [2]. Two-thirds of the overall is made up of India, Indonesia, China, the Philippines, Pakistan, Nigeria, Bangladesh, and South Africa, with India leading the way, followed by Indonesia, China, the Philippines, Pakistan, Nigeria, Bangladesh, and South Africa. In 2019, an estimated 10 million people worldwide developed TB. There are 5.6 million men, 3.2 million women, and 1.2 million children in the country. Tuberculosis may be cured if diagnosed early and treated appropriately [3]. Almost always, tuberculosis can be cured with therapy. Antibiotics are often given for a six-month period [4].Chest X-ray screening for TB in the lungs is the simplest and most frequently used technique oftuberculosis detection. Another alternative is to have chest radiographs examined by a physician, which is a time-consuming clinical procedure [5]. Tuberculosis is often misclassified as other illnesses with similar radiographic patterns as a result of CXR imaging, resulting in ineffective treatment and deteriorating clinical conditions [6]. In this context, a transfer learning approach based on convolutional neural networks may be critical. CXR pictures were chosen as a sample dataset in this research because they are cost-effective and time-efficient, as well as compact and readily accessible in nearly every clinic. As a consequence, fewer poor nations will profit from this study. The main motivation for this study is to diagnose tuberculosis without any delay. This method will aid in the fast diagnosis of tuberculosis via the use of CXR images. False result problems may be resolved if a model is designed with a high degree of precision. If this test were adopted, the system would be more robust and allow for the evaluation of a greater number of individuals in a shorter amount of time, significantly decreasing the spread. ### 1.1. Existing Work Several research groups used CXR pictures to identifytuberculosis (TB) and normal patients using a standard machine learning method. 
But the objective of this article is to get a better understanding of the issue. We reviewed current papers and articles and considered strategies for improving the accuracy of our deep learning model. To compare our efforts, we utilized an existing dataset and examined their model. Using a deep learning approach, Hooda et al. classified CXR images into tuberculosis and non-TB groups with an accuracy of 82.09 percent. Evangelista and Guedes developed a computer-assisted technique based on intelligent pattern recognition [7]. By modifying the settings of deep-layered CNNs, it has been shown that deep machine learning techniques may be used to diagnose TB. Transfer learning was used in the context of deep learning to identify TB by utilizing pretrained models and their ensembles [8].Pasa et al. proposed a deep network architecture with an accuracy of 86.82 percent for TB screening. Additionally, they demonstrated an interactive visualization application for patients with TB [9]. Chhikara et al. investigated whether CXR pictures might be used to detect pneumonia. They used preprocessing methods like filtering and gamma correction to evaluate the performance of pretrained models (Resnet, ImageNet, Xception, and Inception) [10]. The paper “Reliable TB Detection Using Chest X-ray with Deep Learning, Segmentation, and Visualization” was authored by Tawsifur Rahman. He used deep convolutional neural networks and the modules ResNet18, ResNet50, ResNet101, ChexNet, InceptionV3, Vgg19, DenseNet201, SqueezeNet, and MobileNet to differentiate between tuberculosis and normal images. In the identification of tuberculosis using X-ray images, the top-performing model, ChexNet, had accuracy, precision, sensitivity, F1-score, and specificity of 96.47 percent, 96.62 percent, 96.47 percent, 96.47 percent, and 96.51 percent, respectively [11].Stefanus Kieu Tao Hwa showed deep learning for TB diagnosis using chest X-rays; according to the data, the suggested ensemble method achieved the highest accuracy of 89.77 percent, sensitivity of 90.91 percent, and specificity of 88.64 percent [12]. Priya Ebenezer et al. have extended all current TB detection proposals. They designed a new method for identifying overlapping TB items. To determine the boundaries between the single bacterium area, the overlapping bacilli zone, and the nonbacilli region, form characteristics such as eccentricity, compactness, circularity, and tortuosity were examined. A novel proposal for an overlapping bacilli area was made based on concavities in the region. Because concavities imply overlapping, the optimum separation line is determined by the concavity's deepest concavity point. This provides an additional advantage. When the separation is overlapped, the overall count of tuberculosis bacilli is much more precise [13]. Vishnu Makkapati et al. were the first to diagnose TB using the form characteristics of Mycobacterium tuberculosis bacteria. They proposed a method based on hue color components for segmenting bacilli through adaptive hue range selection. The existence of a beaded structure inside the bacilli, as well as the thread length and breadth parameters, indicates the validity or invalidity of the bacilli [14]. Sadaphal et al. developed a method in 2008 that incorporated (1) Bayesian segmentation, which relied on prior knowledge of ZN stain colors to estimate the likelihood of a pixel having a “TB item,” and (2) shape/size analysis [15].In the majority of studies, researchers claimed around 90% accuracy. 
However, the major contribution of this research is that several pretrained models were utilized. InceptionV3 was 96 percent accurate and 97.57 percent validated, MobileNetV2 was 98 percent accurate and 97.93 percent validated, and InceptionResNetV2 was 99 percent accurate and 99.36 percent validated. This study presents a novel method for detecting tuberculosis-infected individuals using deep learning.CNN (convolutional neural networking) is a technique that is well suited for this kind of issue. This method will aid in the rapid detection oftuberculosis from chest X-ray pictures.The remaining portion of the article is organized as follows: Section2 addressed the approach and methodology. Sections 3 and 4 addressed the analysis of the results and the conclusion, respectively. ## 1.1. Existing Work Several research groups used CXR pictures to identifytuberculosis (TB) and normal patients using a standard machine learning method. But the objective of this article is to get a better understanding of the issue. We reviewed current papers and articles and considered strategies for improving the accuracy of our deep learning model. To compare our efforts, we utilized an existing dataset and examined their model. Using a deep learning approach, Hooda et al. classified CXR images into tuberculosis and non-TB groups with an accuracy of 82.09 percent. Evangelista and Guedes developed a computer-assisted technique based on intelligent pattern recognition [7]. By modifying the settings of deep-layered CNNs, it has been shown that deep machine learning techniques may be used to diagnose TB. Transfer learning was used in the context of deep learning to identify TB by utilizing pretrained models and their ensembles [8].Pasa et al. proposed a deep network architecture with an accuracy of 86.82 percent for TB screening. Additionally, they demonstrated an interactive visualization application for patients with TB [9]. Chhikara et al. investigated whether CXR pictures might be used to detect pneumonia. They used preprocessing methods like filtering and gamma correction to evaluate the performance of pretrained models (Resnet, ImageNet, Xception, and Inception) [10]. The paper “Reliable TB Detection Using Chest X-ray with Deep Learning, Segmentation, and Visualization” was authored by Tawsifur Rahman. He used deep convolutional neural networks and the modules ResNet18, ResNet50, ResNet101, ChexNet, InceptionV3, Vgg19, DenseNet201, SqueezeNet, and MobileNet to differentiate between tuberculosis and normal images. In the identification of tuberculosis using X-ray images, the top-performing model, ChexNet, had accuracy, precision, sensitivity, F1-score, and specificity of 96.47 percent, 96.62 percent, 96.47 percent, 96.47 percent, and 96.51 percent, respectively [11].Stefanus Kieu Tao Hwa showed deep learning for TB diagnosis using chest X-rays; according to the data, the suggested ensemble method achieved the highest accuracy of 89.77 percent, sensitivity of 90.91 percent, and specificity of 88.64 percent [12]. Priya Ebenezer et al. have extended all current TB detection proposals. They designed a new method for identifying overlapping TB items. To determine the boundaries between the single bacterium area, the overlapping bacilli zone, and the nonbacilli region, form characteristics such as eccentricity, compactness, circularity, and tortuosity were examined. A novel proposal for an overlapping bacilli area was made based on concavities in the region. 
Because concavities imply overlapping, the optimum separation line is determined by the concavity's deepest concavity point. This provides an additional advantage. When the separation is overlapped, the overall count of tuberculosis bacilli is much more precise [13]. Vishnu Makkapati et al. were the first to diagnose TB using the form characteristics of Mycobacterium tuberculosis bacteria. They proposed a method based on hue color components for segmenting bacilli through adaptive hue range selection. The existence of a beaded structure inside the bacilli, as well as the thread length and breadth parameters, indicates the validity or invalidity of the bacilli [14]. Sadaphal et al. developed a method in 2008 that incorporated (1) Bayesian segmentation, which relied on prior knowledge of ZN stain colors to estimate the likelihood of a pixel having a “TB item,” and (2) shape/size analysis [15].In the majority of studies, researchers claimed around 90% accuracy. However, the major contribution of this research is that several pretrained models were utilized. InceptionV3 was 96 percent accurate and 97.57 percent validated, MobileNetV2 was 98 percent accurate and 97.93 percent validated, and InceptionResNetV2 was 99 percent accurate and 99.36 percent validated. This study presents a novel method for detecting tuberculosis-infected individuals using deep learning.CNN (convolutional neural networking) is a technique that is well suited for this kind of issue. This method will aid in the rapid detection oftuberculosis from chest X-ray pictures.The remaining portion of the article is organized as follows: Section2 addressed the approach and methodology. Sections 3 and 4 addressed the analysis of the results and the conclusion, respectively. ## 2. Method and Materials Open-source Kaggle provided the dataset used in this study. Patients withtuberculosis and those without the disease were represented in the dataset. For feature extraction, a CNN is used. A flatten layer, two dense layers, and a ReLU activation function are all included in the model. It also includes four Conv2D layers and three MaxPooling2D layers. SoftMax, the last and most thick layer, serves as an activation layer. Transfer learning is also utilized in this study to compare the accuracy of the created model with the accuracy of the pretrained model. With a few changes in the final layers, MobileNetV2, InceptionResNetV2, Xception, and InceptionV3 were utilized for pretrained models. Layers such as average pooling, flatten, dense, and dropout are used to create bespoke end results. When it comes to extracting visual details, the CNN model works effectively. The model learns and distinguishes between images by extracting characteristics from the input images. Figure 1 shows the workflow diagram of the TB and normal image detection.Figure 1 Workflow diagram of the TB or normal image detection.Python is the perfect programming language for data analysis. Because of Python's extensive library access, deep learning problems are very successful in the Python programming language. On a personal GPU, Anaconda Navigator and Jupyter Notebook were used for dataset preparation, as well as Google Colab for handling large datasets and online model training. ### 2.1. Dataset This system's dataset is made up of 3500 TB and 3500 normal images. For this study, the Tuberculosis (TB) Chest X-ray Database has been used [16]. 
The visualization of this dataset is shown in Figures 2 and 3.Figure 2 Non-TB X-ray images.Figure 3 Tuberculosis X-ray images.Figure2 depicts a healthy chest X-ray, whereas Figure 3 depicts a disease chest X-ray caused by tuberculosis. The pictures in the collection have varying starting heights and widths. Those provided pictures have a predetermined form thanks to the model. The Tuberculosis (TB) Chest X-ray Database is a balanced medical dataset. The total number of tuberculosis and nontuberculosis cases is equal (3500 each). Figure 4 displays the total number of TB and non-TB records in this dataset.Figure 4 Total number of TB and non-TB records. ### 2.2. Block Diagram A dataset with two subsections is provided as input in the block design shown in Figure5. This system underwent some preprocessing before fitting the model, such as importing pictures of a certain size, dividing the dataset, and using data augmentation methods. Better accuracy was achieved after fitting and fine-tuning the model. It was possible to see how loss and accuracy evolve over time by plotting a confusion matrix and a model of loss and accuracy. As a final step, the classification result section shows how well the model did in distinguishing between pictures of TB and those not associated with the disease.Figure 5 Block diagram of the proposed system.The entire system is shown in the block diagram in the simplest possible manner. In this research, the decision-making component of the system plays a very important role. An enormous quantity of data is used to train the model, which then uses that data to make a conclusion. ### 2.3. Preprocessing The preprocessing phase occurs before the training and testing of the data. Picture dimensions are redimensioned, images are transformed to an array, input is preprocessed using MobileNetV2, and hot labels are finally encoded throughout the four preparation stages. Because of the effectiveness of the training model, picture scaling is an important preprocessing step in computer vision. The smaller the image, the smoother it runs. In this research, an image was resized to 256 by 256 pixels. Following that, all of the images in the dataset will be processed into an array. For calling, the image is converted into a loop function array. The image is then used in conjunction with MobileNetV2 to preproceed input. The last step is hot coding on labels, since many computer learning algorithms cannot operate directly on data labelling. This method, as well as all input and output variables, must be numerical. The tagged data are transformed into a numerical label in order to be interpreted and analyzed. Following the preprocessing step, the data are divided into three batches: 70 percent training data, 20 percent validation data, and the remaining testing data. Each load contains both normal and TB images.Several increments were used to add variety to the original images. Because lung X-rays are generally symmetrical with a few minor characteristics, increases such as vertical images were used. However, the main goal was for TB and associated symptoms to occur in either lung and be detectable in both of our models. To provide more variety, these improved images have been gently rotated and illuminated or dimmed. ### 2.4. Convolutional Neural Network and Transfer Learning In artificial intelligence, CNNs (convolutional neural networks) are often employed for image categorization [17]. 
Before the input is sent through a neural network, it handles data convolution, maximum pooling, and flattening. It works because the various weights are set up using various inputs. Once the data have passed through the hidden layers, weights are computed and assessed. Following input from the cost function, the network goes through a back propagation phase [18]. During this procedure, the input layer weight is readjusted once again, and the process is repeated until it finds an optimal position for weight adjustment there. Epochs show how many times the cycle has repeated itself. It takes a long time to train a model using neural networks, which is a major disadvantage. To get around this, we will utilize transfer learning, another hot subject in computer vision research [19]. To learn from a dataset, we use transfer learning, which makes use of a pretrained model. It saves us a lot of time in training and takes care of a lot of different important things at the same time. As time passes, we will be able to fine-tune our networks for improved accuracy and simplicity.Transfer learning is to preserve knowledge from one area and apply it to another. Training takes a long time since model parameters are all initialized using a random Gaussian distribution and a convergence of at least 30 epochs with a lot of dimensions of 50 pictures is generated. The problem stems from the fact that big, well-noted pictures may be difficult to obtain in the medical profession. Due to a paucity of medical data, it is sometimes difficult to correctly predict models. One of the most difficult problems for medical researchers is the shortage of medical data or datasets. Data are an important factor in deep learning methods. Data processing and labelling are both time-consuming and costly. The advantage of transfer learning is that it does not require vast datasets. Computations are becoming easier and less costly. Transfer learning is a technique in which the knowledge from a pretrained model that was trained on a large dataset is transferred to a new model that has to be trained, incorporating fresh data that is comparatively less than needed. For a specific job, this method started CNN training with a tiny dataset, which included a large-scale dataset that had previously been trained in the pretrained models. ### 2.5. Overview of the Proposed Model Four CNN-based pretrained models were used in this research to classify chest X-ray pictures. Xception, InceptionV3, MobileNetV2, and InceptionResNetV2 are the models used. There are two types of chest X-ray pictures: one is unaffected bytuberculosis, whereas the other is. This study also utilized a transfer learning technique that can perform well with sparse data by utilizing ImageNet data and is efficient in terms of training time. The symmetric system architecture of the transfer learning method is shown in Figure 6.Figure 6 System architecture with InceptionResNetV2.InceptionResNetV2 was developed by merging two of the most well-known deep convolutional neural networks, Inception [20] and ResNet [21], and using batch normalization rather than summation for the traditional layers. The popular transfer learning model, InceptionResNetV2, was trained on data from the ImageNet database from various sources and classifications, and it is definitely making waves. When include top = “False”, it simply indicates that the fully connected layer will not be included, even if the input shape is specified. 224 × 224 × 3. 
When training = “False”, the weights in a particular layer are not changed during training. The dropout layer, which aids in overfitting prevention, randomly sets input units to 0 at a rate frequency at each step during training time. Dropout (0.5) represents a dropout effect of 50 percent. Flattening is the process of converting data into a one-dimensional array for use in the next layer. The output of the convolutional layers is flattened to generate a single large feature vector. It is also linked to the final classification model, forming what is known as a fully connected layer. Batch normalization is a method for improving the speed and stability of artificial neural networks by recentering and rescaling the inputs of the layers. When building the pretrained models, Table 1 indicates that the batch size is 32, the maximum epoch is 25, and a loss function of “binary cross-entropy” is used.Table 1 Parameters used for compiling various models. ParametersValueBatch size32ShufflingEach epochOptimizerAdamLearning rate1e−3Decay1e−3/epochLossBinary_crossentropyEpoch25Execution environmentGPUThe sigmoid function is an activation function that assists a neuron in making choices. These routines produce either a 0 or a 1. It employs probability to generate a binary output [22]. The outcome is determined by determining who has the highest probability value. This function outperforms the threshold function and is more useful for categorization. This activation function is often applied to the last dense block. The equation for the sigmoid function is(1)X=11+e−Y.ReLU is an activation function that operates on the concept of rectification. This function's output stays at 0 from the start until a specified point. After crossing or reaching a certain value, the output changes and continues to rise as the input changes [23]. Because it is only activated when there is a significant or significant input inside the neurons, this function works extremely well. ### 2.6. Evaluation Criteria Following the completion of the training phase, all models were evaluated on the test dataset. The performance of these systems was evaluated using the accuracy, precision, recall, F1-score, and AUC range. This study's performance measures are all mentioned below. True positives (TP) aretuberculosis images that were correctly identified as such; true negatives (TN) are normal images that were correctly identified as such; false positives (FP) are normal images that were incorrectly identified as tuberculosis images; and false negatives (FN) are normal tuberculosis images.Accuracy simply indicates how close our expected result is to the actual result [24]. It is represented as a percentage. It is determined by adding true positive and true negative and dividing the overall number of potential outcomes by the number of possible outcomes:(2)Accuracy=TP+FNTP+TF+FP+FN.Precision is a measure of how close predicted outcomes are to each other [25]. True positive is obtained by dividing true positive by the sum of true and false positives:(3)Precision=TPTP+FP.Recall is determined by dividing the total number of true positives by the total number of true positives and false negatives:(4)Recall=TPTP+FN.The F1-score combines a classifier's accuracy and recall into a single measure by calculating their harmonic mean. It is most commonly used to compare the results of two different classifiers. Assume classifier A has greater recall and classifier B has greater precision. 
## 2.1. Dataset This system's dataset is made up of 3500 TB and 3500 normal images. For this study, the Tuberculosis (TB) Chest X-ray Database has been used [16]. Samples from this dataset are shown in Figures 2 and 3. Figure 2 Non-TB X-ray images. Figure 3 Tuberculosis X-ray images. Figure 2 depicts healthy chest X-rays, whereas Figure 3 depicts chest X-rays showing disease caused by tuberculosis. The pictures in the collection have varying original heights and widths; they are brought to a fixed shape before being passed to the model. The Tuberculosis (TB) Chest X-ray Database is a balanced medical dataset: the numbers of tuberculosis and nontuberculosis cases are equal (3500 each). Figure 4 displays the total number of TB and non-TB records in this dataset. Figure 4 Total number of TB and non-TB records. ## 2.2. Block Diagram A dataset with two classes is provided as input in the block diagram shown in Figure 5. The system performs some preprocessing before fitting the model, such as importing pictures at a fixed size, splitting the dataset, and applying data augmentation. Better accuracy was achieved after fitting and fine-tuning the model. Plotting a confusion matrix and the loss and accuracy curves made it possible to see how loss and accuracy evolve over training. As a final step, the classification result section shows how well the model distinguishes between pictures of TB and those not associated with the disease. Figure 5 Block diagram of the proposed system. The block diagram presents the entire system in the simplest possible manner. In this research, the decision-making component of the system plays a very important role: the model is trained on a large quantity of data and then uses what it has learned to reach a conclusion. ## 2.3. Preprocessing The preprocessing phase occurs before training and testing of the data. It consists of four stages: image dimensions are resized, images are converted to arrays, the input is preprocessed with the MobileNetV2 preprocessing function, and the labels are one-hot encoded. Image resizing is an important preprocessing step in computer vision because it affects the efficiency of model training; the smaller the image, the faster it is processed. In this research, images were resized to 256 by 256 pixels. Following that, all of the images in the dataset are converted into arrays inside a loop. Each image array is then passed to the MobileNetV2 preprocessing routine. The last step is one-hot encoding of the labels, since many machine learning algorithms cannot operate directly on categorical labels.
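As a concrete illustration of the four preprocessing stages just described, the following is a minimal sketch assuming a TensorFlow/Keras environment; the directory layout and the helper name `load_dataset` are illustrative assumptions rather than the authors' actual code.

```python
# Minimal sketch of the four preprocessing stages in Section 2.3: resize,
# array conversion, MobileNetV2 preprocessing, and one-hot label encoding.
# Assumes TensorFlow/Keras; the directory layout and helper name are illustrative.
import os
import numpy as np
from tensorflow.keras.preprocessing.image import load_img, img_to_array
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input
from tensorflow.keras.utils import to_categorical

def load_dataset(root_dir, classes=("Normal", "Tuberculosis"), size=(256, 256)):
    images, labels = [], []
    for label_index, class_name in enumerate(classes):
        class_dir = os.path.join(root_dir, class_name)
        for file_name in os.listdir(class_dir):
            # Stage 1: resize every picture to a fixed shape.
            img = load_img(os.path.join(class_dir, file_name), target_size=size)
            # Stage 2: convert the image to a NumPy array.
            arr = img_to_array(img)
            # Stage 3: apply MobileNetV2 preprocessing (scales pixels to [-1, 1]).
            arr = preprocess_input(arr)
            images.append(arr)
            labels.append(label_index)
    # Stage 4: one-hot encode the labels so the targets are numerical.
    return np.array(images), to_categorical(labels, num_classes=len(classes))

# Example (hypothetical path): X, y = load_dataset("TB_Chest_Radiography_Database")
```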
One-hot encoding is used because this kind of model, like many others, requires all input and output variables to be numerical; the tagged data are therefore transformed into numerical labels so that they can be interpreted and analyzed. Following the preprocessing step, the data are divided into three subsets: 70 percent training data, 20 percent validation data, and the remaining 10 percent testing data. Each subset contains both normal and TB images. Several augmentations were used to add variety to the original images. Because lung X-rays are broadly symmetrical apart from a few minor features, augmentations such as flipped images were used. The main goal was for TB and its associated signs, which can occur in either lung, to remain detectable by the models. To provide more variety, the augmented images were also gently rotated and brightened or dimmed (an illustrative sketch of this split and augmentation appears at the end of Section 2.4). ## 2.4. Convolutional Neural Network and Transfer Learning In artificial intelligence, CNNs (convolutional neural networks) are often employed for image classification [17]. Before the input reaches the fully connected part of the network, it passes through convolution, max pooling, and flattening operations. The network's weights are initialized and then adjusted as different inputs are presented. Once the data have passed through the hidden layers, the outputs are computed and assessed by the cost function, and the network then goes through a backpropagation phase [18]. During this procedure, the layer weights are readjusted, and the process is repeated until an optimal set of weights is found. Epochs indicate how many times this cycle has been repeated over the training data. Training a neural network from scratch takes a long time, which is a major disadvantage. To get around this, we utilize transfer learning, another active topic in computer vision research [19]. With transfer learning, a pretrained model is used as the starting point for learning from a new dataset. It saves a great deal of training time and takes care of much of the low-level feature extraction at the same time. The network can then be fine-tuned for improved accuracy and simplicity. Transfer learning preserves knowledge gained in one domain and applies it to another. Training from scratch is slow because all model parameters are initialized from a random Gaussian distribution, and convergence typically requires at least 30 epochs over a large number of images. A further problem is that large, well-annotated image collections are difficult to obtain in the medical field. Owing to this scarcity of medical data, it is often difficult to train models that predict reliably, and the shortage of medical datasets is one of the most difficult problems medical researchers face. Data are a key ingredient of deep learning methods, and data processing and labelling are both time-consuming and costly. The advantage of transfer learning is that it does not require vast datasets, and computation becomes easier and less costly. Transfer learning is a technique in which the knowledge of a pretrained model, trained on a large dataset, is transferred to a new model that is then trained on comparatively little fresh data. For a specific task, CNN training can therefore start from a model that has already been trained on a large-scale dataset, even when only a small task-specific dataset is available.
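The 70/20/10 split and the light augmentation policy from Section 2.3 can be sketched as follows. This is a hedged illustration assuming scikit-learn's train_test_split and Keras' ImageDataGenerator; the flip direction, rotation angle, and brightness range are assumptions, since the paper only mentions flips, gentle rotation, and brightening or dimming.

```python
# Illustrative 70/20/10 split and light augmentation (flips, gentle rotation,
# brightness changes) as described in Section 2.3. Parameter values are assumptions.
from sklearn.model_selection import train_test_split
from tensorflow.keras.preprocessing.image import ImageDataGenerator

def split_data(X, y, seed=42):
    # 70% training first, then the remaining 30% split into 20% validation / 10% test.
    X_train, X_rest, y_train, y_rest = train_test_split(
        X, y, train_size=0.70, stratify=y.argmax(axis=1), random_state=seed)
    X_val, X_test, y_val, y_test = train_test_split(
        X_rest, y_rest, train_size=2 / 3, stratify=y_rest.argmax(axis=1),
        random_state=seed)
    return (X_train, y_train), (X_val, y_val), (X_test, y_test)

# Augmentation is applied only to the training stream.
train_augmenter = ImageDataGenerator(
    rotation_range=10,             # gentle rotation (assumed angle)
    brightness_range=(0.8, 1.2),   # lightly brightened or dimmed
    horizontal_flip=True,          # mirrored X-rays; flip direction is an assumption
)
```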
## 2.5. Overview of the Proposed Model Four CNN-based pretrained models were used in this research to classify chest X-ray pictures: Xception, InceptionV3, MobileNetV2, and InceptionResNetV2. There are two classes of chest X-ray pictures: one is unaffected by tuberculosis, whereas the other is affected. This study also utilized a transfer learning technique that can perform well with sparse data by utilizing ImageNet weights and is efficient in terms of training time. The symmetric system architecture of the transfer learning method is shown in Figure 6. Figure 6 System architecture with InceptionResNetV2. InceptionResNetV2 was developed by merging two of the most well-known deep convolutional neural networks, Inception [20] and ResNet [21], and by using batch normalization rather than summation for the traditional layers. InceptionResNetV2, a popular transfer learning model, was trained on the ImageNet database, which covers images from many sources and classes. When include_top = "False", the fully connected layer of the pretrained network is not included; the input shape is specified as 224 × 224 × 3. When training = "False", the weights in a particular layer are frozen and not changed during training. The dropout layer, which helps prevent overfitting, randomly sets input units to 0 at a given rate at each step during training; Dropout(0.5) corresponds to a dropout rate of 50 percent. Flattening is the process of converting data into a one-dimensional array for use in the next layer: the output of the convolutional layers is flattened to generate a single large feature vector, which is then linked to the final classification layers, forming what is known as a fully connected head. Batch normalization is a method for improving the speed and stability of artificial neural networks by recentering and rescaling the inputs of the layers. When compiling the pretrained models, Table 1 indicates that the batch size is 32, the maximum number of epochs is 25, and the "binary cross-entropy" loss function is used.

Table 1 Parameters used for compiling various models.

| Parameter | Value |
| --- | --- |
| Batch size | 32 |
| Shuffling | Each epoch |
| Optimizer | Adam |
| Learning rate | 1e−3 |
| Decay | 1e−3/epoch |
| Loss | Binary_crossentropy |
| Epochs | 25 |
| Execution environment | GPU |

The sigmoid function is an activation function that assists a neuron in making choices. It squashes its input into the range 0 to 1 and uses this probability to generate a binary output [22]; the outcome is determined by the class with the highest probability value. This function outperforms a hard threshold function and is more useful for classification, and it is typically applied to the last dense layer. The equation for the sigmoid function is (1) \( X = \frac{1}{1 + e^{-Y}} \). ReLU is an activation function that operates on the concept of rectification. Its output stays at 0 from the start until a specified point; after the input crosses that value, the output rises as the input increases [23]. Because it is only activated when there is a sufficiently large input to the neuron, this function works extremely well.
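Putting the pieces of Section 2.5 and Table 1 together, the head construction and compilation can be sketched as below. This is a hedged, minimal reconstruction assuming TensorFlow/Keras rather than the authors' exact code; in particular, the ordering of the flatten, batch-normalization, and dropout layers and the omission of the per-epoch learning-rate decay are assumptions.

```python
# Minimal transfer-learning head on InceptionResNetV2, following Section 2.5 and
# Table 1. Layer ordering and the omission of per-epoch decay are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionResNetV2

def build_model(input_shape=(224, 224, 3), learning_rate=1e-3):
    base = InceptionResNetV2(include_top=False,        # drop the ImageNet classifier head
                             weights="imagenet",
                             input_shape=input_shape)
    base.trainable = False                             # freeze the pretrained weights

    x = layers.Flatten()(base.output)                  # one long feature vector
    x = layers.BatchNormalization()(x)                 # recenter and rescale activations
    x = layers.Dropout(0.5)(x)                         # 50% dropout against overfitting
    output = layers.Dense(1, activation="sigmoid")(x)  # binary TB / normal decision

    model = models.Model(inputs=base.input, outputs=output)
    # Table 1: Adam at 1e-3 with binary cross-entropy; the 1e-3/epoch decay could be
    # added with a LearningRateScheduler callback.
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=learning_rate),
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# model = build_model()
# model.fit(X_train, y_train[:, 1],          # take the TB column if labels are one-hot
#           validation_data=(X_val, y_val[:, 1]), batch_size=32, epochs=25)
```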
## 2.6. Evaluation Criteria Following the completion of the training phase, all models were evaluated on the test dataset. The performance of these systems was evaluated using accuracy, precision, recall, F1-score, and AUC. The performance measures used in this study are described below. True positives (TP) are tuberculosis images that were correctly identified as such; true negatives (TN) are normal images that were correctly identified as such; false positives (FP) are normal images that were incorrectly identified as tuberculosis images; and false negatives (FN) are tuberculosis images that were incorrectly identified as normal. Accuracy simply indicates how close the predicted results are to the actual results [24]. It is represented as a percentage and is determined by adding the true positives and true negatives and dividing by the total number of outcomes: (2) \( \mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \). Precision measures how many of the predicted positive outcomes are actually positive [25]. It is obtained by dividing the number of true positives by the sum of true and false positives: (3) \( \mathrm{Precision} = \frac{TP}{TP + FP} \). Recall is determined by dividing the number of true positives by the total number of true positives and false negatives: (4) \( \mathrm{Recall} = \frac{TP}{TP + FN} \). The F1-score combines a classifier's precision and recall into a single measure by calculating their harmonic mean. It is most commonly used to compare the results of two different classifiers. Assume classifier A has greater recall and classifier B has greater precision; the F1-scores of both classifiers may then be used to determine which performs better overall. The F1-score of a classification model is computed as follows: (5) \( F1 = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} \). It evaluates the correctness of the model on the dataset. Whereas accuracy can be misleading, the F1-score becomes more useful in situations of uneven class distribution, and it is used with many machine learning models. It is preferred when false negatives and false positives matter more in the dataset than true positives and true negatives, and it gives a more informative picture when data are frequently misclassified. The confusion matrix displays the total numbers of correct and erroneous outcomes, making it possible to see all true-positive, false-positive, true-negative, and false-negative counts [26]. The greater the frequency of true-positive and true-negative outcomes, the greater the accuracy.
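As a concrete illustration of equations (2) through (5), the short helper below computes these scores from predictions via the confusion matrix. It is a minimal sketch assuming scikit-learn and is not taken from the paper's code; the example labels are hypothetical.

```python
# Compute accuracy, precision, recall, and F1 (equations (2)-(5)) from predictions
# via the confusion matrix described in Section 2.6. Assumes scikit-learn.
import numpy as np
from sklearn.metrics import confusion_matrix

def binary_scores(y_true, y_pred):
    # Confusion matrix rows are actual classes, columns are predictions;
    # label 1 = tuberculosis, label 0 = normal.
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Hypothetical example:
# y_true = np.array([1, 1, 0, 0, 1, 0])
# y_pred = np.array([1, 0, 0, 0, 1, 1])
# print(binary_scores(y_true, y_pred))
```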
## 3. Result and Analysis In classifying normal and TB images, we experimented with a variety of models and methods in order to assess their utility and efficacy. Four pretrained CNN models were employed to classify chest X-ray pictures: MobileNetV2, Xception, InceptionResNetV2, and InceptionV3. There are two kinds of chest X-ray pictures; the first is TB, whereas the second is normal. This study also used ImageNet weights in a transfer learning method that is efficient in terms of training time when data are insufficient. Several network designs, including Xception, InceptionV3, InceptionResNetV2, and MobileNetV2, were tested before deciding on a network architecture. A bespoke 19-layer ConvNet was also tried, but it performed badly. InceptionResNetV2 performed the best of all networks, and the findings based on that design are included. Table 2 shows the accuracy and loss history of the four tried models.

Table 2 Comparison of pretrained models.

| Model | Train accuracy | Val accuracy | Train loss | Val loss | Images | Precision | Recall | F1-score |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Xception | 0.9596 | 0.9543 | 0.1155 | 0.1213 | Normal | 1.00 | 0.90 | 0.95 |
|  |  |  |  |  | TB | 0.91 | 1.00 | 0.95 |
| InceptionV3 | 0.9800 | 0.9757 | 0.1160 | 0.1243 | Normal | 0.95 | 0.97 | 0.96 |
|  |  |  |  |  | TB | 0.97 | 0.95 | 0.96 |
| MobileNetV2 | 0.9930 | 0.9793 | 0.0220 | 0.0548 | Normal | 0.96 | 1.00 | 0.98 |
|  |  |  |  |  | TB | 1.00 | 0.96 | 0.98 |
| InceptionResNetV2 | 0.9912 | 0.9936 | 0.0340 | 0.0237 | Normal | 0.99 | 0.98 | 0.99 |
|  |  |  |  |  | TB | 0.98 | 0.99 | 0.99 |

According to Table 2, the InceptionResNetV2 model achieves a validation (testing) accuracy of 99.36 percent, with a validation loss of 2.37 percent. For normal images it reaches 99 percent precision, 98 percent recall, and a 99 percent F1-score, whereas for TB images it reaches 98 percent precision, 99 percent recall, and a 99 percent F1-score. Figures 7 and 8 indicate that the training accuracy was quite low in the first few epochs. At epoch 1, the training accuracy is 57.20 percent, the training loss is 69.01 percent, the validation accuracy is 74.79 percent, and the validation loss is also huge (905.03 percent), indicating that the model learns very slowly at first. The training accuracy improves as the number of epochs increases, and the loss function begins to decrease. The model evaluates the results at the end of 25 epochs; InceptionResNetV2 has a 99.12 percent train accuracy and a 99.643 percent validation accuracy, with a train loss of 3.40 percent and a validation loss of 1.71 percent. Figure 7 Training and validation accuracy. Figure 8 Training and validation loss. In identifying normal and tuberculosis (TB) images, Figure 9 illustrates the true-positive, true-negative, false-positive, and false-negative cases. Figure 9 Confusion matrix. According to the results, the InceptionResNetV2 model correctly identifies images as "normal" 98 percent of the time and incorrectly labels normal images as "TB" 2 percent of the time. Additionally, the confusion matrix shows that the algorithm accurately classifies 99 percent of TB images as "TB," while predicting 1 percent of them as "normal." This research also included real testing, in which individual chest X-ray images were provided as input to the model. When training is complete, a file with the hdf5 extension is created containing the produced model; four hdf5 files representing the four different models were created for this study. Following that, a new test notebook file with the ipynb extension was created. The four models were loaded in this test file, and individual chest X-ray images were then provided as input. Figures 10 and 11 show the prediction results in real time. Figure 10 Normal prediction. Figure 11 TB prediction. The result shown in Figure 10 is normal: the input image was normal, and the model correctly predicted it. Figure 11 shows a tuberculosis chest X-ray image as input; the model then returned a valid result, indicating that the input picture was of a TB chest X-ray.
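The real-time testing step just described, loading a saved .hdf5 model and classifying a single chest X-ray, could look roughly like the sketch below. It assumes Keras model saving and loading; the file names, input size, and 0.5 decision threshold are illustrative assumptions.

```python
# Sketch of the real-time, single-image test from Section 3: load a saved .hdf5
# model and classify one chest X-ray. File names and threshold are illustrative.
import numpy as np
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing.image import load_img, img_to_array
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input

def predict_single(model_path, image_path, size=(224, 224)):
    model = load_model(model_path)                    # e.g. "inceptionresnetv2_tb.hdf5"
    img = img_to_array(load_img(image_path, target_size=size))
    img = preprocess_input(img)[np.newaxis, ...]      # add a batch dimension
    prob_tb = float(model.predict(img)[0, 0])         # sigmoid output in [0, 1]
    return ("TB" if prob_tb >= 0.5 else "Normal"), prob_tb

# label, prob = predict_single("inceptionresnetv2_tb.hdf5", "patient_xray.png")
```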
In Table 3, the classification results are compared with those of the reference papers discussed above.

Table 3 Model comparison with other research.

| This paper (model name) | Accuracy (%) | Reference paper (model name) | Accuracy (%) |
| --- | --- | --- | --- |
| MobileNet | 97.93 (validation or testing) | Ref [11] MobileNet | 94.33 |
| InceptionV3 | 98.00 | Ref [12] InceptionV3 | 83.57 |
|  |  | Ref [11] InceptionV3 | 98.54 |
| Xception | 95.96 | Ref model VGG16 | 87.71 |
| InceptionResNetV2 | 99.36 (validation or testing) | Ref [12] Model ChexNet | 96.47 |
|  |  | Ref [12] Model DenseNet201 | 98.6 |
| MobileNet | 97.93 | Ref [27] GoogleNet | 89.6 |

In this article, using MobileNet, we achieved 97.93 percent accuracy, whereas in [11] the authors achieved 94.33 percent accuracy using the same algorithm. The difference between their work and ours is that we fine-tuned the model to increase its accuracy, whereas they used the general MobileNet algorithm. Using InceptionV3, they achieved higher accuracy than our model, but overall the InceptionResNetV2 used in this article achieved the highest accuracy, 99.36 percent, which is greater than that of any previously reported work. ## 4. Conclusion A deep learning analysis framework may be very beneficial for individuals who do not receive regular screening or checkups in countries with insufficient healthcare facilities. The value of deep learning in clinical imaging is particularly apparent in early analyses that may recommend preventive therapy. Due to a shortage of radiologists in resource-limited areas, technology-assisted tuberculosis diagnosis is required to help reduce the time and effort spent on tuberculosis detection. Medical analysis powered by deep learning is still not as remarkable as experts would want. This study suggests that we may approach that level of accuracy by integrating many recent breakthroughs in deep learning and applying them to suitable situations. In this research, we utilized the pretrained model InceptionResNetV2 to develop an automated tuberculosis detection method. With a validation accuracy of 99.36 percent, the InceptionResNetV2 model has reached a new state of the art in identifying tuberculosis illness. Given this, an automated TB diagnosis system would be very helpful in poor countries where trained pulmonologists are in limited supply. These approaches would improve access to health care by reducing both the patient's and the pulmonologist's time and screening expenses. This advancement will have a profound effect on the medical profession, and tuberculosis patients may be readily identified with this method. Diagnostics will benefit from this kind of approach in the future. Numerous deep learning methods may be used to improve the parameters and generate a trustworthy model that benefits mankind. Additionally, we may experiment with a number of image processing techniques in the future to help the model learn the image patterns more accurately and deliver improved performance. --- *Source: 1002799-2021-11-25.xml*
# The Analysis on the Current Situation of the Utilization Mode of Microalgal Biomass Materials **Authors:** Lina Zhang; Lianfeng Wang; Huizhong Nie; Changbin Liu **Journal:** Advances in Multimedia (2022) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2022/1002952 --- ## Abstract In recent years, global warming caused by the greenhouse effect has become one of the greatest threats to mankind. This will have a serious impact on the environment and human body, such as land desertification, increase in ocean acidity, sea level rise, and increase in pests and diseases; affect people’s normal work and rest; and make people feel dizzy and nauseated. Excessive emissions of carbon dioxide (CO2), the main component of the greenhouse gas, have contributed to the continued rise in Earth’s temperature. Although the world is vigorously developing clean energy to reduce carbon emissions, it will not replace fossil fuels in the short term. The conversion of biomass into energy is the most important way of energy utilization. Biomass energy refers to the solar energy stored in biomass in the form of chemical energy and is the fourth major energy source after oil, coal, and natural gas. At present, biofuels have gone through three developmental stages, which can be divided into first-generation biofuels, second-generation biofuels, and third-generation biofuels according to the types of raw materials and development history. The first generation of biofuels produced from food crops, such as bioethanol derived from sucrose and starch, has already entered the energy market. However, because the first generation of biofuels uses food crops as raw materials, there is a phenomenon of “competing with people for food,” and it is difficult to achieve large-scale application. To avoid the problem of food shortages, second-generation biofuels produced from nonfood crops, such as wood fiber, have been developed. Microalgae biomass energy is favored by governments and scholars all over the world because of its unique advantages of fast reproduction speed and high oil content. The cultivation of microalgae does not occupy traditional farmland, and the marginal land such as mountains, oceans, and deserts can cultivate microalgae, or develop microalgae cultivation in the air through the innovation of microalgae photosynthetic reactors. When municipal wastewater, food industry wastewater, and aquaculture wastewater are used as the medium for large-scale cultivation of microalgae, and waste gas from biogas power generation, flue gas from coal power plants, and industrial waste gas from fermentation are used as the CO2 gas source for large-scale cultivation of microalgae, it can be further reduced. The comprehensive production cost of microalgae bioenergy plays a significant emission reduction effect. Combining the above advantages, the use of microalgae to produce first- or second-generation bioenergy has become a new research direction. This study focuses on the review of microalgal biomass in fuel, nonfuel, wastewater treatment, and fuel cell. --- ## Body ## 1. Introduction Energy and environment are two major themes in the process of human development. With the rapid development of human society in recent years, the excessive use of fossil energy has led to serious environmental pollution and ecological damage. 
The global energy consumption increased by 2.3% compared with the last year in 2018, and CO2 emission only by fossil fuel is 33.1 Gt; also, the mass fraction of CO2 increased from 340μg/g in 1980 to 412 μg/g in 2020 [1]. The large amount of CO2 accelerates the greenhouse effect, which also causes many problems like sea level rise, food security crisis, species extinction and biodiversity crisis, and so on [2], as shown in Figure 1.Figure 1 The CO2 emission from fossil fuel.Due to the decrease in fossil energy and the aggravation of environmental pollution caused by the utilization of fossil energy, scientists have focused their attention on the development of new energy and renewable energy. Biomass energy as the only promising and sustainable carbon energy has been researched by the scientists and experts all around the world. There are some efficient methods, which could convert biomass materials like wood, straw, and leaf into high value-added products through the biological method, chemical method, thermal pyrolysis, and so on [3]. At present, the main development and utilization directions of biomass are as follows: (1) energy regeneration: we use the heat generated by biomass combustion to generate electricity or conduct pyrolysis to produce biomass and biological natural gas, etc. (2) Feed: we convert biomass into feed through drying, crushing, fermentation, pelletizing, and other processes. (3) Fertilizer: we use harmless pretreatment and technical processing of biomass to produce biomass solid molding fertilizer. (4) Composite materials: biomass can be used to make carbon materials, battery-based thin-film materials, and construction and packaging materials, etc. In particular, the research on the preparation of biomass oil and carbon materials will maximize the application value of biomass materials, as shown in Figure 2.Figure 2 The installed capacity of biomass.However, traditional terrestrial biomass materials are affected by many factors (such as production scale and geographical location); the cost of large-scale recycling is relatively high; and algal biomass is the most ideal biomass for energy regeneration and CO2 emission reduction because of its variety, fast reproduction speed, and not limited to geographical location, and does not occupy land resources[4]. Chlorella and Spirulina are the two most common types of microalgae in the ocean. They have low culture cost and fast reproduction, which have been widely used in medicine, food, and wastewater treatment field [5]. Microalgae have the fastest photosynthesis rates and are high in lipids, thus attracting increasing attention in the field of biofuels (as shown in Figure 3). Microalgae have high carbon-fixed efficiency to convert it into biomass energy with carbon balance, and the carbon sequestration efficiency of microalgae in closed reactors is listed in Table 1. Chlorella and Spirulina have already been used as raw materials for biomass oil production, which will greatly ease the pressure on crops to produce biomass oil. Microalgae have high photosynthesis efficiency, up to 50 g/(m2d), which is equivalent to 10–50 times the carbon sequestration capacity of deep forests. Instead of occupying crop paddy fields and arable land, microalgae can be cultivated in saline-alkali land, domestic sewage areas, and tidal flats. It is recognized as the third-generation biomass energy material [6, 7]. 
The main microalgae species and compositions are listed in Table 2. Figure 3 The amount of electricity from biomass.

Table 1 Carbon sequestration efficiency of microalgae in closed reactors.

| Species | Reactor | CO2 volume fraction (%) | CO2 fixed rate (g L−1 d−1) | CO2 fixed efficiency |
| --- | --- | --- | --- | --- |
| Chlorella fusca | Tube | 10.00 | 0.26 | 63.4 |
| Chlorella pyrenoidosa | Tube | 10.00 | 0.25 | 95.1 |
| Chlorella vulgaris | Tube | 10.00 | 0.12 | 95.3 |
| Scenedesmus dimorphus | Column | 2.00 | 0.80 | 63.4 |
| Scenedesmus sp. | Column | 0.25 | 0.16 | 33.0 |
| Spirulina platensis | Film | 15.00 | 1.44 | 85.0 |
| Spirulina sp. | Column | 10.00 | 0.18 | 21.8 |

Table 2 The species and compositions of microalgae. SpeciesCompositionElement analysisIndustrial analysisProteinPolysaccharideLipidCHOWaterVolatileFCAshChlorella42.79.422.544.936.4240.64.1369.4516.2210.2Microcystis59.9320.195.2242.266.2743.079.5970.1314.146.14Nannochloropsis44213049.077.5935.63579.6910.645.03Scenedesmus36.429.319.5507.1130.74.5975.3312.787.3Spirulina48.3630.2113.346.167.1435.444.5479.1415.246.56

We reviewed the status of the main utilization of microalgal biomass, mainly focusing on the preparation of bio-oil and carbon adsorption materials, which will have broad prospects and great application value in sustainable energy development and carbon dioxide capture and utilization, respectively. ## 2. Microalgae Biomass Fuel Product Microalgae can produce bio-oil, bioethanol, biomethane, and biohydrogen [8]. Compared with macroalgae with an oil yield of 30% (70,000 l/ha/y), the oil yield of microalgae could reach 70% (70,000 l/ha/y) owing to their rich lipid content, which is highly competitive and attractive. Luangpipat and Chisti [9] found that the lipid productivity of microalgae in nutrient-sufficient seawater exceeded 37 mg·L−1·d−1 and was nearly 2-fold greater than in freshwater, which could magnify the advantage for microalgae to produce bio-oil, because seawater is cheaply available in large amounts, whereas there is a global shortage of freshwater. Hydrothermal liquefaction (HTL) of wet biomass such as microalgae is one of the most promising methods to produce renewable and sustainable energy as an alternative to fossil fuels, since it significantly reduces the cost of drying and heating. The schematic process of HTL for microalgae is shown in Figure 4. Hu et al. [10] used the aqueous phase obtained from the catalytic/noncatalytic hydrothermal liquefaction of Chlorella as the reaction medium for cyclic liquefaction. Without recycling, the bio-oil yield under Na2CO3 catalysis was lower than the yield in pure water. However, the bio-oil yield increased by 32.6 wt.% after recycling of the aqueous phase without sacrificing oil quality. Leng et al. [11] used the liquefied aqueous solution as nutrients (C, N, and P) for microalgae cultivation, which provided the possibility for the efficient production of liquefied biofuels and the cultivation of algal biomass, enabling the microalgal hydrothermal liquefaction system to realize closed-loop biofuel production. Shen et al. [12] found that MgAl-layered double hydroxides/oxides (MgAl-LDH/LDO) with tunable acidic and basic properties can be developed for catalyzing the HTL of microalgae to obtain bio-oil with a low oxygen content (as shown in Figure 4). Dandamudi et al. [13] conducted the HTL treatment of Nannochloropsis sp. and obtained a bio-oil yield of 43 wt% at 350°C; in most cases, the oil yield improves with increasing temperature and reaches its maximum in the temperature range between 280 and 350°C, which was attributed to the hydrolysis of microalgae. Arun et al.
[14] prepared biomass oil by hydrothermal liquefaction (HTL) of Chlorella and used the biochar produced during the HTL process to remove pollutants (COD, NO3, NH3, and PO4) from wastewater. The study found that the oil production rate of Chlorella is 29.37%; the biochar produced in the process can effectively remove pollutants in wastewater; and the removal rate is about 55%. The elemental compositions of biocrude oil and solid residue from microalgae are listed in Table 3. (1) \( \mathrm{HHV} = 0.355C + 1.423H - 0.154O - 0.145N \). Figure 4 The schematic process of HTL for microalgae.

Table 3 The elemental compositions of biocrude oil and solid residue.

| Samples | C | H | O | HHV (MJ/kg) | H/C | N/C | O/C |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 300°C 5% 2 h | 61.25 | 6.10 | 18.71 | 26.03 | 1.20 | 0.0457 | 0.23 |
| 350°C 0% 2 h | 72.50 | 5.61 | 11.72 | 30.12 | 0.93 | 0.0464 | 0.12 |
| 350°C 5% 2 h | 61.72 | 5.42 | 17.43 | 25.44 | 1.05 | 0.0436 | 0.21 |

Transesterification is a widely used method for converting oil into biodiesel, which converts the original viscous microalgal oil (triglycerides or free fatty acids) into fatty acid alkyl esters with smaller molecular weight. The process of transesterification is mainly affected by reaction conditions, molar ratio of ethanol and oil, catalyst type, reaction time, temperature and purification of reactants, etc., while alkali catalysts are easily affected by free fatty acids. The growing aviation demand consumes more than 5 million barrels of aviation fuel every day and releases a large amount of carbon dioxide, nitrogen oxides, carbon monoxide, sulfur oxides, and other environmental pollutants. Therefore, biojet fuel that can reduce greenhouse gas emissions by 80% has attracted much attention. Among them, green biojet fuel products mixed with microalgae biodiesel and traditional chemical fuel have been produced and put into production. Microalgae oil is hydrotreated (fatty acid and ester hydrotreating) into aviation fuel, and the whole process is processed according to the ASTM D7566 standard. Another production method for the conversion of microalgal oil into aviation fuel is the Fischer-Tropsch synthesis process, which can extract high-quality fuels from natural gas, coal mines, biomass, etc. Microalgal biomass is converted into liquid fuels through a gasification process, that is, gaseous components such as carbon monoxide and hydrogen are converted into liquid hydrocarbon fuels. The reaction path of microalgae through the HTL process is shown in Figure 5. Figure 5 The reaction path of microalgae through the HTL process. (2) \( \text{Biocrude yield} = \frac{\text{weight of biocrude}}{\text{weight of dry biomass}} \times 100\% \), \( \text{Solid residue yield} = \frac{\text{weight of solid residue}}{\text{weight of dry biomass}} \times 100\% \), \( \text{Other yield} = 100\% - \text{Biocrude yield} - \text{Solid residue yield} \), \( \text{Solid conversion yield} = 100\% - \text{Solid residue yield} \), \( \text{Energy recovery rate} = \frac{\mathrm{HHV}_{\mathrm{biocrude}} \times \text{Biocrude yield}}{\mathrm{HHV}_{\mathrm{feedstock}}} \). Compared with other plants, algae contain higher soluble polysaccharides and can be used to produce bioethanol. Microalgae are easy to be crushed and dried due to the lack of differentiation of roots, stems, and leaves during the growth process, and the pretreatment cost is low. The cellulose contained in microalgae cells is different from that contained in terrestrial plants, and its hydrogen bonds are weaker and more easily degraded. Relatively simple processing establishes the advantages of microalgae as a feedstock for fuel ethanol production. Dilute acid or enzymatic pretreatment can be used in the saccharification process.
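For reference, the HHV correlation in equation (1) and the yield and energy-recovery definitions in equation (2) above can be expressed as a small helper. This is an illustrative sketch only: the function and variable names are assumptions, the inputs are elemental weight percentages and product masses, and HHV_feedstock denotes the higher heating value of the dry algal feedstock.

```python
# Illustrative helpers for equations (1) and (2): HHV estimate from elemental
# wt% and the HTL yield / energy-recovery definitions. Names are assumptions.
def hhv_mj_per_kg(c, h, o, n):
    # Equation (1): HHV = 0.355*C + 1.423*H - 0.154*O - 0.145*N (MJ/kg)
    return 0.355 * c + 1.423 * h - 0.154 * o - 0.145 * n

def htl_yields(biocrude_g, residue_g, dry_biomass_g, hhv_biocrude, hhv_feedstock):
    # Equation (2): yields as percent of dry biomass, energy recovery in percent.
    biocrude_yield = biocrude_g / dry_biomass_g * 100.0
    solid_residue_yield = residue_g / dry_biomass_g * 100.0
    return {
        "biocrude_yield_%": biocrude_yield,
        "solid_residue_yield_%": solid_residue_yield,
        "other_yield_%": 100.0 - biocrude_yield - solid_residue_yield,
        "solid_conversion_yield_%": 100.0 - solid_residue_yield,
        "energy_recovery_%": hhv_biocrude * biocrude_yield / hhv_feedstock,
    }

# Hypothetical example: 10 g dry algae giving 3.2 g biocrude and 2.5 g residue.
# print(hhv_mj_per_kg(61.25, 6.10, 18.71, 2.8))
# print(htl_yields(3.2, 2.5, 10.0, hhv_biocrude=26.0, hhv_feedstock=18.0))
```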
After ultrasonic treatment of microalgae, acid hydrolysis and enzymatic hydrolysis of starch were used, respectively, andSaccharomyces cerevisiae was further used to produce ethanol. Acid hydrolysis and enzymatic hydrolysis are the two main ways to hydrolyze polysaccharides for subsequent fermentation. Dilute acid pretreatment is a commonly used method. For different algae species, the type, concentration, and reaction temperature adjust time and other aspects to achieve a better preprocessing effect. After pretreatment of microalgal biomass, ethanol can be obtained by microbial fermentation such as yeast. Different fermentation strains were used according to the difference in the biomass content of each microalga. Saccharomyces cerevisiae is currently the most used, and Z. mobiles has also been extensively studied in recent decades. Fermentation methods are mainly divided into the hydrolysis fermentation (SHF) method and the simultaneous saccharification fermentation (SSF) method. Compared with the two, the SHF method has a higher yield of ethanol, but the SSF method takes less time, and the yield of fuel ethanol is relatively high. The SSF method is divided into continuous or semicontinuous processes, and the semicontinuous SSF method uses less enzyme. ## 3. Microalgae Biomass Nonfuel Application Algal carbohydrates are synthesized by the immobilization of carbon dioxide in microalgae in the process of photosynthesis. They mainly use adenosine triphosphate (ATP) and nicotinamide adenine dinucleotide phosphate (NADPH) to absorb and fix carbon dioxide in the air to synthesize glucose and other sugars through the Calvin cycle metabolic pathway. These carbohydrates accumulate in plastids as reserve substances (such as starch) or the main components of cell walls. Studies have shown that there is a direct competitive relationship between lipid and starch synthesis, because the main precursor of triglyceride synthesis, glycerol-3-phosphate (G3P, glycerol-3-phosphate), is synthesized through the catabolism of glucose. Therefore, increasing carbohydrate accumulation in microalgal biomass can be achieved by enhancing glucan storage and reducing starch degradation, i.e., cultivation techniques to increase carbohydrate content in microalgae, including irradiance, nitrogen depletion, temperature variation, and pH methods such as adjustment and additional provision of carbon dioxide. The carbohydrates of the microalgae are mainly composed of starch, glucose, cellulose/hemicellulose, and various polysaccharides, and the starch and glucose can be converted into bioethanol including bioethanol and biofuels including hydrogen products. The polysaccharides contained in microalgae are mainly galactan including carrageenan and agar, and carrageenan can be stably extracted from red seaweed. At present, microalgal polysaccharides can be used in food, cosmetics, textiles, stabilizers, emulsifiers, lubricants, thickeners, and clinical drugs. Microalgae sulfated polysaccharides exhibit a wide range of pharmacological effects, such as antioxidant, antitumor, anticoagulant, anti-inflammatory, antiviral, and immunomodulatory agents. 
Sulfated polysaccharides can be extracted from Porphyra and are applied to the skin due to their ability to inhibit the adhesion and migration of polymorphonuclear leukocytes, and anti-inflammatory treatment of the skin.Microalgae are rich in many pigments related to light exposure; in addition to chlorophyll, most of them are Phycobiliproteins that contribute to the utilization of light-energy and light-protective substances such as carotenoids. Another important microalgal pigment is astaxanthin, which has a powerful antioxidant effect. Astaxanthin can prevent and treat chronic inflammatory diseases, cancer, neurological diseases, liver diseases, metabolic syndrome, diabetes, diabetic nephropathy, gastrointestinal diseases, etc. Haematococcus pluvialis has been found to have a high natural content of astaxanthin (1.5–3% dry matter content), which is also the main natural source of astaxanthin currently commercialized. There are also some microalgal pigments such as lutein, zeaxanthin, and canthaxanthin that are used for chicken skin coloring and pharmaceutical purposes. In addition, algal protein, phycocyanin, and phycoerythrin are used in the food and cosmetic industries; carotene is used as a prerequisite for vitamin A in health food; and many microalgal pigments are also used in natural food or beverage natural coloring agent.Protein is an important component of human nutrients, and lack of protein is one of the main causes of nutritional deficiencies. Some microalgae contain up to 60% protein, and microalgal protein can be used as animal or fish feed, chemical fertilizer, industrial enzymes, bioplastics, and surfactants. At present, the most widely cultivated protein-rich microalgae are Spirulina belonging to the cyanobacteria species, which are not only rich in 60% crude protein, but also contain vitamins, minerals, and other biologically active factors. The cell wall of Spirulina is composed of polysaccharides; its digestibility can reach 86%; and it is easily absorbed and utilized by the human body. Spirulina can be processed into tablets, flakes, and powders as a human dietary supplement, but also as a feed additive in the aquaculture, aquarium, and poultry industries. In addition, Anabaena, Chlorella, Dunaliella, Euglena, etc. are also high in protein, and the blue-green microalgae Anabaena has been found to be a good source of protein.High value-added biomaterials or bioproducts of microalgae have also been commercially used. Microalgae Arthrospira and Chlorella have been used in large quantities in the skin care market, and some cosmetic companies have carried out research work on their own microalgae product systems, which can extract antiaging, regenerating, emollient, anti-irritant, sunscreen, scalp care, and other cosmetic products. The most important pharmaceutical ingredient in Chlorella is 1, 3-glucan, which is an active immune stimulant, a free radical scavenger, and a blood lipid reducer, and can be effective in gastric ulcers, trauma, and constipation. Microalgal biomass, such as vitamins A, B1, B2, B6, and B12, is also the effective source of essential vitamins such as C, E, biotin, niacin, folic acid, and pantothenic acid. Carrageenan in microalgae can be widely used as an emulsifier and a stabilizer for food, such as chocolate milk, ice cream, evaporated milk, pudding, jelly, jam, and salad dressing. Due to its antitumor, antiviral, and anticoagulant properties, carrageenan also has potential pharmaceutical functions. ## 4. 
Microalgae Biomass Wastewater Treatment In recent years, the population growth and the rapid development of urbanization and industrialization have resulted in an increasing shortage of water resources and serious pollution. Urban life, industrial production, and agricultural activities will produce wastewater with excess organic carbon, nitrogen, phosphorus, and metal elements. After discharge, it will cause eutrophication of the water environment, damage the soil structure, and cause harmful effects on aquatic organisms and human health. The discharge and treatment of wastewater have always attracted much attention (as shown in Figure6).Figure 6 The work path of microalgae in wastewater treatment.As photosynthetic microorganisms, similar to plants, microalgae have chloroplasts and can provide energy for growth and metabolism through photosynthesis. Microalgae widely exist in various water environments such as freshwater, seawater, and wastewater from different sources. They can use nutrients such as nitrogen and phosphorus in wastewater for their own production while removing chemical oxygen demand (chemical oxygen demand, COD), ammonia nitrogen (NH3 -N), total nitrogen (TN, TN), and total phosphorus (total phosphorus, TP) and other pollutants in wastewater and high removal rate [15].Combining microalgae cultivation with wastewater treatment is an economical and environmentally friendly approach to improving microalgal oil production, simplifying wastewater treatment processes, reducing microalgal biomass production costs and wastewater treatment costs, removing pollutants, and reducing CO2. There are many aspects of capture, fixation, and utilization of advantages [16, 17]. Some algal species can fix nitrogen and phosphorus. Using organic and inorganic nitrogen compounds, microalgae cells can synthesize amino acids and proteins; as the growth cycle of microalgae cells increases, the absorption efficiency of nitrogen and phosphorus is gradually enhanced [18]. Microalgal biomass after wastewater treatment can be used to produce high value-added products such as carbohydrates, pigments, and proteins [19]. Compared with traditional oil plants, microalgae grow faster and have higher oil content; their oil content is generally 20%–70% (dry weight); microalgae oil is mostly neutral lipid suitable for biodiesel production; and after biorefining, it can be converted into biodiesel with the advantages of cleanness, environmental protection, carbon neutrality, etc. The oil derived from microalgal biomass is the third-generation biodiesel source with good development prospects. However, compared with the traditional raw materials for biodiesel production, the high production cost of microalgal oil is a major bottleneck for its industrialization. At present, the use of wastewater as an inexpensive alternative medium for microalgae growth can reduce the cost of microalgal lipids and achieve coupling with wastewater treatment [20]. ## 5. Microalgae Biofuel Cell Microbial fuel cell (MFC) is a technology that can directly convert the chemical energy of organic matter in wastewater into electrical energy by utilizing the metabolic process of microorganisms. The use of MFC for sewage treatment can not only greatly reduce the cost of sewage treatment but also bring certain economic benefits to the recovered electric energy, which is of great value to the sustainable development of human society [21, 22]. 
The MFC can use half of the organic and inorganic waste materials that cannot be used by fuel cells as fuel, and even use photosynthesis or directly use sewage as fuel. The operating conditions are mild, generally working in a normal temperature, normal pressure, and near-neutral environment, which makes the battery of low maintenance cost, strong safety, no pollution, and zero discharge; the only product of the battery is water. Without energy input, the microorganism itself is an energy conversion factory, which can convert chemical energy that cannot be directly used into electricity for human use (as shown in Figure 7).Figure 7 The working mechanism of MFC.The development of MFC has gone through several important stages. The research on microbial fuel cells can be traced back to the related research published by Potter et al. in 1911: in this study, after inserting platinum electrodes into the bacterial solution of yeast andEscherichia coli, a simple primary battery was successfully prepared and a weak current was obtained, thus verifying the feasibility of using bacteria to generate electric current [23, 24].An ideal air cathode should have good electrical conductivity, corrosion resistance, and high mechanical strength. At the same time, the pore structure inside the cathode should also be able to provide sufficient channels for the transmission of ions and oxygen and the discharge of liquid water, so as to reduce the resistance of material transport and charge transport as much as possible, so that the cathode catalyst can play a maximum role, thereby obtaining the efficient cathode performance. Electrode support materials generally use conductive materials with a certain mechanical strength, which can realize the function of current collection while completing the electrode assembly. Therefore, its electrical conductivity is one of the very important factors for a good electrode support material. Currently commonly used electrode support materials can be mainly divided into two categories: carbon-based materials and metal materials. Carbon-based materials include carbon cloth, carbon paper, etc., and metal materials include stainless steel mesh, nickel mesh, foamed nickel, and copper mesh. At present, the commonly used preparation methods of air cathodes mainly include the spraying method, drop-coating method, hot-pressing method, and rolling method. For the cathode with carbon cloth/carbon paper as the supporting material, PTFE is generally painted on one side as the gas diffusion layer, and the other side is supported with the catalytic layer by spraying, dripping, or hot pressing. For the supporting material of the metal substrate, carbon black and PTFE are generally supported on one side of the electrode as a gas diffusion layer, and a catalyst and a binder are supported on the other side as a catalytic layer by hot pressing or rolling. For example, Logan et al. used carbon cloth as the cathode substrate to form a catalytic layer by brushing a mixture of catalyst and PTFE and drying it naturally; the gas diffusion layer was made by high-temperature treatment after brushing PTFE, which effectively prevented the loss of moisture through the cathode. The stable operation of the MFC is maintained, which provides an idea for the preparation of the air cathode. Dong et al. 
used AC and PTFE as the catalytic layer, and CB and PTFE as the gas diffusion layer, which were pressed on both sides of the stainless steel mesh by roller pressing, and prepared a composite material with rich air cathode at a three-phase interface. Compared with the brushing method, the rolling method has more precise control of the catalyst loading, and the results are more reproducible. For carbonaceous catalyst cathodes, the rolling method can further improve the performance of MFC cathodes by further increasing the catalyst loading. However, the existing cathode preparation methods also have certain limitations. For example, due to the limitation of the cathode preparation method, when the catalyst loading is increased to a certain amount, the catalytic layer may peel off; in addition, the use of the binder will not only increase the cathode preparation cost, but also due to its poor conductivity, it will also increase the ohmic internal resistance of the cathode that leads to a decrease in the cathode performance; in addition, since the binder is a high molecular polymer, the dried binder will cover the catalyst surface, which will reduce the effective ORR active sites of the catalyst and reduce the cathode ORR performance, thereby reducing the output power density.Due to the above problems of binders, some researchers avoid the use of binders in the catalytic layer by in situ formation and growth of catalysts on support materials. For example, Cao et al. [25] painted a gas diffusion layer on one side of carbon cloth, then used the electrodeposition method to in situ grow nickel oxide nanosheets on the carbon cloth to prepare a low-cost binder-free air cathode, and achieved a maximum power density of 645 mW/m−2 in the MFC, which was 12.96% higher than that of commercial Pt/C cathode MFC (571 mW/m−2); Wang et al. [26] reported using the chemical vapor deposition method and growing graphene on a nickel mesh as a catalyst layer yielded a maximum power density 32% higher than that of a commercial Pt/C cathode; and Chen et al. [27] used a water bath method to in situ grow Pd nanocatalysts on stainless steel fiber mats and then used carbon black. Pore filling was performed to obtain a monolithic binder air cathode. Avoiding the use of binders during the preparation of the catalytic layer can greatly improve the electrode conductivity. However, the above method requires the use of expensive equipment and additional preparation of the gas diffusion layer, which must be combined with the support layer, the structure is relatively complex, and the preparation process is relatively cumbersome. In order to solve the above problems, Yang et al. [28] used the method of carbonizing corrugated paper to prepare an integrated binder-free air cathode with low cost and good scalability, and by doping FePc, the ORR catalytic performance of the integrated cathode was further improved, achieving 830 mW/m−2 maximum power density. Furthermore, Yang et al. [29] fabricated an integrated binder-free tubular air cathode with high mechanical strength by directly carbonizing bamboo tubes. The preparation method has the advantages of a simple and convenient preparation process. However, the cathode prepared by a natural bamboo tube is greatly limited by the natural material itself, which cannot be flexibly regulated in terms of cathode size and structure, and the cathode ORR is catalyzed. The performance is not ideal. 
Therefore, it is very necessary to study the preparation of integrated electrodes with controllable size and pore structure and high ORR performance, which is of great significance for the development of MFC air cathodes [30]. ## 6. Conclusions and Outlook Microalgae, as a potential raw material for energy production of biofuels, are favored by governments and scholars all over the world due to their unique advantages. The energy microalgae currently studied are mainly green algae and diatoms, such as Chlorella, Botrytis brauneni, Dunaliella salina, Phaeodactylum tricornutum, and Nannochloropsis. Compared with other bioenergy sources, the use of energy microalgae to produce biological energy has the following obvious advantages: wide variety, fast reproduction, short growth cycle, high-photosynthetic carbon fixation efficiency, and high yield of microalgae oil, which can synthesize a large amount of protein, fat, carbohydrate, and other biologically active substances in cells, with good energy efficiency, ecological, and economic benefits. However, there are many cost and technical problems in the production of biodiesel by microalgae cultivation. First, a large amount of nutrients such as nitrogen, phosphorus, and trace elements need to be added to maintain the normal growth and metabolism of microalgae in the process of microalgae cultivation. Statistics show that microalgae cultivation accounts for 70% of the total production cost. Second, due to the small particle size of microalgae, generally in the micron size, the concentration of microalgae reaching the stable phase after cultivation is not high, and the surface of algal cells is negatively charged and uniformly dispersed in the medium, which makes the recovery process of microalgae impossible, more difficult, and costly. Statistics show that the recovery cost of microalgae accounts for 20%–30% of the total cost of microalgae biomass energy oil production. Therefore, the cost of microalgae cultivation and collection has become the biggest problem in the large-scale development of microalgae biodiesel.Microalgae biomass energy has great research and application prospects, but most of the current research results are based on a laboratory scale, and there are still many key technologies in industrial-scale microalgae cultivation that have not yet been broken through, mainly including cultivation costs such as nutrition and water costs, and low light utilization. Future research should cover as much as possible the selection of microalgae strains, microalgae genetic engineering, microalgae wastewater culture, microalgae photoreactor design, light-energy regulation, microalgae circulatory culture, microalgae separation and recovery, and subsequent oil extraction and purification. In all aspects, we strive to explore and solve various problems existing in microalgae bioenergy, build a microalgae bioeconomic industrial chain, and achieve low-carbon green ecological industrialization.In this review, we mainly focus on the application of microalgae in biomass fuel production, nonfuel, wastewater treatment, and microalgae biofuel cells, which has an excellent performance in the future application and industry practice. --- *Source: 1002952-2022-07-21.xml*
--- ## Abstract In recent years, global warming caused by the greenhouse effect has become one of the greatest threats to mankind. This will have a serious impact on the environment and human body, such as land desertification, increase in ocean acidity, sea level rise, and increase in pests and diseases; affect people’s normal work and rest; and make people feel dizzy and nauseated. Excessive emissions of carbon dioxide (CO2), the main component of the greenhouse gas, have contributed to the continued rise in Earth’s temperature. Although the world is vigorously developing clean energy to reduce carbon emissions, it will not replace fossil fuels in the short term. The conversion of biomass into energy is the most important way of energy utilization. Biomass energy refers to the solar energy stored in biomass in the form of chemical energy and is the fourth major energy source after oil, coal, and natural gas. At present, biofuels have gone through three developmental stages, which can be divided into first-generation biofuels, second-generation biofuels, and third-generation biofuels according to the types of raw materials and development history. The first generation of biofuels produced from food crops, such as bioethanol derived from sucrose and starch, has already entered the energy market. However, because the first generation of biofuels uses food crops as raw materials, there is a phenomenon of “competing with people for food,” and it is difficult to achieve large-scale application. To avoid the problem of food shortages, second-generation biofuels produced from nonfood crops, such as wood fiber, have been developed. Microalgae biomass energy is favored by governments and scholars all over the world because of its unique advantages of fast reproduction speed and high oil content. The cultivation of microalgae does not occupy traditional farmland, and the marginal land such as mountains, oceans, and deserts can cultivate microalgae, or develop microalgae cultivation in the air through the innovation of microalgae photosynthetic reactors. When municipal wastewater, food industry wastewater, and aquaculture wastewater are used as the medium for large-scale cultivation of microalgae, and waste gas from biogas power generation, flue gas from coal power plants, and industrial waste gas from fermentation are used as the CO2 gas source for large-scale cultivation of microalgae, it can be further reduced. The comprehensive production cost of microalgae bioenergy plays a significant emission reduction effect. Combining the above advantages, the use of microalgae to produce first- or second-generation bioenergy has become a new research direction. This study focuses on the review of microalgal biomass in fuel, nonfuel, wastewater treatment, and fuel cell. --- ## Body ## 1. Introduction Energy and environment are two major themes in the process of human development. With the rapid development of human society in recent years, the excessive use of fossil energy has led to serious environmental pollution and ecological damage. The global energy consumption increased by 2.3% compared with the last year in 2018, and CO2 emission only by fossil fuel is 33.1 Gt; also, the mass fraction of CO2 increased from 340μg/g in 1980 to 412 μg/g in 2020 [1]. 
The large amount of CO2 accelerates the greenhouse effect, which also causes many problems like sea level rise, food security crisis, species extinction and biodiversity crisis, and so on [2], as shown in Figure 1.Figure 1 The CO2 emission from fossil fuel.Due to the decrease in fossil energy and the aggravation of environmental pollution caused by the utilization of fossil energy, scientists have focused their attention on the development of new energy and renewable energy. Biomass energy as the only promising and sustainable carbon energy has been researched by the scientists and experts all around the world. There are some efficient methods, which could convert biomass materials like wood, straw, and leaf into high value-added products through the biological method, chemical method, thermal pyrolysis, and so on [3]. At present, the main development and utilization directions of biomass are as follows: (1) energy regeneration: we use the heat generated by biomass combustion to generate electricity or conduct pyrolysis to produce biomass and biological natural gas, etc. (2) Feed: we convert biomass into feed through drying, crushing, fermentation, pelletizing, and other processes. (3) Fertilizer: we use harmless pretreatment and technical processing of biomass to produce biomass solid molding fertilizer. (4) Composite materials: biomass can be used to make carbon materials, battery-based thin-film materials, and construction and packaging materials, etc. In particular, the research on the preparation of biomass oil and carbon materials will maximize the application value of biomass materials, as shown in Figure 2.Figure 2 The installed capacity of biomass.However, traditional terrestrial biomass materials are affected by many factors (such as production scale and geographical location); the cost of large-scale recycling is relatively high; and algal biomass is the most ideal biomass for energy regeneration and CO2 emission reduction because of its variety, fast reproduction speed, and not limited to geographical location, and does not occupy land resources[4]. Chlorella and Spirulina are the two most common types of microalgae in the ocean. They have low culture cost and fast reproduction, which have been widely used in medicine, food, and wastewater treatment field [5]. Microalgae have the fastest photosynthesis rates and are high in lipids, thus attracting increasing attention in the field of biofuels (as shown in Figure 3). Microalgae have high carbon-fixed efficiency to convert it into biomass energy with carbon balance, and the carbon sequestration efficiency of microalgae in closed reactors is listed in Table 1. Chlorella and Spirulina have already been used as raw materials for biomass oil production, which will greatly ease the pressure on crops to produce biomass oil. Microalgae have high photosynthesis efficiency, up to 50 g/(m2d), which is equivalent to 10–50 times the carbon sequestration capacity of deep forests. Instead of occupying crop paddy fields and arable land, microalgae can be cultivated in saline-alkali land, domestic sewage areas, and tidal flats. It is recognized as the third-generation biomass energy material [6, 7]. The main microalgae species and compositions are listed in Table 2.Figure 3 The amount of electricity from biomass.Table 1 Carbon sequestration efficiency of microalgae in closed reactors. 
| Species | Reactor | CO2 volume fraction (%) | CO2 fixed rate (g·L−1·d−1) | CO2 fixed efficiency (%) |
| --- | --- | --- | --- | --- |
| Chlorella fusca | Tube | 10.00 | 0.26 | 63.4 |
| Chlorella pyrenoidosa | Tube | 10.00 | 0.25 | 95.1 |
| Chlorella vulgaris | Tube | 10.00 | 0.12 | 95.3 |
| Scenedesmus dimorphus | Column | 2.00 | 0.80 | 63.4 |
| Scenedesmus sp. | Column | 0.25 | 0.16 | 33.0 |
| Spirulina platensis | Film | 15.00 | 1.44 | 85.0 |
| Spirulina sp. | Column | 10.00 | 0.18 | 21.8 |

Table 2 The species and compositions of microalgae.

| Species | Protein | Polysaccharide | Lipid | C | H | O | Water | Volatile | FC | Ash |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Chlorella | 42.7 | 9.4 | 22.5 | 44.93 | 6.42 | 40.6 | 4.13 | 69.45 | 16.22 | 10.2 |
| Microcystis | 59.93 | 20.19 | 5.22 | 42.26 | 6.27 | 43.07 | 9.59 | 70.13 | 14.14 | 6.14 |
| Nannochloropsis | 44 | 21 | 30 | 49.07 | 7.59 | 35.63 | 5 | 79.69 | 10.64 | 5.03 |
| Scenedesmus | 36.4 | 29.3 | 19.5 | 50 | 7.11 | 30.7 | 4.59 | 75.33 | 12.78 | 7.3 |
| Spirulina | 48.36 | 30.21 | 13.3 | 46.16 | 7.14 | 35.44 | 4.54 | 79.14 | 15.24 | 6.56 |

(Protein, polysaccharide, and lipid are the composition analysis; C, H, and O the element analysis; and water, volatile matter, fixed carbon (FC), and ash the industrial analysis.)

We reviewed the status of the main utilization routes of microalgal biomass, mainly focusing on the preparation of bio-oil and carbon adsorption materials, which have broad prospects and great application value in sustainable energy development and in carbon dioxide capture and utilization, respectively.

## 2. Microalgae Biomass Fuel Product

Microalgae can produce bio-oil, bioethanol, biomethane, and biohydrogen [8]. Compared with macroalgae, with an oil yield of 30% (70,000 L/ha/y), the oil yield of microalgae can reach 70% (70,000 L/ha/y) owing to their rich lipid content, which is highly competitive and attractive. Luangpipat and Chisti [9] found that the lipid productivity of microalgae in nutrient-sufficient seawater exceeded 37 mg·L−1·d−1 and was nearly 2-fold greater than in freshwater, which magnifies the advantage of microalgae for bio-oil production because seawater is cheap and abundant, whereas freshwater is in global shortage. Hydrothermal liquefaction (HTL) of wet biomass such as microalgae is one of the most promising methods to produce renewable and sustainable energy as an alternative to fossil fuels, and it significantly reduces the cost of drying and heating. The schematic process of HTL for microalgae is shown in Figure 4. Hu et al. [10] used the aqueous phase obtained from the catalytic/noncatalytic hydrothermal liquefaction of Chlorella as the reaction medium for cyclic liquefaction. Without recycling, the bio-oil yield under Na2CO3 catalysis was lower than the yield in pure water; however, the bio-oil yield increased by 32.6 wt.% after recycling of the aqueous phase, without sacrificing oil quality. Leng et al. [11] used the liquefied aqueous solution as nutrients (C, N, and P) for microalgae cultivation, which made efficient production of liquefied biofuels and cultivation of algal biomass possible, enabling the microalgal hydrothermal liquefaction system to realize closed-loop biofuel production. Shen et al. [12] developed MgAl-layered double hydroxides/oxides (MgAl-LDH/LDO) with tunable acidic and basic properties for catalyzing the HTL of microalgae to obtain bio-oil with a low oxygen content (as shown in Figure 4). Dandamudi et al. [13] conducted HTL treatment of Nannochloropsis sp. and obtained a bio-oil yield of 43 wt% at 350°C; in most cases, the oil yield improves with increasing temperature and reaches its maximum in the range between 280 and 350°C, which is attributed to the hydrolysis of microalgae. Arun et al. [14] prepared biomass oil by hydrothermal liquefaction (HTL) of Chlorella and used the biochar produced during the HTL process to remove pollutants (COD, NO3, NH3, and PO4) from wastewater.
The study found that the oil production rate of Chlorella was 29.37%; the biochar produced in the process can effectively remove pollutants from wastewater, with a removal rate of about 55%. The elemental compositions of biocrude oil and solid residue from microalgae are listed in Table 3. The higher heating value (HHV) is estimated from the elemental composition as

(1) $\mathrm{HHV} = 0.355\,\mathrm{C} + 1.423\,\mathrm{H} - 0.154\,\mathrm{O} - 0.145\,\mathrm{N}$.

Figure 4 The schematic process of HTL for microalgae.

Table 3 The elemental compositions of biocrude oil and solid residue.

| Samples | C | H | O | HHV (MJ/kg) | H/C | N/C | O/C |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 300°C, 5%, 2 h | 61.25 | 6.10 | 18.71 | 26.03 | 1.20 | 0.0457 | 0.23 |
| 350°C, 0%, 2 h | 72.50 | 5.61 | 11.72 | 30.12 | 0.93 | 0.0464 | 0.12 |
| 350°C, 5%, 2 h | 61.72 | 5.42 | 17.43 | 25.44 | 1.05 | 0.0436 | 0.21 |

(C, H, and O are the element analysis columns; H/C, N/C, and O/C are the ratio columns.)

Transesterification is a widely used method for converting oil into biodiesel; it converts the originally viscous microalgal oil (triglycerides or free fatty acids) into fatty acid alkyl esters with smaller molecular weight. The transesterification process is mainly affected by the reaction conditions, the molar ratio of ethanol to oil, the catalyst type, the reaction time and temperature, and the purification of the reactants, while alkali catalysts are easily affected by free fatty acids. The growing aviation industry consumes more than 5 million barrels of aviation fuel every day and releases a large amount of carbon dioxide, nitrogen oxides, carbon monoxide, sulfur oxides, and other environmental pollutants. Therefore, biojet fuel, which can reduce greenhouse gas emissions by 80%, has attracted much attention. Green biojet fuel products blending microalgal biodiesel with conventional fuel have already been produced and put into production. Microalgal oil is hydrotreated (fatty acid and ester hydrotreating) into aviation fuel, and the whole process follows the ASTM D7566 standard. Another route for converting microalgal oil into aviation fuel is the Fischer-Tropsch synthesis process, which can produce high-quality fuels from natural gas, coal, biomass, etc.: microalgal biomass is first converted by gasification into gaseous components such as carbon monoxide and hydrogen, which are then converted into liquid hydrocarbon fuels. The reaction path of microalgae through the HTL process is shown in Figure 5.

Figure 5 The reaction path of microalgae through HTL process.

The yields of the HTL process are defined as

(2) $\text{Biocrude yield} = \dfrac{\text{weight of biocrude}}{\text{weight of dry biomass}} \times 100\%$, $\quad\text{Solid residue yield} = \dfrac{\text{weight of solid residue}}{\text{weight of dry biomass}} \times 100\%$, $\quad\text{Other yield} = 100\% - \text{Biocrude yield} - \text{Solid residue yield}$, $\quad\text{Solid conversion yield} = 100\% - \text{Solid residue yield}$, $\quad\text{Energy recovery rate} = \dfrac{\mathrm{HHV}_{\text{biocrude}} \times \text{Biocrude yield}}{\mathrm{HHV}_{\text{feedstock}}}$.

Compared with other plants, algae contain more soluble polysaccharides and can be used to produce bioethanol. Microalgae are easy to crush and dry because they lack differentiated roots, stems, and leaves, so the pretreatment cost is low. The cellulose contained in microalgal cells is different from that of terrestrial plants: its hydrogen bonds are weaker and more easily degraded. Relatively simple processing establishes the advantage of microalgae as a feedstock for fuel ethanol production. Dilute acid or enzymatic pretreatment can be used in the saccharification process. After ultrasonic treatment of microalgae, acid hydrolysis and enzymatic hydrolysis of starch were used, respectively, and Saccharomyces cerevisiae was further used to produce ethanol. Acid hydrolysis and enzymatic hydrolysis are the two main ways to hydrolyze polysaccharides for subsequent fermentation. Dilute acid pretreatment is a commonly used method.
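As a small worked illustration of formula (1) and the yield definitions in formula (2), the Python sketch below computes HHV from an elemental analysis and the HTL yield metrics. The biocrude composition is taken from the 350°C/0%/2 h row of Table 3 (N estimated from the atomic N/C ratio), while the batch masses and the feedstock HHV are assumed values used only for illustration; this is not code from the cited studies.

```python
# Minimal sketch of the HHV correlation (formula (1)) and the HTL yield
# definitions (formula (2)). Sample composition: 350 degC / 0% / 2 h biocrude
# from Table 3; batch masses and feedstock HHV are assumed, illustrative values.

def hhv(c: float, h: float, o: float, n: float) -> float:
    """Higher heating value (MJ/kg) from elemental mass fractions in wt%."""
    return 0.355 * c + 1.423 * h - 0.154 * o - 0.145 * n

def htl_yields(biocrude_g: float, residue_g: float, dry_biomass_g: float) -> dict:
    """Biocrude, solid residue, other, and solid conversion yields (%)."""
    biocrude = 100.0 * biocrude_g / dry_biomass_g
    residue = 100.0 * residue_g / dry_biomass_g
    return {
        "biocrude_yield": biocrude,
        "solid_residue_yield": residue,
        "other_yield": 100.0 - biocrude - residue,
        "solid_conversion_yield": 100.0 - residue,
    }

def energy_recovery(hhv_biocrude: float, biocrude_yield_pct: float,
                    hhv_feedstock: float) -> float:
    """Energy recovery rate = HHV_biocrude x biocrude yield / HHV_feedstock."""
    return hhv_biocrude * (biocrude_yield_pct / 100.0) / hhv_feedstock

if __name__ == "__main__":
    n_wt = 0.0464 * (14.0 / 12.0) * 72.50          # approx. wt% N from the atomic N/C ratio
    print(f"HHV ~ {hhv(72.50, 5.61, 11.72, n_wt):.1f} MJ/kg")  # ~31.3, same range as the 30.12 in Table 3
    y = htl_yields(biocrude_g=3.0, residue_g=1.5, dry_biomass_g=10.0)  # assumed masses
    print(y)
    print(f"Energy recovery ~ {energy_recovery(30.12, y['biocrude_yield'], 20.0):.2f}")  # assumed feedstock HHV
```

The small discrepancy between the correlation and the tabulated HHV is expected, since formula (1) is an empirical estimate based only on the elemental analysis.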
For different algal species, the acid type, concentration, reaction temperature, and time are adjusted to achieve a better pretreatment effect. After pretreatment of the microalgal biomass, ethanol can be obtained by microbial fermentation, for example with yeast. Different fermentation strains are used according to differences in the biomass composition of each microalga. Saccharomyces cerevisiae is currently the most widely used, and Z. mobilis has also been extensively studied in recent decades. Fermentation methods are mainly divided into the separate hydrolysis and fermentation (SHF) method and the simultaneous saccharification and fermentation (SSF) method. Comparing the two, the SHF method gives a higher ethanol yield, whereas the SSF method takes less time while still achieving a relatively high fuel ethanol yield. The SSF method can be run as a continuous or semicontinuous process, and the semicontinuous SSF method uses less enzyme.

## 3. Microalgae Biomass Nonfuel Application

Algal carbohydrates are synthesized through the fixation of carbon dioxide by microalgae during photosynthesis. Microalgae mainly use adenosine triphosphate (ATP) and nicotinamide adenine dinucleotide phosphate (NADPH) to absorb and fix carbon dioxide from the air and synthesize glucose and other sugars through the Calvin cycle metabolic pathway. These carbohydrates accumulate in plastids as reserve substances (such as starch) or as the main components of cell walls. Studies have shown that there is a direct competitive relationship between lipid and starch synthesis, because the main precursor of triglyceride synthesis, glycerol-3-phosphate (G3P), is synthesized through the catabolism of glucose. Therefore, carbohydrate accumulation in microalgal biomass can be increased by enhancing glucan storage and reducing starch degradation; cultivation techniques for increasing the carbohydrate content of microalgae include irradiance control, nitrogen depletion, temperature variation, pH adjustment, and additional supply of carbon dioxide. The carbohydrates of microalgae are mainly composed of starch, glucose, cellulose/hemicellulose, and various polysaccharides, and the starch and glucose can be converted into biofuels such as bioethanol and biohydrogen. The polysaccharides contained in microalgae are mainly galactans, including carrageenan and agar, and carrageenan can be stably extracted from red seaweed. At present, microalgal polysaccharides can be used in food, cosmetics, textiles, stabilizers, emulsifiers, lubricants, thickeners, and clinical drugs. Microalgal sulfated polysaccharides exhibit a wide range of pharmacological activities, such as antioxidant, antitumor, anticoagulant, anti-inflammatory, antiviral, and immunomodulatory effects. Sulfated polysaccharides can be extracted from Porphyra and are applied to the skin for anti-inflammatory treatment owing to their ability to inhibit the adhesion and migration of polymorphonuclear leukocytes.

Microalgae are rich in many pigments related to light exposure; in addition to chlorophyll, these include phycobiliproteins, which contribute to light-energy utilization, and light-protective substances such as carotenoids. Another important microalgal pigment is astaxanthin, which has a powerful antioxidant effect. Astaxanthin can help prevent and treat chronic inflammatory diseases, cancer, neurological diseases, liver diseases, metabolic syndrome, diabetes, diabetic nephropathy, gastrointestinal diseases, etc.
Haematococcus pluvialis has been found to have a high natural content of astaxanthin (1.5–3% of dry matter), and it is currently the main commercialized natural source of astaxanthin. Other microalgal pigments such as lutein, zeaxanthin, and canthaxanthin are used for chicken skin coloring and pharmaceutical purposes. In addition, algal protein, phycocyanin, and phycoerythrin are used in the food and cosmetic industries; carotene is used as a precursor of vitamin A in health food; and many microalgal pigments are also used as natural coloring agents in foods or beverages.

Protein is an important human nutrient, and lack of protein is one of the main causes of nutritional deficiencies. Some microalgae contain up to 60% protein, and microalgal protein can be used as animal or fish feed, chemical fertilizer, industrial enzymes, bioplastics, and surfactants. At present, the most widely cultivated protein-rich microalga is Spirulina, which belongs to the cyanobacteria; it not only contains about 60% crude protein but also vitamins, minerals, and other biologically active factors. The cell wall of Spirulina is composed of polysaccharides; its digestibility can reach 86%; and it is easily absorbed and utilized by the human body. Spirulina can be processed into tablets, flakes, and powders as a human dietary supplement and can also be used as a feed additive in the aquaculture, aquarium, and poultry industries. In addition, Anabaena, Chlorella, Dunaliella, Euglena, etc. are also high in protein, and the blue-green microalga Anabaena has been found to be a good source of protein.

High value-added biomaterials and bioproducts from microalgae have also been commercialized. The microalgae Arthrospira and Chlorella have been used in large quantities in the skin care market, and some cosmetic companies have carried out research on their own microalgal product systems, from which antiaging, regenerating, emollient, anti-irritant, sunscreen, scalp-care, and other cosmetic products can be derived. The most important pharmaceutical ingredient in Chlorella is 1,3-glucan, which is an active immune stimulant, a free radical scavenger, and a blood lipid reducer and can be effective against gastric ulcers, trauma, and constipation. Microalgal biomass is also an effective source of essential vitamins such as A, B1, B2, B6, B12, C, and E, as well as biotin, niacin, folic acid, and pantothenic acid. Carrageenan from microalgae can be widely used as an emulsifier and stabilizer in foods such as chocolate milk, ice cream, evaporated milk, pudding, jelly, jam, and salad dressing. Owing to its antitumor, antiviral, and anticoagulant properties, carrageenan also has potential pharmaceutical functions.

## 4. Microalgae Biomass Wastewater Treatment

In recent years, population growth and the rapid development of urbanization and industrialization have resulted in an increasing shortage of water resources and serious pollution. Urban life, industrial production, and agricultural activities produce wastewater with excess organic carbon, nitrogen, phosphorus, and metal elements. After discharge, such wastewater causes eutrophication of the water environment, damages soil structure, and harms aquatic organisms and human health.
The discharge and treatment of wastewater have therefore always attracted much attention (as shown in Figure 6).

Figure 6 The work path of microalgae in wastewater treatment.

As photosynthetic microorganisms, microalgae, like plants, have chloroplasts and can provide energy for growth and metabolism through photosynthesis. Microalgae exist widely in various water environments such as freshwater, seawater, and wastewater from different sources. They can use nutrients such as nitrogen and phosphorus in wastewater for their own growth while removing pollutants such as chemical oxygen demand (COD), ammonia nitrogen (NH3-N), total nitrogen (TN), and total phosphorus (TP) from the wastewater with a high removal rate [15]. Combining microalgae cultivation with wastewater treatment is an economical and environmentally friendly approach that improves microalgal oil production, simplifies wastewater treatment processes, reduces both microalgal biomass production costs and wastewater treatment costs, removes pollutants, and offers many advantages in CO2 capture, fixation, and utilization [16, 17]. Some algal species can fix nitrogen and phosphorus. Using organic and inorganic nitrogen compounds, microalgal cells can synthesize amino acids and proteins, and as the growth cycle of microalgal cells progresses, the absorption efficiency of nitrogen and phosphorus is gradually enhanced [18]. Microalgal biomass obtained after wastewater treatment can be used to produce high value-added products such as carbohydrates, pigments, and proteins [19]. Compared with traditional oil plants, microalgae grow faster and have higher oil content, generally 20%–70% (dry weight); microalgal oil is mostly neutral lipid suitable for biodiesel production; and after biorefining, it can be converted into biodiesel with the advantages of cleanness, environmental protection, carbon neutrality, etc. Oil derived from microalgal biomass is a third-generation biodiesel source with good development prospects. However, compared with traditional raw materials for biodiesel production, the high production cost of microalgal oil is a major bottleneck for its industrialization. At present, the use of wastewater as an inexpensive alternative medium for microalgae growth can reduce the cost of microalgal lipids and achieve coupling with wastewater treatment [20].

## 5. Microalgae Biofuel Cell

The microbial fuel cell (MFC) is a technology that can directly convert the chemical energy of organic matter in wastewater into electrical energy by exploiting the metabolic processes of microorganisms. Using MFCs for sewage treatment can not only greatly reduce the cost of sewage treatment but also bring certain economic benefits from the recovered electric energy, which is of great value to the sustainable development of human society [21, 22]. The MFC can use as fuel a wide range of organic and inorganic waste materials that conventional fuel cells cannot use, and it can even exploit photosynthesis or directly use sewage as fuel. The operating conditions are mild, generally at normal temperature, normal pressure, and near-neutral pH, which gives the cell low maintenance cost, good safety, no pollution, and zero discharge; the only product of the cathode reaction is water.
Without external energy input, the microorganisms themselves act as an energy conversion factory, converting chemical energy that cannot be used directly into electricity for human use (as shown in Figure 7).

Figure 7 The working mechanism of MFC.

The development of MFCs has gone through several important stages. Research on microbial fuel cells can be traced back to the work published by Potter et al. in 1911: after inserting platinum electrodes into cultures of yeast and Escherichia coli, a simple primary battery was successfully prepared and a weak current was obtained, verifying the feasibility of using bacteria to generate electric current [23, 24].

An ideal air cathode should have good electrical conductivity, corrosion resistance, and high mechanical strength. At the same time, the pore structure inside the cathode should provide sufficient channels for the transport of ions and oxygen and the discharge of liquid water, so as to minimize the resistance of mass transport and charge transport and allow the cathode catalyst to play its maximum role, thereby obtaining efficient cathode performance. Electrode support materials are generally conductive materials with a certain mechanical strength, which collect current while holding the electrode assembly together; electrical conductivity is therefore one of the most important factors for a good electrode support material. Commonly used electrode support materials can be divided into two categories: carbon-based materials and metal materials. Carbon-based materials include carbon cloth, carbon paper, etc., and metal materials include stainless steel mesh, nickel mesh, nickel foam, and copper mesh. At present, the commonly used preparation methods for air cathodes mainly include the spraying method, drop-coating method, hot-pressing method, and rolling method. For cathodes with carbon cloth or carbon paper as the supporting material, PTFE is generally painted on one side as the gas diffusion layer, and the catalytic layer is loaded on the other side by spraying, dripping, or hot pressing. For metal-substrate supporting materials, carbon black and PTFE are generally loaded on one side of the electrode as the gas diffusion layer, and a catalyst and a binder are loaded on the other side as the catalytic layer by hot pressing or rolling. For example, Logan et al. used carbon cloth as the cathode substrate and formed a catalytic layer by brushing on a mixture of catalyst and PTFE and drying it naturally; the gas diffusion layer was made by high-temperature treatment after brushing on PTFE, which effectively prevented the loss of moisture through the cathode, maintained stable operation of the MFC, and provided an approach for air cathode preparation. Dong et al. used activated carbon (AC) and PTFE as the catalytic layer and carbon black (CB) and PTFE as the gas diffusion layer, pressed onto the two sides of a stainless steel mesh by rolling, and prepared a composite air cathode rich in three-phase interfaces. Compared with the brushing method, the rolling method gives more precise control of the catalyst loading, and the results are more reproducible. For carbonaceous catalyst cathodes, the rolling method can further improve the performance of MFC cathodes by further increasing the catalyst loading. However, the existing cathode preparation methods also have certain limitations.
For example, due to the limitations of the cathode preparation method, when the catalyst loading is increased beyond a certain amount, the catalytic layer may peel off. In addition, the use of a binder not only increases the cathode preparation cost but also, because of its poor conductivity, increases the ohmic internal resistance of the cathode and thus degrades cathode performance. Moreover, since the binder is a high-molecular-weight polymer, the dried binder covers the catalyst surface, which reduces the effective ORR active sites of the catalyst, lowers the cathode ORR performance, and thereby reduces the output power density.

Because of these problems with binders, some researchers avoid using binders in the catalytic layer by forming and growing catalysts in situ on the support material. For example, Cao et al. [25] painted a gas diffusion layer on one side of carbon cloth and then used electrodeposition to grow nickel oxide nanosheets in situ on the carbon cloth to prepare a low-cost binder-free air cathode, achieving a maximum power density of 645 mW/m2 in the MFC, which was 12.96% higher than that of a commercial Pt/C cathode MFC (571 mW/m2). Wang et al. [26] reported that growing graphene on a nickel mesh by chemical vapor deposition as the catalyst layer yielded a maximum power density 32% higher than that of a commercial Pt/C cathode. Chen et al. [27] used a water bath method to grow Pd nanocatalysts in situ on stainless steel fiber mats and then used carbon black for pore filling to obtain a monolithic binder-free air cathode. Avoiding the use of binders during preparation of the catalytic layer can greatly improve the electrode conductivity. However, the above methods require expensive equipment and additional preparation of a gas diffusion layer that must be combined with the support layer; the structure is relatively complex, and the preparation process is relatively cumbersome. To solve these problems, Yang et al. [28] used carbonized corrugated paper to prepare an integrated binder-free air cathode with low cost and good scalability, and by doping with FePc, the ORR catalytic performance of the integrated cathode was further improved, achieving a maximum power density of 830 mW/m2. Furthermore, Yang et al. [29] fabricated an integrated binder-free tubular air cathode with high mechanical strength by directly carbonizing bamboo tubes. This preparation method has the advantage of being simple and convenient. However, a cathode prepared from a natural bamboo tube is greatly limited by the material itself: the cathode size and structure cannot be flexibly adjusted, and the ORR catalytic performance of the cathode is not ideal. Therefore, it is very necessary to study the preparation of integrated electrodes with controllable size, pore structure, and high ORR performance, which is of great significance for the development of MFC air cathodes [30].

## 6. Conclusions and Outlook

Microalgae, as a potential raw material for biofuel production, are favored by governments and scholars all over the world due to their unique advantages. The energy microalgae currently studied are mainly green algae and diatoms, such as Chlorella, Botryococcus braunii, Dunaliella salina, Phaeodactylum tricornutum, and Nannochloropsis.
Compared with other bioenergy sources, using energy microalgae to produce bioenergy has the following obvious advantages: a wide variety of species, fast reproduction, short growth cycle, high photosynthetic carbon fixation efficiency, and high microalgal oil yield; microalgae can synthesize large amounts of protein, lipid, carbohydrate, and other biologically active substances in their cells, with good energy, ecological, and economic benefits. However, there are many cost and technical problems in producing biodiesel by microalgae cultivation. First, a large amount of nutrients such as nitrogen, phosphorus, and trace elements must be added to maintain the normal growth and metabolism of microalgae during cultivation. Statistics show that microalgae cultivation accounts for 70% of the total production cost. Second, because of the small particle size of microalgae (generally on the micron scale), the concentration of microalgae reaching the stationary phase after cultivation is not high, and the surface of algal cells is negatively charged and uniformly dispersed in the medium, which makes the recovery of microalgae more difficult and costly. Statistics show that the recovery cost of microalgae accounts for 20%–30% of the total cost of microalgal biomass energy oil production. Therefore, the cost of microalgae cultivation and collection has become the biggest obstacle to the large-scale development of microalgal biodiesel.

Microalgal biomass energy has great research and application prospects, but most current research results are based on laboratory-scale work, and many key technologies in industrial-scale microalgae cultivation have not yet been mastered, mainly concerning cultivation costs (such as nutrient and water costs) and low light utilization. Future research should cover, as far as possible, the selection of microalgal strains, microalgal genetic engineering, microalgal wastewater culture, photobioreactor design, light-energy regulation, microalgal circulating culture, microalgal separation and recovery, and subsequent oil extraction and purification, striving to explore and solve the various problems existing in microalgal bioenergy, build a microalgal bioeconomic industrial chain, and achieve low-carbon green ecological industrialization.

In this review, we mainly focused on the applications of microalgae in biomass fuel production, nonfuel products, wastewater treatment, and microalgal biofuel cells, which show excellent prospects for future application and industrial practice.

---

*Source: 1002952-2022-07-21.xml*
2022
# Analysis of 22-year Drought Characteristics in Heilongjiang Province Based on Temperature Vegetation Drought Index

**Authors:** Li Wu; Youzhi Zhang; Limin Wang; Wenhuan Xie; Lijuan Song; Haifeng Zhang; Hongwen Bi; Yanyan Zheng; Yu Zhang; Xiaofei Zhang; Yan Li; Zhiqun Lv
**Journal:** Computational Intelligence and Neuroscience (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1003243

---

## Abstract

Heilongjiang Province is the main grain-producing region in China and an important part of the Northeast China Plain, which is one of the three black soil belts in the world. The cultivated area of black soil accounts for 50.6% of the black soil region in Northeast China. The obvious rise in temperature and the uneven distribution of precipitation since the 20th century are considered important causes of agricultural drought and aridification. Under the background of climate change, understanding the multiyear changes and occurrence characteristics of cultivated land drought in different agricultural regions of Heilongjiang Province is of great significance for establishing an agricultural drought prediction and early warning system, guiding irrigation of high-standard farmland in different regions, promoting black soil protection, and thus improving grain yield. This paper calculates the temperature vegetation drought index (TVDI) based on the normalized difference vegetation index (NDVI) and surface temperature (Ts) product data of MODIS from 2000 to 2021. Taking TVDI as the drought evaluation index, this paper studies the temporal and spatial distribution characteristics and occurrence frequency of drought in the whole province and in four agricultural regions of Heilongjiang Province: the Daxing'an and Xiaoxing'an Mountains (region I), Sanjiang Plain (region II), Zhangguangcai Mountains (region III), and Songnen Plain (region IV). The results show that medium drought generally occurred in Heilongjiang Province from 2000 to 2021, accounting for about 70% of the total cultivated land. The drought was severe from 2000 to 2009 and weakened from 2010 to 2021. In the 110 months of the crop growing seasons from 2000 to 2021, about 63.84% of the region suffered more than 60 droughts. The frequency of drought varies from region to region: more than 80 droughts occurred in the west of region IV and the middle of region II. Region IV is characterized by strong sandstorms, low precipitation, and a lack of water conservancy facilities, resulting in frequent and severe drought. It is also found that the occurrence frequency, severity grade, and regional distribution of drought are closely related to seasonal changes. In spring, the grade and frequency of drought in region IV are the highest and the drought situation is serious. In autumn, drought is frequent and distributed in all regions, but the grade is not strong (mainly medium drought), so the drought situation is moderate. Summer is humid. Cropping in Heilongjiang Province is one harvest per year, so spring drought seriously reduces the water content of crops, and long-term drought leads to poor crop development and reduced yield. Therefore, only by clarifying regional and temporal drought characteristics, accurately monitoring drought events, and accurately predicting the occurrence of drought can we guide precision irrigation of high-standard farmland, improve crop yield, and ensure national food security.
At the same time, severe drought affects terrestrial ecosystems, altering the distribution of crops and microorganisms and the transformation between carbon sinks and carbon sources.

---

## Body

## 1. Introduction

Heilongjiang Province is the main grain-producing region in China and an important part of the Northeast China Plain, which is one of the three black soil belts in the world. It is an important commodity grain base in China, known as "Beidacang". The western part of the black soil region is located in the dual transition zone of climate and ecology; it is a climate-change-sensitive region with serious desertification [1]. Heilongjiang Province has become one of the main warming regions in China since 1980, and the average temperature has increased by 1.4°C over the past 100 years (1905–2001). Drought has become one of the major natural disasters in the province. If the arid area and drought intensity cannot be accurately determined or effective countermeasures cannot be taken, farmers' agricultural income will suffer serious losses by 2030 [2]. Drought is affected by many factors, such as climate warming, soil erosion, land use/land cover change, and human activities. In recent years, black soil degradation and protection, drought monitoring, and early warning have always been the focus of research. The eighth-phase strategic plan (2014–2021) of the International Hydrological Programme (IHP) takes changes in water-related disasters under global climate change and intense human activities as one of its key research issues and coordinates drought research through the International Drought Initiative (IDI) to improve the ability to cope with drought [3].

According to statistics, from 2000 to 2019 the drought-affected crop area in Heilongjiang Province reached 2.1374 million hectares every year [4]. In the past 20 years, the drought-affected area accounted for 63% of the total area affected by meteorological disasters, ranking first in the province. Under the background of global climate change, extreme weather and drought events occur from time to time, such as the severe drought in the summer of 2000 [5], in which the drought-affected area of the whole province reached 5.016 million hectares and the drought lasted 30 to 40 days; in 2002, the temperature in the whole province was generally high, precipitation in the northwest was low, and the drought situation was severe [6]. The focus of this paper is to monitor and analyze the temporal and spatial variation characteristics and frequency of drought using remote sensing indicators. Previous studies mainly used meteorological drought indicators and historical statistical data to monitor drought; such monitoring mainly analyzed point results at meteorological stations or converted point results into areal results through analysis methods, so the accuracy of the results was limited by the number of stations and the analysis methods. Studies using remote sensing indicators to monitor the spatial, temporal, and frequency characteristics of drought in Heilongjiang Province have been rare. Previous research took Heilongjiang Province as a whole without subdividing the province, and research on the temporal, spatial, and frequency characteristics of drought in the different agricultural regions of Heilongjiang Province has also been rare. Most research results [7–9] did not clearly define the occurrence, development, and end periods of drought.
It is very important to monitor, predict, and accurately evaluate the occurrence and development of drought in the whole province and in its different agricultural regions, and to carry out precision irrigation according to the level of drought in different regions at different times. Drought is a common natural disaster and is gradual; its onset and end are slow and difficult to detect. For drought-prone areas, accurate quantitative expression of drought is very important for effective drought management. Remote sensing drought monitoring indicators can be divided into two categories: single indexes, based on a single vegetation index or a single temperature index, and double indexes, which consider both temperature and vegetation. Commonly used single indexes include the anomaly vegetation index (AVI), the temperature condition index (TCI), and the normalized difference vegetation index (NDVI). However, research shows [10] that the factors affecting drought are complex, and using a single index to monitor drought is limited. Double indexes include the temperature vegetation drought index (TVDI), the vegetation temperature condition index (VTCI), and the vegetation water supply index (VSWI); because both the vegetation index and temperature are considered, drought can be retrieved effectively. Yu Min [11] used TVDI to monitor the summer drought in Heilongjiang Province in 2007, showing that this method can monitor drought in real time and truly reflect the dynamic process of local drought occurrence and development. Zhong Wei [12] analyzed the characteristics of soil moisture in Lhasa based on TVDI and found that TVDI has a good negative correlation with the measured surface soil moisture; using TVDI, the arid areas were found to be mainly distributed in the north, southwest, and parts of central and southern China.

This paper first compares the remote sensing drought index TVDI with the mature meteorological drought index SPEI (standardized precipitation evapotranspiration index) to clarify the advantages of the TVDI method. Second, this paper monitors the drought conditions over the 110 months of the crop growing seasons in Heilongjiang Province during 22 years (May to September of 2000–2021) and analyzes the spatial distribution characteristics, occurrence frequency, and future trend of drought in Heilongjiang. This study helps to accurately grasp the multiyear changes and occurrence characteristics of drought in the whole province and its agricultural subregions. It can not only support an effective drought monitoring [13–15] and early warning system to guide accurate and effective irrigation of high-standard farmland but also explore the response mechanism of the ecological environment and surface crops to climate change and provide a reference for agricultural producers and managers [16].

## 2. Data and Methods

### 2.1. Overview of the Study Region

As the largest commercial grain production base in China, Heilongjiang Province is facing severe challenges in regional drought prevention and disaster reduction due to the impact of climate warming and uneven rainfall distribution. Heilongjiang Province (121°11′–135°05′E, 43°25′–53°33′N) is located in Northeast China and is the province with the highest latitude in China.
The terrain is characterized by two mountain ranges in the north and south and two plains in the east and west (Figure 1). Cropping in the study region is one harvest per year, the whole growth period is from May to September, and the annual average temperature in the region is −6°C to 4°C. According to the temperature index, the province can be divided from south to north into a middle temperate zone and a cold temperate zone. The annual precipitation is 350 to 650 mm; from east to west, the province can be divided into humid, semihumid, and semiarid regions according to the dryness index [17]. Heilongjiang Province is divided into four characteristic agricultural regions [18] (Figure 1): the Daxing'an and Xiaoxing'an Mountains (region I), Sanjiang Plain (region II), Zhangguangcai Mountains (region III), and Songnen Plain (region IV). Region I is mainly forestry with a small amount of agriculture, mainly wheat, soybean, and potato. Regions II and IV are mainly agricultural, mainly planting corn, rice, soybean, wheat, potato, and so on. Region III is dominated by agriculture and forestry, and its agriculture mainly plants rice, corn, soybean, and so on.

Figure 1 Scope of study region and partition.

### 2.2. Data Processing

#### 2.2.1. MODIS/Terra NDVI and Ts Products

Remote sensing MODIS products (2640 scenes) covering the whole study region were used: vegetation index (MOD13A2) and surface temperature (MOD11A2) data. MODIS sensors transit twice a day, which allows full coverage of the research region. The data come from the Computer Network Information Center of the Chinese Academy of Sciences (https://www.nsdata.cn/). The collection time of the MOD11A2 product (global 1 km 8-day surface temperature/emissivity data) is from day 121 to day 273 of every year (2000–2021), and the collection time of the MOD13A2 product (global 1 km 16-day vegetation index data) is the same. The maximum value method is used to unify the temporal resolution of MOD13A2 and MOD11A2 to 16 days.

#### 2.2.2. Acquisition of Cultivated Land Information

The MODIS Collection 5.1 land use/land cover product data set (MCD12Q1), produced by Boston University in the United States, is used for extracting cultivated land information; it is obtained from the NASA website (https://ladsweb.nascom.nasa.gov/data/search.html). The 2012 product, with a spatial resolution of 500 meters, was used. MCD12Q1 data have five data layers, and the overall classification accuracy is within 74.8% ± 1.3%. The main types include forest land, grassland, cultivated land, wetland, and urban construction land. In this paper, cultivated land is selected as the pixels with a DN value of 12 in the first classification layer [19].

#### 2.2.3. Meteorological Data Observed by Meteorological Stations

The distribution of meteorological stations in Heilongjiang Province is shown in Figure 2. The meteorological data (2000–2019) include the monthly (May to September) average temperature and precipitation monitored at each station, derived from the China meteorological data sharing network (https://cdc.nmic.cn/). There are nearly 6000 valid records.

Figure 2 Distribution of meteorological stations.

### 2.3. Research Methods

#### 2.3.1. TVDI Method

2.3.1.1 Spatial Characteristics of NDVI-Ts: Through analysis, Goetz [20] considered that there is an obvious negative correlation between NDVI and Ts, mainly because the underlying surface temperature rises sharply when vegetation is subjected to water stress.
He estimated the regional average soil humidity conditions and considered that the resolution of the sensor has little effect on the relationship between them. Price et al. [21] analyzed the NDVI and Ts data obtained by different satellite sensors and considered that the scatter diagram composed of NDVI and surface radiation temperature was triangular (Figure 3). The spatial distribution of soil moisture in a region can be obtained by deriving the regional vegetation index and surface temperature from satellite data, establishing their scatter plot, and determining the dry edge, wet edge, and vertex coordinates of the model. In this paper, the monthly scale Ts-NDVI space is established using monthly remote sensing data. The maximum and minimum Ts corresponding to each NDVI value are obtained by the maximum and minimum combination methods, respectively; the maximum Ts values define the dry edge and the minimum Ts values define the wet edge. The calculation is as follows:

(1) $T_{s\max} = a_1 + b_1 \times \mathrm{NDVI}$,
(2) $T_{s\min} = a_2 + b_2 \times \mathrm{NDVI}$.

Figure 3 NDVI-Ts triangular feature space.

In these formulas, $T_s$ is the surface temperature; $T_{s\min}$ is the minimum surface temperature under the same NDVI conditions; $T_{s\max}$ is the maximum surface temperature under the same NDVI conditions; and $a_1$, $a_2$, $b_1$, and $b_2$ are the coefficients of the fitting equations.

2.3.1.2 TVDI Establishment: TVDI is a drought monitoring model proposed by Sandholt et al. [22] based on the NDVI-Ts characteristic space. Its principle is that the NDVI-Ts characteristic space contains a series of soil moisture isolines, which have different Ts-NDVI slopes under different water conditions. On this basis, the concept of TVDI is proposed (Figure 4). The calculation formula is as follows:

(3) $\mathrm{TVDI} = \dfrac{T_s - T_{s\min}}{T_{s\max} - T_{s\min}}$.

Figure 4 TVDI model principle.

In this formula, $T_s$, $T_{s\min}$, and $T_{s\max}$ are the same as in formulas (1) and (2). Substituting formulas (1) and (2) into formula (3) gives

(4) $\mathrm{TVDI} = \dfrac{T_s - (a_2 + b_2 \times \mathrm{NDVI})}{(a_1 + b_1 \times \mathrm{NDVI}) - (a_2 + b_2 \times \mathrm{NDVI})}$,

where $T_s$, $a_1$, $a_2$, $b_1$, and $b_2$ are the same as in formulas (1) and (2).

2.3.1.3 TVDI Classification: The grade division of TVDI characterizes the degree of drought, and the TVDI value lies between 0 and 1. According to the author's previous research, the TVDI drought grade standard suitable for Heilongjiang Province is: 0 < TVDI < 0.46 is normal, 0.46 ≤ TVDI < 0.57 is light drought, 0.57 ≤ TVDI < 0.76 is medium drought, 0.76 ≤ TVDI < 0.86 is severe drought, and 0.86 ≤ TVDI < 1 is extreme drought [23]. To facilitate mapping, this paper scales the TVDI value by 100 for analysis, so that the TVDI value lies between 0 and 100 (Table 1).

Table 1 Drought classification of TVDI in Heilongjiang Province.

| Normal | Light drought | Medium drought | Severe drought | Extreme drought |
| --- | --- | --- | --- | --- |
| (0, 46) | [46, 57) | [57, 76) | [76, 86) | [86, 100) |
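To make the TVDI construction concrete, the following Python/NumPy sketch (an illustrative outline using assumed synthetic inputs, not the authors' processing code) fits the dry and wet edges of formulas (1)–(2) from an NDVI-Ts scatter, evaluates TVDI with formula (3), and assigns the drought grades of Table 1 with values scaled by 100 as in this paper.

```python
import numpy as np

def fit_edges(ndvi, ts, n_bins=50):
    """Fit dry edge Ts_max = a1 + b1*NDVI and wet edge Ts_min = a2 + b2*NDVI
    by taking the maximum and minimum Ts in each NDVI bin (formulas (1)-(2))."""
    bins = np.linspace(np.nanmin(ndvi), np.nanmax(ndvi), n_bins + 1)
    centers, ts_max, ts_min = [], [], []
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (ndvi >= lo) & (ndvi < hi) & np.isfinite(ts)
        if mask.any():
            centers.append((lo + hi) / 2.0)
            ts_max.append(np.nanmax(ts[mask]))
            ts_min.append(np.nanmin(ts[mask]))
    centers = np.asarray(centers)
    b1, a1 = np.polyfit(centers, np.asarray(ts_max), 1)   # dry edge coefficients
    b2, a2 = np.polyfit(centers, np.asarray(ts_min), 1)   # wet edge coefficients
    return (a1, b1), (a2, b2)

def tvdi(ndvi, ts, dry, wet):
    """TVDI = (Ts - Ts_min) / (Ts_max - Ts_min), formula (3), scaled to 0-100."""
    (a1, b1), (a2, b2) = dry, wet
    ts_max = a1 + b1 * ndvi
    ts_min = a2 + b2 * ndvi
    return 100.0 * (ts - ts_min) / (ts_max - ts_min)

def drought_grade(tvdi_value):
    """Grade thresholds of Table 1 (TVDI values scaled by 100)."""
    edges = [46, 57, 76, 86]
    labels = ["normal", "light", "medium", "severe", "extreme"]
    return labels[np.searchsorted(edges, tvdi_value, side="right")]

if __name__ == "__main__":
    # Synthetic pixels only, for illustration.
    rng = np.random.default_rng(0)
    ndvi = rng.uniform(0.1, 0.9, 5000)
    ts = 320 - 25 * ndvi + rng.uniform(-8, 8, 5000)   # K, cooler under denser vegetation
    dry, wet = fit_edges(ndvi, ts)
    values = tvdi(ndvi, ts, dry, wet)
    print(drought_grade(float(values[0])), float(values[0]))
```

In practice, the NDVI and Ts arrays would come from the monthly MOD13A2 and MOD11A2 composites described in Section 2.2.1, and the edge fit would be done separately for each monthly scene.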
#### 2.3.2. Standardized Precipitation Evapotranspiration Index (SPEI) Method

The standardized precipitation evapotranspiration index (SPEI) characterizes the drought condition of a region according to the degree to which the difference between precipitation and evapotranspiration deviates from the average state. SPEI is constructed by considering the impact of temperature on drought and introducing potential evapotranspiration on the basis of the SPI index; therefore, SPEI is also an index based on a probability model and also has multi-time-scale characteristics. The specific calculation steps are given by Li [24]. This paper only lists the main steps: (1) calculate the potential evapotranspiration (PET) using the Penman–Monteith formula; (2) calculate the difference D between monthly precipitation and potential evapotranspiration; (3) fit the cumulative probability density of D with the three-parameter log-logistic distribution; and (4) normalize, with P = 1 − F(x).

When the cumulative probability P ≤ 0.5, $\omega = \sqrt{-2\ln P}$ and

(5) $\mathrm{SPEI} = \omega - \dfrac{c_0 + c_1\omega + c_2\omega^2}{1 + d_1\omega + d_2\omega^2 + d_3\omega^3}$.

When the cumulative probability P > 0.5, $\omega = \sqrt{-2\ln(1-P)}$ and

(6) $\mathrm{SPEI} = -\left(\omega - \dfrac{c_0 + c_1\omega + c_2\omega^2}{1 + d_1\omega + d_2\omega^2 + d_3\omega^3}\right)$.

In these formulas, $c_0$ = 2.5155, $c_1$ = 0.8029, $c_2$ = 0.0103, $d_1$ = 1.4328, $d_2$ = 0.1893, $d_3$ = 0.0013, and P is the cumulative probability defined above.

The SPEI drought classification is shown in Table 2.

Table 2 SPEI drought classification.

| Normal | Light drought | Medium drought | Severe drought | Extreme drought |
| --- | --- | --- | --- | --- |
| (−0.5, +∞) | (−1.0, −0.5] | (−1.5, −1.0] | (−2.0, −1.5] | (−∞, −2.0] |

#### 2.3.3. Drought Spatial Distribution Analysis Method

2.3.3.1 TVDI Average Method: In this study, the average value method (formula (7)) is used to analyze the spatial pattern of drought; TVDI22 is obtained and graded according to the drought grades, giving the average spatial distribution of drought in Heilongjiang Province over 22 years.

(7) $\mathrm{TVDI}_{22} = \dfrac{\mathrm{TVDI}_{2000} + \mathrm{TVDI}_{2001} + \cdots + \mathrm{TVDI}_{2021}}{22}$.

In this formula, TVDI22 represents the 22-year average value of each TVDI pixel within Heilongjiang Province, and TVDI2000, TVDI2001, …, TVDI2021 represent the annual average values of each TVDI pixel within Heilongjiang Province from 2000 to 2021.

2.3.3.2 TVDI Range Method: The range value is the difference between the maximum and minimum values of each TVDI pixel in Heilongjiang Province over the past 22 years (formula (8)). The greater the range value, the greater the interannual drought gap, indicating that the pixel is vulnerable to drought years; the smaller the range value, the smaller the interannual drought gap, indicating that the pixel is not affected by drought years and remains in the same state all year round.

(8) $\mathrm{TVDI}_R = \mathrm{TVDI}_{\max} - \mathrm{TVDI}_{\min}$.

In this formula, TVDIR represents the interannual range of each TVDI pixel in Heilongjiang Province, TVDImax is the maximum value of each TVDI pixel over 22 years, and TVDImin is the minimum value of each TVDI pixel over 22 years.

#### 2.3.4. Drought Assessment Method

2.3.4.1 Calculation of Region Proportion of Drought Grade: The regional proportion of a drought grade represents the extent of that grade in the study region. The formula is:

(9) $P_i = \dfrac{m}{M} \times 100\%$.

In this formula, i is the drought grade, representing normal, light, medium, severe, or extreme drought; m is the number of pixels with drought grade i; and M is the total number of pixels of all drought grades.

2.3.4.2 Calculation of Drought Frequency: In this paper, TVDI ≥ 57 is defined as the criterion for a certain degree of drought [25]: if TVDIi ≥ 57, a drought of some degree is considered to have occurred and the frequency value 1 is assigned; otherwise, if TVDIi < 57, no drought is considered to have occurred and the frequency value 0 is assigned. TVDIi is the TVDI of the i-th month of the year (May to September), and the total drought frequency is the sum of the occurrence frequencies of all months.
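As a compact illustration of the evaluation formulas above, the sketch below (synthetic inputs only; an assumed outline rather than the authors' code) implements the SPEI standardization of formulas (5)–(6), the per-pixel mean and range of formulas (7)–(8), the grade proportion of formula (9), and the TVDI ≥ 57 drought frequency count. The rational-approximation constants are those listed after formula (6).

```python
import math
import numpy as np

# Approximation constants used in formulas (5)-(6).
C0, C1, C2 = 2.5155, 0.8029, 0.0103
D1, D2, D3 = 1.4328, 0.1893, 0.0013

def spei_from_probability(p):
    """Standardize a cumulative probability P = 1 - F(x) into SPEI, formulas (5)-(6)."""
    if p <= 0.5:
        w, sign = math.sqrt(-2.0 * math.log(p)), 1.0
    else:
        w, sign = math.sqrt(-2.0 * math.log(1.0 - p)), -1.0
    return sign * (w - (C0 + C1 * w + C2 * w * w) /
                   (1.0 + D1 * w + D2 * w * w + D3 * w ** 3))

def tvdi_mean_and_range(tvdi_stack):
    """Per-pixel 22-year mean (formula (7)) and range (formula (8)) of annual TVDI."""
    return tvdi_stack.mean(axis=0), tvdi_stack.max(axis=0) - tvdi_stack.min(axis=0)

def grade_proportion(tvdi_img, lo, hi):
    """Proportion of pixels falling in a drought grade [lo, hi), formula (9)."""
    m = np.count_nonzero((tvdi_img >= lo) & (tvdi_img < hi))
    return 100.0 * m / tvdi_img.size

def drought_frequency(tvdi_monthly, threshold=57.0):
    """Count, per pixel, the months with TVDI >= 57 (the drought criterion)."""
    return (tvdi_monthly >= threshold).sum(axis=0)

if __name__ == "__main__":
    print(round(spei_from_probability(0.5), 3))   # near the median, SPEI is close to 0
    annual = np.random.default_rng(1).uniform(30, 90, size=(22, 4, 4))   # 22 synthetic years
    mean_img, range_img = tvdi_mean_and_range(annual)
    print(round(grade_proportion(mean_img, 57, 76), 1), "% medium drought")
    monthly = np.random.default_rng(2).uniform(30, 90, size=(110, 4, 4))  # 110 synthetic months
    print(int(drought_frequency(monthly).max()), "droughts at the most-affected pixel")
```

In a real workflow, the synthetic annual and monthly stacks would be replaced by the TVDI rasters derived for each growing-season month of 2000–2021, masked to cultivated land pixels.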
Heilongjiang Province (121°11′–135°05 ′E, 43°25′–53 °33′N) is located in Northeast China. It is the province with the highest latitude in China. The terrain is characterized by two mountains in the north and south and two plains in the east and west (Figure1). The crops in the study region are one crop per annual, and the whole growth period is from May to September, with an annual average temperature of −6°C to 4°C in the region. According to the temperature index from south to north, it can be divided into middle temperate zone and cold temperate zone. The annual precipitation is 350 to 650 mm. From east to west, it can be divided into humid region, semihumid region, and semiarid region according to the dryness index [17]. Agricultural characteristic area in Heilongjiang Province is divided into four regions [18] (Figure 1): Daxing an Mountain and Xiaoxing an Mountain (region I), Sanjiang Plain (region II), Zhangguangcai Mountains (region III), and Songnen Plain (region IV). Region I is mainly forestry and a small amount of agriculture, mainly wheat, soybean, and potato. Regions II and IV are mainly agriculture, mainly planting corn, rice, soybean, wheat, potato, and so on. Region III is dominated by agriculture and forestry, and agriculture is mainly planted with rice, corn, soybean, and so on.Figure 1 Scope of study region and partition. ## 2.2. Data Processing ### 2.2.1. MODIS\Terra NDVI and Ts Products Remote sensing MODIS products (2640 scenes) covering all the study regions: vegetation index (MOD13A2) and surface temperature (MOD11A2) data. MODIS sensors transit twice a day, which can achieve full coverage of the research region. The data come from the computer network information center of the Chinese Academy of Sciences (https://www.nsdata.cn/). The collection time of MOD11A2 product (global 1 km 8-day surface surface temperature/emissivity data) is from day 121 to day 273 every year (2000–2021), and the collection time of MOD13A2 product (global 1 km 16-day vegetation index data) is the same. The maximum value method is used to unify the product time of MOD13A2 and MOD11A2 to 16 days. ### 2.2.2. Acquisition of Cultivated Land Information The MODIS Collection 5.1 land use/land cover product data set (MCD12Q1), the University of Boston in the United States, is used for the study of cultivated land information, which is sourced from NASA website (https://ladsweb.nascom.nasa.gov/data/search.html). The spatial resolution in 2012 was 500 meters. MCD12Q1 data have five data layers, and the overall classification accuracy ranges within 74.8% ± 1.3%. The main types include forest land, grassland, cultivated land, wetland, and urban construction land. In this paper, the selection of cultivated land is the phase element with DN value of 12 in the first classified image [19]. ### 2.2.3. Meteorological Data Observed by Meteorological Stations The distribution of meteorological stations in Heilongjiang Province is shown in Figure2. The meteorological data (2000–2019) includes the monthly (May to September) average temperature and precipitation data monitored by each station, which are derived from China meteorological data sharing network (https://cdc.nmic.cn/). There are nearly 6000 valid data.Figure 2 Distribution of meteorological stations. ## 2.2.1. MODIS\Terra NDVI and Ts Products Remote sensing MODIS products (2640 scenes) covering all the study regions: vegetation index (MOD13A2) and surface temperature (MOD11A2) data. 
MODIS sensors transit twice a day, which can achieve full coverage of the research region. The data come from the computer network information center of the Chinese Academy of Sciences (https://www.nsdata.cn/). The collection time of MOD11A2 product (global 1 km 8-day surface surface temperature/emissivity data) is from day 121 to day 273 every year (2000–2021), and the collection time of MOD13A2 product (global 1 km 16-day vegetation index data) is the same. The maximum value method is used to unify the product time of MOD13A2 and MOD11A2 to 16 days. ## 2.2.2. Acquisition of Cultivated Land Information The MODIS Collection 5.1 land use/land cover product data set (MCD12Q1), the University of Boston in the United States, is used for the study of cultivated land information, which is sourced from NASA website (https://ladsweb.nascom.nasa.gov/data/search.html). The spatial resolution in 2012 was 500 meters. MCD12Q1 data have five data layers, and the overall classification accuracy ranges within 74.8% ± 1.3%. The main types include forest land, grassland, cultivated land, wetland, and urban construction land. In this paper, the selection of cultivated land is the phase element with DN value of 12 in the first classified image [19]. ## 2.2.3. Meteorological Data Observed by Meteorological Stations The distribution of meteorological stations in Heilongjiang Province is shown in Figure2. The meteorological data (2000–2019) includes the monthly (May to September) average temperature and precipitation data monitored by each station, which are derived from China meteorological data sharing network (https://cdc.nmic.cn/). There are nearly 6000 valid data.Figure 2 Distribution of meteorological stations. ## 2.3. Research Methods ### 2.3.1. TVDI Method 2.3.1.1 Spatial Characteristics of NDVI-Ts: Through analysis, Goetz [20] considered that there is an obvious negative correlation between NDVI and Ts. The main reason is that the underlying surface temperature will rise sharply when the vegetation is subjected to water stress. He estimated the regional average soil humidity conditions and considered that the resolution of the sensor has little effect on the relationship between them. Price et al. [21] analyzed the NDVI and Ts data obtained by different satellite sensors and considered that the scatter diagram composed of NDVI and surface radiation temperature was triangular (Figure 3). The spatial distribution of soil moisture in the region can be obtained by obtaining the regional vegetation index and surface temperature from satellite data, establishing the scatter map of them, and determining the coordinates of dry edge, wet edge, and each vertex of the model. In this paper, the monthly scale Ts-NDVI space is established by using monthly remote sensing data. The maximum and minimum Ts corresponding to each NDVI value are the maximum and minimum combination methods respectively. The maximum Ts value defines the dry edge and the minimum Ts value defines the wet edge. The calculation is as follows:(1)Tsmax=a1+b1×NDVI,(2)Tsmin=a2+b2×NDVI.Figure 3 NDVI-Ts triangular feature space.In this formula,Ts is the surface temperature; Tsmin is the minimum surface temperature under the same NDVI conditions; Tsmax is the maximum surface temperature under the same NDVI conditions. a1, a2, b1, and b2 are the coefficients of the fitting equation.2.3.1.2 TVDI Establishment: TVDI is a drought monitoring model proposed by Sandholt et al. [22] based on the characteristic space of NDVI-Ts. 
Its principle is that NDVI-Ts characteristic space has a series of soil moisture and other straight lines. These isolines are the slopes of Ts and NDVI under different water conditions. Therefore, the concept of TVDI is proposed (Figure 4). The calculation formula is as follows:(3)TVDI=Ts−TsminTsmax−Tsmin.Figure 4 TVDI model principle.In this formula,Ts, Tsmin, and Tsmax are the same as formulas (1) and (2).Substitute formulas (1) and (2) into formula (3), that is,(4)TVDI=Ts−a2+b2×NDVIa1+b1×NDVI−a2+b2×NDVI.In this formula,Ts, a1, a2, b1, and b2 are the same as formulas (1) and (2).2.3.1.3 TVDI Classification: The grade division of TVDI can characterize the degree of drought, and the TVDI value is between 0 and 1. According to the author’s previous research, the TVDI drought grade standard suitable for Heilongjiang Province is: 0 < TVDI < 0.46 is normal, 0.46 ≤ TVDI < 0.57 is light drought, 0.57 ≤ TVDI < 0.76 is medium drought, 0.76 ≤ TVDI < 0.86 is severe drought, and 0.86 ≤ TVDI < 1 is extreme drought [23]. In order to facilitate the drawing, this paper expands the TVDI value by 100 times for analysis, that is, the TVDI value is between 0 and 100 (Table 1).Table 1 Drought classification of TVDI in Heilongjiang Province. NormalLight droughtMedium droughtSevere droughtExtreme drought(0, 246)[46, 57)[57, 76)[76, 86)[86, 100) ### 2.3.2. Standardized Precipitation Evapotranspiration Index (SPEI) Method The standardized precipitation evapotranspiration index (SPEI) characterizes the drought condition of a region according to the degree that the difference between precipitation and evapotranspiration deviates from the average state. SPEI is constructed by considering the impact of temperature on drought and introducing potential evapotranspiration based on SPI index. Therefore, SPEI is also an index based on probability model and also has the characteristics of multitime scale. The specific calculation steps are shown by Li [24]. This paper only lists the main steps: (1) calculate the potential evapotranspiration (PET) by using Penman–Monteith formula; (2) calculate the difference D between monthly precipitation and potential evapotranspiration; (3) the cumulative probability density of D was calculated by three parameter log-logistic; and (4) normalized P = 1 − F(x)When cumulative probabilityp ≤ 0.5, W=−2lnP(5)SPEI=ω−c0+c1ω+c2ω21+d1ω+d2ω2+d3ω3.When cumulative probabilityP > 0.5: W=−2ln1−P(6)SPEI=−ω−c0+c1ω+c2ω21+d1ω+d2ω2+d3ω3.In the formula,c0 = 2.5155, c1 = 0.0103, d1 = 1.4328, d2 = 0.1893, d3 = 0.0013, and P is precipitation.SPEI drought classification of TVDI is shown as Table2.Table 2 SPEI drought classification. NormalLight droughtMedium droughtSevere droughtExtreme drought−0.5,+∞(−1.0, −0.5](−1.5, −1.0](−2.0, −1.5]−∞,−2.0 ### 2.3.3. 
#### 2.3.3. Drought Spatial Distribution Analysis Method

2.3.3.1 TVDI Average Method: In this study, the average value method (formula (7)) is used to analyze the spatial pattern of drought. TVDI$_{22}$ is obtained and graded according to the drought classification, giving the 22-year average spatial distribution of drought in Heilongjiang Province:

(7) $\mathrm{TVDI}_{22} = \dfrac{\mathrm{TVDI}_{2000} + \mathrm{TVDI}_{2001} + \cdots + \mathrm{TVDI}_{2021}}{22}$.

In this formula, TVDI$_{22}$ represents the 22-year average value of each TVDI pixel within Heilongjiang Province, and TVDI$_{2000}$, TVDI$_{2001}$, …, TVDI$_{2021}$ represent the annual average value of each TVDI pixel within Heilongjiang Province from 2000 to 2021.

2.3.3.2 TVDI Range Method: The range value represents the difference between the maximum value and the minimum value of each TVDI pixel in Heilongjiang Province over the past 22 years (formula (8)). The greater the range value, the greater the interannual drought gap, indicating that the pixel is vulnerable in drought years. The smaller the range value, the smaller the interannual drought gap, indicating that the pixel is little affected by drought years and stays in a similar state all year round:

(8) $\mathrm{TVDI}_R = \mathrm{TVDI}_{\max} - \mathrm{TVDI}_{\min}$.

In this formula, TVDI$_R$ represents the interannual range of each TVDI pixel in Heilongjiang Province, TVDI$_{\max}$ is the maximum value of each TVDI pixel in 22 years, and TVDI$_{\min}$ is the minimum value of each TVDI pixel in 22 years.

#### 2.3.4. Drought Assessment Method

2.3.4.1 Calculation of Regional Proportion of Drought Grade: The proportion of the drought grade region represents the spatial extent of drought in the study region. The formula is expressed as follows:

(9) $P_i = \dfrac{m}{M} \times 100\%$.

In this formula, $i$ is the drought grade, representing normal, light, medium, severe, and extreme drought; $m$ is the number of pixels with drought level $i$; and $M$ is the total number of pixels of all drought levels.

2.3.4.2 Calculation of Drought Frequency: In this paper, TVDI ≥ 57 is defined as the standard for a certain degree of drought [25]: if TVDI$_i$ ≥ 57, a certain degree of drought is considered to occur and the frequency value 1 is assigned; otherwise, if TVDI$_i$ < 57, no drought is considered to occur and the frequency value 0 is assigned. TVDI$_i$ is the TVDI of the i-th month of the year (May to September), and the total drought frequency is the sum of the occurrence frequencies of all months.
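A minimal sketch of the per-pixel statistics in formulas (7) and (8), assuming the annual TVDI maps are stacked along the first axis of a NumPy array; the array layout and names are illustrative assumptions.

```python
import numpy as np

def tvdi_mean_and_range(tvdi_stack):
    """Per-pixel 22-year mean (formula (7)) and range (formula (8)).

    tvdi_stack : array of shape (n_years, rows, cols) holding the annual
                 average TVDI maps (scaled to 0-100), NaN outside cultivated land.
    """
    tvdi_22 = np.nanmean(tvdi_stack, axis=0)                                  # TVDI_22
    tvdi_r = np.nanmax(tvdi_stack, axis=0) - np.nanmin(tvdi_stack, axis=0)    # TVDI_R
    return tvdi_22, tvdi_r
```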
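Likewise, the drought-grade proportion of formula (9) and the frequency rule of Section 2.3.4.2 might be sketched as follows, with the grade boundaries taken from Table 1; the function names and the handling of invalid pixels are assumptions for illustration.

```python
import numpy as np

# Grade boundaries on the 0-100 TVDI scale (Table 1)
GRADES = {"normal": (0, 46), "light": (46, 57), "medium": (57, 76),
          "severe": (76, 86), "extreme": (86, 100)}

def grade_proportions(tvdi_map):
    """Area proportion of each drought grade, formula (9), in percent."""
    valid = tvdi_map[np.isfinite(tvdi_map)]
    total = valid.size
    return {name: 100.0 * np.sum((valid >= lo) & (valid < hi)) / total
            for name, (lo, hi) in GRADES.items()}

def drought_frequency(monthly_tvdi_stack, threshold=57):
    """Total drought frequency per pixel: number of months with TVDI >= 57
    over the monthly maps (May-September, 2000-2021). NaN pixels never count."""
    return np.sum(monthly_tvdi_stack >= threshold, axis=0)
```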
## 3. Results and Analysis

### 3.1. Comparison between TVDI Method and SPEI Method

The standardized precipitation evapotranspiration index (SPEI) is one of the meteorological drought indexes. The calculation of SPEI in this paper is based on the precipitation, temperature, and evapotranspiration monitored by the meteorological stations (Figure 2), so it describes meteorological drought well. Many scholars [26–31] have shown that this index is an ideal tool for monitoring crop drought. However, the deficiency of this method is that the monitoring results are station values, i.e., point data. Relying only on the data of meteorological stations cannot fully reflect the drought characteristics of the whole province. This is the advantage of remote sensing drought monitoring, which can realize full coverage of the province and provide areal data. Graphs from remote sensing are fundamental models used in scientific approaches to describe the relation between objects in the real world [32]. Scholars use professional meteorological data interpolation software to spatially interpolate the point observations and obtain surface data [33]. Owing to the limited number of meteorological stations and the limitations of the conversion methods, the accuracy of monitoring surface drought in this way is also limited. Given the complexity of region, environment, and climate, TVDI has more advantages in monitoring drought pixel by pixel, and many scholars have found the TVDI drought monitoring method to be superior and more feasible [34–38]. In this paper, the TVDI values and SPEI values at the meteorological stations are matched one-to-one, and the correlation between them is analyzed (Figure 5). A negative correlation is found between the 1-month scale SPEI and the monthly TVDI. The fitted relation is SPEI = −0.0325 × TVDI + 1.2572, with a correlation coefficient of −0.6 (P < 0.01). This very significant negative correlation between the monthly TVDI value and the 1-month scale SPEI value shows that using TVDI to monitor drought can not only cover large-scale regional drought across the whole province but also retain the monitoring accuracy of the SPEI method. The vegetation index and surface temperature retrieved from remote sensing satellites can therefore express agricultural drought on the ground well.

Figure 5 Relationship analysis between monthly SPEI and monthly TVDI.
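A minimal sketch of the station-level comparison behind Figure 5, assuming paired one-dimensional arrays of monthly TVDI sampled at the station pixels and 1-month scale SPEI; the use of SciPy's `pearsonr` and NumPy's `polyfit`, as well as the variable names, are illustrative choices rather than the authors' stated toolchain.

```python
import numpy as np
from scipy import stats

def compare_tvdi_spei(tvdi_at_stations, spei_1month):
    """Pearson correlation and linear fit between station TVDI and 1-month SPEI.

    Both inputs are 1-D arrays of equal length, one value per station-month.
    """
    tvdi = np.asarray(tvdi_at_stations, dtype=float)
    spei = np.asarray(spei_1month, dtype=float)
    ok = np.isfinite(tvdi) & np.isfinite(spei)
    r, p_value = stats.pearsonr(tvdi[ok], spei[ok])        # correlation and significance
    slope, intercept = np.polyfit(tvdi[ok], spei[ok], 1)   # SPEI = slope * TVDI + intercept
    return r, p_value, slope, intercept
```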
### 3.2. Analysis of Spatial Characteristics of Drought in Heilongjiang Province from 2000 to 2021

The spatial distributions of the average TVDI (Figure 6(a)) and the TVDI range value (Figure 6(b)) from 2000 to 2021 show that the annual average TVDI and the TVDI range can reflect the occurrence, severity, and interannual changes of drought in Heilongjiang Province. Spatially, region I has little cultivated land and is rarely affected by drought, and the interannual differences in drought in region III are small. Regions IV and II carry the main cultivated land resources of the province. Region IV shows perennial drought, especially in the west. Region II shows various drought levels: the drought in the east is the lightest and the area is rarely affected by drought, but the interannual difference is obvious (TVDI$_R$ > 60); there are a small number of perennial severe drought regions in the middle, and the drought in the other areas is medium. Region II is the region with the largest interannual difference in drought.

Figure 6 Results of average drought and range difference of the whole province in 22 years: (a) TVDI22 drought grade distribution map and (b) TVDIR map.

### 3.3. Analysis of Drought Time Characteristics in Heilongjiang Province from 2000 to 2021

#### 3.3.1. Annual Characteristic Analysis

The proportion of average TVDI drought grade area in Heilongjiang Province from 2000 to 2021 (Figure 7(a)) shows that the medium drought grade region is the largest, accounting for 70% of the total region; the light drought region accounts for 23%, the severe drought region accounts for 4%, the normal region accounts for 3%, and the extreme drought area accounts for 0%. The area at or above the medium drought level accounts for 74%, indicating that drought is widespread in Heilongjiang Province, mainly at the medium drought level, and that drought is serious in some regions.

Figure 7 Characteristics of annual drought change in province and regions: (a) Proportion of drought grade region in the whole province. (b) Proportion of drought grade region in different regions.
(c) Characteristics of annual drought change in province. (d) Characteristics of annual drought change in regions.

The proportion of the average TVDI drought grade area in the four agricultural regions of Heilongjiang Province from 2000 to 2021 (Figure 7(b)) shows that the sum of the normal and light drought regions is 63% in region I, 40% in region II, 22% in region III, and 16% in region IV. The proportion of severe drought area in region IV is the largest among the four regions, accounting for 6%. Therefore, from low to high, the drought-affected levels of the four regions are region I, region II, region III, and region IV.

The annual average TVDI and the proportion of drought grade region in Heilongjiang Province from 2000 to 2021 (Figure 7(c)) show that the annual average TVDI from 2000 to 2009 was high, and the sum of severe drought and extreme drought areas was 32%, 16%, 32%, 8%, 22%, 11%, 34%, 15%, 14%, and 36%, with an average of 22%, indicating that the drought in Heilongjiang Province from 2000 to 2009 was severe and that a major drought occurred roughly every other year. The most severe years were 2009, 2006, 2002, and 2000, which are dry years. After 2010, the average TVDI decreased, and the proportion of severe drought and extreme drought was 18%, 23%, 14%, 0%, 10%, 13%, 16%, 18%, 23%, 0%, 3%, and 7%, with an average of 12%, indicating that the drought after 2010 was weaker than before, especially in 2013 and 2019.

The annual average TVDI and the proportion of drought grade region in the four agricultural regions of Heilongjiang Province from 2000 to 2021 (Figure 7(d)) show that the annual average TVDI in region IV is consistent with the annual average TVDI of the whole province (Figure 7(c)). Except for a few years (2003, 2013, 2019, 2020, and 2021), the proportion of severe drought and extreme drought areas is large; in the four dry years it reaches 50%, and the 22-year average is 25%. Therefore, region IV is affected by drought over a large area and with strong grade, and it is a typical arid area. In region III, the proportion of severe drought and extreme drought is 3% on average, and the proportion of normal and light drought areas is 33% on average. Over 22 years, the proportion of area with severe drought and extreme drought in region II is 6% on average, and the proportion of area with normal and light drought is 46% on average. The combined area of severe plus extreme drought and of normal plus light drought in region III is smaller than that in region II. These results are consistent with the analysis in Figure 7(b), showing that the drought-affected area in region III is large but the level is not strong, mainly medium drought. The drought levels in region II are diverse, and there are certain areas with severe drought and extreme drought. The proportion of normal and light drought area in region I is 60% on average, and it is humid all year round.

#### 3.3.2. Monthly Characteristics Analysis

The proportion of monthly average TVDI and drought grade area in Heilongjiang Province from 2000 to 2021 (Figure 8(a)) shows that the average monthly TVDI across years is 48 in July and 53 in August, belonging to light drought, and 67 in May, 63 in June, and 66 in September, belonging to medium drought. In May, the area of severe drought and extreme drought accounted for 25%, and the area of normal and light drought accounted for 26%.
In June, the proportion of severe drought and extreme drought decreased to 22%, and the proportion of normal and light drought increased to 36%, indicating that the area of drought (medium drought and above) in June decreased and its grade weakened compared with May. In September, the proportion of severe drought and extreme drought areas decreased to 11%, and the proportion of normal and light drought areas decreased to 14%, indicating that the drought area in September is large and the drought grade is concentrated, mainly at the medium drought level. Cropping in Heilongjiang Province follows a one-crop-per-year system. Drought in spring (May) and early summer (June) affects crop growth and development, and severe drought will reduce crop yield. Autumn (September) is the mature season, and a certain degree of drought is conducive to crop maturity and harvest.

Figure 8 Characteristics of monthly drought change in province and regions: (a) characteristics of monthly drought change in province and (b) characteristics of monthly drought change in regions.

The proportion of monthly average TVDI and drought grade area in the four agricultural regions of the province from 2000 to 2021 (Figure 8(b)) shows that the average TVDI values in region I from May to September are 55.50, 46.68, 45.36, 45.98, and 49.78, i.e., no drought in July and August and light drought in May, June, and September; region II: 56.88, 55.87, 47.58, 60.48, and 67.78, with light drought in May, June, and July and medium drought in August and September; region III: 62.94, 52.3, 53.31, 61.09, and 64.75, with medium drought in May, August, and September and light drought in June and July; region IV: 71.59, 68.52, 56.1, 54.6, and 65.6, with medium drought in May, June, and September and light drought in July and August. The change trend of TVDI in region IV from May to September is the same as that of the whole province. The proportion of severe drought and extreme drought in region IV in May is the largest of all months and regions, accounting for 40%, followed by 34% in June. The results show that, within the crop growth season in Heilongjiang Province, the largest drought area and most serious grade occur in spring (May) and early summer (June) in region IV, which will hinder the germination and growth of crops.

To sum up, the research on the temporal and spatial changes of Heilongjiang Province and the four agricultural regions shows that TVDI can monitor the occurrence and development of large-scale drought and judge its degree, but it cannot directly express the frequency of drought.

### 3.4. Analysis of Drought Frequency in Heilongjiang Province

The area proportion of the total drought frequency in Heilongjiang Province from 2000 to 2021 (Figure 9(a)) and the regional distribution of different frequencies (Figure 9(b)) are analyzed. The drought frequency distribution is similar to that of TVDI22; surface temperature and effective precipitation determine the spatial distribution of drought grade and drought frequency. Figure 9(b) shows that the areas with a drought frequency of fewer than 40 occurrences are mainly distributed in the east of region I and region II, and the areas with a drought frequency of more than 81 occurrences are mainly distributed in the west of region IV and the middle of region II.
In the 110 months of the crop growing season (May to September) from 2000 to 2021, about 22.28% of the cultivated land in the province has a drought frequency of fewer than 40 occurrences, about 13.88% has a drought frequency of 40 to 60 occurrences, and about 63.84% has a drought frequency of more than 60 occurrences.

Figure 9 Total frequency of drought in Heilongjiang Province from 2000 to 2021 and proportion of different frequency regions: (a) total frequency of drought and (b) proportion of different frequency regions.

The distribution of drought occurrence frequency (Figure 10(a)) and the proportion of drought area in Heilongjiang Province (Figure 10(b)) accumulated for each month from May to September over the past 22 years are analyzed. The results show that the average monthly drought frequency in the province is 15.71 times in May, 14.49 times in June, 7.18 times in July, 10.92 times in August, and 17.34 times in September. The months with the highest drought frequency in the province are September and May. The area with 22 droughts accounted for 17.9% in May, and the area with 20 droughts accounted for 33.4% in September. The area with one drought accounted for 10.7% in July and 0.2% in September. The area with fewer than 16 droughts in May and September accounted for no more than 5%.

Figure 10 (a) Spatial distribution of drought frequency in Heilongjiang Province from May to September 2000–2021. (b) Proportion of drought frequency region in Heilongjiang Province from May to September 2000–2021.

By analyzing the drought frequency of the four agricultural regions accumulated over 22 years for each month from May to September, it is found that the average drought frequency from May to September is 11.3, 9.1, 3.6, 5.5, and 12.5 times in region I; 11.6, 10.4, 5.4, 9.4, and 16.2 times in region II; 13.8, 10, 5, 9, and 18.4 times in region III; and 18.5, 17.5, 8.6, 12.5, and 18.2 times in region IV. The highest drought frequencies occur in May and September of region IV and in September of region III. Local rainfall in spring (May) and autumn (September) is low, and these areas are more vulnerable to strong winds, which accelerate soil water evaporation and increase the frequency of drought.

Drought occurred frequently and widely from 2000 to 2021, basically running through the whole crop growth season, but there were differences in drought area, drought frequency, and drought degree. The drought in spring and autumn is serious, frequent, and affects a wide area. Although the drought in summer is not serious, it is frequent and common in the southwest of region IV.

In different years, the onset of the rainy season, the amount of rainfall, and its distribution have a certain randomness, and rainfall also has a certain seasonality at the regional scale. Therefore, drought itself has a certain randomness and periodicity, and drought frequency is an appropriate index to describe this randomness and periodicity. The drought frequency in the whole province decreases from spring to summer and then increases from late summer to autumn.
This paper finds that the drought frequency over the whole crop growing season varies to some extent in regions I, II, and III, while in the southwest of region IV it has remained very high, indicating that precipitation in Heilongjiang Province plays a significant role in the formation of the drought cycle. Precipitation is mainly concentrated in summer, but is low in the southwest of region IV.
## 4. Discussion

Using surface TVDI data covering the whole province, this paper analyzes the temporal and spatial characteristics and frequency of drought at different time and spatial scales and reveals the occurrence, development, severity, and variation patterns of drought in Heilongjiang Province over the past 22 years. Li Chongrui [33] used SPEI to analyze the drought patterns of maize at time scales of 1, 3, 6, 12, and 24 months from 1989 to 2018 in Northeast China and found that drought was more serious from 2000 to 2010, that May was the month with the highest incidence and the largest drought area and degree, and that the southwest of Heilongjiang Province was a high-incidence area of drought. This result is consistent with the results of this paper. This paper analyzes the temporal and spatial characteristics of drought for four agricultural regions. The planting structure in each region is different; this paper does not discuss different crop types in different regions, and the analysis is limited by the temporal span of the available remote sensing data. In the future, while continuously improving and supplementing the remote sensing data, we will continue to study artificial intelligence and machine learning methods [39, 40], which can extract different crop types on the ground and monitor the response of different crop types to drought. If the damage to early- and late-maturing crops in different drought periods and regions can be analyzed, and the selection of suitable crop varieties in different regions guided accordingly, it will be very meaningful.

## 5. Conclusion

From 2000 to 2021, drought on the cultivated land of Heilongjiang Province was frequent and serious. Over the whole crop growing season, the medium drought level was the most prominent, accounting for 70% of the total cultivated land and distributed throughout the province.
In the 110 monitored months, areas with a drought frequency of more than 80 occurrences were mainly distributed in the west of region IV and the middle of region II, and about 63.84% of the cultivated land experienced drought more than 60 times. During the 22 years of TVDI monitoring, drought was serious from 2000 to 2009 and was alleviated after 2010; 2000, 2002, 2006, and 2009 were dry years, and 2013 and 2019 were wet years. Although the temperature in Heilongjiang Province has gradually increased in recent years, short-term drought has been alleviated by the influence of strong rainfall. Among the four agricultural regions of the province, region IV is a typical arid area, subject to large drought area and heavy intensity all year round, especially in May and June. In September its drought area is the largest but the intensity is medium, and the drought frequency in May and September is also the highest. The drought levels in region II are diverse, with strong drought in the middle and weak drought in the east; these diverse drought levels lead to large interannual fluctuations, making the region more vulnerable in drought years. Cropping in Heilongjiang Province follows a one-crop-per-year system. September is the autumn harvest season, and mild drought is conducive to the maturity and harvest of crops. There is less rainfall in spring, and the region is easily affected by strong winds, which intensifies the occurrence of drought; strong drought in spring and early summer (May-June) seriously affects the germination and growth of crops. The total frequency distribution of annual drought is similar to that of TVDI22, and there is a significant correlation between TVDI and SPEI, indicating that precipitation and surface temperature determine the spatial distribution of drought grade and drought frequency. Owing to the uncertainty in rainfall timing, amount, and distribution, drought is also accompanied by randomness and periodicity. Therefore, regional drought must be accurately monitored, and its occurrence mechanism needs more in-depth discussion. Considering social, human, natural, and other influencing factors would make such work even more meaningful.

---
*Source: 1003243-2022-04-28.xml*
1003243-2022-04-28_1003243-2022-04-28.md
82,162
Analysis of 22-year Drought Characteristics in Heilongjiang Province Based on Temperature Vegetation Drought Index
Li Wu; Youzhi Zhang; Limin Wang; Wenhuan Xie; Lijuan Song; Haifeng Zhang; Hongwen Bi; Yanyan Zheng; Yu Zhang; Xiaofei Zhang; Yan Li; Zhiqun Lv
Computational Intelligence and Neuroscience (2022)
Medical & Health Sciences
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2022/1003243
1003243-2022-04-28.xml
--- ## Abstract Heilongjiang Province is the main grain producing region in China and an important part of Northeast China Plain, which is one of the three black soil belts in the world. The cultivated region of black soil accounts for 50.6% of the black soil region in Northeast China. Due to the obvious rise of temperature and uneven distribution of precipitation in the 20th century, it has been considered to be one of the important reasons for agricultural drought and aridity. Under the background of climate change, understanding the multiyear changes and occurrence characteristics of cultivated land drought in different agricultural regions in Heilongjiang Province is of great significance for the establishment of agricultural drought prediction and early warning system in the future, guiding agricultural high-standard farmland irrigation in different regions, promoting black soil protection, and then improving grain yield. This paper calculates the temperature vegetation drought index (TVDI) based on the normalized difference vegetation index (NDVI) and surface temperature (TS) product data of MODIS from 2000 to 2021. Taking TVDI as the drought evaluation index, this paper studies the temporal and spatial variation distribution characteristics and occurrence frequency of drought in the whole region and four agricultural regions of Heilongjiang Province: Daxing an Mountain and Xiaoxing an Mountain (region I), Sanjiang Plain (region II), Zhangguangcai Mountains (region III), and Songnen Plain (region IV). The results show that medium drought generally occurred in Heilongjiang Province from 2000 to 2021, accounting for about 70% of the total cultivated land. The drought was severe from 2000 to 2009 and weakened from 2010 to 2021. In the 110 months of the crop growing season from 2000 to 2021, about 63.84% of the region suffered more than 60 droughts. It is found that the frequency of drought varies from region to region. More than 80 droughts occurred in the west of region IV and the middle of region II. The characteristics of region IV are large sandstorm, less precipitation, and lack of water conservancy facilities, resulting in frequent and strong drought. It is also found that the occurrence frequency, degree grade and regional distribution of drought are closely related to seasonal changes. In spring, the occurrence grade and frequency of drought in region IV are the strongest and the drought phenomenon is serious. In autumn, drought is frequent and distributed in all regions, but the grade is not strong (mainly medium drought), and the drought phenomenon is medium. It is humid in summer. Crops in Heilongjiang Province are one crop per annual. Spring drought seriously restricts the water content of crops. Long-term drought will lead to poor crop development and reduce yield. Therefore, only by clarifying the characteristics of regional time drought, monitoring accurate drought events and accurately predicting the occurrence of drought, can we guide high-standard farmland precision irrigation, improve crop yield and ensure national food security. At the same time, severe drought will affect the terrestrial ecosystem, resulting in the distribution of crops and microorganisms, and the transformation between carbon sink and carbon source. --- ## Body ## 1. Introduction Heilongjiang Province is the main grain producing region in China and an important part of Northeast China Plain, which is one of the three black soil belts in the world. 
It is an important commodity grain base in China, known as “Beidacang”. The western part of black soil region is located in the dual-transition zone of climate and ecology. It is a climate change-sensitive region with serious desertification [1]. Heilongjiang Province has become one of the main warming regions in China since 1980, and the average temperature has increased by 1.4℃ in recent 100 years (1905–2001). Drought has become one of the major natural disasters in the province. If the arid area and drought intensity level cannot be accurately determined or effective countermeasures cannot be taken, farmers’ agricultural income will suffer serious losses by 2030 [2]. Drought is affected by many factors, such as climate warming, soil erosion, land use/land cover change, and human activities. In recent years, black land degradation and protection, drought monitoring, and early warning have always been the focus of research. The eighth phase strategic plan (2014–2021) of the international hydrological plan (IHP) takes the change of water-related disasters affected by global climate change and intense human activities as one of the key research issues and coordinates the research on drought through the international drought initiative (IDI) to improve the ability to cope with drought [3].According to the statistics, from 2000 to 2019, the drought-affected region of crops in Heilongjiang Province has reached 2.1374 million hectares every year [4]. In the past 20 years, the drought area accounted for 63% of the total area of meteorological disasters, ranking the first in the province. Under the background of global climate change, extreme weather and drought events occur from time to time, such as the severe drought in the summer of 2000 [5], in which the drought-affected area of the whole province reached 5.016 million hectares and the drought time reached 30 to 40 days, and in 2002, the temperature in the whole province was generally on the high side, the precipitation in the northwest was less, and the drought situation was severe [6]. The research focus of this paper is to monitor and analyze the temporal and spatial variation characteristics and frequency of drought by using remote sensing indicators. Previous studies mainly used meteorological drought indicators and historical statistical data to monitor drought. The monitoring mainly analyzed the point results of meteorological stations or transformed the points into surface results through analysis methods. The accuracy of the results was limited by the number of stations and analysis methods. Using remote sensing indicators to monitor the spatial-temporal and frequency characteristics of drought in Heilongjiang Province was rare. The previous research took Heilongjiang Province as the research area, did not subdivide the region of the province, and the research on the temporal, spatial, and frequency characteristics of drought in different agricultural regions of Heilongjiang Province was also rare. Most research results [7–9] did not clearly define the occurrence, development, and end period of drought. It is very important to monitor and predict the occurrence and development of drought in the whole region and different agricultural regions of Heilongjiang Province, and accurately evaluate it. It is very important to carry out precision irrigation according to the different levels of drought in different regions at different times. Drought is a common natural disaster, which is gradual. 
Its occurrence and end are slow and difficult to detect. For drought-prone areas, accurate quantitative expression of drought is very important for effective drought management. Remote sensing drought monitoring indicators can be divided into two categories: single indexes, based on a single vegetation index or a single temperature index, and double indexes, which consider both temperature and a vegetation index. Commonly used single indexes include the anomaly vegetation index (AVI), the temperature condition index (TCI), and the normalized difference vegetation index (NDVI). However, research shows [10] that the factors affecting drought are complex, and a single index is of limited use for drought monitoring. Double indexes include the temperature vegetation drought index (TVDI), the vegetation temperature condition index (VTCI), and the vegetation water supply index (VSWI); because both the vegetation index and temperature are considered, drought can be retrieved effectively. Yu Min [11] used TVDI to monitor the summer drought in Heilongjiang Province in 2007 and showed that this method can monitor drought in real time and truly reflect the dynamic process of local drought occurrence and development. Zhong Wei [12] analyzed the characteristics of soil moisture in Lhasa based on TVDI and found that TVDI has a good negative correlation with measured surface soil moisture; TVDI monitoring showed that arid areas are mainly distributed in the north, southwest, and parts of central and southern China. This paper first compares TVDI, a remote sensing drought index, with the standardized precipitation evapotranspiration index (SPEI), a mature meteorological drought index, to demonstrate the advantages of the TVDI method. Second, this paper monitors the drought conditions of 110 growing-season months in Heilongjiang Province over 22 years (May to September of 2000–2021) and analyzes the spatial distribution characteristics, occurrence frequency, and future trend of drought in Heilongjiang. This study helps to accurately grasp the multiyear changes and occurrence characteristics of drought in the whole province and in its agricultural subregions. It can not only support an effective drought monitoring [13–15] and early warning system to guide accurate and effective irrigation of high-standard farmland, but also explore the response of the ecological environment and surface crops to climate change and provide a reference for agricultural production and managers [16].

## 2. Data and Methods

### 2.1. Overview of the Study Region

As the largest commercial grain production base in China, Heilongjiang Province faces severe challenges in regional drought prevention and disaster reduction owing to climate warming and uneven rainfall distribution. Heilongjiang Province (121°11′–135°05′E, 43°25′–53°33′N) is located in Northeast China and is the province with the highest latitude in China. The terrain is characterized by two mountain ranges in the north and south and two plains in the east and west (Figure 1). The crops in the study region are grown as one crop per year, with the whole growth period from May to September, and the annual average temperature in the region is −6°C to 4°C. According to the temperature index, from south to north the province can be divided into a middle temperate zone and a cold temperate zone. The annual precipitation is 350 to 650 mm.
From east to west, the province can be divided into humid, semihumid, and semiarid regions according to the dryness index [17]. The agricultural area of Heilongjiang Province is divided into four regions [18] (Figure 1): the Daxing'an and Xiaoxing'an Mountains (region I), the Sanjiang Plain (region II), the Zhangguangcai Mountains (region III), and the Songnen Plain (region IV). Region I is mainly forestry with a small amount of agriculture, chiefly wheat, soybean, and potato. Regions II and IV are mainly agricultural, planting corn, rice, soybean, wheat, potato, and other crops. Region III is dominated by agriculture and forestry, with agriculture mainly planting rice, corn, soybean, and other crops.

Figure 1 Scope of study region and partition.

### 2.2. Data Processing

#### 2.2.1. MODIS/Terra NDVI and Ts Products

A total of 2640 scenes of remote sensing MODIS products covering the entire study region were used: vegetation index (MOD13A2) and surface temperature (MOD11A2) data. MODIS sensors transit twice a day, which achieves full coverage of the research region. The data come from the Computer Network Information Center of the Chinese Academy of Sciences (https://www.nsdata.cn/). The collection period of the MOD11A2 product (global 1 km 8-day surface temperature/emissivity data) is from day 121 to day 273 of every year (2000–2021), and the collection period of the MOD13A2 product (global 1 km 16-day vegetation index data) is the same. The maximum value composite method is used to unify the temporal resolution of MOD13A2 and MOD11A2 to 16 days.

#### 2.2.2. Acquisition of Cultivated Land Information

The MODIS Collection 5.1 land use/land cover product (MCD12Q1), produced by Boston University in the United States, is used to extract cultivated land information; it is obtained from the NASA website (https://ladsweb.nascom.nasa.gov/data/search.html). The spatial resolution of the 2012 product is 500 meters. MCD12Q1 data have five data layers, and the overall classification accuracy is 74.8% ± 1.3%. The main types include forest land, grassland, cultivated land, wetland, and urban construction land. In this paper, cultivated land is taken as the pixels with a DN value of 12 in the first classification layer [19].

#### 2.2.3. Meteorological Data Observed by Meteorological Stations

The distribution of meteorological stations in Heilongjiang Province is shown in Figure 2. The meteorological data (2000–2019) include the monthly (May to September) average temperature and precipitation monitored by each station, which are derived from the China meteorological data sharing network (https://cdc.nmic.cn/). There are nearly 6000 valid records.

Figure 2 Distribution of meteorological stations.
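The compositing step in Section 2.2.1 keeps, for each pixel, the larger of two consecutive 8-day MOD11A2 values. Below is a minimal sketch of such a maximum value composite, assuming the two 8-day surface temperature layers have already been read into NumPy arrays; the array shape and random test values are illustrative, not taken from this paper.

```python
import numpy as np

def max_value_composite(period_a, period_b):
    """Merge two consecutive 8-day LST arrays into one 16-day composite.

    period_a, period_b: 2-D surface temperature arrays for two consecutive
    8-day MOD11A2 periods; np.nan marks invalid pixels. The per-pixel
    maximum is kept, matching the maximum value method of Section 2.2.1.
    """
    return np.nanmax(np.stack([period_a, period_b]), axis=0)

# Illustrative use with random data standing in for two 8-day tiles.
lst_days_121_128 = np.random.uniform(280, 310, size=(1200, 1200))
lst_days_129_136 = np.random.uniform(280, 310, size=(1200, 1200))
lst_16day = max_value_composite(lst_days_121_128, lst_days_129_136)
```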
### 2.3. Research Methods

#### 2.3.1. TVDI Method

2.3.1.1 Spatial Characteristics of NDVI-Ts: Through analysis, Goetz [20] considered that there is an obvious negative correlation between NDVI and Ts; the main reason is that the underlying surface temperature rises sharply when vegetation is subjected to water stress. He used this relationship to estimate regional average soil humidity conditions and considered that the resolution of the sensor has little effect on it. Price et al. [21] analyzed NDVI and Ts data obtained by different satellite sensors and considered that the scatter diagram formed by NDVI and surface radiation temperature is triangular (Figure 3). The spatial distribution of soil moisture in a region can therefore be obtained by retrieving the regional vegetation index and surface temperature from satellite data, establishing their scatter plot, and determining the dry edge, wet edge, and vertices of the model. In this paper, the monthly scale Ts-NDVI space is established from monthly remote sensing data: the maximum and minimum Ts corresponding to each NDVI value are extracted, the maximum Ts values define the dry edge, and the minimum Ts values define the wet edge. The edges are fitted as

$$T_{s\max} = a_1 + b_1 \times \mathrm{NDVI}, \quad (1)$$

$$T_{s\min} = a_2 + b_2 \times \mathrm{NDVI}. \quad (2)$$

Figure 3 NDVI-Ts triangular feature space.

In these formulas, Ts is the surface temperature, Tsmin is the minimum surface temperature under the same NDVI conditions, Tsmax is the maximum surface temperature under the same NDVI conditions, and a1, a2, b1, and b2 are the coefficients of the fitted equations.

2.3.1.2 TVDI Establishment: TVDI is a drought monitoring model proposed by Sandholt et al. [22] based on the NDVI-Ts characteristic space. Its principle is that the NDVI-Ts characteristic space contains a series of soil moisture isolines, which correspond to different slopes between Ts and NDVI under different water conditions; on this basis the concept of TVDI is proposed (Figure 4). The calculation formula is

$$\mathrm{TVDI} = \frac{T_s - T_{s\min}}{T_{s\max} - T_{s\min}}. \quad (3)$$

Figure 4 TVDI model principle.

In this formula, Ts, Tsmin, and Tsmax are the same as in formulas (1) and (2). Substituting formulas (1) and (2) into formula (3) gives

$$\mathrm{TVDI} = \frac{T_s - (a_2 + b_2 \times \mathrm{NDVI})}{(a_1 + b_1 \times \mathrm{NDVI}) - (a_2 + b_2 \times \mathrm{NDVI})}. \quad (4)$$

In this formula, Ts, a1, a2, b1, and b2 are the same as in formulas (1) and (2).

2.3.1.3 TVDI Classification: The grading of TVDI characterizes the degree of drought, and the TVDI value lies between 0 and 1. According to the author’s previous research, the TVDI drought grade standard suitable for Heilongjiang Province is: 0 < TVDI < 0.46 is normal, 0.46 ≤ TVDI < 0.57 is light drought, 0.57 ≤ TVDI < 0.76 is medium drought, 0.76 ≤ TVDI < 0.86 is severe drought, and 0.86 ≤ TVDI < 1 is extreme drought [23]. To facilitate mapping, this paper scales the TVDI value by 100, so that TVDI lies between 0 and 100 (Table 1).

Table 1 Drought classification of TVDI in Heilongjiang Province.

| Normal | Light drought | Medium drought | Severe drought | Extreme drought |
| --- | --- | --- | --- | --- |
| (0, 46) | [46, 57) | [57, 76) | [76, 86) | [86, 100) |
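The dry/wet edge fitting and TVDI calculation of formulas (1)–(4) can be sketched as follows, assuming `ndvi` and `ts` are co-registered 2-D NumPy arrays for one month. The NDVI range, number of bins, and synthetic test data are illustrative choices rather than values from this paper.

```python
import numpy as np

def fit_edges(ndvi, ts, n_bins=50, ndvi_min=0.2, ndvi_max=0.9):
    """Fit the dry edge (max Ts per NDVI bin) and wet edge (min Ts per bin).

    Returns (a1, b1, a2, b2) so that Ts_max = a1 + b1 * NDVI (dry edge,
    formula (1)) and Ts_min = a2 + b2 * NDVI (wet edge, formula (2)).
    """
    valid = np.isfinite(ndvi) & np.isfinite(ts) & (ndvi >= ndvi_min) & (ndvi <= ndvi_max)
    edges = np.linspace(ndvi_min, ndvi_max, n_bins + 1)
    centers, ts_max, ts_min = [], [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = valid & (ndvi >= lo) & (ndvi < hi)
        if not np.any(in_bin):
            continue
        centers.append((lo + hi) / 2.0)
        ts_max.append(ts[in_bin].max())
        ts_min.append(ts[in_bin].min())
    centers = np.asarray(centers)
    b1, a1 = np.polyfit(centers, np.asarray(ts_max), 1)  # dry edge slope, intercept
    b2, a2 = np.polyfit(centers, np.asarray(ts_min), 1)  # wet edge slope, intercept
    return a1, b1, a2, b2

def compute_tvdi(ndvi, ts, a1, b1, a2, b2):
    """TVDI = (Ts - Ts_min) / (Ts_max - Ts_min), formulas (3) and (4)."""
    ts_min = a2 + b2 * ndvi
    ts_max = a1 + b1 * ndvi
    return (ts - ts_min) / (ts_max - ts_min)

# Illustrative synthetic data standing in for one month of co-registered NDVI and Ts.
ndvi = np.random.uniform(0.2, 0.9, size=(500, 500))
ts = 320.0 - 30.0 * ndvi + np.random.normal(0.0, 3.0, size=(500, 500))
a1, b1, a2, b2 = fit_edges(ndvi, ts)
tvdi = compute_tvdi(ndvi, ts, a1, b1, a2, b2) * 100  # scaled to 0-100 as in Table 1
```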
#### 2.3.2. Standardized Precipitation Evapotranspiration Index (SPEI) Method

The standardized precipitation evapotranspiration index (SPEI) characterizes the drought condition of a region according to how far the difference between precipitation and evapotranspiration deviates from its average state. SPEI is constructed by considering the impact of temperature on drought and introducing potential evapotranspiration into the SPI index; it is therefore also a probability-based index with multi-time-scale characteristics. The detailed calculation steps are given by Li [24]; this paper lists only the main steps: (1) calculate the potential evapotranspiration (PET) with the Penman–Monteith formula; (2) calculate the difference D between monthly precipitation and potential evapotranspiration; (3) fit the cumulative probability of D with a three-parameter log-logistic distribution, giving F(x); and (4) standardize using P = 1 − F(x).

When the cumulative probability P ≤ 0.5, $\omega = \sqrt{-2\ln P}$ and

$$\mathrm{SPEI} = \omega - \frac{c_0 + c_1\omega + c_2\omega^2}{1 + d_1\omega + d_2\omega^2 + d_3\omega^3}. \quad (5)$$

When the cumulative probability P > 0.5, $\omega = \sqrt{-2\ln(1 - P)}$ and

$$\mathrm{SPEI} = -\left(\omega - \frac{c_0 + c_1\omega + c_2\omega^2}{1 + d_1\omega + d_2\omega^2 + d_3\omega^3}\right). \quad (6)$$

In these formulas, c0 = 2.5155, c1 = 0.8029, c2 = 0.0103, d1 = 1.4328, d2 = 0.1893, and d3 = 0.0013, and P is the probability defined in step (4).

The SPEI drought classification is shown in Table 2.

Table 2 SPEI drought classification.

| Normal | Light drought | Medium drought | Severe drought | Extreme drought |
| --- | --- | --- | --- | --- |
| (−0.5, +∞) | (−1.0, −0.5] | (−1.5, −1.0] | (−2.0, −1.5] | (−∞, −2.0] |
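For concreteness, here is a minimal sketch of the standardization in step (4) and formulas (5) and (6), assuming the log-logistic cumulative probability F(x) of the monthly water balance D has already been obtained; the function name and the example probability are illustrative.

```python
import math

# Standard rational-approximation constants used in formulas (5) and (6).
C0, C1, C2 = 2.515517, 0.802853, 0.010328
D1, D2, D3 = 1.432788, 0.189269, 0.001308

def spei_from_probability(f_x):
    """Standardize the log-logistic cumulative probability F(x) into SPEI."""
    p = 1.0 - f_x                      # step (4): P = 1 - F(x)
    if p <= 0.5:
        w = math.sqrt(-2.0 * math.log(p))
        sign = 1.0
    else:
        w = math.sqrt(-2.0 * math.log(1.0 - p))
        sign = -1.0
    correction = (C0 + C1 * w + C2 * w**2) / (1.0 + D1 * w + D2 * w**2 + D3 * w**3)
    return sign * (w - correction)

# Example: a month whose water balance D falls at the 20th percentile
# (F(x) = 0.2) maps to SPEI of roughly -0.84, i.e., light drought in Table 2.
print(spei_from_probability(0.2))
```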
#### 2.3.3. Drought Spatial Distribution Analysis Method

2.3.3.1 TVDI Average Method: In this study, the average value method (formula (7)) is used to analyze the spatial pattern of drought. TVDI22 is computed and then graded and mapped according to the drought grades, giving the average spatial distribution of drought in Heilongjiang Province over 22 years:

$$\mathrm{TVDI}_{22} = \frac{\mathrm{TVDI}_{2000} + \mathrm{TVDI}_{2001} + \cdots + \mathrm{TVDI}_{2021}}{22}. \quad (7)$$

In this formula, TVDI22 is the 22-year average of each TVDI pixel within Heilongjiang Province, and TVDI2000, TVDI2001, …, TVDI2021 are the annual averages of each TVDI pixel from 2000 to 2021.

2.3.3.2 TVDI Range Method: The range value is the difference between the maximum and minimum of each TVDI pixel in Heilongjiang Province over the past 22 years (formula (8)). The greater the range, the greater the interannual drought gap, indicating that the pixel is vulnerable to drought years; the smaller the range, the smaller the interannual gap, indicating that the pixel is little affected by drought years and remains in the same state year after year:

$$\mathrm{TVDI}_R = \mathrm{TVDI}_{\max} - \mathrm{TVDI}_{\min}. \quad (8)$$

In this formula, TVDIR is the interannual range of each TVDI pixel in Heilongjiang Province, TVDImax is the maximum of each TVDI pixel over the 22 years, and TVDImin is the minimum over the 22 years.

#### 2.3.4. Drought Assessment Method

2.3.4.1 Calculation of Region Proportion of Drought Grade: The proportion of each drought grade region represents the extent of drought in the study region:

$$P_i = \frac{m}{M} \times 100\%. \quad (9)$$

In this formula, i is the drought grade (normal, light, medium, severe, or extreme drought), m is the number of pixels at drought level i, and M is the total number of pixels of all drought levels.

2.3.4.2 Calculation of Drought Frequency: In this paper, TVDI ≥ 57 is defined as the threshold for a drought of some degree [25]: if TVDIi ≥ 57, a drought is considered to occur and the frequency value for that month is 1; otherwise (TVDIi < 57) no drought occurs and the frequency value is 0. TVDIi is the TVDI of the i-th month of the year (May to September), and the total drought frequency is the sum of the occurrence values over all months.
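A minimal sketch of the assessment quantities in Section 2.3.4 (drought frequency and grade area proportion, formula (9)), assuming a stack of monthly TVDI images scaled to 0–100 is available as a NumPy array; the array sizes and random test data are illustrative.

```python
import numpy as np

DROUGHT_THRESHOLD = 57  # TVDI >= 57 counts as a drought occurrence [25]

def drought_frequency(monthly_tvdi):
    """Total drought frequency per pixel, Section 2.3.4.2.

    monthly_tvdi: array of shape (n_months, rows, cols) holding the monthly
    TVDI (scaled 0-100) for every growing-season month. Each month
    contributes 1 where TVDI >= 57 and 0 otherwise.
    """
    return np.sum(monthly_tvdi >= DROUGHT_THRESHOLD, axis=0)

def grade_area_proportion(tvdi, lower, upper):
    """Proportion P_i = m / M * 100% of pixels in one drought grade, formula (9).

    lower/upper are the grade bounds from Table 1 (lower inclusive, upper
    exclusive), e.g. (57, 76) for medium drought.
    """
    valid = np.isfinite(tvdi)
    m = np.count_nonzero(valid & (tvdi >= lower) & (tvdi < upper))
    M = np.count_nonzero(valid)
    return 100.0 * m / M

# Illustrative use with random data standing in for 110 growing-season months.
tvdi_series = np.random.uniform(0, 100, size=(110, 200, 200))
freq = drought_frequency(tvdi_series)
medium_share = grade_area_proportion(tvdi_series.mean(axis=0), 57, 76)
```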
## 3. Results and Analysis

### 3.1. Comparison between TVDI Method and SPEI Method

The standardized precipitation evapotranspiration index (SPEI) is a meteorological drought index. In this paper, SPEI is calculated from the precipitation, temperature, and evapotranspiration terms monitored by the meteorological stations (Figure 2) and describes meteorological drought well. Many scholars [26–31] have shown that this index is an ideal tool for monitoring crop drought. However, the deficiency of this method is that the monitoring results are meteorological station values, that is, point data; relying only on station data cannot fully reflect the drought characteristics of the whole province. This is precisely the advantage of remote sensing drought monitoring, which achieves full coverage of the province and provides areal data. Graphs derived from remote sensing are fundamental models used in scientific approaches to describe relations between objects in the real world [32].
Scholars use professional meteorological data interpolation software to spatially interpolate station points and obtain areal data [33]. Owing to the limited number of meteorological stations and the conversion methods, the accuracy of such areal drought monitoring is also limited. Given the complexity of region, environment, and climate, TVDI has more advantages in monitoring drought pixel by pixel, and many scholars have found the TVDI drought monitoring method to be superior and more feasible [34–38]. In this paper, the TVDI values and SPEI values at the meteorological stations are matched one-to-one and the correlation between them is analyzed (Figure 5). A negative correlation is found between the monthly-scale SPEI and the monthly TVDI: SPEI = −0.0325TVDI + 1.2572, with a correlation coefficient of −0.6 and P < 0.01. This very significant negative correlation between the monthly TVDI and the monthly-scale SPEI shows that using TVDI to monitor drought can cover large-scale regional drought across the whole province while retaining the monitoring accuracy of the SPEI method. The vegetation index and land surface temperature retrieved by remote sensing satellites can therefore express agricultural drought on the ground well.

Figure 5 Relationship analysis between monthly SPEI and monthly TVDI.

### 3.2. Analysis of Spatial Characteristics of Drought in Heilongjiang Province from 2000 to 2021

The spatial distributions of the average TVDI (Figure 6(a)) and the TVDI range (Figure 6(b)) from 2000 to 2021 show that the annual average TVDI and the TVDI range can reflect the occurrence, severity, and interannual changes of drought in Heilongjiang Province. Spatially, region I has little cultivated land and is rarely affected by drought, and the interannual differences in drought in region III are small. Regions IV and II carry the main cultivated land resources of the province. Region IV shows perennial drought, especially in the west. Drought levels in region II are varied: drought in the east is the lightest and that area is rarely affected, but the interannual difference is obvious (TVDIR > 60); there are a small number of perennially severe drought areas in the middle, and drought elsewhere is medium. Region II is the region with the largest interannual variation in drought.

Figure 6 Results of average drought and range difference of the whole province in 22 years: (a) TVDI22 drought grade distribution map and (b) TVDIR map.

### 3.3. Analysis of Drought Time Characteristics in Heilongjiang Province from 2000 to 2021

#### 3.3.1. Annual Characteristic Analysis

The proportions of the average TVDI drought grade areas in Heilongjiang Province from 2000 to 2021 (Figure 7(a)) show that the medium drought area is the largest, accounting for 70% of the total area; light drought accounts for 23%, severe drought for 4%, normal for 3%, and extreme drought for 0%. The area at or above the medium drought level accounts for 74%, indicating that drought is widespread in Heilongjiang Province, mainly at the medium drought level, and that drought is serious in some regions.

Figure 7 Characteristics of annual drought change in province and regions: (a) Proportion of drought grade region in the whole province. (b) Proportion of drought grade region in different regions.
(c) Characteristics of annual drought change in province. (d) Characteristics of annual drought change in regions.

The proportions of the average TVDI drought grade areas in the four agricultural regions of Heilongjiang Province from 2000 to 2021 (Figure 7(b)) show that the combined proportion of normal and light drought areas is 63% in region I, 40% in region II, 22% in region III, and 16% in region IV. The proportion of severe drought area in region IV is the largest among the four regions, accounting for 6%. Therefore, from low to high, the drought-affected levels of the four regions are region I, region II, region III, and region IV.

The annual average TVDI and the proportions of drought grade areas in Heilongjiang Province from 2000 to 2021 (Figure 7(c)) show that the annual average TVDI from 2000 to 2009 was high, and the combined severe drought and extreme drought areas were 32%, 16%, 32%, 8%, 22%, 11%, 34%, 15%, 14%, and 36%, with an average of 22%, indicating that drought in Heilongjiang Province from 2000 to 2009 was severe and that a major drought occurred every other year. The most severe years were 2009, 2006, 2002, and 2000, which are dry years. After 2010, the average TVDI decreased, and the proportion of severe and extreme drought was 18%, 23%, 14%, 0%, 10%, 13%, 16%, 18%, 23%, 0%, 3%, and 7%, with an average of 12%, indicating that drought after 2010 was weaker than before, especially in 2013 and 2019.

The annual average TVDI and the proportions of drought grade areas in the four agricultural regions of Heilongjiang Province from 2000 to 2021 (Figure 7(d)) show that the annual average TVDI in region IV is consistent with the annual average TVDI of the whole province (Figure 7(c)). Except for a few years (2003, 2013, 2019, 2020, and 2021), the proportion of severe and extreme drought areas in region IV is large; in the four dry years it reaches 50%, and the 22-year average is 25%. This shows that region IV is affected by drought over a large area and at a strong grade, making it a typical arid area. In region III, the proportion of severe and extreme drought is 3% on average, and the proportion of normal and light drought areas is 33% on average. Over the 22 years, the proportion of severe and extreme drought area in region II is 6% on average, and the proportion of normal and light drought area is 46% on average. Both the severe-plus-extreme drought area and the normal-plus-light drought area in region III are smaller than in region II, consistent with the analysis in Figure 7(b). This shows that the drought-affected area in region III is large but the grade is not strong, mainly medium drought, whereas the drought levels in region II are diverse, with certain areas of severe and extreme drought. The proportion of normal and light drought area in region I is 60% on average, and it is humid all year round.

#### 3.3.2. Monthly Characteristics Analysis

The proportions of monthly average TVDI and drought grade areas in Heilongjiang Province from 2000 to 2021 (Figure 8(a)) show that the average monthly TVDI is 48 in July and 53 in August, corresponding to light drought, and 67 in May, 63 in June, and 66 in September, corresponding to medium drought. In May, the area of severe and extreme drought accounted for 25%, and the area of normal and light drought accounted for 26%.
In June, the proportion of severe and extreme drought decreased to 22%, and the proportion of normal and light drought increased to 36%, indicating that the drought area in June decreased and the grade weakened compared with May. In September, the proportion of severe and extreme drought areas decreased to 11%, and the proportion of normal and light drought areas decreased to 14%, indicating that the drought area in September is large and the drought grade is concentrated, mainly at the medium drought level. Crops in Heilongjiang Province are grown as one crop per year. Drought in spring (May) and early summer (June) affects crop growth and development, and severe drought reduces crop yield. Autumn (September) is the maturing season, and a certain degree of drought is conducive to crop maturity and harvest.

Figure 8 Characteristics of monthly drought change in province and regions: (a) characteristics of monthly drought change in province and (b) characteristics of monthly drought change in regions.

The proportions of monthly average TVDI and drought grade areas in the four agricultural regions of the province from 2000 to 2021 (Figure 8(b)) show that the average TVDI values in region I from May to September are 55.50, 46.68, 45.36, 45.98, and 49.78, with no drought in July and August and light drought in May, June, and September; in region II they are 56.88, 55.87, 47.58, 60.48, and 67.78, with light drought in May, June, and July and medium drought in August and September; in region III they are 62.94, 52.3, 53.31, 61.09, and 64.75, with medium drought in May, August, and September and light drought in June and July; and in region IV they are 71.59, 68.52, 56.1, 5.46, and 65.6, with medium drought in May, June, and September and light drought in July and August. The monthly trend of TVDI in region IV from May to September is the same as that of the whole province. The proportion of severe and extreme drought in region IV in May is the largest of all months and regions, accounting for 40%, followed by 34% in June. The results show that the periods with large drought area and serious grade during the crop growing season in Heilongjiang Province are spring (May) and early summer (June) in region IV, which hinders the germination and growth of crops.

To sum up, the study of the temporal and spatial changes in Heilongjiang Province and its four agricultural regions shows that TVDI can monitor the occurrence and development of large-scale drought and judge its degree, but it cannot directly express the frequency of drought.

### 3.4. Analysis of Drought Frequency in Heilongjiang Province

We analyzed the area proportions of the total drought frequency in Heilongjiang Province from 2000 to 2021 (Figure 9(a)) and the regional distribution of different frequencies (Figure 9(b)). The drought frequency distribution is similar to that of TVDI22; surface temperature and effective precipitation determine the spatial distribution of drought grade and drought frequency. Figure 9(b) shows that areas with a drought frequency of less than 40 are mainly distributed in the east of region I and region II, and areas with a drought frequency of more than 81 are mainly distributed in the west of region IV and the middle of region II.
In the 110 months of the crop growing season (May to September) from 2000 to 2021, about 22.28% of the cultivated land in the province had a drought frequency of less than 40, about 13.88% had a drought frequency of 40 to 60, and about 63.84% had a drought frequency of more than 60.

Figure 9 Total frequency of drought in Heilongjiang Province from 2000 to 2021 and proportion of different frequency regions: (a) total frequency of drought and (b) proportion of different frequency regions.

We also analyzed the distribution of drought occurrence frequency (Figure 10(a)) and the proportion of drought area in Heilongjiang Province (Figure 10(b)) accumulated for each month from May to September over the past 22 years. The results show that the average monthly drought frequency in the province is 15.71 in May, 14.49 in June, 7.18 in July, 10.92 in August, and 17.34 in September, so the highest drought occurrence frequencies are in September and May. The area with 22 droughts accounted for 17.9% in May, and the area with 20 droughts accounted for 33.4% in September. The area with one drought accounted for 10.7% in July and 0.2% in September. The area with fewer than 16 droughts in May and September accounted for no more than 5%.

Figure 10 (a) Spatial distribution of drought frequency in Heilongjiang Province from May to September 2000–2021. (b) Proportion of drought frequency region in Heilongjiang Province from May to September 2000–2021.

Analysis of the drought frequency of the four agricultural regions accumulated over the 22 years shows that the average drought frequencies from May to September are 11.3, 9.1, 3.6, 5.5, and 12.5 in region I; 11.6, 10.4, 5.4, 9.4, and 16.2 in region II; 13.8, 10, 5, 9, and 18.4 in region III; and 18.5, 17.5, 8.6, 12.5, and 18.2 in region IV. The highest drought frequencies occur in May and September of region IV and September of region III. Local rainfall in spring (May) and autumn (September) is low, and these seasons are more subject to strong winds, which accelerate soil water evaporation and increase the frequency of drought.

Drought occurred frequently and widely from 2000 to 2021, essentially running through the whole crop growing season, but there were differences in drought area, frequency, and degree. Drought in spring and autumn is serious, frequent, and affects a wide area; although drought in summer is not serious, it is frequent and common in the southwest of region IV.

In different years, both the amount and the distribution of rainfall at the start of the rainy season are somewhat random, and rainfall also shows a certain seasonality at the regional scale. Drought itself therefore has a certain randomness and periodicity, and drought frequency is an appropriate index to describe them. The drought frequency in the whole province decreases from spring to summer and then increases from late summer to autumn.
This paper finds that the drought frequency over the whole crop growing season varies to some extent in regions I, II, and III, while in the southwest of region IV it has remained very high, indicating that precipitation plays a significant role in shaping the drought cycle in Heilongjiang Province: precipitation is mainly concentrated in summer but is scarce in the southwest of region IV.
Region IV shows the level of perennial drought, especially in the west. There are various drought levels in region II. The drought in the east is the lightest and rarely affected by drought, but the difference between ages is obvious (TVDIR > 60). There are a small number of perennial severe drought regions in the middle, and the drought in other areas is medium. Region II is the region with the largest difference in drought change between ages.Figure 6 Results of average drought and range difference of the whole province in 22 years: (a) TVDI22 drought grade distribution map and (b) TVDIR map. (a)(b) ## 3.3. Analysis of Drought Time Characteristics in Heilongjiang Province from 2000 to 2021 ### 3.3.1. Annual Characteristic Analysis The proportion of average TVDI drought grade area in Heilongjiang Province from 2000 to 2021 (Figure7(a)) shows that the medium drought grade region is the largest, accounting for 70% of the total region, the light drought region accounts for 23%, the severe drought region accounts for 4%, the normal region accounts for 3%, and the extreme drought area accounts for 0. The area above medium drought level accounts for 74%, indicating that drought is widespread in Heilongjiang Province, mainly in medium drought level, and drought is serious in some regions.Figure 7 Characteristics of annual drought change in province and regions: (a) Proportion of drought grade region in the whole province. (b) Proportion of drought grade region in different regions. (c) Characteristics of annual drought change in province. (d) Characteristics of annual drought change in regions. (a)(b)(c)(d)The proportion of the average TVDI drought grade area in the four agricultural regions of Heilongjiang Province from 2000 to 2021 (Figure7(b)) shows that the proportion of the sum of normal and light drought regions is 63% in region I, 40% in region II, 22% in region III, 16% in region IV. The proportion of severe drought area in region IV is the largest among the four regions, accounting for 6%. Therefore, the drought-affected levels of the four regions are region I, region II, region III, and region IV from low to high.The annual average TVDI and the proportion of drought grade region in Heilongjiang Province from 2000 to 2021 (Figure7(c)) show that the annual average TVDI from 2000 to 2009 is high, and the sum of severe drought and severe drought areas was 32%, 16%, 32%, 8%, 22%, 11%, 34%, 15%, 14%, and 36%, with an average of 22%, indicating that the drought in Heilongjiang Province from 2000 to 2009 was severe, and there was a major drought every other year. The severest years are 2009, 2006, 2002 and 2000 respectively, which are dry years. After 2010, the average TVDI decreased, and the proportion of severe drought and extreme drought was 18%, 23%, 14%, 0%, 10%, 13%, 16%, 18%, 23%, 0%, 3%, and 7%, with an average of 12%, indicating that the drought after 2010 was weaker than before, especially in 2013 and 2019.The annual average TVDI and the proportion of drought grade region in the four agricultural regions of Heilongjiang Province from 2000 to 2021 (Figure7(d)) show that the annual average TVDI in region IV is consistent with the annual average TVDI (7C) of the whole province. Except for a few years (2003, 2013, 2019, 2020, and 2021), the proportion of severe drought and extreme drought areas are large, especially in the four dry years, it reaches 50%. The average is 25% in 22 years. 
Therefore, it shows that region IV in the province is affected by drought with large area and strong grade, which is a typical arid area. In region III, the proportion of severe drought and extreme drought is 3% on average, and the proportion of normal and light drought areas is 33% on average. In 22 years, the proportion of area with severe drought and extreme drought in region II is 6% on average, and the proportion of area with normal and light drought is 46% on average. The sum of area with severe drought, extreme drought and area with normal and light drought in region III is smaller than that in region II. The results are consistent with the analysis in Figure 7(b). It shows that the drought-affected area in region III is large, but the level is not strong, mainly medium drought. The drought levels in region II are diverse, and there are certain areas with severe drought and extreme drought. The proportion of normal and light drought area in region I is 60% on average, and it is humid all year round. ### 3.3.2. Monthly Characteristics Analysis The proportion of monthly average TVDI and drought grade area in Heilongjiang Province from 2000 to 2021 (Figure8(a)) shows that the average value of annual monthly TVDI is 48 in July and 53 in August, belonging to light drought. It is 67 in May, 63 in June and 66 in September, belonging to medium drought. In May, the area of severe drought and extreme drought accounted for 25%, and the area of normal and light drought accounted for 26%. In June, the proportion of severe drought and extreme drought decreased to 22%, and the proportion of normal and light drought increased to 36%, indicating that the area of drought (aforementioned medium drought) in June decreased and the grade weakened compared with that in May. In September, the proportion of severe drought and extreme drought areas decreased to 11%, and the proportion of normal and light drought areas decreased to 14%, indicating that the drought area in September is large, and the drought grade is concentrated, mainly at the level of medium drought. Crops in Heilongjiang province belong to the one crop per annual. Drought in spring (May) and early summer (June) affects crop growth and development. Severe drought will reduce crop yield. Autumn (September) is the mature season. A certain degree of drought is conducive to crop maturity and harvest.Figure 8 Characteristics of monthly drought change in province and regions: (a) characteristics of monthly drought change in province and (b) characteristics of monthly drought change in regions. (a)(b)The proportion of monthly average TVDI and drought grade area in the four agricultural regions in the province from 2000 to 2021 (Figure8(b)) shows that the average TVDI values in region I from May to September are 55.50, 46.68, 45.36, 45.98, and 49.78, of which, there is no drought in July and August and light drought in May, June, and September; Region II: 56.88, 55.87, 47.58, 60.48, and 67.78, of which May, June, and July are light drought and August and September are medium drought; Region III: 62.94, 52.3, 53.31, 61.09, and 64.75, of which May, August, and September are medium drought and June and July are light drought; Region IV: 71.59, 68.52, 56.1, 5.46, and 65.6, of which May, June, and September are medium drought and July and August are light drought. The change trend of TVDI in region IV from May to September is the same as that in the whole province from May to September. 
The proportion of severe drought and extreme drought in region IV in May is the largest of all months and regions, accounting for 40%, followed by 34% in June. The results show that the period with the largest drought area and the most serious grade during the crop growing season in Heilongjiang Province is spring (May) and early summer (June) in region IV, which hinders the germination and growth of crops.

To sum up, the analysis of the temporal and spatial changes in Heilongjiang Province and its four agricultural regions shows that TVDI can monitor the occurrence and development of large-scale drought and judge its degree, but it cannot directly express the frequency of drought.
## 3.4. Analysis of Drought Frequency in Heilongjiang Province

The area proportion of the total drought frequency in Heilongjiang Province from 2000 to 2021 (Figure 9(a)) and the regional distribution of different frequencies (Figure 9(b)) were analyzed. The drought frequency distribution is similar to that of TVDI22: surface temperature and effective precipitation determine the spatial distribution of drought grade and drought frequency. Figure 9(b) shows that areas with a drought frequency of fewer than 40 occurrences are mainly distributed in the east of region I and in region II, while areas with a drought frequency of more than 81 occurrences are mainly distributed in the west of region IV and the middle of region II. In the 110 months of the crop growing season (May to September) from 2000 to 2021, about 22.28% of the cultivated land in the province had a drought frequency of fewer than 40 occurrences, about 13.88% had a drought frequency of 40 to 60 occurrences, and about 63.84% had a drought frequency of more than 60 occurrences.

Figure 9 Total frequency of drought in Heilongjiang Province from 2000 to 2021 and proportion of different frequency regions: (a) total frequency of drought and (b) proportion of different frequency regions.

The distribution of drought occurrence frequency (Figure 10(a)) and the proportion of drought area in Heilongjiang Province (Figure 10(b)) accumulated from May to September over the past 22 years show that the average monthly drought frequency in the province is 15.71 in May, 14.49 in June, 7.18 in July, 10.92 in August, and 17.34 in September. The highest drought frequencies in the province occur in September and May. The area with 22 droughts accounted for 17.9% in May, and the area with 20 droughts accounted for 33.4% in September. The area with one drought accounted for 10.7% in July and 0.2% in September. The area with fewer than 16 droughts in May and September accounted for no more than 5%.

Figure 10 (a) Spatial distribution of drought frequency in Heilongjiang Province from May to September, 2000–2021. (b) Proportion of drought frequency region in Heilongjiang Province from May to September, 2000–2021.
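The frequency statistics above follow from a simple per-pixel counting of drought months. Below is a minimal NumPy sketch of that bookkeeping; the array shapes, the cropland mask, and the drought threshold (here, TVDI above 60 on the 0–100 scale) are illustrative assumptions rather than the study's exact processing chain.

```python
import numpy as np

def drought_frequency(tvdi, cropland, threshold=60.0):
    """Count, per pixel, how many months the TVDI exceeds the assumed drought threshold."""
    in_drought = tvdi > threshold            # (n_months, n_rows, n_cols) boolean
    freq = in_drought.sum(axis=0)            # total drought occurrences per pixel
    return np.where(cropland, freq, 0)       # restrict the statistic to cultivated land

def frequency_class_share(freq, cropland, bins=(0, 40, 60, np.inf)):
    """Share of cultivated land in each frequency class, e.g. <40, 40-60, >60 occurrences."""
    values = freq[cropland]
    hist, _ = np.histogram(values, bins=bins)
    return hist / values.size

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    tvdi = rng.uniform(0, 100, size=(110, 50, 50))   # 110 growing-season months, toy grid
    cropland = np.ones((50, 50), dtype=bool)         # toy cultivated-land mask
    freq = drought_frequency(tvdi, cropland)
    print(frequency_class_share(freq, cropland))     # proportions for <40, 40-60, >60 months
```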
By analyzing the drought frequency of the four agricultural regions accumulated over the 22 years from May to September, it is found that the average drought frequencies from May to September are 11.3, 9.1, 3.6, 5.5, and 12.5 in region I; 11.6, 10.4, 5.4, 9.4, and 16.2 in region II; 13.8, 10, 5, 9, and 18.4 in region III; and 18.5, 17.5, 8.6, 12.5, and 18.2 in region IV. The highest drought frequencies among the regions occur in May and September in region IV and in September in region III. Local rainfall in spring (May) and autumn (September) is low, and these seasons are more easily affected by strong winds, which accelerate soil water evaporation and aggravate drought.

Drought occurred frequently and widely from 2000 to 2021, basically running through the whole crop growing season, but with differences in drought area, frequency, and degree. Drought in spring and autumn is serious, frequent, and affects a wide area. Although drought in summer is not serious, it is frequent and common in the southwest of region IV.

The onset of the rainy season, the amount of rainfall, and its distribution vary somewhat randomly between years, and rainfall also shows a certain seasonality at the regional scale; drought therefore has both randomness and periodicity, and drought frequency is an appropriate index to describe them. The drought frequency in the whole province decreases from spring to summer and then increases from late summer to autumn. This paper finds that the drought frequency over the whole crop growing season varies to some extent in regions I, II, and III, whereas it has remained very high in the southwest of region IV, indicating that precipitation plays a significant role in the formation of the drought cycle in Heilongjiang Province: precipitation is concentrated in summer but scarce in the southwest of region IV.

## 4. Discussion

Using surface TVDI data covering the whole province, this paper analyzes the temporal and spatial characteristics and frequency of drought at different temporal and spatial scales and reveals the occurrence, development, severity, and variation pattern of drought in Heilongjiang Province over the past 22 years. Li Chongrui [33] used SPEI to analyze the drought pattern for corn at time scales of 1, 3, 6, 12, and 24 months from 1989 to 2018 in Northeast China and found that drought was more serious from 2000 to 2010, that May was the month with the highest incidence and the largest drought area and degree, and that the southwest of Heilongjiang Province was a high-incidence drought area. These results are consistent with those of this paper.

This paper analyzes the temporal and spatial characteristics of drought for four agricultural regions whose planting structures differ. It does not discuss different crop types within the regions, and the data analysis is limited by the time span of the available remote sensing data. In the future, while continuously improving and supplementing the remote sensing data, we will continue to study artificial intelligence and machine learning methods [39, 40] that can extract different crop types on the ground and monitor the response of each crop type to drought. If the damage to early- and late-maturing crops can be analyzed for different drought periods and regions, and the results used to guide the selection of suitable crop varieties for each region, it will be very meaningful.

## 5. Conclusion

From 2000 to 2021, drought on the cultivated land of Heilongjiang Province was frequent and serious. Over the whole crop growing season, the medium drought grade was the most prominent, accounting for 70% of the total cultivated land and distributed throughout the province.
In the 110 months of monitoring, areas with a drought frequency of more than 80 occurrences were mainly distributed in the west of region IV and the middle of region II, and about 63.84% of the cultivated land experienced drought more than 60 times. Over the 22 years of TVDI monitoring, drought was serious from 2000 to 2009 and was alleviated after 2010; 2000, 2002, 2006, and 2009 were dry years, and 2013 and 2019 were wet years. Although the climate temperature in Heilongjiang Province has gradually increased in recent years, short-term drought has been alleviated by the influence of strong rainfall.

Among the four agricultural regions of the province, region IV is a typical arid area, subject to a large drought area and heavy intensity all year round, especially in May and June. In September its drought area is the largest but of medium intensity, and the drought frequency is also highest in May and September. The drought levels in region II are diverse, with strong drought in the middle and weak drought in the east; this diversity leads to large interannual fluctuations, making the region more vulnerable in drought years. Heilongjiang Province follows a one-crop-per-year system, and September is the autumn harvest season, when mild drought is conducive to the maturity and harvest of crops. Rainfall is low in spring, and the region is easily affected by strong winds, which intensifies the occurrence of drought; the strong drought in spring and early summer (May-June) seriously affects the germination and growth of crops.

The total frequency distribution of annual drought is similar to that of TVDI22, and there is a significant correlation between TVDI and SPEI, indicating that precipitation and surface temperature determine the spatial distribution of drought grade and drought frequency. Owing to the uncertainty of rainfall timing, amount, and distribution, drought is accompanied by randomness and periodicity. Therefore, regional drought must be accurately monitored, and its occurrence mechanism needs more in-depth discussion; considering social, human, natural, and other influencing factors will make the analysis more meaningful.

---

*Source: 1003243-2022-04-28.xml*
2022
# BUPNN: Manifold Learning Regularizer-Based Blood Usage Prediction Neural Network for Blood Centers

**Authors:** Lingling Pan; Zelin Zang; Siqi Ma; Wei Hu; Zhechang Hu
**Journal:** Computational Intelligence and Neuroscience (2023)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2023/1003310

---

## Abstract

Blood centers are an essential component of the healthcare system, as timely blood collection, processing, and efficient blood dispatch are critical to the treatment of patients and the performance of the entire healthcare system. At the same time, an efficient blood dispatching system built on the high-precision predictive capability of artificial intelligence is crucial for improving the efficiency of blood centers. However, current artificial intelligence (AI) models for predicting blood usage do not meet the needs of blood centers. Their main challenges include poor generalization ability across hospitals, limited stability under missing values, and low interpretability. An artificial neural network-based model named the blood usage prediction neural network (BUPNN) has been developed to address these challenges. BUPNN includes a novel similarity-based manifold regularizer that aims to enhance the consistency of the network mapping and thus overcome the domain bias between hospitals. Moreover, BUPNN diminishes the performance degradation caused by missing values through data augmentation. Experimental results on a large amount of real data demonstrate that BUPNN outperforms the baseline methods in classification and regression tasks and excels in generalization and consistency. Moreover, BUPNN has solid potential to be interpreted; therefore, its decision-making process is explored so that it can act as an aid to the experts in the blood center.

---

## Body

## 1. Introduction

Blood products are an essential part of the treatment of bleeding, cancer, AIDS, hepatitis, and other diseases [1]. Blood is also an indispensable resource in treating injured patients; whether blood is transfused promptly is critical for their rehabilitation. At the same time, early surgical intervention and rapid blood transfusion are the primary measures to reduce mortality. Unfortunately, these measures require large amounts of blood to support them.

Blood, however, is significantly different from other medical products. Currently, blood cannot be manufactured or synthesized artificially and can only be donated by others. In addition, blood has a short shelf life, making emergency blood more specific and irreplaceable than other medical products. For example, patients in mass casualty events caused by earthquakes suffer from fractures, fractures accompanied by multiple organ injuries, and crush injuries; therefore, the peak period of blood consumption occurs 96 hours after the earthquake [2]. In contrast, in mass casualties caused by bombings or fires, patients mainly suffer from burns, and the peak blood consumption occurs 24 hours after the event. In addition, ongoing blood transfusions may last for several months [3]. Therefore, modeling the prediction of blood use in patients is a meaningful and challenging topic.

Current blood usage prediction models are not well developed. For example, Wang et al. [4] developed an early transfusion scoring system to predict blood requirements in severe trauma patients in the prehospital or initial phase of emergency resuscitation.
However, the abovementioned design is more suitable for triage in a single hospital than for blood centers, since the system does not consider how to avoid performance degradation due to domain deviations between different hospitals. Rebecca et al. [5, 6] summarized the currently available studies exploring how to predict the need for massive transfusion in patients with traumatic injuries, listing the blood consumption scoring system (ABC) and the shock index scoring system (SI). These systems use classical or ML-based methods to predict blood consumption during the treatment of patients. Unfortunately, although the progress is remarkable, accuracy and generalization remain unsatisfactory, and thus blood demand prediction models cannot be widely used. We summarize the problems into the following three points:

(i) Data quality is limited. The partnership between blood centers and hospitals makes it very difficult to establish a rigorous feedback system for patient information. In addition, biases and missing data due to differences in hospital equipment create challenges for a consistent blood usage prediction model.

(ii) Model generalizability is unsatisfactory. Blood usage prediction models are often built for specific hospitals or cities without considering extension to a wider range of applications and thus have poor generalization performance.

(iii) Model interpretability is inadequate. Most models can only output a category or a blood usage forecast but cannot demonstrate the model's decision process. An interpretable model can better work with experts to help blood centers with blood scheduling.

This paper proposes a blood usage prediction neural network, BUPNN, to solve the above problems. First, actual patient clinical data and treatment procedures from 12 hospitals are used as training data. Extensively collected data, with the biases of the various hospitals, provide sufficient information to train a high-performance model. We use multiple data complementation schemes to reflect the real problem and overcome missing values. In addition, BUPNN augments the complemented data online by linear interpolation, which increases the diversity of the training data and thus improves the stability of the model when trained with missing data. Second, to further improve the generalization performance of BUPNN, a similarity-based loss function is introduced to map biased data to a stable semantic space by aligning samples from different hospitals in the latent space. Third, we analyze the model with a deep learning interpretation method to enhance interpretability. The proposed analysis accompanies the prediction output of the model in real time to assist in understanding the model's decision-making process. The interpretability study of BUPNN provides the conditions for computers and experts to help each other in blood consumption prediction.

The main contributions of this study are as follows:

(i) Representative samples. A large amount of data from twelve hospitals is collected for this study to investigate the implied relationship between patients' clinical indicators and blood consumption.

(ii) Generalizable model. This study designs missing data complementation (MDC) for online missing data completion and, on top of it, data augmentation to enhance the model's generalization ability. In addition, a similarity-based loss function is designed to improve the model's predictive power across domains.

(iii) Excellent performance.
Experiments on six different settings demonstrate that our method outperforms all baseline methods.

The rest of this paper is organized as follows. In Section 2, we provide a literature review of demand forecasting methods for blood products. Section 3 provides an initial exploration of the data. The data description, model background, model development, and evaluation of four different models for blood demand forecasting are provided in the subsequent sections. In Section 4, a comparison of the models is provided, and finally, in Section 5, concluding remarks are given, including a discussion of ongoing work on this problem.

## 2. Related Work

### 2.1. ML Techniques for Medical Problems

The integration of the medical field with ML technology has received much attention in recent years. Two areas that may benefit from the application of ML technology in the medical field are diagnosis and outcome prediction. These include the possibility of identifying high-risk medical emergencies, such as recurrence or transition to another disease state. Recently, ML algorithms have been successfully used to classify thyroid cancer [7] and predict the progression of COVID-19 [8, 9]. On the other hand, ML-based visualization and dimensionality reduction techniques have the potential to help professionals analyze biological or medical data, guiding them to a better understanding of the data [10, 11]. Furthermore, ML-based feature selection techniques [12, 13] have strong interpretability and the potential to find highly relevant biomarkers in a wide range of medical data, leading to new biological or medical discoveries.

### 2.2. Blood Demand Forecasting

There is limited literature on blood demand forecasting; most studies investigate univariate time series methods, in which forecasts are based solely on previous demand values without considering other factors affecting demand. Frankfurter et al. [14] developed transfusion forecasting models using exponential smoothing (ES) methods for a blood collection and distribution center in New York. Critchfield et al. [15] developed models for forecasting blood usage in a blood center using several time series methods, including moving average (MA), Winters' approach, and ES. Filho et al. [16] developed a Box-Jenkins seasonal autoregressive integrated moving average (BJ-SARIMA) model to forecast weekly demand for hospital blood components. Their proposed method, SARIMA, is based on a Box-Jenkins approach that considers seasonal and nonseasonal characteristics of time series data. Later, Filho et al. [17] extended their model by developing an automatic procedure for demand forecasting and changing the model level from the hospital level to the regional blood center to help managers use the model directly. Kumari and Wijayanayake [18] proposed a blood inventory management model for daily blood supply, focusing on reducing blood shortages. Three time series methods, namely, MA, weighted moving average (WMA), and ES, are used to forecast blood usage and are evaluated against needs. Fortsch and Khapalova [19] tested various blood demand prediction approaches, such as naive, moving average, exponential smoothing, and multiplicative time series decomposition (TSD). The results show that the Box-Jenkins (ARMA) approach, which uses an autoregressive moving average model, yields the highest prediction accuracy.
Lestari et al. [20] applied four models to predict blood component demand, including moving average, weighted moving average, exponential smoothing, and exponential smoothing with trend, and selected the best method for their data based on the minimum error between forecasts and actual values. Volken et al. [21] used generalized additive regression and time-series models with exponential smoothing to predict future whole blood donation and RBC transfusion trends.

Several recent studies consider clinically related indicators. For example, Drackley et al. [22] estimated long-term blood demand for Ontario, Canada, based on the age- and sex-specific patterns of previous transfusions. They forecast blood supply and demand for Ontario by considering demand and supply patterns and demographic forecasts, assuming fixed patterns and rates over time. Khaldi et al. [23] applied artificial neural networks (ANNs) to forecast the monthly demand for three blood components, red blood cells (RBCs), blood, and plasma, in a case study in Morocco. Guan et al. [24] proposed an optimisation ordering strategy in which they forecast the blood demand for several days into the future and build an optimal ordering policy based on the predicted demand, concentrating on minimising wastage. Their primary focus is on an optimal ordering policy; they integrate their demand model into the inventory management problem, meaning they do not precisely try to forecast blood demand. Li et al. [25] developed a hybrid model consisting of seasonal and trend decomposition using Loess (STL) time series and eXtreme Gradient Boosting (XGBoost) for RBC demand forecasting and incorporated it into an inventory management problem. Recently, Motamedi et al. [26] presented an efficient forecasting model for platelet demand at Canadian Blood Services (CBS). In addition, C. Twumasi and J. Twumasi [27] compared k-nearest neighbour regression (KNN), multilayer perceptron (MLP), and support vector machine (SVM) via a rolling-origin strategy for forecasting and backcasting blood demand data with missing values and outliers from a government hospital in Ghana. Abolghasemi et al. [28] treat the blood supply problem as an optimisation problem [29] and find that LightGBM provides promising solutions and outperforms other machine learning models.
## 3. Blood Centers and Datasets

This section describes how a blood center works, using Zhejiang Province, China, as an example. It covers the responsibilities of the blood center and the partnership between the blood center and the hospitals, and it shows in detail how the proposed model can assist the blood center in accomplishing its mission better.

### 3.1. Blood Center

As shown in Figure 1, this study considers a provincial centralized blood supply system comprising blood centers, blood stations, and hospitals. The centralized blood supply system as a whole completes the collection, management, and delivery of blood products. The blood center is responsible for collecting blood products, testing for viruses and bacteria, and supplying some hospitals. At the same time, blood centers assume the management, coordination, and operational guidance of blood collection and supply institutions. Blood stations are responsible for collecting, storing, and transporting blood to local hospitals. While receiving blood, hospitals must collect clinical information on current patients for analysis and decision-making at the blood center to make the blood supply more efficient. Our proposed AI blood consumption prediction system (BUPNN) receives clinical information from each hospital and uses it to predict the future blood consumption of each patient. The proposed system helps blood center specialists perform blood collection and transportation better.

Figure 1 BS blood supply chain with one regional blood center and multiple hospitals.
### 3.2. Data Details and Challenges

We collected data in their actual state to build a practically usable model. The data in this study are constructed by processing BS shipping data and the TRUST (Transfusion Research for Utilization, Surveillance, and Tracking) database from the Zhejiang Province Blood Center and 12 hospitals in Zhejiang Province. The data cover 2025 patients, including 1970 emergency trauma patients from 10 hospitals in Zhejiang Province between 2018 and 2020 and another 55 patients from the two hospitals that treated casualties of the 2020 Wenling explosion.

Each dataset mainly includes the following parts. (1) General patient information, including case number, consultation time, pretest classification, injury time, gender, age, weight, diagnosis, penetrating injury, heart rate, diastolic blood pressure, systolic blood pressure, body temperature, shock index, and Glasgow coma index. (2) Injury status, including pleural effusion, abdominal effusion, extremity injury status, thoracic and abdominal injury status, and pelvic injury status. (3) Laboratory tests, including hemoglobin, hematocrit, albumin, hemoglobin value 24 hours after transfusion therapy, base excess, pH, base deficit, oxygen saturation, PT (prothrombin time), and APTT (activated partial thromboplastin time). (4) Burn situation, including burn area and burn depth. (5) Patient outcome, including whether blood was used (i.e., whether suspended red blood cells were transfused), the first day of in-hospital transfusion, and the amount of blood transfused.

For a more straightforward presentation of the data distribution, preliminary dataset statistics are shown in Table 1, and box plots of age, blood consumption, and missing rate by hospital are shown in Figure 2.

Table 1 Statistics of datasets.

| Hospital name | Abbreviation | Sample size | Average blood usage | Average missing rate | Male/female ratio | Average age |
|---|---|---|---|---|---|---|
| Dongyang Hospital | DYang | 95 | 8.7 | 0.03 | 0.34 | 49.19 |
| Enze Hospital | EZe | 13 | 0.77 | 0.18 | 0.86 | 50.38 |
| Haining Hospital | HNing | 57 | 15.96 | 0.17 | 0.68 | 55.18 |
| Shiyi Hospital | SYi | 72 | 4.65 | 0.08 | 0.24 | 56.40 |
| Shaoyifu Hospital | SYiFu | 191 | 1.46 | 0.05 | 0.41 | 52.50 |
| Shangyu Hospital | SYu | 135 | 7.92 | 0.07 | 0.42 | 51.08 |
| Wenling Hospital | WLing | 42 | 11.62 | 0.09 | 0.45 | 58.98 |
| Xinchang Hospital | XChang | 55 | 11.09 | 0.03 | 0.49 | 57.89 |
| Xiaoshan Hospital | XShan | 62 | 0 | 0.08 | 0.44 | 50.82 |
| Yongkang Hospital | YKang | 194 | 2.36 | 0.06 | 0.62 | 64.43 |
| Yuyao Hospital | YYao | 65 | 9.85 | 0.03 | 0.35 | 54.68 |
| Zheer Hospital | ZEr | 1044 | 1.44 | 0.03 | 0.36 | 53.93 |

Figure 2 Boxplots for the relationship between hospital and age/sum blood/missing value.

After a detailed definition of the problem and a description of the dataset, we summarize the main problems faced in this paper:

(i) Data hold missing values. As shown in Table 1 and Figure 2(c), some hospitals have more than 10% missing values. The missing values perturb the data distribution and severely affect the model's training and performance.

(ii) Data have domain bias. From Table 1 and Figure 2(c), the missing value rate and the blood consumption differ markedly between hospitals. Such a data distribution impedes the cross-hospital generalization of the model. In addition, the collection of clinical information from different hospitals may also be biased due to differences in testing devices and habits.
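Summaries such as Table 1 can be derived directly from the patient-level records. The snippet below is a minimal pandas sketch under assumed column names (hospital, blood_units, sex, age); it illustrates how sample size, average blood usage, and average missing rate per hospital could be computed and is not the exact TRUST preprocessing pipeline.

```python
import pandas as pd

def hospital_summary(df: pd.DataFrame) -> pd.DataFrame:
    """Per-hospital statistics in the style of Table 1; column names are assumptions."""
    feature_cols = df.columns.drop(["hospital", "blood_units", "sex", "age"])
    grouped = df.groupby("hospital")
    summary = pd.DataFrame({
        "sample_size": grouped.size(),
        "avg_blood_usage": grouped["blood_units"].mean(),
        # mean missing fraction over the clinical feature columns
        "avg_missing_rate": grouped[list(feature_cols)].apply(
            lambda g: g.isna().mean().mean()),
        "male_female_ratio": grouped["sex"].apply(
            lambda s: (s == "M").sum() / max((s == "F").sum(), 1)),
        "avg_age": grouped["age"].mean(),
    })
    return summary.round(2)
```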
## 4. Methodology

### 4.1. Problem Definition

The following definition is made in this paper to discuss the role of predictors.

Definition 1 (Blood Consumption Prediction Problem, BCPP). Let $(X_s, y_s)$ be a training dataset in which the clinical information $X_s$ is implicitly linked to the blood usage $y_s$. The BCPP trains a model $F(X; \theta)$ on $(X_s, y_s)$ and uses it to predict the blood usage of newly collected clinical information $X_t$, where $\theta$ is the model parameter. The BCPP includes a classification subproblem and a regression subproblem: for the classification subproblem, $y_s$ and $y_t$ are one-hot or categorical values; for the regression subproblem, $y_s$ and $y_t$ are continuous values.
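As a concrete reading of Definition 1, the two subproblems share the same clinical feature matrix and differ only in the target. A toy sketch with assumed values:

```python
import numpy as np

# Assumed toy data: 4 patients, 3 clinical features (NaN marks a missing value).
X_s = np.array([[120.0, 36.5, np.nan],
                [ 85.0, 38.1, 0.74],
                [110.0, np.nan, 0.61],
                [ 95.0, 37.0, 0.80]])

y_class = np.array([0, 1, 1, 0])           # classification subproblem: transfused or not
y_reg = np.array([0.0, 6.0, 2.5, 0.0])     # regression subproblem: amount of blood used

# A model F(X; theta) is fitted on (X_s, y_s) and then applied to new clinical data X_t.
```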
### 4.2. Blood Data Complementation

Data complementation is the process of replacing missing data with substituted values. When substituting for a whole data point, it is known as "unit complementation"; when substituting for a component of a data point, it is known as "item complementation." Missing values introduce substantial noise, making data analysis more complex and less efficient. When one or more patient values are missing, most methods discard the data with missing values by default, whereas data complementation attempts to fill in those values. Missing data are a reality, and the model should not require that all data be captured well. Therefore, data complementation is introduced to avoid performance degradation under missing values and to improve performance on the actual testing data.

In this study, a single data point $x_i \in X_S$ is complemented by

$$x_i^C = C(x_i) = \big(C(x_{i,1}), \ldots, C(x_{i,f}), \ldots, C(x_{i,n})\big), \qquad C(x_{i,f}) = \begin{cases} x_{i,f}, & x_{i,f} \text{ is not missing}, \\ I_{i,f}, & x_{i,f} \text{ is missing}, \end{cases} \tag{1}$$

where $x_{i,1}, \ldots, x_{i,n}$ are the $n$ components of the single data point $x_i$. If any component is missing, the imputed value $I_{i,f}$ is used to fill it. $I_{i,f}$ comes from mean complementation, median complementation, or KNN complementation:

$$I_{i,f}^{\mathrm{Mean}} = \mathrm{Mean}(X_f), \qquad I_{i,f}^{\mathrm{Median}} = \mathrm{Median}(X_f), \qquad I_{i,f}^{\mathrm{KNN}} = \frac{1}{K} \sum_{j \in N_i^{K\text{-NN}}} x_{j,f}, \tag{2}$$

where $\mathrm{Mean}(X_f)$ and $\mathrm{Median}(X_f)$ are the mean and median of the training dataset on component $f$, and $N_i^{K\text{-NN}}$ is the KNN neighborhood of data point $i$ in the sense of Euclidean distance, with $K = 5$ in this paper.

### 4.3. Cross-Hospital Data Augmentation

Data augmentation is a well-known neural network (NN) training strategy in image classification and signal processing [30]. Data augmentation improves the performance of a method by fitting the data distribution more precisely. First, data augmentation enhances the diversity of the data, thereby counteracting overfitting. Second, data augmentation essentially reinforces the fundamental assumption of DR, i.e., the local connectivity of neighborhoods. Finally, it learns a refined data distribution by generating more intra-manifold data based on the sampled points.

Cross-hospital data augmentation is introduced in a unified framework to generate new data $x' = T(x)$:

$$x^a = T(x^C) = \big(\tau(x_{i,1}^C), \ldots, \tau(x_{i,f}^C), \ldots, \tau(x_{i,n}^C)\big), \qquad \tau(x_{i,f}^a) = (1 - r_u)\, x_{i,f}^C + r_u\, \bar{x}_{i,f}^C, \qquad \tilde{x} \sim N_i^h,\ h \in H \setminus \{h_i\}, \tag{3}$$

where the new augmented data point $x^a$ is the combination of the features $\tau(x_{i,f}^C)$, each calculated by linear interpolation between the original feature $x_{i,f}^C$ and the augmented feature $\bar{x}_{i,f}^C$. The augmented feature $\tilde{x}$ is sampled from the neighborhood $N_i^h$, the $k$-NN neighborhood of data point $i$ within the data of hospital $h$; $H$ is the set of hospitals, and $H \setminus \{h_i\}$ excludes data point $i$'s own hospital. The combination parameter $r_u \sim U(0, p_U)$ is sampled from the uniform distribution $U(0, p_U)$, and $p_U$ is a hyperparameter.

Cross-hospital augmentation generates new data by combining data point $i$ with its neighborhoods in different hospitals. It reduces the detrimental influence of missing data and increases the diversity of the training data. Thus, the model learns a more precise distribution, which enhances the performance of our method. In addition, it works together with the loss function to align data from different hospitals, thus overcoming domain bias.
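To make equations (1)–(3) concrete, the following minimal NumPy sketch performs item complementation and then cross-hospital interpolation. The hospital id vector, the choice of K = 5 and p_U, and the handling of missing entries when computing distances are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def complement(X, strategy="mean", K=5):
    """Eqs. (1)-(2): fill NaN entries per feature with the mean, median, or a KNN average."""
    X = X.copy()
    for f in range(X.shape[1]):
        missing = np.isnan(X[:, f])
        if not missing.any():
            continue
        if strategy == "mean":
            X[missing, f] = np.nanmean(X[:, f])
        elif strategy == "median":
            X[missing, f] = np.nanmedian(X[:, f])
        else:  # "knn": average the feature over the K nearest rows (NaNs zero-filled for distances)
            ref = np.nan_to_num(X)
            for i in np.where(missing)[0]:
                d = np.linalg.norm(ref - ref[i], axis=1)
                nn = np.argsort(d)[1:K + 1]              # skip the row itself
                X[i, f] = np.nanmean(X[nn, f])
    return X

def cross_hospital_augment(X_c, hospital_ids, p_u=0.3, k=5, rng=None):
    """Eq. (3): interpolate each sample toward a random k-NN neighbor from a different hospital."""
    rng = np.random.default_rng() if rng is None else rng
    X_a = X_c.copy()
    for i in range(X_c.shape[0]):
        other = np.where(hospital_ids != hospital_ids[i])[0]   # candidates from other hospitals
        d = np.linalg.norm(X_c[other] - X_c[i], axis=1)
        pool = other[np.argsort(d)[:k]]                        # k nearest cross-hospital neighbors
        j = pool[rng.integers(len(pool))]
        r_u = rng.uniform(0.0, p_u)                            # combination weight r_u ~ U(0, p_U)
        X_a[i] = (1.0 - r_u) * X_c[i] + r_u * X_c[j]
    return X_a
```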
### 4.4. Architecture of BUPNN

The proposed BUPNN does not require a unique backbone neural network; a multilayer perceptron (MLP) is used as the backbone. In addition, a new network architecture is proposed to enhance generalizability. The proposed neural network architecture is shown in Figure 3.

Figure 3 Architecture of BUPNN.

A typical neural network model uses the network output directly to compute a supervised loss function, which may introduce undesirable phenomena such as overfitting. In this paper, similar to [31], a manifold learning regularizer is proposed to suppress problems such as overfitting by using the information in the latent space of the network. As shown in Figure 3, a complete neural network $F(\cdot; w_i, w_j)$ is divided into a latent network $f_i(\cdot; w_i)$ and an output network $f_j(\cdot; w_j)$. The latent network is a preprocessing network that resists the adverse effects of noise and missing value completion. The output network is a dimensionality reduction network that maps the data from the high-dimensional latent space to a low-dimensional latent space. The latent network $f_i(\cdot; w_i)$ maps the input data $x'$ into a latent space, and the output network $f_j(\cdot; w_j)$ further maps it into the output space:

$$y_i = f_i(x_i'; w_i), \qquad y_j = f_i(x_j'; w_i), \qquad z_i = f_j(y_i; w_j), \qquad z_j = f_j(y_j; w_j), \tag{4}$$

where $x_i'$ and $x_j'$ are the complementation and augmentation results of the original data $x$.

Neural networks have powerful fitting capacity, but at the same time there is a risk of overfitting. The typical L2 regularization method can reduce overfitting, but it only limits the complexity of the network without constraining it with respect to the manifold structure of the data; for example, it cannot guarantee the distance preservation and consistency of the network mapping. For this reason, we design a regularizer based on manifold learning to solve this problem and thus improve the generalization performance of the model and its usability on actual data.

### 4.5. Loss Function of BUPNN

The loss function of BUPNN consists of two parts: a cross-entropy loss, which uses the label information, and a manifold regularizer loss, which uses the hospital information and the latent space information.

#### 4.5.1. Manifold Regularizer Loss

The manifold regularizer loss handles the domain bias between hospitals during the training phase and provides a manifold constraint to prevent overfitting. Inconsistent medical equipment and subjective physician diagnoses in different hospitals cause domain bias in the data between hospitals. The manifold regularizer loss guides the mapping of the neural network to be insensitive to hospitals, thus overcoming domain bias (Figure 4). Therefore, the pairwise similarity between nodes is defined first. Considering the dimensional inconsistency of the latent spaces, we use the t-distribution with variable degrees of freedom as a kernel function to measure the point-pair similarity of the data:

$$\kappa(d, \nu) = \frac{\Gamma\!\left(\frac{\nu + 1}{2}\right)}{\sqrt{\nu \pi}\, \Gamma\!\left(\frac{\nu}{2}\right)} \left(1 + \frac{d^2}{\nu}\right)^{-\frac{\nu + 1}{2}}, \tag{5}$$

where $\Gamma(\cdot)$ is the Gamma function, the degree of freedom $\nu$ controls the shape of the kernel function, and $d$ is the Euclidean pairwise distance of a node pair.

Figure 4 How the manifold regularizer loss works. Data from the same hospitals are clustered near each other in the latent space due to the significant domain bias in the current data. The manifold regularizer loss guides the neural network model to reduce the domain bias by pulling in neighboring nodes across hospitals to mix data from different hospitals.

Based on the pairwise similarity defined in a single space, we minimize the similarity difference between the two spaces through the fuzzy set cross-entropy loss (two-way divergence) [32] $D(p, q)$:

$$D(p, q) = p \log q + (1 - p) \log(1 - q), \tag{6}$$

where $p \in [0, 1]$. Notice that $D(p, q)$ is a continuous version of the cross-entropy loss.
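Equations (5) and (6) translate almost directly into code. The following PyTorch sketch implements the variable-degree-of-freedom t-kernel and the two-way divergence; it is one straightforward reading of the formulas rather than the authors' released implementation.

```python
import torch
from torch.special import gammaln

def t_kernel(d, nu):
    """Eq. (5): t-distribution kernel with nu degrees of freedom, applied to distances d."""
    nu = torch.as_tensor(float(nu))
    log_norm = gammaln((nu + 1) / 2) - gammaln(nu / 2) - 0.5 * torch.log(nu * torch.pi)
    return torch.exp(log_norm) * (1 + d.pow(2) / nu).pow(-(nu + 1) / 2)

def two_way_divergence(p, q, eps=1e-8):
    """Eq. (6): D(p, q) = p log q + (1 - p) log(1 - q), evaluated elementwise."""
    q = q.clamp(eps, 1 - eps)
    return p * torch.log(q) + (1 - p) * torch.log(1 - q)

# toy check: closer pairs (smaller d) receive larger similarity values
d = torch.tensor([0.1, 1.0, 5.0])
print(t_kernel(d, nu=1.0))   # monotonically decreasing in d
```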
In BUPNN, equation (6) is used to guide the pairwise similarities of the two latent spaces to fit each other. Therefore, the loss function of the manifold regularizer is defined as follows:

$$L_D = \sum_{i,j}^{B} L(x_i, x_j), \qquad L(x_i, x_j) = \begin{cases} D\!\left(1,\ \kappa(d_{ij}^z, \nu_z)\right), & \text{if } x_j = \tau(x_i), \\ D\!\left(\kappa(d_{ij}^y, \nu_y),\ \kappa(d_{ij}^z, \nu_z)\right), & \text{otherwise}, \end{cases} \tag{7}$$

where $B$ is the batch size and $x_j = \tau(x_i)$ indicates that $x_j$ is the augmented version of $x_i$. If $x_j$ is the augmented data of $x_i$, the loss pulls them together in the latent space; otherwise, the loss keeps a gap between them to preserve the manifold structure. The different degrees of freedom $\nu_y$ and $\nu_z$ of the t-distribution are set according to the dimensions of the corresponding spaces. Equation (7) describes a manifold alignment that pairs data collected by different hospitals together in the latent space to avoid the detrimental effects caused by domain bias. The distances are

$$d_{ij}^y = d(y_i, y_j), \qquad d_{ij}^z = d(z_i, z_j), \tag{8}$$

where $d_{ij}^y$ and $d_{ij}^z$ are the distances between data nodes $i$ and $j$ in the spaces $\mathbb{R}^{d_y}$ and $\mathbb{R}^{d_z}$.

#### 4.5.2. Cross-Entropy Loss

The manifold regularizer loss is essentially an unsupervised term, so the label information must also be used while training the network model. The cross-entropy loss function is therefore introduced simultaneously:

$$L_{CE} = -\sum_{i=1}^{N} \Big[ l_i \ln \sigma(z_i) + (1 - l_i) \ln\!\big(1 - \sigma(z_i)\big) \Big], \tag{9}$$

where $l_i$ is the label of data node $i$, $\sigma(z_i)$ is the output of the network model, and $\ln(\cdot)$ is the natural logarithm. When solving the classification subtask, $l_i$ is the category label; when solving the regression subtask, $l_i$ is the probability label.

The loss function of BUPNN is

$$L = L_D + \beta L_{CE}, \tag{10}$$

where $\beta$ is a hyperparameter that balances $L_D$ and $L_{CE}$.

### 4.6. Pseudocode and Complexity

BUPNN's pseudocode is shown in Algorithm 1. BUPNN includes an initialization phase and a training phase. In the initialization phase, the kNN neighborhood of every data point is discovered; the time complexity of this phase is $O(n^{1.14})$ [33], where $n$ is the number of data points. The complexity of the training phase is the same as that of an artificial neural network (ANN). BUPNN calculates the pairwise distances within a batch, so the complexity of training each batch is $O(B^2)$, where $B$ is the batch size. The pairwise distances are well accelerated by the GPU, so the training time consumption is the same as that of a typical ANN.

Algorithm 1: BUPNN algorithm.
Input: data $X = \{x_i\}_{i=1}^{|X|}$, learning rate $\eta$, epochs $E$, batch size $B$, $\beta$, $\nu_z$, networks $f_\theta$, $g_\phi$.
Output: graph embedding $\{e_i\}_{i=1}^{|X|}$.
(1) while $i = 0$; $i < E$; $i{+}{+}$ do
(2)  $x^I = I(x)$ ⊳ blood data complementation in equation (1)
(3)  while $b = 0$; $b < |X|/B$; $b{+}{+}$ do
(4)   $x^{a1}, x^{a2} = T(x^I), T(x^I)$ ⊳ blood data augmentation in equation (3)
(5)   $y^{a1}, y^{a2} \leftarrow f_\theta(x^{a1}), f_\theta(x^{a2})$ ⊳ map the input data into the $\mathbb{R}^{d_y}$ space in equation (4)
(6)   $z^{a1}, z^{a2} \leftarrow g_\phi(y^{a1}), g_\phi(y^{a2})$ ⊳ map into the $\mathbb{R}^{d_z}$ space in equation (4)
(7)   $d_{ij}^y \leftarrow d(y^{a1}, y^{a2})$; $d_{ij}^z \leftarrow d(z^{a1}, z^{a2})$ ⊳ calculate distances in $\mathbb{R}^{d_y}$ and $\mathbb{R}^{d_z}$
(8)   $S^y \leftarrow \kappa(d_{ij}^y, \nu_y)$; $S^z \leftarrow \kappa(d_{ij}^z, \nu_z)$ ⊳ calculate similarities in equation (5)
(9)   $L_D \leftarrow D(S^y, S^z)$ ⊳ calculate the loss function in equation (10)
(10)  $\theta \leftarrow \theta - \eta\, \partial L_D / \partial \theta$, $\phi \leftarrow \phi - \eta\, \partial L_D / \partial \phi$ ⊳ update the parameters
(11)  end while
(12) end while
(13) $z_i \leftarrow g_\phi(f_\theta(x_i))$ ⊳ calculate the embedding result
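Read together, equations (7)–(10) and Algorithm 1 amount to a standard mini-batch update with two networks and two loss terms. The PyTorch sketch below reuses the t_kernel and two_way_divergence helpers from the previous snippet; the network sizes, degrees of freedom, and the sign convention (minimizing the negated two-way divergence, i.e., the fuzzy cross-entropy) are assumptions made for illustration, not the authors' released code.

```python
import torch
import torch.nn as nn

# t_kernel and two_way_divergence are the helpers sketched after equation (6).

class BUPNNSketch(nn.Module):
    """Latent network f: R^in -> R^dy, output network g: R^dy -> R^dz, plus a prediction head."""
    def __init__(self, in_dim, dy=64, dz=8):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, dy))
        self.g = nn.Sequential(nn.ReLU(), nn.Linear(dy, dz))
        self.head = nn.Linear(dz, 1)                      # produces sigma(z) in eq. (9)

    def forward(self, x):
        y = self.f(x)
        z = self.g(y)
        return y, z, torch.sigmoid(self.head(z)).squeeze(-1)

def manifold_regularizer(y, z, aug_mask, nu_y=100.0, nu_z=1.0):
    """Eq. (7): pull augmented pairs together in z-space; otherwise match y- and z-similarities."""
    s_y = t_kernel(torch.cdist(y, y), nu_y)               # eqs. (5) and (8) in R^dy
    s_z = t_kernel(torch.cdist(z, z), nu_z)               # eqs. (5) and (8) in R^dz
    target = torch.where(aug_mask, torch.ones_like(s_y), s_y)
    return -two_way_divergence(target, s_z).mean()        # negated divergence as the minimized loss

def training_step(model, optimizer, x1, x2, labels, beta=1.0):
    """One mini-batch update combining L_D and L_CE as in eq. (10) / Algorithm 1.

    x1, x2 are two augmented views of the batch; labels is a float tensor of 0/1 targets.
    """
    B = x1.shape[0]
    y1, z1, p1 = model(x1)
    y2, z2, p2 = model(x2)
    y, z = torch.cat([y1, y2]), torch.cat([z1, z2])
    idx = torch.arange(B)
    aug_mask = torch.zeros(2 * B, 2 * B, dtype=torch.bool)
    aug_mask[idx, idx + B] = True                          # mark (x_i, tau(x_i)) pairs
    aug_mask[idx + B, idx] = True
    loss = manifold_regularizer(y, z, aug_mask)
    loss = loss + beta * nn.functional.binary_cross_entropy(torch.cat([p1, p2]), labels.repeat(2))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```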
## 4.4. Architecture of BUPNN

The proposed BUPNN does not require a particular backbone neural network; a multilayer perceptron (MLP) is used as the backbone. In addition, a new network architecture is proposed to enhance generalizability. The proposed neural network architecture is shown in Figure 3.

Figure 3: Architecture of BUPNN.

A typical neural network model uses the network output directly to compute a supervised loss function, which may introduce undesirable phenomena such as overfitting. In this paper, similar to [31], a manifold learning regularizer is proposed to suppress such problems by using the information in the latent space of the network. As shown in Figure 3, a complete neural network $F(\cdot, w_i, w_j)$ is divided into a latent network $f_i(\cdot, w_i)$ and an output network $f_j(\cdot, w_j)$. The latent network is a preprocessing network that resists the adverse effects of noise and missing-value completion. The output network is a dimensionality-reduction network that maps the data from the high-dimensional latent space to a low-dimensional latent space. The latent network $f_i(\cdot, w_i)$ maps the data $x'$ into the network latent space, and the output network $f_j(\cdot, w_j)$ further maps it into the output space:
$$y_i = f_i(x_i', w_i), \quad y_j = f_i(x_j', w_i), \quad z_i = f_j(y_i, w_j), \quad z_j = f_j(y_j, w_j), \tag{4}$$
where $x_i'$ and $x_j'$ are the complementation and augmentation results of the original data $x$.

Neural networks have powerful fitting ability, but at the same time there is a risk of overfitting. The typical L2 regularization method can reduce overfitting, but it only limits the complexity of the network without constraining it with respect to the manifold structure of the data; for example, it cannot guarantee the distance preservation and consistency of the network mapping. For this reason, we design a manifold regularizer based on manifold learning to address this problem and thus improve the generalization performance of the model and its usability for real data.
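The two-stage mapping in equation (4) can be sketched in PyTorch as below. The layer widths follow the setup reported later in Section 5.3 ((−1, 500, 300, 80) for the latent network and (80, 500, 2) for the output network); the activation function, the number of input features, and all other details are assumptions for the example rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class LatentNet(nn.Module):
    """f_theta: maps complemented/augmented input x' into the latent space R^{d_y}."""
    def __init__(self, n_features, d_y=80):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 500), nn.LeakyReLU(),
            nn.Linear(500, 300), nn.LeakyReLU(),
            nn.Linear(300, d_y),
        )
    def forward(self, x):
        return self.net(x)

class OutputNet(nn.Module):
    """g_phi: maps the latent representation y into the low-dimensional output space R^{d_z}."""
    def __init__(self, d_y=80, d_z=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_y, 500), nn.LeakyReLU(),
            nn.Linear(500, d_z),
        )
    def forward(self, y):
        return self.net(y)

class BUPNNLike(nn.Module):
    """Composition F = g_phi \u2218 f_theta; both the intermediate y and the output z
    are returned so that the manifold regularizer can compare the two latent spaces."""
    def __init__(self, n_features, d_y=80, d_z=2):
        super().__init__()
        self.f = LatentNet(n_features, d_y)
        self.g = OutputNet(d_y, d_z)
    def forward(self, x):
        y = self.f(x)
        z = self.g(y)
        return y, z

model = BUPNNLike(n_features=30)      # 30 input features is an arbitrary example
y, z = model(torch.randn(8, 30))      # y: (8, 80), z: (8, 2)
```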
## 4.5. Loss Function of BUPNN

The loss function of BUPNN consists of two parts: the cross-entropy loss, which uses the label information, and the manifold regularizer loss, which uses the hospital information and the latent-space information.

### 4.5.1. Manifold Regularizer Loss

The manifold regularizer loss handles the domain bias between different hospitals during the training phase and provides a manifold constraint to prevent overfitting. Inconsistent medical equipment and subjective physician diagnoses in different hospitals cause domain bias between hospitals' data. The manifold regularizer loss guides the mapping of the neural network to be insensitive to hospitals, thus overcoming domain bias (shown in Figure 4). Therefore, the pairwise similarity between nodes is defined first. Considering the dimensional inconsistency of the latent spaces, we use the t-distribution with a variable degree of freedom as the kernel function to measure the point-pair similarity of the data:
$$\kappa(d, \nu) = \frac{\Gamma\!\left(\frac{\nu+1}{2}\right)}{\sqrt{\nu\pi}\,\Gamma\!\left(\frac{\nu}{2}\right)}\left(1 + \frac{d^2}{\nu}\right)^{-\frac{\nu+1}{2}}, \tag{5}$$
where $\Gamma(\cdot)$ is the Gamma function, the degree of freedom $\nu$ controls the shape of the kernel function, and $d$ is the Euclidean pairwise distance of a node pair.

Figure 4: How the manifold regularizer loss works. Data from the same hospital cluster near each other in latent space because of the significant domain bias in the current data. The manifold regularizer loss guides the neural network model to reduce the domain bias by pulling neighboring nodes across hospitals together, mixing data from different hospitals.

Based on the pairwise similarity defined in a single space, we minimize the similarity difference between two spaces with the fuzzy set cross-entropy loss (two-way divergence) [32] $D(p, q)$:
$$D(p, q) = p\log q + (1-p)\log(1-q), \qquad p \in [0, 1]. \tag{6}$$
Notice that $D(p, q)$ is a continuous version of the cross-entropy loss. In BUPNN, equation (6) is used to guide the pairwise similarities of the two latent spaces to fit each other. Therefore, the loss function of the manifold regularizer is defined as follows:
$$\mathcal{L}_D = \sum_{i,j}^{B} L(x_i, x_j), \qquad L(x_i, x_j) = \begin{cases} D\big(1, \kappa(d_{ij}^z, \nu_z)\big) & \text{if } x_j = \tau(x_i), \\ D\big(\kappa(d_{ij}^y, \nu_y), \kappa(d_{ij}^z, \nu_z)\big) & \text{otherwise}, \end{cases} \tag{7}$$
where $B$ is the batch size and $x_j = \tau(x_i)$ indicates that $x_j$ is the augmented data of $x_i$. If $x_j$ is the augmented data of $x_i$, the loss pulls them together in the latent space; otherwise, the loss keeps a gap between them to preserve the manifold structure. The different degrees of freedom $\nu_y$ and $\nu_z$ of the t-distribution are basic settings according to the dimensions of the respective spaces. Equation (7) describes a manifold alignment that pairs data collected by different hospitals together in latent space to avoid the detrimental effects caused by domain bias:
$$d_{ij}^y = d(y_i, y_j), \qquad d_{ij}^z = d(z_i, z_j), \tag{8}$$
where $d_{ij}^y$ and $d_{ij}^z$ are the distances between data nodes $i$ and $j$ in the spaces $\mathbb{R}^{d_y}$ and $\mathbb{R}^{d_z}$.

### 4.5.2. Cross-Entropy Loss

The manifold regularizer loss is essentially an unsupervised term, so the label information must also be used while training the network model. The cross-entropy loss function is therefore introduced simultaneously:
$$\mathcal{L}_{CE} = -\sum_{i=1}^{N} \Big[l_i \ln \sigma(z_i) + (1 - l_i)\ln\big(1 - \sigma(z_i)\big)\Big], \tag{9}$$
where $l_i$ is the label of data node $i$, $\sigma(z_i)$ is the output of the network model, and $\ln(\cdot)$ is the natural logarithm. When solving the classification subtask, $l_i$ is the category label; when solving the regression subtask, $l_i$ is the probability label.

The overall loss function of BUPNN is
$$\mathcal{L} = \mathcal{L}_D + \beta\,\mathcal{L}_{CE}, \tag{10}$$
where $\beta$ is a hyperparameter that balances $\mathcal{L}_D$ and $\mathcal{L}_{CE}$.
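The sketch below expresses equations (5)–(10) as PyTorch functions: the t-distribution kernel, the two-way divergence $D(p, q)$, the manifold regularizer over a batch of augmented pairs, and the combined objective. The defaults $\nu_y = 100$ and $\nu_z = 0.05$ follow the experimental setup and Table 2; the clamping constant, the way augmented pairs are indexed, and the separate logit argument for the classification head (the paper does not spell out how $\sigma(z)$ relates to the 2-D output space) are assumptions made for simplicity, so treat this as an interpretation of the formulas rather than the authors' code.

```python
import math
import torch
import torch.nn.functional as F

def t_kernel(d, nu):
    """Equation (5): Student-t similarity kernel with degrees of freedom nu."""
    c = math.exp(math.lgamma((nu + 1.0) / 2.0) - math.lgamma(nu / 2.0)) / math.sqrt(nu * math.pi)
    return c * (1.0 + d.pow(2) / nu).pow(-(nu + 1.0) / 2.0)

def two_way_divergence(p, q, eps=1e-6):
    """Equation (6): D(p, q) = p*log q + (1-p)*log(1-q); eps guards log(0)."""
    q = q.clamp(eps, 1.0 - eps)
    return p * torch.log(q) + (1.0 - p) * torch.log(1.0 - q)

def manifold_loss(y1, y2, z1, z2, nu_y=100.0, nu_z=0.05):
    """Equations (7)-(8): row i of (y1, z1) and row i of (y2, z2) are assumed to come
    from the two augmentations of the same sample. Augmented pairs are pulled together
    in R^{d_z}; all other pairs match their similarity across the two spaces."""
    y = torch.cat([y1, y2], dim=0)
    z = torch.cat([z1, z2], dim=0)
    s_y = t_kernel(torch.cdist(y, y), nu_y)        # pairwise similarity in R^{d_y}
    s_z = t_kernel(torch.cdist(z, z), nu_z)        # pairwise similarity in R^{d_z}
    n = y1.shape[0]
    pair = torch.zeros_like(s_z, dtype=torch.bool)
    idx = torch.arange(n)
    pair[idx, idx + n] = True                      # x_j = tau(x_i): augmented pairs
    pair[idx + n, idx] = True
    target = torch.where(pair, torch.ones_like(s_y), s_y)
    # D is maximised when q == p, so the negative mean is minimised as a loss
    return -two_way_divergence(target, s_z).mean()

def bupnn_loss(y1, y2, z1, z2, logits, labels, beta=0.1):
    """Equation (10): overall objective L = L_D + beta * L_CE.
    labels: float tensor of 0/1 transfusion labels l_i matching the logits."""
    l_d = manifold_loss(y1, y2, z1, z2)
    l_ce = F.binary_cross_entropy_with_logits(logits, labels)
    return l_d + beta * l_ce
```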
## 4.6. Pseudocode and Complexity

BUPNN's pseudocode is shown in Algorithm 1. BUPNN includes an initialization phase and a training phase. In the initialization phase, the kNN neighborhood of every data point is discovered; the time complexity of this phase is $O(n^{1.14})$ [33], where $n$ is the number of data points. In the training phase, the complexity is the same as that of an artificial neural network (ANN). BUPNN calculates the pairwise distances within a batch, so the complexity of training each batch is $O(B^2)$, where $B$ is the batch size. The pairwise distances are well accelerated by the GPU, so the training time is comparable to that of a typical ANN.

Algorithm 1: BUPNN algorithm. Input: data $X = \{x_i\}_{i=1}^{|X|}$, learning rate $\eta$, epochs $E$, batch size $B$, $\beta$, $\nu_z$, networks $f_\theta, g_\phi$. Output: graph embedding $\{e_i\}_{i=1}^{|X|}$.

1. **while** $i = 0;\ i < E;\ i{+}{+}$ **do**
2. $x^I = I(x)$ ⊳ blood data complementation in equation (1)
3. **while** $b = 0;\ b < |X|/B;\ b{+}{+}$ **do**
4. $x^{a1}, x^{a2} = T(x^I), T(x^I)$ ⊳ blood data augmentation in equation (3)
5. $y^{a1}, y^{a2} \leftarrow f_\theta(x^{a1}), f_\theta(x^{a2})$ ⊳ map the input data into the $\mathbb{R}^{d_y}$ space in equation (4)
6. $z^{a1}, z^{a2} \leftarrow g_\phi(y^{a1}), g_\phi(y^{a2})$ ⊳ map the latent data into the $\mathbb{R}^{d_z}$ space in equation (4)
7. $d_{ij}^y \leftarrow d(y^{a1}, y^{a2})$; $d_{ij}^z \leftarrow d(z^{a1}, z^{a2})$ ⊳ calculate the distances in $\mathbb{R}^{d_y}$ and $\mathbb{R}^{d_z}$
8. $S^y \leftarrow \kappa(d_{ij}^y, \nu_y)$; $S^z \leftarrow \kappa(d_{ij}^z, \nu_z)$ ⊳ calculate the similarities in $\mathbb{R}^{d_y}$ and $\mathbb{R}^{d_z}$ in equation (5)
9. $\mathcal{L}_D \leftarrow D(S^y, S^z)$ ⊳ calculate the loss function in equations (7) and (10)
10. $\theta \leftarrow \theta - \eta\,\partial \mathcal{L}_D / \partial \theta$, $\phi \leftarrow \phi - \eta\,\partial \mathcal{L}_D / \partial \phi$ ⊳ update the parameters
11. **end while**
12. **end while**
13. $z_i \leftarrow g_\phi(f_\theta(x_i))$ ⊳ calculate the embedding result
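The initialization phase above amounts to building a k-nearest-neighbour graph once before training. The approximate search cited above reaches roughly $O(n^{1.14})$; for clarity the sketch below uses scikit-learn's exact `NearestNeighbors` instead, and the value of $k$, the Euclidean metric, and the toy data are assumptions.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def build_knn_graph(X, k=5):
    """Initialization phase: find the k nearest neighbours of every sample.
    Returns (indices, distances), each of shape (n_samples, k)."""
    nn = NearestNeighbors(n_neighbors=k + 1, metric="euclidean").fit(X)
    dist, idx = nn.kneighbors(X)
    return idx[:, 1:], dist[:, 1:]   # drop the self-neighbour in column 0

X = np.random.rand(1000, 30)         # 1000 samples, 30 clinical features (toy data)
neighbors, distances = build_knn_graph(X, k=5)
```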
## 5. Experiments

### 5.1. Baseline Methods

In this section, two subtasks, a classification subtask and a regression subtask, are defined. Eight state-of-the-art baseline classification and regression methods are chosen to discuss the relative advantages of BUPNN. The baseline approaches are as follows.

#### 5.1.1. K-Nearest Neighbor Classification/Regression Method (KNN) [37]

The K-nearest neighbor classification/regression method is a nonparametric statistical method. The KNN classifier outputs the prediction by the "majority vote" of a sample's neighbors; the KNN regressor outputs the average of its neighbors.

#### 5.1.2. Decision Tree Classification/Regression Method (DT) [35]

A decision tree builds regression or classification models in the form of a tree structure. It breaks down a dataset into smaller and smaller subsets while, at the same time, an associated decision tree is incrementally developed. The final result is a tree with decision nodes and leaf nodes. A decision node has two or more branches representing values of the attribute tested, and a leaf node represents a decision on the target value. The root node is the topmost decision node in the tree and corresponds to the best predictor. Decision trees can handle both categorical and numerical data.

#### 5.1.3. Random Forest Classification/Regression Method (RF) [34]

Random forests, or random decision forests, are ensemble learning methods for classification, regression, and other tasks that operate by constructing many decision trees at training time. For classification tasks, the output of the random forest is the class selected by most trees; for regression tasks, the mean prediction of the individual trees is returned. Random decision forests correct decision trees' habit of overfitting their training set.

#### 5.1.4. Extremely Randomized Trees Classification/Regression Method (ET) [38]

Extremely randomized trees add a further step of randomization to the random forest. While similar to ordinary random forests in that they are an ensemble of individual trees, there are two main differences: first, each tree is trained using the whole learning sample (rather than a bootstrap sample), and second, the top-down splitting in the tree learner is randomized. Instead of computing the locally optimal cut-point for each feature under consideration (based on, e.g., information gain or the Gini impurity), a random cut-point is selected.

#### 5.1.5. Support Vector Machine Classification/Regression Method (SVM) [39]

A support vector machine is a supervised learning model with associated learning algorithms that analyze data for classification and regression. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples to one category or the other, making it a nonprobabilistic binary linear classifier (although methods such as Platt scaling exist to use SVM in a probabilistic classification setting). SVM maps training examples to points in space so as to maximize the width of the gap between the two categories. New examples are then mapped into the same space and predicted to belong to a category based on which side of the gap they fall.

#### 5.1.6. Gradient Boost Classification/Regression Method (GB) [40]

Gradient boosting is a machine learning technique used in regression and classification tasks. It gives a prediction model in the form of an ensemble of weak prediction models, typically decision trees; the resulting algorithm is called gradient-boosted trees. When decision trees are the weak learners, gradient boosting usually outperforms a random forest.

#### 5.1.7. Adaptive Boost Classification/Regression Method (ADB) [36]

The adaptive boost algorithm obtains a strong learner by combining a series of weak learners and integrating their learning capabilities. Adaptive boost changes the weights of the samples based on the previous learners, increasing the weight of previously misclassified samples and decreasing the weight of correctly classified samples, so that subsequent learners focus on the misclassified samples. Finally, these learners are combined into a strong learner by weighting.

#### 5.1.8. Light Gradient Boosting Machine Classification/Regression Method (LightGBM) [41, 42]
LightGBM is a distributed gradient boosting framework based on the decision tree algorithm. LightGBM is designed with two main ideas in mind: (1) to reduce the use of memory so that a single machine can use as much data as possible without sacrificing speed, and (2) to reduce the communication cost so as to improve efficiency when multiple machines run in parallel and to achieve linear acceleration in computation. LightGBM was originally designed to provide a fast and efficient data science tool with a low memory footprint, high accuracy, and support for parallel and large-scale data processing.

### 5.2. Dataset Partitioning and Grid Search

Table 1 and Figure 2 provide basic information about the data. Five data partitioning schemes are used in this study to provide a detailed comparison of the performance differences between the schemes.

Three schemes use data complementation, namely COM-Mea, COM-Med, and COM-KNN, as defined in equation (2). For COM-Mea, COM-Med, and COM-KNN, the missing values are first complemented with the corresponding method, and the training and testing sets are then divided 90%/10%. Two noncomplement schemes (NC-A and NC-B) are introduced for comparison with the complement schemes. NC-A deletes all data items with missing values, following typical machine learning practice; after this data cleaning, NC-A divides the training and testing sets 90%/10%. NC-B keeps the same training/testing division as the data complement schemes and only removes the items with missing values from the training set, obtaining a cleaner training set.

The models are evaluated with 10-fold cross-validation on all training sets, and the optimal parameters are determined by grid search. For a fair comparison, we keep the search spaces of all baseline methods approximately equal. The search spaces of the compared methods are listed in Table 2.

Table 2: Details of grid search.

| Method | Abbreviation | Search space | Note |
|---|---|---|---|
| K-nearest neighbor | KNN | Nei ∈ {1, 3, 5, 10, 15, 20}; L ∈ {10, 20, 30, 50, 70, 100} | Nei → neighbors size; L → leaf size |
| Random forest | RF | NE ∈ {80, 90, 100, 110, 120, 130}; MSS ∈ {2, 3, 4, 5, 6, 7} | NE → boosted trees size; MSS → samples split size |
| Decision tree | DT | MSL ∈ {1, 2, 3, 5, 7, 10}; MSS ∈ {2, 3, 5, 7, 10, 15} | MSL → sample size in a leaf; MSS → samples split size |
| Extra tree | ET | MSL ∈ {1, 2, 3, 5, 7, 10}; MSS ∈ {2, 3, 5, 7, 10, 15} | MSL → sample size in a leaf; MSS → samples split size |
| Support vector machine | SVM | M ∈ {10, 50, 100, 300, 500}; T ∈ {1e−4, 5e−4, 1e−3, 2e−3, 3e−3, 5e−3} | M → max iterations; T → tolerance for stopping criteria |
| Gradient boosting | GB | NE ∈ {80, 90, 100, 110, 120, 130}; MSS ∈ {2, 3, 4, 5, 6, 7} | NE → boosted trees size; MSS → samples split size |
| Multilayer perceptron | MLP | L ∈ {2, 3, 4, 5, 6}; WD ∈ {0.1, 0.2, 0.3, 0.4} | L → number of layers; WD → weight decay |
| Adaptive boost | ADB | NE ∈ {40, 50, 60, 70, 80, 90}; LR ∈ {0.8, 0.9, 1, 2, 5} | NE → boosted trees size; LR → learning rate |
| Light gradient boosting machine | LGBM | NE ∈ {21, 26, 31, 26, 41}; L ∈ {80, 90, 100, 110, 120, 130} | NE → boosted trees size; L → leaf size |
| Blood usage prediction neural network | BUPNN (ours) | ν_z ∈ {0.01, 0.03, 0.05, 0.07}; β ∈ {0.1, 0.2}; K ∈ {100, 200, 300} | ν_z, β, K → BUPNN hyperparameters |

### 5.3. Experimental Setup

We initialize the networks with the Kaiming initializer. We adopt the AdamW optimizer [43] with a learning rate of 0.02 and a weight decay of 0.5. All experiments use a fixed MLP network structure, $f_\theta$: (−1, 500, 300, 80) and $g_\phi$: (80, 500, 2), where −1 denotes the number of input features of the dataset. The number of epochs is 400, the batch size is 2048, and $\nu_y = 100$.
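The tuning protocol of Section 5.2 (a grid search selected by 10-fold cross-validation) can be reproduced for any of the baselines with scikit-learn; the sketch below does so for the random forest, using the NE/MSS grid from Table 2. The random data, the ROC-AUC scoring choice, and the fixed random seeds are placeholders rather than the authors' setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Placeholder data standing in for the complemented clinical features / transfusion labels.
X, y = np.random.rand(500, 30), np.random.randint(0, 2, 500)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=0)

param_grid = {
    "n_estimators": [80, 90, 100, 110, 120, 130],   # NE in Table 2
    "min_samples_split": [2, 3, 4, 5, 6, 7],        # MSS in Table 2
}
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=10, scoring="roc_auc", n_jobs=-1)
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)
```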
### 5.4. Comparison on Classification Subtask

Although the blood data of each patient are collected for the regression subtask, it is equally important to predict whether a patient needs a blood transfusion at all. Especially in emergencies, indicating whether a patient needs a blood transfusion in the short term is more important than estimating the amount of blood used throughout the treatment cycle. Therefore, we first evaluate the performance of the proposed BUPNN on the classification subtask. Two evaluation metrics, classification accuracy (ACC) and area under the ROC curve (AUC), are used to compare the classifiers' performance from multiple perspectives. The performance comparison of BUPNN and the eight baseline methods is shown in Tables 3 and 4. In addition, the ROC curves of all compared methods on the different schemes are shown in Figure 5, and the scatter plot of BUPNN's predictions for the COM-Mea scheme is shown in Figure 6.

Table 3: Classification AUC comparison with the baseline methods. The brackets in the BUPNN column show how much BUPNN exceeds the best result among the other methods.

| Scheme | KNN | RF | MLP | ET | SVM | GB | AdaB | LGBM | BUPNN |
|---|---|---|---|---|---|---|---|---|---|
| NC-A | 0.8072 | 0.9166 | 0.8436 | 0.8443 | 0.8736 | 0.8949 | 0.9072 | 0.8881 | 0.9229 (↑0.0063) |
| NC-B | 0.7302 | 0.8109 | 0.7421 | 0.8178 | 0.7116 | 0.7214 | 0.8324 | 0.8119 | 0.8349 (↑0.0240) |
| COM-mid | 0.8009 | 0.8526 | 0.7764 | 0.8437 | 0.7435 | 0.8442 | 0.8420 | 0.8508 | 0.8843 (↑0.0317) |
| COM-mea | 0.8054 | 0.8591 | 0.8399 | 0.8252 | 0.7553 | 0.8470 | 0.8562 | 0.8630 | 0.8797 (↑0.0167) |
| COM-KNN | 0.8033 | 0.8575 | 0.7912 | 0.8321 | 0.7739 | 0.8446 | 0.8526 | 0.8620 | 0.8761 (↑0.0141) |
| Average | 0.7894 | 0.8593 | 0.7992 | 0.8326 | 0.7716 | 0.8304 | 0.8581 | 0.8552 | 0.8796 (↑0.0202) |

Table 4: Classification ACC comparison with the baseline methods. The brackets in the BUPNN column show how much BUPNN exceeds the best result among the other methods.

| Scheme | KNN | RF | MLP | ET | SVM | GB | AdaB | LGBM | BUPNN |
|---|---|---|---|---|---|---|---|---|---|
| NC-A | 0.7113 | 0.8351 | 0.7367 | 0.8144 | 0.7732 | 0.8454 | 0.8557 | 0.8454 | 0.8454 (↓0.0103) |
| NC-B | 0.6539 | 0.7531 | 0.7512 | 0.7428 | 0.6235 | 0.7409 | 0.7557 | 0.7445 | 0.7643 (↑0.0086) |
| COM-mid | 0.7340 | 0.8030 | 0.7589 | 0.7734 | 0.5567 | 0.7931 | 0.7537 | 0.7783 | 0.8177 (↑0.0147) |
| COM-mea | 0.7340 | 0.7931 | 0.7546 | 0.7635 | 0.6305 | 0.7980 | 0.7931 | 0.8030 | 0.8177 (↑0.0147) |
| COM-KNN | 0.7241 | 0.7931 | 0.7768 | 0.7685 | 0.6059 | 0.7734 | 0.7783 | 0.7734 | 0.8226 (↑0.0295) |
| Average | 0.7115 | 0.7955 | 0.7560 | 0.7725 | 0.6380 | 0.7902 | 0.7873 | 0.7890 | 0.8135 (↑0.0180) |

Figure 5: ROC curve comparison for the COM-Mea scheme, in which missing values are filled with the median value. The closer a curve is to the upper left corner, the better the model's performance; the symmetry of a curve along the line from (0, 1) to (1, 0) indicates a balanced performance of the model.

Figure 6: Scatter plot of BUPNN's predictions for the COM-Mea scheme, in which missing values are filled with the median value. The vertical coordinate distinguishes different samples, and the horizontal coordinate indicates the predicted value of the BUPNN model for each sample. A predicted value of less than 0.5 indicates that no blood is needed, and a value greater than 0.5 indicates that blood is needed. The color of each point indicates whether the prediction is correct.
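For reference, the two metrics reported in Tables 3 and 4 can be computed with scikit-learn as below; the label and score arrays are toy placeholders, and the 0.5 decision threshold follows the reading of Figure 6.

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

y_true = np.array([0, 1, 1, 0, 1, 0, 1, 1])                       # ground-truth transfusion labels
y_prob = np.array([0.2, 0.8, 0.6, 0.4, 0.9, 0.1, 0.3, 0.7])       # model scores sigma(z_i)
y_pred = (y_prob >= 0.5).astype(int)                               # 0.5 threshold, as in Figure 6

acc = accuracy_score(y_true, y_pred)   # ACC, as in Table 4
auc = roc_auc_score(y_true, y_prob)    # AUC, as in Table 3
print(f"ACC={acc:.4f}  AUC={auc:.4f}")
```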
#### 5.4.1. Data Complementation Delivers Performance Improvements

From the results of NC-B, COM-Mid, COM-Mea, and COM-KNN in Tables 3 and 4, we observe that the schemes with data complementation yield better performance. We attribute this to the more abundant training data provided by data complementation: although the complemented data are imperfect, the models can still learn more helpful information from them.

Although the model trained only on clean data (NC-A) has the highest performance score, scheme NC-A includes only the clean data and cannot be conveniently generalized to real data with missing values. Missing values are unavoidable in real-world application scenarios, so data complementation is the better choice for improving the model's practical performance.

#### 5.4.2. BUPNN Outperforms Almost All Baseline Methods

From Tables 3 and 4, we observe that the proposed BUPNN has advantages over all the baseline methods. For the AUC metric, BUPNN leads in all five schemes; the benefit is largest for the COM-Mid scheme, where it outperforms the second-best method (LGBM) by 3.71%. For the ACC metric, BUPNN also leads in all five schemes, with the largest benefit for the COM-KNN scheme, where it outperforms the second-best method (LGBM) by 2.95%. On average, BUPNN has an advantage of more than 1% over the second-best method in both metrics. We attribute this to the fact that BUPNN is a neural network-based model, which performs better with richer data. In addition, the proposed manifold loss function improves the model's generalization and thus its performance on the testing set.

#### 5.4.3. BUPNN Is Better at Handling Complemented Datasets

From the four schemes with the same testing set (NC-B, COM-Mid, COM-Mea, and COM-KNN) in Tables 3–5, we observe that BUPNN performs better than all the baseline methods when handling complemented datasets. We attribute this to the data augmentation of the proposed BUPNN model: augmentation generates new training data from the complemented data and attenuates the effect of missing values, thus guiding BUPNN to learn a smooth manifold.

Table 5: Median classification ACC comparison with the baseline methods on each hospital's data. The brackets in the BUPNN column show how much BUPNN exceeds the best result among the other methods.

| Hospital | KNN | RF | ET | MLP | GB | LGBM | SVM | ADB | BUPNN |
|---|---|---|---|---|---|---|---|---|---|
| DYang | 1.000 | 1.000 | 0.571 | 0.723 | 0.786 | 0.714 | 0.857 | 0.929 | 1.000 (↑0.000) |
| SYi | 1.000 | 0.667 | 1.000 | 1.000 | 1.000 | 0.667 | 1.000 | 1.000 | 1.000 (↑0.000) |
| SYiFu | 0.796 | 0.778 | 0.815 | 0.735 | 0.556 | 0.593 | 0.630 | 0.778 | 0.926 (↑0.111) |
| WLing | 0.500 | 1.000 | 0.750 | 0.250 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 (↑0.000) |
| ZEr | 0.748 | 0.800 | 0.743 | 0.786 | 0.707 | 0.833 | 0.828 | 0.801 | 0.834 (↑0.001) |
| Average | 0.765 | 0.834 | 0.764 | 0.698 | 0.775 | 0.750 | 0.833 | 0.854 | 0.909 (↑0.055) |

#### 5.4.4. BUPNN Has Better Generalizability between Different Hospitals

To evaluate the generalization performance of the proposed BUPNN and the baseline methods, we tested the ACC on the data from the individual hospitals (Table 5). The proposed BUPNN performs best in five out of the six hospitals and has a particularly clear advantage on the data from Shaoyifu Hospital. From Table 1 and Figure 1, we observe that Shaoyifu Hospital has a relatively obvious domain bias: it has the lowest male-to-female ratio, one of the top three average missing rates, and many outliers among its missing values. We argue that this domain bias degrades the performance of the baseline methods, and that the proposed BUPNN outperforms them because it overcomes this domain bias. Furthermore, the manifold regularizer loss provides a good manifold constraint that improves the model's generalizability between different hospitals (as shown in Figure 4).
### 5.5. Comparison on Regression Subtask

We next discuss the performance of BUPNN on the regression subtask, in which the model is asked to predict the total blood usage over the patient's treatment so that the blood center can schedule its supply in advance. The experimental setting of the regression subtask is the same as that of the classification subtask: the data are collected from the same patients, but the target variable is the total blood usage in the patient's treatment. The mean square error (MSE) is calculated to measure the performance of the compared methods in each scheme. The performance comparison is shown in Table 6.

Table 6: MSE comparison with the baseline methods. The brackets in the BUPNN column show by how much BUPNN improves on the best result among the other methods.

| Scheme | LR | SVM | MLP | ET | GB | RF | LGBM | AdaB | BUPNN |
|---|---|---|---|---|---|---|---|---|---|
| NC-A | 0.0095 | 0.0112 | 0.0121 | 0.0194 | 0.0093 | 0.0090 | 0.0090 | 0.0161 | 0.0079 (↑0.0012) |
| COM-mid | 0.0042 | 0.0075 | 0.0167 | 0.0103 | 0.0040 | 0.0043 | 0.0042 | 0.0175 | 0.0038 (↑0.0004) |
| COM-mea | 0.0042 | 0.0076 | 0.0076 | 0.0074 | 0.0039 | 0.0041 | 0.0044 | 0.0151 | 0.0037 (↑0.0002) |
| COM-KNN | 0.0041 | 0.0065 | 0.0082 | 0.0142 | 0.0040 | 0.0039 | 0.0043 | 0.0109 | 0.0036 (↑0.0003) |
| Average | 0.0055 | 0.0082 | 0.0111 | 0.0128 | 0.0053 | 0.0053 | 0.0054 | 0.0149 | 0.0048 (↑0.0005) |

#### 5.5.1. BUPNN Shows a Consistent Advantage on the Regression Subtask

From Table 6, we observe that the proposed BUPNN outperforms all the baseline methods in all four schemes. BUPNN has the most significant benefit in the COM-mid scheme, outperforming the second-best method (LGBM) by 0.0004 (10.5%). The relative improvement shows that the advantage of BUPNN is even more evident in the regression subtask. It also indicates that the proposed BUPNN is more suitable for handling the more difficult regression task and has strong application potential as richer data are collected.

### 5.6. The Explanatory Analysis of BUPNN

Artificial neural network-based models are considered to have strong performance, but their black-box nature makes them hard to interpret and therefore hard for domain experts to trust. The proposed BUPNN, on the other hand, is easier to interpret because the smoothness of its mapping makes it amenable to modern model interpreters. An easy-to-use neural network interpretation tool is therefore introduced to explain the decision process of BUPNN, and the visual presentation of the interpretation results is shown in Figure 7.

Figure 7: Explanation of the decision process of BUPNN for single samples. (a) (b)

In Figure 7, the decision processes of BUPNN for three samples are explained with the SHAP tool. The vertical coordinates represent the clinical indicators, and the horizontal coordinates represent the predicted values of BUPNN; the images show the decision process of the model from bottom to top. Specifically, the model predicts $E = 0.47$ for an average patient when no information about the patient has been observed, i.e., no transfusion is needed. We believe this is reasonable because only patients who are sufficiently injured need blood transfusions. Next, using Figure 7(a) as an example, the BUPNN model observes additional patient information, such as "no injury to the lower body," "no injury to the abdomen," "no injury to the pelvis," and "the patient's heartbeat is not accelerated." This evidence makes BUPNN firmer in its original judgment that a blood transfusion is unnecessary.
Although the penetrating injury raised the probability of transfusion, several essential features (e.g., albumin and hemoglobin) ultimately guided the model to predict $f(x) = 0.02$.

Figure 7(b) provides further examples of how to interpret the decision process of BUPNN. These examples show what kind of information BUPNN needs to observe before it predicts that a sample requires a blood transfusion. Among them, we found some characteristic signatures of the need for transfusion, such as injuries to the abdomen, pelvis, and pleura.

Next, we calculated the importance of all features in determining whether a blood transfusion is needed and display them as a bar chart in Figure 8.

Figure 8: Important indicators of blood transfusion obtained by the interpretable analysis of BUPNN.
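Section 5.6 refers to an easy-to-use interpretation tool ("shap"). The sketch below shows one generic way such per-sample explanations (Figure 7) and global feature importances (Figure 8) could be produced with the SHAP library for any model exposed as a prediction function; the `KernelExplainer` choice, the placeholder model, and the background-sample size are assumptions rather than the authors' exact pipeline.

```python
import numpy as np
import shap  # pip install shap

def predict_fn(X):
    """Placeholder prediction function standing in for the trained BUPNN:
    maps an (n, d) array of clinical features to transfusion scores in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-X.sum(axis=1)))

X_background = np.random.rand(50, 10)   # reference data summarising the feature distribution
X_explain = np.random.rand(3, 10)       # the samples to be explained, as in Figure 7

explainer = shap.KernelExplainer(predict_fn, X_background)
shap_values = explainer.shap_values(X_explain)        # per-sample, per-feature contributions

# Global importance (as in Figure 8): mean absolute contribution of each feature.
importance = np.abs(np.asarray(shap_values)).mean(axis=0)
```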
## 6. Conclusions

In this paper, a neural network-based blood usage predictor, called the blood usage prediction neural network (BUPNN), is proposed to serve the scheduling of blood supply in provincial blood centers. BUPNN receives clinical information from hospital patients and predicts whether a patient needs blood and how much blood will be consumed. BUPNN mainly addresses the problem of predicting blood usage with high availability and generalization in real-life situations: it accomplishes the prediction task well despite the missing values and domain bias of data from different hospitals. To this end, BUPNN introduces a manifold learning-based regularizer for the blood prediction problem to improve the model's generalization on data from different hospitals, and further strengthens the model with data augmentation and data complementation.

---
*Source: 1003310-2023-01-31.xml*
# BUPNN: Manifold Learning Regularizer-Based Blood Usage Prediction Neural Network for Blood Centers

**Authors:** Lingling Pan; Zelin Zang; Siqi Ma; Wei Hu; Zhechang Hu
**Journal:** Computational Intelligence and Neuroscience (2023)
**Publisher:** Hindawi
**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)
**DOI:** 10.1155/2023/1003310
---

## Abstract

Blood centers are an essential component of the healthcare system: timely blood collection, processing, and efficient blood dispatch are critical to the treatment of patients and to the performance of the entire healthcare system. At the same time, an efficient blood dispatching system built on the high-precision predictive capability of artificial intelligence is crucial for improving the efficiency of blood centers. However, current artificial intelligence (AI) models for predicting blood usage do not meet the needs of blood centers. Their main challenges include low generalization ability across different hospitals, limited stability under missing values, and low interpretability. An artificial neural network-based model named the blood usage prediction neural network (BUPNN) has been developed to address these challenges. BUPNN includes a novel similarity-based manifold regularizer that aims to enhance network mapping consistency and, thus, overcome the domain bias of different hospitals. Moreover, BUPNN diminishes the performance degradation caused by missing values through data enhancement. Experimental results on a large amount of real data demonstrate that BUPNN outperforms the baseline methods in classification and regression tasks and excels in generalization and consistency. Moreover, BUPNN has solid potential to be interpreted; therefore, its decision-making process is explored so that it can act as an aid to the experts in the blood center.

---

## Body

## 1. Introduction

Blood products are an essential part of the treatment of bleeding, cancer, AIDS, hepatitis, and other diseases [1]. Blood is also an indispensable resource in treating injured patients; whether blood is transfused promptly is critical for their rehabilitation. At the same time, early surgical interventions and rapid blood transfusions are the primary measures to reduce mortality. Unfortunately, these measures require large amounts of blood to support them.

Blood, however, is significantly different from other medical products. Currently, blood cannot be manufactured or synthesized artificially and can only be donated. In addition, blood has a short shelf life, making emergency blood more specific and irreplaceable than other medical products. For example, patients in mass casualty events caused by earthquakes suffer from fractures, fractures accompanied by multiple organ injuries, and crush injuries, so the peak period of blood consumption occurs 96 hours after the earthquake [2]. In contrast, patients in mass casualties caused by bombings or fires mainly suffer from burns; the peak blood consumption occurs 24 hours after the event, and ongoing blood transfusions may last for several months [3]. Therefore, modeling the prediction of blood use in patients is a meaningful and challenging topic.

Current blood usage prediction models are not well developed. For example, Wang et al. [4] developed an early transfusion scoring system to predict blood requirements in severe trauma patients in the prehospital or initial phase of emergency resuscitation. However, this design is more suitable for triage in a single hospital than for blood centers, since the system does not consider how to avoid performance degradation due to domain deviations between different hospitals. Rebecca et al.
[5, 6] summarized the currently available studies exploring how to predict the need for massive transfusion in patients with traumatic injuries, listing the blood consumption scoring system (ABC) and the shock index scoring system (SI). These systems use classical or ML-based methods to predict blood consumption during the treatment of patients. Unfortunately, although the progress is remarkable, but it is unsatisfactory in terms of accuracy and generalization, and thus, the blood demand prediction model cannot be widely used. We summarize the problems into the following three points:(i) Data quality is limited. The partnership between blood centers and hospitals makes it very difficult to establish a rigorous feedback system for patient information. In addition, biases and missing data due to differences in hospital equipment create challenges for a constant blood use prediction model.(ii) Model generalizability is unsatisfactory. Blood usage prediction models are often built for specific hospitals or cities without considering extending to a wider range of applications and thus have poor generalization performance.(iii) Model interpretability is inadequate. Most models can only output a category or blood usage forecast but cannot demonstrate the model’s decision process. An interpretable model can better work with experts to help blood centers with blood schedules.This paper proposes a blood usage prediction neural network, BUPNN, to solve the above problems. First, actual patient clinic data and treatment procedures in 12 hospitals are used as training data. Extensively collected data with biases from various hospitals will provide sufficient information to train a high-performance model. We used multiple data complementation schemes to restore the real problem and overcome missing values. In addition, the BUPNN model MDC is augmented with online data by linear interpolation, which increases the diversity of training data and thus improves the stability of the model under training with missing data. Second, to further improve the generalization performance of BUPNN, a similarity-based loss function is introduced to map data with biases to a stable semantic space by aligning samples from different hospitals in the latent space. Third, we analyze the model based on the deep learning interpretation method to enhance interpretability. The proposed analysis is accompanied by the prediction output of the model in real-time to assist one in understanding the process of the model’s decision-making. The interpretable study of BUPNN provides the conditions for computers and experts to help each other in blood consumption prediction.The main contributions of this study are as follows:(i) Representative samples. A large amount of data from twelve hospitals is collected for this study to investigate the implied relationship between patients’ chain indicators and blood consumption.(ii) Generalizable model. This study designs MDC for online missing data completion, and thus, data augmentation to enhance the model’s generalization ability. In addition, the various similarity loss function is designed to improve the model’s predictive power across domains.(iii) Excellent Performance. Experiments on six different settings demonstrate that our method outperforms all baseline methods.The rest of this paper is organized as follows. In Section2, we provide a literature review of demand forecasting methods for blood products. Section 3 provides an initial exploration of the data. 
We provide the data description, model background, model development, and evaluation of four different models for blood demand forecasting in Section. In Section 4, a comparison of the models is provided, and finally, in Section 5, concluding remarks are provided, including a discussion of ongoing work for this problem. ## 2. Related Work ### 2.1. ML Techniques for Medical Problems The integration of the medical field with ML technology has received much attention in recent years. Two areas that may benefit from the application of ML technology in the medical field are diagnosis and outcome prediction. It includes the possibility of identifying high-risk medical emergencies, such as recurrence or transition to another disease state. Recently, ML algorithms have been successfully used to classify thyroid cancer [7] and predict the progression of COVID-19 [8, 9]. On the other hand, ML-based visualization and dimensionality reduction techniques have the potential to help professionals analyze biological or medical data, guiding them to better understand the data [10, 11]. Furthermore, ML-based feature selection techniques [12, 13] have strong interpretability and the potential to find highly relevant biomarkers for output in a wide range of medical data, leading to new biological or medical discoveries. ### 2.2. Blood Demand Forecasting There is limited literature on blood demand forecasting; most investigate univariate time series methods. In these studies, forecasts are based solely on previous demand values without considering other factors affecting the demand. Frankfurter et al. [14] developed transfusion forecasting models using exponential smoothing (ES) methods for a blood collection and distribution center in New York. Critchfield et al. [15] developed models for forecasting blood usage in a blood center using several time series methods, including moving average (MA), winter’s approach, and ES. Filho et al. [16] developed a Box-Jenkins seasonal autoregressive integrated moving average (BJ-SARIMA) model to forecast weekly demand for hospital blood components. Their proposed method, SARIMA, is based on a Box-Jenkins approach that considers seasonal and nonseasonal characteristics of time series data. Later, Filho et al. [17] extended their model by developing an automatic procedure for demand forecasting and changing the model level from hospital level to regional blood center to help managers use the model directly. Kumari and Wijayanayake [18] proposed a blood inventory management model for daily blood supply, focusing on reducing blood shortages. Three time series methods, namely, MA, weighted moving average (WMA), and ES, are used to forecast blood usage and are evaluated based on needs. Fortsch and Khapalova [19] tested various blood demand prediction approaches, such as naive, moving average, exponential smoothing, and multiplicative time series decomposition (TSD). The results show that the Box-Jenkins (ARMA) approach, which uses an autoregressive moving average model, results in the highest prediction accuracy. Lestari et al. [20] applied four models to predict blood component demand, including moving average, weighted moving average, exponential smoothing, exponential smoothing with the trend, and select the best method for their data based on the minimum error between forecasts and the actual values. Volken et al. 
[21] used generalized additive regression and time-series models with exponential smoothing to predict future whole blood donation and RBC transfusion trends.Several recent studies consider clinically related indicators. For example, Drackley et al. [22] estimated a long-term blood demand for Ontario, Canada, based on previous transfusions’ age and sex-specific patterns. They forecast blood supply and demand for Ontario by considering demand and supply patterns and demographic forecasts, assuming fixed patterns and rates over time. Khaldi et al. [23] applied artificial neural networks (ANNs) to forecast the monthly demand for three blood components: red blood cells (RBCs), blood, and plasma, for a case study in Morocco. Guan et al. [24] proposed an optimisation ordering strategy in which they forecast the blood demand for several days into the future and build an optimal ordering policy based on the predicted direction, concentrating on minimising the wastage. Their primary focus is on an optimal ordering policy. They integrate their demand model into the inventory management problem, meaning they do not precisely try to forecast blood demand. Li et al. [25] developed a hybrid model consisting of seasonal and trend decomposition using Loess (STL) time series and eXtreme Gradient Boosting (XGBoost) for RBC demand forecasting and incorporated it into an inventory management problem.Recently, Motamedi et al. [26] presented an efficient forecasting model for platelet demand at Canadian Blood Services (CBS). In addition, C. Twumasi and J. Twumasi [27] compared k-nearest neighbour regression (KNN), multilayer perceptron (MLP), and support vector machine (SVM) via a rolling-origin strategy for forecasting and backcasting blood demand data with missing values and outliers from a government hospital in Ghana. Abolghasemi et al. [28] treat the blood supply problem as an optimisation problem [29] and find that LightGBM provides promising solutions and outperforms other machine learning models. ## 2.1. ML Techniques for Medical Problems The integration of the medical field with ML technology has received much attention in recent years. Two areas that may benefit from the application of ML technology in the medical field are diagnosis and outcome prediction. It includes the possibility of identifying high-risk medical emergencies, such as recurrence or transition to another disease state. Recently, ML algorithms have been successfully used to classify thyroid cancer [7] and predict the progression of COVID-19 [8, 9]. On the other hand, ML-based visualization and dimensionality reduction techniques have the potential to help professionals analyze biological or medical data, guiding them to better understand the data [10, 11]. Furthermore, ML-based feature selection techniques [12, 13] have strong interpretability and the potential to find highly relevant biomarkers for output in a wide range of medical data, leading to new biological or medical discoveries. ## 2.2. Blood Demand Forecasting There is limited literature on blood demand forecasting; most investigate univariate time series methods. In these studies, forecasts are based solely on previous demand values without considering other factors affecting the demand. Frankfurter et al. [14] developed transfusion forecasting models using exponential smoothing (ES) methods for a blood collection and distribution center in New York. Critchfield et al. 
[15] developed models for forecasting blood usage in a blood center using several time series methods, including moving average (MA), winter’s approach, and ES. Filho et al. [16] developed a Box-Jenkins seasonal autoregressive integrated moving average (BJ-SARIMA) model to forecast weekly demand for hospital blood components. Their proposed method, SARIMA, is based on a Box-Jenkins approach that considers seasonal and nonseasonal characteristics of time series data. Later, Filho et al. [17] extended their model by developing an automatic procedure for demand forecasting and changing the model level from hospital level to regional blood center to help managers use the model directly. Kumari and Wijayanayake [18] proposed a blood inventory management model for daily blood supply, focusing on reducing blood shortages. Three time series methods, namely, MA, weighted moving average (WMA), and ES, are used to forecast blood usage and are evaluated based on needs. Fortsch and Khapalova [19] tested various blood demand prediction approaches, such as naive, moving average, exponential smoothing, and multiplicative time series decomposition (TSD). The results show that the Box-Jenkins (ARMA) approach, which uses an autoregressive moving average model, results in the highest prediction accuracy. Lestari et al. [20] applied four models to predict blood component demand, including moving average, weighted moving average, exponential smoothing, exponential smoothing with the trend, and select the best method for their data based on the minimum error between forecasts and the actual values. Volken et al. [21] used generalized additive regression and time-series models with exponential smoothing to predict future whole blood donation and RBC transfusion trends.Several recent studies consider clinically related indicators. For example, Drackley et al. [22] estimated a long-term blood demand for Ontario, Canada, based on previous transfusions’ age and sex-specific patterns. They forecast blood supply and demand for Ontario by considering demand and supply patterns and demographic forecasts, assuming fixed patterns and rates over time. Khaldi et al. [23] applied artificial neural networks (ANNs) to forecast the monthly demand for three blood components: red blood cells (RBCs), blood, and plasma, for a case study in Morocco. Guan et al. [24] proposed an optimisation ordering strategy in which they forecast the blood demand for several days into the future and build an optimal ordering policy based on the predicted direction, concentrating on minimising the wastage. Their primary focus is on an optimal ordering policy. They integrate their demand model into the inventory management problem, meaning they do not precisely try to forecast blood demand. Li et al. [25] developed a hybrid model consisting of seasonal and trend decomposition using Loess (STL) time series and eXtreme Gradient Boosting (XGBoost) for RBC demand forecasting and incorporated it into an inventory management problem.Recently, Motamedi et al. [26] presented an efficient forecasting model for platelet demand at Canadian Blood Services (CBS). In addition, C. Twumasi and J. Twumasi [27] compared k-nearest neighbour regression (KNN), multilayer perceptron (MLP), and support vector machine (SVM) via a rolling-origin strategy for forecasting and backcasting blood demand data with missing values and outliers from a government hospital in Ghana. Abolghasemi et al. 
[28] treat the blood supply problem as an optimisation problem [29] and find that LightGBM provides promising solutions and outperforms other machine learning models. ## 3. Blood Centers and Datasets This section describes how blood center works with an example in Zhejiang Province, China. It includes the responsibilities of the blood center and the partnership between the blood center and the hospital, and shows in detail how the proposed model will assist the blood center in accomplishing its mission better. ### 3.1. Blood Center Figure1 shows that, this study considers a provincial centralized blood supply system, including blood centers, blood stations, and hospitals. The entire centralized blood supply system completes the collection, management, and delivery of blood products. The blood center is responsible for collecting blood products, testing for viruses and bacteria, and supplying some hospitals. At the same time, blood centers assume the management, coordination, and operational guidance of blood collection and supply institutions. Blood stations are responsible for collecting, storing, and transporting blood to local hospitals. While receiving blood, hospitals must collect clinical information on current patients for analysis and decision-making at the blood center to make the blood supply more efficient. Our proposed AI blood consumption prediction system (BUPNN) receives clinical information from each hospital and uses it to predict the future blood consumption of each patient. The proposed system helps blood center specialists perform blood collection and transportation better.Figure 1 BS blood supply chain with one regional blood center and multiple hospitals. ### 3.2. Data Details and Challenges We collected data in their actual state to build a practically usable model. The data in this study are constructed by processing BS shipping data and the TRUST (Transfusion Research for Utilization, Surveillance, and Tracking) database from Zhejiang Province Blood Center and 12 hospitals in Zhejiang Province. The data include data from 2025 patients, including 1970 data from emergency trauma patients in 10 hospitals in Zhejiang Province from 2018 to 2020 and another 55 from patients in two hospitals in Wenling’s explosion in 2020.Each dataset mainly included the following parts. (1) General patient information, including case number, consultation time, pretest classification, injury time, gender, age, weight, diagnosis, penetrating injury, heart rate, diastolic blood pressure, systolic blood pressure, body temperature, shock index, and Glasgow coma index. (2) Injury status, including pleural effusion, abdominal effusion, extremity injury status, thoracic and abdominal injury status, and pelvic injury status. (3) Laboratory tests, including hemoglobin, erythrocyte pressure product, albumin, hemoglobin value 24 hours after transfusion therapy, base residual, PH (acidity), base deficiency, oxygen saturation, PT (prothrombin time), and APTT (partial thromboplastin time). (4) Burn situation, including burn area and burn depth. (5) Patient regression, including whether blood use: whether to transfuse suspended red blood cells, the first day of hospital transfusion, and the amount of blood transfusion.For a more straightforward presentation of the data distribution, preliminary dataset statistics are shown in Table1, and box plots by age, blood consumption, and missing rate are shown in Figure 2.Table 1 Statistics of datasets. 
| Hospital name | Abbreviation | Sample size | Average blood usage | Average missing rate | Male/female ratio | Average age |
| --- | --- | --- | --- | --- | --- | --- |
| Dongyang Hospital | DYang | 95 | 8.7 | 0.03 | 0.34 | 49.19 |
| Enze Hospital | EZe | 13 | 0.77 | 0.18 | 0.86 | 50.38 |
| Haining Hospital | HNing | 57 | 15.96 | 0.17 | 0.68 | 55.18 |
| Shiyi Hospital | SYi | 72 | 4.65 | 0.08 | 0.24 | 56.40 |
| Shaoyifu Hospital | SYiFu | 191 | 1.46 | 0.05 | 0.41 | 52.50 |
| Shangyu Hospital | SYu | 135 | 7.92 | 0.07 | 0.42 | 51.08 |
| Wenling Hospital | WLing | 42 | 11.62 | 0.09 | 0.45 | 58.98 |
| Xinchang Hospital | XChang | 55 | 11.09 | 0.03 | 0.49 | 57.89 |
| Xiaoshan Hospital | XShan | 62 | 0 | 0.08 | 0.44 | 50.82 |
| Yongkang Hospital | YKang | 194 | 2.36 | 0.06 | 0.62 | 64.43 |
| Yuyao Hospital | YYao | 65 | 9.85 | 0.03 | 0.35 | 54.68 |
| Zheer Hospital | ZEr | 1044 | 1.44 | 0.03 | 0.36 | 53.93 |

Figure 2: Boxplots of the relationship between hospital and age, total blood usage, and missing-value rate (panels (a)–(c)).

After a detailed definition of the problem and a description of the dataset, we summarize the main problems faced in this paper:

(i) Data hold missing values. As shown in Table 1 and Figure 2(c), some hospitals have more than 10% missing values. The missing values perturb the data distribution, severely affecting the model's training and performance.

(ii) Data have domain bias. From Table 1 and Figure 2(c), the missing-value rate and blood consumption differ markedly between hospitals. Such a data distribution impacts the cross-hospital generalization of the model. In addition, the collection of clinical information from different hospitals may also be biased due to differences in testing devices and habits.

## 4. Methodology

### 4.1. Problem Definitions

The following definition is made in this paper to discuss the role of predictors.

Definition 1 (blood consumption prediction problem, BCPP). Let (X_s, y_s) be a training dataset in which the clinical information X_s is implicitly linked to the blood usage y_s. The BCPP trains a model F(X; θ) on (X_s, y_s) and uses F(X; θ) to predict the blood usage of newly collected clinical information X_t, where θ is the model parameter. The BCPP includes a classification subproblem and a regression subproblem: for the classification subproblem, y_s and y_t are one-hot or categorical values; for the regression subproblem, y_s and y_t are continuous values.

### 4.2. Blood Data Complementation

Data complementation is the process of replacing missing data with substituted values. When substituting for a whole data point, it is known as "unit complementation"; when substituting for a component of a data point, it is known as "item complementation." Missing values introduce substantial noise, making data analysis more complex and less efficient. When one or more patient values are missing, most methods discard data with missing values by default, but data complementation attempts to fill in those values.
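As an illustration of the complementation rules that equations (1) and (2) below formalize, the short sketch that follows fills each missing component with the column mean, the column median, or the average of the K nearest complete records. The toy array is a hypothetical stand-in for the hospital records, while K = 5 mirrors the setting described in the text.

```python
import numpy as np
from sklearn.impute import SimpleImputer, KNNImputer

# Toy matrix of clinical indicators with missing entries (np.nan); a hypothetical
# stand-in for the patient records X_s described in Section 3.2.
X = np.array([
    [12.1,   35.0, np.nan],
    [ 9.4, np.nan,    0.9],
    [np.nan, 28.5,    1.4],
    [10.8,   31.2,    1.1],
    [11.5,   33.9, np.nan],
])

# Item complementation with the three rules of equation (2):
X_mean   = SimpleImputer(strategy="mean").fit_transform(X)     # I^Mean:   column mean
X_median = SimpleImputer(strategy="median").fit_transform(X)   # I^Median: column median
X_knn    = KNNImputer(n_neighbors=5).fit_transform(X)          # I^KNN:    average of the K = 5 nearest records (Euclidean distance)
```

In the experiments, these three rules correspond to the COM-Mea, COM-Mid, and COM-KNN schemes.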
However, missing data are also a reality, and the model should not require that all data be captured well. Therefore, data complementation is introduced to avoid performance degradation with missing values and improve the actual testing data’s performance.In this study, a single dataxi∈XS is complemented by(1)xiC=Cxi=Cxi,1,…,Cxi,f,…,Cxi,n,Cxi,f=xi,fxi,fis.not.missing,Ii,fxi,fis.missing,where xi,1, ⋯, xi,n are n data components of single data xi. If any component is missing, the imputed value If is used to fill this missing value. The If comes from mean value complementation, median value complementation, and KNN complementation:(2)Ii,fMean=MeanXf,Ii,fMedian=MedianXf,Ii,fKNN=KNN1K∑j∈NiK−NNxj,f,where MeanXfs and MedianXfs are the mean value or median value of training datasets on components f. NiK−NN is the KNN neighborhood of data i in the sense of European distance, K=5 in this paper. ### 4.3. Cross Hospitals Data Augmentation Data augmentation is a well-known neural network (NN) training strategy for image classification and signal processing [30]. Data augmentation improves the performance of the methods by precisely fitting the data distribution. First, data augmentation enhances the diversity of data, thereby overcoming overfitting. Second, data augmentation essentially reinforces the fundamental assumption of DR, i.e., the local connectivity of neighborhoods. Finally, it learns refined data distribution by generating more intra-manifold data based on sampled points.Cross-hospital data augmentation is introduced in a unified framework to generate new datax′=Tx:(3)xa=TxC=τxi,1C,…,τxi,fC,…,τxi,nC,τxfa=1−ru⋅xi,fC+ru⋅x¯i,fC,x˜∼Nihh∈H/hi,where the new augmented data xa is the combination of each feature τxi,fI. τxi,fC is calculated from the linear interpolation of original feature xi,fC and augmented feature x¯i,fC. In addition, the augmented feature x˜ is sampled from the neighborhood Nx of x. Nih is the k-NN neighborhood of the data i on the data of hospitals h. H is the set of the hospitals. H/hi means remove the neighborhood of data i’s hospitals. The combination parameter ru∼U0,pU is sampled from the uniform distribution U0,pU, and pU is the hyperparameter.The cross-hospital augmentation generates new data by combining datai with its neighborhoods in different hospitals. It reduces the terrible influence of the missing data and improves the training data’s divergence. Thus, the model learns a precise distribution to enhance the performance of our method. In addition, it works together with the loss function to align data from different hospitals, thus overcoming domain bias. ### 4.4. Architecture of BUPNN The proposed BUPNN does not require a unique backbone neural network. The multilayer perceptron network (MLP) is the backbone neural network. In addition to this, a new network architecture is proposed to enhance generalizability. The proposed neural network architecture is shown in Figure3.Figure 3 Architecture of BUPNN.A typical neural network model uses the network output directly to compute a supervised loss function. It may introduce undesirable phenomena such as overfitting. In this paper, similar to the paper [31], a manifold learning regularizer is proposed to suppress problems such as overfitting using the information in the latent space of the network. As shown in Figure 3, a complete neural network F⋅,wi,wj is divided into a latent network fi⋅,wi and an output network fj⋅,wj. 
The latent network is a preprocessing network that resists the adverse effects of noise and missing value completion. The output network is a dimensional reduction network that maps the data in high-dimensional latent space to the data in low-dimensional latent space. The latent network fi⋅,wi maps the data x′ input a network latent space and an output network fj⋅,wj further map it into the output space:(4)yi=fixi′,wi,yj=fixj′,wi,zi=fjyi,wj,zj=fjyj,wj,where xi′ and xj′ are the complementation and augmentation results of origin data x.Neural networks have powerful fitting performance, but at the same time, there is a risk of overfitting. The typical L2 regularization method can reduce overfitting, but it only limits the complexity of the network without constraining the network in terms of the manifold structure of the data. For example, there is no way to guarantee the distance-preserving and consistency of the network mapping. For this reason, we design a manifold regularizer based on manifold learning to solve this problem, and thus improve the generalization performance of the model and its usability for actual data. ### 4.5. Loss Function of BUPNN The loss function of BUPNN consists of two parts; one is the cross-entropy loss which uses label information, and the other is the manifold regularizer loss which uses hospital information and latent space information. #### 4.5.1. Manifold Regularizer Loss Manifold regularizer loss handles the domain bias in different hospitals during the training phase and provides a manifold constraint to prevent overfitting. Inconsistent medical equipment and subjective physician diagnoses in different hospitals cause domain bias in data between hospitals. The manifold regularizer loss guides the mapping of the neural networks to be insensitive to hospitals, thus overcoming domain bias (shown in Figure4). Therefore, the pairwise similarity between nodes is defined first. Considering the dimensional inconsistency of the latent space, we use the t-distribution with variable degrees of freedom as a kernel function to measure the point-pair similarity of the data:(5)κd,ν=Gamν+1/2νπGamν/21+d2ν−ν+1/2,where Gam⋅ is the Gamma function, and the degree of freedom ν controls the shape of the kernel function. d is the Euclidean pairwise distance of node pairs.Figure 4 How manifold regularizer loss works. Data from the same hospitals are clustered near each other in latent space due to the significant domain bias possessed by the current data. The manifold regularizer loss guides the neural network model to reduce the domain bias by pulling in neighboring nodes across hospitals to mix data from different hospitals.Based on the defined pairwise similarity in a single space, we minimize the similarity difference between two spaces by fuzzy set cross-entropy loss (two-way divergence) [32] Dp,q:(6)Dp,q=plogq+1−plog1−q,where p∈0,1. Notice that Dp,q is a continuous version of the cross-entropy loss. In BUPNN, equation (6) is used to guide the pairwise similarity of two latent spaces to fit each other.Therefore, the loss function of the manifold regularizer is defined as follows:(7)LD=∑i,jBLxi,xjLxi,xj=D1,κdijz,νzifxj=τxi,Dκdijy,νy,κdijz,νzotherwise,where B is the batch size and xj=τxi indicates whether xj is the augmented data of xi. If xj is the augmented data of xi, the loss pulls them together in the latent space; otherwise, the loss keeps a gap between them to preserve the manifold structure. 
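To make the regularizer concrete, the sketch below evaluates the t-distribution kernel of equation (5) and the two-way divergence of equation (6) on a toy batch of latent representations, following the notation above. The random tensors, the choice ν_y = 100 (from the experimental setup) and ν_z = 0.05 (from the BUPNN search range), and the clipping constant are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy.special import gammaln

def t_kernel(d: np.ndarray, nu: float) -> np.ndarray:
    """Student-t similarity kernel kappa(d, nu) of equation (5)."""
    log_norm = gammaln((nu + 1.0) / 2.0) - gammaln(nu / 2.0) - 0.5 * np.log(nu * np.pi)
    return np.exp(log_norm) * (1.0 + d ** 2 / nu) ** (-(nu + 1.0) / 2.0)

def two_way_divergence(p: np.ndarray, q: np.ndarray, eps: float = 1e-7) -> np.ndarray:
    """Fuzzy-set cross-entropy D(p, q) of equation (6), written as in the text."""
    p = np.clip(p, eps, 1.0 - eps)
    q = np.clip(q, eps, 1.0 - eps)
    return p * np.log(q) + (1.0 - p) * np.log(1.0 - q)

# Toy batch of latent representations; the small scale keeps the toy similarities
# away from zero. Y plays the role of the higher-dimensional latent space R^{d_y},
# Z of the low-dimensional output space R^{d_z}.
rng = np.random.default_rng(0)
Y = 0.1 * rng.normal(size=(8, 80))
Z = rng.normal(size=(8, 2))
d_y = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)   # pairwise d_ij^y, equation (8)
d_z = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)   # pairwise d_ij^z, equation (8)

# Pairwise similarities with space-specific degrees of freedom, and the
# "otherwise" branch of equation (7) for non-augmented pairs.
S_y = t_kernel(d_y, nu=100.0)   # nu_y = 100, as in the experimental setup
S_z = t_kernel(d_z, nu=0.05)    # nu_z taken from the BUPNN grid-search range (assumption)
L_pair = two_way_divergence(S_y, S_z)
```

For augmented pairs, the first branch of equation (7) replaces the similarity in the y-space with 1, pulling the pair together in the output space.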
The different degrees of freedom νy and νz in t-distribution are basic settings according to the dimensions of the space. Equation (7) describes the behavior of a manifold alignment that pairs data collected by different hospitals together in latent space to avoid the detrimental effects caused by domain bias:(8)dijy=dyi,yj,dijz=dzi,zj,where dijy and dijz are the distance between data node i and node j in spaces Rdy and Rdz. #### 4.5.2. Cross-Entropy Loss The loss function of the manifold regularizer is essentially an unsupervised term that also requires the use of label information while training the network model. The cross-entropy loss function is introduced simultaneously:(9)LCE=−∑i=1Nlilnσzi+1−liln1−σzi,where li is the label of data node i, σzi is the output of the network model, and ln⋅ is the natural logarithm. When solving the classification subtask, li is the category label, and when solving the regression subtask, li is the probability label.The loss function of BUPNN is(10)L=LD+βLCE,where β is a hyperparameter to balance LD and LCE. ### 4.6. Pseudocode and Complexity BUPNN’s pseudocode is shown in Algorithm1. The BUPNN includes the initialization and training phases. In the initialization phase, the kNN neighborhood of every single data is discovered. The time complexity of initialization phases is On1.14 [33], where n is the number of data. In the training phase, the complexity of the training phases is the same as that of artificial neural networks (ANN). BUPNN calculates the pairwise distance in a batch, so the complexity of training each batch is OB2, where B is the batch size. GPU can well accelerate the pairwise distance, so the training time consumption is the same as that of a typical ANN.Algorithm 1: BUPNN algorithm. Input: data: X=xii=1X, learning rate: η, epochs: E, batch size: B, β, νz, network: fθ,gϕ,Output: graph embedding: eii=1X.(1) whilei=0; i<E; i ++ do(2) xI=Ix ⊳ # blood data complementation in equation (1)(3) whileb=0; b<X/B; b ++ do(4) xa1,xa2=TxI,TxI ⊳ # blood data augmentation in equation (3)(5) ya1,ya2⟵fθxa1,fθxa2; ⊳ # map input data into Rdy space in equation (4)(6) za1,za2⟵fθya1,fθya2; ⊳ # map input data into Rdz space in equation (4)(7) dijy⟵dya1,ya2; dijz⟵dza1,za2; ⊳ #calculate distance in Rdy and Rdz(8) Sy⟵κRBijdijy,νy; Sz⟵κdijz,νz; ⊳ #calculate similarity in Rdy and Rdz in equation (5)(9) LD⟵DSy,Sz ; ⊳ # calculate loss function in equation (10).(10) θ⟵θ−η∂LD/∂θ, ϕ⟵ϕ−η∂LD/∂ϕ; ⊳ # update parameters.(11) end while(12) end while(13) zi⟵fθgϕxi; ⊳ # calculate the embedding result. ## 4.1. Problem Defensions The following definition is made in this paper to discuss the role of predictors.Definition 1. (Blood Consumption Prediction Problem, BCPP). LetXs,ys be a training dataset where clinical information Xs is implicitly linked to the blood usage ys. The BCPP train a model FXθ with Xs,ys, and use the model FXθ predict the blood usage of new collected data of clinical information Xt, where θ is the model parameter. The BCPP includes a classification subproblem and regression subproblem. For classification subproblemys and yt are one-hot or category values. For regression subproblem ys and yt are continuous values. ## 4.2. Blood Data Complementation Data complementation is the process of replacing missing data with substituted values. 
When covering for a data point, it is known as “unit complementation;” when substituting for a component of a data point, it is known as “item complementation.” Missing values introduce substantial noise, making data analysis more complex and less efficient. When one or more patient values are missing, most methods discard data with missing values by default, but data complementation attempts to fill in those values. However, missing data are also a reality, and the model should not require that all data be captured well. Therefore, data complementation is introduced to avoid performance degradation with missing values and improve the actual testing data’s performance.In this study, a single dataxi∈XS is complemented by(1)xiC=Cxi=Cxi,1,…,Cxi,f,…,Cxi,n,Cxi,f=xi,fxi,fis.not.missing,Ii,fxi,fis.missing,where xi,1, ⋯, xi,n are n data components of single data xi. If any component is missing, the imputed value If is used to fill this missing value. The If comes from mean value complementation, median value complementation, and KNN complementation:(2)Ii,fMean=MeanXf,Ii,fMedian=MedianXf,Ii,fKNN=KNN1K∑j∈NiK−NNxj,f,where MeanXfs and MedianXfs are the mean value or median value of training datasets on components f. NiK−NN is the KNN neighborhood of data i in the sense of European distance, K=5 in this paper. ## 4.3. Cross Hospitals Data Augmentation Data augmentation is a well-known neural network (NN) training strategy for image classification and signal processing [30]. Data augmentation improves the performance of the methods by precisely fitting the data distribution. First, data augmentation enhances the diversity of data, thereby overcoming overfitting. Second, data augmentation essentially reinforces the fundamental assumption of DR, i.e., the local connectivity of neighborhoods. Finally, it learns refined data distribution by generating more intra-manifold data based on sampled points.Cross-hospital data augmentation is introduced in a unified framework to generate new datax′=Tx:(3)xa=TxC=τxi,1C,…,τxi,fC,…,τxi,nC,τxfa=1−ru⋅xi,fC+ru⋅x¯i,fC,x˜∼Nihh∈H/hi,where the new augmented data xa is the combination of each feature τxi,fI. τxi,fC is calculated from the linear interpolation of original feature xi,fC and augmented feature x¯i,fC. In addition, the augmented feature x˜ is sampled from the neighborhood Nx of x. Nih is the k-NN neighborhood of the data i on the data of hospitals h. H is the set of the hospitals. H/hi means remove the neighborhood of data i’s hospitals. The combination parameter ru∼U0,pU is sampled from the uniform distribution U0,pU, and pU is the hyperparameter.The cross-hospital augmentation generates new data by combining datai with its neighborhoods in different hospitals. It reduces the terrible influence of the missing data and improves the training data’s divergence. Thus, the model learns a precise distribution to enhance the performance of our method. In addition, it works together with the loss function to align data from different hospitals, thus overcoming domain bias. ## 4.4. Architecture of BUPNN The proposed BUPNN does not require a unique backbone neural network. The multilayer perceptron network (MLP) is the backbone neural network. In addition to this, a new network architecture is proposed to enhance generalizability. The proposed neural network architecture is shown in Figure3.Figure 3 Architecture of BUPNN.A typical neural network model uses the network output directly to compute a supervised loss function. 
It may introduce undesirable phenomena such as overfitting. In this paper, similar to the paper [31], a manifold learning regularizer is proposed to suppress problems such as overfitting using the information in the latent space of the network. As shown in Figure 3, a complete neural network F⋅,wi,wj is divided into a latent network fi⋅,wi and an output network fj⋅,wj. The latent network is a preprocessing network that resists the adverse effects of noise and missing value completion. The output network is a dimensional reduction network that maps the data in high-dimensional latent space to the data in low-dimensional latent space. The latent network fi⋅,wi maps the data x′ input a network latent space and an output network fj⋅,wj further map it into the output space:(4)yi=fixi′,wi,yj=fixj′,wi,zi=fjyi,wj,zj=fjyj,wj,where xi′ and xj′ are the complementation and augmentation results of origin data x.Neural networks have powerful fitting performance, but at the same time, there is a risk of overfitting. The typical L2 regularization method can reduce overfitting, but it only limits the complexity of the network without constraining the network in terms of the manifold structure of the data. For example, there is no way to guarantee the distance-preserving and consistency of the network mapping. For this reason, we design a manifold regularizer based on manifold learning to solve this problem, and thus improve the generalization performance of the model and its usability for actual data. ## 4.5. Loss Function of BUPNN The loss function of BUPNN consists of two parts; one is the cross-entropy loss which uses label information, and the other is the manifold regularizer loss which uses hospital information and latent space information. ### 4.5.1. Manifold Regularizer Loss Manifold regularizer loss handles the domain bias in different hospitals during the training phase and provides a manifold constraint to prevent overfitting. Inconsistent medical equipment and subjective physician diagnoses in different hospitals cause domain bias in data between hospitals. The manifold regularizer loss guides the mapping of the neural networks to be insensitive to hospitals, thus overcoming domain bias (shown in Figure4). Therefore, the pairwise similarity between nodes is defined first. Considering the dimensional inconsistency of the latent space, we use the t-distribution with variable degrees of freedom as a kernel function to measure the point-pair similarity of the data:(5)κd,ν=Gamν+1/2νπGamν/21+d2ν−ν+1/2,where Gam⋅ is the Gamma function, and the degree of freedom ν controls the shape of the kernel function. d is the Euclidean pairwise distance of node pairs.Figure 4 How manifold regularizer loss works. Data from the same hospitals are clustered near each other in latent space due to the significant domain bias possessed by the current data. The manifold regularizer loss guides the neural network model to reduce the domain bias by pulling in neighboring nodes across hospitals to mix data from different hospitals.Based on the defined pairwise similarity in a single space, we minimize the similarity difference between two spaces by fuzzy set cross-entropy loss (two-way divergence) [32] Dp,q:(6)Dp,q=plogq+1−plog1−q,where p∈0,1. Notice that Dp,q is a continuous version of the cross-entropy loss. 
In BUPNN, equation (6) is used to guide the pairwise similarity of two latent spaces to fit each other.Therefore, the loss function of the manifold regularizer is defined as follows:(7)LD=∑i,jBLxi,xjLxi,xj=D1,κdijz,νzifxj=τxi,Dκdijy,νy,κdijz,νzotherwise,where B is the batch size and xj=τxi indicates whether xj is the augmented data of xi. If xj is the augmented data of xi, the loss pulls them together in the latent space; otherwise, the loss keeps a gap between them to preserve the manifold structure. The different degrees of freedom νy and νz in t-distribution are basic settings according to the dimensions of the space. Equation (7) describes the behavior of a manifold alignment that pairs data collected by different hospitals together in latent space to avoid the detrimental effects caused by domain bias:(8)dijy=dyi,yj,dijz=dzi,zj,where dijy and dijz are the distance between data node i and node j in spaces Rdy and Rdz. ### 4.5.2. Cross-Entropy Loss The loss function of the manifold regularizer is essentially an unsupervised term that also requires the use of label information while training the network model. The cross-entropy loss function is introduced simultaneously:(9)LCE=−∑i=1Nlilnσzi+1−liln1−σzi,where li is the label of data node i, σzi is the output of the network model, and ln⋅ is the natural logarithm. When solving the classification subtask, li is the category label, and when solving the regression subtask, li is the probability label.The loss function of BUPNN is(10)L=LD+βLCE,where β is a hyperparameter to balance LD and LCE. ## 4.5.1. Manifold Regularizer Loss Manifold regularizer loss handles the domain bias in different hospitals during the training phase and provides a manifold constraint to prevent overfitting. Inconsistent medical equipment and subjective physician diagnoses in different hospitals cause domain bias in data between hospitals. The manifold regularizer loss guides the mapping of the neural networks to be insensitive to hospitals, thus overcoming domain bias (shown in Figure4). Therefore, the pairwise similarity between nodes is defined first. Considering the dimensional inconsistency of the latent space, we use the t-distribution with variable degrees of freedom as a kernel function to measure the point-pair similarity of the data:(5)κd,ν=Gamν+1/2νπGamν/21+d2ν−ν+1/2,where Gam⋅ is the Gamma function, and the degree of freedom ν controls the shape of the kernel function. d is the Euclidean pairwise distance of node pairs.Figure 4 How manifold regularizer loss works. Data from the same hospitals are clustered near each other in latent space due to the significant domain bias possessed by the current data. The manifold regularizer loss guides the neural network model to reduce the domain bias by pulling in neighboring nodes across hospitals to mix data from different hospitals.Based on the defined pairwise similarity in a single space, we minimize the similarity difference between two spaces by fuzzy set cross-entropy loss (two-way divergence) [32] Dp,q:(6)Dp,q=plogq+1−plog1−q,where p∈0,1. Notice that Dp,q is a continuous version of the cross-entropy loss. In BUPNN, equation (6) is used to guide the pairwise similarity of two latent spaces to fit each other.Therefore, the loss function of the manifold regularizer is defined as follows:(7)LD=∑i,jBLxi,xjLxi,xj=D1,κdijz,νzifxj=τxi,Dκdijy,νy,κdijz,νzotherwise,where B is the batch size and xj=τxi indicates whether xj is the augmented data of xi. 
If xj is the augmented data of xi, the loss pulls them together in the latent space; otherwise, the loss keeps a gap between them to preserve the manifold structure. The different degrees of freedom νy and νz in t-distribution are basic settings according to the dimensions of the space. Equation (7) describes the behavior of a manifold alignment that pairs data collected by different hospitals together in latent space to avoid the detrimental effects caused by domain bias:(8)dijy=dyi,yj,dijz=dzi,zj,where dijy and dijz are the distance between data node i and node j in spaces Rdy and Rdz. ## 4.5.2. Cross-Entropy Loss The loss function of the manifold regularizer is essentially an unsupervised term that also requires the use of label information while training the network model. The cross-entropy loss function is introduced simultaneously:(9)LCE=−∑i=1Nlilnσzi+1−liln1−σzi,where li is the label of data node i, σzi is the output of the network model, and ln⋅ is the natural logarithm. When solving the classification subtask, li is the category label, and when solving the regression subtask, li is the probability label.The loss function of BUPNN is(10)L=LD+βLCE,where β is a hyperparameter to balance LD and LCE. ## 4.6. Pseudocode and Complexity BUPNN’s pseudocode is shown in Algorithm1. The BUPNN includes the initialization and training phases. In the initialization phase, the kNN neighborhood of every single data is discovered. The time complexity of initialization phases is On1.14 [33], where n is the number of data. In the training phase, the complexity of the training phases is the same as that of artificial neural networks (ANN). BUPNN calculates the pairwise distance in a batch, so the complexity of training each batch is OB2, where B is the batch size. GPU can well accelerate the pairwise distance, so the training time consumption is the same as that of a typical ANN.Algorithm 1: BUPNN algorithm. Input: data: X=xii=1X, learning rate: η, epochs: E, batch size: B, β, νz, network: fθ,gϕ,Output: graph embedding: eii=1X.(1) whilei=0; i<E; i ++ do(2) xI=Ix ⊳ # blood data complementation in equation (1)(3) whileb=0; b<X/B; b ++ do(4) xa1,xa2=TxI,TxI ⊳ # blood data augmentation in equation (3)(5) ya1,ya2⟵fθxa1,fθxa2; ⊳ # map input data into Rdy space in equation (4)(6) za1,za2⟵fθya1,fθya2; ⊳ # map input data into Rdz space in equation (4)(7) dijy⟵dya1,ya2; dijz⟵dza1,za2; ⊳ #calculate distance in Rdy and Rdz(8) Sy⟵κRBijdijy,νy; Sz⟵κdijz,νz; ⊳ #calculate similarity in Rdy and Rdz in equation (5)(9) LD⟵DSy,Sz ; ⊳ # calculate loss function in equation (10).(10) θ⟵θ−η∂LD/∂θ, ϕ⟵ϕ−η∂LD/∂ϕ; ⊳ # update parameters.(11) end while(12) end while(13) zi⟵fθgϕxi; ⊳ # calculate the embedding result. ## 5. Experiments ### 5.1. Baseline Methods In this Section, two subtasks, classification subtask and regression subtask, are defined. Seven state-of-the-art baseline classification and regression methods are chosen to discuss the relative advantages of BUPNN. The baseline approach is as follows. #### 5.1.1. K-Nearest Neighbor Classification/Regression Method (KNN) [37] The K-nearest neighbor classification/regression method is a nonparametric statistical method. The KNN classification method outputs the prediction by the “majority vote” of its neighbors. The KNN regression method outputs the prediction by the average of its neighbors. #### 5.1.2. Decision Tree Classification/Regression Method (DT) [35] A decision tree builds regression or classification models in the form of a tree structure. 
It breaks down a dataset into smaller and smaller subsets while, at the same time, an associated decision tree is incrementally developed. The final result is a tree with decision nodes and leaf nodes. A decision node has two or more branches representing values for the attribute tested. The leaf node represents a decision on the numerical target. The root node is the topmost decision node in a tree that corresponds to the best predictor. Decision trees can handle both categorical and numerical data. #### 5.1.3. Random Forest Classification/Regression Method (RF) [34] Random forests or random decision forests are ensemble learning methods for classification, regression, and other tasks that operate by constructing many decision trees at training time. For classification tasks, the output of the random forest is the class selected by most trees. For regression tasks, the mean or average prediction of the individual trees is returned. Random decision forests correct decision trees’ habit of overfitting their training set. #### 5.1.4. Extremely Randomized Trees Classification/Regression Method (ET) [38] Extremely randomized trees add a further step of randomization to random forest, while similar to ordinary random forests in that they are an ensemble of individual trees, there are two main differences: first, each tree is trained using the whole learning sample (rather than a bootstrap sample), and second, the top-down splitting in the tree learner is randomized. Furthermore, instead of computing the locally optimal cut-point for each feature under consideration (based on, e.g., information gain or the Gini impurity), a random cut-point is selected. #### 5.1.5. Support Vector Machine Classification/Regression Method (SVM) [39] Support vector machine is a supervised learning model with associated learning algorithms that analyze data for classification and regression analysis. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new standards to one category or the other, making it a nonprobabilistic binary linear classifier (although methods such as Platt scaling exist to use SVM in a probabilistic classification setting). SVM maps training examples to points in space to maximize the width of the gap between the two categories. New models are then mapped into that space and predicted to belong to a class based on which side of the hole they fall. #### 5.1.6. Gradient Boost Classification/Regression Method (GB) [40] Gradient boosting is a machine learning technique used in regression and classification tasks. It gives a prediction model in an ensemble of weak prediction models, typically decision trees. The resulting algorithm is called gradient-boosted trees. When a decision tree is a weak learner, it usually outperforms a random forest. #### 5.1.7. Adaptive Boost Classification/Regression Method (ADB) [36] The adaptive boost algorithm obtains a strong learner by combining a series of weak learners and integrating these vulnerable learners’ learning capabilities. Adaptive boost changes the weights of the samples based on the previous learners, increasing the importance of those previously misclassified samples and decreasing the weight of correctly classified samples so that the subsequent learners will focus on those misclassified samples. Finally, these learners are combined into strong learners by weighting. #### 5.1.8. 
Light Gradient Boosting Machine Classification/Regression Method (LightGBM) [41, 42] LightGBM is a distributed gradient boosting framework based on the decision tree algorithm. LightGBM is designed with two main ideas in mind: (1) to reduce the use of data in memory to ensure that a single machine can use as much data as possible without sacrificing speed and (2) to reduce the cost of communication to improve the efficiency when multiple machines are in parallel, and achieve linear acceleration in computation. It can be seen that LightGBM was originally designed to provide a fast and efficient data science tool with a low memory footprint, high accuracy, and support for parallel and large-scale data processing. ### 5.2. Dataset Partitioning and Grid Search Table1 and Figure 2 provide basic information about the data. Five data partitioning schemes are provided in this study to provide a detailed comparison of the performance differences between the different schemes.Three schemes are with data-complement, including COM-Mea, COM-Med, and COM-KNN as defined in equation (2). First, for COM-Mea, COM-Med, and COM-KNN, the missing values are complemented with a specific method. Following that, the training and testing sets are divided by 90%/10%. Two noncomplement schemes (NC-A and NC-B) are introduced to compare with the complement schemes. NC-A deletes all the data items with missing values, following the typical machine learning scheme. Following the data cleaning, NC-A divides the training and testing sets by 90%/10%. NC-B keeps the same training and testing set division as the data complement schemes and only removes all missing data from the training set and obtains a cleaner training set.The models are evaluated with 10-fold cross-validation for all the training sets and determine the optimal parameters by grid search. For a fair comparison, we control the search space of all baseline methods to be approximately equal. The search space of the compared process is in Table2.Table 2 Details of grid search. MethodsAbbreviationSearch spaceNoteK-nearest neighborKNNNei∈1,3,5,10,15,20,Nei⟶neighborssize,L∈10,20,30,50,70,100L⟶leaf.sizeRandom forestRFNE∈80,90,100,110,120,130,NE⟶boosted.trees.size,MSS∈2,3,4,5,6,7MSS⟶samples.split.sizeDecision treeDTMSL∈1,2,3,5,7,10,MSL⟶sample.size.inaleaf,MSS∈2,3,5,7,10,15MSS⟶samples.split.sizeExtra treeETMSL∈1,2,3,5,7,10,MSL⟶sample.size.inaleaf,MSS∈2,3,5,7,10,15MSS⟶samples.split.sizeSupport vector machineSVMM∈10,50,100,300,500,NE⟶maxiterations,T∈1e−4,5e−4,1e−3,2e−3,3e−3,5e−3T⟶tolerance.for.stopping.criteriaGradient boostingGBNE∈80,90,100,110,120,130,NE⟶boosted.trees.size,MSS∈ [2, 3, 4, 5, 6, 7]MSS⟶samples.split.sizeMultilayer perceptronMLPL∈2,3,4,5,6,L⟶number.of.layerWD∈ [0.1, 0.2, 0.3, 0.4]WD⟶weight.declayAdaptive boostADBNE∈40,50,60,70,80,90,NE⟶boosted.trees.size,LR∈0.8,0.9,1,2,5,LR⟶learning.rate\endLight gradient boosting machineLGBMNE∈21,26,31,26,41,NE⟶boosted.trees.size,L∈80,90,100,110,120,130L⟶leaf.size\endBlood usage prediction neural networkBUPNN (ours)νz∈0.01,0.03,0.05,0.07NE⟶boosted.trees.size,β∈0.1,0.2,K∈100,200,300L⟶leaf.size\end ### 5.3. Experimental Setup We initialize the other NN with the Kaiming initializer. We adopt the AdamW optimizer [43] with learning rate of 0.02 and weight decay of 0.5. All experiments use a fixed MLP network structure, fθ,w: (−1, 500, 300, 80), gϕ: (80, 500, 2), where −1 is the features number of the dataset. The number of epochs is 400. The batch size is 2048. νy=100. ### 5.4. 
### 5.4. Comparison on Classification Subtask

Although the blood data for each patient are collected for the regression subtask, it is equally important to predict whether a patient needs a blood transfusion at all. Especially in emergencies, indicating whether a patient needs a blood transfusion in the short term is more important than estimating the amount of blood used throughout the treatment cycle. Therefore, we first evaluate the performance of the proposed BUPNN on the classification subtask. Two evaluation metrics, classification accuracy (ACC) and area under the ROC curve (AUC), are used to compare the classifiers' performance from multiple perspectives. The performance comparison of BUPNN and eight baseline methods is shown in Tables 3 and 4. In addition, the ROC curves of all the compared methods on the different schemes are shown in Figure 5. The scatter plot of the BUPNN predictions for the COM-Mea scheme is shown in Figure 6.

Table 3: Classification AUC comparison with the baseline methods; the best result is shown in bold and the second best in italics. The value in brackets shows how much BUPNN exceeds the best of the other methods.

| Scheme | KNN | RF | MLP | ET | SVM | GB | AdaB | LGBM | BUPNN |
|---|---|---|---|---|---|---|---|---|---|
| NC-A | 0.8072 | 0.9166 | 0.8436 | 0.8443 | 0.8736 | 0.8949 | 0.9072 | 0.8881 | 0.9229 (↑0.0063) |
| NC-B | 0.7302 | 0.8109 | 0.7421 | 0.8178 | 0.7116 | 0.7214 | 0.8324 | 0.8119 | 0.8349 (↑0.0240) |
| COM-mid | 0.8009 | 0.8526 | 0.7764 | 0.8437 | 0.7435 | 0.8442 | 0.8420 | 0.8508 | 0.8843 (↑0.0317) |
| COM-mea | 0.8054 | 0.8591 | 0.8399 | 0.8252 | 0.7553 | 0.8470 | 0.8562 | 0.8630 | 0.8797 (↑0.0167) |
| COM-KNN | 0.8033 | 0.8575 | 0.7912 | 0.8321 | 0.7739 | 0.8446 | 0.8526 | 0.8620 | 0.8761 (↑0.0141) |
| Average | 0.7894 | 0.8593 | 0.7992 | 0.8326 | 0.7716 | 0.8304 | 0.8581 | 0.8552 | 0.8796 (↑0.0202) |

Table 4: Classification ACC comparison with the baseline methods; the best result is shown in bold and the second best in italics. The value in brackets shows how much BUPNN exceeds the best of the other methods.

| Scheme | KNN | RF | MLP | ET | SVM | GB | AdaB | LGBM | BUPNN |
|---|---|---|---|---|---|---|---|---|---|
| NC-A | 0.7113 | 0.8351 | 0.7367 | 0.8144 | 0.7732 | 0.8454 | 0.8557 | 0.8454 | 0.8454 (↓0.0103) |
| NC-B | 0.6539 | 0.7531 | 0.7512 | 0.7428 | 0.6235 | 0.7409 | 0.7557 | 0.7445 | 0.7643 (↑0.0086) |
| COM-mid | 0.7340 | 0.8030 | 0.7589 | 0.7734 | 0.5567 | 0.7931 | 0.7537 | 0.7783 | 0.8177 (↑0.0147) |
| COM-mea | 0.7340 | 0.7931 | 0.7546 | 0.7635 | 0.6305 | 0.7980 | 0.7931 | 0.8030 | 0.8177 (↑0.0147) |
| COM-KNN | 0.7241 | 0.7931 | 0.7768 | 0.7685 | 0.6059 | 0.7734 | 0.7783 | 0.7734 | 0.8226 (↑0.0295) |
| Average | 0.7115 | 0.7955 | 0.7560 | 0.7725 | 0.6380 | 0.7902 | 0.7873 | 0.7890 | 0.8135 (↑0.0180) |

Figure 5: ROC curve comparison for the COM-Mea scheme, with missing values filled with the median value. The closer a curve is to the upper left corner, the better the model's performance. The symmetry of the curve along the line from (0, 1) to (1, 0) indicates the balanced performance of the model.

Figure 6: Scatter plot of the BUPNN predictions for the COM-Mea scheme, with missing values filled with the median value. The vertical coordinate distinguishes different samples, and the horizontal coordinate indicates the predicted value of the BUPNN model for each sample. A predicted value of less than 0.5 indicates that no blood is needed, and a value greater than 0.5 indicates that blood is needed. The color of each point indicates whether the prediction is correct.
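For reference, the ACC and AUC values reported in Tables 3 and 4 can be computed for any fitted classifier along the following lines. This is a minimal sketch, assuming scikit-learn; the synthetic data and the random forest are stand-ins for the clinical dataset and the compared models, and the 90%/10% split follows Section 5.2.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the clinical feature matrix and transfusion label.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1, random_state=0)  # 90%/10%

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

acc = accuracy_score(y_te, clf.predict(X_te))             # classification accuracy (ACC)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])  # area under the ROC curve (AUC)
print(f"ACC={acc:.4f}  AUC={auc:.4f}")
```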
#### 5.4.1. Data Complementation Delivers Performance Improvements

From the performance results of NC-B, COM-Mid, COM-Mea, and COM-KNN in Tables 3 and 4, we observe that the schemes with data complementation yield better performance. We attribute this to the more abundant training data provided by data complementation. Although the complemented data are imperfect, artificial intelligence models can still learn more helpful information from them. Although the model trained with only clean data (NC-A) has the highest performance score, scheme NC-A includes only the clean data and cannot be conveniently generalized to real data with missing values. Because missing values are unavoidable in real-world application scenarios, data complementation is the better choice for improving the model's performance.

#### 5.4.2. BUPNN Outperforms Almost All Baseline Methods

From Tables 3 and 4, we observe that the proposed BUPNN has advantages over all the baseline methods. For the AUC metric, BUPNN has the advantage in all five schemes; the COM-Mid scheme shows the most significant benefit, outperforming the second-best method (LGBM) by 3.71%. For the ACC metric, BUPNN has the advantage in all five schemes; the COM-KNN scheme shows the most significant benefit, outperforming the second-best method (LGBM) by 2.95%. BUPNN has more than a 1% average advantage over the second-best method in both metrics. We attribute this to the fact that BUPNN is a neural network-based model, which performs better with richer data. In addition, the proposed manifold loss function improves the model's generalization and thus enhances performance on the testing set.

#### 5.4.3. BUPNN Is Better at Handling Complemented Datasets

From the four schemes with the same testing set (NC-B, COM-Mid, COM-Mea, and COM-KNN) in Tables 3–5, we observe that BUPNN performs better than all the baseline methods when handling complemented datasets. We attribute this to the data augmentation of the proposed BUPNN model. Data augmentation generates new training data from the complemented data and attenuates the effect of missing values, thus guiding BUPNN to learn a smooth manifold.

Table 5: Median classification ACC comparison with the baseline methods on all data from the different hospitals; the best result is shown in bold and the second best in italics. The value in brackets shows how much BUPNN exceeds the best of the other methods. KNNRFETMLPGBLGBMSVMVCADBBUPNNDYang1.0001.0000.5710.7230.7860.7140.8570.9291.000 (↑0.000)SYi1.0000.6671.0001.0001.0000.6671.0001.0001.000 (↑0.000)SYiFu0.7960.7780.8150.7350.5560.5930.6300.7780.926 (↑0.111)WLing0.5001.0000.7500.2501.0001.0001.0001.0001.000 (↑0.000)ZEr0.7480.8000.7430.7860.7070.8330.8280.8010.834 (↑0.001)Average0.7650.8340.7640.6980.7750.7500.8330.8540.909 (↑0.055)

#### 5.4.4. BUPNN Has Better Generalizability between Different Hospitals

To evaluate the generalization performance of the proposed BUPNN and the baseline methods, we tested the ACC on the data from the various hospitals (Figure 5). The proposed BUPNN performs best in five of the six hospitals and has a clear advantage on the Shaoyifu Hospital data. From Table 1 and Figure 1, we observe that Shaoyifu Hospital has a relatively obvious domain bias: it has the lowest male-to-female ratio, is among the top three in average missing rate, and has many outliers among its missing values. We argue that this domain bias degrades the performance of the baseline methods, and the proposed BUPNN outperforms them because it overcomes the domain bias. Furthermore, the manifold regularizer loss function provides a good manifold constraint that improves the model's generalizability between different hospitals (as shown in Figure 4).
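The complementation schemes compared above correspond to mean, median, and K-nearest-neighbor filling of missing values. The following is a minimal sketch, assuming scikit-learn and a toy feature matrix; in the study the imputers would be applied to the clinical feature matrix before the 90%/10% split.

```python
import numpy as np
from sklearn.impute import KNNImputer, SimpleImputer

# Toy feature matrix with missing clinical values encoded as np.nan.
X = np.array([[36.5, 120.0, np.nan],
              [37.2, np.nan, 4.1],
              [np.nan, 95.0, 3.8],
              [36.8, 110.0, 4.5]])

X_mea = SimpleImputer(strategy="mean").fit_transform(X)     # COM-Mea
X_med = SimpleImputer(strategy="median").fit_transform(X)   # COM-Med
X_knn = KNNImputer(n_neighbors=2).fit_transform(X)          # COM-KNN
print(X_mea, X_med, X_knn, sep="\n\n")
```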
### 5.5. Comparison on Regression Subtask

We next discuss the performance of BUPNN on the regression subtask, in which the model is asked to predict the total blood usage over the patient's treatment so that the blood center can schedule supply in advance. The experimental setting of the regression subtask is the same as that of the classification subtask. The data are collected from the same patients, but the target variable is the total blood usage in the patient's treatment. The mean square error (MSE) metric is calculated to measure the performance of the compared methods under each scheme. The performance comparison is shown in Table 6.

Table 6: MSE comparison for all data; the best result is shown in bold and the second best in italics. The value in brackets shows by how much BUPNN improves on the best of the other methods.

| Scheme | LR | SVM | MLP | ET | GB | RF | LGBM | AdaB | BUPNN |
|---|---|---|---|---|---|---|---|---|---|
| NC-A | 0.0095 | 0.0112 | 0.0121 | 0.0194 | 0.0093 | 0.0090 | 0.0090 | 0.0161 | 0.0079 (↑0.0012) |
| COM-mid | 0.0042 | 0.0075 | 0.0167 | 0.0103 | 0.0040 | 0.0043 | 0.0042 | 0.0175 | 0.0038 (↑0.0004) |
| COM-mea | 0.0042 | 0.0076 | 0.0076 | 0.0074 | 0.0039 | 0.0041 | 0.0044 | 0.0151 | 0.0037 (↑0.0002) |
| COM-KNN | 0.0041 | 0.0065 | 0.0082 | 0.0142 | 0.0040 | 0.0039 | 0.0043 | 0.0109 | 0.0036 (↑0.0003) |
| Average | 0.0055 | 0.0082 | 0.0111 | 0.0128 | 0.0053 | 0.0053 | 0.0054 | 0.0149 | 0.0048 (↑0.0005) |

#### 5.5.1. BUPNN Shows a Consistent Advantage on the Regression Subtask

From Table 6, we observe that the proposed BUPNN outperforms all the baseline methods on all four schemes. BUPNN has the most significant benefit for the COM-mid scheme, outperforming the second-best method (LGBM) by 0.0004 (10.5%). The percentage improvement shows that the advantage of BUPNN is more evident in the regression subtask. Furthermore, it indicates that the proposed BUPNN is more suitable for handling the more difficult regression task and has strong application potential as richer data are collected.

### 5.6. The Explanatory Analysis of BUPNN

Artificial neural network-based models are considered to have strong performance, but their black-box nature makes them hard to interpret and therefore hard for domain experts to trust. The proposed BUPNN, in contrast, has stronger interpretability because the smoothness of its mapping yields a model that can be readily explained by existing model interpreters. An easy-to-use neural network interpretation tool is introduced to explain the decision process of BUPNN. The visual presentation of the interpretation results is shown in Figure 7.

Figure 7: Explanation of the decision process of BUPNN for single samples. (a) (b)

In Figure 7, the decision processes of BUPNN for three samples are explained by the "shap" tool. The vertical coordinates represent the clinical indicators, and the horizontal coordinates represent the predicted values of BUPNN. The images show the decision process of the model from bottom to top. Specifically, the model predicts E = 0.47 for an average patient when no information about the patient has been observed, i.e., no transfusion is needed. We believe this is reasonable because only patients who are sufficiently injured need blood transfusions. Next, using Figure 7(a) as an example, the BUPNN model observed additional patient information, such as "no injury to the lower body," "no injury to the abdomen," "no injury to the pelvis," and "patient's heartbeat is not accelerated." This evidence made BUPNN more confident in the original judgment that a blood transfusion is unnecessary.
Although the penetration injury raised the probability of transfusion, several essential features (e.g., albumin and hemoglobin) ultimately guided the model to predict f(x) = 0.02. Figure 7(b) provides more examples of how to interpret the decision process of BUPNN. These two examples show what kind of information BUPNN needs to observe before predicting that a sample requires a blood transfusion. Among them, we found some characteristic signatures of the need for transfusion, such as injuries to the abdomen, pelvis, and pleura. Next, we calculated the importance of all features in determining whether a blood transfusion is needed and display them in Figure 8 as a bar chart.

Figure 8: Important indicators of blood transfusion need obtained by the interpretable analysis of BUPNN.
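The per-sample explanations in Figures 7 and 8 are produced with the "shap" tool. The following is a minimal sketch of that workflow, assuming the `shap` package and using a small scikit-learn MLP as a stand-in for BUPNN; the explainer's expected value plays the role of the E = 0.47 base prediction for an "average patient" discussed above.

```python
import shap
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Stand-in model and data; in the paper the explained model is BUPNN itself.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0).fit(X, y)

# Kernel SHAP treats the model as a black box: it needs only a prediction function
# and a background sample that defines the "no information observed" baseline.
background = shap.sample(X, 50)
explainer = shap.KernelExplainer(lambda a: model.predict_proba(a)[:, 1], background)
shap_values = explainer.shap_values(X[:1])   # per-feature contributions for one patient

print(explainer.expected_value)              # base value (analogue of E = 0.47)
print(shap_values)                           # how each clinical indicator shifts the prediction
```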
## 6. Conclusions

In this paper, a neural network-based blood usage predictor, the blood usage prediction neural network (BUPNN), is proposed to serve the scheduling of blood supply in provincial blood centers. The proposed BUPNN receives clinical information from hospital patients and predicts whether a patient needs blood and the amount of blood used. BUPNN mainly addresses the problem of predicting blood usage with high availability and generalization in real-life situations, and it accomplishes the prediction task well despite the missing values and domain bias of different hospitals. BUPNN introduces a manifold learning-based regularizer for the blood prediction problem to improve the model's generalization on data from different hospitals, and it further enhances the model with data augmentation and data complementation.

---

*Source: 1003310-2023-01-31.xml*
2023
# Synthesis and Thermal Adsorption Characteristics of Silver-Based Hybrid Nanocomposites for Automotive Friction Material Application

**Authors:** R. Venkatesh; P. Sakthivel; M. Vivekanandan; C. Ramesh Kannan; J. Phani Krishna; S. Dhanabalan; T. Thirugnanasambandham; Manaye Majora
**Journal:** Adsorption Science & Technology (2023)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2023/1003492

---

## Abstract

Advances in friction materials depend on developing multiceramic-reinforced hybrid nanocomposites with superior tribomechanical properties. Silver-based matrix metals have gained significance in applications such as bearings, ratchets, and electrical contacts owing to their high frictional resistance and good thermal and chemical stability compared with traditional metals. The present research develops silver-based hybrid nanocomposites containing 50 nm alumina (Al2O3) and silicon carbide (SiC) nanoparticles, mixed in ratios of 0 wt% Al2O3/0 wt% SiC, 5 wt% Al2O3/0 wt% SiC, and 5 wt% Al2O3/5 wt% SiC, via the semisolid vacuum stir-cast technique. The vacuum technology minimizes casting defects and improves composite properties. The cast composite samples are used to study the effect of the reinforcements on thermal adsorption, conductivity, diffusivity, and frictional resistance. The composite containing 5 wt% Al2O3np/5 wt% SiCnp is found to have the optimum thermal and frictional behaviour: its thermal adsorption and frictional resistance are increased by 30% and 27%, respectively, compared to unreinforced cast silver. The Ag/5 wt% Al2O3np/5 wt% SiCnp hybrid nanocomposite is therefore recommended for automotive friction-bearing applications.

---

## Body

## 1. Introduction

Modern research is driven to find new advanced materials that meet industrial requirements and offer the following qualities: high strength, good thermal stability, enhanced corrosion resistance, the ability to withstand high frictional force with reduced wear loss, and an increased coefficient of friction. Many researchers have experimentally studied aluminium alloy-based matrix composites [1–4] because of their lower density, high ductility, and good strength and stiffness compared to conventional materials [5–8]. However, the characteristics of composites and their hybrid systems vary with particle shape and size, mixing ratio, casting process parameters, and processing method [9–11]. Particulate-reinforced composites can deliver high strength and friction resistance [12]. Specifically, adding silicon carbide, aluminium oxide, tungsten carbide, boron carbide, or zirconium dioxide ceramics to a metal matrix results in high hardness, resistance to high frictional force, and high thermal stability [13]. Zinc/lead- and nickel-based matrix composites are adopted in aviation and space applications due to their high frictional resistance, corrosion resistance, and thermal stability [14, 15]. Silver-based metal matrix composites (SMMCs) can provide an extraordinary (solid) lubrication effect and withstand the high thermal stress generated by high frictional forces in aerospace motor applications, high-friction bearings, and electrical contacts [16–20]. However, few studies are available on silver matrix composites for automotive friction material applications [21]. Pure silver is characterized by low wear resistance and superior thermoelectrical conductivity.
Due to these properties, several industries alloy silver with Zn, Cu, Mn, Ni, and aluminium alloys to obtain specific performance [22]. In past decades, the frictional properties of different constitutions of aluminium/mica and copper/coated ground mica have been studied for bearing applications; the addition of mica promotes higher frictional strength [23]. A CSM tribotester was used to estimate the dry sliding wear characteristics of a silver-copper-based composite, and the results reveal that the composite's worn surface is directly impacted by the friction coefficient and wear rate [24]. Most matrix materials are bonded with suitable reinforcements via solid-state processing (powder metallurgy), liquid-state processing (gravity, centrifugal, stir, and vacuum stir casting), or vapour-state processing (vapour and spray deposition) techniques [25, 26]. Among the fabrication techniques listed above, liquid-state processing distinctively improves the physical-chemical bonding between matrix and reinforcements, resulting in increased product quality and suitability for mass production. Most researchers report that liquid-state stir processing is well suited to producing complex shapes economically and at large scale [27–31].

Based on the above literature, various matrix alloying materials, reinforcements, and processing methods have been discussed along with the properties they enhance. The present research develops a silver matrix hybrid nanocomposite containing Al2O3/SiC nanoparticles via vacuum stir-cast technology. The fabricated samples were studied for their thermal and friction characteristics. The influence of both ceramics on the silver matrix results in enhanced thermal adsorption with reduced mass loss, as well as better conductivity, diffusivity, and wear rate. The wear rate of the advanced composites is evaluated per the ASTM G99-05 standard. Finally, all the test results were compared, and the constitution with the best properties for automotive friction material applications was recommended.

## 2. Experimental Details

### 2.1. Selection of Primary Matrix Material

The present study chose a silver-based alloy as the primary matrix material. The properties of silver are listed in Table 1.

Table 1: Properties of the silver matrix.

| Material | Density | Elastic modulus | Tensile strength | Melting temperature | Thermal conductivity | Emissivity | Specific heat capacity |
|---|---|---|---|---|---|---|---|
| Ag | 10.49 g/cc | 76 GPa | 140 MPa | 962°C | 419 W/mK | 0.055 | 0.234 J/g·°C |

### 2.2. Selection of Secondary Phase Reinforcements

The hard ceramic aluminium oxide and silicon carbide particles, with an average size of 50 nm, are chosen as secondary phase reinforcements to obtain better composite performance [28, 30, 31]. The properties of both ceramic phases are presented in Table 2.

Table 2: Properties of the reinforcements.

| Reinforcement | Density (g/cc) | Hardness (VHN) | Modulus of elasticity (GPa) | Melting point (°C) | Thermal conductivity (W/mK) |
|---|---|---|---|---|---|
| Al2O3 | 3.96 | 1366 | 375 | 2055 | 30.12 |
| SiC | 3.11 | 4450 | 412 | 2799 | 77.54 |

### 2.3. The Mixing Ratio of Composite

Table 3 illustrates the phase constitution of the silver matrix with respect to the weight percentages of reinforcement used in the production of the silver matrix hybrid nanocomposite.

Table 3: Phase constitutions of the silver matrix composites.

| Sample | Description | Ag (wt%) | Al2O3 (wt%) | SiC (wt%) |
|---|---|---|---|---|
| 1 | Alloy | 100 | 0 | 0 |
| 2 | Nanocomposite | 95 | 5 | 0 |
| 3 | Hybrid nanocomposite | 90 | 5 | 5 |

### 2.4. Method and Processing of Composites

Figures 1(a) and 1(b) show the full fabrication setup for the silver matrix composite with the vacuum pump assembly.
Silver round bars of different sizes were preheated at 400°C for 30 min and melted in an electrical furnace at an applied temperature of 1000°C to 1200°C under an inert atmosphere (argon supplied at a constant 3 l/h) to avoid thermal oxidation; higher temperatures may increase oxidation and thus porosity [27, 28]. According to the phase constitutions (mixing ratios) reported in Table 3, the preheated reinforcements (Al2O3/SiC) were added into the molten silver pool stirred at 500 rpm. A graphite mechanical stirrer was used to improve fluidity for surface preparation; a similar approach was reported for a silver composite [20]. The thoroughly stirred molten mixture was cast into the silver matrix hybrid nanocomposite under an applied vacuum pressure of 1×10^5 bar, which minimizes casting defects and improves composite performance. Table 4 lists the processing parameters of the silver matrix composites.

Figure 1: Fabrication setup for the silver matrix composite. (a) Actual setup. (b) Processing chain with different thermal phases. (c) Pin-on-disc wear apparatus.

Table 4: Process parameters for the silver matrix composites.

| Parameter | Value |
|---|---|
| Preheating temperature (matrix and reinforcements) | 400°C |
| Rotational speed (stir) | 500 rpm |
| Impeller type | Graphite |
| Stir time | 10 min |
| Feed rate | 0.9 g/sec |
| Die preheat temperature | 350°C |
| Vacuum pressure | 1×10^5 bar |

### 2.5. Evaluation of Thermal Characteristics

The thermal characteristics of the silver matrix MMCs are evaluated with an STA Jupiter 449/F3 differential thermal analyzer configured for −150°C to 2400°C under an argon atmosphere. The laser flash technique is used to find the thermal conductivity (λ) and thermal diffusivity, as given in equation (1) [20]:

λ(T) = ρ(T) · α(T) · Cp(T)   (1)

Here λ is the thermal conductivity, T the temperature, ρ the density of the material, α the thermal diffusivity, and Cp the specific heat capacity.

NETZSCH DIL 402C and LFA 427 instruments are used to evaluate the linear thermal expansion and thermal diffusivity of samples of ɸ8 mm diameter and 25 mm length over an ambient temperature range of 27°C to 1000°C at a heating rate of 7°C/min.

### 2.6. Evaluation of Frictional Characteristics

The dry sliding frictional characteristics of the cast silver, nanocomposite, and hybrid nanocomposite were evaluated on a rotating pin-on-disc tribotester with a hardened steel disc at applied loads of 10 N, 20 N, and 30 N under a constant sliding velocity of 0.75 m/sec. These conditions were used to estimate the effect of the reinforcements on the frictional resistance of the silver matrix. The top view of the wear tester is shown in Figure 1(c).
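Equation (1) combines density, thermal diffusivity, and specific heat into the thermal conductivity obtained from the laser flash measurement. The following is a minimal sketch of the unit handling, assuming Python; the density and specific heat are taken from Table 1, while the diffusivity value is an assumed, handbook-like figure for pure silver used only to illustrate the calculation.

```python
def thermal_conductivity(rho_g_cc, alpha_mm2_s, cp_j_g_k):
    """Equation (1): lambda(T) = rho(T) * alpha(T) * Cp(T), returned in W/(m*K).

    rho_g_cc    -- density in g/cc
    alpha_mm2_s -- thermal diffusivity in mm^2/s
    cp_j_g_k    -- specific heat capacity in J/(g*K)
    """
    rho = rho_g_cc * 1000.0      # g/cc   -> kg/m^3
    alpha = alpha_mm2_s * 1e-6   # mm^2/s -> m^2/s
    cp = cp_j_g_k * 1000.0       # J/(g*K) -> J/(kg*K)
    return rho * alpha * cp

# Illustrative check: density and Cp from Table 1, assumed diffusivity of ~170 mm^2/s.
print(round(thermal_conductivity(10.49, 170.0, 0.234), 1))  # ~417 W/mK, close to the 419 W/mK in Table 1
```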
## 3. Results and Discussion

### 3.1. Differential Thermal Effect on Mass Loss of Silver Matrix Composites

Figures 2(a)–2(c) illustrate the differential thermal effect on the mass loss of cast silver compared with the Al2O3- and SiC-reinforced silver nano- and hybrid nanocomposites, evaluated over the thermal region of 27°C to 1000°C.
The temperature-dependent mass loss of each test sample is explained in detail below. As the temperature increases from ambient to high temperature, the material passes from the solid to the semisolid phase and then to the liquid phase (solid/liquid and liquid/solid phase transformations) during the heating and cooling stages of the thermal evaluation. Figure 2(a) reveals that the mass loss of the cast silver (Ag) alloy gradually decreases from 0.02 μV/mg to 0.0098 μV/mg as the temperature increases from 27°C to 825°C under an inert atmosphere. Increasing the temperature of cast silver beyond 825°C results in the formation of the plastic region, with the mass loss rising to 0.0302 μV/mg. This is attributed to the reaction of the coarse intermetallic grain structure and the dissolution of the Ag phase. Similar conditions have been reported by Jakub et al. [20] during the evaluation of silver matrix MMCs. The wettability of the composite was limited by the volume fraction of SiC [22].

Figure 2: Differential thermal effect on mass loss of the silver matrix hybrid nanocomposites. (a) Ag/0 wt% Al2O3np/0 wt% SiCnp, (b) Ag/5 wt% Al2O3np/0 wt% SiCnp, and (c) Ag/5 wt% Al2O3np/5 wt% SiCnp.

Figure 2(b) shows the variation in mass loss of the cast Ag nanocomposite containing 5 wt% alumina nanoparticles during differential thermal analysis. The red and blue curves represent the heating and cooling phases of the Ag matrix composite processed under the conditions mentioned in Table 2. The heating curve in Figure 2(b) slopes gradually from 0.0213 μV/mg to 0.0086 μV/mg up to the semisolid phase temperature of 760°C. On the reversal of the heating curve, the maximum temperature of 840°C shows reduced mass loss in the bonded matrix-reinforcement phase; the applied stir speed of 500 rpm results in the formation of a homogeneous, uniform structure. A constant stir speed may reduce the composite's casting defects (weight loss), and the selection of the stir-cast processing parameters is essential to composite quality. Discontinuous stirring increases cavities in the composite, resulting in increased porosity [27].

Figure 2(c) shows the phase transformations during the heating and cooling of the silver matrix hybrid nanocomposite over the temperature range of 27°C to 1200°C. The intermediate transition zone (820°C) for both the heating and cooling phases shows a minimum mass loss of less than 0.009 μV/mg. Here, the thermal response of the silver matrix composite varies with the chemical constitution and the bonding strength between the matrix and the alumina/silicon carbide nanoparticles.

A similar scenario was reported by Mata and Alcala [9] during the evaluation of friction material performance. However, both reinforcements remain thermally stable at the high temperatures (around 1000°C) used to melt silver. The intermediate phase for the silver alloy, silver nanocomposite, and silver hybrid nanocomposite was found by differential thermal analysis, and the values are tabulated in Table 5.

Table 5: Intermediate transition zone for the unreinforced and reinforced silver matrix composites from differential thermal analysis.

| Sample | Description | Intermediate zone temperature (°C) | Mass loss (μV/mg) |
|---|---|---|---|
| 1 | Alloy | 850 | 0.0098 |
| 2 | Nanocomposite | 840 | 0.0086 |
| 3 | Hybrid nanocomposite | 820 | 0.0009 |

### 3.2. Effect of Reinforcement on Thermal Adsorption and Thermal Diffusivity of Silver Matrix Composites

Figure 3 describes the detailed heat wave circulation (linear expansion and adsorption) of the unreinforced Ag alloy and the Al2O3/SiC-reinforced composites.
The coefficient of thermal expansion of the unreinforced Ag alloy is 18 × 10⁻⁶ per K (a linear expansion of 1.70%). Adding 5 wt% alumina nanoparticles to the Ag alloy lowers this to 4.8 × 10⁻⁶ per K, because the rigid ceramic particles increase hardness and allow the material to withstand higher temperatures [18]. The thermal wave circulation of the Ag nanocomposite decreases by 12% compared with the unreinforced Ag alloy, and for the composite containing 5 wt% alumina and 5 wt% silicon carbide nanoparticles the linear expansion is 1.42%. Both reinforcements are hard, high-melting ceramics, which reduces the overall linear expansion. The phase transformations observed during the measurement are marked in Figure 3, and tangent lines are drawn at the intermediate zone to identify the optimum thermal-effect temperature. The presence of both Al2O3 and SiC nanoparticles therefore lowers the thermal expansion coefficient and increases the thermal adsorption of the composite; at 1000°C its capacity to withstand high temperature is more than 30% greater than that of the Ag alloy (a short consistency check of the expansion figures is given after this subsection).

Figure 3: Thermal wave circulation (linear expansion and adsorption) and phase transformation of the silver matrix hybrid nanocomposite.

Figure 4 shows the measured thermal diffusivities of (a) Ag/0 wt% Al2O3np/0 wt% SiCnp, (b) Ag/5 wt% Al2O3np/0 wt% SiCnp, and (c) Ag/5 wt% Al2O3np/5 wt% SiCnp. The thermal diffusivity of pure Ag increases gradually with temperature from ambient (27°C) to 1200°C, and pure Ag shows the highest value of 24 mm²/sec. The hybrid nanocomposite shows the lowest diffusivity of all the samples, a consequence of the phase transformations at high temperature and of the atomic structures of silver and the reinforcements [21]. At the 820°C intermediate phase it reaches an effective thermal diffusivity of 15 mm²/sec, and the diffusivity rises again as the temperature increases towards 1000°C, which may reflect the bonding between the matrix and the reinforcements [20, 21].

Figure 4: Thermal diffusivity of the silver matrix hybrid nanocomposite.
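As a rough consistency check on the expansion figures quoted above, the sketch below evaluates the total linear expansion ΔL/L ≈ ᾱ·ΔT over the test range (27°C to 1000°C). The coefficient is the 18 × 10⁻⁶ per K value reported for the unreinforced alloy; treating it as constant over the whole range is a simplifying assumption made only for this illustration.

```python
def linear_expansion_percent(mean_cte_per_K: float, t_start_C: float, t_end_C: float) -> float:
    """Approximate total linear expansion dL/L = mean CTE * dT, expressed in percent."""
    return mean_cte_per_K * (t_end_C - t_start_C) * 100.0

cte_ag = 18e-6  # per K, unreinforced Ag alloy value from this section
print(f"Ag alloy: {linear_expansion_percent(cte_ag, 27.0, 1000.0):.2f} %")
# ~1.75 %, in line with the ~1.70 % linear expansion quoted for the unreinforced alloy.
```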
### 3.3. Effect of Reinforcements on Frictional Characteristics of Silver Matrix Composites

The frictional wear losses of the unreinforced silver and its composites are shown in Figure 5. The wear loss of the silver and of its composites increases roughly linearly as the applied load rises from 10 N to 30 N. The wear loss of the unreinforced silver is 10.2 mg at a 40 N load under a constant sliding velocity of 0.75 m/sec. Adding Al2O3 nanoparticles to the silver gives a lower wear loss of 7.1 mg at high load and high sliding speed; this reduction is mainly attributed to the alumina nanoparticles, which resist indentation by the frictional force at high sliding velocity. The combination of alumina and silicon carbide particles in the silver matrix gives the lowest wear loss of all the samples [30, 31]: the composite containing 5 wt% Al2O3/5 wt% SiC loses 5.8 mg at a 40 N applied load, with a frictional force of 23.4 N, at 0.75 m/sec. The wear resistance is thus increased by 56.86% compared with the unreinforced silver.

Figure 5: Frictional wear loss of the silver hybrid nanocomposite.

The friction coefficients of the silver composites are shown in Figure 6 for loads of 10-30 N at a sliding velocity of 0.75 m/sec. For sample 1, the COF increases linearly with load at this sliding velocity. For sample 2, the COF varies from 0.41 to 0.46 with the increased reinforcement content. All test samples show a higher COF at higher frictional force. Sample 3 reaches a maximum COF of 0.58, an improvement of 32% over sample 1 at a 30 N load, attributed to the rigid ceramic particles dispersed within the matrix under high frictional force.

Figure 6: Coefficient of friction (COF) of the silver hybrid nanocomposite.

Figure 7 illustrates the effect of frictional force on the wear loss and the coefficient of friction of the unreinforced and reinforced silver matrix composites. The wear loss of the composites decreases gradually as the frictional force increases from 27.98 N to 39.98 N, while the COF curve in Figure 7 shows an increasing trend with frictional force. Sample 3 shows the best tribological performance at a 30 N load and 0.75 m/sec sliding speed, with a frictional force of 39.98 N.

Figure 7: Comparison of the effect of frictional force on the wear loss and COF of the silver matrix hybrid composite.
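The COF and improvement percentages quoted in this subsection follow from the basic pin-on-disc relations. The sketch below shows the arithmetic; the 23.4 N friction force and 40 N load are taken from this section, while the 0.44 baseline COF for sample 1 is a hypothetical value chosen only to illustrate how a ~32% improvement is computed, not a figure reported in the paper.

```python
def coefficient_of_friction(friction_force_N: float, normal_load_N: float) -> float:
    """Pin-on-disc relation: COF = tangential (friction) force / applied normal load."""
    return friction_force_N / normal_load_N

def percent_improvement(new: float, baseline: float) -> float:
    """Relative change of `new` with respect to `baseline`, in percent."""
    return (new - baseline) / baseline * 100.0

# Hybrid nanocomposite (sample 3): 23.4 N friction force at a 40 N applied load.
cof_hybrid = coefficient_of_friction(23.4, 40.0)
print(f"COF (sample 3) = {cof_hybrid:.2f}")  # ~0.58, matching the maximum COF reported

# Hypothetical sample 1 baseline COF of 0.44 (assumption for illustration only).
print(f"Improvement = {percent_improvement(0.58, 0.44):.0f} %")  # ~32 %
```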
## 4. Conclusions

A silver-based hybrid nanocomposite containing 5 wt% alumina and 5 wt% silicon carbide nanoparticles was developed by vacuum stir casting to minimize casting defects and increase the thermal adsorption of the composite. The following conclusions are drawn.

(i) The silver matrix hybrid nanocomposite (sample 3) shows better thermal characteristics than the other samples.

(ii) Its intermediate transition zone temperature of 820°C holds for both the heating and cooling phases, with a minimum mass loss of 0.009 μV/mg.
This is a saving of 16.5% compared with the unreinforced cast silver.

(iii) The thermal adsorption of the hybrid nanocomposite is increased by 30%, and it remains thermally stable at temperatures up to 1000°C.

(iv) The effective thermal wave circulation (linear expansion) of the hybrid nanocomposite is 1.42%.

(v) The thermal diffusivity of the hybrid nanocomposite may vary with the bonding strength between the matrix and the reinforcements.

(vi) Sample 3 shows good wear resistance; its wear resistance and COF are improved by 56.86% and 32%, respectively, compared with the unreinforced silver.

(vii) Based on its thermal and frictional characteristics, it is recommended for automotive friction-bearing applications.

---
*Source: 1003492-2023-02-01.xml*
# Examining Impacts of Acidic Bath Temperature on Nano-Synthesized Lead Selenide Thin Films for the Application of Solar Cells

**Authors:** Saka Abel; Jule Leta Tesfaye; N. Nagaprasad; R. Shanmugam; L. Priyanka Dwarampudi; Tyagi Deepak; Hongxia Zhang; Ramaswamy Krishnaraj; B. Stalin
**Journal:** Bioinorganic Chemistry and Applications (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1003803

---

## Abstract

The influence of bath temperature on nano-synthesized PbSe (lead selenide) thin films grown by chemical bath deposition (CBD) on metal substrates from an acidic solution was investigated. Pb(NO3)2 was employed as the lead ion precursor, while Na2O4Se was used as the selenide ion source. XRD characterization revealed that the prepared samples are crystalline, with reflections indexed to the (111), (101), (100), and (110) planes. Scanning electron microscopy indicated that the particles have a rock-like shape. The energy bandgap decreased from 2.4 eV to 1.2 eV as the bath temperature increased from 20°C to 85°C. Thin films prepared at 85°C showed the best polycrystalline structure and were homogeneously dispersed on the substrate with finer particle sizes. Photoluminescence spectroscopy showed that the average PL emission intensity of the films decreases as the bath temperature increases from 20°C to 85°C. For the films deposited at 45°C and 85°C, the maximum photoluminescence intensity arises mainly from recombination of self-trapped excitons at defect centres formed by oxygen vacancies and small particle size. The optimum solution temperature is therefore 85°C.

---

## Body

## 1. Introduction

Currently, the world is troubled by air and water pollution from nonrenewable energy sources such as coal, natural gas, and other fossil fuels, and from factories [1]. Effluents released from factories flow into rivers and cause water pollution. This polluted water is consumed directly by people and causes diseases such as cholera, amoebic dysentery, and typhoid, harming human health worldwide. Photovoltaic technology harnesses solar power as a renewable and sustainable energy source. Renewable energy originates from natural resources that are continuously replenished, including sunlight, the ocean, and wind [2]. This energy technology is considered clean and carbon-free since it emits no greenhouse gases [3]. As a clean energy source, it does not affect the atmosphere, unlike energy from fossil coal, which releases dangerous carbon emissions into the environment when burned [4]. To reduce such hazardous waste and pollution, fabricating solar cells from compound semiconductor thin films is an attractive solution, because existing elemental semiconductors are very expensive and not accessible to everyone [5]. Extensive research has been devoted to producing many kinds of semiconductor thin films for renewable energy applications such as solar cells [6], owing to their potential use in photovoltaic materials, optoelectronic devices, sensors, and infrared detectors [3].
Lead selenide thin films attract the attention of many researchers because they are low cost, abundant, and possess useful semiconducting properties [7]. Lead selenide films have been produced by many methods, including electrodeposition, CBD, electrochemical atomic layer deposition, photochemical deposition, and molecular beam epitaxy [6]. Films synthesized by solution techniques are generally less expensive than films produced by more elaborate physical methods. In the present work, the CBD technique was chosen for its low cost, scalability, and simplicity of setup.

At present, chemical bath deposition is used to synthesize many semiconductor films on glass substrates, including zinc sulfide (ZnS), lead selenide (PbSe), cadmium selenide (CdSe), zinc selenide (ZnSe), copper sulphide (Cu2S), copper indium sulfide (CuInS), and copper bismuth sulfide (CuBiS2) [8, 9], although the resulting film quality is often limited. Only a few chemically deposited PbSe films grown on glass substrates in alkaline baths have been reported; alkaline baths can degrade the deposited films and their quality, and they release toxic hydroxide species during deposition [10]. In the present work, lead selenide films were grown on metallic substrates by CBD in an acidic medium (pH = 4) for solar cell applications.

## 2. Instruments and Methodology

PbSe films were grown on metallic substrates (30 × 70 × 1 mm) by the CBD technique. Before deposition, each metal substrate was soaked in ethanol for about 15 min, then ultrasonically cleaned in deionized water for 20 min, and finally dried in warm air. A lead nitrate solution was used as the lead precursor, sodium selenite as the selenide ion source, and triethanolamine [(HOC2H4)3N] as the complexing agent for the synthesis of the lead selenide thin films [10, 11]. All compounds were of analytical grade, and the bath solutions were prepared with deionized water. For the deposition, 25 ml of 0.2 M lead nitrate was first complexed with 15 ml of triethanolamine, and then 15 ml of 0.2 M sodium selenite was added dropwise to the reaction mixture. The pH of the resulting solution was adjusted with drops of sulfuric acid [12] under continuous stirring. The cleaned metal substrates were then placed in the bath to grow the lead selenide nanoparticles at temperatures of 20°C, 45°C, and 85°C. After the required deposition time of 95 min, the water was withdrawn from the chemical solution with a syringe, leaving the precipitated lead selenide nanoparticles at the bottom of the bath vessel. The coated metal substrates were dried in air, rinsed with deionized water, and kept in an oven for further analysis.

The crystalline structure of the nano-synthesized PbSe films was studied by XRD (PANalytical, US) [13]. The diffractometer used a Cu K source operated at 35 kV and 23 µA, and scans were recorded over a 2θ range of 20° to 85°. Optical absorption measurements were performed with a Janeway 6850 UV/visible spectrophotometer over the range 226-2250 nm. The surface morphology and particle size were characterized by scanning electron microscopy on a Hitachi SU5000 at an operating voltage of 20 kV. The photoluminescence of the prepared material was analyzed with a photoluminescence spectrophotometer.
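As a quick check of the bath stoichiometry described above, the short sketch below computes the moles of Pb and Se precursor delivered by the stated volumes and concentrations; the resulting Pb:Se ratio is a simple derived figure, not a value reported in this work.

```python
def moles(volume_ml: float, molarity_mol_per_L: float) -> float:
    """Moles of solute in a given volume of solution."""
    return volume_ml / 1000.0 * molarity_mol_per_L

# Bath recipe from Section 2: 25 ml of 0.2 M lead nitrate, 15 ml of 0.2 M sodium selenite.
n_pb = moles(25.0, 0.2)  # mol of Pb2+
n_se = moles(15.0, 0.2)  # mol of selenite (Se source)

print(f"Pb: {n_pb*1000:.1f} mmol, Se: {n_se*1000:.1f} mmol, Pb:Se = {n_pb/n_se:.2f}")
# -> Pb: 5.0 mmol, Se: 3.0 mmol, Pb:Se ~ 1.67 (a Pb-rich bath)
```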
## 3. Results and Discussion

### 3.1. Structural Characterization

The X-ray diffraction patterns of the PbSe thin films deposited at different bath temperatures are shown in Figure 1. Thin films deposited at 45°C show peaks at 2θ = 26° and 30.6°. When the bath temperature rises to 85°C, the intensity of the peaks attributable to PbSe increases. The peaks indexed to the (111), (101), (100), and (110) planes are sharp and well defined, indicating high crystallinity of the prepared material, and the grain size of the grown films increases with bath temperature. The number of PbSe peaks also increases with bath temperature. These observations show that the deposited structure has a cubic phase, in agreement with earlier reported data [12]; the lattice constant is 6.13 Å for all three samples. Iron oxide peaks also appear in the XRD patterns because of the metallic substrate used to prepare the films; these are observed at 2θ values of 40.2°, 52.8°, and 67.3°. The peaks marked with solid triangles correspond to cubic lead selenide, and those marked with open diamonds to the orthorhombic iron oxide phase. This is consistent with the homogeneous, cubic morphology seen in the scanning electron microscope analysis and agrees with a reported study [13]. A comparison of the evaluated and standard "d" and 2θ values for the nano-synthesized PbSe thin films deposited at the different bath temperatures (20°C, 45°C, and 85°C) for 95 min is given in Table 1.

Figure 1: XRD patterns of lead selenide films deposited at various bath temperatures: (a) 20°C, (b) 45°C, (c) 85°C.

Table 1: Comparison of evaluated and standard "d" and 2θ values for nano-synthesized PbSe thin films with varying bath temperatures (20°C, 45°C, and 85°C) and a deposition time of 95 min.

| Sample | Temperature (°C) | 2θ (°) | FWHM (°) | Crystal size (nm) |
|---|---|---|---|---|
| 1 | 20 | 30.812 | 6.98 | 1.17 |
| 2 | 45 | 30.847 | 6.13 | 1.04 |
| 3 | 85 | 42.89 | 12.14 | 0.7 |

The particle size was estimated using Scherrer's formula,

D = 0.9λ / (β cos θ),   (1)

where λ is the X-ray wavelength, β the FWHM in radians, and θ the Bragg diffraction angle.

Table 1 shows that the crystal size of the third sample decreases at the higher temperature (85°C), indicating that as the temperature increases, inter-particle bonds break and the particle size becomes progressively smaller.
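The sketch below applies Scherrer's formula from equation (1) to the values in Table 1. A Cu Kα wavelength of 0.154 nm is assumed; the text specifies a Cu K source but does not state the wavelength explicitly.

```python
import math

def scherrer_size_nm(two_theta_deg: float, fwhm_deg: float, wavelength_nm: float = 0.154) -> float:
    """Crystallite size D = 0.9 * lambda / (beta * cos(theta)), with beta in radians."""
    theta = math.radians(two_theta_deg / 2.0)
    beta = math.radians(fwhm_deg)
    return 0.9 * wavelength_nm / (beta * math.cos(theta))

# (2-theta, FWHM) pairs for the three samples in Table 1.
for two_theta, fwhm in [(30.812, 6.98), (30.847, 6.13), (42.89, 12.14)]:
    d = scherrer_size_nm(two_theta, fwhm)
    print(f"2theta = {two_theta:6.3f} deg, FWHM = {fwhm:5.2f} deg -> D = {d:.2f} nm")
# Compare with the crystal sizes of about 1.17, 1.04, and 0.7 nm listed in Table 1.
```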
Increasing temperature raises the smoothness of the films; the grain amounts were observed to rise slowly. Additionally, there is a material which just likes soil which was decreasing in size as shown in Figures 2(a)–2(c) with increasing bath temperature from 20°C to 85°C that shield over cadmium sulphide thin films in some amounts of the apparent. The impact of temperature on the surface morphology of nano-synthesized lead selenide films was observed; this is in good agreement with a reported study [14].Figure 2 The scanning electron microscope micrograph of nano-synthesized lead selenide films grown varying temperatures: (a) 20. (b) 45. (c) 85°C. ### 3.3. Optical Properties The optical immersion of the nano-synthesized lead selenide films prepared from different temperatures was measured in the wavelength range of 226–2250 nm, which is in the visible region photocatalyst, as shown in Figure3. The absorbance of the films expressively increased when the bath temperature increased within the deliberated range of wavelength. The highest absorption of thin films was witnessed in the visible wavelength (λ) range. The absorption coefficient of PbSe films was evaluated by using Lambert’s equation:(2)α=2.30At,where A is the absorbance, α is the absorption of coefficient, and t is stands for the thickness. The band gaps (Eg) of films were calculated from Tauc’s relation [15]:(3)αhvn=khv−Eg,where h is Planck’s number, v stands for the frequency, k expresses the optical transition constant number, Eg is the energy of bandgap, and n is transition type, and it is varying either 2 or 2/3 for direct allowed and forbidden transitions or 1/2 or 1/3 for indirect allowed and forbidden transitions, correspondingly [16]. Best linear fit for equation (3) is given for n = 2 in the main absorption edge, representing that the thin films have direct optical band gaps. The (αhv) axis intercept attained by extrapolating the linear portion of the (αhv)2 vs. (hv) curve gives the Eg of the films as shown in Figure 3. The Eg of the nano-synthesized PbSe thin films declined from 2.4 eV to 1.2 eV with the temperature of the solution increasing from 20°C to 85°C. The bandgap decrement could be because of increment of crystal size with temperature; this result is the same as reported [17]. The maximum absorbance observed in visible light section and band gaps of thin films within the range of 2.6–1.2 eV in all PbSe thin films provides the application materials as the absorber layer in photovoltaic thin film solar cells as well as well-organized visible light photocatalyst [18–21].Figure 3 Plots of variation of optical absorption (αt) vs. wavelength films variation in bath temperature: (a) 20°C. (b) 45°C. (c) 85°C. ### 3.4. Photoluminescence Property (PL) Inappropriate to discover the optical study of deposited PbSe nanoparticles, photoluminescence was similarly used. In theλ range from 350 nm to 600 nm at different temperatures, the PL spectrum of nano-synthesized films was reported. When solution bath temperature increases from 20°C to 85°C, the average strength of PL decreases. The maximum photoluminescence strength is predominant because of the self-trapped exciton recombination, formed from O2 vacancy and particle size what we call defect centres, for the deposited thin films at 45°C and 85°C. 
The photoluminescence intensity rises progressively for all deposition temperatures. Figure 4 shows the response of the nano-synthesized lead selenide films deposited at the various temperatures; all samples reveal a gradually rising absorbance in the visible region, which indicates the potential of these materials for application in photovoltaic solar cells. The plot shows that the samples synthesized at a higher solution temperature have higher absorption values than those deposited at the other bath temperatures. Because these films are nano-synthesized, they have a highly uniform surface and good crystallinity compared with other reported samples.

Figure 4: Photoluminescence spectra of nano-synthesized PbSe films grown at different bath temperatures: (a) 20°C, (b) 45°C, (c) 85°C.
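As an illustrative sketch of the band-gap estimation procedure described in Section 3.3 (not the authors' code), the following applies equation (2) and a linear Tauc fit for a direct transition (n = 2). The absorbance data, film thickness, and fit window are hypothetical placeholders chosen only to exercise the routine.

```python
import numpy as np

H_EV_S = 4.135667696e-15   # Planck constant in eV*s
C_NM_S = 2.99792458e17     # speed of light in nm/s

def tauc_band_gap(wavelength_nm, absorbance, thickness_cm, fit_window_ev):
    """Estimate a direct band gap (n = 2) from absorbance data.

    Uses equation (2), alpha = 2.303*A/t, builds (alpha*h*nu)^2 versus photon
    energy h*nu, fits a straight line inside fit_window_ev, and extrapolates to zero.
    """
    hv = H_EV_S * C_NM_S / np.asarray(wavelength_nm)        # photon energy in eV
    alpha = 2.303 * np.asarray(absorbance) / thickness_cm   # absorption coefficient, 1/cm
    tauc = (alpha * hv) ** 2                                 # direct-transition Tauc quantity
    lo, hi = fit_window_ev
    mask = (hv >= lo) & (hv <= hi)                           # points on the absorption edge
    slope, intercept = np.polyfit(hv[mask], tauc[mask], 1)   # straight-line fit
    return -intercept / slope                                # hv where (alpha*hv)^2 -> 0

# Hypothetical data: a toy direct-gap edge built so that (alpha*h*nu)^2 is exactly
# linear above Eg = 1.2 eV; only meant to exercise the function.
wl = np.linspace(400, 1000, 300)                 # placeholder wavelength grid in nm
hv = H_EV_S * C_NM_S / wl
t_cm = 1e-5                                      # assumed ~100 nm film thickness
alpha_toy = 3.0e5 * np.sqrt(np.clip(hv - 1.2, 0.0, None)) / hv
absorbance = alpha_toy * t_cm / 2.303
print(f"Estimated Eg ~ {tauc_band_gap(wl, absorbance, t_cm, (1.4, 2.0)):.2f} eV")
```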
## 4. Conclusions

Nano-synthesized lead selenide films were successfully prepared using an easy, inexpensive CBD technique in an acidic medium. Chemical solutions of lead nitrate and sodium selenite served as the sources of lead and selenide ions, and triethanolamine served as a complexing agent during the deposition procedure. The X-ray diffraction patterns indicate the formation of a cubic crystalline structure with strong peaks indexed to the (111), (101), (100), and (110) planes. PL measurements showed that the photoluminescence emission of the deposited thin films decreased with increasing bath temperature. The film deposited at 85°C exhibited the best crystallinity and formed homogeneously on the substrate with larger grains. The bandgap energy declined from 2.4 eV to 1.2 eV as the bath temperature increased from 20°C to 85°C, which makes these films suitable for photovoltaic solar cells.

--- *Source: 1003803-2022-01-11.xml*
# Ground State Solution for an Autonomous Nonlinear Schrödinger System **Authors:** Min Liu; Jiu Liu **Journal:** Journal of Function Spaces (2021) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2021/1003941 --- ## Abstract In this paper, we study the following autonomous nonlinear Schrödinger system (discussed in the paper), whereλ,μ, and ν are positive parameters; 2∗=2N/N−2 is the critical Sobolev exponent; and f satisfies general subcritical growth conditions. With the help of the Pohožaev manifold, a ground state solution is obtained. --- ## Body ## 1. Introduction and Main Result In this paper, we consider the following autonomous nonlinear Schrödinger system:(1)−Δu+μu=μfu+λv,x∈ℝN,−Δv+νv=v2∗−2v+λu,x∈ℝN,u,v∈H1ℝN,N≥3,where μ,ν, and λ are positive parameters satisfying 0<λ<μν; 2∗=2N/N−2 is the critical Sobolev exponent; and f satisfies the following conditions:(f1) f∈Cℝ,ℝ is an odd function.(f2) lims⟶0+fs/s=0.(f3) lims⟶+∞fs/s2∗−1=0.(f4) There existsζ>0 such that Fζ>ζ2/2, where Fζ=∫0ζftdt.Systems of above type arise in nonlinear optics (cf. [1]). It is well known that a solution u,v∈H1ℝN×H1ℝN of system (1) is called a ground state solution if u,v≠0,0 and its energy is minimal among the energy of all the nontrivial solutions.The following nonlinear Schrödinger system(2)−Δu+μu=up−1u+λv,x∈ℝN,−Δv+νv=vq−1v+λu,x∈ℝN,u,v∈H1ℝN,has been studied by many authors. When N≤3, μ=ν=1, p=q=3, and λ>0 small enough, Ambrosetti et al. [2] proved that (2) has multibump solitons. When N≥2, μ=ν=1, 1<p,q<2∗−1, 0<λ<1, and up−1u and vq−1v are replaced by 1+axup−1u and 1+bxvq−1v, Ambrosetti et al. [3] proved that system (2) has a positive ground state solution. When u,v∈D1,2ℝ4, μ=V1x, and ν=V2x satisfy the integral conditions and up−1u,λv,vq−1v and λu are replaced by μ1u3,βuv2,μ2v3, and βu2v, respectively, Liu and Liu [4] proved that (2) has a positive solution. When u,v∈H01Ω, Ω is a smooth bounded domain in ℝ3, p=q=3, and λv, λu are replaced by −βuv2,−βvu2, respectively, Noris and Ramos [5] proved that (2) admits an unbounded sequence of solutions u,v with u>0,v>0, and u≠v for sufficiently large β>0. When N≥3, 1<p<2∗−1,q=2∗−1, and μ,ν>0, 0<λ<μν, Chen and Zou [6] proved that (2) has a positive ground state solution under λ,μ,ν which satisfied certain conditions. When N≥3, 1<p<2∗−1,q=2∗−1, and μ=ax,ν=bx,λ=λx, Li and Tang [7] proved that (2) has a nontrivial solution.Inspired by the above literatures, especially [6], we investigate the existence of ground state solution of system (1). When μfu=up−1u with 1<p<2∗−1, by using the Nehari manifold, Chen and Zou [6] obtained the existence of ground state solution of system (1). But in our paper, without the assumption of the monotonicity of u↦fu/u, we have to adopt a new method to replace the Nehari manifold.The following single Schrödinger equation(3)−Δu+u=fu,u∈H1ℝN,N≥3,has been widely studied by many researchers, and relevant results can been referred to [8–10] and the references therein. By [9], we know that if f satisfies (f1)-(f4); then, equation (3) has a ground state solution. Define (4)a=infu∈Γ12∫ℝN∇u2+u2dx−∫ℝNFudx,where Γ=u∈H1ℝN:uisanontrivialsolutionofequation3 and define (5)S=infu∈D1,2ℝN\0∫ℝN∇u2dx∫ℝNu2∗dx2/2∗,where S is the optimal constant of the Sobolev embedding D1,2ℝN⟶L2∗ℝN.The main result of this paper is the following.Theorem 1. Assume thatμ,ν, and λ are positive parameters satisfying μ>S−N/N−2aN2/N−2 and 0<λ<μν. Suppose that f satisfies (f1)-(f4). Then, system (1) has a ground state solution.Remark 2. 
There are some examples of functions that satisfy the assumptions (f1)-(f4), for example, fs=sp−2s with 2<p<2∗ and fs=sp−2s/1+s2 with 4<p<2∗+2.Remark 3. It is obvious that system (1) has no semitrivial solutions. Indeed, if u,0 is a solution of system (1), then u=0 and if 0,v is a solution of system (1), then v=0.Remark 4. There are some recent studies on the ground state solutions for other types of Schrödinger equations or systems, for example, [6, 11]. Moreover, in the bounded domain, the existence and the regularity of solutions to differential problems have been widely investigated by using tools of harmonic and real analysis and variational methods, for example, [12–14]. ## 2. Preliminaries In order to make a precise explanation of the results in this paper, we will give some notations.C,Ci denote various positive constants.LpℝN is the usual Lebesgue space endowed with the norm (6)up=∫ℝNupdx1/p.D1,2ℝN=u∈L2∗ℝN∣∂u/∂xi∈L2ℝN,i=1,2,⋯,N endowed with the norm (7)uD1,2=∫ℝN∇u2dx1/2.H1ℝN=u∈L2ℝN∣∂u/∂xi∈L2ℝN,i=1,2,⋯,N endowed with the norm (8)u=∫ℝN∇u2+u2dx1/2.For anyu,v∈H≔H1ℝN×H1ℝN, we set (9)u,vH=∫ℝN∇u2+μu2+∇v2+νv2dx1/2.For anyu∈H1ℝN, we denote ut=u·/t for all t>0.The weak solutions of (1) correspond to critical points of the functional (10)Iu,v=12u,vH2−μ∫ℝNFudx−12∗∫ℝNv2∗dx−λ∫ℝNuvdx.Obviously,I∈C1H,ℝ and for all u,v∈H and φ,ψ∈H, we have (11)I′u,v,φ,ψ=∫ℝN∇u·∇φ+μuφ+∇v·∇ψ+νvψdx−μ∫ℝNfuφdx−∫ℝNv2∗−2vψdx−λ∫ℝNφv+uψdx.Similar to [15, 16], in order to obtain a ground state solution, we define the Pohožaev manifold (12)P=u,v∈H\0,0:Ju,v=0and consider the constraint minimization problem (13)m=infu,v∈PIu,v,where J:H⟶ℝ is defined as (14)Ju,v=N−22∫ℝN∇u2+∇v2dx+N2∫ℝNμu2+νv2dx−μN∫ℝNFudx−N2∗∫ℝNv2∗dx−λN∫ℝNuvdx.We also require the following subcritical system of system (1): (15)−Δu+μu=μfu+λv,x∈ℝN,−Δv+νv=vq−2v+λu,x∈ℝN,u,v∈H1ℝN,N≥3,where 2<q<2∗, μ,ν, and λ are positive parameters satisfying 0<λ<μν and f satisfies (f1)-(f4). The energy functional of system (15) is (16)Iqu,v=12u,vH2−μ∫ℝNFudx−1q∫ℝNvqdx−λ∫ℝNuvdx.Define(17)Pq=u,v∈H\0,0:Jqu,v=0andmq=infu,v∈PqIqu,v,where (18)Jqu,v=N−22∫ℝN∇u2+∇v2dx+N2∫ℝNμu2+νv2dx−μN∫ℝNFudx−Nq∫ℝNvqdx−λN∫ℝNuvdx. ## 3. Proof of Theorem1 The following two lemmas will be used in proof.Lemma 5 (compactness lemma of Strauss, see [9, 10]). LetP,Q:ℝ⟶ℝ be two continuous functions satisfying (19)PsQs⟶0ass⟶+∞. Letun be a sequence of measurable functions: ℝN⟶ℝ such that (20)supn∫ℝNQunxdx<+∞and Punx⟶υx a.e. in ℝN, as n⟶∞. Then, for any bounded Borel set B, one has (21)∫BPunx−υxdx⟶0asn⟶+∞. If one further assumes that(22)PsQs⟶0ass⟶0and unx⟶0 as x⟶+∞, uniformly with respect to n, then Pun converges to υ in L1ℝN as n⟶+∞.Lemma 6 (Strauss inequality, see [17]). IfN≥2, there exists CN>0 such that, for every ux=ux∈H1ℝN, (23)ux≤CNu21/2∇u21/2x1−N/2a.e. on ℝN. Before proving Theorem1, we need to prove a series of lemmas.Lemma 7. Suppose that (f1)-(f4) hold. Then, the Pohožaev manifold P is not empty.Proof. From [17], we know that for any ε>0, (24)uε=NN−2N−2/4εN−2/2ε+x2N−2/4is a positive solution of the following equation: (25)−Δu=u2∗−2u,x∈ℝN,N≥3. Define a cut-off functionϕ∈C0∞ℝN,0,1 as (26)ϕ=1,x∈Bρ,0,x∈ℝN\B2ρ,where ϱ>0 and Bϱ=x∈ℝN,x<ϱ. Let Wε=ϕuε and define Vε=Wε/∫ℝNWε2∗dx1/2∗. By [16], we have (27)∫ℝNVε2∗dx1/2∗=1,∫ℝNVε2dx=oε1/2,N=3,oεlnε,N=4,oε,N=5. Takeε>0 small enough such that ∫ℝN1/2∗Vε2∗−ν/2Vε2dx>0. Let U∈H1ℝN be a positive ground state solution of equation (3). Then, we have the following Pohožaev equality: (28)N−22∫ℝN∇U2dx+N2∫ℝNU2dx=N∫ℝNFUdx. Then,∫ℝNFU−1/2U2dx>0. 
Thus, we have (29)τt≔IUt,Vεt=tN−22∫ℝN∇U2+∇Vε2dx−μtN∫ℝNFU−12U2dx−tN∫ℝN12∗Vε2∗−ν2Vε2dx−λtN∫ℝNUVεdx. Defines=tN; we have (30)ηs≔τs1/N=sN−2/N2∫ℝN∇U2+∇Vε2dx−μs∫ℝNFU−12U2dx−s∫ℝN12∗Vε2∗−ν2Vε2dx−λs∫ℝNUVεdx. We can easily know thatηs>0 for s small enough and ηs<0 for large s. Since d2ηs/ds2<0, ηs is a concave function. Then, there exists a unique s0>0 such that η′s0=0. Hence, there exists a unique t0=s01/N>0 such that τ′t0=0. Then, we have t0τ′t0=JUx/t0,Vεx/t0=0. Then, Ut0,Vεt0∈P.Lemma 8. Suppose that (f1)-(f4) hold. Then, m=infu,v∈PIu,v>0.Proof. Since0<λ<μν, there exists 0<θ<1 such that 0<λ<μ1−θν. For any u,v∈P, we have Ju,v=0. By using Young’s inequality, we have (31)N−22∫ℝN∇u2+∇v2dx+N2∫ℝNμu2+νv2dx=μN∫ℝNFudx+N2∗∫ℝNv2∗dx+λN∫ℝNuvdx≤μNθ2∫ℝNu2dx+NC∫ℝNu2∗dx+N2∗∫ℝNv2∗dx+λN∫ℝNuvdx≤μNθ2∫ℝNu2dx+NC∫ℝNu2∗dx+N2∗∫ℝNv2∗dx+Nμ1−θ2∫ℝNu2dx+Nν2∫ℝNv2dx. Therefore, we have(32)N−22∫ℝN∇u2+∇v2dx≤NC∫ℝNu2∗dx+N2∗∫ℝNv2∗dx. By using Sobolev’s inequality, we have(33)N−22∫ℝN∇u2+∇v2dx≤C1∫ℝN∇u2dx2∗/2+∫ℝN∇v2dx2∗/2≤C2∫ℝN∇u2+∇v2dx2∗/2,which implies ∫ℝN∇u2+∇v2dx≥N−2/2C2N−2/2>0. Therefore, we conclude that for any u,v∈P, we have (34)Iu,v=Iu,v−1NJu,v=12−12∗∫ℝN∇u2dx+12−12∗∫ℝN∇v2dx≥1NN−22C2N−2/2. Therefore, we havem>0.Lemma 9. Suppose that (f1)-(f4) hold. Then, m<1/NSN/2.Proof. LetU∈H1ℝN be a positive ground state solution of equation (3). Then, (28) holds and (35)a=12∫ℝN∇U2+U2dx−∫ℝNFUdx=12∫ℝN∇U2+U2dx−∫ℝNFUdx−1NN−22∫ℝN∇U2dx+N2∫ℝNU2dx−N∫ℝNFUdx=∫ℝN∇U2dx. Moreover, we have alsoUμx which is a solution of equation (36)−Δu+μu=μfu,u∈H1ℝN,N≥3. Then,Uμx,0∈P. Since μ>S−N/N−2aN2/N−2, we have (37)m≤IUμx,0=IUμx,0−1NJUμx,0=1N∫ℝN∇Uμx2dx=aμ2−N/2<1NSN/2.Lemma 10. Suppose that (f1)-(f4) hold. For any un,vn⊂P, if Iun,vn≤C, then un,vn is bounded in H.Proof. SinceIun,vn≤C, we have (38)C≥Iun,vn=Iun,vn−1NJun,vn=1N∫ℝN∇un2+∇vn2dx. Because0<λ<μν, there exists 0<θ<1/2 and α>0 such that 0<λ<μ1−2θν−α. Therefore, we have (39)N−22∫ℝN∇un2+∇vn2dx+N2∫ℝNμun2+νvn2dx=μN∫ℝNFundx+N2∗∫ℝNvn2∗dx+λN∫ℝNunvndx≤μNθ2∫ℝNun2dx+NC∫ℝNun2∗dx+N2∗∫ℝNvn2∗dx+λN∫ℝNunvndx≤μNθ2∫ℝNun2dx+NC∫ℝNun2∗dx+N2∗∫ℝNvn2∗dx+Nμ1−2θ2∫ℝNun2dx+Nν−α2∫ℝNvn2dx. Then, we have(40)Nμθ2∫ℝNun2dx+Nα2∫ℝNvn2dx≤CN∫ℝNun2∗dx+N2∗∫ℝNvn2∗dx≤C3∫ℝN∇u2+∇v2dx2∗/2≤C4. Hence,un,vnis bounded in H.Lemma 11. Suppose that (f1)-(f4) hold. Then, limq⟶2∗−supmq≤m.Proof. For anyε∈0,1/2, there exists u,v∈P such that Iu,v<m+ε. Since Ju,v=0, for any t>0, we have (41)Iut,vt=Iut,vt−tNNJu,v=tN−22−N−22NtN∫ℝN∇u2+∇v2dx. Defineht=tN−2/2−N−2/2NtN. Through simple calculations, we have h′t=N−2/2tN−3−tN−1. We can easily see that h is increasing for t∈0,1 and h is decreasing for t>1. Then, we have maxt>0Iut,vt=Iu,v and Iut,vt<Iu,v for any t≠1. By calculation, we have Iut,vt<0 for t>N/N−2. Take large T such that (42)IuT,vT=TN−22∫ℝN∇u2+∇v2dx+TN2∫ℝNμu2+νv2dx−μTN∫ℝNFudx−TN2∗∫ℝNv2∗dx−λTN∫ℝNuvdx≤−1. Then, there existsσ∈0,2∗ such that (43)Iqut,vt−Iut,vt=tN2∗∫ℝNv2∗dx−tNq∫ℝNvqdx<ε,for all 2∗−σ<q<2∗ and 0≤t≤T. Then, we have IquT,vT≤−1/2 for all 2∗−σ<q<2∗. Since (44)Iqut,vt=tN−22∫ℝN∇u2+∇v2dx+tN2∫ℝNμu2+νv2dx−μtN∫ℝNFudx−tNq∫ℝNvqdx−λtN∫ℝNuvdx,Iqut,vt>0 for t small enough. Then, there exists tq∈0,T such that d/dtIqut,vtt=tq=0. So, utq,vtq∈Pq. Hence, we have (45)mq≤Iqutq,vtq≤Iutq,vtq+ε≤Iu,v+ε<m+2ε,for all 2∗−σ<q<2∗. From [18, 19], we know that system (15) has a positive and radial ground state solution. Then, for any qn∈2,2∗ and qn⟶2∗−, there exists a positive and radial sequence un,vn⊂H such that (46)Iqnun,vn=mqn,Iqn′un,vn=0,Jqnun,vn=0. By Lemmas10 and 11, we know that un,vn is bounded in H.Lemma 12. Suppose that (f1)-(f4) and (46) hold. Then, liminfn⟶∞mqn>0.Proof. 
Similar to the proof of Lemma8, we have (47)N−22∫ℝN∇un2+∇vn2dx≤NC∫ℝNun2∗dx+Nqn∫ℝNvnqndx. Using Young’s inequality implies(48)Nqn∫ℝNvnqndx=Nqn∫ℝNvn22∗−qn/2∗−2vn2∗qn−2/2∗−2dx≤Nqn2∗−qn2∗−2∫ℝNvn2dx+Nqnqn−22∗−2∫ℝNvn2∗dx=N2∗∫ℝNvn2∗dx+o1. Then,(49)N−22∫ℝN∇un2+∇vn2dx≤NC∫ℝNun2∗dx+N2∗∫ℝNvn2∗dx+o1≤C5∫ℝN∇un2+∇vn2dx2∗/2+o1. So there existsϖ>0 such that up to a subsequence, ∫ℝN∇un2+∇vn2dx+o1≥ϖ. On the other hand, (50)mqn=Iqnun,vn=Iqnun,vn−1NJqnun,vn=1N∫ℝN∇un2+∇vn2dx. Then,liminfn⟶∞mqn>0.Proof of Theorem 1. Because (46) holds, there exists u,v∈H such that un†u,vn†v in H1ℝN, un⟶u,vn⟶v in LpℝN,2<p<2∗, and unx⟶ux,vnx⟶vx a.e. in ℝN. For any φ,ψ∈H, we have (51)0=Iqn′un,vn,φ,ψ⟶I′u,v,φ,ψ,i.e., u,v is a solution of system (1). Suppose that u=0. Set Ps=fss and Qs=s2+s2∗. Through Lemma 5 and Lemma 6, we have ∫ℝNPundx⟶0 as n⟶+∞. Since Iqn′un,vn,un,vn=0, by using Young’s inequality, we have (52)un,vnH=μ∫ℝNfunundx+∫ℝNvnqndx+2λ∫ℝNunvndx≤μ∫ℝNPundx+∫ℝNvn22∗−qn/2∗−2vn2∗qn−2/2∗−2dx+∫ℝNμun2+νvn2dx≤μ∫ℝNPundx+2∗−qn2∗−2∫ℝNvn2dx+qn−22∗−2∫ℝNvn2∗dx+∫ℝNμun2+νvn2dx=∫ℝNvn2∗dx+∫ℝNμun2+νvn2dx+o1. One has(53)∫ℝN∇vn2dx≤∫ℝN∇un2+∇vn2dx≤∫ℝNvn2∗dx+o1≤∫ℝN∇vn2dxS2∗/2+o1. So we have (i)∫ℝN∇vn2dx⟶0 or (ii) limsupn⟶∞∫ℝN∇vn2dx≥SN/2. If (i) holds, then we have (54)mqn=Iqnun,vn−1NJqnun,vn=1N∫ℝN∇un2+∇vn2dx⟶0,which contradicts with Lemma 12. If (ii) holds, then we have (55)m≥limsupn⟶∞mqn=limsupn⟶∞Iqnun,vn−1NJqnun,vn=limsupn⟶∞1N∫ℝN∇un2+∇vn2dx≥limsupn⟶∞1N∫ℝN∇vn2dx≥1NSN/2. This is a contradiction. Sou≠0 and through Remark 3, we know that v≠0. Applying the weak lower-semicontinuity of the norm, we have (56)m≤Iu,v=Iu,v−1NJu,v=1N∫ℝN∇u2+∇v2dx≤liminfn⟶∞1N∫ℝN∇un2+∇vn2dx=liminfn⟶∞Iqnun,vn−1NJqnun,vn=liminfmqnn⟶∞≤m. This impliesIu,v=m. We complete the proof. --- *Source: 1003941-2021-10-27.xml*
# Combined Long Short-Term Memory Network-Based Short-Term Prediction of Solar Irradiance

**Authors:** Manoharan Madhiarasan; Mohamed Louzazni
**Journal:** International Journal of Photoenergy (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1004051

---

## Abstract

Achieving highly accurate and generic prediction of solar irradiance is arduous because solar irradiance is intermittently random and is influenced by meteorological parameters. Therefore, this paper proposes a new combined long short-term memory network (CLSTMN) that takes various influential meteorological parameters as inputs. We investigated the applicability of the proposed predictive model for short-term solar irradiance prediction and validated it on a real-time meteorological dataset. The proposed prediction model combines and accumulates six individual long short-term memory models driven by various inputs to improve solar irradiance prediction accuracy and generalization. Thus, CLSTMN-based solar irradiance prediction can be generic and can overcome the variability of the meteorological parameters. The experimental results confirm the good prediction accuracy of the proposed CLSTMN, with minimal evaluation metrics. For sunny days, the RMSE, MAPE, and MSE achieved by the proposed CLSTMN are 7.7729 × 10⁻⁴, 8.2479 × 10⁻⁵, and 6.0419 × 10⁻⁷ for one-hour-ahead prediction and 0.0157, 0.0017, and 2.4627 × 10⁻⁴ for six-hour-ahead prediction; for cloudy days, the RMSE, MAPE, and MSE are 1.2969 × 10⁻⁴, 1.6882 × 10⁻⁴, and 1.6819 × 10⁻⁸ for one-hour-ahead prediction and 0.0176, 0.0043, and 3.0863 × 10⁻⁴ for six-hour-ahead prediction, respectively. Finally, we investigate the effectiveness of the CLSTMN through a comparative analysis with well-known baseline models. The investigative study shows the superior prediction performance of the proposed CLSTMN for short-term solar irradiance prediction.

---

## Body

## 1. Introduction

Nowadays, much attention is paid to solar energy systems because of the growing energy crisis and CO2 emissions. Unfortunately, sunshine is not available consistently for 24 hours; there are sunny and cloudy days regardless of the season. The frequently changing nature of solar irradiance makes integrating PV systems into the power system a challenging task. Short-term solar irradiance prediction aims to predict solar irradiance 30 minutes to 6 hours ahead. Short-term prediction of solar irradiance is required for making effective operational decisions, automatic generation control, energy commercialization, maintenance, scheduling, economic dispatch, and unit commitment [1, 2]. Artificial neural networks can learn multivariate problems and input dependencies more effectively than statistical and NWP-based prediction models. The recurrent neural network belongs to the class of feedback artificial neural networks, and LSTM is a variant of the recurrent neural network. Long short-term memory (LSTM) networks have received a tremendous amount of interest for time series and sequence database applications, and LSTM has recently been used for a wide range of applications [3–6]. The performance of a prediction model depends not only on the selection of the inputs but also on the model framework, which plays an essential role in prediction accuracy. Through the use of data-driven models, it is possible to capture the underlying mapping of solar irradiance.
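As a brief aside on the evaluation metrics quoted in the abstract above, a minimal sketch of how RMSE, MAPE, and MSE can be computed is given below; the y_true and y_pred arrays are hypothetical placeholders, not the paper's data.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error."""
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

def mse(y_true, y_pred):
    """Mean squared error."""
    return float(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

def mape(y_true, y_pred):
    """Mean absolute percentage error, expressed as a fraction (multiply by 100 for %)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)))

# Hypothetical normalized irradiance values, only to exercise the functions
y_true = np.array([0.31, 0.52, 0.78, 0.95, 0.60])
y_pred = np.array([0.30, 0.55, 0.75, 0.96, 0.58])
print(rmse(y_true, y_pred), mape(y_true, y_pred), mse(y_true, y_pred))
```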
BRT (boosted regression trees) and other data-driven methods for solar irradiance were analyzed in [7], but the suggested model error increases with the horizon. Moreover, the computations and costs are high for data-driven methods. Prediction model validation using uncertainty quantification was carried out in [8]. By optimizing the parameters and selecting features, the predictive model accuracy improves. Feature selection and interpretable methods were tried to increase the forecasting accuracy in [9, 10], but this is a complex task with a high computational burden. Sometimes, the individual model performance is not satisfactory, which draws attention towards an ensemble predictive model. The best way to conquer the limitation of an individual prediction model is to combine the various models into averaged models to achieve high accuracy [11]. This paper presents a novel combined long short-term memory network to predict solar irradiance over short-term horizons. The proposed model comprises six individual LSTMs with various input and framework structures. The primary aim of the proposed CLSTMN model is to improve the generalization and prediction ability.

The significant contributions of the proposed CLSTMN prediction model are described as follows: (1) A novel short-term solar irradiance prediction model is proposed using six individual input LSTMs. (2) A unique combinational framework is proposed. (3) Real-time actual sunny and cloudy day datasets are applied to verify the proposed CLSTMN model performance. (4) The proposed framework improves the generalization and accuracy. (5) The problem of input-data uncertainty can be overcome with the use of the proposed CLSTMN. (6) The versatile capability of the proposed prediction model is proved on sunny and cloudy weather datasets for one-hour- to six-hour-ahead prediction of solar irradiance. (7) A performance comparison with other baseline prediction models is carried out.

Section 1 gives the introduction, and the prior works related to solar irradiance prediction are discussed in Section 2. Section 3 presents the proposed combined long short-term memory network framework and mathematical modeling. Section 4 covers the experimental details, and Section 5 presents the results and discussion on sunny and cloudy days. Finally, Section 6 summarizes the conclusion, and Section 7 discusses the proposed predictive model limitations and future research.

## 2. Related Work

Several short-term prediction models have been developed in the field of solar irradiance prediction. Xiang et al. [12] carried out persistence extreme learning machine-based solar power forecasting for the short-term horizon. Ferrari et al. [13] presented solar radiation prediction using a statistical approach and stated that ARIMA has the fewest parameters compared to AR and ARMA. de Araujo [14] investigated the WRF (Weather Research and Forecasting) and LSTM performance to forecast solar radiation. A-Sbou and Alawasa [15] performed prediction of solar radiation in Mutah city with NARX (nonlinear autoregressive recurrent neural network with exogenous inputs). El Alani et al. studied a multilayer perceptron neural network for global horizontal irradiance over the short-term horizon [16]. Madhiarasan and Deepa [17] developed a solar irradiance forecasting model with an innovative neural network in which apt hidden neurons are identified with the use of a deciding standard. Gutierrez-Corea et al.
[18] pointed out various inputs associated with artificial neural networks for global solar irradiance forecasting over short-term horizons. Halpern-Wight et al. [19] analyzed LSTMs with one and five hidden layers for the solar forecasting application; the investigation stated that, compared with the five-hidden-layer LSTM, the single-layer LSTM-based forecasting model provides the lowest errors. Kartini and Chen [20] presented a combinational forecasting model that used k-NN (k-nearest neighbour) and BPLNN (multilayer backpropagation learning neural network) for one-hour-ahead GSI (global solar irradiance) forecasting. Mishra and Palanisamy [21] suggested an RNN (recurrent neural network) for solar irradiance forecasting over multiple time horizons. Bae et al. [22] suggested a K-means clustering associated support vector machine for one-hour-ahead prediction of solar irradiance. A complex structured hybrid prediction model requires more computation, which increases the training time. We still need a reliable and robust prediction model to address solar irradiance prediction. Although much research exists on solar irradiance prediction, the generalization issue still needs to be addressed. This paper overcomes the deficiencies of the individual LSTM models and the meteorological parameters' impact on solar irradiance. This research work used combinations of various inputs and frameworks based on the LSTM model to enhance the generalization ability for short-term solar irradiance prediction.

Using the combination of individual long short-term memory networks with various inputs, we accomplish the following benefits over the prior predictive models: (i) By trading off variance and bias, the proposed model can reach a better generalized solution in most cases. (ii) Ability to resolve underfitting and overfitting issues. (iii) The network stability issue is removed by the proposed CLSTMN combination approach. (iv) In contrast to a single LSTM model approach, the proposed CLSTMN is an average of numerous input-associated LSTM models that can overcome the limitations and uncertainties associated with a single LSTM model. (v) Occurrence of local minima is avoided. (vi) Able to manage seasonal and cyclical changes. (vii) The suggested model is more useful, easier to use, and more accurate in the prediction of short-term solar irradiance than the current models.

## 3. Proposed Combined Long Short-Term Memory Network

This paper presents a novel combined LSTM network-based prediction for short-term solar irradiance prediction. The concept, framework, and mathematical modeling of the proposed CLSTMN are detailed in this section.

### 3.1. Long Short-Term Memory Network

In 1997, Hochreiter and Schmidhuber [23] devised the long short-term memory network, which can overcome the vanishing gradient issue and handle long-term dependence. Therefore, LSTM is suitable for time series prediction applications. It is a particular variant of the recurrent neural network. The CEC (constant error carousel), a linear unit self-connected recurrently, is used to store the long-term dependence. The LSTM network comprises a cell state and three gates: input, forget, and output. The steps incurred in the LSTM network are as follows: Step 1. The forget gate identifies the irrelevant information from the past time step and passes it to the cell state. Step 2. With the help of the input gate, the cell state is updated with the new input information. Step 3.
The critical information passed to the next hidden state is determined using the output gate. Step 4. The cell state is used to retain the relevant knowledge of the past over a long time.

#### 3.1.1. Mathematical Model of LSTM

(1) Forget gate: $F_t = \sigma_{\mathrm{sig}}\left(W_F\left[u_t, H_{t-1}\right] + b_F\right)$
(2) Input gate: $I_t = \sigma_{\mathrm{sig}}\left(W_I\left[u_t, H_{t-1}\right] + b_I\right)$
(3) Cell agent: $C_t = \sigma_{\tanh}\left(W_C\left[u_t, H_{t-1}\right] + b_C\right)$
(4) Output gate: $O_t = \sigma_{\mathrm{sig}}\left(W_O\left[u_t, H_{t-1}\right] + b_O\right)$
(5) Hidden state: $H_t = O_t \circ \sigma_{\tanh}\left(S_t\right)$
(6) Cell state: $S_t = F_t \circ S_{t-1} + I_t \circ C_t$
where $\circ$ is the Hadamard product, $W$ denotes the weights of the respective gates, $H_{t-1}$ is the LSTM block output at time stamp $t-1$, $u_t$ is the current time stamp input, and $b$ is the bias of the respective gates.

### 3.2. Combined Long Short-Term Memory Network

Under different inputs, model architectures, and uncertainties, maintaining stable performance is the aim of generalization. Six LSTM models with various inputs and hidden neurons (frameworks) are used to develop the proposed model. We can overcome the uncertain irregularity present in solar irradiance with the help of atmospheric input features. Using combinations of LSTM models can reduce the error values and improve the network stability. The inputs selected for the proposed model are parameters with a high influence on solar irradiance. We fix the hidden neurons by a trial-and-error approach [24, 25], and the identified optimal hidden neurons for each individual LSTM are used for the development of the proposed CLSTMN structure. Figure 1 shows the proposed CLSTMN framework.

Figure 1 The framework of the proposed CLSTMN.

The proposed CLSTMN mathematical model is as follows:
(7) $\mathrm{CLSTMN\;output} = \frac{1}{N}\sum_{j=1}^{N}\mathrm{LSTM}_j$, for $j = 1, 2, 3, \cdots, N$, where $N$ is the number of LSTMs.
(8) $\mathrm{Predicted\;solar\;irradiance} = \frac{\mathrm{LSTM}_{2\;\mathrm{inputs}} + \mathrm{LSTM}_{3\;\mathrm{inputs}} + \mathrm{LSTM}_{4\;\mathrm{inputs}} + \mathrm{LSTM}_{5\;\mathrm{inputs}} + \mathrm{LSTM}_{6\;\mathrm{inputs}} + \mathrm{LSTM}_{7\;\mathrm{inputs}}}{6}.$

Steps of the proposed CLSTMN algorithm are as follows: Step 1. Collect the solar irradiance and meteorological parameters for sunny and cloudy days. Step 2. Normalize the data. Step 3. Divide the data into training and testing sets. Step 4. Design the individual LSTM models with various inputs. Step 5. Train the designed LSTM models and predict the solar irradiance on the test data. Step 6. Average all LSTM model predictions to obtain the final solar irradiance. Step 7. Compute the evaluation metrics and compare them with those of the other baseline prediction models.

The proposed CLSTMN model parameters and values are as follows: Input neurons = 2, 3, 4, 5, 6, and 7. Hidden neurons = 2, 4, 5, 7, 9, and 11. Output neuron = 1. Optimizer = Adam. Loss function = RMSE. Epochs = 200.

Moreover, the generalization inability of the individual models can be improved by averaging the various framework-based LSTM prediction models. We improved the accuracy in terms of a reduction in the prediction error. The proposed model could manage the model stability and convergence.
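To make the gate equations (1)–(6) and the averaging in (7)–(8) concrete, the short sketch below shows one LSTM cell step and a CLSTMN-style ensemble average in plain NumPy. It is only an illustration of the formulas above, not the authors' MATLAB implementation; the function names (`lstm_step`, `clstmn_predict`), the array shapes, and the random weights are assumptions made for the example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(u_t, H_prev, S_prev, W, b):
    """One LSTM step following Eqs. (1)-(6): every gate acts on the stacked [u_t, H_{t-1}]."""
    z = np.concatenate([u_t, H_prev])   # stacked input [u_t, H_{t-1}]
    F_t = sigmoid(W["F"] @ z + b["F"])  # (1) forget gate
    I_t = sigmoid(W["I"] @ z + b["I"])  # (2) input gate
    C_t = np.tanh(W["C"] @ z + b["C"])  # (3) cell candidate ("cell agent")
    O_t = sigmoid(W["O"] @ z + b["O"])  # (4) output gate
    S_t = F_t * S_prev + I_t * C_t      # (6) cell state (Hadamard products)
    H_t = O_t * np.tanh(S_t)            # (5) hidden state
    return H_t, S_t

def clstmn_predict(member_predictions):
    """Eqs. (7)/(8): the CLSTMN output is the mean of the N member forecasts."""
    return np.mean(np.stack(member_predictions, axis=0), axis=0)

# Tiny usage example with random weights (3 input features, hidden size 4).
rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W = {g: rng.normal(size=(n_hid, n_in + n_hid)) * 0.1 for g in "FICO"}
b = {g: np.zeros(n_hid) for g in "FICO"}
H, S = np.zeros(n_hid), np.zeros(n_hid)
H, S = lstm_step(rng.normal(size=n_in), H, S, W, b)

# Six hypothetical member forecasts (one per LSTM) averaged into the CLSTMN output.
members = [rng.uniform(0.4, 0.6, size=6) for _ in range(6)]
print(clstmn_predict(members))
```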
## 4. Experimental Details

The proposed solar irradiance prediction approach runs on the MATLAB platform using an HP laptop with an AMD Ryzen 5 3550H processor, 8 GB of 2100 MHz RAM, and a 4 GB NVIDIA GeForce GTX 1650. Achieving the desired prediction accuracy is crucial because of the randomness characteristics of solar irradiance. This paper attempts to predict solar irradiance over a period of a few minutes to 6 hours.
Various inputs and different frameworks of the LSTM models were used to develop the proposed model. Averaging the six individual LSTM models improved accuracy and reduced error.

### 4.1. Dataset and Input Parameter Details

We obtained the sunny day and cloudy day datasets from NOAA. The datasets were collected at latitude 40.05° N and longitude 88.37° W during 2021. The following parameters are used as inputs to the proposed CLSTMN model: (1) Solar irradiance in W/m2, (2) Temperature in °C, (3) Wind speed in m/s, (4) Wind direction in degrees, (5) Pressure in mb, (6) Relative humidity in %, and (7) Cloud cover in oktas.

### 4.2. Normalization

Normalization improves the prediction performance by improving the training efficiency. The actual inputs are normalized based on Min–Max normalization. The Min–Max formulation is as follows:
(9) $\mathrm{Normalized\;input},\; u_j' = \frac{u_j - u_{\min}}{u_{\max} - u_{\min}},$
where $u_j$ is the actual input value, $u_{\min}$ is the minimum input value, and $u_{\max}$ is the maximum input value.

### 4.3. Training and Testing Datasets

We carried out the proposed model training using two days of data (2800 samples) and testing with 6 hours of data (360 samples). Figures 2 and 3 depict the sunny day dataset training and testing samples and the cloudy day dataset training and testing samples, respectively. We validate the proposed prediction model on the sunny and cloudy day datasets.

Figure 2 Sunny day dataset training and testing samples.
Figure 3 Cloudy day dataset training and testing samples.

### 4.4. Evaluation Metric

We used the RMSE, MAPE, and MSE as evaluation metrics to evaluate the proposed CLSTMN model performance. The stated evaluation metric formulations are as follows:
(10) $\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{j=1}^{N}\left(u_j' - u_j\right)^2},$
(11) $\mathrm{MAPE} = \frac{100}{N}\sum_{j=1}^{N}\left|\frac{u_j' - u_j}{\bar{u}_j}\right|,$
(12) $\mathrm{MSE} = \frac{1}{N}\sum_{j=1}^{N}\left(u_j' - u_j\right)^2,$
where $N$ is the total number of data samples, $u_j'$ is the actual output, $\bar{u}_j$ is the average actual output, and $u_j$ is the predicted output.
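As a concrete reading of Eqs. (9)–(12), the short sketch below implements Min–Max normalization and the RMSE, MAPE, and MSE metrics and applies them to a toy pair of actual and predicted series. It is an illustrative sketch only (the function names and toy numbers are assumptions, not the authors' MATLAB code); RMSE is taken as the square root of the MSE, which is consistent with the values later reported in Tables 1 and 2.

```python
import numpy as np

def min_max_normalize(u):
    """Eq. (9): scale each value into [0, 1] using the series minimum and maximum."""
    return (u - u.min()) / (u.max() - u.min())

def mse(actual, predicted):
    """Eq. (12): mean squared error."""
    return np.mean((actual - predicted) ** 2)

def rmse(actual, predicted):
    """Eq. (10): root mean squared error (square root of the MSE)."""
    return np.sqrt(mse(actual, predicted))

def mape(actual, predicted):
    """Eq. (11): mean absolute percentage error relative to the mean actual value."""
    return 100.0 / len(actual) * np.sum(np.abs((actual - predicted) / actual.mean()))

# Toy example: normalized actual irradiance and a slightly perturbed prediction.
actual = min_max_normalize(np.array([120.0, 340.0, 560.0, 610.0, 480.0, 220.0]))
predicted = actual + np.array([0.01, -0.02, 0.015, -0.005, 0.0, 0.01])

print(f"RMSE={rmse(actual, predicted):.4f}  MAPE={mape(actual, predicted):.4f}  MSE={mse(actual, predicted):.6f}")
```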
## 5. Results and Discussion

We report the proposed CLSTMN model-based simulation results for short-term solar irradiance prediction on sunny and cloudy days. We tested the proposed CLSTMN model on a testing dataset (360 data samples for each of the cloudy day and sunny day), and the achieved results are reported in Tables 1 and 2.

Table 1 Proposed CLSTMN-based results on sunny datasets for different hours-ahead prediction of solar irradiance.

| Hour-ahead prediction | RMSE in W/m2 | MAPE in % | MSE in W/m2 |
| --- | --- | --- | --- |
| 1 hour ahead | 7.7729×10−04 | 8.2479×10−05 | 6.0419×10−07 |
| 2 hours ahead | 0.0031 | 2.5029×10−04 | 9.5190×10−06 |
| 3 hours ahead | 0.0087 | 6.7010×10−04 | 7.5604×10−05 |
| 4 hours ahead | 0.0130 | 0.0010 | 1.6916×10−04 |
| 5 hours ahead | 0.0162 | 0.0015 | 2.6139×10−04 |
| 6 hours ahead | 0.0157 | 0.0017 | 2.4627×10−04 |

Table 2 The proposed CLSTMN-based results on cloudy datasets for different hour-ahead prediction of solar irradiance.

| Hour-ahead prediction | RMSE in W/m2 | MAPE in % | MSE in W/m2 |
| --- | --- | --- | --- |
| 1 hour ahead | 1.2969×10−04 | 1.6882×10−04 | 1.6819×10−08 |
| 2 hours ahead | 0.0069 | 0.0015 | 4.7559×10−05 |
| 3 hours ahead | 0.0273 | 0.0042 | 7.4344×10−04 |
| 4 hours ahead | 0.0328 | 0.0049 | 0.0011 |
| 5 hours ahead | 0.0029 | 0.0041 | 5.2663×10−04 |
| 6 hours ahead | 0.0176 | 0.0043 | 3.0863×10−04 |

### 5.1. Short-Term Prediction of Solar Irradiance on Sunny Days

The performance of the proposed CLSTMN model was evaluated on the sunny day dataset. The achieved results for short-term solar irradiance prediction, from one hour ahead to six hours ahead, are given in Table 1 and Figures 4–21. The predicted solar irradiance for sunny days accurately matches the actual solar irradiance, as noted in Figures 4, 7, 10, 13, 16, and 19. Figures 5, 8, 11, 14, 17, and 20 show that the prediction on sunny days closely matches the actual solar irradiance; thus, the prediction errors are near zero. The predicted solar irradiance is linearly matched with the actual solar irradiance, as is clearly perceived from Figures 6, 9, 12, 15, 18, and 21.

Figure 4 Comparison of predicted solar irradiance and actual solar irradiance for sunny day one-hour-ahead prediction. Figure 5 Prediction error vs. time for sunny day one-hour-ahead prediction. Figure 6 Relationship between actual vs. predicted solar irradiance for sunny day one-hour-ahead prediction. Figure 7 Comparison of predicted solar irradiance and actual solar irradiance for sunny day two-hour-ahead prediction. Figure 8 Prediction error vs. time for sunny day two-hour-ahead prediction. Figure 9 Relationship between actual vs. predicted solar irradiance for sunny day two-hour-ahead prediction. Figure 10 Comparison of predicted solar irradiance and actual solar irradiance for sunny day three-hour-ahead prediction. Figure 11 Prediction error vs. time for sunny day three-hour-ahead prediction. Figure 12 Relationship between actual vs. predicted solar irradiance for sunny day three-hour-ahead prediction. Figure 13 Comparison of predicted solar irradiance and actual solar irradiance for sunny day four-hour-ahead prediction. Figure 14 Prediction error vs. time for sunny day four-hour-ahead prediction. Figure 15 Relationship between actual vs.
predicted solar irradiance for sunny day four-hour-ahead prediction. Figure 16 Comparison of predicted solar irradiance and actual solar irradiance for sunny day five-hour-ahead prediction. Figure 17 Prediction error vs. time for sunny day five-hour-ahead prediction. Figure 18 Relationship between actual vs. predicted solar irradiance for sunny day five-hour-ahead prediction. Figure 19 Comparison of predicted solar irradiance and actual solar irradiance for sunny day six-hour-ahead prediction. Figure 20 Prediction error vs. time for sunny day six-hour-ahead prediction. Figure 21 Relationship between actual vs. predicted solar irradiance for sunny day six-hour-ahead prediction.

The prediction errors for one-hour-ahead prediction are the lowest (RMSE 7.7729×10−04, MAPE 8.2479×10−05, and MSE 6.0419×10−07), and these predictions are better than the other hour-ahead predictions. The sunny day results of the proposed CLSTMN for 6-hour-ahead prediction have the evaluation metrics RMSE 0.0157, MAPE 0.0017, and MSE 2.4627×10−04. The performance of the proposed CLSTMN model on the sunny day dataset results in better prediction accuracy for one-hour-, two-hour-, three-hour-, four-hour-, five-hour-, and six-hour-ahead prediction of solar irradiance.

### 5.2. Short-Term Prediction of Solar Irradiance on Cloudy Days

The proposed CLSTMN model performance is further evaluated on the cloudy day dataset, and the achieved results for short-term solar irradiance prediction, from one hour ahead to six hours ahead, are tabulated in Table 2 and shown in Figures 22–39. From Figures 22, 25, 28, 31, 34, and 37, it is clearly perceived that, for cloudy days, the predicted solar irradiance matches the actual solar irradiance accurately. Therefore, the prediction errors are the lowest, as noticed in Figures 23, 26, 29, 32, 35, and 38. The prediction on the cloudy day dataset closely matches the actual solar irradiance. The linear relationship between actual vs. predicted solar irradiance for the cloudy day dataset is depicted in Figures 24, 27, 30, 33, 36, and 39.

Figure 22 Comparison of predicted solar irradiance and actual solar irradiance for cloudy day one-hour-ahead prediction. Figure 23 Prediction error vs. time for cloudy day one-hour-ahead prediction. Figure 24 Relationship between actual vs. predicted solar irradiance for cloudy day one-hour-ahead prediction. Figure 25 Comparison of predicted solar irradiance and actual solar irradiance for cloudy day two-hour-ahead prediction. Figure 26 Prediction error vs. time for cloudy day two-hour-ahead prediction. Figure 27 Relationship between actual vs. predicted solar irradiance for cloudy day two-hour-ahead prediction. Figure 28 Comparison of predicted solar irradiance and actual solar irradiance for cloudy day three-hour-ahead prediction. Figure 29 Prediction error vs. time for cloudy day three-hour-ahead prediction. Figure 30 Relationship between actual vs. predicted solar irradiance for cloudy day three-hour-ahead prediction. Figure 31 Comparison of predicted solar irradiance and actual solar irradiance for cloudy day four-hour-ahead prediction. Figure 32 Prediction error vs. time for cloudy day four-hour-ahead prediction. Figure 33 Relationship between actual vs.
predicted solar irradiance for cloudy day four-hour-ahead prediction. Figure 34 Comparison of predicted solar irradiance and actual solar irradiance for cloudy day five-hour-ahead prediction. Figure 35 Prediction error vs. time for cloudy day five-hour-ahead prediction. Figure 36 Relationship between actual vs. predicted solar irradiance for cloudy day five-hour-ahead prediction. Figure 37 Comparison of predicted solar irradiance and actual solar irradiance for cloudy day six-hour-ahead prediction. Figure 38 Prediction error vs. time for cloudy day six-hour-ahead prediction. Figure 39 Relationship between actual vs. predicted solar irradiance for cloudy day six-hour-ahead prediction.

For the cloudy day dataset, the proposed CLSTMN 6-hour-ahead prediction results are RMSE 0.0176, MAPE 0.0043, and MSE 3.0863×10−04. The one-hour-ahead prediction of solar irradiance is more precise than the other hour-ahead predictions, with RMSE 1.2969×10−04, MAPE 1.6882×10−04, and MSE 1.6819×10−08. The proposed CLSTMN model evaluation on the cloudy day dataset results in better accuracy for short-term solar irradiance prediction (one to 6 hours ahead). Sunny days have a higher solar irradiance than cloudy days, and the sunny day prediction using the CLSTMN is competitive with the cloudy day prediction. The proposed CLSTMN model can generalize well to model and input uncertainty and accurately predict the actual solar irradiance with minor evaluation metrics. Based on the analysis of the obtained results, we find that the proposed CLSTMN prediction model achieved improved prediction accuracy and generalization ability on both sunny and cloudy day datasets. The precise prediction of solar irradiance using the proposed CLSTMN benefits effective planning and scheduling of solar energy systems.

### 5.3. Comparative Analysis with the Baseline Models

In addition, a comparative analysis was carried out to prove the predictive ability of the proposed model against the baseline models. The persistence model and other well-known predictive models (ARIMA, WRF, RNN, k-NN-BPLNN, SVM, MLP, NARX, and LSTM) were used as baseline models to verify and compare the performance of the proposed CLSTMN prediction model. We kept the parameters of the considered baseline models the same as mentioned in the respective research papers but validated them on our collected datasets. For the sunny and cloudy day datasets, the proposed CLSTMN model provides consistent results for one-hour- to six-hour-ahead prediction of solar irradiance. Thus, the proposed CLSTMN model can accurately predict the solar irradiance that matches the actual solar irradiance on the short-term horizon. The compared baseline predictive models predicted the solar irradiance less accurately over short-term horizons on the considered datasets. This is clearly observed in Table 3 and, for a better understanding, in the 3D column chart depicted in Figure 40. The baseline model-based predicted solar irradiance is not close to the actual values in both datasets; hence, their performance evaluation metrics increase and the accuracy decreases.

Table 3 Comparative analysis of the proposed CLSTMN with the baseline prediction models on sunny and cloudy datasets.
| S. No | Authors | Year | Prediction model | RMSE in W/m2 (sunny days) | RMSE in W/m2 (cloudy days) |
| --- | --- | --- | --- | --- | --- |
| 1 | Xiaoyan Xiang et al. | 2021 | Persistence | 48.9464 | 49.1759 |
| 2 | Ferrari, Stefano et al. | 2013 | ARIMA | 3.0143 | 3.1354 |
| 3 | de Araujo, Jose Manuel Soares | 2020 | WRF | 25.4847 | 26.2691 |
| 4 | Mishra, Sakshi & Praveen Palanisamy | 2018 | RNN | 1.4719 | 2.0143 |
| 5 | Kartini, Unit Three & Chao Rong Chen | 2017 | k-NN-BPLNN | 0.9464 | 0.9870 |
| 6 | Kuk Yeol Bae et al. | 2017 | SVM | 0.7997 | 0.8050 |
| 7 | Omaima El Alani et al. | 2019 | MLP | 0.4041 | 0.4357 |
| 8 | Yazeed A. A-Sbou & Khaled M. Alawasa | 2017 | NARX | 0.5268 | 0.5794 |
| 9 | Naylani Halpern-Wight et al. | 2020 | LSTM | 0.2519 | 0.3133 |
| 10 | Manoharan Madhiarasan & Mohamed Louzazni | 2022 | Proposed CLSTMN 1 hour ahead | 7.7729×10−04 | 1.2969×10−04 |
| | | | Proposed CLSTMN 2 hours ahead | 0.0031 | 0.0069 |
| | | | Proposed CLSTMN 3 hours ahead | 0.0087 | 0.0273 |
| | | | Proposed CLSTMN 4 hours ahead | 0.0130 | 0.0328 |
| | | | Proposed CLSTMN 5 hours ahead | 0.0162 | 0.0029 |
| | | | Proposed CLSTMN 6 hours ahead | 0.0157 | 0.0176 |

Bold implies the best result.

Figure 40 Comparative analysis of the proposed CLSTMN with the baseline prediction models.

In summary, a highly accurate short-term prediction model is proposed using the combined long short-term memory network, and it adapts easily to climatic conditions and model framework variations. The predicted solar irradiance values of the proposed model are a good fit to the actual values. The proposed model can handle changes in the inputs and the model framework; thus, it generalizes well to both datasets and achieves better results than the other compared predictive models.
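For context on the baselines in Table 3, the persistence model is the simplest reference forecast: it predicts that the irradiance h steps ahead will equal the most recent observation. The sketch below is only a generic illustration of such a baseline (the function name and the toy series are assumptions, not code from the compared studies); its output would be scored with the same RMSE, MAPE, and MSE metrics as the CLSTMN forecasts.

```python
import numpy as np

def persistence_forecast(series, horizon):
    """Naive persistence baseline: the forecast h steps ahead equals the last observed value."""
    series = np.asarray(series, dtype=float)
    return np.full(horizon, series[-1])

# Toy example: observed normalized irradiance, then a 6-step-ahead persistence forecast.
observed = [0.10, 0.35, 0.58, 0.62, 0.47, 0.21]
print(persistence_forecast(observed, horizon=6))  # -> six copies of 0.21
```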
## 6. Conclusion

The amalgamation of various input- and framework-based individual LSTM models may help increase the prediction accuracy and learning ability and make the model suitable for general application. The relationships among the various inputs are handled effectively by the proposed CLSTMN model. The ensemble of six LSTM models with different inputs and frameworks enables the proposed model to extract the dependencies between solar irradiance and the meteorological parameters accurately. Thus, the uncertainties about the model framework and inputs are managed effectively, which leads to the better prediction results of the proposed model for short-term solar irradiance prediction. The learning ability of the proposed model is highly improved, which enables the proposed model to predict 1 hour, 2 hours, 3 hours, 4 hours, 5 hours, and 6 hours ahead of solar irradiance precisely. In addition, performance validation on the sunny day and cloudy day datasets was performed to verify the prediction ability of the proposed CLSTMN model for 1-hour- to 6-hour-ahead prediction. The evaluation on the sunny day and cloudy day datasets shows that the proposed model can deliver better performance in a highly uncertain situation. Thus, we attain the best prediction results on sunny and cloudy days for 1-hour- to 6-hour-ahead predictions with the proposed CLSTMN model. The comparison of the proposed CLSTMN model with the baseline predictive models and the result analysis have proved the prediction effectiveness of the proposed CLSTMN model for sunny and cloudy day short-term solar irradiance prediction with the lowest evaluation metrics. The risk of integrating solar energy into the electric grid is reduced through the simple and workable proposed CLSTMN short-term prediction model.
## 7. Proposed Predictive Model Limitations and Future Research

Despite its improved prediction accuracy, this model has the limitation of a higher computational cost than an individual model. The future works are as follows: (i) Extend the applicability of the proposed CLSTMN and investigate it further by performing multihorizon-based solar irradiance prediction. (ii) The authors intend to use an optimization algorithm to identify the optimal hyperparameters of the LSTM network. (iii) Develop an FPGA model and apply the model to real-world scenarios. --- *Source: 1004051-2022-08-16.xml*
1004051-2022-08-16_1004051-2022-08-16.md
45,166
Combined Long Short-Term Memory Network-Based Short-Term Prediction of Solar Irradiance
Manoharan Madhiarasan; Mohamed Louzazni
International Journal of Photoenergy (2022)
Engineering & Technology
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2022/1004051
1004051-2022-08-16.xml
--- ## Abstract Achieving the highly accurate and generic prediction of solar irradiance is arduous because solar irradiance possesses intermittent randomness and is influenced by meteorological parameters. Therefore, this paper endeavors a new combined long short-term memory network (CLSTMN) with various influence meteorological parameters as inputs. We investigated the proposed predictive model applicability for short-term solar irradiance prediction application and validated it in the real-time metrological dataset. The proposed prediction model is combined and accumulated by various inputs, incurring six individual long short-term memory models to improve solar irradiance prediction accuracy and generalization. Thus, the CLSTMN-based solar irradiance prediction can be generic and overcome the metrological parameters concerning variability. The experimental results ensure good prediction accuracy with minimal evaluation metrics of the proposed CLSTMN for solar irradiance prediction. The RMSE, MAPE, and MSE achieved based on the proposed CLSTMN one-hour-ahead prediction are7.7729×10−04,8.2479×10−05, and6.0419×10−07and for six-hour-ahead prediction are 0.0157, 0.0017, and2.4627×10−04for sunny days, and for cloudy days, the RMSE, MAPE, and MSE achieved based on the proposed CLSTMN one-hour-ahead prediction are1.2969×10−04,1.6882×10−04, and1.6819×10−08and for six-hour-ahead prediction are 0.0176, 0.0043, and3.0863×10−04, respectively. Finally, we investigate the CLSTMN performance effectiveness by comparative analysis with well-known baseline models. The investigative study shows the surpassing prediction performance of the proposed CLSTMN for short-term solar irradiance prediction. --- ## Body ## 1. Introduction Nowadays, much attention pays to solar energy systems because of increased energy crises and CO2 emissions. Unfortunately, we do not have consistent sunshine for 24 hours. Sometimes we have sunny and cloudy days despite the seasons. The most frequently changing tendency of solar irradiance creates a challenging task to integrate the PV system into the power system. Short-term solar irradiance prediction is aimed at predicting the solar irradiance for 30 minutes to 6 hours. The short-term prediction of solar irradiance requires making effective operational decisions, automatic generation control, energy commercialization, maintenance, scheduling, economic dispatch, and unit commitment [1, 2]. The multivariate problem and dependence of inputs can learn more effectively by the artificial neural network than by statistical and NWP-based prediction models. The recurrent neural network belongs to the feedback artificial neural network. LSTM is a variant of recurrent neural networks. A tremendous amount of interest has been received in the long short-term memory (LSTM) of time series and sequence database applications. Recently, LSTM was used for a wide range of applications [3–6]. The prediction model’s performance depends not only on the selection of the input but also on the model framework playing an essential role in prediction accuracy. Through the use of data-driven models, it is possible to capture the underlying mapping of solar irradiance. BRT (boosted regression trees) and other data-driven method-based solar irradiance were analyzed in [7] but suggested model error increases with the horizon. However, the computations and costs are high for the data-driven method. Prediction model validation using uncertainty quantification was carried out in [8]. 
By optimizing the parameters and selecting features, the predictive model accuracy improves. Using feature selection and interpretable methods tried to increase the forecasting accuracy in [9, 10], but it is a complex and high computation burden task.Sometimes, the individual model performance is not satisfactory, which pays attraction towards an ensemble predictive model. The best way to conquer the limitation of the individual prediction model is to combine the various models into averaged models to achieve high accuracy [11]. This paper presents a novel combined long short-term memory network to predict solar irradiance in short-term horizons. The proposed model comprises six individual LSTMs with various input and framework structures. The primary aim of the proposed CLSTMN model is to improve the generalization and prediction ability.The significant contributions of the proposed prediction CLSTMN models are described as follows:(1) A novel short-term solar irradiance prediction model is proposed using six individual input LSTMs(2) A unique combinational framework is proposed(3) Real-time actual sunny and cloudy day datasets are applied to verify the proposed CLSTMN model performance(4) The proposed framework improves the generalization and accuracy(5) The problem of input data based uncertain can overcome the use of the proposed CLSTMN(6) The proposed prediction model versatile capability proved on sunny and cloudy weather dataset based on prediction of solar irradiance for one-hour- to six-hour-ahead prediction(7) Carry out the performance comparison with other baseline prediction modelsSection1 describes the introduction and the prior works related to solar irradiance prediction discussed in Section 2. Section 3 presents the proposed combined long short-term memory network framework and mathematical modeling. Section 4 covers the experimental details, and Section 5 presents the results and discussion on sunny and cloudy days. Finally, Section 6 summarizes the conclusion, and Section 7 discusses the proposed predictive model limitations and future research. ## 2. Related Work Several short-term prediction models were developed in the field of solar irradiance prediction applications. Xiang et al. [12] carried out the persistence extreme learning machine-based solar power forecasting for the short-term horizon. Ferrari et al. [13] presented solar radiation prediction using the statistical approach and stated that ARIMA has minimum parameters compared to the AR and ARMA. de Araujo [14] investigated the WRF (Weather Research Forecasting) and LSTM performance to forecast solar radiation. A-Sbou and Alawasa [15] performed prediction of solar radiation in Mutah city with NARX (Nonlinear Autoregressive RNN with exogenous). El Alani et al. studied multilayer perceptron neural network based on global horizontal irradiance for short-term horizon [16]. Madhiarasan and Deepa [17] developed a solar irradiance forecasting model with an innovative neural network, and apt hidden neurons are identified with the use of the deciding standard. Gutierrrez-Corea et al. [18] pointed out various inputs associated with artificial neural networks based on global solar irradiance forecasting for short-term horizons. Halpern-Wight et al. [19] analyzed LSTM with one and five hidden layers for the solar forecasting application. 
The investigation stated that for over five hidden layers, LSTM single LSTM-based forecasting model provides the lowest errors.Kartini and Chen [20] presented a combinational forecasting model that used k-NN (k-nearest neighbour) and BPLNN (multilayer backpropagation learning neural network) for one-hour-ahead GSI (global solar irradiance) forecasting. Mishra and Palanisamy [21] suggested RNN (recurrent neural network) based on solar irradiance forecasting for multitime horizons. Bae et al. [22] suggested K-mean clustering associated support vector machine based on one-hour-ahead prediction of solar irradiance. Complex structured hybrid prediction model requires more computation, which leads to increase the training time. We still need a reliable and robust prediction model to address the solar irradiance prediction. Although more research exists on solar irradiance prediction, a generalization issue still needs to be addressed. This paper overcomes the deficiencies of the individual LSTM models and meteorological parameters’ impact on solar irradiance. This research work used combinations of various inputs and frameworks based on the LSTM model to enhance the generalization ability for the short-term solar irradiance prediction.Using the combination of individual long short-term memory networks with various inputs, we accomplish the following benefits than the prior predictive models:(i) By compromising variance and bias, the proposed model can reach a better generalized solution in most cases(ii) Ability to resolve underfitting and overfitting issues(iii) Network stability issue gets rid of the proposed combination approach based on CLSTMN(iv) In contrast to a single LSTM model approach, the proposed CLSTMN is an average of numerous input-associated LSTM models that can overcome the limitations and uncertainties associated with a single LSTM model(v) Occurrence of local minima is avoided(vi) Able to manage seasonal and cyclical changes(vii) The suggested model is more useful, easier to use, and more accurate in the prediction of short-term solar irradiance than the current model ## 3. Proposed Combined Long Short-Term Network This paper presents a novel combined LSTM network-based prediction for the short-term solar irradiance prediction. The concept, framework, and mathematical modeling of the proposed CLSTMN are detailed in this section. ### 3.1. Long Short-Term Memory Network In 1997, Hochreiter and Schmidhuber [23] devised a long short-term memory network that can overcome the vanishing gradient issue and handle the long-term dependence. Therefore, LSTM is suitable for time series prediction applications. It is a particular variant of the recurrent neural network. The CEC (constant error carousel) is used to store the long-term dependence. It is a linear unit self-connected recurrently. The LSTM network comprises a cell state and three gates: input, forget, and output. The steps incurred in the LSTM network are as follows:Step 1. Forget gate identifies the irrelevant information from the pastime step and passes it to the cell state.Step 2. With the help of the input gate, it updates the cell state with the new input information.Step 3. The critical information passed to the next hidden state is determined using the output gates.Step 4. Cell state is used to have the knowledge of the past relevant over a long time. #### 3.1.1. 
Mathematical Model of LSTM (1)Forgetgate,Ft=σsigWFut,Ht−1+bF,(2)Inputgate,It=σsigWIut,Ht−1+bI,(3)CellagentCt=σtanWCut,Ht−1+bC(4)Outputgate,Ot=σsigWOut,Ht−1+bO,(5)HiddenstateHt=Ot∘σtanhSt(6)Cellstate.St=Ft∘St−1+It∘Ctwhere∘ is the Hadamard Product, W is weights of the respective gates, Ht−1 is the time stamp t−1 past LSTM block output, ut is the current timestamp input, and b is the bias of the respective gates. ### 3.2. Combined-Long Short-Term Memory Network Under different inputs, model architecture, and uncertainties, maintaining stable performance is the aim of generalization. The six LSTM models with various inputs and hidden neurons (framework) are used to develop the proposed model. We can overcome the uncertain irregularity present in solar irradiance with the help of atmospheric input features. Using the combinations of the LSTM model can reduce the error values and improve the network stability. The selected inputs to the proposed model are high influenced parameters on solar irradiance. We fix hidden neurons by a trial-and-error approach [24, 25] and the identified optimal hidden neurons for each individual LSTM are used for the development of the proposed CLSTMN structure. Figure 1 shows the proposed CLSTMN framework.Figure 1 The framework of the proposed CLSTMN.The proposed CLSTMN mathematical model is as follows:(7)CLSTMNoutput=1N∑j=1NLSTMjforj=1,2.3⋯,N.Let,N the number of LSTMs. (8)Predictedsolarirradiance=LSTM2inputs+LSTM3inputs+LSTM4inputs+LSTM5inputs+LSTM6inputs+LSTM7inputs6.Steps of the proposed CLSTMN algorithm are as follows:Step 1. Collect the solar irradiance and meteorological parameters for sunny and cloudy days.Step 2. Normalize the data.Step 3. Divide the data into training and testing.Step 4. Design the individual LSTM models with various inputs.Step 5. Train the designed LSTM model and predict the solar irradiance on test data.Step 6. Average all LSTM models predict the final solar irradiance.Step 7. Compute the evaluation metric and compare it again with the other baseline prediction models.The proposed CLSTMN model parameters and values are as follows:Inputneurons=2,3,4,5,6,and7.Hiddenneurons=2,4,5,7,9,and11.Outputneuron=1.Optimizer=Adam.Lossfunction=RMSE.Epochs=200.Moreover, the individual model generalization inability can be improved by averaging the various framework-based LSTM prediction models. We improved the accuracy in terms of a reduction in prediction error. The proposed model could manage the model stability and convergence. ## 3.1. Long Short-Term Memory Network In 1997, Hochreiter and Schmidhuber [23] devised a long short-term memory network that can overcome the vanishing gradient issue and handle the long-term dependence. Therefore, LSTM is suitable for time series prediction applications. It is a particular variant of the recurrent neural network. The CEC (constant error carousel) is used to store the long-term dependence. It is a linear unit self-connected recurrently. The LSTM network comprises a cell state and three gates: input, forget, and output. The steps incurred in the LSTM network are as follows:Step 1. Forget gate identifies the irrelevant information from the pastime step and passes it to the cell state.Step 2. With the help of the input gate, it updates the cell state with the new input information.Step 3. The critical information passed to the next hidden state is determined using the output gates.Step 4. Cell state is used to have the knowledge of the past relevant over a long time. ### 3.1.1. 
Mathematical Model of LSTM (1)Forgetgate,Ft=σsigWFut,Ht−1+bF,(2)Inputgate,It=σsigWIut,Ht−1+bI,(3)CellagentCt=σtanWCut,Ht−1+bC(4)Outputgate,Ot=σsigWOut,Ht−1+bO,(5)HiddenstateHt=Ot∘σtanhSt(6)Cellstate.St=Ft∘St−1+It∘Ctwhere∘ is the Hadamard Product, W is weights of the respective gates, Ht−1 is the time stamp t−1 past LSTM block output, ut is the current timestamp input, and b is the bias of the respective gates. ## 3.1.1. Mathematical Model of LSTM (1)Forgetgate,Ft=σsigWFut,Ht−1+bF,(2)Inputgate,It=σsigWIut,Ht−1+bI,(3)CellagentCt=σtanWCut,Ht−1+bC(4)Outputgate,Ot=σsigWOut,Ht−1+bO,(5)HiddenstateHt=Ot∘σtanhSt(6)Cellstate.St=Ft∘St−1+It∘Ctwhere∘ is the Hadamard Product, W is weights of the respective gates, Ht−1 is the time stamp t−1 past LSTM block output, ut is the current timestamp input, and b is the bias of the respective gates. ## 3.2. Combined-Long Short-Term Memory Network Under different inputs, model architecture, and uncertainties, maintaining stable performance is the aim of generalization. The six LSTM models with various inputs and hidden neurons (framework) are used to develop the proposed model. We can overcome the uncertain irregularity present in solar irradiance with the help of atmospheric input features. Using the combinations of the LSTM model can reduce the error values and improve the network stability. The selected inputs to the proposed model are high influenced parameters on solar irradiance. We fix hidden neurons by a trial-and-error approach [24, 25] and the identified optimal hidden neurons for each individual LSTM are used for the development of the proposed CLSTMN structure. Figure 1 shows the proposed CLSTMN framework.Figure 1 The framework of the proposed CLSTMN.The proposed CLSTMN mathematical model is as follows:(7)CLSTMNoutput=1N∑j=1NLSTMjforj=1,2.3⋯,N.Let,N the number of LSTMs. (8)Predictedsolarirradiance=LSTM2inputs+LSTM3inputs+LSTM4inputs+LSTM5inputs+LSTM6inputs+LSTM7inputs6.Steps of the proposed CLSTMN algorithm are as follows:Step 1. Collect the solar irradiance and meteorological parameters for sunny and cloudy days.Step 2. Normalize the data.Step 3. Divide the data into training and testing.Step 4. Design the individual LSTM models with various inputs.Step 5. Train the designed LSTM model and predict the solar irradiance on test data.Step 6. Average all LSTM models predict the final solar irradiance.Step 7. Compute the evaluation metric and compare it again with the other baseline prediction models.The proposed CLSTMN model parameters and values are as follows:Inputneurons=2,3,4,5,6,and7.Hiddenneurons=2,4,5,7,9,and11.Outputneuron=1.Optimizer=Adam.Lossfunction=RMSE.Epochs=200.Moreover, the individual model generalization inability can be improved by averaging the various framework-based LSTM prediction models. We improved the accuracy in terms of a reduction in prediction error. The proposed model could manage the model stability and convergence. ## 4. Experimental Details The proposed solar irradiance prediction approach runs on the MATLAB platform using a hp laptop with AMD Ryzen 5 3550 H processor, 8 GB RAM, 2100 Mhz, and 4 GB NVIDIA GeForce GTX 1650. Achieving the desired prediction accuracy is crucial because of solar irradiance’s randomness characteristics. This paper attempts to predict solar irradiance over a period of a few minutes to 6 hours. Various inputs and different frameworks incurred in LSTM models were used to develop the proposed model. Averaging the six individual LSTM models improved accuracy and reduced error. ### 4.1. 
## 4. Experimental Details

The proposed solar irradiance prediction approach runs on the MATLAB platform using an HP laptop with an AMD Ryzen 5 3550H processor, 8 GB of RAM at 2100 MHz, and a 4 GB NVIDIA GeForce GTX 1650 GPU. Achieving the desired prediction accuracy is crucial because of the random nature of solar irradiance. This paper attempts to predict solar irradiance over horizons ranging from a few minutes to 6 hours. LSTM models with various inputs and frameworks were used to develop the proposed model, and averaging the six individual LSTMs improved accuracy and reduced error.

### 4.1. Dataset and Input Parameter Details

We obtained the sunny day and cloudy day datasets from NOAA. The data were collected at latitude 40.05° N and longitude 88.37° W during 2021. The following parameters are used as inputs to the proposed CLSTMN model:

(1) Solar irradiance in W/m²
(2) Temperature in °C
(3) Wind speed in m/s
(4) Wind direction in degrees
(5) Pressure in mb
(6) Relative humidity in %
(7) Cloud cover in oktas

### 4.2. Normalization

Normalization improves prediction performance by improving training efficiency. The actual inputs are normalized with Min-Max normalization:

$$u_j' = \frac{u_j - u_{\min}}{u_{\max} - u_{\min}}, \quad \text{(9)}$$

where $u_j$ is the actual input value, $u_{\min}$ is the minimum input value, and $u_{\max}$ is the maximum input value.

### 4.3. Training and Testing Datasets

We trained the proposed model on a two-day data sample of 2,800 points and tested it on 6 hours of data (360 samples). Figures 2 and 3 depict the training and testing samples of the sunny day and cloudy day datasets, respectively. We validate the proposed prediction model on both the sunny and cloudy day datasets.

Figure 2: Sunny day dataset training and testing samples.
Figure 3: Cloudy day dataset training and testing samples.

### 4.4. Evaluation Metric

We used RMSE, MAPE, and MSE as the evaluation metrics for the proposed CLSTMN model:

$$\text{RMSE} = \sqrt{\frac{1}{N}\sum_{j=1}^{N}\left(u_j' - u_j\right)^2}, \quad \text{(10)}$$

$$\text{MAPE} = \frac{100}{N}\sum_{j=1}^{N}\frac{\left|u_j' - u_j\right|}{\bar{u}_j}, \quad \text{(11)}$$

$$\text{MSE} = \frac{1}{N}\sum_{j=1}^{N}\left(u_j' - u_j\right)^2, \quad \text{(12)}$$

where $N$ is the total number of data samples, $u_j'$ is the actual output, $\bar{u}_j$ is the average actual output, and $u_j$ is the predicted output. A short sketch of the normalization and the evaluation metrics is given below.
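The following is a minimal NumPy sketch of Eqs. (9)–(12). The function names are illustrative, and the MAPE implementation follows the paper's definition of dividing by the average actual output.

```python
import numpy as np

def min_max_normalize(u):
    """Eq. (9): scale an input series to [0, 1] using its own min and max."""
    return (u - u.min()) / (u.max() - u.min())

def rmse(actual, predicted):
    """Eq. (10): root mean square error."""
    return np.sqrt(np.mean((actual - predicted) ** 2))

def mape(actual, predicted):
    """Eq. (11): percentage error normalized by the mean actual output."""
    return 100.0 / len(actual) * np.sum(np.abs(actual - predicted) / actual.mean())

def mse(actual, predicted):
    """Eq. (12): mean square error."""
    return np.mean((actual - predicted) ** 2)
```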
## 5. Results and Discussion

We report the simulation results of the proposed CLSTMN model for short-term solar irradiance prediction on sunny and cloudy days. The model was tested on the testing dataset (360 samples each for the sunny and cloudy day datasets), and the results are reported in Tables 1 and 2.

Table 1: Proposed CLSTMN-based results on the sunny dataset for different hour-ahead predictions of solar irradiance.

| Hour-ahead prediction | RMSE in W/m² | MAPE in % | MSE in W/m² |
| --- | --- | --- | --- |
| 1 hour ahead | 7.7729×10⁻⁴ | 8.2479×10⁻⁵ | 6.0419×10⁻⁷ |
| 2 hours ahead | 0.0031 | 2.5029×10⁻⁴ | 9.5190×10⁻⁶ |
| 3 hours ahead | 0.0087 | 6.7010×10⁻⁴ | 7.5604×10⁻⁵ |
| 4 hours ahead | 0.0130 | 0.0010 | 1.6916×10⁻⁴ |
| 5 hours ahead | 0.0162 | 0.0015 | 2.6139×10⁻⁴ |
| 6 hours ahead | 0.0157 | 0.0017 | 2.4627×10⁻⁴ |

Table 2: Proposed CLSTMN-based results on the cloudy dataset for different hour-ahead predictions of solar irradiance.

| Hour-ahead prediction | RMSE in W/m² | MAPE in % | MSE in W/m² |
| --- | --- | --- | --- |
| 1 hour ahead | 1.2969×10⁻⁴ | 1.6882×10⁻⁴ | 1.6819×10⁻⁸ |
| 2 hours ahead | 0.0069 | 0.0015 | 4.7559×10⁻⁵ |
| 3 hours ahead | 0.0273 | 0.0042 | 7.4344×10⁻⁴ |
| 4 hours ahead | 0.0328 | 0.0049 | 0.0011 |
| 5 hours ahead | 0.0029 | 0.0041 | 5.2663×10⁻⁴ |
| 6 hours ahead | 0.0176 | 0.0043 | 3.0863×10⁻⁴ |

### 5.1. Short-Term Prediction of Solar Irradiance on Sunny Days

The performance of the proposed CLSTMN model was evaluated on the sunny day dataset. The results for one- to six-hour-ahead short-term solar irradiance prediction are given in Table 1 and Figures 4–21. The predicted solar irradiance for sunny days matches the actual solar irradiance accurately, as noted in Figures 4, 7, 10, 13, 16, and 19. Figures 5, 8, 11, 14, 17, and 20 show that the predictions on sunny days closely follow the actual solar irradiance, so the prediction errors are close to zero. The linear relationship between the predicted and actual solar irradiance is clearly perceived in Figures 6, 9, 12, 15, 18, and 21.

Figures 4–21: For each horizon from one to six hours ahead on the sunny day dataset, the figures show the comparison of predicted and actual solar irradiance, the prediction error versus time, and the relationship between actual and predicted solar irradiance.
The prediction errors for the one-hour-ahead prediction are the lowest (RMSE 7.7729×10⁻⁴, MAPE 8.2479×10⁻⁵, and MSE 6.0419×10⁻⁷), and its predictions are better than those of the other horizons. For the six-hour-ahead prediction on the sunny day dataset, the proposed CLSTMN model attains RMSE 0.0157, MAPE 0.0017, and MSE 2.4627×10⁻⁴. Overall, the proposed CLSTMN model achieves good prediction accuracy on the sunny day dataset for one- to six-hour-ahead solar irradiance prediction.

### 5.2. Short-Term Prediction of Solar Irradiance on Cloudy Days

The performance of the proposed CLSTMN model was further evaluated on the cloudy day dataset; the results for one- to six-hour-ahead short-term solar irradiance prediction are tabulated in Table 2 and shown in Figures 22–39. Figures 22, 25, 28, 31, 34, and 37 show that the predicted solar irradiance for cloudy days matches the actual solar irradiance accurately, so the prediction errors are the lowest, as seen in Figures 23, 26, 29, 32, 35, and 38. The linear relationship between the actual and predicted solar irradiance for the cloudy day dataset is depicted in Figures 24, 27, 30, 33, 36, and 39.

Figures 22–39: For each horizon from one to six hours ahead on the cloudy day dataset, the figures show the comparison of predicted and actual solar irradiance, the prediction error versus time, and the relationship between actual and predicted solar irradiance.
For the cloudy day dataset, the six-hour-ahead prediction of the proposed CLSTMN model gives RMSE 0.0176, MAPE 0.0043, and MSE 3.0863×10⁻⁴. The one-hour-ahead prediction is more precise than the other horizons, with RMSE 1.2969×10⁻⁴, MAPE 1.6882×10⁻⁴, and MSE 1.6819×10⁻⁸. The evaluation on the cloudy day dataset shows that the proposed CLSTMN model achieves good accuracy for short-term solar irradiance prediction (one to six hours ahead).

Sunny days have a higher solar irradiance than cloudy days, and the sunny day predictions of the CLSTMN are comparable to the cloudy day predictions. The proposed CLSTMN model generalizes well to model and input uncertainty and accurately predicts the actual solar irradiance with small evaluation metrics. Based on the analysis of the obtained results, the proposed CLSTMN prediction model achieves improved prediction accuracy and generalization ability on both the sunny and cloudy day datasets. Precise prediction of solar irradiance with the proposed CLSTMN supports effective planning and scheduling of solar energy systems.

### 5.3. Comparative Analysis with the Baseline Model

In addition, a comparative analysis was carried out to assess the proposed predictive model against baseline models. The persistence model and other well-known predictive models (ARIMA, WRF, RNN, k-NN-BPLNN, SVM, MLP, NARX, and LSTM) were used as baselines to verify and compare the performance of the proposed CLSTMN prediction model. The baseline model parameters were kept the same as reported in the respective research papers but were validated on our collected datasets. For the sunny and cloudy day datasets, the proposed CLSTMN model provides consistent results for one- to six-hour-ahead prediction of solar irradiance. Thus, the proposed CLSTMN model can accurately predict solar irradiance that matches the actual solar irradiance on the short-term horizon.

The compared baseline predictive models predicted solar irradiance less accurately over short-term horizons on the considered datasets. This is clearly observed in Table 3 and, for easier comparison, in the 3D column chart depicted in Figure 40. The solar irradiance predicted by the baseline models does not closely follow the actual values in either dataset; hence, their evaluation metrics increase and their accuracy decreases.

Table 3: Comparative analysis of the proposed CLSTMN with the baseline prediction models on the sunny and cloudy datasets (evaluation metric: RMSE in W/m²).

| S. No | Authors | Year | Prediction model | Sunny days | Cloudy days |
| --- | --- | --- | --- | --- | --- |
| 1 | Xiaoyan Xiang et al. | 2021 | Persistence | 48.9464 | 49.1759 |
| 2 | Ferrari, Stefano et al. | 2013 | ARIMA | 3.0143 | 3.1354 |
| 3 | de Araujo, Jose Manuel Soares | 2020 | WRF | 25.4847 | 26.2691 |
| 4 | Mishra, Sakshi & Praveen Palanisamy | 2018 | RNN | 1.4719 | 2.0143 |
| 5 | Kartini, Unit Three & Chao Rong Chen | 2017 | k-NN-BPLNN | 0.9464 | 0.9870 |
| 6 | Kuk Yeol Bae et al. | 2017 | SVM | 0.7997 | 0.8050 |
| 7 | Omaima El Alani et al. | 2019 | MLP | 0.4041 | 0.4357 |
| 8 | Yazeed A. A-Sbou & Khaled M. Alawasa | 2017 | NARX | 0.5268 | 0.5794 |
| 9 | Naylani Halpern-Wight et al. | 2020 | LSTM | 0.2519 | 0.3133 |
| 10 | Manoharan Madhiarasan & Mohamed Louzazni | 2022 | Proposed CLSTMN 1 hour ahead | 7.7729×10⁻⁴ | 1.2969×10⁻⁴ |
| | | | Proposed CLSTMN 2 hours ahead | 0.0031 | 0.0069 |
| | | | Proposed CLSTMN 3 hours ahead | 0.0087 | 0.0273 |
| | | | Proposed CLSTMN 4 hours ahead | 0.0130 | 0.0328 |
| | | | Proposed CLSTMN 5 hours ahead | 0.0162 | 0.0029 |
| | | | Proposed CLSTMN 6 hours ahead | 0.0157 | 0.0176 |

Bold implies the best result.

Figure 40: Comparative analysis of the proposed CLSTMN with the baseline prediction models.

In summary, a highly accurate short-term prediction model is proposed using the combined long short-term memory network, and it adapts easily to variations in climatic conditions and model framework. The solar irradiance values predicted by the proposed model are a good fit to the actual values. The proposed model can handle changes in inputs and model framework; thus, it generalizes well to both datasets and achieves better results than the other compared predictive models.
## 6. Conclusion

The amalgamation of various input- and framework-based individual LSTM models may help increase prediction accuracy and learning ability and make the model suitable for general application. The relationships among the various inputs are handled effectively by the proposed CLSTMN model. The ensemble of six LSTM models with different inputs and frameworks enables the proposed model to capture the dependencies between solar irradiance and the meteorological parameters accurately. Thus, uncertainties in the model framework and inputs are managed effectively, which yields better results for short-term solar irradiance prediction. The learning ability of the proposed model is greatly improved, allowing it to predict solar irradiance 1, 2, 3, 4, 5, and 6 hours ahead precisely.

In addition, performance validation on the sunny day and cloudy day datasets was performed to verify the prediction ability of the proposed CLSTMN model for 1-hour- to 6-hour-ahead prediction. The evaluation on both datasets shows that the proposed model performs well in highly uncertain situations. Thus, the proposed CLSTMN model attains the best prediction results on sunny and cloudy days for 1-hour- to 6-hour-ahead predictions. The comparison with the baseline predictive models and the result analysis prove the effectiveness of the proposed CLSTMN model for short-term solar irradiance prediction on sunny and cloudy days, with the lowest evaluation metrics. The risk of integrating solar energy into the electric grid is reduced through the simple and practical proposed CLSTMN short-term prediction model.

## 7. Proposed Predictive Model Limitations and Future Research

Despite its improved prediction accuracy, the proposed model has a higher computational cost than an individual model.

The future works are as follows:

(i) Extend the applicability of the proposed CLSTMN and further investigate multihorizon-based solar irradiance prediction.
(ii) Use an optimization algorithm to identify the optimal hyperparameters of the LSTM network.
(iii) Develop an FPGA model and apply the model to real-world scenarios.

---

*Source: 1004051-2022-08-16.xml*
# Analysis of Effects of PTEN-Mediated TGF-β/Smad2 Pathway on Osteogenic Differentiation in Osteoporotic Tibial Fracture Rats and Bone Marrow Mesenchymal Stem Cell under Tension

**Authors:** Shiyong Ling; Chen Yan; Kai Huang; Bo Lv; Hua Wang; Xiaoyan Wang; Jun Chen; Jingchuan Sun
**Journal:** Cellular Microbiology (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1004203

---

## Abstract

Purpose. To discuss the effects of the phosphatase and tensin homolog protein (PTEN)-mediated transforming growth factor-β (TGF-β)/Smad homologue 2 (Smad2) pathway on osteogenic differentiation in osteoporotic (OP) tibial fracture rats and in bone marrow mesenchymal stem cells (BMSCs) under tension. Methods. A tibial fracture model was established. The rats were divided into a sham-operated group and a model group, and tibia tissue was collected. Well-grown cultured rat BMSCs were purchased, and a Flexercell in vitro cell mechanics loading device was used to apply tension. The expression of PTEN was detected by qRT-PCR. After the BMSCs were transfected with si-PTEN or oe-PTEN, tension was applied and cell differentiation was detected. The expression of TGF-β/Smad2 protein was detected by Western blot. The formation of calcium nodules in BMSCs was detected by alkaline phosphatase (ALP) staining and alizarin red (AR) staining. Results. The expression of PTEN was higher in the model group and the tension MSC group, and the expression of TGF-β and Smad2 protein was lower. The expression of TGF-β and Smad2 protein in the oe-PTEN group was lower than in the oe-NC group and the control group, whereas that in the si-PTEN group was higher than in the si-NC group and the control group. The results of ALP staining and AR staining also confirmed these findings. Conclusion. The PTEN-mediated TGF-β/Smad2 pathway may play a key role in the osteogenic differentiation of OP tibial fracture rats. Downregulation of PTEN and upregulation of TGF-β/Smad2 signaling can promote the osteogenic differentiation of BMSCs under tension.

---

## Body

## 1. Introduction

Osteoporosis (OP) is a progressive disease of systemic bone metabolism disorder characterized by degradation of bone microstructure and accelerated bone loss, which increases the risk of fracture [1, 2]. Tibial fracture, as the most serious consequence of OP, brings a severe living and economic burden to patients. At present, OP-induced fractures affect more and more people, especially middle-aged and elderly patients, and are receiving increasing attention.

Bone marrow mesenchymal stem cells (BMSCs) can differentiate into osteoblasts and adipocytes in bone marrow [3]. OP impairs the function and differentiation ability of BMSCs in patients with tibial fracture; the balance between osteogenic and adipogenic differentiation is then disturbed, resulting in decreased osteoblast formation and increased adipose tissue formation in bone marrow [4]. Previous studies have shown that BMSCs are among the cells most sensitive to mechanical stress, and mechanical tensile stress can influence the osteogenic differentiation of BMSCs [5]. Mechanical tension is divided into physiological tension and pathological tension, and its effect is closely related to the magnitude, frequency, and duration of the mechanical stimulation.
Low-level tension is not enough to maintain bone formation; appropriate tension provides effective physiological stimulation for the maintenance of bone tissue; when the tension is too large, bone tissue destruction is greatly accelerated, which is pathological distraction tension.

Phosphatase and tensin homolog protein (PTEN) is a factor with both lipid phosphatase and protein phosphatase activities, and it participates in DNA repair, apoptosis, and proliferation [6]. Deletion of the PTEN gene can lead to cancer, nervous system diseases, metabolic diseases, and immune system diseases [7]. Transforming growth factor-β (TGF-β) is one of the most important factors involved in bone remodeling [8]. The Smads family is an important gene family in the TGF-β pathway in vertebrates. Smad homologue 2 (Smad2) is a receptor-activated Smads protein that participates in TGF-β or activin signal transduction [9]. Some experts have found that the TGF-β/Smad2 pathway in BMSCs can promote their proliferation and osteogenic differentiation, regulate late osteogenic differentiation, and participate in collagen secretion and calcium salt deposition [10].

However, at present, there is little research on the relationship between the TGF-β/Smad2 pathway, PTEN, and the osteogenic differentiation of BMSCs under tension. In this study, the authors observed the effects of the PTEN-mediated TGF-β/Smad2 pathway on the osteogenic differentiation of BMSCs from OP tibial fracture rats under tension, in order to provide a theoretical basis for bone tissue engineering research.

## 2. Methods

Eight clean-grade rats aged 4-5 months (260-300 g) were selected, and all rats were fed a normal diet under the same conditions. The rats were randomly divided into two groups according to body weight: a sham-operated group and a model group. In the model group, the rats were anesthetized by intramuscular injection of 20% urethane, and the bilateral ovaries were removed through incisions on both sides of the lumbar spine. In the sham-operated group, the ovaries were exposed in the same way but not removed.

After the OP model was established, the levels of alkaline phosphatase (ALP) and tartrate-resistant acid phosphatase (TRAP) in the blood of the rats were detected with ALP and TRAP kits. Absorbance values were read on a microplate reader, and the corresponding concentrations were obtained from the standard curve. When the serum ALP level exceeded 147.25 ± 56.29 IU/mL and the TRAP level exceeded 24.50 ± 1.16 IU/mL, the rat OP model was considered established, and the tibial fracture model was then created. qRT-PCR was used to detect the expression of PTEN, and Western blot was used to detect the expression of TGF-β and Smad2 protein.

Well-grown cultured rat BMSCs were purchased; the cells passed tests for bacteria, fungi, mycoplasma, and endotoxin as well as a differentiation-ability test. After digestion with 0.25% trypsin solution, the cells were seeded into 6-well culture plates at a density of 1 × 10⁵/cm². The cells were cultured for 48 h until confluency reached 80%-90%. Then, a Flexercell in vitro cell mechanics loading device was used to apply tensile force at a frequency of 1 Hz and a deformation rate of 18%, twice a day for 30 min, to induce osteogenic differentiation of the BMSCs, and the cells were harvested on day 5 of the experiment.
qRT-PCR was used to detect the expression of PTEN in the osteogenically differentiated cells. After the BMSCs were transfected with si-PTEN or oe-PTEN, tension was applied and cell differentiation was detected. The expression of TGF-β/Smad2 protein was detected by Western blot.

### 2.1. qRT-PCR Analysis

Total RNA was extracted by the Trizol method and reverse transcribed into complementary DNA, and qRT-PCR detection was carried out according to the reagent instructions. GAPDH was used as the internal reference, and each sample was assayed in triplicate wells. The Ct value of each group was obtained, and the relative mRNA expression was calculated by the 2^(−ΔΔCt) method, as illustrated in the sketch below.
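The following is a minimal sketch of the 2^(−ΔΔCt) calculation described above. The function name and the triplicate Ct values are hypothetical and shown only for illustration; they are not data from the study.

```python
import numpy as np

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """2^(-ΔΔCt): target gene expression relative to the control group,
    normalized to the internal reference (GAPDH)."""
    delta_ct_sample = ct_target - ct_ref              # ΔCt of the sample group
    delta_ct_control = ct_target_ctrl - ct_ref_ctrl   # ΔCt of the control group
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2.0 ** (-delta_delta_ct)

# Hypothetical triplicate Ct values for illustration only
pten_model = np.array([24.1, 24.3, 24.0]); gapdh_model = np.array([18.2, 18.1, 18.3])
pten_sham  = np.array([25.6, 25.8, 25.7]); gapdh_sham  = np.array([18.0, 18.2, 18.1])

fold_change = relative_expression(pten_model.mean(), gapdh_model.mean(),
                                  pten_sham.mean(), gapdh_sham.mean())
```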
### 2.2. Western Blot Analysis

Total protein was quantified by the BCA method to detect TGF-β and Smad2 protein in bone cells. SDS-PAGE gel electrophoresis was performed, the membrane was transferred for 1 h in an ice bath, BSA blocking was carried out at room temperature for 2 h, the primary antibody was incubated overnight at 4 °C, and the membrane was washed three times for 10 min each. The secondary antibody was then incubated at room temperature for 1 h, followed by three 10-min washes. The electrochemiluminescence substrate chromogenic solution was added, and images were taken with a gel imager.

ALP staining and alizarin red (AR) staining were used to detect the formation of calcium nodules in BMSCs. ALP staining followed the kit instructions: a few drops of No. 1 solution were added to the cell slide, which was fixed at room temperature for 1 min, rinsed for 2 min, and dried; a few drops of action solution were added and incubated in a wet box at 37 °C for 2 h, then rinsed for 2 min; a few drops of No. 5 solution were added, counterstained for 5 min, rinsed for 2 min, and dried, and images were taken with a microscope. For AR staining, the cells were transferred to a glass slide, fixed with 95% ethanol for 10 min, and washed; they were then incubated in 0.1% alizarin red staining solution at room temperature for 30 min, rinsed with distilled water, dried, sealed, and photographed with a microscope.

### 2.3. Statistical Methods

Data were analyzed with SPSS 22.0 statistical software and expressed as mean ± standard deviation; the t-test was used for comparisons, and P < 0.05 was considered significant. A sketch of this comparison is given below.
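As a minimal illustration of the group comparison described above, the following sketch applies an independent two-sample t-test with SciPy; the expression values are hypothetical and serve only to show the calculation, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical relative PTEN expression values (2^-ΔΔCt) for illustration only
sham  = np.array([1.00, 0.92, 1.08, 0.97])
model = np.array([1.85, 2.10, 1.96, 2.25])

t_stat, p_value = stats.ttest_ind(model, sham)   # independent two-sample t-test
significant = p_value < 0.05                     # significance threshold used in the paper
```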
## 3. Results

### 3.1. Establishment of OP Model Rats

The serum ALP level in the model group was higher than in the sham-operated group (P < 0.05) (see Figure 1).

Figure 1: Comparison of serum ALP levels between the two groups; compared with the sham-operated group, *P < 0.05.

### 3.2. Expression of PTEN and TGF-β/Smad2 Protein in the Tibia of OP Model Rats

The expression of PTEN in the model group was higher than in the sham-operated group (P < 0.05). The expression of TGF-β and Smad2 protein in the model group was lower than in the sham-operated group (see Figures 2 and 3).

Figure 2: Comparison of PTEN expression between the two groups; compared with the sham-operated group, *P < 0.05.
Figure 3: Expression of TGF-β/Smad2 protein in the two groups.

### 3.3. Expression of PTEN and TGF-β/Smad2 in Osteogenic Differentiation of BMSCs under Tension

The expression of PTEN in the tension MSC group was higher than in the MSC group (P < 0.05). The expression of TGF-β and Smad2 protein in the tension MSC group was lower than in the MSC group (see Figures 4 and 5).

Figure 4: Comparison of PTEN expression between the two groups; compared with the MSC group, *P < 0.05.
Figure 5: Expression of TGF-β/Smad2 protein in the two groups.

### 3.4. Effect of Overexpression of PTEN on Osteogenic Differentiation and TGF-β/Smad2 Protein of BMSCs under Tension

In ALP staining, compared with the control group and the oe-NC group, the osteogenic differentiation ability of BMSCs and ALP activity in the oe-PTEN group were obviously weaker (see Figure 6). In AR staining, compared with the control group and the oe-NC group, the oe-PTEN group had a lower degree of mineralization, fewer calcium nodules, and lighter cell staining (see Figure 7). The expression of TGF-β and Smad2 protein in the oe-PTEN group was lower than in the oe-NC group and the control group (see Figure 8).

Figure 6: ALP staining results.
Figure 7: AR staining results.
Figure 8: Western blot results.

### 3.5. Effects of Interfering with PTEN on Osteogenic Differentiation and TGF-β/Smad2 Protein of BMSCs under Tension

In ALP staining, compared with the control group and the si-NC group, the osteogenic differentiation ability of BMSCs and ALP activity in the si-PTEN group were obviously stronger (see Figure 9). In AR staining, compared with the control group and the si-NC group, the si-PTEN group had a higher degree of mineralization, more calcium nodules, and deeper cell staining (see Figure 10). The expression of TGF-β and Smad2 protein in the si-PTEN group was higher than in the si-NC group and the control group (see Figure 11).

Figure 9: ALP staining results.
Figure 10: AR staining results.
Figure 11: Western blot results.

## 4. Discussion

OP is the primary cause of fractures in the elderly in China, and the pathogenesis of tibial fractures caused by OP has received much clinical attention [11]. Bone is composed of bone matrix and the minerals deposited in it; it supports and protects the body and stores calcium and phosphorus. Bone can sense and adapt to mechanical tension, maintaining a balance between bone resorption and bone remodeling [12, 13].

TGF-β is a polypeptide that controls bone density by regulating deposition by osteoblasts and resorption by osteoclasts and thereby helps maintain bone homeostasis [14]. TGF-β can not only increase the directional migration ability of osteoblasts but also attract osteoblast progenitor cells to sites of bone reconstruction. It can also reduce bone turnover, promote the formation of bone and cartilage, and accelerate osteoclast apoptosis [15]. TGF-β family proteins act throughout the whole process of osteogenic differentiation.
Tu's team found that a reduced TGF-β level may be one of the pathogenic factors of OP and that different doses of TGF-β can regulate the proliferation and differentiation of osteoclasts through different pathways, affecting bone resorption and destruction and thus regulating bone turnover [16]. In the early stage, TGF-β regulates the osteogenic differentiation of BMSCs and promotes the proliferation of osteoblasts; in the late stage, TGF-β regulates collagen secretion and calcium salt deposition by osteoblasts [17]. In addition, Lin's team found that Smad2 protein is abundantly expressed in normal bone tissue, is widely expressed in the epiphysis, and is mainly expressed in osteoblasts on the surface of the bone matrix and around trabecular bone [18]. TGF-β/Smad2 signal transduction is a rather complicated process from the cell membrane to the nucleus; the main steps include an activation-binding stage, a recombination-separation stage, and a transfer-action stage, involving many genes and proteins. During the aggregation and differentiation of BMSCs into osteoblasts, and during the proliferation and differentiation of osteoblasts themselves, Smads bind DNA directly as transcription factors or interact with other transcription factors and activating factors to induce the transcriptional response to TGF-β signaling. TGF-β affects the expression of its downstream osteogenesis-specific transcription factor Runx2 through the classical Smad signaling pathway and regulates the expression of osteogenesis-related genes [19]. Li's team showed that silencing endogenous Smad2 expression in BMSCs can enhance bone formation but inhibit adipogenesis and that miR-10b promotes osteogenic differentiation and bone formation through the TGF-β pathway [20]. Yuan's team reported that after ovariectomy the expression of TGF-β1 is downregulated and the expression of Smad2 protein is also significantly reduced, and that Smad2 protein and the TGF-β1 signaling it mediates may play an important role in the formation of postmenopausal OP [21].

PTEN is associated with the differentiation of osteoblasts and osteoclasts. Shen's team found that upregulation of PTEN facilitated the osteogenesis of dental pulp mesenchymal stem cells [22]. Cai's team showed that the addition of a PTEN inhibitor partially blocked the effect of oxaloacetic acid on the differentiation of mouse embryonic osteoblast precursor cells, myoblasts, and osteoblasts, indicating that PTEN can inhibit the differentiation of osteoclasts [23]. Qi's team found that a tensile force of 2,000 με (microstrain) can promote the osteogenic differentiation of BMSCs [24]. Koike's team found that a 0.8% strain rate can promote the osteogenic differentiation of ST2 cells, while 10% and 15% strain rates inhibit it [25]. We found that the expression of PTEN was higher in the model group and the tension MSC group, and the expression of TGF-β and Smad2 protein was lower. These results suggest that PTEN and the TGF-β/Smad2 pathway may play a key role in the osteogenic differentiation of BMSCs in OP tibial fracture rats, while pathological tension inhibits the differentiation of MSCs into mature osteoblasts and reduces the expression of osteogenesis-related genes downstream of the stress stimulation signal.

In this study, we interfered with PTEN and then induced osteogenic differentiation. The results showed that the expression of TGF-β and Smad2 protein increased in the si-PTEN group but decreased after overexpression of PTEN.
This indicates that PTEN plays an important role in the osteogenic differentiation of rat BMSCs stimulated by tension and that PTEN may regulate the osteogenic differentiation of BMSCs by mediating the TGF-β/Smad2 pathway. Downregulation of PTEN upregulates the expression of TGF-β and Smad2 protein, and interfering with PTEN promotes the osteogenic differentiation of BMSCs. Meanwhile, the ALP and AR staining results also confirmed that interfering with PTEN can promote the osteogenic differentiation of cells under tension, but the specific mechanism remains to be explored further.

## 5. Conclusion

In summary, the PTEN-mediated TGF-β/Smad2 pathway may play a key role in the osteogenic differentiation of OP tibial fracture rats. Downregulation of PTEN and upregulation of TGF-β/Smad2 signaling can promote the osteogenic differentiation of BMSCs under tension, which may serve as a target for bone tissue research.

---

*Source: 1004203-2022-04-29.xml*
--- ## Abstract Purpose. To discuss effects of phosphatase and tensin homolog protein (PTEN)-mediated transforming growth factor-β (TGF-β)/Smad homologue 2 (Smad2) pathway on osteogenic differentiation in osteoporotic (OP) tibial fracture rats and bone marrow mesenchymal stem cell (BMSC) under tension. Methods. A tibial fracture model was established. The rats were divided into sham-operated group and model group, and tibia tissue was collected. Purchase well-grown cultured rat BMSC, and use the Flexercell in vitro cell mechanics loading device to apply tension. The expression of PTEN was detected by qRT-PCR. After the BMSCs were transfected with si-PTEN and oe-PTEN, the force was applied to detect cell differentiation. The expression of TGF-β/Smad2 protein was detected by Western blot. The formation of calcium nodules in BMSC was detected by alkaline phosphatase (ALP) staining and alizarin red (AR) staining. Results. The expression of PTEN was higher in the model group and tension MSC group, and the expression of TGF-β and Smad2 protein was lower. The expression of TGF-β and Smad2 protein in oe-PTEN group was lower than the oe-NC group and control group. The expression of TGF-β and Smad2 protein in si-PTEN group was higher than the si-NC group and control group. The results of ALP staining and AR staining also confirmed the above results. Conclusion. PTEN-mediated TGF-β/Smad2 pathway may play a key role in the osteogenic differentiation of OP tibial fracture rats. Downregulation of PTEN and upregulation of TGF-β/Smad2 signal can promote the osteogenic differentiation of BMSC under tension. --- ## Body ## 1. Introduction Osteoporosis (OP) is a progressive disease of systemic bone metabolism disorder, which is characterized by degradation of bone microstructure, accelerated bone loss, and destruction of bone microstructure, and will increase the risk of fracture [1, 2]. Tibial fracture, as the most serious consequence of OP, will bring severe living burden and economic burden to patients. At present, OP-induced fractures are affecting more and more people, especially middle-aged and elderly patients, and people pay more and more attention to them.Bone marrow mesenchymal stem cell (BMSC) can differentiate into osteoblasts and adipose cells can in bone marrow [3]. OP leads to the impairment of the function and differentiation ability of BMSC in patients with tibial fracture, and then the imbalance between osteogenic differentiation and adipogenic differentiation occurs, resulting in the decrease of osteoblast formation in bone marrow and the increase of adipose tissue formation [4]. Previous studies have shown that BMSC is one of the most sensitive cells to mechanical stress, and mechanical tensile stress can have a certain influence on the osteogenic differentiation of BMSC [5]. Mechanical tension is divided into physiological tension and pathological tension, and its regulation is closely related to the size, frequency, and duration of mechanical stimulation. Low-level tension is not enough to maintain bone formation; proper tension can provide effective physiological stimulation for the maintenance of bone tissue; when the tension is too large, the speed of bone tissue destruction is greatly accelerated, which is pathological distraction tension.Phosphatase and tensin homolog protein (PTEN) is a factor with the activities of lipid phosphatase and protein phosphatase, it participates in the process of cell DNA repair, apoptosis and proliferation [6]. 
The deletion of the PTEN gene will lead to cancer, nervous system diseases, metabolic diseases, and immune system diseases [7]. Transforming growth factor-β (TGF-β) is one of the most important factors involved in process of bone remodeling [8]. Smads family is an important new gene family in TGF-β pathway in vertebrates. Smad homologue 2 (Smad2) is a receptor-activated Smads protein, which participates in TGF-β or activin signal transduction [9]. Some experts found that TGF-β/Smad2 pathway of BMSC can promote its proliferation and osteogenic differentiation, regulate the late osteogenic differentiation, and participate in collagen secretion and calcium salt deposition [10].However, at present, there are few research on the relationship between TGF-β/Smad2 pathway, PTEN, and osteogenic differentiation of BMSC under tension. In this study, the author observed the effects of PTEN-mediated TGF-β/Smad2 pathway on the osteogenic differentiation of BMSC in OP tibial fracture rats under tension, in order to provide theoretical basis for bone tissue engineering research. ## 2. Methods 8 clean grade rats aged 4-5 months (260-300 g) were selected, and all rats were fed with normal diet under the same conditions. The experimental rats were randomly divided into two groups according to body weight: sham-operated group and model group. In the model group, the rats were anesthetized by intramuscular injection of 20% urethane, and the bilateral ovaries of the rats were removed from both sides of the lumbar spine of the rats. In the sham-operated group, the ovaries were exposed in the same way, but the ovaries were not removed.OP model was established, and the levels of alkaline phosphatase (ALP) and tartrate-resistant acid phosphatase (TRAP) in the blood of rats were detected by ALP kit and TRAP kit. The absorbance value can be read on the microplate reader, and the corresponding concentration can be converted from the standard curve according to the absorbance value. When the serum ALP level exceeds147.25±56.29IU/mL and the TRAP level exceeds 24.50±1.16IU/mL, the OP model of rats was qualified, and the tibia fracture model was established. qRT-PCR method was used to detect the expression of PTEN, and Western blot method was used to detect the expression of TGF-β protein and Smad2 protein.The well-grown cultured rat BMSC were purchased; the cells passed the test for bacteria, fungi, mycoplasma, and endotoxin and passed the test of differentiation ability. After being digested with 0.25% trypsin solution, the cells were inoculated into 6-well culture plates at a density of l ×105/cm2. The cells were cultured for 48 h and waited for the cell confluency to reach 80%-90%. Then, Flexercell in vitro cell mechanics loading device was used; the tensile force with a frequency of 1 Hz and a deformation rate of 18% were applied, twice a day for 30 min; and BMSC was harvested on the 5th day of experiment to induce the osteogenic differentiation of BMSC. qRT-PCR was used to detect the expression of PTEN in osteoblast differentiated cells. After BMSC were transfected with si-PTEN and oe-PTEN, the tension was applied, and cell differentiation was detected. The expression of TGF-β/Smad2 protein was detected by Western blot. ### 2.1. qRT-PCR Analysis Total RNA was extracted by Trizol method, reverse transcribed into complementary DNA, and qRT-PCR detection was carried out with reference to reagent instructions. The internal reference was GAPDH, and every third compound hole was a sample. 
The Ct value of each group was obtained, and the relative mRNA expression was calculated by 2−ΔΔCt. ### 2.2. Western Blot Analysis The total protein BCA method was used to quantify the protein and detect TGF-β and Smad2 protein in bone cells. SDS-PAGE gel electrophoresis was performed, membrane was transferred for 1 h in ice bath state, BSA was blocked at room temperature for 2 h, primary antibody was incubated overnight at 4 °C, and membrane was washed three times for 10 min each time. Then, incubate the second antibody at room temperature for 1 h, and wash for 3 times, each time for 10 min. The electrochemiluminescence substrate chromogenic solution was added, and the image was taken by gel imager.ALP staining and alizarin red (AR) staining was used to detect the formation of calcium nodules in BMSC. In ALP staining, follow the instructions of the kit. A few drops of No.1 solution were added to the cell slide, fixed at room temperature for 1 min, rinsed for 2 min, and dried. A few drops of action solution were added, incubated in a wet box at 37 °C for 2 h, and rinsed for 2 min. A few drops of No.5 solution were added, counterstained for 5 min, rinsed for 2 min, and dried, and take images with a microscope. In AR staining, the cells were transferred to the glass slide, fixed with 95% ethanol for 10 min, and washed. Then, it was incubated in 0.1% alizarin red staining solution at room temperature for 30 min. Rinse with distilled water, dry, seal, and take pictures with microscope. ### 2.3. Statistical Methods With SPSS 22.0 statistical software, the data was expressed as mean ± standard deviation,t test was used for comparison, and P<0.05 was significant. ## 2.1. qRT-PCR Analysis Total RNA was extracted by Trizol method, reverse transcribed into complementary DNA, and qRT-PCR detection was carried out with reference to reagent instructions. The internal reference was GAPDH, and every third compound hole was a sample. The Ct value of each group was obtained, and the relative mRNA expression was calculated by 2−ΔΔCt. ## 2.2. Western Blot Analysis The total protein BCA method was used to quantify the protein and detect TGF-β and Smad2 protein in bone cells. SDS-PAGE gel electrophoresis was performed, membrane was transferred for 1 h in ice bath state, BSA was blocked at room temperature for 2 h, primary antibody was incubated overnight at 4 °C, and membrane was washed three times for 10 min each time. Then, incubate the second antibody at room temperature for 1 h, and wash for 3 times, each time for 10 min. The electrochemiluminescence substrate chromogenic solution was added, and the image was taken by gel imager.ALP staining and alizarin red (AR) staining was used to detect the formation of calcium nodules in BMSC. In ALP staining, follow the instructions of the kit. A few drops of No.1 solution were added to the cell slide, fixed at room temperature for 1 min, rinsed for 2 min, and dried. A few drops of action solution were added, incubated in a wet box at 37 °C for 2 h, and rinsed for 2 min. A few drops of No.5 solution were added, counterstained for 5 min, rinsed for 2 min, and dried, and take images with a microscope. In AR staining, the cells were transferred to the glass slide, fixed with 95% ethanol for 10 min, and washed. Then, it was incubated in 0.1% alizarin red staining solution at room temperature for 30 min. Rinse with distilled water, dry, seal, and take pictures with microscope. ## 2.3. 
## 3. Results

### 3.1. Establishment of OP Model Rats
Serum ALP level in the model group was higher than in the sham-operated group (P<0.05) (see Figure 1).
Figure 1: Comparison of serum ALP levels between the two groups; compared to the sham-operated group, ∗P<0.05.

### 3.2. Expression of PTEN and TGF-β/Smad2 Protein in Tibia of OP Model Rats
The expression of PTEN in the model group was higher than in the sham-operated group (P<0.05). The expression of TGF-β and Smad2 protein in the model group was lower than in the sham-operated group (see Figures 2 and 3).
Figure 2: Comparison of PTEN expression between the two groups; compared to the sham-operated group, ∗P<0.05.
Figure 3: Expression of TGF-β/Smad2 protein in the two groups.

### 3.3. Expression of PTEN and TGF-β/Smad2 in Osteogenic Differentiation of BMSC under Tension
The expression of PTEN in the tension MSC group was higher than in the MSC group (P<0.05). The expression of TGF-β and Smad2 protein in the tension MSC group was lower than in the MSC group (see Figures 4 and 5).
Figure 4: Comparison of PTEN expression between the two groups; compared to the MSC group, ∗P<0.05.
Figure 5: Expression of TGF-β/Smad2 protein in the two groups.

### 3.4. Effect of Overexpression of PTEN on Osteogenic Differentiation and TGF-β/Smad2 Protein of BMSC under Tension
In ALP staining, compared with the control group and oe-NC group, the osteogenic differentiation ability of BMSC and ALP activity in the oe-PTEN group were obviously weaker (see Figure 6). In AR staining, compared with the control group and oe-NC group, the oe-PTEN group showed a lower mineralization degree, fewer calcium nodules, and lighter cell staining (see Figure 7). The expression of TGF-β and Smad2 protein in the oe-PTEN group was lower than in the oe-NC group and control group (see Figure 8).
Figure 6: ALP staining results.
Figure 7: AR staining results.
Figure 8: Western blot results.

### 3.5. Effects of Interfering PTEN on Osteogenic Differentiation and TGF-β/Smad2 Protein of BMSC under Tension
In ALP staining, compared with the control group and si-NC group, the osteogenic differentiation ability of BMSC and ALP activity in the si-PTEN group were obviously stronger (see Figure 9). In AR staining, compared with the control group and si-NC group, the si-PTEN group showed a higher mineralization degree, more calcium nodules, and deeper cell staining (see Figure 10). The expression of TGF-β and Smad2 protein in the si-PTEN group was higher than in the si-NC group and control group (see Figure 11).
Figure 9: ALP staining results.
Figure 10: AR staining results.
Figure 11: Western blot results.
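For readers reproducing the group comparisons above (reported as mean ± standard deviation and tested with a t test at P<0.05), a small sketch of one way to perform the same comparison in Python is shown below; the serum ALP values are invented placeholders rather than the study's measurements.

```python
# Minimal sketch with hypothetical data: two-group comparison reported as
# mean ± SD and tested with an independent-samples t test at P < 0.05.
import statistics
from scipy import stats

# Placeholder serum ALP values (IU/mL); not the study's data
sham = [92.4, 101.7, 88.9, 95.2]
model = [168.3, 152.9, 175.4, 160.1]

t_stat, p_value = stats.ttest_ind(model, sham)

for name, group in (("sham", sham), ("model", model)):
    print(f"{name}: {statistics.mean(group):.1f} ± {statistics.stdev(group):.1f} IU/mL")
print(f"t = {t_stat:.2f}, P = {p_value:.4f}, significant = {p_value < 0.05}")
```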
## 4. Discussion

OP is the primary cause of fractures in the elderly in China, and the pathogenesis of tibial fractures caused by OP has received considerable clinical attention [11]. Bone is composed of bone matrix and the minerals deposited in it; it supports and protects the body and stores calcium and phosphorus for the entire organism. Bone can sense and adapt to mechanical tension, maintaining a balance between bone resorption and bone remodeling [12, 13]. TGF-β is a polypeptide that controls bone density by regulating the deposition activity of osteoblasts and the resorption activity of osteoclasts, thereby maintaining bone homeostasis [14]. TGF-β can not only increase the directional migration ability of osteoblasts but also attract the positioning and movement of osteoblast progenitor cells during bone reconstruction. It can also reduce bone turnover, promote the formation of bone and cartilage, and accelerate osteoclast apoptosis [15]. TGF-β family proteins act throughout the process of osteogenic differentiation. Tu's team found that a reduced TGF-β level may be one of the pathogenic factors of OP and that different doses of TGF-β can regulate the proliferation and differentiation of osteoclasts through different pathways, affecting bone absorption and destruction and thus regulating bone turnover [16]. In the early stage, TGF-β regulates the osteogenic differentiation of BMSC and promotes the proliferation of osteoblasts; in the late stage, TGF-β regulates the collagen secretion and calcium salt deposition of osteoblasts [17].
In addition, Lin’s team found that there is abundant expression of Smad2 protein in normal bone tissue, it is widely expressed in the epiphysis, and it is mainly expressed in osteoblasts on the surface of bone matrix and around trabecular bone [18]. TGF-β/Smad2 signal transduction is a rather complicated process from cell membrane to nucleus; the main steps include activation binding stage, recombination separation stage, and transfer action stage, involving many genes and proteins. In the process of BMSC aggregation differentiation to osteoblasts and osteoblasts’ own proliferation and differentiation, Smads directly combined with DNA as transcription factor or interacted with other transcription factors and activating factors to induce the transcription response to TGF-β signal. TGF-β affects the expression of its downstream osteogenic specific transcription factor Runx2 through the classical Smad signal pathway and regulated the expression of osteogenic related genes [19]. Li’s team research shows that silencing endogenous Smad2 expression in BMSC can enhance bone formation, but inhibit adipogenesis, and miR-10b promotes osteogenic differentiation and bone formation through TGF-β pathway [20]. Yuan’s team believes that after ovariectomy, the expression of TGF-β1 is down-regulated and the expression of Smad2 protein is also significantly reduced and Smad2 protein and its mediated TGF-β1 may play an important role in the formation of postmenopausal OP [21].PTEN is associated with the differentiation of osteoblasts and osteoclasts. Shen’s team found that the upregulation of PTEN facilitated the osteogenesis of dental pulp mesenchymal stem cells [22]. Cai’s team showed that the addition of PTEN inhibitor partially blocked the process of oxaloacetic acid affecting the differentiation of mouse embryonic osteoblast precursor cells, myoblasts and osteoblasts, indicating that PTEN can inhibit the differentiation of osteoclasts [23]. Qi’s team found that the tensile force of 2 000 μ strain can promote the osteogenic differentiation of BMSC [24]. Koike’s team found that 0.8% strain rate can promote the osteogenic differentiation of ST2 cells, while 10% and 15% strain rate can inhibit the osteogenic differentiation of ST2 cells [25]. We found that the expression of PTEN was higher in the model group and tension MSC group, and the expression of TGF-β and Smad2 protein was lower. The results showed that PTEN and TGF-β/Smad2 pathways might play a key role in the osteogenic differentiation of BMSC in OP tibial fracture rats, while pathological tension inhibited the differentiation of MSC cells into mature osteoblasts and reduced the expression of osteogenic related genes under the conduction of stress stimulation signals.In this study, we interfered with PTEN and then gave osteogenic differentiation. The results showed that the expression of TGF-β and Smad2 protein increased in si-PTEN group, but decreased after overexpression of PTEN. This indicates that PTEN plays an important role in the osteogenic differentiation of rat BMSC stimulated by tension and PTEN may regulate the osteogenic differentiation of BMSC by mediating TGF-β/Smad2 pathway. Downregulation of PTEN can upregulate the expression of TGF-β and Smad2 protein and interfere with PTEN to promote osteogenic differentiation of BMSC. 
Meanwhile, the ALP staining results and AR staining results also confirmed that interfering PTEN can promote the osteogenic differentiation of cells under tension, but the specific mechanism remains to be further explored. ## 5. Conclusion To sum up, PTEN-mediated TGF-β/Smad2 pathway may play a key role in the osteogenic differentiation of OP tibial fracture rats. Downregulation of PTEN and upregulation of TGF-β/Smad2 signal can promote the osteogenic differentiation of BMSC under tension, which can be used as a target for bone tissue research. --- *Source: 1004203-2022-04-29.xml*
2022
# Study on the Innovative Development of Digital Media Art in the Context of Artificial Intelligence **Authors:** Chaomiao Chen **Journal:** Computational Intelligence and Neuroscience (2022) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2022/1004204 --- ## Abstract With the rapid development of modern science and technology, the speed of digital media to disseminate information is also accelerating, and the forms of communication become more and more diversified. In order to make digital media art better innovate and develop, designers should actively explore, so that art under digital media has the characteristics of intelligence, networking, and content diversification, so that it can be better applied in modern digital media communication and let visual art reach new development heights in the context of digital media. The current rapid development of artificial intelligence technology is being gradually applied to all fields of society. From the current point of view, China’s digital media art will inevitably remain in the early development stage for a long time to face many difficulties and problems. In the context of the era of artificial intelligence, there will also be a large number of art and design talents for continuous research and exploration, and the emergence of these talents will certainly prompt the further development and enhancement of digital media art and artificial intelligence technology. In this process, it is especially worth noting that China’s excellent cultural connotations should not be abandoned. Committed to digging more beneficial digital media art elements from traditional culture to make digital media art always develop in a more meaningful way. This paper takes the artificial intelligence era as the background, discusses the core of the development of the mutual integration of artificial intelligence technology and digital media art, analyzes the current development status of digital media art and technology, as well as the innovative development direction and future trends, proposes a digital media art design algorithm based on the convolutional neural network, and finally proves the effectiveness of the method in the relevant data set. --- ## Body ## 1. Introduction With the increasing development of current science and technology and social economy, the art of combining digital and multimedia has gradually developed and matured, while gradually replacing traditional art methods and becoming a mainstream art. At present, it is necessary to accelerate the pace of development, reform, and innovation for the concept of art, and to fully display the characteristics of the times of the art of combining digital and multimedia. First, the need for innovation in the art of combining digital and multimedia concept, the art of combining digital and multimedia in the actual development process is still not optimistic, for example, the process of product design is seriously lacking artistry, often just applying the template to carry out design work, which is conducive to people to change the traditional cognition of the art of combining digital and multimedia [1–3]. 
The art of combining digital and multimedia is a comprehensive discipline, which involves many aspects of content and knowledge; however, the art of combining digital and multimedia has not yet achieved interdisciplinary communication, but is relatively rigid in the level of artistic expression; for example, shooting a certain short film not only need to have a camera but also need to create, tuning, rendering, color mixing, etc., which involves relatively cumbersome content, so, at present, the art of combining digital and multimedia development among the change of mindset is particularly urgent. The innovative development of digital media art in the context of artificial intelligence is shown in Figure 1.Figure 1 Schematic diagram of the innovative development of digital media art in the context of artificial intelligence.To achieve further innovation and reform of the concept of the art of combining digital and multimedia, the original traditional artistic thinking and design methods should be abandoned, and the art of combining digital and multimedia with cultural connotations and contemporary values should be created [4–6]. However, thought itself has a significant dynamic character, but is also particularly susceptible to the internal and external environment, at the same time, human epiphany, association, and intuition usually have a certain influence on the thought, in order to completely out of the traditional art thought on the human fetters, need to combine with the development of the times, to cognitive and understand the new works, feel the vitality of the times, should also experience life, should only in this way can we achieve change and innovation in the art of combining digital and multimedia. In the present, artistic expression includes many types, for example, transplantation, analogy, and imagination. If traditional ideas are used in the current process of the art of combining digital and multimedia development, it is difficult to effectively meet people’s pursuit of art and requires a combination of basic artistic expression methods to promote the enrichment of the art of combining digital and multimedia connotations. First, imagination is not unrealistic but should be based on the premise of rich knowledge through the brain to effectively create and process the process. For example, if a movie is imaginative and out of touch with reality, it is unlikely to have such a great impact [7]. Although the content described in the movie is far from reality, it is still created with reality as the basis. Secondly, an analogy is mainly from the nature between two similar things, inferring things in other aspects of different points, and the same point, decision-making, by using the analogy, can often get good results. Finally, transplantation is an excellent creative technique that usually facilitates the innovation of ideas. Other techniques that are similar to transplantation include transformation, which is literally the conversion of things to each other, including conceptual and role shifts. The main content is that life is art. Nowadays, people’s material living standard has improved significantly, and the understanding of aesthetics has also undergone a great change, so art is not only exclusive to the privileged class, but everyone can create art and appreciate art, which is the test of life. Third, the expression of the art of combining digital and multimedia concept innovation is first of all the content. 
According to the traditional artworks for comparison and analysis, we can fully understand that the content of early artworks is relatively single. With the increasing development of the times and economy, and the mutual integration of Eastern and Western cultures, people can see more content with unique artistic characteristics. The second is the method of creation. The rapid development of science and technology, as well as the increasing maturity of computer technology, has enabled the creation of digital media with more diverse characteristics. For example, Arlen creation software is a typical representative. Creators can practice their artistic ideas through Arlen creative software. In addition, there are still a large number of the art of combining digital and multimediatizes experimenting with other methods of art creation [8].The world has entered the era of digital development, and the fusion of artificial intelligence technology and the art of combining digital and multimedia is rapidly reconstructing and overturning people’s perceptions of the art and design field, and new means of design and artistic expression have been created. According to the “2017 Emerging Technology Maturity Curve Report” released by a U.S. consulting firm, artificial intelligence technology had a share of over 50% of the 33 new technologies at that time [9–12]. In these subsequent years, AI technology has achieved a comprehensive and widespread application. Nowadays, AI technology is compatible with intelligent systems such as environment perception, memory storage, thinking and reasoning, and learning ability possessed by human intelligence. And in the field of design, AI-aided design has made certain achievements, such as the artificial intelligence product Laban launched by Alibaba in the Double Eleven, which designed 4 × 108 advertising banners by itself, witnessing the powerful power of art and technology collision. However, in the process of artificial intelligence replacing the basic design of artwork, people also perceive the lack of judgment and understanding of aesthetics by intelligent machines, and the beauty of art cannot be realized by computer technology alone. At this point, art designers became “trainers,” they need to cultivate the computer’s sense of beauty so that the computer can design and learn to acquire a kind of human-based aesthetics and wisdom, so as to form their own thinking and insights on design aesthetics, and then feedback to the works involved. The optimists believe that the future of the art of combining digital and multimedia has unlimited possibilities, while the pessimists believe that it will be the end of the road for art designers. Along with the accelerated pace of global digital development, Chinese digital media design majors should take the initiative to accept new knowledge and technology in education, objectively and rationally view the opportunities and challenges brought by artificial intelligence and its high technology, and always uphold the principles of integrating tradition and modernity, approaching ideals and reality, giving equal importance to practice and theory, and carrying out reform and survival, so as to build a digital media design teaching system that meets the modern context. This paper proposes a system based on the convolutional neural network. 
The main contributions of this paper are (1) A convolutional neural network-based art design algorithm for combining digital and multimedia is proposed, the framework structure of the neural network model is analyzed, and the process of how to use the model to extract artistic styles and fuse them with ordinary images is discussed. (2) Then, based on the theory, according to the actual characteristics of the art combining digital and multimedia, the experimental analysis is performed to find the processing content images with suitable convolution layers, as well as finding the best overlay combination for digital media art feature extraction and proposing visualization criteria for evaluating image quality. (3) Finally, the feasibility of the theory is verified by adjusting the scale coefficients of content images and style images to obtain images that meet the expected goals, and a new style extraction method for digital and multimedia combined art is proposed. ## 2. Related Work ### 2.1. Digital Media Art The art of combining digital and multimedia refers to the synthesis of digital image processing technology, information and communication technology, art design, and other disciplines. Unlike traditional art, the art of combining digital and multimedia is widely used and developed in other fields through media communication with its own unique characteristics of integration, virtualization, and intersectionality [12–14]. The art of combining digital and multimedia has diversified forms of expression, and it changes people’s aesthetic pursuit of art with its unique creation concept, influencing the direction of art modernity development. As socialism with Chinese characteristics enters a new era, the living standards of the people and the aesthetic ability of the public are increasingly rising, the art of combining digital and multimedia not only brings people a better quality visual experience but also gives them stronger sensory stimulation. In artistic expression and communication, the art of combining digital and multimedia can vividly and effectively release and convey information and is gradually becoming a new carrier for the development of modern artistic expression. The benign development of the art of combining digital and multimedia can play an excellent role in promoting the development of human society. The main characteristics of the art of combining digital and multimedia along with the development of social life, people have put forward higher requirements for the accuracy, interactivity, and efficiency of information dissemination, and the application of the art of combining digital and multimedia in art has brought about great changes in people’s artistic feelings. The art of combining digital and multimedia is an art that combines participation, interactivity, multimedia use, high-tech enrichment, and new forms of expression, and is a reorganization of the real world and virtual images. Its emergence and development has provided a broader stage of expression for traditional art content. Grassroots participation in art is closely related to people’s lives, and people can enjoy their favorite artworks at any time, publish their own comments on artworks through the Internet, and even publish their own artworks anonymously. 3D and 4D technologies that were only bred in the digital media era, have become popular, extending people’ ability to appreciate art, innovating forms of art appreciation, and getting closer to art. 
The interactive nature of two-way interactive is the art of combining digital and multimedia.The interactive nature of two-way interactive is the art of combining digital and multimedia provides a new creative experience for art creators. Before the digital media came out, the dissemination of art was a whole process of creation, release, and transmission by artists and traditional media, with the audience only playing the role of receiver and appreciator, in which the artist could not get feedback from the audience. The emergence of digital media has brought about a new change in the process of art communication [15–17]. The whole process has changed from one-way to two-way interaction, and the artist can also know the feelings of the audience at any time, and the audience can also participate in the creation of the process with two-way interaction. Multimedia use of our country’s traditional artworks are created through different tools and different materials, for now, China’s art of combining digital and multimedia products are mainly conducted through computers and new technologies. In the process of painting oil paintings, the creators first use computer software, which will help with later editing and modifications, and the computer can record the entire creative process of the creator. In addition to these, the art of combining digital and multimedia also includes e-books and e-maps. People use e-books for reading and can make comments and changes at any time, bringing convenience to reading. People can enter the places they want to go on the electronic map, and it will remind people of the relevant traffic precautions. One of the most important features of high-tech enrichment in the field of art is exaggeration, and the art of combining digital and multimedia is no exception, and it brings this feature to the highest point. We can easily find that in the art of combining digital and multimedia, every piece of art and every character is exaggerated, with exaggerated expressions and shapes, and even exaggerated language, all of which are designed to give people the best artistic experience. For example, new technologies in the art of combining digital and multimedia are used in paintings to make them more vivid and rich; VR games are games with high technology in the art of combining digital and multimedia, and the game screen becomes three-dimensional from two-dimensional, so that people can participate in the game more realistically. Compared with other types of artistic expression, the art of combining digital and multimedia has its own unique innovative performance, which is summarized in the following points: innovation of creative concepts China’s economy in the new era is still in a relatively rapid development period, and the strong economic level provides convenient conditions for the innovative development of the art of combining digital and multimedia. The economic foundation determines the superstructure, with the economic foundation as a pavement, and then through new technology and means, the art of combining digital and multimedia can have adequate development concepts and new ideas. Under the new creation concept, the traditional means and the current means will be very different, the creation method is more and more rich and diverse, the creation content is newer, the art of combining digital and multimedia also pays more attention to the development of personality, and the needs of different audience groups are satisfied. 
The difficulty of creation lies in new ideas, once any creation lacks new ideas, it is difficult to get people’s love, and the art of combining digital and multimedia is no exception [17–19]. With the emergence of the art of combining digital and multimedia, some old artists do not have a deep enough understanding of this new form of creation and cannot break through themselves in time with the development of the times. Some people still use the old methods in their creation and simply think about the development of traditional art according to their past thinking. In their eyes, the modern art of combining digital and multimedia has no way to reflect the real value of their works. This makes the works they create lack their own individuality and are as same. Besides these people, there are also some artists who are not too familiar with the functions of digital media, leading to the lack of innovation in their works and the phenomenon of similar works. ### 2.2. Artificial Intelligence Technology The integration of artificial intelligence technology and the art of combining digital and multimedia is based on human understanding of themselves and their experience of emotions [19–21]. If a person has no understanding of the formation of aesthetic thinking and consciousness in a different cultural context, it is simply impossible to solve various design problems concerning the way humans live and work if they simply rely on the computational methods of intelligent machines. Today’s iterative developments in science and technology continue to enrich the way people live and produce and raise the demand for aesthetics. The reason for developing artificial intelligence technology is to better serve people themselves, while the integration of art and design is precisely from the emotional needs of people, where art is more prominent in the expression of the subjective concept of people. The future of artificial intelligence is to think about the needs of people and the ability to change different environmental needs. From the current application of artificial intelligence scenarios, artificial intelligence in solving certain boundary problems, people are bound to lose to the machine; however, the need to make decisions with the help of emotional judgment, the logical reasoning made based on artificial intelligence big data also exists uncertainty. At present, artificial intelligence can only provide high technology, while the formation of excellent artworks must require new technical means to achieve. People have emphasized that art is the expression of human subjective concepts, just like saying that if it is similar, it is vulgar, and if it is not, it is not. Art does not have boundaries; therefore, artificial intelligence can never replace a human to complete subjective concepts. The key to the integration of artificial intelligence technology and the art of combining digital and multimedia is to obtain new methods of art created with the help of artificial intelligence technology, so as to better enrich the means of the art design, so the core of the integration of the two depends on people’s values and cognitive ability of artificial intelligence [22].In summary, we know the concept of artificial intelligence, the way of integration of artificial intelligence and the art of combining digital and multimedia and the core significance. 
In the context of artificial intelligence, intelligent machines can drive the continuous development and progress of society by virtue of their powerful computing power, and the next stage of deeper integration and innovation of artificial intelligence and the art of combining digital and multimedia will definitely present the following development characteristics. The art of combining digital and multimedia has characteristics such as openness, integration, and interactivity, and there are also artistic time-sensitive features that change with time or forms, and it is based on these special features that make the current art of combining digital and multimedia more and more diversified. In recent years, the rapid development of computer and network technology in China has reached the needs of art creation making the organic integration of art and artificial intelligence technology. Only by scientifically and rationally utilizing this fusion can the art of combining digital and multimedia give audiences a more perfect visual experience and strengthen their perceptions and impressions of the works of art creation. With various high-tech as the creative support of the art of combining digital and multimedia, it will let art and technology burst into more artistic miracles. Actively expanding the diversity of digital media, the characteristics of the art of combining digital and multimedia are more diverse, yet in fact, the development of the art of combining digital and multimedia and artificial intelligence technology shows an obvious sense of weakness. In this period of sustainable development of the art of combining digital and multimedia and simultaneous technological progress, Chinese art of combining digital and multimedia will present more new types of art in a richer form of expression, prompting digital media to achieve diverse development, bringing audiences a completely different consumer experience and a new experience and artistic perceptiveness in visual perception [23–25]. From this point of view, the mutual integration of the art of combining digital and multimedia and artificial intelligence technology will certainly make the development space of art and technology more extensive. From the analysis of the development history of the art of combining digital and multimedia in the past, the integration of the art of combining digital and multimedia and artificial intelligence technology in China can only be better presented in the form of technology and art through digital media by continuously strengthening its own traditional cultural heritage and modern civilization. Most people one-sidedly believe that digital technology and computer technology is the catalyst for the development of the art of combining digital and multimedia and technology, however, in fact from another perspective, it is based on the background of the artificial intelligence era that society has stepped into a new period of development of art and technology. From the current point of view, Chinese art of combining digital and multimedia is bound to remain in the early stages of development for a long time and to face many difficulties and problems. In the context of the artificial intelligence era, there will also be a large number of art and design talents emerging for continuous research and exploration, and the emergence of these talents will certainly lead to further development and enhancement of the art of combining digital and multimedia and artificial intelligence technology. 
In this process, it is especially worth noting that the excellent Chinese cultural connotations should not be abandoned. Commitment to digging out more beneficial art of combining digital and multimedia elements from traditional culture is the only way to make the art of combining digital and multimedia always develop in a more meaningful way.
## 3. Methods

### 3.1. Model Architecture
The art of combining digital and multimedia style conversion is an important technique for nonrealistic drawing in computer graphics. For texture synthesis, existing nonparametric algorithms can synthesize realistic natural textures by resampling the pixels of a given source texture.
Most previous texture transfer algorithms use these nonparametric methods for texture synthesis, while using different methods to preserve the structure of the target image. This study uses a deep convolutional neural network to learn a generic feature representation that performs texture transfer (i.e., style transformation) while preserving the semantic content of the target image. The model structure of this paper is shown in Figure 2.
Figure 2: Model structure.

### 3.2. Image Semantic Content Representation
Given an input image $x$, each layer of the convolutional neural network encodes the image with its filters. A layer $l$ of the network contains $N_l$ different filters, i.e., it has $N_l$ feature maps, each of size $M_l$. Therefore, the responses in layer $l$ can be stored in a matrix $F^l$, where $F_{ij}^{l}$ is the activation of the $i$th filter at position $j$ in layer $l$. To visualize the image information encoded on different layers, gradient descent can be performed on a white noise image to find another image that matches the feature response of the original image. Let $p$ and $x$ be the original image and the generated image, and let $P^l$ and $F^l$ represent their features in the $l$th layer, respectively. Then, the squared error loss between the two feature representations is defined as
$$L_{content}(\vec{p}, \vec{x}, l) = \frac{1}{2}\sum_{i,j}\left(F_{ij}^{l} - P_{ij}^{l}\right)^{2}. \quad (1)$$
With respect to the activations in layer $l$, the derivative of this loss function is
$$\frac{\partial L_{content}}{\partial F_{ij}^{l}} = \begin{cases} \left(F^{l} - P^{l}\right)_{ij}, & \text{if } F_{ij}^{l} > 0, \\ 0, & \text{if } F_{ij}^{l} < 0. \end{cases} \quad (2)$$
Standard error backpropagation can then be used to calculate the gradient with respect to the image $x$. Thus, the initially random image $x$ can be changed until it generates the same response in a particular layer of the convolutional neural network as the original image $p$. The schematic diagram of semantic content representation and transformation is shown in Figure 3.
Figure 3: Schematic diagram of semantic content representation and transformation.

### 3.3. Artistic Style Representation
The generative network model in the art of combining digital and multimedia style network uses a residual network, whose shortcut connections effectively alleviate the problem of gradient vanishing when a deep learning network becomes very deep. In addition, the advantage of building the network with residual layers is that a residual layer trains faster than a general convolutional layer with the same convolutional effect. The style transition diagram is shown in Figure 4. The perceptual network model uses a five-layer residual network for image feature extraction; however, the perceptual network is not very deep, with three convolutional layers, five residual layers, and three deconvolutional layers, for a total of eleven layers. Each residual layer contains two convolutional layers, and the size of the convolutional kernel is 3 × 3. The art of combining digital and multimedia generation network is deepened on this basis, and ten residual layers are chosen for the middle part of the generation network to exploit the residual structure as much as possible under the existing experimental environment. At the same time, the residual layer of the generation network is adjusted from two 3 × 3 convolutional kernels to a combination of two 1 × 1 convolutional kernels and one 3 × 3 convolutional kernel.
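To make the adjusted residual layer described above concrete, the following PyTorch-style sketch builds a bottleneck residual block from two 1 × 1 convolutions around one 3 × 3 convolution; the channel counts, normalization layers, and number of blocks are illustrative assumptions rather than the paper's exact configuration. The rationale for the 1 × 1 kernels is discussed next.

```python
import torch
import torch.nn as nn


class BottleneckResidualBlock(nn.Module):
    """Residual block using two 1x1 convolutions around one 3x3 convolution."""

    def __init__(self, channels: int, bottleneck: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, bottleneck, kernel_size=1),              # 1x1: reduce channels
            nn.InstanceNorm2d(bottleneck),
            nn.ReLU(inplace=True),
            nn.Conv2d(bottleneck, bottleneck, kernel_size=3, padding=1),  # 3x3: spatial mixing
            nn.InstanceNorm2d(bottleneck),
            nn.ReLU(inplace=True),
            nn.Conv2d(bottleneck, channels, kernel_size=1),              # 1x1: restore channels
            nn.InstanceNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Shortcut connection: output = x + F(x)
        return x + self.body(x)


# Example: ten such blocks for the middle part of a generation network
middle = nn.Sequential(*[BottleneckResidualBlock(128, 32) for _ in range(10)])
out = middle(torch.randn(1, 128, 64, 64))
print(out.shape)  # torch.Size([1, 128, 64, 64])
```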
1 × 1 convolutional kernels are not useful for mixing information within the two-dimensional plane, but each layer of the art of combining digital and multimedia generation network has multiple channels receiving image data and responses, and each layer has multiple convolutional kernels. Here, the 1 × 1 convolutional kernel can reduce the dimensionality of these stacked 3D data so that the number of input and output channels of the convolutional layer is reduced, thus reducing the number of network parameters and significantly improving the training speed of the network model. Considered from the opposite perspective, the use of 1 × 1 convolutional kernels allows the generation network to receive image data with larger dimensions at the same network size. For example, the original generation network allows input images with a maximum dimension of 512 × 512, while the improved network can accommodate images with a resolution of 2048 × 2048. Selecting larger, higher-quality images results in better visualization of the experimental results.

Style is represented by feature correlations, given by the Gram matrix $G^l$, where $G_{ij}^{l}$ is the inner product between the vectorized feature maps $i$ and $j$ in layer $l$:
$$G_{ij}^{l} = \sum_{k} F_{ik}^{l} F_{jk}^{l}. \quad (3)$$
Figure 4: Schematic diagram of style transformation.
Let $a$ and $x$ be the original image and the generated image, while $A^l$ and $G^l$ denote the style representations of the original and generated images in layer $l$, respectively. Then, the loss contribution of layer $l$ can be expressed as
$$E_{l} = \frac{1}{4 N_{l}^{2} M_{l}^{2}} \sum_{i,j}\left(G_{ij}^{l} - A_{ij}^{l}\right)^{2}. \quad (4)$$
The total style loss function is
$$L_{style}(\vec{a}, \vec{x}) = \sum_{l} w_{l} E_{l}, \quad (5)$$
where $w_l$ is the weighting factor of layer $l$. With respect to the activations in layer $l$, the derivative of $E_l$ can be expressed as
$$\frac{\partial E_{l}}{\partial F_{ij}^{l}} = \begin{cases} \dfrac{1}{N_{l}^{2} M_{l}^{2}}\left(\left(F^{l}\right)^{T}\left(G^{l} - A^{l}\right)\right)_{ji}, & \text{if } F_{ij}^{l} > 0, \\ 0, & \text{if } F_{ij}^{l} < 0. \end{cases} \quad (6)$$
The gradient of $E_l$ with respect to the pixel values $x$ can then be calculated using standard error backpropagation.

### 3.4. Training the Art of Combining Digital and Multimedia Style Network
The art of combining digital and multimedia style network is constructed and, like other CNN networks, needs a large amount of image data and loss functions for training. In this paper, we use the MSCOCO 2014 dataset, which contains 328,000 images divided into 91 categories. The update of the generation network parameters relies on the loss function and the stochastic gradient descent algorithm, and the specific training process is as follows: first, a noisy image $x$ is input into the generation network, which convolves and deconvolves $x$ to obtain an image of the same size. The content image $y_c$ from the MSCOCO dataset and a specified art style image $y_s$ are used in the discriminative (loss) network; the calculated losses are fed back to the generation network using the BP algorithm, and the parameters and weights of each layer of the generation network are updated to minimize the total loss.
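As a hedged illustration of how Equations (1) and (3)–(5) above translate into code, the PyTorch sketch below computes the Gram matrix and the content and style losses on arbitrary feature maps; the feature tensors, layer choices, and weights are placeholders rather than the paper's actual VGG configuration.

```python
import torch


def gram_matrix(feat: torch.Tensor) -> torch.Tensor:
    """Gram matrix G_ij = sum_k F_ik F_jk over vectorized feature maps (Eq. (3))."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)          # N_l feature maps, each of size M_l = h*w
    return torch.bmm(f, f.transpose(1, 2))


def content_loss(gen_feat: torch.Tensor, content_feat: torch.Tensor) -> torch.Tensor:
    """Squared-error content loss of Eq. (1)."""
    return 0.5 * ((gen_feat - content_feat) ** 2).sum()


def style_loss(gen_feats, style_feats, layer_weights):
    """Weighted sum of per-layer Gram-matrix losses, Eqs. (4)-(5)."""
    total = gen_feats[0].new_zeros(())
    for gen_f, style_f, w_l in zip(gen_feats, style_feats, layer_weights):
        _, c, h, w = gen_f.shape
        n_l, m_l = c, h * w
        e_l = ((gram_matrix(gen_f) - gram_matrix(style_f)) ** 2).sum() / (4 * n_l ** 2 * m_l ** 2)
        total = total + w_l * e_l
    return total


# Hypothetical feature maps standing in for VGG responses at two layers
gen = [torch.randn(1, 64, 32, 32), torch.randn(1, 128, 16, 16)]
sty = [torch.randn(1, 64, 32, 32), torch.randn(1, 128, 16, 16)]
print(style_loss(gen, sty, [0.5, 0.5]).item())
```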
Since the dataset provided to the art of combining digital and multimedia generation network for training is very large, the network has good generalization ability after training is completed and can apply the learned style to any unknown picture. In this paper, a content loss function $\ell_{feat}^{\phi,j}(\hat{y}, y_c)$ and a style loss function $\ell_{style}^{\phi,j}(\hat{y}, y_s)$ are constructed for picture content and artistic style, respectively. The content loss is the normalized squared Euclidean distance between the responses $\phi_j(\hat{y})$ and $\phi_j(y_c)$ obtained at layer $j$ of the pretrained loss model, where $C_j H_j W_j$ is the size of the feature map at layer $j$:
$$\ell_{feat}^{\phi,j}(\hat{y}, y_c) = \frac{1}{C_j H_j W_j}\left\| \phi_j(\hat{y}) - \phi_j(y_c) \right\|_2^2. \quad (7)$$
The extraction of image style relies on the Gram matrix: $G_j^{\phi}(x)_{c,c'}$ is the inner product between the same-dimensional feature maps $\phi_j(x)_{h,w,c}$ and $\phi_j(x)_{h,w,c'}$ obtained at layer $j$. The covariance operation in the Gram matrix captures the covariance relationship between different feature map responses in the same layer of the network and also indicates which network nodes are activated, responsive, and cooperative at the same time. The style loss function is defined via the Frobenius norm of the difference between $G_j^{\phi}(\hat{y})$ and $G_j^{\phi}(y_s)$:
$$G_j^{\phi}(x)_{c,c'} = \frac{1}{C_j H_j W_j}\sum_{h=1}^{H_j}\sum_{w=1}^{W_j} \phi_j(x)_{h,w,c}\, \phi_j(x)_{h,w,c'}, \qquad \ell_{style}^{\phi,j}(\hat{y}, y_s) = \frac{1}{C_j H_j W_j}\left\| G_j^{\phi}(\hat{y}) - G_j^{\phi}(y_s) \right\|_F^2. \quad (8)$$
Besides, this paper uses total variation regularization $\lambda_{TV}\,\mathcal{L}_{TV}(\hat{y})$ to ensure the smoothness of the final generated images. The weighted sum of the content loss $\lambda_c\,\ell_{feat}^{\phi,j}(\hat{y}, y)$, the style loss $\lambda_s\,\ell_{style}^{\phi,j}(\hat{y}, y)$, and the total variation regularization $\mathcal{L}_{TV}(\hat{y})$ yields the final global loss function:
$$\mathcal{L}_{total} = \arg\min_{y}\left[ \lambda_c\, \ell_{feat}^{\phi,j}(\hat{y}, y) + \lambda_s\, \ell_{style}^{\phi,j}(\hat{y}, y) + \lambda_{TV}\, \mathcal{L}_{TV}(\hat{y}) \right]. \quad (9)$$
The training goal of the generation network is to minimize this global loss function; its value is propagated back to the neurons of the generation network with the BP algorithm, and the parameter weights are updated with gradient descent. The whole training process can also be regarded as using the feature extraction capability of the pretrained VGG-19 model to help the generation network obtain the corresponding style simulation parameters. The trained generation network has strong generalization ability and can map the learned artistic style to any unknown image. Since the parameters of the generative network are mature enough, an image that needs style mapping no longer requires computing the loss function; a single forward propagation through the generative network is sufficient, so style mapping is very fast.
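To show how the global objective in Equations (7)–(9) could be assembled in practice, here is a small PyTorch sketch with a total variation term and the weighted sum of content and style terms; the λ values, tensor shapes, and normalization details are assumptions made for illustration and are not taken from the paper.

```python
import torch


def tv_loss(img: torch.Tensor) -> torch.Tensor:
    """Total variation regularizer L_TV, encouraging smooth generated images."""
    return ((img[:, :, 1:, :] - img[:, :, :-1, :]).abs().sum()
            + (img[:, :, :, 1:] - img[:, :, :, :-1]).abs().sum())


def feat_loss(gen_feat: torch.Tensor, content_feat: torch.Tensor) -> torch.Tensor:
    """Normalized squared distance between feature responses, as in Eq. (7)."""
    return ((gen_feat - content_feat) ** 2).sum() / gen_feat.numel()


def total_loss(gen_img, gen_feat, content_feat, gen_gram, style_gram,
               lam_c=1.0, lam_s=5.0, lam_tv=1e-6):
    """Weighted objective in the spirit of Eq. (9): content + style + TV terms."""
    style_term = ((gen_gram - style_gram) ** 2).sum() / gen_gram.numel()  # cf. Eq. (8)
    return (lam_c * feat_loss(gen_feat, content_feat)
            + lam_s * style_term
            + lam_tv * tv_loss(gen_img))


# Demonstration with random placeholder tensors (not real VGG features)
img = torch.randn(1, 3, 128, 128, requires_grad=True)
gen_f = torch.randn(1, 256, 32, 32, requires_grad=True)
tgt_f = torch.randn(1, 256, 32, 32)
g_gen = torch.randn(1, 256, 256, requires_grad=True)
g_sty = torch.randn(1, 256, 256)

loss = total_loss(img, gen_f, tgt_f, g_gen, g_sty)
loss.backward()   # gradients flow back, analogous to the BP step described above
print(loss.item())
```

In an actual training loop, the placeholder tensors would instead be the transform network's output, the responses of a frozen feature extractor, and their Gram matrices, with an optimizer step following each backward pass.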
## 4. Experiments and Results

### 4.1. Experimental Setup

The digital media art style network model was trained and evaluated on a computer running Ubuntu 16.04 with a GTX 1070 graphics card and 8 GB of RAM, using TensorFlow as the experimental framework.
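Beyond the hardware and framework listed above, the paper does not describe the software configuration; as a small, assumed illustration, a single-GPU TensorFlow setup like this one is typically verified as follows (the memory-growth setting is our assumption, not something stated in the paper).

```python
import tensorflow as tf

# Confirm the GPU is visible to TensorFlow and enable on-demand memory growth
# instead of pre-allocating the full GPU memory at startup.
gpus = tf.config.list_physical_devices("GPU")
print("Detected GPUs:", gpus)
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)

print("TensorFlow version:", tf.__version__)
```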
This section presents images generated by different digital media art style models for stylized mapping. The style network models include digital media art models of the board (slate) category, with rough textures and bright colors, and models of the paper category, with delicate brush strokes and a bias toward realism; the content images cover five categories: people, architecture, landscape, objects, and animals. These perspectives were used to assess how similar the images generated by the digital media art style network are to real digital media artworks. A total of 25 digital media art style network models were trained and compared against many different categories of real digital media artworks. Real digital media art can be broadly classified into board-based and paper-based works according to the drawing method and visual effect.

Before training the network models, different training parameters were set according to the different styles of real digital media art. One of the main parameters is the ratio of the content weight to the style weight in the loss function, α/β: the higher the ratio, the less stylized the generated images are, and vice versa. For the board-like digital media art network model, α/β should be set larger in order to retain, as far as possible, the rough lines and interwoven colors of real board-like works, and 0.15 is chosen as the final value in this paper. For the paper-like model, α/β should be smaller, and 0.3 is chosen as the final value. The loss convergence curve and the performance improvement during training are shown in Figures 5 and 6.

Figure 5: Training process loss convergence curve.
Figure 6: Training process performance improvement diagram.

### 4.2. Experimental Results

Experiments were run with the network model on the original data and on the data after data enhancement; the results are shown in Table 1. The classification accuracy after data enhancement improves by 9.21 percentage points.

Table 1: Classification results before and after data enhancement.

| Image data processing | Accuracy rate (%) |
| --- | --- |
| Without data enhancement | 77.34 |
| With data enhancement | 86.55 |
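The paper does not list which "data enhancement" operations produced the gain in Table 1; the following TensorFlow pipeline is only a hedged illustration of a typical image-augmentation setup, with random flips, rotations, and zoom as assumed choices rather than the paper's actual ones.

```python
import tensorflow as tf

# Assumed augmentation pipeline; the specific transforms are illustrative, not from the paper.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.05),
    tf.keras.layers.RandomZoom(0.1),
])

def augment_dataset(ds: tf.data.Dataset) -> tf.data.Dataset:
    """Apply the augmentation pipeline to an (image, label) dataset during training."""
    return ds.map(lambda img, label: (augment(img, training=True), label),
                  num_parallel_calls=tf.data.AUTOTUNE)
```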
Comparison of different classification methods: comparison with traditional network models. To verify the effectiveness of this paper's network model for classifying digital media art images, the data of this paper were fed into the traditional network models and into this paper's network model for training and validation; the results are shown in Figures 7 and 8.

Figure 7: Comparison of classification time of different network models.
Figure 8: Comparison of the classification results of different network models.

As Figures 7 and 8 show, compared with the existing network models, the network model in this paper is lightweight, has a short training time, and achieves the best accuracy rate. However, the proposed network module contains parallel convolution operations, so the parameter count of the model in this paper is higher than that of ShuffleNet, MobileNetV1, and MobileNetV2. "Method + SE" and "method + SK" denote network models in which the proposed module is replaced with the SE and SK modules, respectively, with the reduction (descent) rate r in those modules set to 16. The experimental results show that the accuracy of the network model in this paper is 0.86% and 1.25% higher than that of method + SE and method + SK, respectively. When the r value of the proposed module is set to 4, the classification accuracy is higher than when r is set to 16.

Comparing with traditional methods, Table 2 shows that manually extracted low-level global features based on color, shape, and so on, together with local features, cannot fully distinguish the style features of various types of art images, whereas the method in this paper better extracts both the global features and the local detail features of art images and improves the classification accuracy.

Table 2: Comparison of the results of the method in this paper with the traditional methods.

| Methods | Accuracy rate (%) |
| --- | --- |
| Comparison method 1 | 66.78 |
| Comparison method 2 | 60.2 |
| Method of this paper | 86.55 |

Module placement: to examine the effect of the proposed module on art image classification at different locations of the network model, the module is placed separately at the positions numbered 6, 9, and 12, with 3 × 3 and 5 × 5 convolution kernels on the branches and the reduction rate r set to 4. The results are shown in Figures 9 and 10.

Figure 9: Accuracy of the module in different positions.
Figure 10: Operation volume of the module in different positions.

As Figures 9 and 10 show, the proposed module achieves the highest classification accuracy and the lowest computational cost when it is placed at position 6 of the network model alone.

The reduction rate r and the branch convolution kernel size are an important pair of parameters in the proposed module for controlling computational resources and accuracy. With the proposed module placed individually at position 6, the r value and the branch kernel sizes were varied and analyzed; the results are shown in Table 3. When the branch kernel size is fixed, classification results are higher with r = 4 than with r = 16; when r is fixed, a two-branch module takes less training time and fewer parameters than a three-branch module; and the highest accuracy is obtained when the branch kernels are 1 × 1 and 5 × 5.

Table 3: Comparison of reduction rate and convolution kernel size classification results.
| Reduction rate | 1 × 1 | 3 × 3 | 5 × 5 | Parameters (M) | Time (min) | Accuracy rate (%) |
| --- | --- | --- | --- | --- | --- | --- |
| r = 4 | ✓ | ✓ | | 2.3 | 1367 | 87.26 |
| r = 4 | ✓ | | ✓ | 2.6 | 1680 | 87.58 |
| r = 4 | | ✓ | ✓ | 2.7 | 1800 | 87.24 |
| r = 4 | ✓ | ✓ | ✓ | 2.7 | 2370 | 87.35 |
| r = 16 | ✓ | ✓ | | 2.3 | 1230 | 86.35 |
| r = 16 | ✓ | | ✓ | 2.6 | 1220 | 86.85 |
| r = 16 | | ✓ | ✓ | 2.7 | 1440 | 86.15 |
| r = 16 | ✓ | ✓ | ✓ | 2.7 | 1770 | 86.42 |

Dilated ("null") convolution: to compare the effect of dilated convolution kernels on art image feature extraction, dilated kernels of different sizes were used on the branches of the proposed module in experiments on the data of this paper. The results are shown in Table 4, where K3 denotes an ordinary 3 × 3 convolution kernel, K5 denotes a 3 × 3 kernel with a dilation rate of 2 and a receptive field of 5 × 5, and K7 denotes a 3 × 3 kernel with a dilation rate of 3 and a receptive field of 7 × 7.

Table 4: Classification results when the module branches use different dilated convolution kernels.

| Reduction rate | K3 | K5 | K7 | Parameters (M) | Time (min) | Accuracy rate (%) |
| --- | --- | --- | --- | --- | --- | --- |
| r = 4 | ✓ | ✓ | | 2.4 | 1530 | 86.07 |
| r = 4 | ✓ | | ✓ | 2.4 | 1657 | 86.00 |
| r = 4 | | ✓ | ✓ | 2.4 | 1560 | 84.50 |
| r = 4 | ✓ | ✓ | ✓ | 2.6 | 2220 | 86.40 |

The experimental results show that dilated convolution uses fewer parameters than ordinary convolution with the same receptive field, but its classification accuracy is not as high as that of ordinary convolution.
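The paper does not give the full definition of the proposed parallel-branch module, so the following Keras sketch should be read only as one plausible arrangement of the ingredients compared in Tables 3 and 4: parallel 1 × 1 / 3 × 3 / 5 × 5 branches, an SE-style channel reduction with rate r, and the option of replacing the 5 × 5 kernel with a 3 × 3 kernel of dilation rate 2. The function name, layer order, and the toy model around it are assumptions for illustration, not the paper's definitive module.

```python
import tensorflow as tf
from tensorflow.keras import layers

def parallel_branch_block(x, channels: int, r: int = 4, use_dilation: bool = False):
    """Illustrative block: parallel branches plus SE-style squeeze-excitation with reduction rate r."""
    # Parallel branches with different receptive fields.
    b1 = layers.Conv2D(channels, 1, padding="same", activation="relu")(x)
    b3 = layers.Conv2D(channels, 3, padding="same", activation="relu")(x)
    if use_dilation:
        # 3 x 3 kernel with dilation rate 2 -> effective 5 x 5 receptive field, fewer parameters.
        b5 = layers.Conv2D(channels, 3, dilation_rate=2, padding="same", activation="relu")(x)
    else:
        b5 = layers.Conv2D(channels, 5, padding="same", activation="relu")(x)
    merged = layers.Add()([b1, b3, b5])

    # SE-style channel attention: squeeze, reduce the channel count by r, restore, and reweight.
    s = layers.GlobalAveragePooling2D()(merged)
    s = layers.Dense(channels // r, activation="relu")(s)
    s = layers.Dense(channels, activation="sigmoid")(s)
    s = layers.Reshape((1, 1, channels))(s)
    return layers.Multiply()([merged, s])

# Toy usage: insert the block into a small classification model.
inp = layers.Input(shape=(64, 64, 3))
x = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
x = parallel_branch_block(x, channels=32, r=4, use_dilation=True)
out = layers.Dense(10, activation="softmax")(layers.GlobalAveragePooling2D()(x))
model = tf.keras.Model(inp, out)
model.summary()
```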
## 5. Conclusion

With the accelerating development of artificial intelligence technology, designers in the traditional design field will gradually be displaced by intelligent machines if they do not assess the situation, transform, and improve themselves in time. However, not all work in the field of art and design can be replaced: in particular, creative design concepts and content are something no high technology or machine can produce. As professionals in art and design, we should think more about the intangible value in art and design; the core competitiveness of design talent lies in a constant outpouring of creativity and inventiveness.
The powerful computing capacity of artificial intelligence can help designers use historical big data to explore human needs comprehensively, solve practical problems for people through design, and derive design thinking from data. At the same time, today's global information is extremely rich and diverse, and it drives people to accept new products, new technologies, and a new digital life in a passive mode. An in-depth exploration of the aesthetics and wisdom of digital media design requires art designers to view the changing times with their own understanding and to feel how life differs across eras. This paper proposes a convolutional neural network-based method to better achieve feature extraction and classification of art images. It addresses the insufficient research on the classification of existing multi-class art images and achieves better art image classification results than existing network models and traditional classification methods. In future work, we will optimize the art image classification network model, expand the art image sample library, and further improve both the accuracy of art image classification and the efficiency of the network model.

---

*Source: 1004204-2022-08-08.xml*
Unlike traditional art, the art of combining digital and multimedia is widely used and developed in other fields through media communication with its own unique characteristics of integration, virtualization, and intersectionality [12–14]. The art of combining digital and multimedia has diversified forms of expression, and it changes people’s aesthetic pursuit of art with its unique creation concept, influencing the direction of art modernity development. As socialism with Chinese characteristics enters a new era, the living standards of the people and the aesthetic ability of the public are increasingly rising, the art of combining digital and multimedia not only brings people a better quality visual experience but also gives them stronger sensory stimulation. In artistic expression and communication, the art of combining digital and multimedia can vividly and effectively release and convey information and is gradually becoming a new carrier for the development of modern artistic expression. The benign development of the art of combining digital and multimedia can play an excellent role in promoting the development of human society. The main characteristics of the art of combining digital and multimedia along with the development of social life, people have put forward higher requirements for the accuracy, interactivity, and efficiency of information dissemination, and the application of the art of combining digital and multimedia in art has brought about great changes in people’s artistic feelings. The art of combining digital and multimedia is an art that combines participation, interactivity, multimedia use, high-tech enrichment, and new forms of expression, and is a reorganization of the real world and virtual images. Its emergence and development has provided a broader stage of expression for traditional art content. Grassroots participation in art is closely related to people’s lives, and people can enjoy their favorite artworks at any time, publish their own comments on artworks through the Internet, and even publish their own artworks anonymously. 3D and 4D technologies that were only bred in the digital media era, have become popular, extending people’ ability to appreciate art, innovating forms of art appreciation, and getting closer to art. The interactive nature of two-way interactive is the art of combining digital and multimedia.The interactive nature of two-way interactive is the art of combining digital and multimedia provides a new creative experience for art creators. Before the digital media came out, the dissemination of art was a whole process of creation, release, and transmission by artists and traditional media, with the audience only playing the role of receiver and appreciator, in which the artist could not get feedback from the audience. The emergence of digital media has brought about a new change in the process of art communication [15–17]. The whole process has changed from one-way to two-way interaction, and the artist can also know the feelings of the audience at any time, and the audience can also participate in the creation of the process with two-way interaction. Multimedia use of our country’s traditional artworks are created through different tools and different materials, for now, China’s art of combining digital and multimedia products are mainly conducted through computers and new technologies. 
In the process of painting oil paintings, the creators first use computer software, which will help with later editing and modifications, and the computer can record the entire creative process of the creator. In addition to these, the art of combining digital and multimedia also includes e-books and e-maps. People use e-books for reading and can make comments and changes at any time, bringing convenience to reading. People can enter the places they want to go on the electronic map, and it will remind people of the relevant traffic precautions. One of the most important features of high-tech enrichment in the field of art is exaggeration, and the art of combining digital and multimedia is no exception, and it brings this feature to the highest point. We can easily find that in the art of combining digital and multimedia, every piece of art and every character is exaggerated, with exaggerated expressions and shapes, and even exaggerated language, all of which are designed to give people the best artistic experience. For example, new technologies in the art of combining digital and multimedia are used in paintings to make them more vivid and rich; VR games are games with high technology in the art of combining digital and multimedia, and the game screen becomes three-dimensional from two-dimensional, so that people can participate in the game more realistically. Compared with other types of artistic expression, the art of combining digital and multimedia has its own unique innovative performance, which is summarized in the following points: innovation of creative concepts China’s economy in the new era is still in a relatively rapid development period, and the strong economic level provides convenient conditions for the innovative development of the art of combining digital and multimedia. The economic foundation determines the superstructure, with the economic foundation as a pavement, and then through new technology and means, the art of combining digital and multimedia can have adequate development concepts and new ideas. Under the new creation concept, the traditional means and the current means will be very different, the creation method is more and more rich and diverse, the creation content is newer, the art of combining digital and multimedia also pays more attention to the development of personality, and the needs of different audience groups are satisfied. The difficulty of creation lies in new ideas, once any creation lacks new ideas, it is difficult to get people’s love, and the art of combining digital and multimedia is no exception [17–19]. With the emergence of the art of combining digital and multimedia, some old artists do not have a deep enough understanding of this new form of creation and cannot break through themselves in time with the development of the times. Some people still use the old methods in their creation and simply think about the development of traditional art according to their past thinking. In their eyes, the modern art of combining digital and multimedia has no way to reflect the real value of their works. This makes the works they create lack their own individuality and are as same. Besides these people, there are also some artists who are not too familiar with the functions of digital media, leading to the lack of innovation in their works and the phenomenon of similar works. ## 2.2. 
## 2.2. Artificial Intelligence Technology

The integration of artificial intelligence technology and the art of combining digital and multimedia is based on human understanding of ourselves and our experience of emotions [19–21]. If a person has no understanding of how aesthetic thinking and consciousness form in different cultural contexts, it is simply impossible to solve the various design problems concerning the way humans live and work by relying only on the computational methods of intelligent machines. Today’s iterative developments in science and technology continue to enrich the way people live and produce and raise the demand for aesthetics. The reason for developing artificial intelligence technology is to better serve people themselves, while the integration of art and design proceeds precisely from people’s emotional needs, and art is more prominent in expressing people’s subjective concepts. The future of artificial intelligence lies in thinking about people’s needs and in the ability to adapt to different environmental demands. Judging from the current application scenarios of artificial intelligence, people are bound to lose to the machine on certain well-bounded problems; however, for decisions that must be made with the help of emotional judgment, the logical reasoning based on artificial intelligence and big data still carries uncertainty. At present, artificial intelligence can only provide high technology, while the formation of excellent artworks requires new technical means to achieve. People have emphasized that art is the expression of human subjective concepts, as the saying goes that what is too similar is vulgar, and what is not similar at all is not art. Art does not have boundaries; therefore, artificial intelligence can never replace a human in completing subjective concepts. The key to the integration of artificial intelligence technology and the art of combining digital and multimedia is to obtain new methods of art creation with the help of artificial intelligence technology, so as to better enrich the means of art design, so the core of the integration of the two depends on people’s values and their cognitive ability regarding artificial intelligence [22]. In summary, we have outlined the concept of artificial intelligence, the way artificial intelligence and the art of combining digital and multimedia can be integrated, and the core significance of that integration. In the context of artificial intelligence, intelligent machines can drive the continuous development and progress of society by virtue of their powerful computing power, and the next stage of deeper integration and innovation of artificial intelligence and the art of combining digital and multimedia will present the following development characteristics. The art of combining digital and multimedia has characteristics such as openness, integration, and interactivity, as well as time-sensitive features that change with time or form, and it is on the basis of these special features that the current art of combining digital and multimedia is becoming more and more diversified. In recent years, the rapid development of computer and network technology in China has met the needs of art creation and made the organic integration of art and artificial intelligence technology possible. Only by scientifically and rationally utilizing this fusion can the art of combining digital and multimedia give audiences a more perfect visual experience and strengthen their perceptions and impressions of the works of art.
With various high technologies as the creative support of the art of combining digital and multimedia, art and technology will burst into more artistic miracles. Although the diversity of digital media is being actively expanded and the characteristics of the art of combining digital and multimedia are becoming more varied, the joint development of the art of combining digital and multimedia and artificial intelligence technology in fact still shows an obvious weakness. In this period of sustainable development of the art of combining digital and multimedia and simultaneous technological progress, Chinese art of combining digital and multimedia will present more new types of art in richer forms of expression, prompting digital media to achieve diverse development and bringing audiences a completely different consumer experience and a new artistic perceptiveness in visual perception [23–25]. From this point of view, the mutual integration of the art of combining digital and multimedia and artificial intelligence technology will certainly broaden the development space of art and technology. Judging from the development history of the art of combining digital and multimedia, the integration of the art of combining digital and multimedia and artificial intelligence technology in China can only be presented well, in the form of technology and art through digital media, by continuously strengthening its own traditional cultural heritage and modern civilization. Most people one-sidedly believe that digital technology and computer technology are the catalysts for the development of the art of combining digital and multimedia and technology; in fact, from another perspective, it is against the background of the artificial intelligence era that society has stepped into a new period of development of art and technology. From the current point of view, Chinese art of combining digital and multimedia is bound to remain in the early stages of development for a long time and to face many difficulties and problems. In the context of the artificial intelligence era, a large number of art and design talents will also emerge for continuous research and exploration, and the emergence of these talents will certainly lead to further development and enhancement of the art of combining digital and multimedia and artificial intelligence technology. In this process, it is especially worth noting that the excellent connotations of Chinese culture should not be abandoned. A commitment to digging out more beneficial elements of the art of combining digital and multimedia from traditional culture is the only way to keep the art of combining digital and multimedia developing in a more meaningful direction.

## 3. Methods

### 3.1. Model Architecture

Style conversion in the art of combining digital and multimedia is an important technique for nonrealistic drawing in computer graphics. For texture synthesis, existing nonparametric algorithms can synthesize realistic natural textures by resampling the pixels of a given source texture. Most previous texture transfer algorithms use these nonparametric methods for texture synthesis while using different methods to preserve the structure of the target image. This study uses a deep convolutional neural network to learn a generic feature representation that performs texture transfer (i.e., style transformation) while preserving the semantic content of the target image. The model structure of this paper is shown in Figure 2.

Figure 2: Model structure.
### 3.2. Image Semantic Content Representation

Given an input image $x$, each layer of the convolutional neural network encodes the image using its filters. A layer of the neural network contains $N_l$ different filters, i.e., it has $N_l$ feature maps, each of size $M_l$. Therefore, the responses in layer $l$ can be stored in a matrix $F^{l}$, where $F_{ij}^{l}$ is the activation of the $i$th filter at position $j$ in layer $l$. To visualize the image information encoded on different layers, gradient descent can be performed on a white noise image to find another image that matches the feature responses of the original image. Let $p$ and $x$ be the original image and the generated image, and let $P^{l}$ and $F^{l}$ represent their features in the $l$th layer, respectively. Then, the squared error loss between the two feature representations is defined as follows:

(1) $L_{content}\left(p, x, l\right) = \dfrac{1}{2}\sum_{i,j}\left(F_{ij}^{l} - P_{ij}^{l}\right)^{2}$.

With respect to the activations in layer $l$, the derivative of the loss function is

(2) $\dfrac{\partial L_{content}}{\partial F_{ij}^{l}} = \begin{cases} \left(F^{l} - P^{l}\right)_{ij}, & \text{if } F_{ij}^{l} > 0, \\ 0, & \text{if } F_{ij}^{l} < 0. \end{cases}$

Standard error backpropagation can then be used to calculate the gradient with respect to the image $x$. Thus, the initial random image $x$ can be changed until it generates the same response in a particular layer of the convolutional neural network as the original image $p$. The schematic diagram of semantic content representation and transformation is shown in Figure 3.

Figure 3: Schematic diagram of semantic content representation and transformation.
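As a point of reference for formulas (1) and (2), the content loss and its gradient reduce to a few lines of array arithmetic once the layer responses are available. The sketch below assumes the feature maps $F^{l}$ and $P^{l}$ have already been extracted from the network (how that is done is outside its scope) and uses NumPy arrays with illustrative shapes.

```python
import numpy as np

def content_loss_and_grad(F, P):
    """Content loss (formula (1)) and its gradient (formula (2)).

    F : ndarray of shape (N_l, M_l), responses of the generated image x in layer l.
    P : ndarray of shape (N_l, M_l), responses of the original image p in layer l.
    """
    diff = F - P
    loss = 0.5 * np.sum(diff ** 2)        # L_content = 1/2 * sum_ij (F_ij - P_ij)^2
    grad = np.where(F > 0, diff, 0.0)     # gradient is zeroed where F_ij^l <= 0
    return loss, grad

# Minimal usage with stand-in feature maps (N_l = 4 filters, M_l = 6 positions).
rng = np.random.default_rng(0)
F = rng.standard_normal((4, 6))
P = rng.standard_normal((4, 6))
loss, grad = content_loss_and_grad(F, P)
```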
### 3.3. Artistic Style Representation

The generative network model in the art of combining digital and multimedia style network uses a residual network, which adopts a shortcut (“bootstrap”) approach to effectively solve the problem of vanishing gradients when a deep learning network becomes too deep. In addition, the advantage of using residual layers to build the network is that a residual layer trains faster than a general convolutional layer with the same convolutional effect. The style transition diagram is shown in Figure 4. The perceptual network model uses a five-layer residual network for image feature extraction; the perceptual network is not very deep, with three convolutional layers, five residual layers, and three deconvolutional layers, for a total of eleven layers. Each residual layer contains two convolutional layers, and the size of the convolutional kernel is 3 × 3. The generation network of the art of combining digital and multimedia is deepened and transformed on this basis, and ten residual layers are chosen for its middle part so as to exploit the residual network as much as possible under the existing experimental environment. At the same time, each residual layer of the generation network is adjusted from two 3 × 3 convolutional kernels to a combination of two 1 × 1 convolutional kernels and one 3 × 3 convolutional kernel. A 1 × 1 convolutional kernel is of little use in two-dimensional plane operations, but each network layer of the generation network has multiple channels for receiving image data and its responses, and each network layer has multiple convolutional kernels. At this point, the 1 × 1 convolutional kernel can reduce the dimensionality of these stacked 3D data so that the numbers of input and output channels of the convolutional layer are reduced; this reduces the number of network parameters, and the training speed of the network model is significantly improved. Considered from the opposite perspective, the use of 1 × 1 convolutional kernels allows the generation network to receive image data with larger dimensions at the same network size. For example, the original generation network allows input images with a maximum dimension of 512 × 512, while the improved network can accommodate images with a resolution of 2048 × 2048. Selecting better-quality, larger-resolution images leads to better visualization of the experimental results. The feature correlations are represented by the Gram matrix $G^{l}$, where $G_{ij}^{l}$ is the inner product between the vectorized feature maps $i$ and $j$ in layer $l$:

(3) $G_{ij}^{l} = \sum_{k} F_{ik}^{l} F_{jk}^{l}$.

Figure 4: Schematic diagram of style transformation.

Let $a$ and $x$ be the original image and the generated image, and let $A^{l}$ and $G^{l}$ denote the style representations of the original and generated images in layer $l$, respectively. Then, the loss of layer $l$ can be expressed as

(4) $E_{l} = \dfrac{1}{4 N_l^{2} M_l^{2}} \sum_{i,j}\left(G_{ij}^{l} - A_{ij}^{l}\right)^{2}$.

The total loss function has the following form:

(5) $L_{style}\left(a, x\right) = \sum_{l} w_{l} E_{l}$,

where $w_{l}$ is a weighting factor. With respect to the activations in layer $l$, the derivative of $E_{l}$ can be expressed as

(6) $\dfrac{\partial E_{l}}{\partial F_{ij}^{l}} = \begin{cases} \dfrac{1}{N_l^{2} M_l^{2}}\left(\left(F^{l}\right)^{T}\left(G^{l} - A^{l}\right)\right)_{ji}, & \text{if } F_{ij}^{l} > 0, \\ 0, & \text{if } F_{ij}^{l} < 0. \end{cases}$

The gradient of $E_{l}$ with respect to the pixel values $x$ can then be easily calculated using standard error backpropagation.
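A minimal sketch of formulas (3)–(5) follows: the Gram matrix of a layer’s vectorized feature maps and the resulting weighted style loss. The feature maps, style Gram matrices, and layer weights are assumed to be precomputed and are purely illustrative.

```python
import numpy as np

def gram_matrix(F):
    """Formula (3): G_ij^l = sum_k F_ik^l F_jk^l for F of shape (N_l, M_l)."""
    return F @ F.T

def layer_style_loss(F, A_gram):
    """Formula (4): E_l for generated-image features F and the precomputed
    Gram matrix A_gram of the style image in the same layer."""
    N_l, M_l = F.shape
    G = gram_matrix(F)
    return np.sum((G - A_gram) ** 2) / (4.0 * N_l ** 2 * M_l ** 2)

def style_loss(F_layers, A_grams, weights):
    """Formula (5): weighted sum of the per-layer losses E_l."""
    return sum(w * layer_style_loss(F, A)
               for F, A, w in zip(F_layers, A_grams, weights))
```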
### 3.4. Training the Art of Combining Digital and Multimedia Style Network

Once the art of combining digital and multimedia style network is constructed, similar to other CNN networks, it needs to rely on a large amount of image data and on loss functions for training. In this paper, we use the MS COCO 2014 dataset, which contains 328,000 images divided into 91 categories. The update of the generation network parameters relies on the loss function and the stochastic gradient descent algorithm, and the specific training process is as follows: firstly, a noisy image $x$ is input into the generation network, and $x$ is convolved and deconvolved by the network to obtain an image of the same size as $x$. A content image $y_c$ from the MS COCO dataset and a specified art of combining digital and multimedia style image $y_s$ are used in the discriminative (loss) part; the calculation results are fed back to the generation network using the BP algorithm, and the parameters and weights of each layer of the generation network are updated to minimize the total loss. Since the dataset provided to the generation network for training is very large, the network has good generalization ability after training is completed and is able to apply the style of the art of combining digital and multimedia to any unknown picture. In this paper, a content loss function $\ell_{feat}^{\phi,j}\left(\hat{y}, y_c\right)$ and a style loss function $\ell_{style}^{\phi,j}\left(\hat{y}, y_s\right)$ are constructed for the picture content and the art of combining digital and multimedia style, respectively. The content loss function calculates the squared Euclidean distance between the responses of $\hat{y}$ and $y_c$ obtained at layer $j$ of the pretrained model, where $C_j H_j W_j$ is the size and dimensionality of the feature map at layer $j$:

(7) $\ell_{feat}^{\phi,j}\left(\hat{y}, y_c\right) = \dfrac{1}{C_j H_j W_j}\left\lVert \phi_j\left(\hat{y}\right) - \phi_j\left(y_c\right)\right\rVert_2^{2}$.

The extraction of image style relies on the Gram matrix. The Gram matrix $G_j^{\phi}\left(x\right)_{c,c'}$ calculates the inner product between the same-dimensional feature maps $\phi_j\left(x\right)_{h,w,c}$ and $\phi_j\left(x\right)_{h,w,c'}$ obtained at layer $j$. The covariance operation in the Gram matrix can be used to understand the covariance relationships between different feature-map responses in the same layer of the network and also to know which network nodes are activated, responsive, and cooperating at the same time. The style loss function is defined as the Frobenius norm of the difference between $G_j^{\phi}\left(\hat{y}\right)$ and $G_j^{\phi}\left(y_s\right)$:

(8) $G_j^{\phi}\left(x\right)_{c,c'} = \dfrac{1}{C_j H_j W_j}\sum_{h=1}^{H_j}\sum_{w=1}^{W_j}\phi_j\left(x\right)_{h,w,c}\,\phi_j\left(x\right)_{h,w,c'}, \qquad \ell_{style}^{\phi,j}\left(\hat{y}, y_s\right) = \dfrac{1}{C_j H_j W_j}\left\lVert G_j^{\phi}\left(\hat{y}\right) - G_j^{\phi}\left(y_s\right)\right\rVert_F^{2}$.

In addition, this paper uses the total variation regularization $\lambda_{TV}\,\ell_{TV}\left(\hat{y}\right)$ to ensure the smoothness of the final generated images. The weighted sum of the content loss function $\lambda_c\,\ell_{feat}^{\phi,j}\left(\hat{y}, y\right)$, the style loss function $\lambda_s\,\ell_{style}^{\phi,j}\left(\hat{y}, y\right)$, and the total variation regularization $\ell_{TV}\left(\hat{y}\right)$ yields the final global loss function:

(9) $\mathcal{L}_{total} = \arg\min_{y}\left[\lambda_c\,\ell_{feat}^{\phi,j}\left(\hat{y}, y\right) + \lambda_s\,\ell_{style}^{\phi,j}\left(\hat{y}, y\right) + \lambda_{TV}\,\ell_{TV}\left(\hat{y}\right)\right]$.

The training goal of the generation network is to minimize the global loss function; this value is propagated back with the BP algorithm to the neurons of the generation network, which use the gradient descent algorithm to update the parameter weights. The whole training process of the generation network can also be regarded as using the feature extraction capability of the VGG-19 model parameters to help the generation network obtain the corresponding style simulation parameters on the basis of the existing VGG-19 pretrained model. The trained generation network has strong generalization ability and can map the art of combining digital and multimedia style onto any unknown image. Since the parameters of the generative network are mature enough, the next image that requires style mapping does not need the loss function to be calculated; a forward propagation operation can be performed directly in the generative network, so the generative network is very fast in style mapping.
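To make the training objective of formulas (7)–(9) concrete, the sketch below combines the feature (content) loss, the Gram-based style loss, and the total-variation term into one scalar, evaluated on precomputed responses $\phi_j(\cdot)$. The $\lambda$ weights, shapes, and helper names are placeholders, and the actual optimization of the generation network over MS COCO is not shown.

```python
import numpy as np

def feat_loss(phi_yhat, phi_yc):
    """Formula (7): squared Euclidean distance normalized by C_j * H_j * W_j."""
    return np.sum((phi_yhat - phi_yc) ** 2) / phi_yhat.size

def gram(phi):
    """Formula (8), first part: Gram matrix over spatial positions,
    for phi of shape (H_j, W_j, C_j)."""
    H, W, C = phi.shape
    feats = phi.reshape(H * W, C)
    return feats.T @ feats / (C * H * W)

def style_loss(phi_yhat, phi_ys):
    """Formula (8), second part: squared Frobenius norm of the Gram difference
    (including the 1/(C_j H_j W_j) factor used in this paper's formulation)."""
    return np.sum((gram(phi_yhat) - gram(phi_ys)) ** 2) / phi_yhat.size

def tv_loss(y):
    """Total-variation regularizer on the generated image y of shape (H, W, 3)."""
    return np.sum(np.abs(np.diff(y, axis=0))) + np.sum(np.abs(np.diff(y, axis=1)))

def total_loss(phi_yhat, phi_yc, phi_ys, y, lam_c=1.0, lam_s=5.0, lam_tv=1e-4):
    """Formula (9): weighted sum minimized during training (weights illustrative)."""
    return (lam_c * feat_loss(phi_yhat, phi_yc)
            + lam_s * style_loss(phi_yhat, phi_ys)
            + lam_tv * tv_loss(y))
```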
## 4. Experiments and Results

### 4.1. Experimental Setup

In this paper, we trained and used the art of combining digital and multimedia style network models on a computer with Ubuntu 16.04, a GTX1070 graphics card, and 8 GB of RAM, with TensorFlow as the experimental framework, and the models were continuously optimized. This chapter focuses on showing the images generated by different art of combining digital and multimedia style models for stylized mapping. The style network models include models of the slate (board) category, with rough textures and bright colors, and models of the paper category, with delicate brush strokes and a bias toward realism, and the content images cover five categories: people, architecture, landscape, objects, and animals.
These perspectives were used to judge the degree of similarity between the images generated by the art of combining digital and multimedia style network and real works of the art of combining digital and multimedia. A total of 25 style network models were trained for comparative analysis against many different categories of real works. The real works can be broadly classified into board-type works and paper-type works according to the drawing method and visual effect. Before training the network models, different training parameters were set based on the different styles of the real works. One of the main parameters is the ratio of content weights to style weights in the loss function, i.e., the ratio of the weight of the content loss function to the weight of the style loss function, α/β; the higher the ratio, the less stylized the generated images are, and vice versa. For the board-type network model, the value of α/β should be set larger in order to retain, as much as possible, the roughness of lines and the visual effect of interwoven colors in real board-type works, and 0.15 is chosen as the final parameter in this paper. For the paper-type network model, the value of α/β should be smaller, and 0.3 is chosen as the final parameter. The loss convergence curve and performance improvement during training are shown in Figures 5 and 6.

Figure 5: Training process loss convergence curve.

Figure 6: Training process performance improvement diagram.

### 4.2. Experimental Results

Using the network model for experiments on the original data and on the data after data enhancement, the results are shown in Table 1; it can be seen that the classification accuracy after data enhancement is improved by 9.21%.

Table 1: Classification results before and after data enhancement.

| Image data processing | Accuracy rate (%) |
| --- | --- |
| Without data enhancement | 77.34 |
| With data enhancement | 86.55 |

Comparison of the results of different classification methods: comparison with traditional network models. In order to verify the effectiveness of this paper’s network model for classifying images of the art of combining digital and multimedia, the data of this paper were input into the traditional network models and this paper’s network model for training and verification, and the results are shown in Figures 7 and 8.

Figure 7: Comparison of classification time of different network models.

Figure 8: Comparison of the classification results of different network models.

From Figures 7 and 8, compared with the existing network models, the network model in this paper is lightweight, has a short training time, and has the best accuracy rate. However, the proposed network module contains parallel convolution operations, which makes the parameter count of the network model in this paper higher than those of ShuffleNet, MobileNetV1, and MobileNetV2. “The method + SE” and “the method + SK” are network models in which the proposed modules are replaced with SE and SK modules, with the descent rate r in those modules taken as 16. The experimental results show that the accuracy of the network model in this paper is 0.86% and 1.25% higher than that of the method + SE and the method + SK, respectively.
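Since the proposed module is compared here against SE and SK blocks and is controlled by the descent (reduction) rate r, the following sketch is offered purely as a hedged point of reference: it shows how an SE-style channel reweighting with reduction ratio r operates on a feature map, using plain NumPy. It is not the paper’s module, and all names and shapes are illustrative.

```python
import numpy as np

def se_reweight(x, w1, w2):
    """SE-style channel attention on a feature map x of shape (H, W, C).

    w1: (C, C // r) and w2: (C // r, C) are the two fully connected layers;
    r is the reduction ('descent') rate, e.g. 4 or 16 as in the experiments.
    """
    z = x.mean(axis=(0, 1))                  # squeeze: global average pooling -> (C,)
    s = np.maximum(z @ w1, 0.0)              # excitation: first FC + ReLU -> (C // r,)
    s = 1.0 / (1.0 + np.exp(-(s @ w2)))      # second FC + sigmoid -> (C,)
    return x * s                             # rescale each channel

# Illustrative shapes: C = 16 channels, reduction rate r = 4.
rng = np.random.default_rng(0)
C, r = 16, 4
x = rng.standard_normal((8, 8, C))
w1 = rng.standard_normal((C, C // r)) * 0.1
w2 = rng.standard_normal((C // r, C)) * 0.1
y = se_reweight(x, w1, w2)
```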
When the r value of the proposed module is taken as 4, the classification accuracy is higher than when r is taken as 16. Comparing with the traditional methods, it can be seen from Table 2 that the traditional manually extracted underlying overall features based on color, shape, etc., together with local features, cannot fully distinguish the style features of the various types of art images, while the method in this paper can better extract the overall features and the local detail features of art images and improve the classification accuracy.

Table 2: Comparison of the results of the method in this paper with the traditional methods.

| Methods | Accuracy rate (%) |
| --- | --- |
| Comparison method 1 | 66.78 |
| Comparison method 2 | 60.2 |
| Method of this paper | 86.55 |

Module placement: to see the effect of the proposed module on the classification of art images at different locations of the network model, it is placed separately at the locations of the network model numbered 6, 9, and 12 in this paper, with 3 × 3 and 5 × 5 convolutional kernels on the branches and the descent rate r taken as 4. The results are shown in Figures 9 and 10.

Figure 9: Accuracy of the module in different positions.

Figure 10: Operation volume of the module in different positions.

From Figures 9 and 10, the proposed module achieves the highest classification accuracy and the least computational consumption for art images when it is placed at network position number 6 alone.

The drop rate r and the branch convolution kernel size are an important set of parameters in the proposed module for controlling computational resources and experimental accuracy. The proposed module is placed individually in the network model at position number 6, and the r value and the branch convolution kernel size are varied and analyzed; the results are shown in Table 3. It can be seen that, when the size of the convolution kernel on the branch of the proposed module is fixed, the classification results are higher when the r value is taken as 4 than when it is taken as 16; when the r value is fixed, the proposed module with 2 branches takes less time and fewer parameters to train than with 3 branches; and when the convolution kernels on the branches of the proposed module are taken as 1 × 1 and 5 × 5, respectively, the experimental results have the highest accuracy rate.

Table 3: Comparison of drop rate and convolution kernel size classification results.

| Decline rate | 1 × 1 | 3 × 3 | 5 × 5 | Parameter (M) | Time (min) | Accuracy rate (%) |
| --- | --- | --- | --- | --- | --- | --- |
| R = 4 | ✓ | ✓ |  | 2.3 | 1367 | 87.26 |
| R = 4 | ✓ |  | ✓ | 2.6 | 1680 | 87.58 |
| R = 4 |  | ✓ | ✓ | 2.7 | 1800 | 87.24 |
| R = 4 | ✓ | ✓ | ✓ | 2.7 | 2370 | 87.35 |
| R = 16 | ✓ | ✓ |  | 2.3 | 1230 | 86.35 |
| R = 16 | ✓ |  | ✓ | 2.6 | 1220 | 86.85 |
| R = 16 |  | ✓ | ✓ | 2.7 | 1440 | 86.15 |
| R = 16 | ✓ | ✓ | ✓ | 2.7 | 1770 | 86.42 |

Null (dilated) convolution: in order to compare the effect of dilated convolution kernels on art image feature extraction, dilated convolution kernels of different sizes are used in the proposed module for experiments on the data of this paper, and the results are shown in Table 4, where K3 denotes an ordinary 3 × 3 convolution kernel, K5 denotes a 3 × 3 convolution kernel with a dilation (expansion) rate of 2 and a receptive field of 5 × 5, and K7 denotes a 3 × 3 convolution kernel with an expansion rate of 3 and a receptive field of 7 × 7.

Table 4: Classification results when the module branches use different dilated convolution kernels.
| Decline rate | K = 3 | K = 5 | K = 7 | Parameter (M) | Time (min) | Accuracy rate (%) |
| --- | --- | --- | --- | --- | --- | --- |
| R = 4 | ✓ | ✓ |  | 2.4 | 1530 | 86.07 |
| R = 4 | ✓ |  | ✓ | 2.4 | 1657 | 86.00 |
| R = 4 |  | ✓ | ✓ | 2.4 | 1560 | 84.50 |
| R = 4 | ✓ | ✓ | ✓ | 2.6 | 2220 | 86.40 |

The experimental results show that a dilated convolution has fewer parameters than an ordinary convolution with the same receptive field, but its classification accuracy is not as high as that of the ordinary convolution. Here, K5 again denotes the 3 × 3 convolutional kernel with a dilation rate of 2 and a 5 × 5 receptive field, and K7 the 3 × 3 convolutional kernel with a dilation rate of 3 and a 7 × 7 receptive field.
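Following the K3/K5/K7 definitions above, the effective receptive field of a dilated (“null”) 3 × 3 kernel can be checked with the standard relation $k_{eff} = k + (k - 1)(d - 1)$; a minimal sketch follows (single layer, square kernel, no stride assumed).

```python
def effective_receptive_field(kernel_size: int, dilation: int) -> int:
    """Effective receptive field of a single dilated convolution layer."""
    return kernel_size + (kernel_size - 1) * (dilation - 1)

# K3: plain 3x3; K5: 3x3 with dilation 2; K7: 3x3 with dilation 3.
for name, d in [("K3", 1), ("K5", 2), ("K7", 3)]:
    print(name, effective_receptive_field(3, d))   # -> 3, 5, 7
```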
## 5. Conclusion

Along with the accelerated development of artificial intelligence technology, designers in the traditional design field will gradually be displaced by intelligent machines if they do not assess the situation, transform, and improve themselves in time. However, not all work in the field of art and design can be replaced, especially the creative design concepts and contents that no high technology or machine can produce. As professionals in the field of art and design, we should think more about the intangible value in art and design; the core competitiveness of design talent lies in a constant source of creativity and inventiveness. The powerful computing power of artificial intelligence can assist designers in using historical big data to explore human needs comprehensively, help people solve practical problems through design, and derive design thinking from data. At the same time, global information is currently extremely rich and diverse, and this information drives people to accept new products, new technologies, and a new digital life in a passive mode. In-depth exploration of the aesthetics and wisdom of digital media design requires art designers to look at the changing times with their own understanding and to feel the differences in the life of the times. This paper mainly proposes a method based on the convolutional neural network to better achieve feature extraction and classification of art images.
This paper addresses the insufficient research on the classification of existing multi-class art images and achieves better art image classification results than existing network models and traditional classification methods. In subsequent work, we will optimize the art image classification network model, expand the art image sample library, and further improve the accuracy of art image classification and the efficiency of network model classification.

---

*Source: 1004204-2022-08-08.xml*
2022
# Three-Dimensional Image Analysis and Training Method of Action Characteristics of Sanda Whip Leg Technique Based on Wavelet Transform

**Authors:** Lei Song
**Journal:** Advances in Multimedia (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1004568

---

## Abstract

The whip leg technique is an indispensable offensive action for high-level Sanda athletes in competition. In order to improve the guidance of the Wushu Sanda whip leg action, this study carries out a three-dimensional image analysis of the technical characteristics of the Sanda whip leg based on wavelet transform, exploring the movement speed of each link of the attacking leg and the contraction form and force sequence of the major muscles or muscle groups, and putting forward correct and reasonable movement techniques, which is expected to provide a theoretical basis and reference for coaches and athletes in the training of the Sanda whip leg movement. On this basis, this paper attempts to use a three-dimensional image analysis system and a surface electromyography acquisition system, from the perspective of wavelet transform, to collect the speed of the attacking leg movement and the parameters related to muscle discharge, and to quantitatively analyze and evaluate the movements. The sequence of activity in the whip leg strike is hip joint-knee joint-hip joint-knee joint-ankle joint. It is suggested to strictly follow the principle of link movement in practice, and the whole movement should be completed in a relaxed and natural way.

---

## Body

## 1. Introduction

In the offensive actions of Sanda, the “whip leg” is one of the most important means of scoring [1]. It is widely used in competitions and has great power. Hitting the opponent can not only directly score points but also ingeniously inflict heavy damage, causing the opponent to be “forced to count the seconds” and even directly yielding an advantage [2]. Relevant statistics show that among the offensive actions of each game (excluding other defensive counterattack tactics), the use rate of the whip leg is about 30% [3]. It can be seen that whether an athlete can complete a high-quality whip leg action in a game directly affects the outcome of the game [4]. According to the technical statistics of the 216 men’s bouts in the 2010 National Sanda Championships, the use of the whip leg accounted for 78.25% of all leg techniques and 39.51% of the entire Sanda technique [5]. The use rate of leg techniques was significantly higher than that of any other movements. However, the analysis found that the scoring rate and the usage rate are not proportional, which shows that there are still problems that need to be addressed in the teaching and training of the whip leg [6].

The Sanda whip leg is one of the most powerful leg techniques. It not only has the characteristics of a flexion-extension leg technique but also has the function of a sweeping and turning leg technique, with large attack force and fast recovery speed. The kick of the whip leg turns the body, twists the waist, and turns the hip with the support leg as the axis; the hip drives the thigh, the thigh drives the calf, and the corresponding links are accelerated in turn as the power is transmitted. Finally, when the instep reaches its maximum speed, it snaps out and strikes the target.
This rotational force generation method “consciously uses the inertial motion of the large-mass part generated by the imbalance to produce force.” In addition, the power generated by twisting the waist and turning the hip can greatly exploit the potential of the human body and produce an ideal attack effect. The three-dimensional image of the Sanda whip leg is a kind of signal, and time and frequency are the two basic physical quantities used to describe and analyze a signal [7]. As one of the methods in the field of digital signal analysis and processing, the time-frequency joint analysis method can reflect the trend of signal frequency components changing with time and overcome the shortcomings of the Fourier transform in nonstationary signal processing [8]. Analyzing signal characteristics jointly in the time and frequency domains allows the signal to be analyzed more comprehensively, and such analysis has become an indispensable mathematical tool in modern signal analysis [9]. The wavelet transform can continuously adjust the time width and bandwidth of the analysis as the frequency changes [10]. As the frequency increases, the corresponding time width decreases and the bandwidth increases, which is in line with the expectations of high- and low-frequency signal analysis [11]. However, since the time width and bandwidth are only qualitative descriptions, the wavelet transform cannot quantitatively analyze the results; it can only display the overall change trend of the analyzed signal and show the signal’s variation in the time-frequency domain, thereby effectively reflecting the overall characteristics of the signal [12]. When performing time-frequency analysis on the signal through the wavelet transform, the whole picture of the signal can be analyzed, and the effective or interesting time-frequency interval of the signal can be obtained. Using the digital down-conversion (DDC) technique, the frequency band of interest can be moved to a low frequency; decimation filtering reduces data redundancy and filters out the signal outside the band of interest, which changes the relationship between the signal frequency and the sampling rate after down-conversion. The time width and bandwidth are thereby changed, and the details of the signal in a specific frequency band can be analyzed, so the wavelet transform can effectively analyze the trend of the signal change [13].

## 2. State of the Art

Sanda, as an important part of Chinese martial arts, has been piloted since 1979. In recent years, with the standardization of Wushu work and the introduction of new rules for Wushu Sanda, “standards” and “norms” have become signs of the maturity of the Sanda movement [14]. Although it still suffers from the controversy over the “loss of traditional characteristics” and the unbalanced development of its technical structure, the pace of its rapid development has become an indisputable fact [15]. At present, the three technical structures of punching, kicking, and throwing have developed in a balanced manner, and this has become the main theme of the development of the discipline [16].
Since the leg technique has become the main scoring method in competition, it plays a key role in the outcome of the game [17], especially the whip leg technique, which is characterized by a small range of motion, strong suddenness, a wide striking range, strong kicking power [18], being easy to attack with and difficult to defend against, a wide range of striking parts, good cohesion when combined with other techniques, and strong movement coherence. Such high-level technical movements are the key movements that determine the outcome of the game. However, in competition, athletes of different sport levels perform differently in the use of the whip leg technique and in the scoring rate. The sport level not only affects the application effect of the whip leg technique but has also become the main factor inducing the risk of acute sports injury to the lower extremity joints [19]. Some studies have shown that the start-up time of Wuying-level Sanda athletes is faster than that of first-level athletes [20]. The biological indicators of athletes are better than those of the general population, and the isokinetic muscle strength and stability of the bilateral lower extremity joints of elite-level athletes are significantly better than those of first-level athletes.

Through the scale transform and translation transform of the basic wavelet, the wavelet transform can realize the localization of frequency and time. As the frequency increases, the time resolution improves and the frequency-domain resolution decreases; conversely, as the frequency decreases, the time resolution decreases and the frequency-domain resolution increases, which automatically adapts to the requirements of time-frequency analysis.

As a powerful signal analysis algorithm, the wavelet transform is widely used and has broad practical significance, spanning many disciplines, including mathematics, signal analysis and image processing, quantum mechanics and theoretical physics, military electronic countermeasures, and weapons intelligence. STFT, WVD, and the wavelet transform all provide strong theoretical support for time-frequency joint analysis and lay the foundation for the hardware implementation of time-frequency analysis functions.

## 3. Research Methods

### 3.1. Time-Frequency Analysis of Wavelet Transform

Wavelets can track time and frequency information. They can “look close” at short-time pulses or “look far” to detect long, slow waves. Two functions play a very important role in wavelet analysis, namely, the scale function and the wavelet function. These two functions generate a set of function families that can be used to decompose and reconstruct signals. The time-frequency analysis method can process and analyze the signal in the time and frequency dimensions at the same time, avoiding the limitations of pure time-domain or frequency-domain analysis. The emergence of the time-frequency analysis method solves the problem that the Fourier transform can only analyze the signal from the frequency domain and provides a new method for the field of signal analysis. STFT and the wavelet transform are two typical time-frequency analysis methods, both of which are implemented in the acquisition system. The two methods have their own advantages and disadvantages, and different time-frequency analysis methods can be selected under different requirements.
This paper mainly introduces the time-frequency realization based on the wavelet transform; the content of STFT is only briefly introduced. The definition formula of STFT is as follows:

(1) $STFT_x\left(t, \Omega\right) = \int x\left(\tau\right) g^{*}\left(\tau - t\right) e^{-j\Omega\tau}\, d\tau = \left\langle x\left(\tau\right),\ g\left(\tau - t\right)e^{j\Omega\tau}\right\rangle$.

Here, $x\left(\tau\right)$ is the analyzed signal and $g\left(\tau\right)$ is the window function. The basic idea of the STFT algorithm is to use the local characteristics of the window function $g\left(\tau\right)$; taking a rectangular window as an example, the signal to be analyzed is multiplied by the window function to achieve a local interception of the signal, and the Fourier transform is used to obtain the spectrum of the intercepted segment at the current time $t$. In order to obtain a complete time-frequency relationship, the center position $t$ of the window function is changed through constant translation, that is, the time center of the analysis window is changed, so that the corresponding spectra at different times can be determined, thus forming the time-frequency relationship between time $t$ and frequency $\Omega$.

The wavelet transform is widely used in digital signal processing and has important applications in filtering, denoising, compression, time-frequency analysis, and so on. The definition of the wavelet transform is

(2) $WT_x\left(a, b\right) = \dfrac{1}{\sqrt{a}}\int x\left(t\right)\psi^{*}\!\left(\dfrac{t-b}{a}\right) dt = \int x\left(t\right)\psi_{a,b}^{*}\left(t\right) dt = \left\langle x\left(t\right),\ \psi_{a,b}\left(t\right)\right\rangle$.

Here, $x\left(t\right)$ is the analyzed signal, $\psi\left(t\right)$ is the basic wavelet function, and $\psi_{a,b}\left(t\right)$ is the wavelet basis function, which is obtained by translating and scaling the basic wavelet function $\psi\left(t\right)$. Its frequency-domain expression is

(3) $WT_x\left(a, b\right) = \dfrac{\sqrt{a}}{2\pi}\int_{-\infty}^{+\infty} X\left(\Omega\right)\Psi^{*}\left(a\Omega\right)e^{j\Omega b}\, d\Omega$.

The “wavelet” in the wavelet transform indicates that the wavelet function $\psi\left(t\right)$ is oscillatory. As time changes, the waveform oscillates while being attenuated as a whole; it is a band-limited signal in the time domain, and its support is limited. At the same time, the wavelet function behaves as a band-pass filter in the frequency domain: it only allows the signal within the passband to pass and suppresses the frequency components of the signal outside the passband, and its frequency-domain support is also limited (Figure 1). Such a wavelet function $\psi\left(t\right)$ is called the basic wavelet or mother wavelet; it has limited support in both the time and frequency domains and realizes simultaneous localization in time and frequency. The mother wavelet $\psi\left(t\right)$ is stretched and shifted, and its expression is

(4) $\psi_{a,b}\left(t\right) = \dfrac{1}{\sqrt{a}}\psi\!\left(\dfrac{t-b}{a}\right)$.

Figure 1: Approximate extraction diagram of the frequency-domain approximate extraction algorithm.

It can be seen from formula (2) that not only $t$ but also $a$ and $b$ are continuous variables, so the above formula is called the continuous wavelet transform (CWT). Let the Fourier transform of $\psi\left(t\right)$ be $\Psi\left(\Omega\right)$. According to the properties of the Fourier transform, the Fourier transform of the wavelet basis function $\psi_{a,b}\left(t\right)$ is

(5) $\psi_{a,b}\left(t\right) = \dfrac{1}{\sqrt{a}}\psi\!\left(\dfrac{t-b}{a}\right) \ \xrightarrow{\ FT\ }\ \Psi_{a,b}\left(\Omega\right) = \sqrt{a}\,\Psi\left(a\Omega\right)e^{-j\Omega b}$.

The frequency-domain expression of the wavelet transform is

(6) $WT_x\left(a, b\right) = \dfrac{\sqrt{a}}{2\pi}\int_{-\infty}^{+\infty} X\left(\Omega\right)\Psi^{*}\left(a\Omega\right)e^{j\Omega b}\, d\Omega = \dfrac{\sqrt{a}}{2\pi}\left\langle X\left(\Omega\right),\ \Psi_{a,b}\left(\Omega\right)\right\rangle$.
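To make formulas (2) and (4) concrete, the sketch below evaluates a single wavelet coefficient at one scale $a$ and shift $b$ by direct numerical integration, using a real-valued Morlet-style mother wavelet. The wavelet choice, signal, and sampling rate are placeholders rather than values used in this study.

```python
import numpy as np

def morlet(t, w0=5.0):
    """A real-valued Morlet-style mother wavelet (placeholder choice)."""
    return np.cos(w0 * t) * np.exp(-t ** 2 / 2.0)

def cwt_point(x, t, a, b, psi=morlet):
    """Formula (2) evaluated numerically at one (a, b): WT_x(a, b).
    For a real-valued psi, the complex conjugate in formula (2) is psi itself."""
    dt = t[1] - t[0]
    psi_ab = psi((t - b) / a) / np.sqrt(a)   # formula (4): scaled and shifted wavelet
    return np.sum(x * psi_ab) * dt           # numerical integral of x(t) * psi_{a,b}(t)

# Placeholder signal: two tones sampled at fs = 200 Hz.
fs = 200.0
t = np.arange(0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
coeff = cwt_point(x, t, a=2.0, b=1.0)
```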
In the time-domain and frequency domain definitions of the wavelet transform, Eqs. (2) and (6), $a$, $b$, and $t$ are all continuous variables, but a computer cannot operate on continuous variables, so they must be discretized and converted into a computable form. If the scale factor $a$ is discretized as $a=2^{j}$, $j\in\mathbb{Z}$, the resulting wavelet is called a dyadic (binary) wavelet [19]. The scale factor of the dyadic wavelet jumps too coarsely to realize fine time-frequency analysis, so the continuous wavelet transform is still needed for the time-frequency analysis function. Starting from the time-domain definition of the wavelet transform, let

$$h(t)=\psi^{*}\!\left(-\frac{t}{a}\right). \tag{7}$$

Then

$$h(b-t)=\psi^{*}\!\left(\frac{t-b}{a}\right), \tag{8}$$

and the time-domain definition of the wavelet transform can be rewritten as

$$WT_x(a,b)=\frac{1}{\sqrt{a}}\int x(t)\,\psi^{*}\!\left(\frac{t-b}{a}\right)dt=\frac{1}{\sqrt{a}}\int x(t)\,h(b-t)\,dt=\frac{1}{\sqrt{a}}\,x(b)*h(b). \tag{9}$$

According to formula (9), the integral expression becomes the convolution expression commonly used in digital signal processing. At the same time, the continuous wavelet transform time-domain expression is discretized; let

$$WT_x(a,b)=\frac{1}{\sqrt{a}}\int x(t)\,\psi^{*}\!\left(\frac{t-b}{a}\right)dt=\sum_{k}\frac{1}{\sqrt{a}}\int_{k}^{k+1}x(t)\,\psi^{*}\!\left(\frac{t-b}{a}\right)dt. \tag{10}$$

Since the continuous variable $t$ lies in the interval $[k,k+1)$, on which $x(t)=x(k)$, the above formula can be rewritten as

$$WT_x(a,b)=\sum_{k}\frac{1}{\sqrt{a}}\,x(k)\int_{k}^{k+1}\psi^{*}\!\left(\frac{t-b}{a}\right)dt=\sum_{k}\frac{1}{\sqrt{a}}\,x(k)\left[\int_{-\infty}^{k+1}\psi^{*}\!\left(\frac{t-b}{a}\right)dt-\int_{-\infty}^{k}\psi^{*}\!\left(\frac{t-b}{a}\right)dt\right]. \tag{11}$$

In the same way, the frequency domain expression of the wavelet transform is discretized:

$$WT_x(a,k)=\frac{\sqrt{a}}{2\pi}\sum_{m=-M/2}^{M/2}X\!\left(\frac{2\pi m}{M}\right)\Psi^{*}\!\left(\frac{2\pi a m}{M}\right)e^{\,j2\pi km/M},\qquad k=0,1,\cdots,M-1. \tag{12}$$

At different scales, the center frequency $f_a$ of the corresponding wavelet basis function is related to the center frequency $f_c$ of the basic wavelet $\psi(t)$ by

$$f_a=\frac{f_c}{a}. \tag{13}$$

If the sampling rate of the analyzed signal $x(n)$ is $f_s$, the pseudofrequency $f_{eq}$ of the analyzed signal at scale $a$ is

$$f_{eq}=f_s\cdot f_a=\frac{f_s\cdot f_c}{a}, \tag{14}$$

so the relationship between scale and pseudofrequency is

$$a=\frac{f_s\cdot f_c}{f_{eq}}. \tag{15}$$

### 3.2. Visual Image Recognition of Wushu Sanda Whip Leg Action

Set the empirical scale function of the Wushu Sanda whip leg action, and use the gradient descent method to segment the regions of the whip leg action image so that the sparse feature values of the visual image satisfy the required conditions. According to the sparse prior representation result, the empirical scale function and the empirical wavelet function are established to calculate the segmentation function of the visual image of the whip leg action. Template matching of the visual images is based on an approximate sparse representation. Combined with the three-dimensional translation matrix feature decomposition method, feature segmentation and edge contour detection are performed on the visual image of the whip leg movement; at least four sets of two-dimensional images are constructed for visual reconstruction of the movement, and the local prior pixel set of the visual image is calculated. Morphological knowledge is then used to carry out the visual decomposition of the whip leg movement, a visual feature analysis and adaptive feature extraction model is established, and the visual feature extraction and recognition of the Wushu Sanda whip leg action image are realized from these results.
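Returning to the wavelet formulation of Section 3.1, the convolution form of Eq. (9) and the scale-to-pseudofrequency relations of Eqs. (13)–(15) can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the complex Morlet parameters, the chosen scales, and the test signal are assumptions made only for illustration.

```python
import numpy as np

def morlet(t, fc=1.0, fb=1.0):
    """Complex Morlet mother wavelet with center frequency fc and bandwidth factor fb."""
    return (1.0 / np.sqrt(np.pi * fb)) * np.exp(2j * np.pi * fc * t) * np.exp(-t**2 / fb)

def cwt(x, scales, fc=1.0, fb=1.0):
    """CWT via the convolution form of Eq. (9), with scale a measured in samples.

    Returns an array of shape (len(scales), len(x)); row i holds WT_x(a_i, b).
    """
    coeffs = np.empty((len(scales), len(x)), dtype=complex)
    for i, a in enumerate(scales):
        n = np.arange(-int(4 * a), int(4 * a) + 1)     # support of the scaled wavelet (samples)
        h = np.conj(morlet(-n / a, fc, fb))            # h(n) = psi*(-n/a), Eq. (7)
        coeffs[i] = (1.0 / np.sqrt(a)) * np.convolve(x, h, mode="same")
    return coeffs

# Scale <-> pseudofrequency, Eqs. (13)-(15); fc is in cycles per unit of the wavelet argument.
def scale_to_freq(a, fs, fc=1.0):
    return fs * fc / a            # Eq. (14)

def freq_to_scale(feq, fs, fc=1.0):
    return fs * fc / feq          # Eq. (15)

fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 50 * t)
scales = freq_to_scale(np.array([25.0, 50.0, 100.0]), fs)  # scales targeting 25, 50, 100 Hz
W = cwt(x, scales)
print(np.abs(W).mean(axis=1))     # the 50 Hz row (index 1) should dominate
```

The same scale-to-frequency conversion is what turns a set of analysis scales into the pseudofrequencies at which the time-frequency plot is labeled.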
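For the recognition pipeline outlined in Section 3.2, the template-matching step can be illustrated in a greatly simplified form by scoring candidate image regions with normalized cross-correlation. This stand-in deliberately ignores the sparse-representation machinery described above, and the frame, template, and image sizes below are synthetic placeholders, not data from the study.

```python
import numpy as np

def normalized_cross_correlation(image, template):
    """Slide `template` over `image` and return the NCC score at each position.

    A simplified matching criterion standing in for the sparse-representation
    template matching described in Section 3.2.
    """
    th, tw = template.shape
    ih, iw = image.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    scores = np.full((ih - th + 1, iw - tw + 1), -1.0)
    for y in range(scores.shape[0]):
        for x in range(scores.shape[1]):
            patch = image[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * t_norm
            if denom > 0:
                scores[y, x] = (p * t).sum() / denom
    return scores

# Illustrative 120x120 frame with a synthetic bright blob standing in for the
# whip-leg region of interest; the template is cut from the frame itself.
rng = np.random.default_rng(0)
frame = rng.normal(0.0, 1.0, (120, 120))
frame[60:84, 40:64] += 5.0
template = frame[60:84, 40:64].copy()

scores = normalized_cross_correlation(frame, template)
print(np.unravel_index(scores.argmax(), scores.shape))   # (60, 40): the blob location
```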
## 4. Results and Analysis

### 4.1. Time-Frequency Domain Wavelet Spectral Refinement

For a basic wavelet expressed in the time domain, the wavelet function can first be discretized, and its frequency domain result can then be calculated with the DFT. The time-domain waveform is shown in Figure 2.

Figure 2: The scaling of the basic wavelet and the influence of parameters a and b on the time-frequency domain.

Because of the band-pass nature of the wavelet spectrum, most of the DFT values are close to zero, and only the values in the band-pass part have practical significance. First, a threshold is selected for the band-pass part; this threshold determines the frequency range of the band-pass portion of the wavelet transform. The larger the threshold, the narrower the selected band-pass range and the lower the calculation accuracy. A large number of experiments show that setting the threshold to 1% of the maximum absolute value of the wavelet spectrum preserves the information in the band-pass part while keeping the accuracy of the wavelet transform result good. Figure 3 shows the spectrum of the complex Morlet wavelet (both the bandwidth factor and the center frequency factor are 3), in which the curve is the spectrum obtained by applying the FFT to the time-domain sampled wavelet and the two marked points are the band-pass boundaries selected with the 1% threshold.

Figure 3: Wavelet basis function FFT and band-pass limit diagram.

Following these steps, the band-pass part of the complex Morlet wavelet spectrum (bandwidth factor and center frequency factor both 3) is refined, as shown in Figure 4, the refinement diagram for the time-domain wavelet expression: the hollow circles are the FFT of the wavelet basis function, the hexagonal star points are the left and right band-pass boundaries corresponding to the threshold, and the curve is the refined result. The refined spectrum is smoother and its frequency resolution is better. For a wavelet whose frequency domain expression is known, that expression can be discretized directly, and the left and right boundaries of its band-pass part are then selected according to the chosen threshold in the same way as in the time domain; only the positive half of the frequency axis needs to be examined. Once the band-pass frequency boundaries are obtained, the part within the boundaries is refined to the target number of N points, giving the refined band-pass spectrum. As the basic theory for analyzing and designing digital control systems, discrete system theory has developed rapidly; time-domain analysis is difficult to apply to high-order systems, whereas frequency domain analysis, which applies mainly to linear time-invariant systems, is a practical, widely used engineering method for analyzing and designing control systems.
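The 1% threshold rule described above can be sketched as follows: compute the wavelet's spectrum with the FFT, keep only the bins whose magnitude exceeds 1% of the maximum, and take the first and last such bins on the positive frequency axis as the band-pass boundaries. This is a minimal NumPy illustration; the sampling grid is an assumption, and only the bandwidth and center frequency factors (both 3) are taken from the text.

```python
import numpy as np

def complex_morlet(t, fb=3.0, fc=3.0):
    """Complex Morlet wavelet with bandwidth factor fb and center frequency factor fc."""
    return (1.0 / np.sqrt(np.pi * fb)) * np.exp(2j * np.pi * fc * t) * np.exp(-t**2 / fb)

def bandpass_limits(psi_samples, threshold_ratio=0.01):
    """Return (low_bin, high_bin) of the spectral region above the threshold.

    The threshold is threshold_ratio times the maximum absolute value of the
    wavelet spectrum, as described in Section 4.1.
    """
    spectrum = np.abs(np.fft.fft(psi_samples))
    threshold = threshold_ratio * spectrum.max()
    half = spectrum[: len(spectrum) // 2]        # only the positive half-axis is examined
    above = np.nonzero(half > threshold)[0]
    return above[0], above[-1]

# Sample the wavelet on an assumed grid and locate its band-pass region.
fs = 64.0                                        # samples per unit time (illustrative)
t = np.arange(-8.0, 8.0, 1.0 / fs)
psi = complex_morlet(t)
lo, hi = bandpass_limits(psi)
freqs = np.fft.fftfreq(len(t), d=1.0 / fs)
print(f"band-pass region: {freqs[lo]:.2f} to {freqs[hi]:.2f} (center frequency 3)")
```

Only the bins inside this band-pass region need to be refined further (for example with the CZT, as in Figure 4), which is what makes the refinement computationally cheap.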
Figure 5 shows the complex Morlet wavelet (bandwidth factor and center frequency factor both 3): the hollow circles are the direct discretization of the frequency domain expression, the hexagonal star points mark the two band-pass boundaries selected with the 1% threshold, and the curve is the dense discretization obtained with the target number of points within the band-pass range; panel (a) gives an overview of the spectral refinement and panel (b) a detailed view.

Figure 4: Refinement of the time-domain wavelet spectrum by the CZT algorithm (panels (a) and (b)).

Figure 5: Wavelet band-pass partial spectrogram from the frequency domain expression (panels (a) and (b)).

### 4.2. The Characteristics of Kinematics and EMG Synchronization in the T1 (Preparation End) and T2 (Knee Bending and Leg Raising) Stages

Sample data were selected according to the parameter settings above. The image data used in the experiment come from Baidu Gallery and comprise a total of 10,000 visual pictures of Wushu Sanda whip leg movements. The height and weight of the Sanda athletes were measured and the athletes were numbered; each athlete then whipped, in place and three times, a fixed sandbag equipped with the sensor of the event-group fight test instrument. The visual images of the Sanda whip leg action were compared to remove unqualified images, 1,000 images were selected as the training set for this experiment, and the image format in the database was set to 400×400. The 1,000 images were randomly divided into 5 groups of test data.

In the T1 (preparation end) stage, the knee angle of the attacking leg is 144±14°, keeping the right knee flexed at a small angle, which helps the athlete keep the strong leg behind and move the feet quickly to deliver a powerful blow. In the T2 stage (knee bending and leg raising), after the right leg pushes off the ground, the body tilts slightly, the right leg is raised to the front with the knee pointing forward, the thigh and trunk form an angle of about 90°, the calf hangs naturally, the ankle joint is relaxed, and at the end of this phase the knee is at chest height. In the T2 stage, the hip joint velocity was 2.75±0.54 m/s, the knee joint velocity was 6.12±1.42 m/s, and the ankle joint velocity was 6.93±1.56 m/s; the hip angle was 41±9°, the knee angle was 67±19°, and the ankle angle was 122±11°. The EMG recordings indicate that completion of the knee-bending and leg-raising action mainly involves the following muscles, whose contributions can be ranked by the size of the iEMG: rectus femoris > biceps femoris > sartorius > gastrocnemius > tensor fascia lata. Combined with an analysis of how the corresponding muscles work, the knee flexion and leg raising stage mainly consists of bending and raising the attacking leg (near fixation): the iliopsoas, rectus femoris, sartorius, tensor fascia lata, and pectineus contract rapidly to flex the hip (thigh flexion), while the biceps femoris, gastrocnemius, semitendinosus, semimembranosus, and gracilis contract actively to flex the knee (calf flexion). The action shows accelerated contraction, rapid braking, and one-step execution, and the muscle contraction is characterized by active knee flexion and fast contraction.
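The iEMG values used to rank the muscles above are, in general, obtained by integrating the rectified EMG signal over the movement phase of interest. The following is a minimal NumPy sketch of that computation; the sampling rate, phase boundaries, and synthetic signals are assumptions for illustration, not the study's acquisition settings or measurements.

```python
import numpy as np

def iemg(emg, fs, t_start, t_end):
    """Integrated EMG over [t_start, t_end] seconds: area under the rectified signal."""
    i0, i1 = int(t_start * fs), int(t_end * fs)
    rectified = np.abs(emg[i0:i1])           # full-wave rectification
    return rectified.sum() / fs              # rectangular-rule integration

# Synthetic example: two channels of zero-mean "EMG" with different amplitudes.
rng = np.random.default_rng(1)
fs = 1000.0                                  # assumed sampling rate (Hz)
n = int(0.5 * fs)                            # a 0.5 s movement phase
channels = {
    "rectus femoris": 0.8 * rng.standard_normal(n),
    "gastrocnemius": 0.3 * rng.standard_normal(n),
}
for name, signal in channels.items():
    print(name, round(iemg(signal, fs, 0.0, 0.5), 4))
```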
The rapid completion and braking of the knee-bending and leg-raising action are directly related to the degree of relaxation of the calf and ankle joint. According to the analysis, when the calf hangs naturally and the ankle joint is relaxed, the knee joint angle can be reduced quickly during knee flexion, which is conducive to rapid completion of the action. The rapid reduction of the knee joint angle shortens the radius of rotation of the attacking leg, the moment of inertia becomes smaller, rotation becomes easier, and the rotation is completed quickly.

### 4.3. The Characteristics of Kinematics and EMG Synchronization in the T3 Stage (Hip Straightening and Knee Buckling)

Stage T3 (hip straightening and knee buckling) is completed on the basis of bending the knee and raising the leg: the hip joint of the attacking leg is internally rotated, the knee joint is turned to the left side (internal buckle), the lower leg is folded and parallel to the ground, and the instep is aligned with the attacking direction. At T3, the hip joint velocity was 1.56±0.37 m/s, the knee joint velocity was 5.17±1.23 m/s, and the ankle joint velocity was 7.22±1.11 m/s; the hip angle was 51±10°, the knee angle was 95±15°, and the ankle angle was 139±19°. In the T3 stage, the contraction and discharge of the muscles of the attacking leg are mainly manifested in the tensor fascia lata (TFL), whose iEMG changes greatly. This shows that the action of straightening the hip and buckling the knee is mainly completed by the tensor fascia lata of the attacking leg, with other muscles playing an auxiliary role. The corresponding muscle work is mainly to straighten the hip and buckle the knee (near fixation). From a technical point of view, straightening the hips makes the knee buckle downward; buckling the knee acts as a brake during the swing and brings a series of favorable changes. According to the principle of kinetic energy transmission, thigh braking allows energy to be transmitted from the thigh to the calf; because the mass of the calf is much smaller than that of the thigh, its speed increases correspondingly, which raises the whipping speed of the calf. In addition, the buckled knee changes the direction of the calf, increasing the effect of the horizontal strike.

### 4.4. Characteristics of Kinematics and EMG Synchronization in the T4 (Strike, Contact with Sandbag) and T5 (Return) Stages

After the explosive extension, the knee is extended and the ankle is actively flexed to hit the target quickly and powerfully. In movement, when the end of the link chain is expected to generate maximum speed and force, the movement of the limb typically proceeds from the proximal link to the distal link: the segments are accelerated and braked in turn, and the speed of the segments increases sequentially from the proximal end to the distal end.
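The joint velocities reported in these stages are typically obtained by numerically differentiating the three-dimensional joint trajectories reconstructed by the image analysis system. The sketch below shows one common way to do this with a central difference; the 3D coordinate array, frame rate, and synthetic trajectory are assumptions for illustration, not the exact processing used in the study.

```python
import numpy as np

def joint_speed(positions, fps):
    """Speed (m/s) of a joint from its 3D trajectory.

    positions: array of shape (n_frames, 3) with x, y, z in meters.
    np.gradient uses central differences inside the array and one-sided
    differences at the ends.
    """
    velocity = np.gradient(positions, 1.0 / fps, axis=0)   # (n_frames, 3) in m/s
    return np.linalg.norm(velocity, axis=1)

# Illustrative trajectory: an ankle marker sweeping an arc during the whip.
fps = 100.0
t = np.arange(0, 0.4, 1.0 / fps)
ankle = np.stack([0.8 * np.sin(2 * np.pi * 1.2 * t),
                  0.8 * (1 - np.cos(2 * np.pi * 1.2 * t)),
                  0.05 * t], axis=1)

speed = joint_speed(ankle, fps)
print(f"peak ankle speed: {speed.max():.2f} m/s")
```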
The large joints exert force first to overcome the inertia of the links and limbs and to initiate their movement; the secondary joints exert force next to further accelerate the links and limbs; the small joints exert force last to control the direction and range of motion. Therefore, the whip leg must strictly follow this law; otherwise, the range, speed, and strength of the movement will be limited, and the quality of the movement will decrease or the movement will fail. From this it is easy to conclude that the order of joint activity in the whip leg strike is hip joint-knee joint-hip joint-knee joint-ankle joint. At T4, the hip joint velocity was 1.12±0.24 m/s, the knee joint velocity was 1.26±0.31 m/s, and the ankle joint velocity was 7.13±2.21 m/s; the hip angle was 75±6°, the knee angle was 130±11°, and the ankle angle was 135±14°. During the T4 (strike, contact with sandbag) stage, the changes in the integrated EMG values of the vastus lateralis (VL), vastus medialis (VM), rectus femoris (RF), and gastrocnemius (GC) of the attacking leg were very obvious, especially that of the quadriceps, which shows that the quality of the striking action is closely related mainly to the quadriceps. Combining the EMG changes with the action analysis, the quadriceps femoris contraction completes the knee extension (near fixation), the triceps surae contraction completes the ankle plantar flexion, and the knee is actively and quickly buckled before the whiplash to obtain a higher right-knee acceleration. That is to say, the movement of the calf takes the thigh as a noninertial reference frame: under the inertial force transmitted to the knee joint from the front-upper direction of the thigh, the calf is rotated forward around the thigh axis by the moment about its center of mass, thereby forming the hitting action. During the whole whipping process, the athlete bends the hips, raises the knees, turns the body and straightens the waist, buckles the knees, and drives the calf to whip toward the inside, the front, and upward. The speed and strength of the movement are closely related to the contraction speed of the agonist muscles and the degree of relaxation of the antagonist muscles: active contraction of the agonists accelerates the movement, while the antagonists act as a hindrance. The ability to relax muscles reduces the negative opposition between muscle groups and improves movement speed. When the agonist group contracts hard and the antagonist group relaxes in a timely manner, the resistance of the antagonists is reduced and the contraction speed and strength of the agonists increase, so the movement becomes faster and the strike more powerful. In the T5 stage, the athlete should quickly move the center of gravity forward, let the right (attacking) leg fall forward, regain body balance after the attack, and prepare for the next defensive or offensive action. When the posture is restored, the knee joint angle is 152±7°. Therefore, in the restoration phase, the athlete should keep the angle of the two knees at about 150° and keep both knees slightly flexed, which is conducive to a quick start and rapid entry into the T1 stage.

### 4.5. Displacement and Velocity Characteristics during the Action Period

The movement displacement of the attacking foot was measured (Figure 6).
The comparison found that the movement displacement of the two groups was in the same direction in all time periods, but the magnitudes differed. The excellent group had a leftward displacement of 0.46±0.19 m during the hitting period and a backward displacement of 0.45±0.08 m during the recovery period, both larger than those of the ordinary group. The upward displacement of the ordinary group in the recovery period (1.22±0.10 m) was larger than that of the excellent group, and the difference was statistically significant. Figure 6 shows the t-test plot of the action displacement.

Figure 6: t-test plot of action displacement.

The action time of the attack was also measured (Figure 7). The comparison found that the action time of the two groups differed in all periods; the action time of the excellent group in the start-up period (0.32±0.05 s) was relatively short, and the difference was statistically significant.

Figure 7: t-test plot of action time.
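The between-group comparisons above rely on independent-samples t-tests. A minimal SciPy sketch of such a comparison is given below; the sample values are synthetic placeholders generated only to show the computation and are not the study's measurements.

```python
import numpy as np
from scipy import stats

# Illustrative placeholder samples (NOT the study's data): start-up action
# times (s) for an "excellent" and an "ordinary" group.
rng = np.random.default_rng(2)
excellent = rng.normal(loc=0.32, scale=0.05, size=12)
ordinary = rng.normal(loc=0.40, scale=0.06, size=12)

# Independent-samples t-test between the two groups (Welch's variant).
t_stat, p_value = stats.ttest_ind(excellent, ordinary, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")   # p < 0.05 -> statistically significant
```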
## 5. Conclusion

The conclusions of this paper are as follows. In the preparation stage, the knee joint angle of the attacking leg is about 140°, which is conducive to rapid movement; through the knee-bending and leg-raising, hip-straightening and knee-buckling, and striking stages the attacking leg accelerates in turn, with the knee angle changing to about 135° at the strike; in the return phase, the knee joint angle is about 150° and the knees remain slightly bent. The joint angles of each stage should be maintained as closely as possible in order to achieve better results in fast-movement training. In the knee flexion and leg raising stage, the rectus femoris, sartorius, tensor fascia lata, biceps femoris, and gastrocnemius participate in the contraction, and the iEMG contribution is rectus femoris > biceps femoris > sartorius > gastrocnemius > tensor fascia lata. The hip-straightening and knee-buckling stage is mainly completed by the tensor fascia lata. In the striking stage, the iEMG of the vastus lateralis, vastus medialis, rectus femoris, and gastrocnemius, especially the quadriceps femoris, is dominant. Fast-movement training should mainly develop these muscles and pay attention to the sequence in which they are used. During the whip leg movement, a reasonable compensatory movement is formed by adjusting the body's center of gravity to maintain stability; this not only improves the quality of action completion but also helps maintain the consistency of each link of the whip leg movement. The moving distance of the body's center of gravity in the attack direction reflects the depth of the attack. The internal rotation of the heel of the supporting leg and the extension of the link chain are two of the important factors for adjusting the striking distance of the whip leg.

--- *Source: 1004568-2022-09-24.xml*
The sport level not only affects the application effect of the leg whipping technique, but also, it has become the main factor that induces the risk of acute sports injury to the lower extremity joints [19]. Some studies have shown that the start-up time of Wuying-level Sanda athletes is faster than that of first-level athletes [20]. The biological laws of the athlete are better than those of the general population, and the isokinetic muscle strength and stability of the bilateral lower extremity joints of the athlete-level athletes are significantly better than those of the first-level athletes.Through the scale transform and translation transform of the basic wavelet, the wavelet transform can realize the positioning function of frequency and time. With the increase of frequency, the time resolution can be improved and the frequency domain resolution can be reduced. On the contrary, as the frequency continues to decrease the time resolution, the frequency domain resolution decreases and the frequency domain resolution increases, which can automatically adapt to the requirements of time-frequency analysis.As a powerful signal analysis algorithm, wavelet transform is widely used and has a wide range of practical significance, mainly including many disciplines in the field of mathematics, signal analysis and image processing, quantum mechanics and theoretical physics, military electronic countermeasures, and weapon intelligence. STFT, WVD, and wavelet transform all provide strong theoretical support for time-frequency joint analysis and lay the foundation for hardware implementation of time-frequency analysis function. ## 3. Research Methods ### 3.1. Time-Frequency Analysis of Wavelet Transform Wavelet can track time and frequency information. It can “look close” to short-time pulses or “look far” to detect long-time slow waves. There are two functions that play a very important role in wavelet analysis, i.e., scale function and wavelet function. These two functions generate a set of function families that can be used to decompose and reconstruct signals. The time-frequency analysis method can process and analyze the signal from the time-frequency dimension at the same time, avoiding the limitations of pure time-domain or frequency analysis. The emergence of the time-frequency analysis method solves the problem that Fourier transform can only analyze the signal from the frequency domain, and provides a new method for the field of signal analysis. STFT and wavelet transform are two typical time-frequency analysis methods, which are also implemented in the acquisition system. These two methods have their own advantages and disadvantages. Different time-frequency analysis methods can be selected under different requirements. This paper mainly introduces the time-frequency realization based on wavelet transform, and the content of STFT is only briefly introduced. The definition formula of STFT is as follows:(1)STFTxt,Ω=∫xτg∗τ−te−jΩτdτ=<xτ,gτ−tejΩτ>.Among them,xτ is the analysis signal, and gτ is the window function. The basic idea of the STFT algorithm is to use the local characteristics of the window function gτ; taking a rectangular window as an example, multiply the signal to be analyzed by the window function to achieve local interception of the signal, and use the Fourier transform to obtain the corresponding current timet, spectrum of the segment signal. 
In order to obtain a complete time-frequency relationship, by changing the center position t of the window function through constant translation, that is, changing the time center of the window function analysis, the corresponding frequency spectrum at different times can be determined, thus forming the relationship between timetand frequencyΩ, time-frequency relationship diagram.Wavelet transform is widely used in digital signal processing and has important applications in filtering, denoising, compression, time-frequency analysis, and so on. The definition of wavelet transform is(2)WTxa,b=1a∫xtψ∗t−badt=∫xtψa,b∗tdt=<xt,ψa,bt>.Among them,xt is the analysis signal, Ψt is the basic wavelet function, and Ψa,btis the wavelet basis function, which is obtained by translating and scaling the basic wavelet function Ψt. Its frequency domain expression is (3)WTxa,b=a2π∫−∞+∞XΩΨ∗aΩejΩbdΩ.The “wavelet” in the wavelet transform indicates that the wavelet functionΨt has volatility. With the change of time, the waveform exhibits the form of oscillation, and the waveform is attenuated as a whole, and it is a band-limited signal in the time domain; its support range is limited. At the same time, the wavelet function behaves as a band-pass filter in the frequency domain; it only allows the signal within the passband to pass and suppresses the frequency components of the signal outside the passband, and its frequency domain support range is also limited (Figure 1). Such a wavelet function Ψt is called basic wavelet or mother wavelet, which has limited support in both time and frequency domains, and realizes the function of simultaneous localization in time and frequency domains. The mother wavelet Ψt is stretched and shifted, and its expression is (4)ψa,bt=1aψt−ba.Figure 1 Approximate extraction diagram of frequency domain approximate extraction algorithm.It can be seen from formula (2) that not only t is a continuous variable, but also a and b are continuous variables, so the above formula is called continuous wavelet transform (CWT). Let the Fourier transform of Ψt be ΨΩ. According to the properties of the Fourier transform, the wavelet basis function, the Fourier transform ofΨa,btis (5)ψa,bt=1aψt−ba,FTΨa,bΩ=aΨaΩe−jΩb.The frequency domain expression of wavelet transform is(6)WTxa,b=a2π∫−∞+∞XΩΨ∗aΩejΩbdΩ=a2π<XΩ,Ψa,bΩ>.In the time-domain and frequency domain definitions of wavelet transform (1), a, b, and t are all continuous variables, but when implemented on a computer, the operation of continuous variables cannot be realized, so the continuous variables need to be discrete and converted into a form that can be calculated by a computer. If the scale factor a is discretized as a=2j, j∈Z, the wavelet realized by this discretization is called binary wavelet [19]. The scale factor jump of binary wavelet is too large, and it is too rough to realize time-frequency analysis, so it is still necessary to use continuous wavelet transform to realize the function of time-frequency analysis. According to the time-domain definition of wavelet transform, let (7)ht=ψ∗−ta.Then, there are(8)hb−t=ψ∗t‐ba.Then, the time-domain definition of wavelet transform can be transformed into(9)WTxa,b=1a∫xtψ∗t−badt=1a∫xthb−tdt=1axb∗hb.According to formula (9), the integral expression can be transformed into the convolution expression commonly used in digital signal processing. 
At the same time, the continuous wavelet transform time-domain expression is discretized; let (10)WTxa,b=1a∫xtψ∗t−badt=∑k1a∫kk+1xtψ∗t−badt.Since the continuous variablet is in the interval k~k+1, xt=xk, the above formula can be rewritten as (11)WTxa,b=1a∫xtψ∗t−badt=∑k1axk∫kk+1ψ∗t−badt=∑k1axk∫−∞k+1ψ∗t−badt−∫−∞k+1ψ∗t−badt.In the same way, the frequency domain expression of wavelet transform is discretized, and the expression is as follows:(12)WTxa,k=a2π∑m=−M/2M/2X2πMmΨ∗2πMamej2π/Mkm,m=0,1,⋯,M−1.At different scales, the relationship between the center frequencyfa of the corresponding wavelet basis function and the center frequency fc of the basic wavelet Ψt is (13)fa=fca.The sampling rate of the analyzed signalxn is fs, and the pseudofrequency feq of the analyzed signal under the corresponding scale a is (14)feq=fs⋅fa=fs⋅f0a.Thus, there is a relationship between scale and pseudofrequency as(15)a=fs⋅f0feq. ### 3.2. Visual Image Recognition of Wushu Sanda Whip Leg Action Set the empirical scale function of Wushu Sanda whip leg action, and use the gradient descent method to segment the region of Wushu Sanda whip leg action image, so that the sparse feature values of the visual image of Wushu Sanda whip leg action meet the conditions. According to the sparse prior representation result, the empirical scale function and the empirical wavelet function are established to calculate the segmentation function of the visual image of Wushu Sanda whip leg action. Template matching of visual images of Wushu Sanda whip leg action is based on approximate sparse representation. Combined with the three-dimensional translation matrix feature decomposition method to perform feature segmentation and edge contour feature detection of the visual image of Wushu Sanda whip leg movement, construct at least four sets of two-dimensional images for visual reconstruction of Wushu Sanda whip leg movement, and calculate the visual image of Wushu Sanda whip leg movement. For the local prior pixel set, using the morphological knowledge to carry out the visual decomposition of Wushu Sanda whip leg movement, establish a visual feature analysis and adaptive feature extraction model of Wushu Sanda whip leg movement, and realize the Wushu Sanda whip leg movement visual feature extraction results according to the results of Wushu Sanda whip leg movement and Sanda whip leg action visual image recognition. ## 3.1. Time-Frequency Analysis of Wavelet Transform Wavelet can track time and frequency information. It can “look close” to short-time pulses or “look far” to detect long-time slow waves. There are two functions that play a very important role in wavelet analysis, i.e., scale function and wavelet function. These two functions generate a set of function families that can be used to decompose and reconstruct signals. The time-frequency analysis method can process and analyze the signal from the time-frequency dimension at the same time, avoiding the limitations of pure time-domain or frequency analysis. The emergence of the time-frequency analysis method solves the problem that Fourier transform can only analyze the signal from the frequency domain, and provides a new method for the field of signal analysis. STFT and wavelet transform are two typical time-frequency analysis methods, which are also implemented in the acquisition system. These two methods have their own advantages and disadvantages. Different time-frequency analysis methods can be selected under different requirements. 
## 4. Results and Analysis

### 4.1. Time-Frequency Domain Wavelet Spectral Refinement

For a basic wavelet expressed in the time domain, once the basic wavelet function has been discretized, its frequency-domain result can be calculated with the DFT.
The time-domain waveform is shown in Figure 2.

Figure 2 The scaling of the basic wavelet and the influence of parameters a and b on the time-frequency domain.

Because the wavelet spectrum is band-pass, most of the DFT results are close to zero, and only the values in the band-pass part have practical significance. First, a threshold for the band-pass part is selected; it determines the frequency range retained for the wavelet transform. The larger the threshold, the narrower the selected band-pass range and the lower the calculation accuracy. A large number of experiments showed that setting the threshold to 1% of the maximum absolute value of the wavelet spectrum preserves the information in the band-pass part while keeping the accuracy of the wavelet transform results high. Figure 3 shows the spectrum of the complex Morlet wavelet (bandwidth factor and center frequency factor both equal to 3): the curve is the spectrum obtained by applying the FFT to the time-domain samples, and the two marked points are the band edges selected with the 1% threshold.

Figure 3 Wavelet basis function FFT and band-pass limit diagram.

Following these steps, the band-pass part of the complex Morlet wavelet spectrum (bandwidth factor and center frequency factor both 3) is refined, as shown in Figure 4, the refinement diagram for the time-domain wavelet expression: the hollow circles are the FFT results of the wavelet basis function, the hexagram points mark the left and right band-pass boundaries corresponding to the threshold, and the curve is the refined spectrum. The figure shows that the refined spectrum is smoother and has better frequency resolution. For a wavelet given by its frequency-domain expression, that expression can be discretized directly, and the left and right boundaries of the band-pass part are then selected according to the same threshold as in the time-domain case. Only the positive half of the frequency axis needs to be examined for the band-pass part; once its frequency boundaries are obtained, the interval inside them is refined with the target number of N points, yielding the refined band-pass spectrum. Discrete-system theory, the basic framework for analyzing digital control systems, has developed rapidly. In the performance analysis of high-order systems, time-domain analysis is difficult to apply, whereas frequency-domain analysis, which applies mainly to linear time-invariant systems, is a practical and widely used engineering method for analyzing and designing control systems. Figure 5 shows the result for a complex Morlet wavelet (bandwidth factor and center frequency factor both 3), in which the hollow circles represent the discretization of the frequency-domain expression. In the resulting spectrogram, the hexagram points mark the two band-pass boundary values selected with the 1% threshold, and the curve is the dense discretization obtained with the target number of points within the band-pass range; panel (a) gives an overview of the spectral refinement, and panel (b) is a detailed view.

Figure 4 Refinement of the time-domain wavelet spectrum by the CZT algorithm.

Figure 5 Wavelet band-pass partial spectrogram from the frequency-domain expression: (a) overview; (b) detail.
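The 1% band-edge selection and the dense refinement of the band-pass part can be written compactly in NumPy. The sketch below is a minimal illustration rather than the authors' code: the sampling rate, time support, and target point count are assumptions. It samples a complex Morlet-type wavelet, applies the FFT, keeps the positive-frequency bins whose magnitude is at least 1% of the spectral maximum, and then re-evaluates the (Gaussian-shaped) analytic spectrum of this wavelet densely inside those boundaries with N target points.

```python
import numpy as np

FB, FC = 3.0, 3.0          # bandwidth factor and center frequency factor (as in Figures 3-5)
FS = 64.0                  # assumed sampling rate for the time-domain samples
THRESH = 0.01              # 1% of the maximum spectral magnitude
N_REFINE = 512             # target number of points inside the band-pass part

# Time-domain samples of a complex Morlet-type wavelet psi(t) (assumed form).
t = np.arange(-8, 8, 1 / FS)
psi = (np.pi * FB) ** -0.5 * np.exp(-t**2 / FB) * np.exp(2j * np.pi * FC * t)

# Coarse spectrum via FFT and selection of the band-pass boundaries
# (only the positive half of the frequency axis is examined).
spec = np.fft.fft(psi)
freqs = np.fft.fftfreq(len(psi), d=1 / FS)
mask = (np.abs(spec) >= THRESH * np.abs(spec).max()) & (freqs > 0)
f_lo, f_hi = freqs[mask].min(), freqs[mask].max()

# Dense re-evaluation of the analytic spectrum inside [f_lo, f_hi]
# (for this wavelet the spectrum is a Gaussian centered at FC).
f_fine = np.linspace(f_lo, f_hi, N_REFINE)
spec_fine = np.exp(-np.pi**2 * FB * (f_fine - FC) ** 2)

print(f"band-pass part: {f_lo:.2f} Hz to {f_hi:.2f} Hz, refined with {N_REFINE} points")
```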
### 4.2. The Characteristics of Kinematics and EMG Synchronization in the T1 (Preparation End) and T2 (Knee Bending and Leg Raising) Stages

According to the above parameter settings, sample data were selected. The data used in the experiment come from Baidu Gallery, which contains a total of 10,000 visual pictures of Wushu Sanda whip leg movements. The height and weight of the Sanda athletes were measured, and each athlete was numbered. The athletes then whipped, in place and three times each, the fixed sandbags equipped with the sensor of the event-group fight test instrument. The visual images of the Sanda whip leg action were compared so that unqualified images could be deleted, 1,000 images were selected as the training set for this experiment, and the image format in the database was set to 400×400. The 1,000 images were randomly divided into 5 groups of test data.

In the T1 (preparation end) stage, the knee angle of the attacking leg is 144±14°; keeping the right knee flexed at a small angle helps the athlete keep the strong leg behind and move the feet quickly to deliver a powerful blow. In the T2 stage (knee bending and leg raising), after the right leg pushes off the ground, the body tilts slightly, the right leg is raised to the front with the knee tip facing forward, the thigh and the trunk form an angle of about 90°, the calf droops naturally, the ankle joint is relaxed, and at the end of this phase the knee is at chest height. In the T2 stage, the hip joint velocity was 2.75±0.54 m/s, the knee joint velocity was 6.12±1.42 m/s, and the ankle joint velocity was 6.93±1.56 m/s; the hip angle was 41±9°, the knee angle was 67±19°, and the ankle angle was 122±11°.

The iEMG results indicate that the completion of the knee-bending and leg-raising action mainly involves the following muscles, whose contributions to the action can be ranked by iEMG as rectus femoris > biceps femoris > sartorius > gastrocnemius > tensor fasciae latae. Combined with an analysis of how the corresponding muscles work, the knee flexion and leg raising stage mainly consists of bending and raising the attacking leg (proximal fixation): the iliopsoas, rectus femoris, sartorius, tensor fasciae latae, and pubic muscles contract rapidly to flex the hip (thigh flexion), while the active force of the biceps femoris, gastrocnemius, semitendinosus, semimembranosus, and gracilis flexes the knee (calf flexion). The action shows the characteristics of accelerated contraction, rapid braking, and completion in one step, and the muscle contraction shows the characteristics of active knee flexion and fast contraction. The rapid completion and braking of the knee-bending and leg-raising action are directly related to the degree of relaxation of the calf and ankle joint. According to the analysis, when the calf sags naturally and the ankle joint is relaxed, the knee joint angle can be reduced quickly during knee flexion, which is conducive to the rapid completion of the action. The rapid reduction of the knee joint angle reduces the radius of rotation of the attacking leg, the moment of inertia becomes smaller, the rotation becomes easier, and the rotation is completed quickly.

### 4.3. The Characteristics of Kinematics and EMG Synchronization in the T3 Stage (Hip Raising and Knee Buckling)
Stage T3 (raising the hip and buckling the knee) is completed on the basis of bending the knee and raising the leg: the hip joint of the attacking leg rotates internally, the knee joint turns to the left side (buckles inward), the lower leg folds until it is parallel to the ground, and the instep points in the attacking direction. At T3, the hip joint velocity was 1.56±0.37 m/s, the knee joint velocity was 5.17±1.23 m/s, and the ankle joint velocity was 7.22±1.11 m/s; the hip angle was 51±10°, the knee angle was 95±15°, and the ankle angle was 139±19°.

In the T3 stage, the contraction and discharge of the muscles of the attacking leg are mainly reflected in the tensor fasciae latae (TFL), whose iEMG changes greatly. This shows that the action of raising the hip and buckling the knee is mainly completed by the tensor fasciae latae of the attacking leg, with the other muscles playing an auxiliary role. The corresponding muscle work is mainly straightening the hip and buckling the knee (proximal fixation). From a technical point of view, straightening the hips makes the knees buckle downward; the knee buckle acts as a brake during the swing and brings a series of favorable changes. According to the principle of kinetic energy transmission, thigh braking allows energy to be transmitted from the thigh to the calf; because the calf is much lighter than the thigh, its speed increases correspondingly, thereby increasing the whipping speed of the calf. In addition, the buckled knee changes the direction of the calf, increasing the effect of the horizontal strike.

### 4.4. Characteristics of Kinematics and EMG Synchronization in the T4 (Strike, Contact with Sandbag) and T5 (Return) Stages

After the explosive phase, the knee is extended and the ankle is actively flexed to hit the target quickly and powerfully. In movement, when the end of a link chain is expected to generate maximum speed and force, the motion of the limb typically proceeds from the proximal link to the distal link: the segments are accelerated and braked in turn, and the speed of each segment increases sequentially from the proximal end to the distal end. The whipping action mainly aims to obtain the maximum speed in the end link. The sequential principle of joint activity explained in the general textbook Sports Biomechanics of the National Academy of Physical Education is as follows: joints surrounded by muscles with large cross-sectional areas are called large joints, and vice versa. The large joints exert force first, overcoming the inertia of the links and limbs and initiating the movement of the joints and limbs; the secondary joints then exert force, further accelerating the links and limbs; and the small joints exert force last, controlling the direction and range of motion. Therefore, the whip leg must strictly follow this law; otherwise, the range, speed, and strength of the movement will be limited, resulting in reduced movement quality or failure. From this it follows that the order of joint activity in the whip leg strike is hip joint-knee joint-hip joint-knee joint-ankle joint. At T4, the hip joint velocity was 1.12±0.24 m/s, the knee joint velocity was 1.26±0.31 m/s, and the ankle joint velocity was 7.13±2.21 m/s.
The hip angle was 75±6°, the knee angle was 130±11°, and the ankle angle was 135±14°. During the T4 (strike, contact with sandbag) stage, the changes in the integrated EMG values of the vastus lateralis (VL), vastus medialis (VM), rectus femoris (RF), and gastrocnemius (GC) of the attacking leg were very obvious, especially those of the quadriceps, which shows that the quality of the striking action is closely related to the quadriceps. Combining the EMG changes with action analysis, the quadriceps femoris contracts to complete the knee extension (proximal fixation), the triceps surae contracts to complete plantar flexion of the ankle, and the knee is actively and quickly buckled before the whiplash to obtain a higher acceleration of the right knee. In other words, the movement of the calf is referred to the noninertial reference frame of the thigh: under the inertial force acting on the knee joint from the front-upper direction of the thigh, the calf is rotated forward around the thigh axis by the moment about its center of mass, thereby forming the hitting action. During the whole whipping process, the athlete bends the hips, raises the knees, turns the body and straightens the waist, buckles the knees, and drives the calf to whip inward, forward, and upward. The speed and strength of the movement are closely related to the contraction speed of the agonist muscles and the degree of relaxation of the antagonist muscles: active contraction of the agonists accelerates the movement, while the antagonists act as a hindrance. The ability to relax muscles reduces the negative opposition between muscle groups and improves movement speed. When the agonist group contracts strongly and the antagonist group relaxes in a timely manner, the resistance of the antagonists decreases, the contraction speed and strength of the agonists increase, the movement becomes faster, and the strike is more powerful. In the T5 stage, the athlete should quickly move the center of gravity forward, let the right (attacking) leg fall forward, regain body balance after the attack, and prepare for the next defensive or offensive action. When the posture is restored, the knee joint angle is 152±7°; therefore, in the restoration phase, the athlete should keep the knee angle at about 150°, with both knees slightly flexed, which is conducive to a quick start and rapid entry into the T1 stage.

### 4.5. Displacement and Velocity Characteristics during the Action Period

The movement displacement of the attacking foot is shown in Figure 6. The comparison found that the movement displacements of the two groups were in the same direction in all time periods but differed in magnitude. The excellent group had a leftward displacement of 0.46±0.19 m during the hitting period and a backward displacement of 0.45±0.08 m during the recovery period, both larger than those of the ordinary group. The upward displacement of the ordinary group in the recovery period (1.22±0.10 m) was larger than that of the excellent group, and the difference was statistically significant. Figure 6 shows the t-test diagram of the action displacement.

Figure 6 t-test plot of action displacement.

The action time of the attack was measured (Figure 7), and the comparison found that the action times of the two groups differed in all periods.
The action time of the excellent group in the start-up period was relatively short (0.32±0.05 s), and the difference was statistically significant.

Figure 7 t-test plot of action time.
## 5. Conclusion

The conclusions of this paper are as follows. In the preparation stage, the knee joint angle of the attacking leg is about 140°, which is conducive to rapid movement. In the three stages of knee bending and leg raising, hip straightening and knee buckling, and striking, the knee angle changes to about 135°. In the return phase, the knee joint angle is about 150° and the knees remain slightly bent. The joint angles of each stage should be maintained as closely as possible in order to achieve better results in fast-movement training. In the knee flexion and leg lifting stage, the rectus femoris, sartorius, tensor fasciae latae, biceps femoris, and gastrocnemius participate in the contraction, and the iEMG contribution is rectus femoris > biceps femoris > sartorius > gastrocnemius > tensor fasciae latae. The stage of straightening the hips and buckling the knees is mainly completed by the tensor fasciae latae. In the striking stage, the iEMG of the vastus lateralis, vastus medialis, rectus femoris, and gastrocnemius, especially the quadriceps femoris, is dominant; fast-movement exercises should therefore mainly develop these muscles and pay attention to the sequence in which they are used. During the whip leg movement, a reasonable compensatory movement is formed by adjusting the body's center of gravity to maintain stability, which not only improves the quality of the completed action but also helps maintain the continuity of the individual links of the whip leg movement. The distance moved by the body's center of gravity in the attacking direction reflects the depth of the attack. The internal rotation of the heel of the supporting leg and the extension of the link chain are two important factors for adjusting the striking distance of the whip leg.

---
*Source: 1004568-2022-09-24.xml*
2022
# Compressive and Flexural Strength of Concrete with Different Nanomaterials: A Critical Review

**Authors:** R. M. Ashwini; M. Potharaju; V. Srinivas; S. Kanaka Durga; G. V. Rathnamala; Anish Paudel
**Journal:** Journal of Nanomaterials (2023)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2023/1004597

---

## Abstract

With recent technological advances, adding nanomaterials as reinforcement materials in concrete has gained immense attention. This review paper aims to report these advances as a one-stop shop for methods that focus on improving the quality of traditional concrete. Nanoparticles, the elementary form of nanomaterials, have been shown to enhance the strength and longevity of concrete. Nanosilica, nanoalumina, nanometakaolin, carbon nanotubes, and nanotitanium oxide are modern nanomaterials with strong evidence of enhancing concrete quality, which supports infrastructure building and long-term monitoring. Nanoconcrete, an exciting prospect that extends the boundaries of traditional civil engineering, exhibits increased compressive and flexural strength through these elementary compounds. In particular, a rigorous survey of many articles reveals an increase in compressive strength of 20% to 63% and in flexural strength of 16% to 47% when cement is replaced with different nanomaterials in different percentages.

---

## Body

## 1. Introduction

The most common traditional material required for infrastructure construction is the mixture of cement, fine aggregate, coarse aggregate, and water, popularly known as concrete. Concrete is a porous substance whose durability, usability, mechanical characteristics, and microstructural factors must be investigated [1]. Recent technological advances have enhanced several concrete properties beyond those of traditional concrete. Notably, reducing the water–cement (w/c) ratio [2] contributes to increased cement strength [3]. Additionally, the mix proportion can be brought to an optimal condition by incorporating suitable nanoscale auxiliary materials [4–6]. Nanoconcrete employs constituent nanomaterials [7–13] that significantly improve the particle-packing structure and hence the bulk characteristics. In addition to augmenting the properties of concrete, nanoparticles act as an excellent filler material. This review paper aims to report the advances in nanomaterial-enhanced concrete with respect to its compressive and flexural characteristics [14].

As this review shows, the construction industry has benefitted immensely from nanomaterials. In particular, nanomaterials in cement and concrete products, such as nano-TiO2, nanoalumina, nanometakaolin, nano-SiO2, nanoclay [15–17], and carbon nanotubes (CNTs), have improved the overall characteristics [18]. Their inherent filling effect contributes to increased durability [19]. In addition, nanomaterials have been demonstrated to enhance microstructural features that are not explored in conventional construction engineering but are now a mainstream topic of contemporary research. A thorough review of prior research sheds some light on this area, but a detailed analysis is needed. Still, an overarching framework that incorporates the characteristics of concrete containing NMK, TiO2, and nanocellulose is lacking, which greatly motivates this work.
This review aims to advance the use of nanomaterials in improving the flexural and compressive strength of concrete.

## 2. Production of Nanomaterials

Even though the concept of creating nanomaterials through nanotechnology emerged in the late 1960s [20, 21], using them to strengthen the properties of concrete is relatively nascent, gaining momentum only in recent decades. While all materials can eventually be converted into nanoparticles, Bharadwaz et al. [22] pointed out that these particles, owing solely to their nanosize, have a stronger foothold than microscale components as far as filler materials are concerned. A top-down strategy [23] is typically selected on the basis of expertise in nanoscale behavior, appropriateness, and cost [24, 25]. Defined as the process of reducing larger structures to the nanoscale while retaining their original features or chemical composition even at the atomic level, the top-down approach provides robustness and applicability across a wide variety of domains [26]. In other words, mechanical attrition and etching processes break down bulk materials into nanoparticles [27]. The milling process is one of the strategies under the framework of top-down approaches [28, 29]. The fundamental feasibility and accessibility of a milling machine allow such changes to be made without the need for chemicals or electronic devices. The top-down strategy is also regarded as the present way of nanofabrication. However, the homogeneity and quality of the final product are inconsistent in the top-down approach.

High-energy ball milling can synthesize nanomaterials, nanograins, nanocomposites, and nano-quasicrystalline materials. In particular, by modifying the number and type of balls employed, increasing the machine speed, and selecting the type of container used, the resulting nanoparticles can alleviate the traditional shortcomings of the top-down approach [2, 30, 31]. During milling, plastic deformation, cold welding, and fracture are the factors that govern how materials deform and transform into the required shape. Milling not only breaks materials into smaller parts but can also blend several particles or materials and transform them into new material phases; this phase transformation occurs in the reactive ball milling technique but is not possible through the dry and wet ball milling techniques. In the dry ball milling technique, the end product consists of materials with flake-shaped strata. However, further refining can be carried out to obtain a finer structure, depending on the type and size of the balls used and the milling technique. From a historical perspective, John Benjamin (1970) first applied the milling method to strengthen alloy components for high-temperature structures [32]; this led to the first use of milling as an effective technique to produce oxide particles.

Contrary to the top-down approach, the bottom-up technique is employed when materials are assembled or self-assembled from atoms or molecular components. This methodology is helpful for most nanomaterials, such as nanosilica, nanoalumina, and nanoclay, which are widely used to improve the characteristics of concrete.
This process is aptly termed molecular manufacturing or nanotechnology due to its indirect benefits, including synthesis and chemical formulation [24]. The critical difference between the various schools of thought is that the bottom-up method will produce a uniform and perfect structure of the nanoparticles compared to the top-down approach. This is explained by the fact that nanocrystals can automatically develop when atoms or molecules are well-organized or in crystalline form. A few strategies for this process include increased electronic conductivity, optical absorption, and chemical reactivity [25, 33].Additionally, a significant reduction in the size of the particles—with the development of tidy surface atoms—combined with the enormous change in the surface energy leads to improved morphologies. Nanoparticles have become ideal candidates for advanced applications in electronic components and biotechnology. In the longer run, nanomaterials derive their applications toward boosting catalytic activity, wave sensing capabilities, novel pigments, and self-healing and cleaning properties in paint. The flip side of the coin, called flippin, exposes the severe drawbacks of the bottom-up strategy, including its high operational costs, the need for specialized knowledge in chemical applications, and its strict applicability to laboratory applications only [22, 34, 35]. ## 3. Compressive Strength of Nanoconcrete Made with Nanomaterial Numerous nanomaterials have been included in concrete since the development of nanotechnology in the building industry. Nanomaterials can be used to improve the durability and performance of concrete. The discussion of nanomaterials, such as nanometakaolin, nanosilica, nanoalumina, nanotitanium dioxide, CNTs, and nanocellulose, as well as their utilization in all types of concrete, will be further developed in this subsection in terms of their mechanical properties. ### 3.1. Nanometakaolin NMK considerably enhanced the compressive strengths of cementitious materials, according to several studies [36–38]. Table 1 shows the 28-day compressive strength of cementitious materials treated with NMK. According to an extensive literature survey, adding the right amount of NMK to cementitious materials boosted their compressive strength [52, 53]. When the amount of NMK in a cementitious material exceeds the optimum level, the concrete compressive strength will reduce.Table 1 Twenty-eight-day compressive strength of concrete made with nanometakaolin. Cementitious materialsw/b ratioReplacement of NMK (%)Compressive strength increment (%)Maximum replacement of NMK (%)ReferencesCement paste0.274–158 ⟶ 20 ⟶ −1510[39]0.32–1616 ⟶ 54 ⟶ −10[30]0.33–0.492–148.6 ⟶ 59.4 ⟶ 46.6[38]0.443–1016 ⟶ 24.6 ⟶ 226[23]0.5–0.592–1010 ⟶ 63 ⟶ 20[40]Cement mortar0.32–109.3 ⟶ 22.6 ⟶ −1.34[41]0.45–1028 ⟶ 20[42]0.485354[43]0.52.5–1015 ⟶ 34 ⟶ 197.50[44]0.542–148.8 ⟶ 42 ⟶ 2010[45]0.65–1523 ⟶ 85[46]Ordinary concrete0.51026.32[47]0.533–1042.2 – 63.110[48]UHPC0.21–912 ⟶ −16.51[49]0.21–108.5 ⟶ −8.51[18]SCHPC0.351.25–3.7512 ⟶ 17[50]SCC1–512.7 ⟶ 42.2[51]The minus (−) sign implies decreasing the given attribute calculated concerning the reference one.Meanwhile, too much NMK causes a weak interfacial transition zone (ITZ) and fewer contact points, which act as binding sites between cement particles [48, 54]. 
The addition of NMK to concrete increases its compressive strength.Even though mixed proportion characteristics like w/c ratio and NMK content and curing conditions are the same, the optimum NMK contents are not the same based on a literature survey.As a result, more research should be conducted and studied [22] to better understand the various effects of NMK on the compressive strength of concrete or mortar. In addition, some results are incongruent. The compressive strength of air-cooled slag (ACS)-blended cement mortar gradually lowered when the NMK content increased [48]. At 28 days, the compressive strength of mortar containing 8% NMK was marginally lower than that of the control mortar. According to Norhasri et al. [23], adding NMK to ultra-high-performance concrete (UHPC) with a compressive strength of 150 MPa at 28 days did not improve the early compressive strength. Although UHPC with 1% NMK had the maximum compressive strength, it was somewhat lower than the control UHPC. The early stability of UHPC was not compromised by the addition of NMK [55–57]. Furthermore, as the amount of NMK in UHPC increased, the material strength of compression dropped [58]. The compressive strength improved by up to 63% when the nanometakaolin content increased to 10%, and as the nanometakaolin content increased further, the mechanical characteristics deteriorated [59].Almost all ages of hydration resulted in higher compression strength values than those reported for the typical ordinary Portland cement (OPC) paste. This increase in strength is primarily due to the pozzolanic reaction of free calcium hydroxide (CH), liberated from Portland cement hydration, with nanometakaolin to form excessive amounts of additional hydration products, primarily as calcium silicate hydrate (CSH) gel and crystalline CSH hydrates; these hydrates act as microfills, which reduce total porosity leading to an increase in the entire contents of binding centers in the specimens; consequently, an increase in the strength [60]. Thermal gravimetric analysis and scanning electron microscopy (SEM) were also used to monitor the hydration process (TG). These examinations indicate that NMK behaves as a filler to improve the microstructure and as an activator to promote the pozzolanic reaction [38]. ### 3.2. Nanosilica After 28 days, the compressive strength of concrete containing 3% NS improved by 20% compared to baseline concrete strength [43]. At 90 and 365 days, compressive strength testing revealed a similar pattern. Table 2 shows the existing compressive strength of cementitious materials with NS at 28 days.Table 2 Twenty-eight-day compressive strength of concrete made with nanosilica. Materialsw/c ratioReplacement of NS (%)Compressive strength increment (%)Maximum replacement of NS (%)ReferencesOrdinary concrete (sulfuric-acid-rain condition)0.32–610.5 ⟶ 15[44]0.361–2.59.2 ⟶ 20.25 ⟶ −52[61]Ordinary concrete0.30.5–11.75 ⟶ 2.5[47]0.350.5–11.05 ⟶ 1.84[47]0.40.5–13.3 ⟶ 7.2[47]0.450.5–15.59 ⟶ 10.39[47]0.40.75–320[43]0.4331[41]HPC0.4333[40]The minus (−) sign implies decreasing the given attribute calculated concerning the reference one.The colloidal 40–50 nm NS effectively increased the compressive strength of 3% NS high-performance concrete to 33.25% for 24 hr. This improvement in strength corresponds to the 40 MPa compression strength. Also, the flexural strength exceeded 13.5% after 7 days [40]. The concrete with 3% NS as a replacement with cement substitute exhibited more compressive strength and a longer lifespan. 
The compressive strength of 3% NS was enhanced by 31.42% compared to the reference concrete [41]. The kind of NS is colloidal at 15 nm, and the surface area, particularly of NS particles, directly impacts the concrete compressive strength. The NS particles with a surface area of 51.4 m2/g are the least beneficial in improving compressive strength. The w/b ratio influences the strength of NS concrete as well. The ideal amount of NS replacement is linked to the reactivity and accumulation level of the NS particles [42]. Ordinary concrete (sulfuric-acid-rain condition) with 0.3 w/b ratios substituted cement material with 2%–6%, which increased compressive strength by 15% [44]. Ordinary concrete with a 0.36 w/b ratio with 1% to 2.5% replacement of cement material shows increase in compressive strength up to 20.25% for 2% replacement and a 5% reduction in compressive strength for 2.5% replacement, corroborates the optimum replacement of NS is 2% [61]. The compressive strength of regular concrete increases from 1.75% to 7.2% with a w/c ratio of 0.3, 0.4, and 0.45, and NS content of 0.5%, 0.75%, and 1% as a replacement for cement [47]. Table 3 shows the tabulated results, which suggest that a 2% NS substitution is optimal and any further increment in NS content diminishes strength [70].Table 3 Twenty-eight-day compressive strength of concrete made with nanotitanium dioxide. Materialsw/c ratioReplacement of NT (%)Compressive strength increment (%)Maximum replacement of NT (%)ReferencesMortar0.50.25 ⟶ 0.75 ⟶ 1.25 ⟶ 1.7510.5 ⟶ 19.33 ⟶ 15.07 ⟶ 4.270.75[62]Mortar with 10% of black rice husk ash0.350.5 ⟶ 1 ⟶ 1.51.48 ⟶ 4.75 ⟶ 13.221.5[63]RPC1, 3, 518.553[64]HPC0.251.5023[65]SCC with ground granulated blast-furnace slag (GGBS)0.41 ⟶ 2 ⟶ 3 ⟶ 42.7 ⟶ 26.5 ⟶ 36.4 ⟶ 27.83[66]RPC1 ⟶ 3 ⟶ 543.43 ⟶ 74.9 ⟶ 875[67]Concrete0.482232[68]SCGPC2 ⟶ 4 ⟶ 6−3.43 ⟶ 7.7 ⟶ 3.44[69]The minus (−) sign implies decreasing the given attribute calculated concerning the reference one.According to the study, up to 3% of NS doses can improve mechanical characteristics, potentially due to pozzolanic activity, pore structure refinement, and filling effect. Compressive strength increases as the NS content grows, reaching 33% for an NS proportion of 3%.The compressive strength was seen to increase to 2% nanosilica substitution before significantly declining. Due to the increased hydration by nanosilica, there is a more significant consumption of Ca(OH)2 in the early stages (1–7 days of curing). This outcome favors a 2% substitution of nanosilica in cement by weight. The pozzolanic reaction of nanosilica and CH produces well-compacted hydration products that coat the unhydrated cement and slow the rate of hydration. Additionally, hydration products plug the pores in the cement, reducing the amount of water that can reach the anhydrate cement particles and reducing the strength above 2% nanosilica replacement [71]. ### 3.3. Nanotitanium Dioxide Most researchers agreed that using NT particles might improve the compressive strength of concrete to some extent. The effect of NT on the compressive strength of concrete is shown in Table3.TiO2 nanoparticles with an average diameter of 15 nm were used in four different amounts of 0.25%, 0.75%, 1.25%, and 1.75% by weight of cement with a w/c ratio of 0.5 [72]. The 0.75% NT increases the mortar’s compressive strength by 19.33% after 28 days. The strength decreased as the NT content increased; hence, the optimum NT content is 0.75% [62]. 
In 10%, 20%, and 30% of the fractions, untreated black rice husk ash (BRHA) was used to replace cement. When nano-TiO2 doses of 0.5%, 1.0%, and 1.5% were added to blended cement, the compressive strength increased to 13.22% [63]. The compressive strength of concrete containing 20% fly ash (FA) can be enhanced by 18% by using 3% NT, according to Li et al. [64]. It was discovered that NT at a dose of 3 wt% improved the compressive strength of self-compacting concrete (SCC) with GGBS and a w/c ratio of 0.4 the most. The flexural strength of nano SiO2 coated TiO2 reinforced reactive powder concrete (NSCTRRPC) reached a maximum of 9.77 MPa when the range of NSCT was 3% and increased 83.3%/4.44 MPa compared with reactive powder concrete (RPC) without NSCT. Even while the strength of flexural NSCTRRPC was slightly lesser than that of plain RPC when the NSCT content was 5%, it was still much more than bare reactive powder concrete. It could be related to a decrease in hydration speed caused by water absorption. At 28 days, the ideal level of NS content was 5.0%. The use of nanoparticles as cementitious materials increased the compressive strength of concrete, as shown in Table 3. After 28 days of curing, the compressive strength of concrete can enhance up to 22.71% by replacing 2% cement with nanotitanium oxide particles (relative to plain concrete). Titanium oxide was introduced to specimens with wollastonite; the compressive strength increased initially, and then declined. The best combination was 4% NT without wollastonite [69, 73].According to the study, the optimal dose of NT is 3%, which boosts compressive strength by up to 23%; however, increasing the NT concentration diminishes mechanical characteristics.The review paper’s findings revealed the diffraction intensity of several CH and C3S crystals specimens. First, with increasing hydration age, C3S diffraction apex intensity declined, while CH diffraction apex intensity increased in the base specimen. But even after 28 days, C3S had not fully hydrated. Second, utterly different change tendencies were visible in the strength of the two CH diffraction peaks, as evidenced by the variance in X-ray diffraction (XRD) results between the base specimen and the additional specimen. When nano-TiO2 was used to replace cement, the intensity of the CH (101) crystal plane grew early on, whereas the power of the (001) crystal plane significantly dropped. When the cement was replaced with nano-TiO2, the amount of CH crystal did not increase after 1 day. Therefore, the rise in hydrated products should not be cause of the improvement in early strength. Third, the intensity of CH at evening ages increased so slowly after the slag powder was applied to the cement mortar that it reduced after 14 days, and the power of CH was significantly lower than that of other specimens without the slag powder. This meant that CH was used to hydrate the slag powder, which helped to increase strength in the evening hours [74]. ### 3.4. Nanoalumina Most researchers agreed that using nano alumina (NA) particles might improve the cementitious composites and compressive strength. Table4 shows the effect of NA on the compression strength of cementitious material.Table 4 Twenty-eight-day compressive strength of concrete made with nanoalumina. 
Materialsw/c ratioReplacement of NA (%)Compressive strength increment (%)Maximum replacement of NA (%)ReferencesMortar0.791, 3, 520 ⟶ 15 ⟶ 36.005.00[75]Mortar0.351, 2, 316.00 ⟶ 12 ⟶ 121[76]Concrete0.441, 2, 313.00 ⟶ 4 ⟶ −1[77]Concrete0.330.5, 0.75, 16 ⟶ 28 ⟶ 461[78]Mortar with 10% of black rice husk ash0.491, 2, 31 ⟶ 11 ⟶ 163[79]Concrete0.483, 5, 716.67 ⟶ 30.13 ⟶ 23.585[80]Concrete2, 33.32 ⟶ 5.33[81]Mortar0.50.5, 1, 1.57 ⟶ 10.6 ⟶ 11.41.50[82]The minus (−) sign implies decreasing the given attribute calculated concerning the reference one.

After 7 days of curing, NA replacements of 1%, 3%, and 5% increased compressive strength by 46%, 27%, and 19.3%, respectively; the 1% replacement level of NA therefore provides the best early strength. After 28 days of curing, the compressive strength of the 0% replacement mix was 13.69 MPa, and the NA additions resulted in increases of 20%, 15%, and 36%, respectively, so the 5% replacement gave the better compressive strength at 28 days [75]. A 1% nanoalumina content in mortars increased their compressive strength by up to 16% at room temperature, whereas higher NA concentrations (>2%) brought the strength of the mortars back to the original level. When 1% nanoalumina was used and the specimens were exposed to temperatures up to 800°C, the residual compressive strength remained higher than that of the reference [76]. When OPC is substituted with NA, the compressive strength increases: the 28-day compressive strength increased by 13% when a 1% NA replacement level was utilized instead of the 0% NA mix. The compressive strength of concrete cubes was also boosted by introducing NA into the matrix; the compressive strength of the composites rose by 33.14% at 28 days when the proportion of NA was 1% of the cement by weight [78]. The addition of rice husk ash boosted the 28-day compressive strength of samples, with 10% replacement of rice husk and 3% NA giving the greatest compressive strength gain of 16.6%; the compressive strength decreases as the rice husk content increases [79]. By replacing 1% of cement with nanoalumina particles, the 28-day compressive strength of concrete was increased by 4.03%; when the concentration of NA was increased from 1% to 3%, the improvement rose from 4.03% to 8.00%. As a result, cement hydration was hastened, producing higher amounts of reaction products; in addition, as a nanofiller, nano-Al2O3 particles improve the concrete's particle packing density and microstructure, so the volume of larger pores in the cement paste is reduced [80]. The compressive strength of all the tested mortars decreased as the amount of Al2O3 nanopowder in their composition increased: the decline was 7% with 0.5% Al2O3 nanopowder, 10.6% with 1%, and 11.4% with 1.5% compared with the reference mortar, even though several other studies report that adding Al2O3 nanopowder to mortars can increase their compressive strength [82].

According to the reviewed studies, the optimum concentration of nanoalumina was 5%, at which point compressive strength increased by 36%. The rapid consumption of the Ca(OH)2 produced during Portland cement hydration, which is connected to the high reactivity of nano-Al2O3 particles, is likely what caused the increase in the compressive strength of concrete containing nanoalumina. As a result, the cement's hydration was sped up, and more reaction products were generated.
Additionally, nano-Al2O3 particles restore the concrete's particle packing density and enhance its microstructure as a nanofiller, decreasing the volume of bigger pores in the cement paste. The outcomes align with those reported by other researchers [80]. When the dosage of nanoalumina (samples NA1, NA2, and NA3 versus the control, tested at various temperatures) rose to 2% and 3%, the compressive strength decreased, although it remained higher than that of the control samples. The ITZ may have become loose due to the excessive aggregation of nanoalumina particles, which may have surrounded fine aggregates [76].

### 3.5. Carbon Nanotubes

According to most studies, the compressive strength of cementitious materials can be increased to some extent with CNT particles. Table 5 shows CNT's effect on the compressive strength of cementitious materials.

Table 5 Twenty-eight-day compressive strength of concrete with carbon nanotubes. Materialsw/b ratioReplacement of MWCNT (%)Compressive strength increment (%)Maximum replacement of CNT (%)ReferencesM30 grade of concrete0.40.015, 0.03, and 0.0452.75 ⟶ 26.7[83]Concrete0.25 and 0.57.14 ⟶ 15.7[84]Mortar0.550.05, 0.1, and 0.215 ⟶ 8 ⟶ 100.05[85]Concrete0.40.1, 0.2, 0.3, 0.4, and 0.57.11 ⟶ 18.2 ⟶ 22.56 ⟶ 24.5 ⟶ 27.35[86]SCC0.450.1, 0.3, and 0.516.6 ⟶ 24 ⟶ 38.62[87]Ultra high strength concrete (UHSC)0.20.05, 0.1, and 0.154.6 ⟶ 2.1 ⟶ −1.970.1[88]Concrete_0.0.2, 0.03, 0.05, and 0.0983.33 ⟶ 97.22 ⟶ 80.55 ⟶ 63.880.03[89]The minus (−) sign implies decreasing the given attribute calculated concerning the reference one.

The compressive strength of concrete increased by up to 26.7% when functionalized multiwalled carbon nanotubes (MWCNTs) replaced 0.045% of the cement. MWCNTs in cement improve the stiffness of the CSH gel, making the composite stronger. An explanation for the improved mechanical properties of concrete could be that the MWCNTs occupy the nanostructure of MWCNT concrete, making it more crack resistant throughout the loading period [83]. Modified MWCNTs were dispersed in cement mortars to improve their mechanical qualities. When pristine MWCNTs were used, the compressive strength of cement mortars was significantly increased: incorporating 0.50 wt% MWCNTs resulted in a 15.7% increase in compressive strength, whereas 0.25 wt% MWCNTs resulted in a 7.14% increase [84]. The cement mortar's compressive strength was also improved by adding CNTs; the maximum enhancement, obtained when utilizing 0.05% CNTs, is up to 15%, but the strength decreases as the CNT content increases further [85]. According to the compressive strength data, the percentage increase in compressive strength for 0.1% and 0.5% CNTs is 7.11% and 27.35%, respectively, at 28 days. The dispersion of the CNTs is primarily responsible for this increase; the larger amount of highly stiffened CSH formed in the presence of CNTs is the second explanation [86]. The compressive strength of SCC with 0.1%, 0.3%, and 0.5% MWCNT concentrations increased by up to 38.62% [87]. The most significant gains in the mechanical characteristics of cement-based materials were found at low concentrations of MWCNTs (0.05%): the compressive strength rose by 4.6% for 0.05% MWCNT, but as the MWCNT content increases the strength drops; hence, the optimum content is 0.05% [88]. It was determined from various sources that the maximum concentration of CNT was 0.1%.
The compressive strength of UHSC rises by up to 2.1%. The CNTs were functionalized at 80°C in a solution of concentrated H2SO4 and HNO3. Acetone washing after functionalization was found to be necessary to remove carboxylated carbonaceous fragments that could adversely influence the mechanical strength of the concrete through their interaction with cement hydration products [89]. Employing cement particles as catalysts and support material made it possible to synthesize novel hybrid nanostructured materials in which CNTs and carbon nanofibers (CNFs) are connected to cement particles, enabling good dispersion of the carbon nanomaterials in the cement. Two chemical vapour deposition reactors were used to create this hybrid material, which is easily included in the production of commercial cement [90, 91]. The fluidized bed reactor's product yield was significantly enhanced. Research using TEM, SEM, XRD, thermogravimetric analysis, and Raman measurements showed the process for producing CNTs and CNFs at low temperatures and high yields to be highly effective. After 28 days of curing in water, tests on the physical characteristics of the cement hybrid material paste revealed up to a twofold increase in compressive strength and a 40-fold increase in electrical conductivity [90, 91]. The increased compressive strength may be related to the fact that the CNTs, being evenly distributed throughout the cement mortar, caused microcracks to initiate and spread more slowly. The mechanical strengths of the mortar may also be improved because the CNTs improve the adhesion between the hydration products. Additionally, it is possible that the presence of CNTs led to the production of additional CSH and the consumption of CH [85, 86].

### 3.6. Nanocellulose (NC)

According to most studies, the compressive strength of cementitious materials can be increased to some extent with nanocellulose particles. Table 6 shows NC's effect on the compressive strength of cementitious materials.

Table 6 Twenty-eight-day compressive strength of concrete made with nanocellulose [92–94]. Materialsw/c ratioReplacement of NC (%)Compressive strength increment (%)Maximum replacement of NC (%)ReferencesConcrete0.30.05, 0.1, 0.2, and 0.325 ⟶ 17 ⟶ 11 ⟶ 30.05[92]Concrete0.150.005, 0.01, and 0.0158 ⟶ 3 ⟶ 10.005[93]Concrete0.350.2 and 0.110 ⟶ 17[94]

The compressive strength of concrete with a cellulose content of 0.05% and 0.10% increased by 26% and 17%, respectively. In contrast, combinations with 0.20% and 0.30% NC had less apparent impacts, rising by 11% and 3%. The higher compressive strength in the presence of nanocellulose may be linked to the effect of NC on hydration kinetics and hydrate characteristics [92]. The results demonstrate that NC mortar samples improve compressive strength after 7 days of curing; the UHP mortar sample, which included 0.005 wt% NC, had the highest compressive strength value, 184 MPa. In this case, the compressive strength was roughly 8% higher than the control mortar and 4%–8% higher than the 0.01% and 0.015% NC mortars. Because of its high specific surface, NC probably provides close spacing and strong adhesion to the cement matrix, boosting density and influencing compressive strength development [93]. Adding 0.2% and 0.1% NC raises the strength of the material by 10% and 17%, respectively. The hydrophilic properties of CNCs, which result in increased hydration products, may be responsible for the increase in compressive strength.
Furthermore, using 0.2% and 0.1% CNC decreases the cement volume while maintaining compressive strength [94]. With an optimum dose of NC of 0.1%, the compressive strength increases by up to 17%, while a higher amount of NC lowers the mechanical characteristics. The greater compressive strength in the presence of cellulose filaments (CF) may be related to the effect of nanocellulose on the hydration kinetics and the properties of hydrates. According to research [92], the hydrophilic and hygroscopic nanocellulose may add more water to the cementitious matrix, raising the degree of hydration and improving mechanical performance.
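All of the dosages in this section are replacements of cement by weight at a fixed w/b ratio. The following batching sketch illustrates that convention only; the 400 kg/m3 binder content, 2% nanosilica dose, and w/b of 0.40 are assumed values, not figures from the cited studies:

```python
def nano_mix(binder_kg_m3: float, replacement_pct: float, w_b: float) -> dict:
    """Split a total binder content into cement and nanomaterial by weight,
    keeping the water-to-binder ratio fixed."""
    nano = binder_kg_m3 * replacement_pct / 100.0
    cement = binder_kg_m3 - nano
    water = binder_kg_m3 * w_b
    return {"cement_kg": cement, "nano_kg": nano, "water_kg": water}

# Illustrative batch: 400 kg/m3 binder, 2% nanosilica replacement, w/b = 0.40.
print(nano_mix(400.0, 2.0, 0.40))
# -> {'cement_kg': 392.0, 'nano_kg': 8.0, 'water_kg': 160.0}
```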
## 4. Flexural Strength of Nanoconcrete Made with Nanomaterial

### 4.1. Nanometakaolin

As shown in Table 7, NMK has the potential to improve cementitious material flexural strength significantly. The optimum concentration of NMK is primarily between 8% and 10% [49, 95].
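The flexural strengths compared in this section are moduli of rupture from prism bending tests. As a reminder of how such a value is obtained, here is a minimal sketch assuming a center-point (three-point) loading arrangement; the specimen size and failure load are hypothetical, chosen only to give a value near the plain-concrete capacities quoted later:

```python
def modulus_of_rupture(load_n: float, span_mm: float, width_mm: float, depth_mm: float) -> float:
    """Flexural strength (MPa) of a prism under center-point loading: 3PL / (2bd^2)."""
    return 3.0 * load_n * span_mm / (2.0 * width_mm * depth_mm ** 2)

# Hypothetical 100 x 100 x 400 mm prism tested over a 300 mm span, failing at 10.8 kN.
print(f"{modulus_of_rupture(10_800, 300, 100, 100):.2f} MPa")  # -> 4.86 MPa
```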
The results showed that fiber-reinforced cementitious composite (FRCC) with 10% NMK exhibited a 67% increase in flexural strength after 28 days compared with the control FRCC; however, flexural strength rapidly decreased when the NMK concentration was increased further. NMK added to SCC as a partial replacement of cement by weight at four percentages (0%, 1%, 3%, and 5%) increased flexural strength by up to 33.8% [51].

Table 7 Twenty-eight-day flexural strength of concrete made with nanometakaolin. Materialsw/b ratioReplacement of NMK (%)Flexural strength increment (%)Maximum replacement of NMK (%)ReferencesCement paste0.32–1414 ⟶ 36 ⟶ −2510[33, 52]0.33–0.492–143.9 ⟶ 58 ⟶ 38[38]0.52.5–106.4 ⟶ 29 ⟶ 197.50[44]0.65–158–45[46]Ordinary concrete0.50.10.2587[47]0.533–100–46.810[48]RPC0.1752–53.16–7.355[71]FRCC0.32–1416 ⟶ 67 ⟶ 5410[55]SCC1–514.5 ⟶ 33.8[51]SCHPC0.351.25–3.7510 ⟶ 27.5[50]The minus (−) sign implies decreasing the given attribute calculated concerning the reference one.

According to the findings, the optimal dose of NMK is 10%, which enhances flexural strength by 46.8%; however, increasing the concentration of NMK further diminishes mechanical characteristics. The results also illustrate the relationship between flexural strength and curing age for blended cement mortars containing NMK: as the curing age and NMK addition rise, flexural strength also increases. At 60 days of hydration, the flexural strength increases as the NMK addition increases up to 7.5% and then decreases at a 10% addition. The pozzolanic reaction of FA and NMK with the free lime released during OPC hydration, together with the physical filling of the NMK platelet particles inside the interstitial spaces of the FA-cement skeleton, is what causes the increase in flexural strength; moreover, the nanosized NMK platelet particles enhance the interfacial zone. At 7.5% NMK addition, the increase in flexural strength was 2.3-fold. The reduction of flexural strength at later ages and at 10% NMK addition may be due to the agglomeration of NMK particles around cement grains.

### 4.2. Nanosilica

Most researchers agreed that using nanosilica particles could somewhat improve the flexural strength of cementitious materials. Table 8 summarizes the effect of NS on the flexural strength of cementitious materials. According to Jalal et al. [46], high-performance SCC with 2% NS and 10% silica fume (SF) exhibited better flexural strength than the reference mix at 28 days, with an optimum increase of 59%; however, the gain in flexural strength was shown to diminish as the age of the concrete increased. They also discovered that NS-only concrete had considerably lower flexural strength than NS-plus-SF-admixed concrete. Ordinary concrete with a w/c ratio of 0.36 was studied at cement replacement rates of 1%–2.5%, with a maximum replacement rate of 2%. Because increasing the NS content further diminishes strength, the study determined that a 2% replacement is the best option [61].
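The "2.3-fold" gain quoted above is an absolute ratio rather than a percentage increment like the other entries in Table 7; a one-line conversion (an illustrative helper, not taken from any cited study) makes the two comparable:

```python
def fold_to_increment(fold: float) -> float:
    """Convert an absolute strength ratio (e.g. 2.3-fold) to a percentage increment."""
    return (fold - 1.0) * 100.0

print(fold_to_increment(2.3))  # -> 130.0 (% increase over the reference mortar)
```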
It then decreased to 22% for 3% NS with further addition of NS at about 3%, indicating maximum content is 2% [71]. According to Li et al. [97], adding 1% NS increased flexural strength by 41% and 35% for UHPC under the combined curing conditions of 2 days heat and 26 days conventional curing, respectively, at 0.16 and 0.17 w/b ratios control UHPC matrix under the collective curing conditions of 2 days heat and 26 days conventional curing. On the other hand, adding more than 1% NS resulted in a slight increase in flexural strength. Table 8 depicts the effects of NS at different doses throughout 28 days. According to the study, up to 3% of NS doses can increase mechanical qualities, which might be related to pore structure refinement, pozzolanic effect, and filling effect. Flexural strength rises as the NS percentage rises, reaching 34.6% for an NS content of 4%.Table 8 Twenty-eight-day flexural strength of cementitious materials with NS. Materialsw/c ratioReplacement of NS (%)Flexural strength increment (%)Maximum replacement of NS (%)ReferencesOrdinary concrete (sulfuric-acid-rain condition)0.361–2.55 ⟶ 16.8 ⟶ −0.262[61]UHPC0.44–50 ⟶ 34.6 ⟶ −26.94[96]UHPC with 2.5% steel fibers0.44 NS + 2.5 steel fibers10.4–24[96]HPC0.310.5, 1, 1.5, 2, 2.5, and 310.7 ⟶ 21.1 ⟶ 28.8 ⟶ 36.5 ⟶ 29 ⟶ −222[71]The minus (−) imply decreasing the given attribute calculated concerning the reference one.NS’s sizeable specific surface area and high pozzolanic activity boost the strength gained at young ages when it is added to mortars and concrete. More CSH gel and compact structures are produced due to the above-mentioned process. Beyond the 2% addition, the amount of nanosilica exceeds the amount of released lime, which lessens the pozzolanic activity. It may be established that 2% of nanosilica is the ideal amount in high performance steam-cured concrete. ### 4.3. Nanotitanium Dioxide The effects of NT on the flexural strength of cementitious composites are shown in Table9.Table 9 Twenty-eight-day flexural strength of cementitious materials with NT. Materialsw/c ratioReplacement of NS (%)Flexural/split tensile strength increment (%)Maximum replacement of NT (%)ReferencesMortar0.50.25 ⟶ 0.75 ⟶ 1.25 ⟶ 1.759 ⟶ 15.1 ⟶ 13.2 ⟶ 7.50.75[62]Mortar0.581–5613[98]RPC1, 3, 547.073[64]HPC0.251.5018[65]SCC with GGBS0.41 ⟶ 2 ⟶ 3 ⟶ 45.5 ⟶ 14.8 ⟶ 27.7 ⟶ 16.63[66]RPC1 ⟶ 3 ⟶ 55.97 ⟶ 12.26 ⟶ 10.325[67]SCGPC2 ⟶ 4 ⟶ 66.8 ⟶ −8.18 ⟶ −2.582[69]The minus (−) sign denotes decreasing the estimated attribute concerning the reference one.With a water-to-cement ratio of 0.5, with four different amounts of 0.25%, 0.75%, 1.25%, and 1.75% by weight cement, TiO2 nanoparticles with an average diameter of 15 nm were used. The 0.75% NT content raised the mortar flexural strength by 15.1% after 28 days; however, increasing the NT content lowered the power; thus, the optimum value is 0.75% [62]. The effects of nano-TiO2 (NT) with a percentage replacement of cement of 1%, 2%, 3%, 4%, and 5%. The results demonstrate that 3% NT increases tensile/flexural strength by 61% (i.e., toughness) and that increasing the NT level reduces power; thus, the optimum content is 3% [98]. Because NT particles can improve the ITZ of cementitious material, Li et al. [64] found that adding NT to cementitious composites can significantly increase long-term and short-term strength. With 3% of NT, the flexural strength increases to 47% [64].HPC increases flexural strength by 18% [65] with a 1.5% NT and a water-to-binder ratio of 0.25. 
The effect of different NT dosages on the flexural strength of cementitious composites was evaluated by Nazari and Riahi [66]. They discovered that adding 3 wt% NT raised the 28-day flexural strength by 27.7%. The presence of NSCT does not affect the compressive strength of the composites during the 3-day curing phase, but it does over the 28-day curing phase. The NSCTRRPC compressive strength peaked at 111.75 MPa after 28 days, an increase of 12.26% (12.2 MPa) over ordinary RPC [99]. This is because NSCT generates more negative charges on the surface of NT, making it simpler to disperse in water via electrostatic repulsion [67]. When titanium oxide was introduced to specimens with wollastonite, the tensile strength increased initially and then dropped; the flexural strength of samples containing titanium oxide but no wollastonite was superior to that of the wollastonite specimens. The best combination was found to be 4% NT without wollastonite [69]. Based on the previous studies, the improvement in the bending strength of NT cementitious composites could be related to the following factors. On the one hand, cement hydration products deposit on the nanoparticles because of the particles' high surface activity, and as a result, the nanoparticles become the nuclei of agglomerates; this phenomenon is known as the nucleation effect. NT dispersed in the matrix in this way can improve the matrix compactness and microstructure [100, 101]. On the other hand, NT has a nanocore action that induces fracture deflection and limits crack extension [67].

According to the study, the optimum dosage of NT for RPC is 3%, with flexural strength gains of up to 87%; increasing the NT content further diminishes the mechanical characteristics. In the opinion of the reviewers, the following elements may account for the enhanced flexural/split tensile strength of NT-engineered cementitious composites. The nucleation effect is where cement hydration products initially deposit on nanoparticles due to their extensive surface activity and then multiply to generate conglomerations with the nanoparticles functioning as the "nucleus"; in this way, the matrix's microstructure and compactness can be improved by the NT disseminated throughout it [102]. In addition, the nanocore action of NT may result in fracture deflection and stop cracks from expanding, generating a toughening effect.

### 4.4. Nanoalumina

Most studies agreed that adding NA particles to cementitious composites could improve their flexural strength. Table 10 summarizes the effect of NA on the flexural strength of cementitious materials.

Table 10 Twenty-eight-day flexural strength of cementitious materials with NA. Materialsw/c ratioReplacement of NA (%)Flexural strength increment (%)Maximum replacement of NA (%)ReferencesMortar with 10% of black rice husk ash0.491, 2, 3−0.1 ⟶ 11.6 ⟶ 16.73[79]Concrete2, 35.16 ⟶ 6.73[81]Mortar0.50.5, 1, 1.510 ⟶ 12 ⟶ 131.5[82]The minus (−) sign implies decreasing the given attribute calculated concerning the reference one.

The addition of rice husk ash boosted the 28-day flexural strength of the samples, with 10% replacement of rice husk and 3% nanoalumina giving the greatest flexural strength gain of about 17%; the flexural strength decreases as the rice husk content increases [79]. It was discovered that raising the NA content to 2% and 3% increased the flexural strength in nonfiber designs (equivalent to a 5% increase).
When the curing time was increased to 28 days and the nanoalumina concentration was 2% and 3%, this value climbed to 5.5 MPa (equivalent to a 5% increase) and 5.58 MPa (equal to a 7% increase) [81]. At 7 days of age, with nanoalumina contents of 2% and 3%, this parameter had reached 5.21 MPa (corresponding to a 5% increase) and 5.33 MPa (equivalent to a 7% increase), respectively. This trend can be explained by the development of pozzolanic reactions and the densification of the mortar matrix microstructure, which improve the transitional area and, as a result, the adhesion between the fibers and the matrix, as well as the resistance of the fibers to elongation during flexural loading [81]. Adding 1% and 1.5% nanopowder reduced the flexural strength by around 10%, regardless of the nanopowder amount [82].

According to the findings, the optimum concentration of nanoalumina was 3%, with a 16.7% increase in flexural strength. The development of pozzolanic reactions and the densification of the mortar matrix microstructure, which enhance the transitional area, improve the matrix's adhesion, and strengthen the resistance to elongation during flexural loading, can be interpreted as the cause of the increase in flexural strength obtained by adding nanoalumina [81].

### 4.5. Carbon Nanotubes

Most researchers agreed that CNT particles might improve the bending strength of cementitious materials to some extent. Table 11 shows the effect of adding CNT on the bending strength of cementitious materials.

Table 11 Twenty-eight-day flexural strength of cementitious materials with CNT. Materialsw/b ratioReplacement of MWCNT (%)Flexural strength increment (%)Maximum replacement of MWCNT (%)ReferencesConcrete0.25 and 0.53 ⟶ 10.4[84]Mortar0.550.05, 0.1, and 0.21, 7, and 28[85]Concrete0.40.1, 0.2, 0.3, 0.4, and 0.510.25 ⟶ 15.4 ⟶ 19.23 ⟶ 20.5 ⟶ −10.4[86]SCC0.450.1, 0.3, and 0.521.2 ⟶ 32.9 ⟶ 38.6[87]UHSC0.20.05, 0.1, and 0.157.5 ⟶ 3.33% ⟶ −0.660.05[88]The minus (−) sign implies decreasing the given attribute calculated concerning the reference one.

The bending strength of mortars was significantly improved when pristine MWCNTs were used at all contents: incorporating 0.25 wt% MWCNTs resulted in a 3% increase in flexural strength, while 0.50 wt% MWCNTs resulted in a 10.4% increase in bending strength [84]. The cement mortar bending strength was also improved by adding CNTs [103]; as the added CNT content increases, the flexural strength improves by up to 28% for 0.2% CNTs [85]. The percentage increase in flexural strength for 0.1% and 0.4% CNTs was 10.25% and 20.5%, respectively, at 28 days; a further increase in content to 0.5% caused the flexural strength to decrease, indicating that the optimum level was 0.4%. The dispersion of the CNTs is primarily responsible for the improvement; the larger amount of highly stiffened CSH formed in the presence of CNTs is the second explanation [86]. The flexural strength of SCC with 0.1%, 0.3%, and 0.5% MWCNT increased by up to 38.6% [87]. In the [88] study, the flexural strength rose by 3.33% for 0.1% MWCNT; as the content of MWCNT grows, the flexural strength falls; therefore, 0.1% is the ideal content [88].

According to the various studies, the ideal amount of CNT is 0.4%, at which flexural strength improves by up to 20.5%. The improvement in flexural and tensile strength is due to the bridging of crack surfaces in the presence of CNTs.
It is also noted that with CNTs, the concrete density decreases, with the density value reducing from 310 to 290 kg/m3; the reason is decreased pore-wall discharge and a more uniform pore size. Also, the workability of the paste was increased because a superplasticizer was used. The basic properties of cement and the process of gaining strength in concrete involve components of different shapes: CSH is a cloud-like structure, CH is like a stone rose, and calcium sulfoaluminate hydrates are needle-like structures, and these differently shaped phases produce pores. Homogeneous dispersion of the CNTs made the concrete denser because the dispersed CNTs filled the voids, increasing its crack resistance [86].

### 4.6. Nanocellulose

Table 12 shows the effect of NC on the flexural strength of cementitious materials.

Table 12 Flexural strength of cementitious materials with NC at 28 days. Cementitious materialsw/b ratioReplacement of NC (%)Flexural strength increment (%)Maximum replacement of NC (%)ReferencesConcrete0.30.05, 0.1, 0.2, and 0.316 ⟶ 19 ⟶ 21 ⟶ 200.20[92]Concrete0.150.005, 0.01, and 0.01537.3 ⟶ 18.53 ⟶ 1.90.005[93]Concrete0.350.25, 0.5, 0.75, 1, 1.75, 0.5, 1, and 1.520, 12.5, 10, 7.5, 6.25, and 5[104]

The flexural capacity of the reference concrete was 4.84 MPa, while the flexural capacities at NC concentrations of 0.05%, 0.10%, 0.20%, and 0.30% as a replacement for cement were 5.62, 5.75, 5.84, and 5.81 MPa, respectively. This translates to increases in flexural strength of 16%, 19%, 21%, and 20%, respectively, and reveals that increasing the content of CF increases flexural strength by up to 21% for a 0.2% NC content. Increasing the NC content further diminishes flexural strength; thus, the ideal content is 0.2% [92]. The results showed that the sample with 0.005% NC has a flexural strength around 37.3% higher than the control mix; however, with further increases in the NC percentage the flexural capacity decreases, so the optimum NC content is 0.005% [93]. Adding nanocellulose improves the mechanical qualities of the material: adding NC as a 0.2% replacement for cement increases flexural strength by 20% [105], and increasing the NC content further reduces flexural strength but still leaves it higher than the reference mix; thus, the optimum content is 0.2% [104].

According to numerous studies, the ideal content of NC is 0.2%, and flexural strength rises by up to 20%. The above results reveal that the effect of CF on flexural capacity is two-fold: (1) an increased first peak strength (rupture strength) associated with the nanometric properties of CF, which alters the properties of the cement paste matrix at the microstructure level (as further discussed afterward); and (2) an enhanced toughness associated with the high aspect ratio and tensile strength of CF, which contribute toward maintaining the peak load over a prolonged range of microdeflections prior to failure, thereby increasing the cracking resistance. The observed effect of CF on composite fracture behavior is driven by the contribution of the filaments as a nanoreinforcement. Possible mechanisms involved in the CF effect on composite fracture behavior may include: (1) filament bridging capacity driven by their high aspect ratio and fibrillated morphology, (2) filament resistance to rupturing owing to their tensile properties, and (3) the filament-matrix interfacial bond stemming from the potential interaction between the omnipresent –OH groups on the CF surface and cement hydrates by hydrogen bonding.
In this mechanism, irrespective of the effect of CF on peak flexural capacity, the high probability of the fibers intercepting microcracks may play a favorable role in delaying the matrix fracture [92].
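As a consistency check on how the increments in Table 12 follow from the measured capacities, the nanocellulose flexural values reported in [92] and quoted above reproduce the stated percentages using the same relative-increment formula applied to the compressive tables:

```python
def increment_pct(f_nano: float, f_ref: float) -> float:
    """Percentage gain in flexural capacity over the reference concrete."""
    return (f_nano - f_ref) / f_ref * 100.0

f_ref = 4.84                                                      # MPa, reference concrete [92]
nc_capacities = {0.05: 5.62, 0.10: 5.75, 0.20: 5.84, 0.30: 5.81}  # % NC -> flexural capacity (MPa)

for dose, f in nc_capacities.items():
    print(f"{dose:.2f}% NC: +{increment_pct(f, f_ref):.0f}%")
# -> 0.05% NC: +16%, 0.10% NC: +19%, 0.20% NC: +21%, 0.30% NC: +20%
```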
The flexural strength of UHPC with a w/c ratio of 0.4 and nanosilica replacement with cement 4%–5% increases to 34.6%, and at 5% replacement of NS, the flexural strength decreases to 26.9%; thus, in this review, it is concluded that the optimum content of NS is 4%, and when it is combined with 2.5% steel fiber and 4% NS, the strength increases to 34.6%. In the high-performance concrete with a w/c ratio of 0.31 and NS replacement varying from 0.5% to 3%, flexural strength increased from 10.7% for 0.5% NS to 36.5% for 2% NS with a 0.5% increment. It then decreased to 22% for 3% NS with further addition of NS at about 3%, indicating maximum content is 2% [71]. According to Li et al. [97], adding 1% NS increased flexural strength by 41% and 35% for UHPC under the combined curing conditions of 2 days heat and 26 days conventional curing, respectively, at 0.16 and 0.17 w/b ratios control UHPC matrix under the collective curing conditions of 2 days heat and 26 days conventional curing. On the other hand, adding more than 1% NS resulted in a slight increase in flexural strength. Table 8 depicts the effects of NS at different doses throughout 28 days. According to the study, up to 3% of NS doses can increase mechanical qualities, which might be related to pore structure refinement, pozzolanic effect, and filling effect. Flexural strength rises as the NS percentage rises, reaching 34.6% for an NS content of 4%.Table 8 Twenty-eight-day flexural strength of cementitious materials with NS. Materialsw/c ratioReplacement of NS (%)Flexural strength increment (%)Maximum replacement of NS (%)ReferencesOrdinary concrete (sulfuric-acid-rain condition)0.361–2.55 ⟶ 16.8 ⟶ −0.262[61]UHPC0.44–50 ⟶ 34.6 ⟶ −26.94[96]UHPC with 2.5% steel fibers0.44 NS + 2.5 steel fibers10.4–24[96]HPC0.310.5, 1, 1.5, 2, 2.5, and 310.7 ⟶ 21.1 ⟶ 28.8 ⟶ 36.5 ⟶ 29 ⟶ −222[71]The minus (−) imply decreasing the given attribute calculated concerning the reference one.NS’s sizeable specific surface area and high pozzolanic activity boost the strength gained at young ages when it is added to mortars and concrete. More CSH gel and compact structures are produced due to the above-mentioned process. Beyond the 2% addition, the amount of nanosilica exceeds the amount of released lime, which lessens the pozzolanic activity. It may be established that 2% of nanosilica is the ideal amount in high performance steam-cured concrete. ## 4.3. Nanotitanium Dioxide The effects of NT on the flexural strength of cementitious composites are shown in Table9.Table 9 Twenty-eight-day flexural strength of cementitious materials with NT. Materialsw/c ratioReplacement of NS (%)Flexural/split tensile strength increment (%)Maximum replacement of NT (%)ReferencesMortar0.50.25 ⟶ 0.75 ⟶ 1.25 ⟶ 1.759 ⟶ 15.1 ⟶ 13.2 ⟶ 7.50.75[62]Mortar0.581–5613[98]RPC1, 3, 547.073[64]HPC0.251.5018[65]SCC with GGBS0.41 ⟶ 2 ⟶ 3 ⟶ 45.5 ⟶ 14.8 ⟶ 27.7 ⟶ 16.63[66]RPC1 ⟶ 3 ⟶ 55.97 ⟶ 12.26 ⟶ 10.325[67]SCGPC2 ⟶ 4 ⟶ 66.8 ⟶ −8.18 ⟶ −2.582[69]The minus (−) sign denotes decreasing the estimated attribute concerning the reference one.With a water-to-cement ratio of 0.5, with four different amounts of 0.25%, 0.75%, 1.25%, and 1.75% by weight cement, TiO2 nanoparticles with an average diameter of 15 nm were used. The 0.75% NT content raised the mortar flexural strength by 15.1% after 28 days; however, increasing the NT content lowered the power; thus, the optimum value is 0.75% [62]. The effects of nano-TiO2 (NT) with a percentage replacement of cement of 1%, 2%, 3%, 4%, and 5%. 
The results demonstrate that 3% NT increases tensile/flexural strength (and hence toughness) by 61% and that higher NT levels reduce the strength; thus, the optimum content is 3% [98]. Because NT particles can improve the ITZ of cementitious materials, Li et al. [64] found that adding NT to cementitious composites can significantly increase both long-term and short-term strength. With 3% NT, the flexural strength increases by up to 47% [64]. HPC with 1.5% NT and a water-to-binder ratio of 0.25 shows an 18% increase in flexural strength [65]. The effect of different NT dosages on the flexural strength of cementitious composites was evaluated by Nazari and Riahi [66], who found that adding 3 wt% NT raised the 28-day flexural strength by 27.7%. The presence of NSCT does not affect the compressive strength of the composites during the 3-day curing phase, but it does over the 28-day curing phase. The NSCTRRPC compressive strength peaked at 111.75 MPa after 28 days, an increase of 12.26% (12.2 MPa) above the ordinary RPC [99]. This is because NSCT generates more negative charges on the surface of NT, making it simpler to disperse in water via electrostatic repulsion [67]. When titanium oxide was introduced to specimens containing wollastonite, the tensile strength first increased and then dropped, and the flexural strength of samples containing titanium oxide but no wollastonite was superior to that of the wollastonite mixes; the best combination was found to be 4% NT without wollastonite [69]. Based on the previous studies, the improvement in the bending strength of NT cementitious composites could be related to the following factors. On one hand, cement hydration products deposit on the nanoparticles because of the particles' high surface activity, and as a result the nanoparticles become the nuclei of agglomerates; this phenomenon is known as the nucleation effect. NT dispersed in the matrix in this way can improve the matrix compactness and microstructure [100, 101]. NT also has a nanocore action that induces fracture deflection and limits crack extension [67]. According to the study, the optimum dosage of NT for RPC is 3%, with reported flexural strength gains of up to 87%; increasing the NT content further diminishes the mechanical characteristics. In the reviewers' opinion, the following elements may account for the enhanced flexural/split tensile strength of NT-engineered cementitious composites. The nucleation effect is where cement hydration products initially deposit on nanoparticles due to their extensive surface activity and then multiply to generate conglomerations with the nanoparticles functioning as the "nucleus"; in this way, the matrix's microstructure and compactness can be improved by the NT disseminated throughout it [102]. On the other hand, the nanocore action of NT may result in fracture deflection and stop cracks from expanding, generating a toughening effect. ## 4.4. Nanoalumina Most studies agreed that adding NA particles to cementitious composites could improve their flexural strength. Table 10 summarizes the reported effects of NA on the flexural strength of cementitious materials.Table 10 Twenty-eight-day flexural strength of cementitious materials with NA.
Materialsw/c ratioReplacement of NA (%)Flexural strength increment (%)Maximum replacement of NA (%)ReferencesMortar with 10% of black rice husk ash0.491, 2, 3−0.1 ⟶ 11.6 ⟶ 16.73[79]Concrete2, 35.16 ⟶ 6.73[81]Mortar0.50.5, 1, 1.510 ⟶ 12 ⟶ 131.5[82]The minus (−) sign indicates a decrease in the given attribute relative to the reference mix.The addition of rice husk ash boosted the 28-day flexural strength of the samples, with 10% rice husk ash replacement and 3% nanoalumina giving the greatest flexural strength gain of about 17%; the flexural strength decreases as the rice husk ash content increases [79]. In nonfiber mixes, raising the NA content to 2% and 3% increased the flexural strength. At 28 days, the flexural strength reached 5.5 MPa (a 5% increase) and 5.58 MPa (a 7% increase) for 2% and 3% nanoalumina, respectively; at 7 days, the corresponding values were 5.21 MPa (a 5% increase) and 5.33 MPa (a 7% increase) [81]. This trend can be explained by the development of pozzolanic reactions and the densification of the mortar matrix microstructure, which improve the transition zone and hence the adhesion between fibers and matrix, as well as the elongation capacity of the fibers during flexural loading [81]. In another study, the flexural strength was reduced by around 10% by adding 1% and 1.5% nanopowder, regardless of the nanopowder amount [82].According to the findings, the optimum concentration of nanoalumina was 3%, with a 16.7% increase in flexural strength.The development of pozzolanic reactions and the densification of the mortar matrix microstructure, which enhance the transition zone and, in turn, the matrix's adhesion properties while also strengthening the fiber elongation during flexural loading, can be interpreted as the cause of the increase in flexural strength obtained by adding nanoalumina [81]. ## 4.5. Carbon Nanotubes Most researchers agreed that CNT particles might improve the bending strength of cementitious materials to some extent. Table 11 shows the effect of CNT addition on the bending strength of cementitious materials.Table 11 Twenty-eight-day flexural strength of cementitious materials with CNT. Materialsw/b ratioReplacement of MWCNT (%)Flexural strength increment (%)Maximum replacement of MWCNT (%)ReferencesConcrete0.25 and 0.53 ⟶ 10.4[84]Mortar0.550.05, 0.1, and 0.21, 7, and 28[85]Concrete0.40.1, 0.2, 0.3, 0.4, and 0.510.25 ⟶ 15.4 ⟶ 19.23 ⟶ 20.5 ⟶ −10.4[86]SCC0.450.1, 0.3, and 0.521.2 ⟶ 32.9 ⟶ 38.6[87]UHSC0.20.05, 0.1, and 0.157.5 ⟶ 3.33% ⟶ −0.660.05[88]The minus (−) sign indicates a decrease in the given attribute relative to the reference mix.The bending strength of mortars was significantly improved for all contents of pristine MWCNTs: incorporating 0.25 wt% MWCNTs resulted in a 3% increase in flexural strength, while 0.50 wt% MWCNTs resulted in a 10.4% increase in bending strength [84]. The cement mortar bending strength was likewise improved by adding CNTs [103]; as the added CNT content increases, the flexural strength improves by up to 28% at 0.2% CNTs [85]. The percentage increase in flexural strength for 0.1% and 0.4% CNTs was 10.25% and 20.5%, respectively, at 28 days. A further increase in the content to 0.5% caused the flexural strength to decrease, indicating that the optimum range was 0.4%.
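As an illustration of how an optimum dosage is read off such a dose-response series, the short Python sketch below applies the idea to the CNT flexural data quoted above for concrete at w/b = 0.4 [86]; the helper function name is hypothetical and not part of any cited study.

```python
# Minimal sketch: identify the optimum dose from a dose-response series.
# Data: 28-day flexural strength increments (%) for CNT-modified concrete, w/b = 0.4 [86].
def optimum_dose(dose_response):
    """Return the (dose, increment) pair with the largest strength increment."""
    return max(dose_response.items(), key=lambda item: item[1])

cnt_flexural = {0.1: 10.25, 0.2: 15.4, 0.3: 19.23, 0.4: 20.5, 0.5: -10.4}  # dose % -> gain %

dose, gain = optimum_dose(cnt_flexural)
print(f"Optimum CNT dose: {dose}% (flexural strength +{gain}%)")
# -> Optimum CNT dose: 0.4% (flexural strength +20.5%), matching the discussion above.
```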
The dispersion of CNTs is primarily responsible for the increase in results, and the increased amount of highly stiffened CSH in the presence of CNTs is the second explanation for the improvement in properties [86]. The flexural strength of SCC with 0.1%, 0.3%, and 0.5% MWCNT increased by up to 38.6% [87]. In another study the flexural strength rose by 3.33% for 0.1% MWCNT; as the MWCNT content grows, the flexural strength falls, so 0.1% was reported as the ideal content [88].Across the literature, the ideal amount of CNT is 0.4%, and flexural strength improves by up to 20.5%.The improvement in flexural and tensile strength is due to the bridging of crack surfaces in the presence of CNTs. It is also noted that with CNTs the concrete density decreases, with the reported density value reducing from 310 to 290 kg/m3, owing to reduced pore wall discharge and a more uniform pore size. The workability of the paste was also increased because a superplasticizer was used. The basic properties of cement and the strength-gaining process in concrete involve hydration products of different shapes: CSH has a cloud-like structure, CH resembles a "rose of stone," and calcium sulfoaluminate hydrates form needle-like structures, and these differently shaped phases produce pores. Homogeneous dispersion of CNTs made the concrete denser, as the dispersed CNTs filled voids and increased its crack resistance [86]. ## 4.6. Nanocellulose Table 12 shows the effect of NC on the flexural strength of cementitious materials.Table 12 Flexural strength of cementitious materials with NC at 28 days. Cementitious materialsw/b ratioReplacement of NC (%)Flexural strength increment (%)Maximum replacement of NC (%)ReferencesConcrete0.30.05, 0.1, 0.2, and 0.316 ⟶ 19 ⟶ 21 ⟶ 200.20[92]Concrete0.150.005, 0.01, and 0.01537.3 ⟶ 18.53 ⟶ 1.90.005[93]Concrete0.350.25, 0.5, 0.75, 1, 1.75, 0.5, 1, and 1.520, 12.5, 10, 7.5, 6.25, and 5[104]The flexural capacity of the reference concrete was 4.84 MPa, while the flexural capacities at NC concentrations of 0.05%, 0.10%, 0.20%, and 0.30% as a replacement for cement were 5.62, 5.75, 5.84, and 5.81 MPa, respectively. This translates to flexural strength increases of 16%, 19%, 21%, and 20%, respectively, showing that increasing the CF content raises flexural strength by up to 21% at a 0.2% NC content; increasing the NC content beyond this diminishes flexural strength, so the ideal content is 0.2% [92]. The results also showed that a sample with 0.005% NC has a flexural strength around 37.3% higher than the control mix; with further increases in NC content the flexural capacity decreases, hence the optimum NC content in that study is 0.005% [93]. Adding nanocellulose improves the mechanical qualities of the material. Adding NC as a 0.2% replacement for cement increases flexural strength by 20% [105]; increasing the NC content further reduces flexural strength but still leaves it higher than the reference mix, so the optimum range is 0.2% [104].According to numerous studies, the ideal content of NC is 0.2%, and flexural strength rises by up to 20%. The above results reveal that the effect of CF on flexural capacity is two-fold: (1) an increased first peak strength (rupture strength) associated with the nanometric properties of CF.
This alters the properties of the cement paste matrix at the microstructure level (as further discussed afterward); (2) an enhanced toughness associated with the high aspect ratio and the tensile strength of CF, which contribute toward maintaining the peak load over a prolonged range of microdeflections prior to failure, thereby increasing the cracking resistance. The observed effect of CF on composite fracture behavior is driven by the contribution of the filaments as a nanoreinforcement. Possible mechanisms involved in the CF effect on composite fracture behavior include: (1) filament bridging capacity driven by their high aspect ratio and fibrillated morphology, (2) filament resistance to rupturing owing to their tensile properties, and (3) the filament–matrix interfacial bond stemming from the potential interaction between the omnipresent –OH groups on the CF surface and cement hydrates by hydrogen bonding. In this mechanism, irrespective of the effect of CF on peak flexural capacity, the high probability of the fibers intercepting microcracks may play a favorable role in delaying the matrix fracture [92]. ## 5. Conclusion The development of early strength in concrete is enhanced by introducing nanomaterials, which positively impact the mechanical properties of cementitious materials; improvement in flexural and compressive capacity can already be observed at a very early stage. A key finding of this review is that the pozzolanic and filling effects, the nucleation effect, and crack bridging are the mechanisms that drive the strengthening action of nanomaterials. Mineral particles with a high specific surface area in cement mixtures need more water or plasticizer to keep the concrete workable. The literature suggests that the optimal percentage of nanometakaolin is 10%, which boosts compressive capacity by 63% and flexural capacity by 46.8%; however, a trade-off ensues, as the mechanical characteristics deteriorate with further increases in NMK concentration. According to this study, NS doses of up to 2% can improve compressive strength by 20.25%, and flexural strength improves by 34.6% at a 4% NS content. The character of NS as an activator also aids the hydration process, but if the NS dose exceeds 2%, the compressive and flexural capacities may be reduced. On the other hand, increasing the amount of NT increases compressive strength by up to 23% and flexural strength by up to 47%, and using TiO2 additionally helps reduce air pollution. Nanoalumina increased compressive capacity by 46% at a 1% replacement and flexural capacity by 16.7% at a 3% replacement. The optimal CNT concentration for SCC is 0.5%, which increases its compressive strength by up to 38.6%; flexural strength increases by up to 20.5% in ordinary concrete and by about 38% in SCC. Nanocellulose is a plant-derived polymer that is environmentally safe and nontoxic during implementation, and it significantly improves the mechanical qualities of concrete when substituted for cement: compressive strength increased by 25% with a 0.05% replacement of NC, and flexural strength increased by 21% with a 0.2% substitution of NC for cement. These increments are summarized in Figures 1 and 2.Figure 1 Twenty-eight-day compressive strength increase (%) [48, 61, 65, 78, 83, 92].Figure 2 Twenty-eight-day flexural strength increase (%) [48, 64, 79, 86, 92, 96].When NMK concrete is compared across all six nanomaterials (NMK, NS, NT, NA, CNT, and nanocellulose), it shows the largest gains, with compressive and flexural strength increasing by up to 63% and 36%, respectively.
Several conclusions have been offered in the literature, but only a few are backed by sufficient evidence, and the rest remain to be confirmed; this is left as future work to be carried out by the authors. A holistic mechanistic framework should therefore be built to determine the relationship between nanoscale phenomena and mechanical characteristics, together with models capable of quantitatively analyzing the effects of nanomaterials on composite properties. --- *Source: 1004597-2023-01-07.xml*
# Compressive and Flexural Strength of Concrete with Different Nanomaterials: A Critical Review

**Authors:** R. M. Ashwini; M. Potharaju; V. Srinivas; S. Kanaka Durga; G. V. Rathnamala; Anish Paudel

**Journal:** Journal of Nanomaterials (2023)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2023/1004597
--- ## Abstract With recent technological advances, adding nanomaterials as reinforcement in concrete has gained immense attention. This review paper aims to report these advances in the form of a one-stop reference covering methods that focus on improving the quality of traditional concrete. Nanoparticles—the elementary form of nanomaterials—have been shown to enhance the strength and longevity of concrete. Nanosilica, nanoalumina, nanometakaolin, carbon nanotubes, and nanotitanium oxide are modern nanomaterials for which there is strong evidence of enhanced concrete quality, supporting infrastructure building and long-term monitoring. Nanoconcrete—an exciting prospect extending the boundaries of traditional civil engineering—exhibits increased compressive and flexural strength using these elementary compounds. In particular, a rigorous survey of many articles reveals an increase in compressive strength from 20% to 63% and in flexural strength from 16% to 47% when cement is replaced with different nanomaterials at different percentages. --- ## Body ## 1. Introduction The most common traditional material required for infrastructure construction is the mixture of cement, fine aggregate, coarse aggregate, and water—popularly known as concrete. Concrete is a porous material whose durability, serviceability, mechanical characteristics, and microstructural features must be investigated [1]. Recent technological advances have enhanced several concrete properties beyond those of traditional concrete. Notably, reducing the water–cement (w/c) ratio [2] contributes to increased cement strength [3]. Additionally, mix proportions can be optimized by incorporating suitable nanoscale auxiliary materials [4–6]. Nanoconcrete employs constituent nanomaterials [7–13] that significantly improve the particle packing structure and hence the bulk characteristics. In addition to augmenting the properties of concrete, nanoparticles act as an excellent filler material. This review paper aims to report the advances in nanomaterial-enhanced concrete with respect to its compressive and flexural characteristics [14]. As this review shows, the construction industry has benefitted immensely from nanomaterials. In particular, nanomaterials in cement and concrete products such as nano-TiO2, nanoalumina, nanometakaolin, nano-SiO2, nanoclay [15–17], and carbon nanotubes (CNTs) have improved the overall characteristics [18], and their inherent filling ability contributes to increased durability [19]. In addition, nanomaterials have been demonstrated to enhance microstructural features that are not explored in conventional construction engineering but are a mainstream genre of research for contemporary investigations. A thorough review of prior research sheds some light on this area, but a detailed analysis is needed; an overarching framework that incorporates the characteristics of concrete containing NMK, TiO2, and nanocellulose is still lacking, which greatly motivates this work. This review aims to advance the use of nanomaterials in contributing to the flexural and compressive strength of concrete. ## 2. Production of Nanomaterials Even though the concept of creating nanomaterials through nanotechnology emerged in the late 1960s [20, 21], its use for strengthening the properties of concrete is relatively nascent, gaining momentum only in recent decades.
While any material can, in principle, be converted into nanoparticles, Bharadwaz et al. [22] pointed out that such particles—owing solely to their nanosize—have a stronger foothold as filler materials compared with micro-based components. The top-down strategy [23] is typically selected on the basis of nanoscale-behavior expertise, appropriateness, and cost [24, 25]. Defined as the process of reducing larger structures to the nanoscale—while retaining their original features or chemical composition even at the atomic level—the top-down approach provides robustness and applicability across a wide variety of domains [26]. In other words, mechanical attrition and etching processes break down bulk materials into nanoparticles [27]. The milling process is one of the strategies under the framework of top-down approaches [28, 29]. The fundamental feasibility and accessibility of a milling machine allow such size reduction without the need for chemicals or electronic devices, and the top-down strategy remains the prevailing route of nanofabrication today. However, the homogeneity and quality of the final product are inconsistent in the top-down approach.High-energy ball milling can synthesize nanomaterials, nanograins, nanocomposites, and nano-quasicrystalline materials. In particular, by adjusting the number and type of balls employed, the machine speed, and the type of container used, the resulting nanoparticles can alleviate the traditional shortcomings of the top-down approach [2, 30, 31]. During milling, plastic deformation, cold welding, and fracture are the factors influencing the deformation and transformation of materials into the required shape. Milling not only breaks materials into smaller parts; in the reactive ball milling technique it also blends several particles or materials and transforms them into new material phases, which is not possible with the dry and wet ball milling techniques. The end product of a dry ball milling operation consists of materials with flake-like (plate-shaped) strata, although refining can be carried out to obtain a finer structure depending on the type and size of the balls used and the milling technique. From a historical perspective, John Benjamin (1970) first applied milling to strengthen alloy components for high-temperature structures [32]; this led to the first use of milling as an effective technique to produce oxide particles.In contrast to the top-down approach, the bottom-up technique is employed when materials are assembled or self-assembled from atoms or molecular components. This methodology is suitable for most nanomaterials, such as nanosilica, nanoalumina, and nanoclay, which are widely used to improve the characteristics of concrete. The process is aptly termed molecular manufacturing or nanotechnology owing to its additional benefits in synthesis and chemical formulation [24]. The critical difference between the two schools of thought is that the bottom-up method produces a more uniform and defect-free nanoparticle structure than the top-down approach.
This is explained by the fact that nanocrystals can develop spontaneously when atoms or molecules are well-organized or in crystalline form. Benefits of this route include increased electronic conductivity, optical absorption, and chemical reactivity [25, 33].Additionally, a significant reduction in particle size—with well-ordered surface atoms—combined with the enormous change in surface energy leads to improved morphologies. Nanoparticles have therefore become ideal candidates for advanced applications in electronic components and biotechnology. In the longer run, nanomaterials find applications in boosting catalytic activity, wave-sensing capabilities, novel pigments, and self-healing and self-cleaning properties in paint. On the flip side, the bottom-up strategy has severe drawbacks, including its high operational costs, the need for specialized knowledge of chemical processing, and its restriction to laboratory-scale applications [22, 34, 35]. ## 3. Compressive Strength of Nanoconcrete Made with Nanomaterial Numerous nanomaterials have been incorporated into concrete since the development of nanotechnology in the building industry, and nanomaterials can be used to improve the durability and performance of concrete. The discussion of nanomaterials such as nanometakaolin, nanosilica, nanoalumina, nanotitanium dioxide, CNTs, and nanocellulose, as well as their use in all types of concrete, is developed in this section in terms of their mechanical properties. ### 3.1. Nanometakaolin NMK considerably enhanced the compressive strengths of cementitious materials, according to several studies [36–38]. Table 1 shows the 28-day compressive strength of cementitious materials treated with NMK. According to an extensive literature survey, adding the right amount of NMK to cementitious materials boosted their compressive strength [52, 53]; when the amount of NMK exceeds the optimum level, the compressive strength of the concrete reduces.Table 1 Twenty-eight-day compressive strength of concrete made with nanometakaolin. Cementitious materialsw/b ratioReplacement of NMK (%)Compressive strength increment (%)Maximum replacement of NMK (%)ReferencesCement paste0.274–158 ⟶ 20 ⟶ −1510[39]0.32–1616 ⟶ 54 ⟶ −10[30]0.33–0.492–148.6 ⟶ 59.4 ⟶ 46.6[38]0.443–1016 ⟶ 24.6 ⟶ 226[23]0.5–0.592–1010 ⟶ 63 ⟶ 20[40]Cement mortar0.32–109.3 ⟶ 22.6 ⟶ −1.34[41]0.45–1028 ⟶ 20[42]0.485354[43]0.52.5–1015 ⟶ 34 ⟶ 197.50[44]0.542–148.8 ⟶ 42 ⟶ 2010[45]0.65–1523 ⟶ 85[46]Ordinary concrete0.51026.32[47]0.533–1042.2 – 63.110[48]UHPC0.21–912 ⟶ −16.51[49]0.21–108.5 ⟶ −8.51[18]SCHPC0.351.25–3.7512 ⟶ 17[50]SCC1–512.7 ⟶ 42.2[51]The minus (−) sign indicates a decrease in the given attribute relative to the reference mix.Meanwhile, too much NMK causes a weak interfacial transition zone (ITZ) and fewer contact points, which act as binding sites between cement particles [48, 54]. The addition of an appropriate amount of NMK to concrete increases its compressive strength. Even when mix-proportion characteristics such as the w/c ratio and NMK content and the curing conditions are the same, the optimum NMK contents reported in the literature differ. As a result, more research should be conducted [22] to better understand the various effects of NMK on the compressive strength of concrete or mortar. In addition, some results are incongruent.
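The replacement percentages in Table 1 are by weight of cement; the following Python sketch shows that batching arithmetic with purely hypothetical quantities (the binder content and w/b ratio are illustrative and not taken from any cited study).

```python
# Minimal sketch of "replacement by weight of cement" batching (hypothetical quantities).
def replacement_mix(total_binder_kg, replacement_pct, w_b_ratio):
    """Split the binder into cement and nanomaterial and compute the mixing water."""
    nano = total_binder_kg * replacement_pct / 100.0   # nanomaterial mass, kg
    cement = total_binder_kg - nano                     # remaining cement, kg
    water = total_binder_kg * w_b_ratio                 # w/b ratio applied to the total binder
    return cement, nano, water

cement, nmk, water = replacement_mix(total_binder_kg=450, replacement_pct=10, w_b_ratio=0.5)
print(f"Cement {cement:.0f} kg, NMK {nmk:.0f} kg, water {water:.0f} kg (illustrative batch)")
# -> Cement 405 kg, NMK 45 kg, water 225 kg (illustrative batch)
```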
The compressive strength of air-cooled slag (ACS)-blended cement mortar gradually lowered when the NMK content increased [48]. At 28 days, the compressive strength of mortar containing 8% NMK was marginally lower than that of the control mortar. According to Norhasri et al. [23], adding NMK to ultra-high-performance concrete (UHPC) with a compressive strength of 150 MPa at 28 days did not improve the early compressive strength. Although UHPC with 1% NMK had the maximum compressive strength, it was somewhat lower than the control UHPC. The early stability of UHPC was not compromised by the addition of NMK [55–57]. Furthermore, as the amount of NMK in UHPC increased, the material strength of compression dropped [58]. The compressive strength improved by up to 63% when the nanometakaolin content increased to 10%, and as the nanometakaolin content increased further, the mechanical characteristics deteriorated [59].Almost all ages of hydration resulted in higher compression strength values than those reported for the typical ordinary Portland cement (OPC) paste. This increase in strength is primarily due to the pozzolanic reaction of free calcium hydroxide (CH), liberated from Portland cement hydration, with nanometakaolin to form excessive amounts of additional hydration products, primarily as calcium silicate hydrate (CSH) gel and crystalline CSH hydrates; these hydrates act as microfills, which reduce total porosity leading to an increase in the entire contents of binding centers in the specimens; consequently, an increase in the strength [60]. Thermal gravimetric analysis and scanning electron microscopy (SEM) were also used to monitor the hydration process (TG). These examinations indicate that NMK behaves as a filler to improve the microstructure and as an activator to promote the pozzolanic reaction [38]. ### 3.2. Nanosilica After 28 days, the compressive strength of concrete containing 3% NS improved by 20% compared to baseline concrete strength [43]. At 90 and 365 days, compressive strength testing revealed a similar pattern. Table 2 shows the existing compressive strength of cementitious materials with NS at 28 days.Table 2 Twenty-eight-day compressive strength of concrete made with nanosilica. Materialsw/c ratioReplacement of NS (%)Compressive strength increment (%)Maximum replacement of NS (%)ReferencesOrdinary concrete (sulfuric-acid-rain condition)0.32–610.5 ⟶ 15[44]0.361–2.59.2 ⟶ 20.25 ⟶ −52[61]Ordinary concrete0.30.5–11.75 ⟶ 2.5[47]0.350.5–11.05 ⟶ 1.84[47]0.40.5–13.3 ⟶ 7.2[47]0.450.5–15.59 ⟶ 10.39[47]0.40.75–320[43]0.4331[41]HPC0.4333[40]The minus (−) sign implies decreasing the given attribute calculated concerning the reference one.The colloidal 40–50 nm NS effectively increased the compressive strength of 3% NS high-performance concrete to 33.25% for 24 hr. This improvement in strength corresponds to the 40 MPa compression strength. Also, the flexural strength exceeded 13.5% after 7 days [40]. The concrete with 3% NS as a replacement with cement substitute exhibited more compressive strength and a longer lifespan. The compressive strength of 3% NS was enhanced by 31.42% compared to the reference concrete [41]. The kind of NS is colloidal at 15 nm, and the surface area, particularly of NS particles, directly impacts the concrete compressive strength. The NS particles with a surface area of 51.4 m2/g are the least beneficial in improving compressive strength. The w/b ratio influences the strength of NS concrete as well. 
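A quick consistency check on the figures quoted above for 3% NS high-performance concrete [40] is sketched below, under the assumption that 40 MPa is the improved 24-hour strength and 33.25% the gain over the reference mix.

```python
# Back-calculating the implied reference strength from an improved strength and its gain.
improved_mpa = 40.0        # 24 h compressive strength with 3% NS, as quoted above [40]
gain = 33.25 / 100.0       # reported relative improvement
reference_mpa = improved_mpa / (1.0 + gain)
print(f"Implied reference strength: about {reference_mpa:.1f} MPa")   # about 30.0 MPa
```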
The ideal amount of NS replacement is linked to the reactivity and accumulation level of the NS particles [42]. Ordinary concrete (sulfuric-acid-rain condition) with 0.3 w/b ratios substituted cement material with 2%–6%, which increased compressive strength by 15% [44]. Ordinary concrete with a 0.36 w/b ratio with 1% to 2.5% replacement of cement material shows increase in compressive strength up to 20.25% for 2% replacement and a 5% reduction in compressive strength for 2.5% replacement, corroborates the optimum replacement of NS is 2% [61]. The compressive strength of regular concrete increases from 1.75% to 7.2% with a w/c ratio of 0.3, 0.4, and 0.45, and NS content of 0.5%, 0.75%, and 1% as a replacement for cement [47]. Table 3 shows the tabulated results, which suggest that a 2% NS substitution is optimal and any further increment in NS content diminishes strength [70].Table 3 Twenty-eight-day compressive strength of concrete made with nanotitanium dioxide. Materialsw/c ratioReplacement of NT (%)Compressive strength increment (%)Maximum replacement of NT (%)ReferencesMortar0.50.25 ⟶ 0.75 ⟶ 1.25 ⟶ 1.7510.5 ⟶ 19.33 ⟶ 15.07 ⟶ 4.270.75[62]Mortar with 10% of black rice husk ash0.350.5 ⟶ 1 ⟶ 1.51.48 ⟶ 4.75 ⟶ 13.221.5[63]RPC1, 3, 518.553[64]HPC0.251.5023[65]SCC with ground granulated blast-furnace slag (GGBS)0.41 ⟶ 2 ⟶ 3 ⟶ 42.7 ⟶ 26.5 ⟶ 36.4 ⟶ 27.83[66]RPC1 ⟶ 3 ⟶ 543.43 ⟶ 74.9 ⟶ 875[67]Concrete0.482232[68]SCGPC2 ⟶ 4 ⟶ 6−3.43 ⟶ 7.7 ⟶ 3.44[69]The minus (−) sign implies decreasing the given attribute calculated concerning the reference one.According to the study, up to 3% of NS doses can improve mechanical characteristics, potentially due to pozzolanic activity, pore structure refinement, and filling effect. Compressive strength increases as the NS content grows, reaching 33% for an NS proportion of 3%.The compressive strength was seen to increase to 2% nanosilica substitution before significantly declining. Due to the increased hydration by nanosilica, there is a more significant consumption of Ca(OH)2 in the early stages (1–7 days of curing). This outcome favors a 2% substitution of nanosilica in cement by weight. The pozzolanic reaction of nanosilica and CH produces well-compacted hydration products that coat the unhydrated cement and slow the rate of hydration. Additionally, hydration products plug the pores in the cement, reducing the amount of water that can reach the anhydrate cement particles and reducing the strength above 2% nanosilica replacement [71]. ### 3.3. Nanotitanium Dioxide Most researchers agreed that using NT particles might improve the compressive strength of concrete to some extent. The effect of NT on the compressive strength of concrete is shown in Table3.TiO2 nanoparticles with an average diameter of 15 nm were used in four different amounts of 0.25%, 0.75%, 1.25%, and 1.75% by weight of cement with a w/c ratio of 0.5 [72]. The 0.75% NT increases the mortar’s compressive strength by 19.33% after 28 days. The strength decreased as the NT content increased; hence, the optimum NT content is 0.75% [62]. In 10%, 20%, and 30% of the fractions, untreated black rice husk ash (BRHA) was used to replace cement. When nano-TiO2 doses of 0.5%, 1.0%, and 1.5% were added to blended cement, the compressive strength increased to 13.22% [63]. The compressive strength of concrete containing 20% fly ash (FA) can be enhanced by 18% by using 3% NT, according to Li et al. [64]. 
It was discovered that NT at a dose of 3 wt% improved the compressive strength of self-compacting concrete (SCC) with GGBS and a w/c ratio of 0.4 the most. The flexural strength of nano-SiO2-coated TiO2-reinforced reactive powder concrete (NSCTRRPC) reached a maximum of 9.77 MPa at an NSCT content of 3%, an increase of 83.3% (4.44 MPa) compared with reactive powder concrete (RPC) without NSCT. Even though the flexural strength of NSCTRRPC with 5% NSCT was slightly lower, it was still much higher than that of bare reactive powder concrete; this could be related to a decrease in hydration speed caused by water absorption. At 28 days, the ideal level of NS content was 5.0%. The use of nanoparticles as cementitious materials increased the compressive strength of concrete, as shown in Table 3. After 28 days of curing, the compressive strength of concrete can be enhanced by up to 22.71% by replacing 2% of the cement with nanotitanium oxide particles (relative to plain concrete). When titanium oxide was introduced to specimens with wollastonite, the compressive strength increased initially and then declined; the best combination was 4% NT without wollastonite [69, 73].According to the study, the optimal dose of NT is 3%, which boosts compressive strength by up to 23%; however, increasing the NT concentration diminishes the mechanical characteristics.The reviewed XRD findings revealed the diffraction intensities of the CH and C3S crystals in several specimens. First, with increasing hydration age, the C3S diffraction peak intensity declined while the CH diffraction peak intensity increased in the base specimen, but even after 28 days C3S had not fully hydrated. Second, the two CH diffraction peaks showed entirely different trends, as evidenced by the differences in the X-ray diffraction (XRD) results between the base specimen and the modified specimen: when nano-TiO2 was used to replace cement, the intensity of the CH (101) crystal plane grew early on, whereas the intensity of the (001) crystal plane dropped significantly. When the cement was replaced with nano-TiO2, the amount of CH crystal did not increase after 1 day; therefore, the rise in hydrated products cannot be the cause of the improvement in early strength. Third, once slag powder was added to the cement mortar, the CH intensity at later ages increased only slowly and even decreased after 14 days, and the CH intensity was significantly lower than that of the specimens without slag powder. This means that CH was consumed in hydrating the slag powder, which helped to increase strength at later ages [74]. ### 3.4. Nanoalumina Most researchers agreed that using nanoalumina (NA) particles might improve the compressive strength of cementitious composites. Table 4 shows the effect of NA on the compressive strength of cementitious materials.Table 4 Twenty-eight-day compressive strength of concrete made with nanoalumina.
Materialsw/c ratioReplacement of NA (%)Compressive strength increment (%)Maximum replacement of NA (%)ReferencesMortar0.791, 3, 520 ⟶ 15 ⟶ 36.005.00[75]Mortar0.351, 2, 316.00 ⟶ 12 ⟶ 121[76]Concrete0.441, 2, 313.00 ⟶ 4 ⟶ −1[77]Concrete0.330.5, 0.75, 16 ⟶ 28 ⟶ 461[78]Mortar with 10% of black rice husk ash0.491, 2, 31 ⟶ 11 ⟶ 163[79]Concrete0.483, 5, 716.67 ⟶ 30.13 ⟶ 23.585[80]Concrete2, 33.32 ⟶ 5.33[81]Mortar0.50.5, 1, 1.57 ⟶ 10.6 ⟶ 11.41.50[82]The minus (−) sign indicates a decrease in the given attribute relative to the reference mix.After 7 days of curing, the addition of NA at 1%, 3%, and 5% replacement increased the compressive strength by 46%, 27%, and 19.3%, respectively; the 1% replacement level of NA thus provides the best early strength. After 28 days of curing, the compressive strength at the 0% replacement level was 13.69 MPa, and the addition of NA at 1%, 3%, and 5% resulted in increases of 20%, 15%, and 36%, respectively, so the 5% replacement gave the best compressive strength after 28 days [75]. A 1% nanoalumina content increased the compressive strength of mortars by up to 16% at room temperature, while higher concentrations of NA (>2%) reduced the strength of the mortars back toward the original level. When 1% nanoalumina was added and the specimens were exposed to temperatures up to 800°C, the residual compressive strength remained higher than the original value [76]. When OPC is partially substituted with NA, the compressive strength increases: the 28-day compressive strength increased by 13% when a 1% NA replacement level was used instead of the 0% NA mix, and the compressive strength of composites rose by 33.14% at 28 days when the proportion of NA was 1% of the cement by weight [78].The addition of rice husk ash boosted the 28-day compressive strength of samples, with 10% replacement of rice husk ash and 3% NA giving the greatest compressive strength gain of 16.6%; the compressive strength decreases as the rice husk ash content increases [79]. By replacing 1% of cement with nanoalumina particles, the 28-day compressive strength of concrete was increased by 4.03%; when the NA concentration was increased from 1% to 3%, the improvement rose from 4.03% to 8.00%. As a result, cement hydration was hastened, producing higher amounts of reaction products; in addition, nano-Al2O3 particles act as a nanofiller that improves the particle packing density and the microstructure of the concrete, reducing the volume of larger pores in the cement paste [80]. In another study, the compressive strength of all the tested mortars decreased as the amount of Al2O3 nanopowder increased: the decline was 7% with 0.5% Al2O3 nanopowder, 10.6% with 1%, and 11.4% with 1.5% compared with the reference mortar, even though several studies report that adding Al2O3 nanopowder to mortars can increase their compressive strength [82].According to the study, the optimum concentration of nanoalumina was 5%, at which point compressive strength increased by 36%.The rapid consumption of the Ca(OH)2 produced during Portland cement hydration, which is connected to the high reactivity of nano-Al2O3 particles, is likely what caused the increase in the compressive strength of concrete containing nanoalumina. As a result, the cement's hydration was sped up, and more reaction products were generated.
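The absolute 28-day strengths implied by the mortar data quoted above (13.69 MPa control, increments of 20%, 15%, and 36% at 1%, 3%, and 5% NA [75]) can be recovered with a few lines of Python:

```python
# Worked arithmetic for the 28-day mortar data quoted above (ref. [75]).
reference_mpa = 13.69                 # control (0% NA) compressive strength
increments = {1: 20, 3: 15, 5: 36}    # NA replacement % -> strength increment %

for dose, inc in increments.items():
    strength = reference_mpa * (1 + inc / 100)
    print(f"{dose}% NA: about {strength:.2f} MPa ({inc}% above the control)")
# 1% NA: about 16.43 MPa; 3% NA: about 15.74 MPa; 5% NA: about 18.62 MPa
```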
Additionally, nano-Al2O3 particles restore the concrete's particle packing density and enhance its microstructure as a nanofiller, decreasing the volume of bigger pores in the cement paste. The outcomes align with those reported by other researchers [80], who tested samples NA1, NA2, NA3, and a control at various temperatures: the compressive strength decreased when the dosage of nanoalumina rose to 2% and 3%, although it remained higher than that of the control samples. The ITZ may have become loose due to the excessive aggregation of nanoalumina particles, which may have surrounded fine aggregates [76]. ### 3.5. Carbon Nanotubes According to most studies, the compressive strength of cementitious materials can be increased to some extent with CNT particles. Table 5 shows the effect of CNT on the compressive strength of cementitious materials.Table 5 Twenty-eight-day compressive strength of concrete with carbon nanotubes. Materialsw/b ratioReplacement of MWCNT (%)Compressive strength increment (%)Maximum replacement of CNT (%)ReferencesM30 grade of concrete0.40.015, 0.03, and 0.0452.75 ⟶ 26.7[83]Concrete0.25 and 0.57.14 ⟶ 15.7[84]Mortar0.550.05, 0.1, and 0.215 ⟶ 8 ⟶ 100.05[85]Concrete0.40.1, 0.2, 0.3, 0.4, and 0.57.11 ⟶ 18.2 ⟶ 22.56 ⟶ 24.5 ⟶ 27.35[86]SCC0.450.1, 0.3, and 0.516.6 ⟶ 24 ⟶ 38.62[87]Ultra high strength concrete (UHSC)0.20.05, 0.1, and 0.154.6 ⟶ 2.1 ⟶ −1.970.1[88]Concrete_0.0.2, 0.03, 0.05, and 0.0983.33 ⟶ 97.22 ⟶ 80.55 ⟶ 63.880.03[89]The minus (−) sign indicates a decrease in the given attribute relative to the reference mix.The compressive strength of concrete increased by up to 26.7% when functionalized multiwalled carbon nanotubes (MWCNTs) replaced 0.045% of the cement. MWCNTs in cement improve the stiffness of the CSH gel, making the composite stronger; an explanation for the improved mechanical properties could be that the MWCNTs occupy the nanostructure of the concrete, making it more crack resistant throughout the loading period [83]. Modified MWCNTs were dispersed in cement mortars to improve their mechanical qualities. When pure MWCNTs were used, the compressive strength of the cement mortars was significantly increased: incorporating 0.50 wt% MWCNTs resulted in a 15.7% increase in compressive strength, whereas 0.25 wt% MWCNTs resulted in a 7.14% increase [84]. The cement mortar's compressive strength was also improved by adding CNTs; the maximum enhancement, obtained with 0.05% CNTs, is up to 15%, but the strength decreases as the CNT content increases further [85]. According to the compressive strength data, the percentage increase for 0.1% and 0.5% CNTs is 7.11% and 27.35%, respectively, at 28 days. The dispersion of CNTs is primarily responsible for this improvement, and the increased amount of highly stiffened CSH in the presence of CNTs is the second explanation [86]. The compressive strength of SCC with 0.1%, 0.3%, and 0.5% MWCNT increased by up to 38.62% [87]. The most significant gains in the mechanical characteristics of cement-based materials were found at low concentrations of MWCNTs (0.05%): the compressive strength rose by 4.6% for 0.05% MWCNT in that investigation, but as the MWCNT content increases the strength drops, so the optimum content is 0.05% [88].It was determined from various sources that the maximum concentration of CNT was 0.1%.
The compressive strength of UHSC rises by up to 2.1%.At 80°C, the CNTs were functionalized in a solution of concentrated H2SO4 and HNO3. The removal of carboxylate carbonaceous fragments that could adversely influence the mechanical strength of the concrete due to their interaction with cement hydration products was discovered to require the functionalization of acetone washing [89].The ability to synthesize novel hybrid nanostructured materials in which CNTs and carbon nanofibers (CNFs) connected to cement particles and enabled good dispersion of the carbon nanomaterials in the cement was made possible by employing cement particles as catalysts and support material. Two chemical vapour deposition reactors were used to create this hybrid material, which is easily included in the production of commercial cement [90, 91]. The fluidized bed reactor’s product yield was significantly enhanced. The research using TEM, SEM, XRD, thermogravimetric analysis, and Raman measurements revealed the process for producing CNTs and CNFs at low temperatures and high yields to be highly effective. After 28 days of curing in water, tests on the physical characteristics of the cement hybrid material paste revealed up to a twofold increase in compressive strength and a 40-fold increase in electrical conductivity [90, 91].The increased compressive strength may be related to the fact that the addition of CNTs caused the microcracks to start and spread more slowly because they were evenly distributed throughout the cement mortar. The mechanical strengths of the mortar may be improved by adding CNTs to improve the adhesion between the hydration products. Additionally, it is possible that the presence of CNTs led to the production of additional CSH and the consumption of CH [85, 86]. ### 3.6. Nanocellulose (NC) According to most studies, the compression strength of cementitious material could be increased to some extent with nanocellulose particles. Table6 shows NC’s effect on cementitious material’s compressive strength.Table 6 Twenty-eight-day compressive strength of concrete made with nanocellulose [92–94]. Materialsw/c ratioReplacement of NC (%)Compressive strength increment (%)Maximum replacement of NC (%)ReferencesConcrete0.30.05, 0.1, 0.2, and 0.325 ⟶ 17 ⟶ 11 ⟶ 30.05[92]Concrete0.150.005, 0.01, and 0.0158 ⟶ 3 ⟶ 10.005[93]Concrete0.350.2 and 0.110 ⟶ 17[94]The compressive strength of concrete with a cellulose content of 0.05% and 0.10% increased by 26% and 17%, respectively. In contrast, combinations with 0.20% and 0.30% NC had less apparent impacts, rising by 11% and 3%. The effect of NC on hydration kinetics and hydrate characteristics may be linked to the higher compression strength in the presence of nanocellulose [92]. The results demonstrate that NC mortar samples improve compressive strength after 7 days of curing; the UHP mortar sample, which included 0.005 wt% NC, had the most excellent compressive strength value of 184 MPa of binders. In this situation, the compressive strength was roughly 8% higher than the control mortar and 4%–8% higher than the 0.01% and 0.015% NC mortars. Because of its high specific surface, NC probably provides close spacing and strong adhesion to the cement matrix, boosting density and impacting compressive strength development [93]. Adding 0.2% and 1% NC raises the strength of the material by 10% and 17%, respectively. The hydrophilic properties of CNCs, which result in increased hydration products, may be responsible for increasing compressive strength. 
Furthermore, using 0.2% and 1% CNC decreases cement volume while maintaining compressive strength [94]. With an optimum dose of NC of 0.1%, the compressive strength increases by up to 17%, while a higher amount of NC lowers the mechanical characteristics. The greater compressive strength in the presence of cellulose filaments (CF) may be related to the effect of nanocellulose on the hydration kinetics and the properties of hydrates. According to research [92], the hydrophilic and hygroscopic nanocellulose may add more water to the cementitious matrix to raise the degree of hydration and improve mechanical performance.
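As a compact recap of Section 3, the dictionary below collects the optimum dosages and 28-day compressive strength gains quoted in the subsection summaries above; individual studies differ considerably, so these figures are indicative only and the structure itself is merely an illustrative way to tabulate them.

```python
# Indicative summary of the 28-day compressive strength gains quoted in Section 3:
# material -> (optimum cement replacement %, reported gain %).
compressive_optima = {
    "NMK": (10.0, 63.0),    # cement paste/concrete
    "NS":  (3.0, 33.0),     # high-performance concrete
    "NT":  (3.0, 23.0),     # high-performance concrete
    "NA":  (5.0, 36.0),     # mortar
    "CNT": (0.5, 38.62),    # self-compacting concrete with MWCNTs
    "NC":  (0.1, 17.0),     # concrete with cellulose nanocrystals
}

for material, (dose, gain) in compressive_optima.items():
    print(f"{material}: about {gain}% gain at {dose}% replacement")
```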
The compressive strength improved by up to 63% when the nanometakaolin content increased to 10%, and as the nanometakaolin content increased further, the mechanical characteristics deteriorated [59].Almost all ages of hydration resulted in higher compression strength values than those reported for the typical ordinary Portland cement (OPC) paste. This increase in strength is primarily due to the pozzolanic reaction of free calcium hydroxide (CH), liberated from Portland cement hydration, with nanometakaolin to form excessive amounts of additional hydration products, primarily as calcium silicate hydrate (CSH) gel and crystalline CSH hydrates; these hydrates act as microfills, which reduce total porosity leading to an increase in the entire contents of binding centers in the specimens; consequently, an increase in the strength [60]. Thermal gravimetric analysis and scanning electron microscopy (SEM) were also used to monitor the hydration process (TG). These examinations indicate that NMK behaves as a filler to improve the microstructure and as an activator to promote the pozzolanic reaction [38]. ## 3.2. Nanosilica After 28 days, the compressive strength of concrete containing 3% NS improved by 20% compared to baseline concrete strength [43]. At 90 and 365 days, compressive strength testing revealed a similar pattern. Table 2 shows the existing compressive strength of cementitious materials with NS at 28 days.Table 2 Twenty-eight-day compressive strength of concrete made with nanosilica. Materialsw/c ratioReplacement of NS (%)Compressive strength increment (%)Maximum replacement of NS (%)ReferencesOrdinary concrete (sulfuric-acid-rain condition)0.32–610.5 ⟶ 15[44]0.361–2.59.2 ⟶ 20.25 ⟶ −52[61]Ordinary concrete0.30.5–11.75 ⟶ 2.5[47]0.350.5–11.05 ⟶ 1.84[47]0.40.5–13.3 ⟶ 7.2[47]0.450.5–15.59 ⟶ 10.39[47]0.40.75–320[43]0.4331[41]HPC0.4333[40]The minus (−) sign implies decreasing the given attribute calculated concerning the reference one.The colloidal 40–50 nm NS effectively increased the compressive strength of 3% NS high-performance concrete to 33.25% for 24 hr. This improvement in strength corresponds to the 40 MPa compression strength. Also, the flexural strength exceeded 13.5% after 7 days [40]. The concrete with 3% NS as a replacement with cement substitute exhibited more compressive strength and a longer lifespan. The compressive strength of 3% NS was enhanced by 31.42% compared to the reference concrete [41]. The kind of NS is colloidal at 15 nm, and the surface area, particularly of NS particles, directly impacts the concrete compressive strength. The NS particles with a surface area of 51.4 m2/g are the least beneficial in improving compressive strength. The w/b ratio influences the strength of NS concrete as well. The ideal amount of NS replacement is linked to the reactivity and accumulation level of the NS particles [42]. Ordinary concrete (sulfuric-acid-rain condition) with 0.3 w/b ratios substituted cement material with 2%–6%, which increased compressive strength by 15% [44]. Ordinary concrete with a 0.36 w/b ratio with 1% to 2.5% replacement of cement material shows increase in compressive strength up to 20.25% for 2% replacement and a 5% reduction in compressive strength for 2.5% replacement, corroborates the optimum replacement of NS is 2% [61]. The compressive strength of regular concrete increases from 1.75% to 7.2% with a w/c ratio of 0.3, 0.4, and 0.45, and NS content of 0.5%, 0.75%, and 1% as a replacement for cement [47]. 
Table 3 shows the tabulated results, which suggest that a 2% NS substitution is optimal and any further increment in NS content diminishes strength [70].Table 3 Twenty-eight-day compressive strength of concrete made with nanotitanium dioxide. Materialsw/c ratioReplacement of NT (%)Compressive strength increment (%)Maximum replacement of NT (%)ReferencesMortar0.50.25 ⟶ 0.75 ⟶ 1.25 ⟶ 1.7510.5 ⟶ 19.33 ⟶ 15.07 ⟶ 4.270.75[62]Mortar with 10% of black rice husk ash0.350.5 ⟶ 1 ⟶ 1.51.48 ⟶ 4.75 ⟶ 13.221.5[63]RPC1, 3, 518.553[64]HPC0.251.5023[65]SCC with ground granulated blast-furnace slag (GGBS)0.41 ⟶ 2 ⟶ 3 ⟶ 42.7 ⟶ 26.5 ⟶ 36.4 ⟶ 27.83[66]RPC1 ⟶ 3 ⟶ 543.43 ⟶ 74.9 ⟶ 875[67]Concrete0.482232[68]SCGPC2 ⟶ 4 ⟶ 6−3.43 ⟶ 7.7 ⟶ 3.44[69]The minus (−) sign implies decreasing the given attribute calculated concerning the reference one.According to the study, up to 3% of NS doses can improve mechanical characteristics, potentially due to pozzolanic activity, pore structure refinement, and filling effect. Compressive strength increases as the NS content grows, reaching 33% for an NS proportion of 3%.The compressive strength was seen to increase to 2% nanosilica substitution before significantly declining. Due to the increased hydration by nanosilica, there is a more significant consumption of Ca(OH)2 in the early stages (1–7 days of curing). This outcome favors a 2% substitution of nanosilica in cement by weight. The pozzolanic reaction of nanosilica and CH produces well-compacted hydration products that coat the unhydrated cement and slow the rate of hydration. Additionally, hydration products plug the pores in the cement, reducing the amount of water that can reach the anhydrate cement particles and reducing the strength above 2% nanosilica replacement [71]. ## 3.3. Nanotitanium Dioxide Most researchers agreed that using NT particles might improve the compressive strength of concrete to some extent. The effect of NT on the compressive strength of concrete is shown in Table3.TiO2 nanoparticles with an average diameter of 15 nm were used in four different amounts of 0.25%, 0.75%, 1.25%, and 1.75% by weight of cement with a w/c ratio of 0.5 [72]. The 0.75% NT increases the mortar’s compressive strength by 19.33% after 28 days. The strength decreased as the NT content increased; hence, the optimum NT content is 0.75% [62]. In 10%, 20%, and 30% of the fractions, untreated black rice husk ash (BRHA) was used to replace cement. When nano-TiO2 doses of 0.5%, 1.0%, and 1.5% were added to blended cement, the compressive strength increased to 13.22% [63]. The compressive strength of concrete containing 20% fly ash (FA) can be enhanced by 18% by using 3% NT, according to Li et al. [64]. It was discovered that NT at a dose of 3 wt% improved the compressive strength of self-compacting concrete (SCC) with GGBS and a w/c ratio of 0.4 the most. The flexural strength of nano SiO2 coated TiO2 reinforced reactive powder concrete (NSCTRRPC) reached a maximum of 9.77 MPa when the range of NSCT was 3% and increased 83.3%/4.44 MPa compared with reactive powder concrete (RPC) without NSCT. Even while the strength of flexural NSCTRRPC was slightly lesser than that of plain RPC when the NSCT content was 5%, it was still much more than bare reactive powder concrete. It could be related to a decrease in hydration speed caused by water absorption. At 28 days, the ideal level of NS content was 5.0%. The use of nanoparticles as cementitious materials increased the compressive strength of concrete, as shown in Table 3. 
After 28 days of curing, the compressive strength of concrete can enhance up to 22.71% by replacing 2% cement with nanotitanium oxide particles (relative to plain concrete). Titanium oxide was introduced to specimens with wollastonite; the compressive strength increased initially, and then declined. The best combination was 4% NT without wollastonite [69, 73].According to the study, the optimal dose of NT is 3%, which boosts compressive strength by up to 23%; however, increasing the NT concentration diminishes mechanical characteristics.The review paper’s findings revealed the diffraction intensity of several CH and C3S crystals specimens. First, with increasing hydration age, C3S diffraction apex intensity declined, while CH diffraction apex intensity increased in the base specimen. But even after 28 days, C3S had not fully hydrated. Second, utterly different change tendencies were visible in the strength of the two CH diffraction peaks, as evidenced by the variance in X-ray diffraction (XRD) results between the base specimen and the additional specimen. When nano-TiO2 was used to replace cement, the intensity of the CH (101) crystal plane grew early on, whereas the power of the (001) crystal plane significantly dropped. When the cement was replaced with nano-TiO2, the amount of CH crystal did not increase after 1 day. Therefore, the rise in hydrated products should not be cause of the improvement in early strength. Third, the intensity of CH at evening ages increased so slowly after the slag powder was applied to the cement mortar that it reduced after 14 days, and the power of CH was significantly lower than that of other specimens without the slag powder. This meant that CH was used to hydrate the slag powder, which helped to increase strength in the evening hours [74]. ## 3.4. Nanoalumina Most researchers agreed that using nano alumina (NA) particles might improve the cementitious composites and compressive strength. Table4 shows the effect of NA on the compression strength of cementitious material.Table 4 Twenty-eight-day compressive strength of concrete made with nanoalumina. Materialsw/c ratioReplacement of NA (%)Compressive strength increment (%)Maximum replacement of NA (%)ReferencesMortar0.791, 3, 520 ⟶ 15 ⟶ 36.005.00[75]Mortar0.351, 2, 316.00 ⟶ 12 ⟶ 121[76]Concrete0.441, 2, 313.00 ⟶ 4 ⟶ −1[77]Concrete0.330.5, 0.75, 16 ⟶ 28 ⟶ 461[78]Mortar with 10% of black rice husk ash0.491, 2, 31 ⟶ 11 ⟶ 163[79]Concrete0.483, 5, 716.67 ⟶ 30.13 ⟶ 23.585[80]Concrete2, 33.32 ⟶ 5.33[81]Mortar0.50.5, 1, 1.57 ⟶ 10.6 ⟶ 11.41.50[82]The minus (−) sign implies decreasing the given attribute calculated concerning the reference one.After 7 days of the curing period, the addition of NA increased by 1%, 3%, and 5% replacement by 46%, 27%, and 19.3%, respectively. It was discovered that the 1% replacement level of NA provides the best early strength. After 28 days of the curing period, the compressive strength of the 0% replacement level was found to be 13.69 MPa; the addition of NA resulted in increase of 20%, 15%, and 36%, respectively, whereas 5% replacement resulted in better compressive strength after 28 days [75]. The 1% nanoalumina content to instars increased their compressive strength by up to 16% at room temperature. Higher concentrations of NA (>2%) reduced the power mortars to their original level. One percent nanoalumina was added at temperatures up to 800°C; the residual compressive strength remained more elevated than the actual amount [76]. 
When OPC is partially substituted with NA, the compressive strength increases, with 1% being the optimum replacement level. The 28-day compressive strength increased by 13% when a 1% NA replacement level was used instead of the 0% NA combination. The compressive strength of concrete cubes was boosted by introducing NA into the matrix; the compressive strength of the composites rose by 33.14% at 28 days when the proportion of NA was 1% of the cement by weight [78]. The addition of rice husk ash boosted the 28-day compressive strength of the samples, with 10% replacement of rice husk ash and 3% NA giving the greatest compressive strength increase of 16.6%; the compressive strength decreases as the rice husk ash content increases [79]. By replacing 1% of cement with nanoalumina particles, the 28-day compressive strength of concrete was increased by 4.03%; when the concentration of NA was increased from 1% to 3%, the gain improved from 4.03% to 8.00%. As a result, cement hydration was hastened, producing higher amounts of reaction products. In addition, as a nanofiller, nano-Al2O3 particles restore the particle packing density of the concrete and improve the microstructure, so the volume of larger pores in the cement paste is reduced [80]. In another study, the compressive strength of all the tested mortars decreased as the amount of Al2O3 nanopowder in their composition increased: the decline was 7% when 0.5% Al2O3 nanopowder was added, 10.6% with 1%, and 11.4% with 1.5%, compared with the reference mortar. According to several other studies, however, adding Al2O3 nanopowder to mortars can increase their compressive strength [82].

According to the study, the optimum concentration of nanoalumina was 5%, at which point the compressive strength increased by 36%. The rapid consumption of Ca(OH)2 produced during Portland cement hydration, which is connected to the high reactivity of nano-Al2O3 particles, is likely what caused the increase in the compressive strength of concrete containing nanoalumina. As a result, the cement's hydration was sped up and more reaction products were generated. Additionally, nano-Al2O3 particles restore the concrete's particle packing density and enhance its microstructure as a nanofiller, decreasing the volume of bigger pores in the cement paste. The outcomes align with those reported by other researchers [80]. For the samples NA1, NA2, and NA3 and the control tested at various temperatures, the compressive strength decreased when the nanoalumina dosage rose to 2% and 3%, although it remained higher than that of the control samples. The ITZ may have become loose due to the excessive agglomeration of nanoalumina particles, which may have surrounded the fine aggregates [76].

## 3.5. Carbon Nanotubes

According to most studies, the compressive strength of cementitious materials can be increased to some extent with CNT particles. Table 5 shows the effect of CNT on the compressive strength of cementitious materials.

Table 5: Twenty-eight-day compressive strength of concrete with carbon nanotubes.
| Materials | w/b ratio | Replacement of MWCNT (%) | Compressive strength increment (%) | Maximum replacement of CNT (%) | References |
| --- | --- | --- | --- | --- | --- |
| M30 grade of concrete | 0.4 | 0.015, 0.03, and 0.045 | 2.75 ⟶ 26.7 | | [83] |
| Concrete | | 0.25 and 0.5 | 7.14 ⟶ 15.7 | | [84] |
| Mortar | 0.55 | 0.05, 0.1, and 0.2 | 15 ⟶ 8 ⟶ 10 | 0.05 | [85] |
| Concrete | 0.4 | 0.1, 0.2, 0.3, 0.4, and 0.5 | 7.11 ⟶ 18.2 ⟶ 22.56 ⟶ 24.5 ⟶ 27.35 | | [86] |
| SCC | 0.45 | 0.1, 0.3, and 0.5 | 16.6 ⟶ 24 ⟶ 38.62 | | [87] |
| Ultra high strength concrete (UHSC) | 0.2 | 0.05, 0.1, and 0.15 | 4.6 ⟶ 2.1 ⟶ −1.97 | 0.1 | [88] |
| Concrete | | 0.02, 0.03, 0.05, and 0.09 | 83.33 ⟶ 97.22 ⟶ 80.55 ⟶ 63.88 | 0.03 | [89] |

The minus (−) sign indicates a decrease in the given attribute relative to the reference mix.

The compressive strength of concrete increased by up to 26.7% when functionalized multiwalled carbon nanotubes (MWCNTs) replaced 0.045% of the cement. MWCNTs in cement improve the stiffness of the CSH gel, making the composite stronger. An explanation for the improved mechanical properties of the concrete could be that the MWCNTs occupy the nanoscale pore structure of the MWCNT concrete, making it more crack resistant throughout the loading period [83]. Modified MWCNTs were dispersed in cement mortars to improve their mechanical qualities. When pristine MWCNTs were used, the compressive strength of the cement mortars was significantly increased: incorporating 0.50 wt% MWCNTs resulted in a 15.7% increase in compressive strength, whereas 0.25 wt% MWCNTs resulted in a 7.14% increase [84]. The compressive strength of cement mortar was also improved by adding CNTs; the maximum enhancement, obtained when utilizing 0.05% CNTs, is up to 15%, but the strength decreases as the CNT content increases further [85]. According to the compressive strength data, the percentage increase in compressive strength for 0.1% and 0.5% CNTs is 7.11% and 27.35%, respectively, at 28 days. The dispersion of the CNTs is primarily responsible for this improvement; the increase in the amount of highly stiffened CSH in the presence of CNTs is the second explanation [86]. The compressive strength of SCC with 0.1%, 0.3%, and 0.5% MWCNT increased by up to 38.62% [87]. The most significant gains in the mechanical characteristics of cement-based materials were found at low concentrations of MWCNTs (0.05%). The compressive strength rose by 4.6% for 0.05% MWCNT in that investigation; however, as the MWCNT content increases, the strength drops; hence, the optimum content is 0.05% [88]. From the various sources, the maximum useful concentration of CNT was determined to be 0.1%, at which the compressive strength of UHSC rises by up to 2.1%.

The CNTs were functionalized at 80°C in a solution of concentrated H2SO4 and HNO3. Washing the functionalized CNTs with acetone was found to be necessary to remove carboxylated carbonaceous fragments that could adversely influence the mechanical strength of the concrete through their interaction with cement hydration products [89]. Employing cement particles as catalyst and support material made it possible to synthesize novel hybrid nanostructured materials in which CNTs and carbon nanofibers (CNFs) are attached to the cement particles, enabling good dispersion of the carbon nanomaterials in the cement. Two chemical vapour deposition reactors were used to create this hybrid material, which is easily incorporated into the production of commercial cement [90, 91]. The product yield of the fluidized bed reactor was significantly enhanced.
Research using TEM, SEM, XRD, thermogravimetric analysis, and Raman measurements showed the process for producing CNTs and CNFs at low temperatures and high yields to be highly effective. After 28 days of curing in water, tests on the physical characteristics of the cement hybrid material paste revealed up to a twofold increase in compressive strength and a 40-fold increase in electrical conductivity [90, 91]. The increased compressive strength may be related to the fact that the added CNTs, being evenly distributed throughout the cement mortar, caused microcracks to initiate and propagate more slowly. The mechanical strengths of the mortar may also be improved because the added CNTs improve the adhesion between the hydration products. Additionally, it is possible that the presence of CNTs led to the production of additional CSH and the consumption of CH [85, 86].

## 3.6. Nanocellulose (NC)

According to most studies, the compressive strength of cementitious materials can be increased to some extent with nanocellulose particles. Table 6 shows the effect of NC on the compressive strength of cementitious materials.

Table 6: Twenty-eight-day compressive strength of concrete made with nanocellulose [92–94].

| Materials | w/c ratio | Replacement of NC (%) | Compressive strength increment (%) | Maximum replacement of NC (%) | References |
| --- | --- | --- | --- | --- | --- |
| Concrete | 0.3 | 0.05, 0.1, 0.2, and 0.3 | 25 ⟶ 17 ⟶ 11 ⟶ 3 | 0.05 | [92] |
| Concrete | 0.15 | 0.005, 0.01, and 0.015 | 8 ⟶ 3 ⟶ 1 | 0.005 | [93] |
| Concrete | 0.35 | 0.2 and 0.1 | 10 ⟶ 17 | | [94] |

The compressive strength of concrete with a cellulose content of 0.05% and 0.10% increased by 26% and 17%, respectively. In contrast, mixes with 0.20% and 0.30% NC had less apparent effects, rising by 11% and 3%. The higher compressive strength in the presence of nanocellulose may be linked to the effect of NC on the hydration kinetics and hydrate characteristics [92]. The results also demonstrate that NC mortar samples improve in compressive strength after 7 days of curing; the UHP mortar sample, which included 0.005 wt% NC, had the highest compressive strength value of 184 MPa. In this case, the compressive strength was roughly 8% higher than that of the control mortar and 4%–8% higher than that of the 0.01% and 0.015% NC mortars. Because of its high specific surface area, NC probably provides close spacing and strong adhesion to the cement matrix, boosting density and influencing compressive strength development [93]. Adding 0.2% and 1% NC raises the strength of the material by 10% and 17%, respectively. The hydrophilic properties of CNCs, which result in increased hydration products, may be responsible for the increase in compressive strength. Furthermore, using 0.2% and 1% CNC decreases the cement volume while maintaining the compressive strength [94]. With an optimum dose of NC of 0.1%, the compressive strength increases by up to 17%, while a higher amount of NC lowers the mechanical characteristics. The greater compressive strength in the presence of cellulose filaments (CF) may be related to the effect of nanocellulose on the hydration kinetics and the properties of the hydrates. According to research [92], the hydrophilic and hygroscopic nanocellulose may add more water to the cementitious matrix, raising the degree of hydration and improving mechanical performance.

## 4. Flexural Strength of Nanoconcrete Made with Nanomaterial

### 4.1. Nanometakaolin

As shown in Table 7, NMK has the potential to significantly improve the flexural strength of cementitious materials. The optimum concentration of NMK is primarily between 8% and 10% [49, 95].
The results showed that fiber-reinforced cementitious composite (FRCC) with 10% NMK exhibited a 67% increase in flexural strength after 28 days compared with the control FRCC. However, the flexural strength decreased rapidly when the NMK concentration was increased beyond this level. The addition of NMK to SCC as a partial replacement by weight of cement at four percentages (0%, 1%, 3%, and 5%) increased the flexural strength by up to 33.8% [51].

Table 7: Twenty-eight-day flexural strength of concrete made with nanometakaolin.

Materialsw/b ratioReplacement of NMK (%)Flexural strength increment (%)Maximum replacement of NMK (%)ReferencesCement paste0.32–1414 ⟶ 36 ⟶ −2510[33, 52]0.33–0.492–143.9 ⟶ 58 ⟶ 38[38]0.52.5–106.4 ⟶ 29 ⟶ 197.50[44]0.65–158–45[46]Ordinary concrete0.50.10.2587[47]0.533–100–46.810[48]RPC0.1752–53.16–7.355[71]FRCC0.32–1416 ⟶ 67 ⟶ 5410[55]SCC1–514.5 ⟶ 33.8[51]SCHPC0.351.25–3.7510 ⟶ 27.5[50]

The minus (−) sign indicates a decrease in the given attribute relative to the reference mix.

According to the findings, the optimal dose of NMK is 10%, which enhances flexural strength by 46.8%; however, increasing the NMK concentration further diminishes the mechanical characteristics. The results also illustrate the relationship between flexural strength and curing age for blended cement mortars containing NMK. As the curing age and the NMK addition increase, the flexural strength also increases. At 60 days of hydration, the flexural strength increases with NMK additions of up to 7.5% and then decreases at a 10% addition. The pozzolanic reaction of FA and NMK with the free lime released during OPC hydration, together with the physical filling of the NMK platelet particles within the interstitial spaces of the FA-cement skeleton, is what causes the increase in flexural strength. Moreover, the NMK platelet particles act as a nanosized enhancement of the interfacial zone. At 7.5% NMK addition, the increase in flexural strength was 2.3-fold. The reduction in flexural strength at a later age and at 10% NMK addition may be due to the agglomeration of NMK particles around the cement grains.

### 4.2. Nanosilica

Most researchers agreed that using nanosilica particles could somewhat improve the flexural strength of cementitious materials. Table 8 summarizes the effect of NS on the flexural strength of cementitious materials. According to Jalal et al. [46], high-performance SCC with 2% NS and 10% silica fume (SF) exhibited better flexural strength than the reference mix at 28 days, with an optimum increase of 59%; however, the gain in flexural strength was shown to slow down as the age of the concrete increased. They also discovered that NS-only concrete had considerably lower flexural strength than concrete admixed with both NS and SF. Ordinary concrete with a w/c ratio of 0.36 had cement replacement rates of 1%–2.5%, with a maximum useful replacement of 2%; because increasing the NS content further diminishes strength, the study determined that a 2% replacement is the best option [61]. The flexural strength of UHPC with a w/c ratio of 0.4 and a nanosilica replacement of cement of 4%–5% increases by up to 34.6%, whereas at 5% NS replacement the flexural strength decreases by 26.9%; thus, in this review, it is concluded that the optimum content of NS is 4%, and when 4% NS is combined with 2.5% steel fiber, the strength increases by up to 34.6%. In high-performance concrete with a w/c ratio of 0.31 and NS replacement varying from 0.5% to 3% in 0.5% increments, the flexural strength increased from 10.7% for 0.5% NS to 36.5% for 2% NS.
It then decreased to 22% for 3% NS, indicating that the maximum useful content is 2% [71]. According to Li et al. [97], adding 1% NS increased the flexural strength by 41% and 35%, respectively, for UHPC at w/b ratios of 0.16 and 0.17 relative to the control UHPC matrix, under combined curing conditions of 2 days of heat curing followed by 26 days of conventional curing. On the other hand, adding more than 1% NS resulted in only a slight increase in flexural strength. Table 8 depicts the effects of NS at different doses over 28 days. According to the study, NS doses of up to 3% can increase the mechanical qualities, which might be related to pore structure refinement, the pozzolanic effect, and the filling effect. The flexural strength rises as the NS percentage rises, reaching 34.6% for an NS content of 4%.

Table 8: Twenty-eight-day flexural strength of cementitious materials with NS.

| Materials | w/c ratio | Replacement of NS (%) | Flexural strength increment (%) | Maximum replacement of NS (%) | References |
| --- | --- | --- | --- | --- | --- |
| Ordinary concrete (sulfuric-acid-rain condition) | 0.36 | 1–2.5 | 5 ⟶ 16.8 ⟶ −0.26 | 2 | [61] |
| UHPC | 0.4 | 4–5 | 0 ⟶ 34.6 ⟶ −26.9 | 4 | [96] |
| UHPC with 2.5% steel fibers | 0.4 | 4 NS + 2.5 steel fibers | 10.4–24 | | [96] |
| HPC | 0.31 | 0.5, 1, 1.5, 2, 2.5, and 3 | 10.7 ⟶ 21.1 ⟶ 28.8 ⟶ 36.5 ⟶ 29 ⟶ −22 | 2 | [71] |

The minus (−) sign indicates a decrease in the given attribute relative to the reference mix.

NS's sizeable specific surface area and high pozzolanic activity boost the strength gained at early ages when it is added to mortars and concrete. More CSH gel and a more compact structure are produced as a result of this process. Beyond a 2% addition, the amount of nanosilica exceeds the amount of released lime, which lessens the pozzolanic activity. It may therefore be established that 2% nanosilica is the ideal amount in high-performance steam-cured concrete.

### 4.3. Nanotitanium Dioxide

The effects of NT on the flexural strength of cementitious composites are shown in Table 9.

Table 9: Twenty-eight-day flexural strength of cementitious materials with NT.

| Materials | w/c ratio | Replacement of NT (%) | Flexural/split tensile strength increment (%) | Maximum replacement of NT (%) | References |
| --- | --- | --- | --- | --- | --- |
| Mortar | 0.5 | 0.25 ⟶ 0.75 ⟶ 1.25 ⟶ 1.75 | 9 ⟶ 15.1 ⟶ 13.2 ⟶ 7.5 | 0.75 | [62] |
| Mortar | 0.58 | 1–5 | 61 | 3 | [98] |
| RPC | | 1, 3, 5 | 47.07 | 3 | [64] |
| HPC | 0.25 | 1.50 | 18 | | [65] |
| SCC with GGBS | 0.4 | 1 ⟶ 2 ⟶ 3 ⟶ 4 | 5.5 ⟶ 14.8 ⟶ 27.7 ⟶ 16.6 | 3 | [66] |
| RPC | | 1 ⟶ 3 ⟶ 5 | 5.97 ⟶ 12.26 ⟶ 10.32 | 5 | [67] |
| SCGPC | | 2 ⟶ 4 ⟶ 6 | 6.8 ⟶ −8.18 ⟶ −2.58 | 2 | [69] |

The minus (−) sign indicates a decrease in the given attribute relative to the reference mix.

With a water-to-cement ratio of 0.5, TiO2 nanoparticles with an average diameter of 15 nm were used in four different amounts of 0.25%, 0.75%, 1.25%, and 1.75% by weight of cement. The 0.75% NT content raised the mortar flexural strength by 15.1% after 28 days; however, increasing the NT content further lowered the strength; thus, the optimum value is 0.75% [62]. The effects of nano-TiO2 (NT) at cement replacement levels of 1%, 2%, 3%, 4%, and 5% were also examined. The results demonstrate that 3% NT increases the tensile/flexural strength (i.e., toughness) by 61% and that increasing the NT level further reduces the strength; thus, the optimum content is 3% [98]. Because NT particles can improve the ITZ of cementitious materials, Li et al. [64] found that adding NT to cementitious composites can significantly increase both long-term and short-term strength. With 3% NT, the flexural strength increases by 47% [64]. HPC with 1.5% NT and a water-to-binder ratio of 0.25 shows an 18% increase in flexural strength [65].
The effect of different NT dosages on the flexural strength of cementitious composites was evaluated by Nazari and Riahi [66]. They discovered that adding 3 wt% NT raised the 28-day flexural strength by 27.7%. The presence of NSCT does not affect the compressive strength of the composites during the 3-day curing phase, but it does over the 28-day curing phase. The NSCTRRPC compressive strength peaked at 111.75 MPa after 28 days, an increase of 12.26% (12.2 MPa) over the ordinary RPC [99]. This is because NSCT generates more negative charges on the surface of the NT, making it simpler to disperse in water via electrostatic repulsion [67]. When titanium oxide was introduced into specimens containing wollastonite, the tensile strength first increased and then dropped. The flexural strength of samples containing titanium oxide but no wollastonite was superior to that of the wollastonite specimens. The best combination was found to be 4% NT without wollastonite [69]. Based on the previous studies, the improvement in the bending strength of NT cementitious composites could be related to the following factors. On the one hand, cement hydration products deposit on the nanoparticles because of the particles' high surface activity, and as a result, the nanoparticles become the nuclei of agglomerates; this phenomenon is known as the nucleation effect. In this way, the NT dispersed in the matrix can improve the matrix compactness and microstructure [100, 101]. NT also has a nanocore action that induces fracture deflection and limits crack extension [67]. According to the study, the optimum dosage of NT for RPC is 3%, with the flexural strength increasing by up to 87%; increasing the NT content further diminishes the mechanical characteristics.

In the opinion of the reviewers, the following factors may account for the enhanced flexural/split tensile strength of NT-engineered cementitious composites. The nucleation effect is where cement hydration products initially deposit on the nanoparticles due to their extensive surface activity and then multiply to generate conglomerations with the nanoparticles functioning as the "nucleus." In this way, the microstructure and compactness of the matrix can be improved by the NT disseminated throughout it [102]. On the other hand, the nanocore action of NT may result in fracture deflection and stop cracks from expanding, generating a toughening effect.

### 4.4. Nanoalumina

Most studies agreed that adding NA particles to cementitious composites could improve their flexural strength. Table 10 summarizes the effect of NA on the flexural strength of cementitious materials.

Table 10: Twenty-eight-day flexural strength of cementitious materials with NA.

| Materials | w/c ratio | Replacement of NA (%) | Flexural strength increment (%) | Maximum replacement of NA (%) | References |
| --- | --- | --- | --- | --- | --- |
| Mortar with 10% of black rice husk ash | 0.49 | 1, 2, 3 | −0.1 ⟶ 11.6 ⟶ 16.7 | 3 | [79] |
| Concrete | | 2, 3 | 5.16 ⟶ 6.73 | | [81] |
| Mortar | 0.5 | 0.5, 1, 1.5 | 10 ⟶ 12 ⟶ 13 | 1.5 | [82] |

The minus (−) sign indicates a decrease in the given attribute relative to the reference mix.

The addition of rice husk ash boosted the 28-day flexural strength of the samples, with 10% replacement of rice husk ash and 3% nanoalumina giving the greatest flexural strength increase of 17%. The flexural strength decreases as the rice husk ash content increases [79]. It was also found that raising the NA content to 2% and 3% increased the flexural strength of nonfiber mixes (equivalent to a 5% increase).
When the curing time was increased to 28 days and the nanoalumina content was 2% and 3%, this value climbed to 5.5 MPa (equivalent to a 5% increase) and 5.58 MPa (equivalent to a 7% increase) [81]. By 7 days of age, with the nanoalumina content at 2% and 3%, this parameter had increased to 5.21 MPa (corresponding to a 5% increase) and 5.33 MPa (equivalent to a 7% increase), respectively. This trend can be explained by the development of pozzolanic reactions and the densification of the mortar matrix microstructure, which improve the transition zone and, as a result, the adhesion between the fibers and the matrix, as well as the elongation capacity of the fibers during flexural loading [81]. The flexural strength was reduced by around 10% by adding 1% and 1.5% nanopowder, regardless of the nanopowder amount [82]. According to the findings, the optimum concentration of nanoalumina was 3%, with a 16.7% increase in flexural strength. The development of pozzolanic reactions and the densification of the mortar matrix microstructure, which enhance the transition zone and, in turn, improve the adhesion within the matrix while also strengthening the elongation behavior during flexural loading, can be interpreted as the cause of the increase in flexural strength on adding nanoalumina [81].

### 4.5. Carbon Nanotubes

Most researchers agreed that CNT particles might improve the bending strength of cementitious materials to some extent. Table 11 shows the effect of CNT on the bending strength of cementitious materials.

Table 11: Twenty-eight-day flexural strength of cementitious materials with CNT.

| Materials | w/b ratio | Replacement of MWCNT (%) | Flexural strength increment (%) | Maximum replacement of MWCNT (%) | References |
| --- | --- | --- | --- | --- | --- |
| Concrete | | 0.25 and 0.5 | 3 ⟶ 10.4 | | [84] |
| Mortar | 0.55 | 0.05, 0.1, and 0.2 | 1, 7, and 28 | | [85] |
| Concrete | 0.4 | 0.1, 0.2, 0.3, 0.4, and 0.5 | 10.25 ⟶ 15.4 ⟶ 19.23 ⟶ 20.5 ⟶ −1 | 0.4 | [86] |
| SCC | 0.45 | 0.1, 0.3, and 0.5 | 21.2 ⟶ 32.9 ⟶ 38.6 | | [87] |
| UHSC | 0.2 | 0.05, 0.1, and 0.15 | 7.5 ⟶ 3.33 ⟶ −0.66 | 0.05 | [88] |

The minus (−) sign indicates a decrease in the given attribute relative to the reference mix.

The bending strength of mortars was significantly improved when pristine MWCNTs were used at all contents: incorporating 0.25 wt% MWCNTs resulted in a 3% increase in flexural strength, while 0.50 wt% MWCNTs resulted in a 10.4% increase in bending strength [84]. The bending strength of cement mortar was also improved by adding CNTs [103]. As the added CNT content increases, the flexural strength improves by up to 28% for 0.2% CNTs [85]. The percentage increase in flexural strength for 0.1% and 0.4% CNTs was 10.25% and 20.5%, respectively, at 28 days. A further increase in the content to 0.5% caused the flexural strength to decrease, indicating that the optimum content is 0.4%. The dispersion of the CNTs is primarily responsible for this improvement; the increase in the amount of highly stiffened CSH in the presence of CNTs is the second explanation [86]. The flexural strength of SCC with 0.1%, 0.3%, and 0.5% MWCNT increased by up to 38.6% [87]. The flexural strength rose by 3.33% for 0.1% MWCNT in that study; when the MWCNT content grows further, the flexural strength falls; therefore, 0.1% is the ideal content [88]. According to the various studies, the ideal amount of CNT is 0.4%, at which the flexural strength improves by up to 20.5%. The improvement in flexural and tensile strength is due to the bridging of crack surfaces by the CNTs.
It is also noted that with CNTs the concrete density decreases, with the density value reducing from 310 to 290 kg/m3. The reason is decreased pore wall discharge and a more uniform pore size. Also, the workability of the paste was increased because a superplasticizer was used. The basic properties of cement and the process of strength gain in concrete involve components of different shapes: CSH has a cloud-like structure, CH resembles a stone rose, and calcium sulfoaluminate hydrates are needle-like structures, and pores arise from the different shapes of these phases. Homogeneous dispersion of CNTs made the concrete denser, as the dispersed CNTs filled the voids, increasing its crack resistance [86].

### 4.6. Nanocellulose

Table 12 shows the effect of NC on the flexural strength of cementitious materials.

Table 12: Flexural strength of cementitious materials with NC at 28 days.

| Cementitious materials | w/b ratio | Replacement of NC (%) | Flexural strength increment (%) | Maximum replacement of NC (%) | References |
| --- | --- | --- | --- | --- | --- |
| Concrete | 0.3 | 0.05, 0.1, 0.2, and 0.3 | 16 ⟶ 19 ⟶ 21 ⟶ 20 | 0.20 | [92] |
| Concrete | 0.15 | 0.005, 0.01, and 0.015 | 37.3 ⟶ 18.53 ⟶ 1.9 | 0.005 | [93] |
| Concrete | 0.35 | 0.25, 0.5, 0.75, 1, 1.75, 0.5, 1, and 1.5 | 20, 12.5, 10, 7.5, 6.25, and 5 | | [104] |

The flexural capacity of the reference concrete was 4.84 MPa, while the flexural capacities at NC concentrations of 0.05%, 0.10%, 0.20%, and 0.30% as a replacement for cement were 5.62, 5.75, 5.84, and 5.81 MPa, respectively. This translates into flexural strength increases of 16%, 19%, 21%, and 20%, respectively, and shows that increasing the content of CF increases the flexural strength by up to 21% for a 0.2% NC content. Increasing the NC content beyond this level diminishes the flexural strength; thus, the optimum content is 0.2% [92]. The results also showed that the sample with 0.005% NC had a flexural strength around 37.3% higher than that of the control mix; however, with further increases in the NC percentage the flexural capacity decreases; hence, the optimum NC content is 0.005% [93]. Adding nanocellulose improves the mechanical qualities of the material: adding NC as a 0.2% replacement for cement increases the flexural strength by 20% [105]; increasing the NC content further reduces the flexural strength but still leaves it higher than that of the reference mix; thus, the optimum content is 0.2% [104]. According to numerous studies, the ideal content of NC is 0.2%, at which the flexural strength rises by up to 20%.

The above results reveal that the effect of CF on flexural capacity is twofold: (1) an increased first peak strength (rupture strength) associated with the nanometric properties of CF, which alters the properties of the cement paste matrix at the microstructure level (as further discussed afterward); and (2) an enhanced toughness associated with the high aspect ratio and the tensile strength of CF, which contribute toward maintaining the peak load over a prolonged range of microdeflections prior to failure, thereby increasing the cracking resistance. The observed effect of CF on composite fracture behavior is driven by the contribution of the filaments as a nanoreinforcement. Possible mechanisms involved in the CF effect on composite fracture behavior may include: (1) a filament bridging capacity driven by their high aspect ratio and fibrillated morphology, (2) filament resistance to rupturing owing to their tensile properties, and (3) the filament-matrix interfacial bond stemming from the potential interaction between the omnipresent –OH groups on the CF surface and the cement hydrates by hydrogen bonding.
In this mechanism, irrespective of the effect of CF on peak flexural capacity, the high probability of the fibers intercepting microcracks may play a favorable role in delaying the matrix fracture [92].
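Throughout the preceding sections, nanomaterial dosages are expressed as a percentage replacement of cement (or total binder) by weight. As a minimal sketch of how such a dosage translates into batch quantities, the following snippet splits an assumed binder content into cement and nanomaterial masses and derives the mixing water from the w/b ratio; the binder content and dosage are illustrative assumptions rather than quantities from the cited studies, and some of the reviewed studies define the replacement relative to cement only rather than total binder.

```python
def mix_masses(binder_kg_per_m3: float, wb_ratio: float, nano_pct: float):
    """Split a binder content into cement and nanomaterial when the nanomaterial
    replaces cement by weight, and derive the mixing water from the w/b ratio.
    Returns (cement, nanomaterial, water) in kg per m^3 of concrete."""
    nano = binder_kg_per_m3 * nano_pct / 100.0
    cement = binder_kg_per_m3 - nano
    water = binder_kg_per_m3 * wb_ratio
    return cement, nano, water

# Illustrative mix (assumed values): 450 kg/m^3 binder, w/b = 0.40, 2% nanosilica.
cement, nano, water = mix_masses(450.0, 0.40, 2.0)
print(f"cement {cement:.1f} kg, nanosilica {nano:.1f} kg, water {water:.1f} kg")
# cement 441.0 kg, nanosilica 9.0 kg, water 180.0 kg
```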
## 5. Conclusion

The development of early strength in concrete is enhanced by introducing nanomaterials, which positively affect the mechanical properties of cementitious materials. Improvements in flexural and compressive capacity can be observed from the initial assessment at a very early stage. A key finding of this review is that the pozzolanic effect, the filling effect, the nucleation effect, and crack bridging drive the strengthening mechanisms conferred by nanomaterials. High-surface-area mineral particles in cement mixtures need more water or plasticizers to keep the concrete workable. The bulk of the literature suggests that the optimal percentage of nanometakaolin is 10%, which boosts the compressive capacity by 63% and the flexural capacity by 46.8%. However, a trade-off ensues, as the mechanical characteristics deteriorate with further increases in NMK concentration. According to this study, NS doses of up to 2% could improve the compressive strength by 20.25%, and the flexural strength improves by 34.6% at a 4% NS content. The characteristics of NS as an activator also aid the hydration process. If the dose of NS is higher than 2%, the compressive and flexural capacities may be reduced. On the other hand, increasing the amount of NT in the material increases the compressive strength by up to 23% and the flexural strength by up to 47%, and the use of TiO2 also helps reduce air pollution. Nanoalumina increased the compressive capacity by 46% at a 1% replacement and the flexural capacity by 16.7% at a 3% replacement. The optimal CNT concentration for SCC is 0.5%, which increases the compressive strength by up to 38.6%; the flexural strength increases by up to 20.5% in ordinary concrete and by about 38% in SCC. Nanocellulose is a plant-derived polymer that is environmentally safe and nontoxic during use, and it significantly improves the mechanical qualities of concrete when substituted for cement. The compressive strength increased by 25% with a 0.05% replacement of NC, and the flexural strength increased by 21% with a 0.2% substitution of NC for cement. These increments are illustrated in Figures 1 and 2.

Figure 1: Twenty-eight-day compressive strength increase (%) [48, 61, 65, 78, 83, 92].

Figure 2: Twenty-eight-day flexural strength increase (%) [48, 64, 79, 86, 92, 96].

When NMK concrete is compared with the other nanomaterials considered (NS, NT, NA, CNT, and nanocellulose), it shows the largest gains, with compressive and flexural strength increases of 63% and 36%, respectively.
However, several conclusions have been offered of which only a few are backed up by sufficient evidence; the rest need to be confirmed and are left as future work to be carried out by the authors. A holistic mechanistic framework should therefore be built to determine the relationship between nanoscale phenomena and mechanical characteristics, together with models capable of quantitatively analyzing the effects of nanomaterials on composite properties.

---

*Source: 1004597-2023-01-07.xml*
2023
# Rapid Maxillary Expansion and Nocturnal Enuresis in Children and Adolescents: A Systematic Review of Controlled Clinical Trials

**Authors:** Khaled Khalaf; Dina Mansour; Zain Sawalha; Sima Habrawi
**Journal:** The Scientific World Journal (2021)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2021/1004629

---

## Abstract

Objectives. To evaluate the effectiveness of rapid palatal expansion in the treatment of nocturnal enuresis among 6–18-year-old children and adolescents. Methods. Comprehensive searches were carried out in 6 electronic databases (EBSCO, ProQuest, Clinical Key, Science Direct, SCOPUS, and OVID) and supplemented by additional manual searches in 4 orthodontic journals until June 2020. Randomized controlled clinical trials (RCTs) and controlled clinical trials (CCTs) of children and adolescents aged 6–18 years old of both genders who underwent rapid palatal expansion and were considered unresponsive to previous conventional nocturnal enuresis treatment were included in this review. Risk of bias of individual trials was assessed using the Risk of Bias in Non-randomized Studies of Interventions (ROBINS-I) assessment tool for CCTs and the revised Cochrane Risk-of-Bias tool for RCTs (RoB 2). Results. Four studies met all inclusion criteria and were finally included in this systematic review, of which one was an RCT and three were CCTs. A reduction in nocturnal enuresis frequency was reported in all included studies, with varying rates and methods of reporting, but most studies reported a statistically significant reduction in the number of wet nights per week. The rate of becoming completely dry 1 year after treatment with an RME ranged from 0% to 60%. Also, there was a statistically significant correlation between an improvement in bedwetting and an increase in nasal volume after the use of RME. Conclusion. A rapid palatal expansion device may be considered as an alternative treatment option for the nocturnal enuresis condition, with guarded prognosis, when other treatment modalities have failed.

---

## Body

## 1. Introduction

Nocturnal enuresis (NE) or bedwetting (BW) is a prevalent condition that affects around 10% of children around the world, making it the second most common problem in school-aged children after asthma and allergies [1–3]. It is defined as the involuntary voiding of urine by distinct acts of micturition at night in children 5 years old and above and is more common in males than in females [4].

Monosymptomatic nocturnal enuresis, defined by absent or only subtle daytime symptoms, represents 80% of NE cases. Furthermore, cases can be classified as either primary (children who have never achieved dryness) or secondary (dryness has been achieved for at least six months before enuresis begins) [5]. On the brighter side, NE is reported to have an annual spontaneous cure rate of 15% [6]. Rarely, in approximately 0.5% to 2% of cases, NE persists in otherwise-healthy adults [7]. It is important to distinguish between NE and nocturia, which is defined as frequent night awakening to void [8].

The pathogenesis of NE is considered to be multifactorial and complex. However, previous studies have been able to clear some ambiguities and highlight some important factors linked to this disorder. Historically, bedwetting was regarded as a psychiatric disorder.
Over the years, a clearer understanding has been obtained, and the causative factors have been narrowed down to mainly three: (1) excessive production of urine at night, (2) hyperactivity of the bladder's smooth muscle, and (3) the inability to wake from sleep to empty the bladder when it is full. Furthermore, it has been reported that in some patients enuresis can be caused by a blockage of the upper airway [9].

Notably, numerous "enuresis genes" have been detected, making this disorder highly hereditary, with an increase in risk of 5–7% if one parent was affected [10]. Bedwetting is a stressful condition that has a negative impact on the child's quality of life during their development; such drawbacks can improve immensely with successful treatment [11].

Multiple meta-analyses and clinical trials have proposed different treatment modalities for children with NE [5, 9, 11, 12]. Such interventions usually start at the onset of the problem, which is around 5–6 years of age. Generally, treatment options include motivational therapy, bladder training, fluid management, night alarms, and pharmacological agents such as desmopressin and tricyclic antidepressants. However, the evidence is mostly in favor of the enuresis alarm, which is considered the most effective and lasting management approach, as has been shown in a systematic review of a large number of clinical trials (56). The second most accepted treatment option is the use of desmopressin, an analogue of the vasopressin hormone that reduces the production of urine and has been in use in the medical field for many years, or imipramine [11, 12].

The current treatment modalities provide results that are far from satisfactory and have been shown by various trials to have minimal efficacy in the management of this condition. Therefore, investigators have started focusing on alternative treatment options. According to several case reports, there seems to be a correlation between the resolution of upper airway blockage and an improvement in NE, since up to 80% of enuretic children have concurrent sleep apnea [13]. From this point, several other treatment options were described for the treatment of sleep-disordered breathing and airway obstruction. The most commonly proposed treatment was adenotonsillectomy, to reduce nocturnal airflow resistance and thereby alleviate NE [14].

Some studies have reported the use of rapid maxillary expansion as a treatment modality for NE [14–16]. Expansion of approximately 5 mm is achieved by applying an orthodontic device that increases the maxillary width within 10–14 days [17, 18]. However, it is not yet known how effective this form of intervention is in treating young children with NE who were unresponsive to the commonly used treatment modalities.

Therefore, the aim of our systematic review was to investigate whether rapid maxillary expansion is an effective management approach for alleviating or treating nocturnal enuresis in young children and adolescents who were unresponsive to the commonly used treatment modalities.

## 2. Materials and Methods

### 2.1. Protocol and Registration

This systematic review was executed using the PRISMA checklist guidelines and registered in PROSPERO (International Prospective Register of Systematic Reviews) under the registration number CRD42020170752.

### 2.2. Information Sources and Search Strategy

A comprehensive search strategy was performed using both manual and electronic sources to identify and include all potential articles.
The electronic database search included the following databases: EBSCO (January 1990–June 2020), ProQuest (January 1998–June 2020), Clinical Key (October 1990–June 2020), Science Direct (June 1945–June 2020), SCOPUS (1990–2020), and OVID (1990–June 2020).

The manual search included the following journals: American Journal of Orthodontics and Dentofacial Orthopedics [AJODO] (July 1986–June 2020), Journal of Orthodontics [JO] (March 2003–June 2020), Angle Orthodontist [AO] (January 1931–June 2020), and European Journal of Orthodontics [EJO] (February 1996–June 2020).

A combination of medical and nonmedical terms was used for searching the electronic databases; the terms included "rapid maxillary expansion", "rapid palatal expansion", "rapid expander", "maxillary expansion", "nocturnal enuresis", "bedwetting", "children", and "adolescents". The search terms were adapted to each electronic database to identify all potential articles indexed in that database.

### 2.3. Selection of Studies

Following the comprehensive search, only studies fulfilling the following criteria were included in this systematic review: (1) children and adolescents of either gender aged 6–18 years old, (2) diagnosed with either primary or secondary therapy-resistant NE, (3) had rapid maxillary expansion to treat nocturnal enuresis as the main outcome, and (4) were followed for at least six months post expansion. Papers that were not reported in English or that included participants with heavy snoring, sleep apnea, untreated constipation, concurrent urological, endocrinological, nephrological, odontological, or psychiatric disorders, as well as children and adolescents involved in ongoing enuretic treatment, were excluded. Furthermore, the study design was limited to randomized controlled clinical trials and controlled clinical trials.

The PICOS components used in this systematic review were as follows:

Population: children and adolescents of either gender aged 6–18 years old who were diagnosed with either primary or secondary therapy-resistant NE.

Intervention: rapid maxillary expansion device to expand the maxilla.

Comparison: no expansion of the maxilla/passive rapid maxillary expansion device.

Primary outcome measure: the number of wet nights per week.

Secondary outcome measures: the number of responders, intermediate responders, and nonresponders; cure rate (complete dryness) in the short term (one month, 3 months, and 6 months) and in the long term (one year and 3 years); nasal volume after the use of RME.

Study design: randomized controlled clinical trials (RCTs) and controlled clinical trials (CCTs).

Titles, abstracts, and finally full texts of possible articles were scrutinized. Furthermore, the references of the identified full-text studies were inspected to identify additional ones for inclusion in the systematic review. The electronic and hand searches were carried out in duplicate by two teams of investigators, who met thereafter and had to agree. In the case of disagreement, a third author was consulted to reach a final decision.

### 2.4. Data Extraction

A form was created and used to extract all relevant information from the finally included articles. The extraction of data was performed in duplicate to ensure the accuracy of the information gathered.
### 2.3. Selection of Studies Following the comprehensive search, only studies fulfilling the following criteria were included in this systematic review: (1) children and adolescents of either gender aged 6–18 years, (2) diagnosed with either primary or secondary therapy-resistant NE, (3) rapid maxillary expansion used to treat nocturnal enuresis as the main outcome, and (4) followed for at least six months post expansion. Papers that were not reported in English, or that included participants with heavy snoring, sleep apnea, untreated constipation, or concurrent urological, endocrinological, nephrological, odontological, or psychiatric disorders, or children and adolescents involved in ongoing enuretic treatment, were excluded. Furthermore, the study design was limited to randomized controlled clinical trials and controlled clinical trials. The PICOS components used in this systematic review were as follows:

- Population: children and adolescents of either gender aged 6–18 years who were diagnosed with either primary or secondary therapy-resistant NE.
- Intervention: rapid maxillary expansion device to expand the maxilla.
- Comparison: no expansion of the maxilla/passive rapid maxillary expansion device.
- Primary outcome measure: the number of wet nights per week.
- Secondary outcome measures: the number of responders, intermediate responders, and non-responders; cure rate (complete dryness) in the short term (1 month, 3 months, and 6 months) and in the long term (1 year and 3 years); nasal volume after the use of RME.
- Study design: randomized controlled clinical trials (RCCTs) and controlled clinical trials (CCTs).

Titles, abstracts, and finally full texts of potentially eligible articles were scrutinized. Furthermore, the reference lists of the identified full-text studies were inspected to identify additional studies for inclusion in the systematic review. The electronic and hand searches were carried out in duplicate by two teams of investigators, who met thereafter to reach agreement. In the case of disagreement, a third author was consulted to reach a final decision. ### 2.4. Data Extraction A form was created and used to extract all relevant information from the final included articles. The extraction of data was performed in duplicate to ensure the accuracy of the information gathered. For each included article, the following were gathered (Table 1): participants’ demographic data, frequency of NE, signs and symptoms of upper airway obstruction (e.g., snoring, open mouth during sleep, and sleep apnea), Angle’s classification, presence of crossbites, daily palatal expansion, duration of expansion, type of appliance used, expansion end point, study methodology, response rates (i.e., responders, partial responders, and non-responders), time to become completely dry or improved, follow-up periods and enuresis type (i.e., primary or secondary), and type of statistical analysis used (if any).

Table 1. Summary data of the included studies.

- **Nevéus et al. (2014) and Bazargani et al. (2016)** (one study published as two separate articles); nonrandomized controlled trial.
  - Participants: 34 (29 M, 5 F), one dropped out; age 8–15 years.
  - Inclusion criteria: children who did not respond to conventional treatments. Exclusion criteria: known general medical conditions or medications linked to NE.
  - Intervention/control: RME appliance activated for all patients/RME appliance left passive for the initial 4 weeks for all patients.
  - Amount of expansion: 0.5 mm daily (0.25 mm morning, 0.25 mm night).
  - Follow-up periods: baseline (before treatment); with the orthodontic appliance in situ; 6 months (after completion of expansion); 1 year post treatment.
  - Findings: the number of wet nights/week over the four follow-up periods was 5.48 ± 1.48, 5.12 ± 1.73, 3.09 ± 2.49, and 2.63 ± 2.81 (p < 0.001); after RME, the number of responders and intermediate responders was 16/33 (48.5%) and the number of nonresponders was 17/33 (51.5%); the long-term cure rate after 1 year was 18/30 (60%), whereas 12/30 (40%) had no long-term response; nonresponders had more frequent enuresis (6.29 ± 1.31 versus 4.63 ± 1.15 wet nights/week; p = 0.001).
- **Al-Taai et al. (2015)**; nonrandomized controlled trial.
  - Participants: 19 (1 M, 18 F); age 6–15 years.
  - Inclusion criteria: healthy children with monosymptomatic primary NE (MPNE) treated with Minirin without long-term improvement. Exclusion criteria: dryness >6 months, known general medical conditions or medications linked to NE.
  - Intervention/control: RME appliance activated for all patients/RME appliance left passive for the initial 30 days for 7 patients.
  - Amount of expansion: 0.45 mm per day.
  - Follow-up periods: 2–3 months, 1 year, and 3 years after RME.
  - Findings: the mean number of wetting episodes per night was 2.21 before expansion and 0.42 at 2–3 months after RME; 30 days after RME expansion, 6 out of 12 children demonstrated complete dryness and the remainder demonstrated an improvement of NE; no significant impact on NE (p > 0.05) was found in the control group (7 patients) 30 days after the use of a passive RME device; after 3 years, all patients reported complete dryness.
- **Hyla-Klekot et al. (2015)**; nonrandomized controlled trial.
  - Participants: 41 in total; age 6–18 years; 16 experimental (9 M, 7 F) and 25 control (15 M, 10 F).
  - Inclusion criteria: present NE, absence of disease of the kidneys and urinary tract. Exclusion criteria: active dental caries, poor oral hygiene, inadequate number of teeth for fitting the appliance, and lack of cooperation with orthodontic treatment.
  - Intervention/control: RME activated (16)/no RME (25).
  - Amount of expansion: total of 6.5 mm.
  - Follow-up periods: every month during the first 12 months; 3 years.
  - Findings: 10/16 children in the intervention group did not wet the bed at all after 3 months, and this was largely maintained 3 years later (8/16 children remained dry); after 3 years, 50% of the children in the intervention group were completely dry compared with only 32% in the control group; after 3 years, there was a 4.5-times greater reduction of NE in the experimental group compared with the control group.
- **Jönson Ring et al. (2019)**; randomized clinical trial.
  - Participants: 38 in total (2 dropped out from the placebo group), age 10.2 ± 1.8 years; intervention group 18 (18 M), age 10.3 ± 1.8 years; placebo group 20 (17 M, 3 F), age 10.2 ± 1.8 years.
  - Inclusion criteria: primary NE with at least 7 wet nights fortnightly and nonresponders to first-line treatment. Exclusion criteria: known general medical conditions or medications linked to NE.
  - Intervention/control: RME appliance activated for 2 weeks/RME appliance left passive for the 2 weeks.
  - Amount of expansion: 0.5 mm per day.
  - Follow-up periods: baseline (T0), 2 weeks (T1), and 6 months (T3).
  - Findings: from T0 to T1, the experimental group demonstrated a significant reduction of wet nights (mean difference = −2.2), whereas the placebo group demonstrated no significant reduction (mean difference = −0.6); the difference between the two groups was not statistically significant; the mean reduction of wet nights for the whole group 6 months after expansion was significant (mean difference = −3.2); 11 patients (35%) had a reduction in the frequency of NE of >50%; at 6 months, the numbers of full, intermediate, and nonresponders were 1, 10, and 20, respectively; a wide maxilla and large voided volumes at baseline may be associated with a reduced frequency of enuresis.

Abbreviations: M, male; F, female; NE, nocturnal enuresis; RME, rapid maxillary expansion. ### 2.5. Risk of Bias of Individual Trials Two investigators independently assessed the risk of bias of the included articles using suitable assessment tools. Any disagreement between the two investigators was settled by a third author. Non-randomized clinical trials were assessed using the Risk of Bias in Non-randomized Studies of Interventions (ROBINS-I) assessment tool [19], whereas randomized clinical trials were assessed using the revised Cochrane Risk-of-Bias tool for randomized trials (RoB 2) [20]. ### 2.6. Data Synthesis Strategy Data from the included studies were to be analyzed quantitatively using a meta-analysis if deemed appropriate, i.e., if all included studies were homogeneous in terms of study design and outcome measures reported and all were at low risk of bias; otherwise, a descriptive (narrative) analysis would be carried out. ### 2.7. Assessment of Quality of Evidence Presented by This Review The quality of evidence presented by this review was assessed using the GRADE approach (Grading of Recommendations Assessment, Development, and Evaluation) [21]. It consists of five assessment domains: risk of bias, inconsistency, imprecision, indirectness, and publication bias. A rating of high, moderate, low, or very low is given to the quality of evidence presented by the review based on these domains.
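The pooling that Section 2.6 describes as the ideal quantitative synthesis was ultimately not performed (see Section 3.3). Purely as an illustration of what such a synthesis would involve, the sketch below pools per-study mean differences in wet nights per week with a DerSimonian-Laird random-effects model; the study labels and numbers are invented for demonstration and are not data from the included trials.

```python
import math

# Hypothetical (illustrative) per-study effects: mean difference in wet nights/week
# after RME vs. control, with standard errors. These are NOT the review's data;
# the review performed a narrative synthesis because pooling was not appropriate.
studies = {
    "Study A": (-2.4, 0.8),
    "Study B": (-1.8, 0.9),
    "Study C": (-0.6, 1.1),
}

def random_effects_pool(effects):
    """DerSimonian-Laird random-effects pooling of (effect, standard error) pairs."""
    y = [e for e, _ in effects.values()]
    w = [1 / se**2 for _, se in effects.values()]             # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)      # fixed-effect estimate
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, y))    # Cochran's Q
    df = len(y) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                              # between-study variance
    w_re = [1 / (se**2 + tau2) for _, se in effects.values()]  # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se_pooled = math.sqrt(1 / sum(w_re))
    return pooled, se_pooled, tau2

pooled, se, tau2 = random_effects_pool(studies)
print(f"Pooled mean difference: {pooled:.2f} (95% CI {pooled - 1.96*se:.2f} to {pooled + 1.96*se:.2f})")
print(f"Between-study variance tau^2: {tau2:.3f}")
```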
## 3. Results ### 3.1. Samples and Intervention Characteristics Figure 1 shows the process of identifying and selecting all suitable articles for inclusion in this review. In total, 195 articles were assessed, of which 150 were from the online databases and 45 from the manual search. Thirty-nine records were duplicates and 143 were not relevant based on their abstracts and titles, leaving 13 articles deemed suitable at this stage for inclusion in this review. After examining the full texts of these studies, 9 were excluded: 5 were case series, 1 was not in English, 2 included adult patients, and 1 was a duplicate of another study. Thus, only four studies were finally included in this systematic review, of which one was a randomized clinical trial (RCT) and three were non-randomized clinical trials (CCTs). One of these CCTs was published as two separate articles and was therefore considered as one study in this systematic review. Two of the three CCTs used the expansion appliance as a placebo, and one study had a separate control group.
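As a quick arithmetic check of the screening flow just described (all numbers are taken from the text above; nothing new is assumed):

```python
# Sanity-check the study-selection counts reported in Section 3.1 / Figure 1.
identified = 150 + 45                      # online databases + manual search
assert identified == 195

after_duplicates = identified - 39         # 39 duplicate records removed
after_screening = after_duplicates - 143   # 143 excluded on titles/abstracts
assert after_screening == 13               # articles taken forward to full-text review

included = after_screening - 9             # 9 excluded at the full-text stage
assert included == 4                       # studies included in the review
print(after_screening, included)           # -> 13 4
```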
Figure 1 Flowchart of identifying, screening, and selecting suitable studies using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). The agreement between the reviewers regarding searching, identifying, and selecting the final studies was assessed using a kappa statistic, which was found to be 0.89. In total, the number of participants in the included studies was 129 children and adolescents (3 dropped out) with an age range of 6 to 18 years. Descriptive and demographic data are presented in Table 1. All studies used a hyrax screw inserted in an acrylic expanding device; some studies used a sham device for a placebo effect. Expansion was achieved by applying a rapid, heavy force to the midpalatal suture by turning the screw twice a day. Expansion caused the suture to distract and the two palatal shelves to be pushed apart, creating a diastema between the central incisors [22]. The expansion device was kept in place for a few months after finishing the active phase (10–14 days) as a retainer and to prevent relapse.
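The kappa statistic quoted above corrects the raw percentage agreement between the two screening teams for the agreement expected by chance. The sketch below shows the calculation on a small set of hypothetical include/exclude decisions; the decisions are invented for illustration and do not reproduce the review’s reported value of 0.89.

```python
# Illustrative calculation of Cohen's kappa for two reviewers' screening decisions.
# The decisions below are hypothetical; the review reported an observed kappa of 0.89.
reviewer_1 = ["include", "exclude", "exclude", "include", "exclude", "exclude",
              "include", "exclude", "exclude", "exclude"]
reviewer_2 = ["include", "exclude", "exclude", "include", "exclude", "include",
              "include", "exclude", "exclude", "exclude"]

n = len(reviewer_1)
categories = set(reviewer_1) | set(reviewer_2)

# Observed agreement: proportion of records with identical decisions
p_o = sum(a == b for a, b in zip(reviewer_1, reviewer_2)) / n

# Expected agreement by chance, from each reviewer's marginal proportions
p_e = sum(
    (reviewer_1.count(c) / n) * (reviewer_2.count(c) / n)
    for c in categories
)

kappa = (p_o - p_e) / (1 - p_e)
print(f"Observed agreement: {p_o:.2f}, expected by chance: {p_e:.2f}, kappa: {kappa:.2f}")
```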
### 3.2. Risk of Bias within Studies Figures 2 and 3 show the risk of bias judgement for the final studies, together with the tools used and a justification for the grade given to each study. Two of the three CCTs [23–25] were assessed as being at moderate risk of bias and one at serious risk of bias [27], whereas the RCT [26] was assessed as being at low risk of bias. Figure 2 Risk of bias assessment for the non-RCT studies included in the review. Figure 3 Risk of bias assessment for the RCT study included in the review. ### 3.3. Results of Individual Studies The results of the individual studies are summarized and reported narratively, as it was not possible to pool the findings in a meta-analysis owing to the heterogeneity among the included studies in terms of study design and outcome measures reported, and because all but one study was at least at moderate risk of bias. #### 3.3.1. Improvement in Nocturnal Enuresis A reduction in nocturnal enuresis was reported in all included studies, with varying rates and methods of reporting such an improvement. Three studies [23, 24, 27] reported a reduction in NE frequency and presented their findings in terms of responders (patients who became completely dry), intermediate responders (patients who still wet the bed occasionally), and non-responders (patients who did not improve and wet the bed regularly). Jönson Ring et al. [26] reported a statistically significant decrease in the number of wet nights during the 2 weeks following treatment with an RME (p < 0.001), but no significant decrease was found following the placebo treatment (p > 0.40). The mean number of wet nights per 2 weeks declined significantly from 11.9 to 8.5, which translated into one full responder, 10 intermediate responders, and 20 non-responders. However, the difference between the intervention and control groups was not statistically significant [26]. On the other hand, Bazargani et al. [23] and Nevéus et al. [24] reported a statistically significant decrease in NE following treatment with an RME (p < 0.001), with a reduction in the number of wet nights per week from 5.48 ± 1.48 at baseline to 3.09 ± 2.49 after RME. This corresponded to 48.5% of the patients being full or intermediate responders and 51.5% being non-responders. Al-Taai et al. [25] did not use the aforementioned categories of full, intermediate, and non-responders; instead, they reported that after RME expansion six out of 12 patients showed complete dryness and the remaining 6 patients showed an improvement in their NE. In contrast, the control group (7 patients) showed no significant change in the frequency of their NE (p > 0.05). Finally, Hyla-Klekot et al. [27] described the intensity of NE using a 4-grade scale, where a score of 4 = bedwetting twice a night, 3 = once a night, 2 = once or twice a week, and 1 = once or twice a month. After the RME treatment, 10/16 patients were completely dry, and this remained so 3 years later; 5/16 patients had their frequency decreased by one or two grades, and 1 child did not improve at all. #### 3.3.2. Nasopharyngeal Airway Changes Bazargani et al. [23] and Nevéus et al. [24] obtained polysomnographic registrations along with rhinomanometry and acoustic rhinometry to measure nasal airway patency, airflow, and oxygen saturation. They demonstrated a significant increase in nasal volume and airflow after treatment with the RME (p = 0.012). In addition, they reported a statistically significant association between a decrease in enuresis and an increase in nasal volume (p = 0.034), but they could not detect such an association between a reduction in enuresis and an increased nasal airflow (p = 0.46) [23]. Furthermore, Nevéus et al. [24] reported a resolution of snoring habits as well as a greater nasal volume in the individuals treated with the RME. Al-Taai et al. [25] further investigated airway dimensional changes using coronal computed tomography (CT) sections of the sinuses as well as anterior rhinometry measurements to assess nasal airflow and resistance. They reported that nasal airflow increased significantly (p < 0.001), rising from 405.05 cm³/s before the expansion to 584.86 cm³/s after it. The CT scans also showed a significant increase (p < 0.001) in the width of the nasal cavity at the level of the inferior concha and a significant decrease (p < 0.001) in nasal airway resistance after the expansion with the RME compared with before it. #### 3.3.3. Psychological Impact and Sleep Disorders Most of the studies highlighted that persisting NE carries a high risk of psychosocial comorbidity and negatively affects quality of life. The feeling of helplessness experienced by enuretic patients highlights the magnitude and complexity of the problem. Persisting enuresis adversely affects the coping, social competence, and school performance of enuretic patients when compared with their unaffected peers. Furthermore, a negative correlation exists between the self-esteem of an enuretic child and the chance of treatment failure [12, 23, 26]. In addition, a considerable number of enuretic patients were found to have concurrent sleep problems, including but not limited to snoring and sleep apnea.
Elimination of airway obstruction at the nasopharyngeal or oropharyngeal level by tonsillectomy, adenoidectomy, or both may improve nocturnal enuresis, and this approach has shown favorable results [9, 23, 25]. It is worth mentioning that many of the included studies excluded patients with any concurrent urological, endocrinological, nephrological, odontological, or psychiatric disorders. #### 3.3.4. Expansion and Retention The expansion device used in the included studies was the RME device with a hyrax screw soldered to orthodontic bands on the upper permanent first molars. However, Al-Taai et al. [25] also applied bands to the first premolars or second primary molars of patients who had unerupted first premolars. Retention after expansion was achieved by leaving the same appliance in situ for a few months [23, 24] or by using a Hawley retainer [25]. However, Hyla-Klekot et al. [27] did not elaborate on the method of expansion and retention. The follow-up period ranged from 6 months to 3 years; at least 6 months of follow-up post expansion was required for a study to be included in our systematic review. The follow-up assessment was done by either phone or direct interviews [24]. #### 3.3.5. Occlusion Investigating changes in occlusion brought about by RME devices was not the primary objective of the studies included in this systematic review. However, most studies obtained dental casts to check the occlusion along with intermolar, interpremolar, and intercanine distances. They revealed that occlusal characteristics did not affect the outcome. Furthermore, it was found that the RME device can be used as an alternative method to improve NE in patients with a normal bucco-lingual relationship of the posterior teeth, with no detriment to the occlusion [23]. It was noticeable that reporting of changes in occlusion was not consistent between studies. Hyla-Klekot et al. [27] reported the percentages of malocclusion and unilateral or bilateral crossbites in the patients included in their study. They showed that the most common malocclusion was Class II (35%), which was often associated with the presence of a deep bite (33%), whereas a posterior crossbite was reported in 14% of the individuals. The least common malocclusions were Class III (4%) and open bite (2%). They concluded that the main aim of RME treatment was not to correct the malocclusion but only to reduce the NE. Similarly, Bazargani et al. [23] reported percentages of malocclusions and crossbites. Only two of the 34 subjects included in their study had posterior crossbites; 26 patients (76%) had an Angle Class I, which included the two crossbite cases; 7 (21%) had an Angle Class II with a mean overjet of 5.6 mm; and 1 (3%) had an Angle Class III. They concluded that no untoward impact on the occlusion could be observed in the long term, thus corroborating the finding above that patients with normal occlusal features can be treated with rapid maxillary expansion to improve their NE condition. Al-Taai et al. [25] reported patients with different degrees of crowding, and only 2 out of 19 patients had a crossbite. They did not report the skeletal class or Angle classification of their sample. Such variation in reporting occlusal changes was expected, since the primary focus of the studies included in this review was the effect of an RME on nocturnal enuresis.
### 3.4. Evaluating the Strength of Evidence Provided by This Review The overall quality of evidence provided by this review for the main outcome measure, i.e., a decrease in the number of wet nights per week following treatment with an RME device, was found to be very low. This was due to the moderate to serious risk of bias across the included studies (except one, which was at low risk of bias), the small sample sizes investigated by the majority of studies, and findings that were not significant from a clinical point of view (Table 2).

Table 2. Rating the overall quality of evidence according to the GRADE approach.

| Outcome | No. of participants | Risk of bias | Indirectness | Imprecision | Inconsistency | Publication bias | Overall quality of evidence |
| --- | --- | --- | --- | --- | --- | --- | --- |
| A reduction in the number of wet nights per week | 129 | Serious^a | Not serious^b | Serious^c | Not serious^d | Not suspected^e | Very low ⊕〇〇〇 |

^a Two non-RCTs were ranked as being at moderate risk of bias and one non-RCT at serious risk of bias. ^b All included studies were similar in terms of the inclusion criteria of participants, interventions (RME), and the primary outcome measure (the number of wet nights per week). ^c The total number of participants for the primary outcome was very small (129); in addition, although the best-quality study [26] reported a statistically significant decrease in the number of wet nights per week during the 2 weeks following treatment with an RME, the difference between the intervention and control groups was not statistically significant. ^d All studies reported a similar pattern and magnitude of effect for the main outcome measure between the intervention and control groups. ^e A very comprehensive search of multiple sources was carried out; no clinical trials were found that had been registered in trial registries but not published, and studies with both positive and negative findings were published and included.
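Purely as an illustration of the GRADE mechanics behind Table 2, the sketch below starts from a given certainty level and drops one level for each domain judged “serious”. The starting level and the one-level-per-concern rule are assumptions for demonstration only; real GRADE ratings are judgements that also weigh study design and other considerations, so this does not attempt to reproduce the review’s exact reasoning.

```python
# Heavily simplified illustration of GRADE downgrading. The starting level and the
# one-level-per-serious-concern rule are assumptions for demonstration only; actual
# GRADE assessments involve judgement and are not purely mechanical.
LEVELS = ["very low", "low", "moderate", "high"]

domain_judgements = {          # domain ratings as reported in Table 2
    "risk of bias": "serious",
    "indirectness": "not serious",
    "imprecision": "serious",
    "inconsistency": "not serious",
    "publication bias": "not suspected",
}

def downgrade(start: str, judgements: dict) -> str:
    """Drop one certainty level for every domain with a serious concern."""
    level = LEVELS.index(start)
    for concern in judgements.values():
        if concern == "serious":
            level = max(0, level - 1)
    return LEVELS[level]

# Starting from "low" (a common starting point when most of the evidence is
# non-randomized), two serious concerns bring the rating down to "very low".
print(downgrade("low", domain_judgements))   # -> very low
```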
## 4. Discussion NE is a stressful condition that profoundly affects the child’s emotional wellbeing, which in turn is reflected in their quality of life, self-esteem, and school performance; such drawbacks can improve significantly with successful treatment [11]. It has been reported that NE has an annual spontaneous cure rate of 15% [6], and to date there is no definitive treatment modality for NE, as the commonly used treatments often produce only slight improvement and are thus considered of minimal efficacy. Therefore, investigators have started focusing on alternative treatment options. One of the suggested causes of NE is upper airway obstruction [9]. Moreover, up to 80% of enuretic patients have concurrent sleep apnea [13]. Thus, it was logical to consider palatal expansion as a potential solution to NE in young patients. In this systematic review, we found that the most commonly used device for expansion is the RME device with a hyrax screw soldered to orthodontic bands on the first permanent molars. This orthodontic device expands the maxilla by separating the midpalatal suture over a period of 10–14 days, with the midline screw activated twice daily to achieve a total daily expansion of 0.5 mm [23–27]. The endpoint of expansion was defined as the point at which the occlusal surfaces of the palatal cusps of the upper first permanent molars came into contact with the occlusal surfaces of the buccal cusps of the lower first permanent molars [23–26].
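As a quick check of the activation arithmetic described above, the sketch below multiplies the per-activation value (0.25 mm per turn, taken from Table 1) by two activations per day over the reported 10–14-day active phase:

```python
# Arithmetic check of the activation protocol described in the Discussion and Table 1.
mm_per_activation = 0.25     # one quarter-turn of the hyrax screw (per Table 1)
activations_per_day = 2      # screw turned twice daily
active_days = range(10, 15)  # reported active phase of 10-14 days

daily_expansion = mm_per_activation * activations_per_day      # 0.5 mm/day
totals = [round(daily_expansion * d, 1) for d in active_days]  # total expansion in mm
print(daily_expansion, totals)  # -> 0.5 [5.0, 5.5, 6.0, 6.5, 7.0]
# Consistent with the ~5 mm cited in the Introduction and the 6.5 mm total in Table 1.
```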
Although examining the effect of RME on occlusion was not a primary objective of our review, it is worth mentioning that the RME device can be used as an alternative method to improve the NE condition in patients with a normal bucco-lingual relationship of the posterior teeth, with no detriment to the occlusion [23]. Furthermore, the findings of the included studies show that the type of malocclusion has no bearing on the improvement of NE achieved with RME devices. This means that the RME treatment modality, if proved to be effective in curing NE, could be adopted as an alternative treatment option in young patients who suffer from NE and did not respond to the conventional management options, regardless of the features of their occlusion. A reduction in nocturnal enuresis following the use of an RME device was reported in all included studies, with varying rates and methods of reporting such an improvement. The average rate of becoming completely dry 1 year after treatment with an RME device was found to be in the range of 0–60%. The results of this systematic review broadly agree with the findings reported by a previous systematic review [22], which concluded that rapid palatal expansion in the management of NE had a success rate of 31% (average rate of becoming completely dry 1 year after treatment with an RME) and thus might be contemplated when other management approaches have not succeeded. However, the latter review [22] provided weak scientific evidence because it included only inherently low-quality study types (case series) and performed a very limited search of electronic databases (PubMed and Embase, with additional articles from Google Scholar) without an additional search of relevant journals and the grey literature, whereas the current systematic review represents the literature more accurately because (1) it included only randomized and nonrandomized clinical trials, providing a higher level of scientific evidence, and (2) it identified all relevant articles using multiple online databases and further expanded the search to include hand searching of four different orthodontic journals and the grey literature. The results of this systematic review may support the use of RME devices in the treatment of NE as a viable option when the commonly used treatment modalities have failed, for the following reasons: (1) the active treatment duration is short (10–14 days), (2) RME devices are considered minimally invasive compared with other treatment modalities, (3) RME devices can be used to correct pre-existing transverse occlusal discrepancies such as unilateral or bilateral crossbites, and (4) RME devices are well tolerated by patients, with minimal side effects. ### 4.1. Strengths and Limitations To our knowledge, the current systematic review is the most up-to-date review on the topic; it was conducted according to the PRISMA guidelines and presents the best available evidence in the literature. We also followed meticulous and strict inclusion/exclusion criteria to ensure that we studied solely the effect of the RME treatment approach on the NE condition, minimizing confounding variables such as systemic diseases. We limited the eligible study designs to RCTs and CCTs to minimize the biases inherent in other study types. Furthermore, two of the studies included in our systematic review [23, 24] reported the use of polysomnographic registration along with rhinomanometry and acoustic rhinometry to measure nasal airway patency, airflow, and oxygen saturation, in contrast to the previous systematic review by Poorsattar-Bejeh Mir et al. [22], which lacked studies that performed polysomnographic registration. One of the limitations of our systematic review was the lack of a parallel control group in 2 of the included studies.
Another limitation was that the overall sample size was low. Moreover, some information, such as familial history of enuresis and the exact and average number of bedwetting episodes per night and per week, was not reported in all studies. Furthermore, restricting the included studies to those published in English was a limitation, although it is unlikely that a high-quality article would have been published in a non-English language. Finally, it was not possible to synthesize the data of all included studies quantitatively using a meta-analysis, owing to the heterogeneity among the included studies in terms of study design and outcome measures reported, and because all but one study was at least at moderate risk of bias. ## 5. Conclusion The use of RME in patients with NE resulted in a significant within-group reduction in the number of wet nights per week; however, the difference between the intervention and control groups was not statistically significant. More well-designed RCTs are required before a definitive conclusion can be drawn. --- *Source: 1004629-2021-06-03.xml*
# Rapid Maxillary Expansion and Nocturnal Enuresis in Children and Adolescents: A Systematic Review of Controlled Clinical Trials

**Authors:** Khaled Khalaf; Dina Mansour; Zain Sawalha; Sima Habrawi

**Journal:** The Scientific World Journal (2021)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2021/1004629
--- ## Abstract Objectives. To evaluate the effectiveness of rapid palatal expansion in the treatment of nocturnal enuresis among 6–18-year-old children and adolescents. Methods. Comprehensive searches were carried out in 6 electronic databases (EBSCO, ProQuest, Clinical Key, Science Direct, SCOPUS, and OVID) and supplemented by additional manual searches in 4 orthodontic journals until June 2020. Randomized controlled clinical trials (RCTs) and controlled clinical trials (CCTs) of children and adolescents aged 6–18 years of both genders who underwent rapid palatal expansion and were considered unresponsive to previous conventional nocturnal enuresis treatment were included in this review. Risk of bias of individual trials was assessed using the Risk of Bias in Non-randomized Studies of Interventions (ROBINS-I) assessment tool for CCTs and the revised Cochrane Risk-of-Bias tool for RCTs (RoB 2). Results. Four studies met all inclusion criteria and were finally included in this systematic review, of which one was an RCT and three were CCTs. A reduction in nocturnal enuresis frequency was reported in all included studies with varying rates and methods of reporting, but most studies reported a statistically significant reduction in the number of wet nights per week. The rate of becoming completely dry 1 year after treatment with an RME ranged from 0% to 60%. Also, there was a statistically significant correlation between an improvement in bedwetting and an increase in nasal volume after the use of RME. Conclusion. A rapid palatal expansion device may be considered as an alternative treatment option for the nocturnal enuresis condition, with guarded prognosis, when other treatment modalities have failed. --- ## Body ## 1. Introduction Nocturnal enuresis (NE) or bedwetting (BW) is a prevalent condition that affects around 10% of children around the world, making it the second most common problem in school-aged children after asthma and allergies [1–3]. It is defined as the involuntary voiding of urine by distinct acts of micturition at night in children 5 years old and above and is more common in males than females [4]. Monosymptomatic nocturnal enuresis is defined as NE in the absence of, or with only subtle, daytime symptoms and represents 80% of cases of NE. Furthermore, cases can be classified into either primary (children who never achieved dryness) or secondary (dryness has been achieved for at least six months before enuresis begins) [5]. On the brighter side, NE is reported to have an annual spontaneous cure rate of 15% [6]. Rarely, in approximately 0.5% to 2% of cases, NE persists in otherwise-healthy adults [7]. It is important to distinguish between NE and nocturia, which is defined as frequent night awakening to void [8]. The pathogenesis of NE is considered to be multifactorial and complex. However, previous studies were able to clear some ambiguities and highlight some important factors linked to this disorder. Historically, bedwetting was regarded as a psychiatric disorder. Over the years, a clearer understanding has been obtained and causative factors have been narrowed down to mainly three reasons: (1) excessive production of urine at night, (2) hyperactivity of the bladder’s smooth muscle, and (3) the inability to wake while asleep to empty the bladder when full.
Furthermore, it has been reported in some patients that enuresis can be caused by a blockage of the upper airway [9]. Surprisingly, numerous “enuresis genes” have been detected, making this disorder highly hereditary, with an increased risk of 5–7% if one parent was affected [10]. Bedwetting is a stressful condition that has a negative impact on the child’s quality of life during their development; such drawbacks can immensely improve with successful treatment [11]. Results obtained from multiple meta-analyses and clinical trials proposed different treatment modalities for children with NE [5, 9, 11, 12]. Such interventions usually start at the onset of the problem, which is around 5–6 years of age. Generally, treatment options include motivational therapy, bladder training, fluid management, night alarms, and pharmacological agents such as desmopressin and tricyclic antidepressants. However, evidence is mostly in favor of the enuresis alarm, which is considered the most effective and lasting management approach, as has been shown in a systematic review of a large number (56) of clinical trials. The second most accepted treatment option is the use of desmopressin, an analogue of the vasopressin hormone which reduces the production of urine, or imipramine; both have been in use in the medical field for many years [11, 12]. The current treatment modalities provide results that are far from satisfactory and have been proven by various trials to have minimal efficacy in the management of this condition. Therefore, investigators have started focusing on alternative treatment options. According to several case reports, there seems to be a correlation between the resolution of upper airway blockage and an improvement in the condition of NE, since up to 80% of enuretic children have concurrent sleep apnea [13]. From this point, several other treatment options were described for the treatment of sleep-disordered breathing and airway obstruction. The most common treatment proposed was adenotonsillectomy, to reduce nocturnal airflow resistance and therefore alleviate NE [14]. Some studies have reported the use of rapid maxillary expansion as a treatment modality to treat NE [14–16]. Expansion of approximately 5 mm was achieved by applying an orthodontic device to increase the maxillary width within 10–14 days [17, 18]. However, it is not yet known how effective this form of intervention is in treating young children with NE who were unresponsive to the commonly used treatment modalities. Therefore, the aim of our systematic review was to investigate whether rapid maxillary expansion is an effective management approach in alleviating or treating nocturnal enuresis of young children and adolescents who were unresponsive to the commonly used treatment modalities. ## 2. Materials and Methods ### 2.1. Protocol and Registration This systematic review was executed using the PRISMA checklist guidelines and registered in PROSPERO (International Prospective Register of Systematic Reviews) under the registration number CRD42020170752. ### 2.2. Information Sources and Search Strategy A comprehensive search strategy was performed using both manual and electronic sources to identify and include all potential articles.
The electronic database search included the following databases: EBSCO (January 1990–June 2020), ProQuest (January 1998–June 2020), Clinical Key (October 1990–June 2020), Science Direct (June 1945–June 2020), SCOPUS (1990–2020), and OVID (1990–June 2020). The manual search included the following journals: American Journal of Orthodontics and Dentofacial Orthopedics [AJODO] (July 1986–June 2020), Journal of Orthodontics [JO] (March 2003–June 2020), Angle Orthodontist [AO] (January 1931–June 2020), and European Journal of Orthodontics [EJO] (February 1996–June 2020). A combination of medical and non-medical terms was used for searching the electronic databases; terms included “rapid maxillary expansion”, “rapid palatal expansion”, “rapid expander”, “maxillary expansion”, “nocturnal enuresis”, “bedwetting”, “children”, and “adolescents”. Search terms were adapted to each electronic database to identify all potential articles indexed in the database. ### 2.3. Selection of Studies Following a comprehensive search, only studies fulfilling the following criteria were included in this systematic review: (1) children and adolescents of either gender aged 6–18 years, (2) diagnosed with either primary or secondary therapy-resistant NE, (3) had rapid maxillary expansion to treat nocturnal enuresis as the main outcome, and (4) were followed for at least six months post expansion. Papers that were not reported in English, or that included participants with heavy snoring, sleep apnea, untreated constipation, concurrent urological, endocrinological, nephrological, odontological, or psychiatric disorders, or children and adolescents involved in ongoing enuretic treatment, were excluded. Furthermore, the study design was limited to only randomized controlled clinical trials and controlled clinical trials. The PICOS components used in this systematic review were as follows: Population: children and adolescents of either gender aged 6–18 years old who were diagnosed with either primary or secondary therapy-resistant NE. Intervention: rapid maxillary expansion device to expand the maxilla. Comparison: no expansion of the maxilla/passive rapid maxillary expansion device. Primary outcome measure: the number of wet nights per week. Secondary outcome measures: the number of responders, intermediate responders, and non-responders; cure rate (complete dryness) in the short term (one month, 3 months, and 6 months) and in the long term (one year and 3 years); nasal volume after the use of RME. Study design: randomized controlled clinical trials (RCTs) and controlled clinical trials (CCTs). Titles, abstracts, and finally full texts of possible articles were scrutinized. Furthermore, references of the identified full-text studies were inspected to identify additional ones for inclusion in the systematic review. The electronic and hand search was carried out in duplicate by two teams of investigators who met thereafter and had to agree. In the case of disagreement, a third author was consulted to reach a final decision. ### 2.4. Data Extraction A form was created and used to extract all relevant information from the final included articles. The extraction of data was performed in duplicate to ensure the accuracy of the information gathered.
For each article included, participant’s demographic data, frequency of NE, signs and symptoms of upper airway obstruction (e.g., snoring, open mouth during sleep, and sleep apnea), Angle’s classification, presence of crossbites, daily palatal expansion, duration of expansion, type of appliance used, expansion end point, study methodology, response rates (i.e., responders, partial responders, and non-responders), time to become completely dry or improved, follow-up periods and enuresis type (i.e., primary or secondary), and type of statistical analysis used (if any) were gathered (Table1).Table 1 Summary data of the included studies. Author, year of studyStudy typeParticipantsInclusion and exclusion criteriaIntervention/controlAmount of expansionFollow-up periodsFindings of the studyNeveus et al., 2014 and Bazargani et al., 2016 (one study published as two separate articles)Nonrandomized controlled trail(i) 34 (29 M, 5 F), one dropped out(i) Inclusion criteria: children did not respond to conventional treatmentsRME appliance activated for all patients/RME appliance was left passive for the initial 4 weeks for all patients0.5 mm daily (0.25 mm morning, 0.25 mm night)(i) Baseline (before treatment)(i) The number of wet nights/week on the 4 follow-up periods was 5.48 ± 1.48, 5.12 ± 1.73, 3.09 ± 2.49, and 2.63 ± 2.81;p < 0.001(ii) Age: 8–15 years(ii) Exclusion criteria: known general medical conditions or medications that are linked to NE(ii) With the orthodontic appliance in situ(ii) After RME the number of responders and intermediate responders was 16/33 (48.5%), and the number of nonresponders was 17/33 (51.5%)(iii) 6 months (after completion of expansion)(iii) The long-term cure rate after 1 year was 18/30 (60%), whereas 12/30 (40%) had no long-term response(iv) 1 year post treatment(iv) Nonresponders had more frequent enuresis (6.29 ± 1.31 versus 4.63 ± 1.15 wet nights/week;p = 0.001)Al-Taai et al., 2015Nonrandomized controlled trial(i) 19 (1 M, 18 F)(i) Inclusion criteria: healthy children with monosymptomatic primary NE (MPNE) treated with Minirin without long-term improvementRME appliance activated for all patients/RME appliance was left passive for the initial 30 days for 7 patients0.45 mm per day(i) 2–3 months after RME(i) The mean value of wetting per night before expansion = 2.21(ii) Age: 6–15 years(ii) Exclusion criteria: dryness >6 months, known general medical conditions or medications that are linked to NE(ii) 1 year after RME(ii) The mean value of wetting per night 2–3 months after RME = 0.42(iii) 3 years after RME(iii) 30 days after RME expansion, 6 out of 12 children demonstrated complete dryness, and the remaining demonstrated an improvement of NE(iv) No significant impact on NE (p > .05) was found in the control group (7 patients) 30 days after the use of a passive RME device(v) After 3 years, all patients reported complete drynessHyla-Klekot et al., 2015Nonrandomized control trial(i) 41 in total(i) Inclusion criteria: present NE, lack of disease in the kidneys and urinary tract systemRME activated (16)/No RME (25)Total of 6.5 mm(i) Every month during the first 12 months(i) 10/16 children in the intervention group did not wet the bed at all after 3 months and this was maintained 3 years later (8/16 children remained dry)(ii) Age: 6–18 years(ii) Exclusion criteria: active dental caries, bad oral hygiene, inadequate number of teeth for fitting the appliance, and lack of cooperation with orthodontic treatment(ii) 3 years(ii) After 3 years, 50% of the children in the 
intervention group were completely dry compared with only 32% in the control group(iii) After 3 years, there was a 4.5 times increase in the reduction of NE in the experimental group compared with the control group(iv) 25 control (15 M, 10 F)Jönson Ring et al., 2019Randomized clinical trial(i) In total 38, 2 dropped out from the placebo group, age: 10.2 ± 1.8(i) Inclusion criteria: primary NE with at least 7 wet nights fortnightly and nonresponders to first-line treatmentRME appliance activated for 2 weeks/RME appliance was left passive for the 2 weeks0.5 mm per day(i) Baseline (T0)(i) From T0 to T1, the experimental group demonstrated a significant reduction of wet nights (mean difference = −2.2) and the placebo group demonstrated no significant reduction of wet nights (mean difference = −0.6). The difference between the 2 groups was not statistically significant(ii) Intervention group 18 (18 M), age (10.3 ± 1.8)(ii) Exclusion criteria: known general medical conditions or medications that are linked to NE(ii) 2 weeks (T1)(ii) The mean reduction of wet nights for the whole group 6 months after expansion was significant (mean difference = −3.2)(iii) Placebo group 20 (17 M, 3 F), age (10.2 ± 1.8)(iii) 6 months (T3)(iii) 11 patients (35%) had a reduction in the frequency of NE by >50%(iv) At 6 months, the number of full, intermediate, and nonresponders was 1, 10, and 20, respectively(v) A wide maxilla and great voided volumes at baseline may be associated with a reduced frequency of enuresisM: male, F: female, NE: nocturnal enuresis, RME: rapid maxillary expansion. ### 2.5. Risk of Bias of Individual Trials Two investigators independently assessed the risk of bias of the included articles using suitable assessment tools. Any disagreement between the two investigators was settled by a third author. Non-randomized clinical trials were assessed using the Risk of Bias in Non-randomized Studies of Interventions (ROBINS-I) assessment tool [19]. On the other hand, randomized clinical trials were assessed using the revised Cochrane Risk-of-Bias tool for randomized trials (RoB 2) [20]. ### 2.6. Data Synthesis Strategy Data from the included studies will ideally be analyzed quantitatively using a meta-analysis if deemed appropriate, i.e., all included studies are homogeneous in terms of study design and outcome measures reported and all are of low risk of bias; otherwise a descriptive (narrative) analysis will be carried out. ### 2.7. Assessment of Quality of Evidence Presented by This Review Quality of evidence presented by this review was assessed using the GRADE approach (Grading of Recommendations Assessment, Development, and Evaluation) [21]. It consists of five assessment domains: risk of bias, inconsistency, imprecision, indirectness, and publication bias. A rating grade of high, moderate, low, or very low is given to the quality of evidence presented by the review based on the above domains.
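As an illustration of the synthesis rule in Section 2.6 and the GRADE rating in Section 2.7, the following minimal Python sketch encodes both as simple functions. It is only a sketch under stated assumptions: the dictionary fields, the starting level, and the one-level-per-serious-domain downgrade are illustrative simplifications, and actual GRADE ratings involve judgment (for example, non-randomized evidence or very serious concerns can lower the rating further, as reflected in the "very low" grade assigned later in this review).

```python
# Illustrative sketch only: a simplified encoding of the synthesis decision and a
# GRADE-style rating. These helper names and rules are assumptions, not the review's
# actual procedure.

GRADE_LEVELS = ["very low", "low", "moderate", "high"]

def choose_synthesis(studies):
    """Pool quantitatively only if designs/outcomes match and all studies are low risk of bias."""
    same_design = len({s["design"] for s in studies}) == 1
    same_outcome = len({s["primary_outcome"] for s in studies}) == 1
    all_low_rob = all(s["risk_of_bias"] == "low" for s in studies)
    return "meta-analysis" if (same_design and same_outcome and all_low_rob) else "narrative synthesis"

def grade_rating(domains, start="high"):
    """Downgrade one level for each domain judged 'serious' (simplified rule)."""
    level = GRADE_LEVELS.index(start)
    for concern in domains.values():
        if concern == "serious":
            level = max(level - 1, 0)
    return GRADE_LEVELS[level]

studies = [
    {"design": "CCT", "primary_outcome": "wet nights/week", "risk_of_bias": "moderate"},
    {"design": "RCT", "primary_outcome": "wet nights/week", "risk_of_bias": "low"},
]
print(choose_synthesis(studies))  # -> narrative synthesis (designs differ, not all low risk of bias)
print(grade_rating({"risk_of_bias": "serious", "inconsistency": "not serious",
                    "imprecision": "serious", "indirectness": "not serious",
                    "publication_bias": "not suspected"}))
# -> "low" under this simplified rule; the review's own judgment, which also weighs the
#    non-randomized designs, arrived at "very low".
```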
## 3. Results ### 3.1. Samples and Intervention Characteristics Figure 1 shows the process of identifying and selecting all suitable articles for inclusion in this review. In total, 195 articles were assessed, of which 150 were from the online databases and 45 from the manual search. Thirty-nine studies were duplicates, and 143 were not relevant from their abstracts and titles, thus leaving 13 articles deemed suitable at this stage for inclusion in this review.
Following probing the full texts of these studies, 9 were excluded, of which 5 were case series, 1 was not in English, 2 included adult patients, and 1 was duplicate of another study. Thus, only four studies were finally included in this systematic review, of which one was a randomized clinical trial (RCT) and three were non-randomized clinical trials (CCTs). One of these CCTs was published as two separate articles and thus were considered as one study in this systematic review. Two of the three CCTs used the expansion appliance as a placebo and one study had a separate control group.Figure 1 Flowchart of identifying, screening, and selecting suitable studies using Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA).The agreement between the reviewers regarding searching, identifying, and selecting the final studies was assessed using a kappa statistic which was found to be 0.89.In total, the number of participants in the studies included was 129 children and adolescents (3 dropped out) with an age range of 6 to 18 years. Descriptive and demographic data are presented in Table1. All studies used a hyrax screw which was inserted in an acrylic expanding device; some studies used a sham device for placebo effect. Expansion was achieved by applying a rapid heavy force delivered to the mid palatal suture by turning the screw twice/day. Expansion caused the suture to distract and the two palatal shelves to be pushed apart causing a diastema between the central incisors [22]. The expansion device was kept in place for a few months after finishing the active phase (10–14 days) as a retainer and to prevent further collapse. ### 3.2. Risk of Bias within Studies Figures2 and 3 show the risk of bias judgement for the final studies with the tools used and a justification for the grade given for each study. Two of the three CCTs [23–25] were assessed at moderate risk of bias and one at serious risk of bias [27], whereas the RCT [26] was assessed to be at low risk of bias.Figure 2 Risk of bias assessment for the non-RCT studies included in the review.Figure 3 Risk of bias assessment for the RCT study included in the review. ### 3.3. Results of Individual Studies Results of the individual studies will be summarized and reported narratively as it was not possible to pool the findings in a meta-analysis approach due to the heterogeneity among the included studies in terms of study design and outcome measures reported and all but one study being of at least moderate risk of bias. #### 3.3.1. Improvement in Nocturnal Enuresis Reduction in nocturnal enuresis was reported in all included studies with varying rates and methods of reporting such an improvement. Three studies [23, 24, 27] reported a reduction in NE frequency and presented their findings in terms of responders (patients who became completely dry), intermediate responders (patients who still wet the bed occasionally), and non-responders (patients who did not improve and wet the bed regularly). Ring et al. [26] reported a statistically significant decrease in the number of wet nights during 2 weeks following the treatment with an RME (p < 0.001), but no significant decrease was found following the placebo treatment (p > 0.40). The mean number of wet nights per 2 weeks has significantly declined from 11.9 to 8.5, which was translated as one full responder, 10 intermediate responders, and 20 non-responders. However, the difference between the intervention and control groups was not statistically significant [26]. 
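To make the figures above concrete, the short Python sketch below shows how per-patient wet-night counts over a diary period can be converted into the mean differences and full/intermediate/non-responder categories reported by the included studies. The cut-offs used here (full responder = completely dry, intermediate = at least a 50% reduction, non-responder = less than a 50% reduction) and the counts themselves are illustrative assumptions, not patient-level data from any included trial.

```python
# Illustrative sketch: summarising per-patient wet-night counts into response categories
# and a mean difference. Cut-offs and numbers are assumptions for illustration only.

def classify(baseline_wet_nights, follow_up_wet_nights):
    """Assign a responder category based on the change from baseline (assumed cut-offs)."""
    if follow_up_wet_nights == 0:
        return "full responder"
    reduction = (baseline_wet_nights - follow_up_wet_nights) / baseline_wet_nights
    return "intermediate responder" if reduction >= 0.5 else "non-responder"

def mean_difference(baseline, follow_up):
    """Mean change in wet nights per observation period (e.g., per fortnight)."""
    diffs = [f - b for b, f in zip(baseline, follow_up)]
    return sum(diffs) / len(diffs)

# Hypothetical counts over a 2-week diary, not data from the trials discussed above:
baseline = [14, 12, 10, 13]
follow_up = [12, 5, 0, 11]
print([classify(b, f) for b, f in zip(baseline, follow_up)])
print(round(mean_difference(baseline, follow_up), 1))  # negative value = fewer wet nights
```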
On the other hand, Bazargani et al. [23] and Nevéus et al. [24] reported a statistically significant decrease in NE following the treatment with an RME (p < 0.001) with a reduction of number of wet nights per week from 5.48 ± 1.48 at baseline to 3.09 ± 2.49 after RME. This represented as 48.5% of the patients to be full or intermediate responders and 51.5% of the patients considered as non-responders.Al-Taai et al. [25] did not use the previously mentioned terms of full, intermediate, and non-responders; instead, he reported that after RME expansion six out of 12 patients showed a complete dryness, and the remaining 6 patients showed an improvement in the NE. On the contrary, the control group (7 patients) showed no significant change in the frequency of their NE (p > 0.05).Finally, Hyla-Klekot et al. [27] described the intensity level of NE using a 4-grade scale, where a score of 4 = bedwetting twice a night, 3 = once a night, 2 = once or twice a week, and 1 = once or twice a month. After the RME treatment, 10/16 patients were completely dry, and this remained so 3 years later. 5/16 patients had their frequency decreased by one or two grades and 1 child did not improve at all. #### 3.3.2. Nasopharyngeal Airway Changes Bazargani et al. [23] and Nevéus et al. [24] obtained a polysomnographic registration along with rhinomanometry and acoustic rhinometry to measure nasal airway patency, airflow, and oxygen saturation. They demonstrated a significant increase in nasal volume and airflow after treatment with the RME (p = 0.012). In addition, they reported a statistically significant association between a decrease in the enuresis and an increase in nasal volume (p = 0.034), but they could not detect such an association between a reduction in the enuresis and an increased nasal airflow (p = 0.46) [23]. Furthermore, Nevéus et al. [24] reported a resolution of the snoring habits as well as a greater nasal volume in the individuals who were treated with the RME.Al-Taai et al. [25] further investigated airway dimensional changes using a coronal section of computed tomography (CT) scan of the sinuses as well as anterior rhinometry measurements to assess nasal airflow and resistance. They concluded that nasal airflow increased significantly (p < 0.001) with nasal airflow rising from 405.05 cm3/s before the expansion to 584.86 cm3/s following the expansion. The CT scans taken also showed a significant increase (p < 0.001) in the width of the nasal cavity at the level of the inferior concha and a significant decrease (p < 0.001) in the nasal airway resistance after the expansion with the RME compared to prior to the expansion. #### 3.3.3. Psychological Impact and Sleep Disorders Most of the studies highlighted that persisting NE was a high risk of psychosocial comorbidity and negatively affects the quality of life. The feeling of helplessness of enuretic patients highlights the magnitude and complexity of the problem. Persisting enuresis adversely affects the coping, social competence, and school performance of enuretic patients when compared to their normal peers. Furthermore, a negative correlation exists between the self-esteem of an enuretic child and the chance of treatment failure [12, 23, 26].Furthermore, a considerable number of enuretic patients was found to have concurrent sleep problems, including but not limited to snoring and sleep apnea. 
Elimination of airway obstruction at nasopharyngeal or oropharyngeal level with either tonsillectomy or adenoidectomy or both may improve the nocturnal enuresis, and it showed favorable results [9, 23, 25].It is worth mentioning that many of our included studies have excluded patients who have any concurrent urological, endocrinological, nephrological, odonatological, or psychiatric disorders. #### 3.3.4. Expansion and Retention The expansion device used in our included studies was the RME expansion device with hyrax screw soldered to orthodontic bands on the upper permanent first molars. However, Al-Taai et al. [25] also applied bands on the first premolars or second primary molars of patients who had an unerupted first premolars.Retention after expansion was done by leaving the same appliance in situ for a few months [23, 24] or by using a Hawley retainer [25]. However, Hyla-Klekot et al. [27] did not elaborate on the method of expansion and retention.Follow-up period ranged from 6 months to 3 years. At least 6 months of follow-up was required post expansion for a study to be included in our systematic review. The follow-up assessment was done by either phone or direct interviews [24]. #### 3.3.5. Occlusion Investigating changes in occlusion brought about by RME devices was not the primary objective in the studies included in this systematic review. However, most studies obtained dental casts to check occlusion along with intermolar, interpremolar, and intercanine distances. They revealed that occlusion characteristics did not affect the outcome. Furthermore, it was found that the RME device can be used as an alternative method to improve the NE in patients with a normal bucco-lingual relationship of the posterior teeth with no detriment to the occlusion [23].It was noticeable that reporting changes in occlusion was not consistent between studies. Hyla-Klekot et al. [27] reported the percentages of malocclusion and unilateral or bilateral crossbites in the patients included in the study. They showed that the most common malocclusion was a class II (35%) which was often associated with the presence of a deep bite (33%), whereas a posterior crossbite was reported in 14% of the individuals. The least common malocclusions were the class III (4%) and the open bite (2%). They concluded that the main aim of RME treatment was not to correct the malocclusion but to only reduce the NE. Similarly, Bazargani et al. [23] reported percentages of malocclusions and crossbites. Only two of the 34 subjects included in their study had posterior crossbites; 26 patients (76%) had an Angle Class I, which included the two crossbite cases; 7 (21%) had an Angle Class II with a mean overjet of 5.6 mm; and 1 (3%) had an Angle Class III. They concluded that no untoward impact could be observed on the occlusion in the long-term, thus corroborating the above finding that patients with normal occlusal features can be treated with a rapid maxillary expansion to improve their NE condition. Al-Taai et al. [25] reported patients with different degrees of crowding and only 2 out of 19 patients had a crossbite. They did not report on the skeletal class or Angle classification of their sample.Such a variation in reporting occlusal changes was expected since the primary focus of the studies included in this review was the effect of an RME on the nocturnal enuresis. ### 3.4. 
Evaluating the Strength of Evidence Provided by This Review The overall quality of evidence provided by this review for the main outcome measure, i.e., a decrease in the number of wet nights per week following the treatment with an RME device, was found to be very low. This was due to the moderate to critical risk of bias across the included studies except one, which was of low risk of bias, the small sample sizes investigated by the majority of studies, and findings that were not significant from a clinical point of view (Table 2).

Table 2: Rating the overall quality of evidence according to the GRADE approach.

| Outcome | No. of participants | Risk of bias | Indirectness | Imprecision | Inconsistency | Publication bias | Overall quality of evidence |
| --- | --- | --- | --- | --- | --- | --- | --- |
| A reduction in the number of wet nights per week | 129 | Serious^a | Not serious^b | Serious^c | Not serious^d | Not suspected^e | Very low ⊕〇〇〇 |

^a Two non-RCTs were ranked of moderate ROB and one non-RCT was ranked of serious ROB. ^b All included studies were similar in terms of the inclusion criteria of participants, interventions (RME), and the primary outcome measures (the number of wet nights per week). ^c The total number of participants for the primary outcome was very small (129). In addition, although the best-quality study [26] reported a statistically significant decrease in the number of wet nights/week during the 2 weeks following the treatment with an RME, the difference between the intervention and control groups was not statistically significant. ^d All studies reported a similar pattern and magnitude of effect in the main outcome measure between the intervention and control groups. ^e A very comprehensive search of multiple sources was carried out. No clinical trials were found that had been registered in trial registry websites but not published. Studies with positive and negative findings were published and included.
This was due to the moderate to critical risk of bias across the included studies (except one, which was at low risk of bias), the small sample sizes investigated by the majority of studies, and findings that were non-significant from a clinical point of view (Table 2).

Table 2 Rating the overall quality of evidence according to the GRADE approach.

| Outcome | No. of participants | Risk of bias | Indirectness | Imprecision | Inconsistency | Publication bias | Overall quality of evidence |
|---|---|---|---|---|---|---|---|
| A reduction in the number of wet nights per week | 129 | Serious (a) | Not serious (b) | Serious (c) | Not serious (d) | Not suspected (e) | Very low ⊕〇〇〇 |

(a) Two non-RCTs were ranked at moderate ROB and one non-RCT was ranked at serious ROB. (b) All included studies were similar in terms of the inclusion criteria of participants, interventions (RME), and the primary outcome measure (the number of wet nights per week). (c) The total number of participants for the primary outcome was very small (129). In addition, although the best quality study [26] reported a statistically significant decrease in the number of wet nights/week during the 2 weeks following the treatment with an RME, the difference between the intervention and control groups was not statistically significant. (d) All studies reported a similar pattern and magnitude of effect in the main outcome measure between the intervention and control groups. (e) A very comprehensive search of multiple sources was carried out; no clinical trials were found that had been registered on trial registry websites but not published, and studies with both positive and negative findings were published and included.

## 4. Discussion

NE is a stressful condition that immensely affects the child's emotional wellbeing, which is further reflected in their quality of life, self-esteem, and school performance; such drawbacks can improve significantly with successful treatment [11]. It has been reported that NE has an annual spontaneous cure rate of 15% [6], and, to date, there is no clinically approved treatment modality for the NE condition; the commonly used treatments produce only slight improvement and are thus considered of minimal efficacy. Therefore, investigators have started focusing on alternative treatment options. One of the possible suggested causes of NE is an upper airway obstruction [9]. Moreover, up to 80% of enuretic patients have concurrent sleep apnea [13]. Thus, it was only logical to consider palatal expansion as a potential solution to NE in young patients. In this systematic review, we found that the most commonly used and effective device for expansion is the RME device with a hyrax screw soldered to orthodontic bands on the first permanent molars. This orthodontic device expands the maxilla by separating the mid-palatal suture over a period of 10–14 days, with the midline screw activated twice daily to achieve a total daily expansion of 0.5 mm [23–27]. The endpoint of expansion was defined as the point at which the palatal cusps of the upper first permanent molars came into occlusal contact with the buccal cusps of the lower first permanent molars [23–26]. Although examining the effect of the RME on occlusion was not a primary objective of our review, it is worth mentioning that the RME device can be used as an alternative method to improve the NE condition in patients with a normal bucco-lingual relationship of the posterior teeth with no detriment to the occlusion [23].
Furthermore, the findings of the included studies show that the type of malocclusion has no bearing on the improvement of NE achieved with RME devices. This means that, if it were proved effective in curing NE, the RME treatment modality could be adopted as an alternative option for young patients who suffer from NE and did not respond to conventional management, regardless of the features of their occlusion. A reduction in the nocturnal enuresis following the use of an RME device was reported in all included studies, with varying rates and methods of reporting such an improvement. The average rate of becoming completely dry 1 year after the treatment with an RME device was in the range of 0–60%. The results of this systematic review broadly agree with the findings of a previous systematic review [22], which concluded that rapid palatal expansion in the management of NE had a success rate of 31% (average rate of becoming completely dry 1 year after treatment with an RME) and thus might be contemplated when other management approaches have failed. However, the latter study [22] provided weak scientific evidence because it included only inherently low-quality study types (case series) and performed a very limited search of electronic databases (PubMed and Embase, with additional articles from Google Scholar) without additional searching of relevant journals or the grey literature. The current systematic review represents the literature more accurately since it (1) included only randomized and nonrandomized clinical trials, providing a higher level of scientific evidence, and (2) identified all relevant articles using multiple online databases and further expanded the search to include hand searching of four orthodontic journals and the grey literature. The results of this systematic review may support the use of RME devices in the treatment of the NE condition as a viable option when commonly used treatment modalities have failed, for the following reasons: (1) a short active treatment duration of 10–14 days, (2) RME devices are considered minimally invasive compared to other treatment modalities, (3) RME devices can be used to correct pre-existing transverse occlusal discrepancies such as unilateral or bilateral crossbites, and (4) RME devices are well tolerated by patients with minimal side effects.

### 4.1. Strengths and Limitations

To our knowledge, the current systematic review is the most up-to-date review on the topic, was conducted according to the PRISMA guidelines, and presents the best available evidence in the literature. We also followed meticulous and strict inclusion/exclusion criteria to ensure that we studied solely the effect of the RME treatment approach on the NE condition, minimizing confounding variables such as systemic diseases. We limited the included study designs to RCTs and CCTs to minimize biases inherent in other study types. Furthermore, two of the studies included in our systematic review [23, 24] reported the use of polysomnographic registration along with rhinomanometry and acoustic rhinometry to measure nasal airway patency, airflow, and oxygen saturation, in contrast to the previous systematic review by Poorsattar-Bejeh Mir et al. [22], which lacked studies that performed polysomnographic registration. One of the limitations of our systematic review was the lack of a parallel control group in 2 of the included studies.
Another limitation was the low overall sample size of participants. Moreover, some information, such as the familial history of enuresis and the exact and average bedwetting frequency per night and per week, was not reported in all studies. Furthermore, limiting the language to English was a limitation, although it is unlikely that a high-quality article would have been published in a non-English language. Finally, it was not possible to synthesize the data of all included studies quantitatively in a meta-analysis because of the heterogeneity among the included studies in terms of study design and reported outcome measures, and because all but one study were at least at moderate risk of bias.

## 5. Conclusion

The use of the RME in patients with NE resulted in a reduction in the number of wet nights per week compared with no intervention, but the difference between the two groups was not statistically significant. More well-designed RCTs are required to form a definitive conclusion.

---
*Source: 1004629-2021-06-03.xml*
2021
# A Case Study of Energy-Saving and Frost Heave Control Scheme in Artificial Ground Freezing Project

**Authors:** Song Zhang; Xiao-min Zhou; Jiwei Zhang; Tiecheng Sun; Wenzhu Ma; Yong Liu; Ning Yang
**Journal:** Geofluids (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1004735

---

## Abstract

In large-scale shallow buried artificial ground freezing (AGF) engineering, frost heave control and energy saving have always been the most critical problems. Through research and analysis, this manuscript points out the applicability and advantages of intermittent freezing in large-scale shallow buried freezing projects and reports in situ tests for the freezing project of a metro entrance. The test results show that intermittent freezing can effectively control the surface frost heave, but with a certain hysteresis. The freezing wall thickness, average temperature, and optimization scheme during the freezing process are discussed. The results show the following: (1) Intermittent freezing reduces both the thickness and the average temperature of the freezing wall. (2) The stop-freezing scheme used in intermittent freezing is not suitable for frost heave control during the excavation stage. Therefore, it is necessary to establish a freezing cycle scheme with precise temperature control that restrains frost heave while ensuring the bearing capacity. Based on the above conclusions, a dual-circuit liquid supply scheme, a specific brine temperature control scheme, and their application method are proposed. Through this method, the development of the freezing front can be effectively controlled, achieving precise and controllable freezing. This research can provide practical technical guidance for frost heave control in similar AGF projects.

---

## Body

## 1. Introduction

The artificial ground freezing (AGF) method is a temporary ground reinforcement method, which reduces the soil temperature by circulating a low-temperature refrigerant (brine or liquid nitrogen) in the ground to build a frozen curtain [1]. Because it can provide a safe and reliable working space for underground engineering, it is widely used in many engineering fields such as mine shafts, tunnels, and foundation pits [2–6]. The freezing of water into ice causes volume expansion and, at the same time, water continuously migrates from the unfrozen soil to the freezing front, so continuous frost heave occurs as the frozen curtain forms [7, 8]. Frost heave causes damage such as surface uplift, pipeline breakage, and structural deformation. Therefore, frost heave has always been one of the most concerning issues in artificial freezing construction. With the continuous development of China's urban infrastructure in recent years, many projects need to be reinforced by the artificial ground freezing method, and many of these projects have a shallow depth, a large volume, and a long freezing time [9, 10]. The research on soil frost heave can be traced back to the early part of the last century; at that time, the frost heave problem mainly concerned the deformation of naturally frozen soil or of road and railway subgrades [11–13].
Since then, with the increase in artificial ground freezing projects, especially in urban areas that are sensitive to frost heave, more research has been conducted on frost heave caused by artificial ground freezing [7]. At present, controlling the frost heave of natural formations is relatively simple: controlling the cold energy input [14], reducing the frost heave sensitivity of the soil [15], or limiting water migration can effectively reduce frost heave deformation. However, in artificial freezing projects, the input of cold energy is the key to constructing a frozen curtain with sufficient strength and bearing capacity, so the cold energy input cannot simply be cut off. Reducing the sensitivity of the soil or the migration of water requires large-scale soil improvement, which is limited by engineering conditions, and the operating conditions for stratum improvement are often unavailable. In current construction practice, soil pressure relief is more effective, and frost heave deformation can be reduced through long-term soil extraction and pressure relief [16–18]. However, this method also has some problems. First, the pressure relief holes must be arranged on the upper side of the frozen soil area to control the surface deformation effectively, yet many underground projects do not have enough space there. Second, a pressure relief hole freezes easily during the freezing process, so it must be used in conjunction with a heating hole, which further increases the space required above the frozen soil area. Therefore, this method has certain limitations. Active heating to control the frozen curtain within the frozen soil area has also been used for frost heave control [19–21]. However, it still has many problems, such as occupying headspace and a poor control effect; in large-scale freezing projects in particular, the lack of drilling deflection control and accurate measurement makes it difficult to evaluate the freezing effect, and active heating can easily damage the frozen curtain. In addition, this method wastes a great deal of energy, so it is currently not widely used. In summary, it is necessary to establish a method that can save energy and effectively control frost heave. Based on this, Zhou [22] proposed intermittent freezing to prevent frost heave. In recent years, the design idea of freezing on demand (FOD) based on intermittent freezing has been derived [23]. A series of experiments and numerical calculation studies have been carried out for the intermittent freezing method, and its effect on frost heave control and its implementation plan have been discussed [24, 25]. However, intermittent freezing is rarely used in actual projects, especially in large-scale freezing projects. Therefore, this manuscript analyzes the temperature field and displacement of a shallow buried large-scale project to discuss the feasibility and effect of intermittent freezing in actual engineering. To address the problem of poor temperature control in intermittent freezing, a new freezing system and freezing pipe design scheme are proposed, and the application of intermittent freezing in actual engineering is optimized. The findings provide practical guidance for frost heave control and energy saving in similar projects.

## 2. Geology and Freezing Technology
### 2.1. Freezing Scheme

The 2# entrance of Hualin Temple Station of Guangzhou Metro Line 8 is reinforced by the artificial ground freezing method. The length of the reinforced area is 9.2 m, the height is 6.04 m, and the width is 7.7 m. The minimum overburden thickness above the frozen curtain is 1.35 m, and the design thickness of the frozen curtain is 2.5 m. Because the buried depth of the top is very shallow, the freezing process would produce considerable frost heave. To limit frost heave at the ground surface, the pipe shed was adopted as the main support and the freezing reinforcement as the auxiliary, and the frozen curtain in this area was adjusted by 1.0 m. The frozen curtain reinforcement range and ground conditions are shown in Figure 1. From top to bottom in the figure are <1> Miscellaneous Fill, <2-1A> Sea-Land Interaction Sedimentary Silt Layer, <2-2> Sea-Land Interaction Sedimentary Silty Fine Sand, and Silty Fine Sand Layer.

Figure 1 Design of freezing project and geological conditions.

According to the design, there are a total of 21 pipe-shed freezing pipes, 147 freezing pipes, and 14 temperature measuring pipes, with a total freezing length of 1409.9 m. The 21 shed pipes are Φ168×8 mm seamless steel pipes, with Φ108×8 mm seamless steel pipes installed inside as the freezing pipes; cement slurry is filled between the pipe-shed pipes and the freezing pipes. The arrangement of the freezing pipes is shown in Figure 2.

Figure 2 (a) The main drilling working face; (b) auxiliary drilling working face; (c) profile of freezing pipes.

The project began to freeze on October 31, 2019, and reached the design freezing time on December 19, 2019. However, due to the impact of COVID-19, the excavation could not be organized, and the freezing period had to be extended to March 15, 2020. Because of the shallow buried depth and long freezing time, the ground surface frost heave and deformation had to be controlled strictly during the entire freezing process. Therefore, on January 22, 2020, the brine temperature was raised to -25°C by gradually reducing the load of the refrigerating units, and the brine flow rate was reduced to 3 m3/h to control frost heave. As of February 3, 2020, because of continuing frost heave at the ground surface, intermittent freezing operations began. During the entire intermittent freezing period, a total of five stop/run rounds were used; the stop and restart times are shown in Table 1. Full load operation started on February 26, 2020, to prepare for excavation.

Table 1 Intermittent freezing plan.

| Date | State | Date | State |
|---|---|---|---|
| 2020/2/5, 11:00 | Stop | 2020/2/9, 10:00 | Run |
| 2020/2/11, 14:30 | Stop | 2020/2/15, 17:40 | Run |
| 2020/2/20, 9:00 | Stop | 2020/2/21, 9:00 | Run |
| 2020/2/22, 9:00 | Stop | 2020/2/23, 9:00 | Run |
| 2020/2/24, 9:00 | Stop | 2020/2/25, 9:00 | Run |
| 2020/2/26, 16:00 | Full load operation | | |

### 2.2. Design of Monitoring Points

A total of 14 temperature measuring pipes were designed for this project, and their specific positions are shown in Figure 3. Each temperature measuring pipe has 2~5 measuring points, depending on the depth. The measuring points use DS18B20 temperature sensors with a resolution of 0.0625°C. A series of displacement monitoring points were also set up on the ground, with a spacing of 5.0 m between adjacent points. Because some points were destroyed by pedestrians and vehicles, only three points remained in the end. Their positions are shown in Figure 4.
All test data were recorded once a day.

Figure 3 (a) Temperature measuring pipes in the main working face; (b) temperature measuring pipes in the auxiliary working face; (c) temperature measuring pipes in the lateral section.

Figure 4 Position of displacement monitoring points.
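As a quick check of the stop/run rhythm in Table 1, the short sketch below (a minimal illustration added here, not part of the original monitoring scheme) encodes the published switching times and prints the length of each stop and run interval; the dates and states are taken directly from Table 1.

```python
from datetime import datetime

# Switching times of the intermittent freezing plan (Table 1).
schedule = [
    ("2020-02-05 11:00", "Stop"), ("2020-02-09 10:00", "Run"),
    ("2020-02-11 14:30", "Stop"), ("2020-02-15 17:40", "Run"),
    ("2020-02-20 09:00", "Stop"), ("2020-02-21 09:00", "Run"),
    ("2020-02-22 09:00", "Stop"), ("2020-02-23 09:00", "Run"),
    ("2020-02-24 09:00", "Stop"), ("2020-02-25 09:00", "Run"),
    ("2020-02-26 16:00", "Full load"),
]
events = [(datetime.strptime(t, "%Y-%m-%d %H:%M"), state) for t, state in schedule]

# Each interval runs from one switching event to the next; its state is the
# state entered at the starting event.
for (t0, state), (t1, _) in zip(events, events[1:]):
    hours = (t1 - t0).total_seconds() / 3600.0
    print(f"{state:>9s}: {t0:%m-%d %H:%M} -> {t1:%m-%d %H:%M} ({hours:5.1f} h)")
```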
## 3. Temperature and Displacement In Situ Test

### 3.1. Temperature of Refrigerated Coolant

Figure 5 shows the time history curve of the brine temperature during the entire freezing process. Corresponding to the timeline in Section 2.1, the entire freezing process can be divided into five stages: stage 1 is the design freezing stage, stage 2 is the extended freezing stage, stage 3 is the flow and temperature control stage, stage 4 is the intermittent freezing stage, and stage 5 is excavation with a return to normal freezing. At the end of stage 2, a power outage caused a rapid temperature rise; the rest of the time was normal. The inlet-outlet brine temperature difference is plotted in Figure 6. In early stage 1, the brine temperature dropped rapidly: during the 13 days from October 31 to November 12, 2019, the temperature dropped at a rate of about 3°C/d, while the temperature difference remained at 2°C. Then the rate of temperature drop slowed, and the temperature difference fell to 1.5°C. After November 12, there was a rapid drop in temperature that lasted 2 to 3 days (temperature drop rate: 1.5°C/d), and the temperature difference increased rapidly to 2°C. Subsequently, the brine temperature was basically stable, while the temperature difference continued to decrease. At the end of stage 1, the brine temperature was close to -30°C, and the temperature difference was 0.5°C. In the early part of stage 2, the freezing temperature was maintained at its original level, while the temperature difference was further reduced. In the middle of stage 2, the temperature difference remained consistently low, which was caused by the commissioning of refrigeration equipment and the rise in brine temperature. At the end of stage 2, the brine temperature rose rapidly because of the power outage; the brine circulation also stopped, which caused the return-loop temperature to increase rapidly and then gradually recover after power was restored. In stage 3, the temperature and flow rate of the brine were gradually adjusted, so the temperature difference was further reduced. When the brine temperature rose above -25°C, the temperature difference became negative, indicating that the freezing system was in a state of reverse cooling. In stage 4, intermittent freezing was started because of the poor effect of stage 3. In the stop-freezing phases the flow of brine did not stop, so the brine temperature rose more slowly than in stage 3; the peak brine temperature remained in the range of -12~-8°C after each freezing stop. In this stage, the brine temperature difference was negative most of the time, which indicates that the freezing system was in a state of reverse cooling throughout the stage. In stage 5, the freezing system returned to normal, and the brine temperature decreased.
However, at this time the edge of the frozen curtain had approached an energy balance, and the temperature difference continued to fluctuate around 0°C.

Figure 5 The temperature of brine in the main pipe.

Figure 6 The temperature difference of the main pipe.

### 3.2. Temperature of Measuring Pipes

Figure 7 shows the temperature history curves of temperature measuring pipes C1 and C2 at the top of the frozen curtain; the distance from each temperature measuring pipe to the axis of the nearest freezing pipe is also given in the figure. The temperature of the measuring points dropped significantly in stages 1 and 2. The power outage at the end of the second stage also had an obvious influence on the measuring point temperatures; because C2 was closer to the freezing pipes, its temperature rose most obviously. In stage 3, the temperature change was negligible. In stage 4, the temperature of each measuring point rose to some extent: the temperature of C1 rose by 2.9~3.2°C and the temperature of C2 rose by 6°C. After entering the excavation stage (stage 5), the temperature of C1, which was located on the outer side of the frozen curtain, maintained its original state, while the temperature of C2 rose rapidly, by about 30°C.

Figure 7 Temperature time history curves of C1 and C2.

Figure 8 shows the temperature curves of temperature measuring pipes C13 and C14. As with the pipes in Figure 7, the temperature decreased steadily in stages 1 and 2 and rebounded at the time of the power outage. Afterward, there was no significant change in stage 3, during which the temperature was stable. In stage 4, a temperature rise similar to that of C2 in Figure 7 appeared. In stage 5, the temperature continued to drop.

Figure 8 Temperature curves of C13 and C14.

### 3.3. Displacement of Ground

The ground displacement curves drawn from the measurement points arranged in Figure 4 are shown in Figure 9, and the ground displacement monitoring data from the freezing pipe drilling process are added to the figure as well. Because of the upward slope of the frozen wall at the top, the distance from the ground to the freezing front at DB3 is less than at DB1 and DB2. Due to the loss of soil during the drilling stage, the displacement of each measuring point shows a negative value of 10-20 mm; the freezing pipe holes were grouted at a later stage to keep the ground stable, and the displacement recovered by a small amount. When freezing began, the ground displacement increased approximately linearly. Affected by the power outage at the end of stage 2, DB2 and DB3, which are closer to the frozen curtain, dropped significantly. In stage 3, the frost heave was not effectively controlled. In stage 4, intermittent freezing had an obvious effect on frost heave control. However, there is a hysteresis in the control of frost heave by intermittent freezing: the first freezing stop was on February 5, 2020, but the frost heave at DB3 kept growing until February 7, while the frost heave at DB1 and DB2 kept growing until February 10. Moreover, the shallower the buried depth, the stronger the reaction to intermittent freezing: DB1 and DB2 show a cumulative decrease of about 3 mm due to intermittent freezing, while the cumulative decrease at DB3 is 40.6 mm. At the same time, DB1 and DB2 showed a rising trend after the intermittent freezing ended on February 25, but DB3 only showed a rising trend on March 7, which was 9 days later than DB1 and DB2.
In stage 5, with the normal operation of the freezing system, the ground began to rise again.

Figure 9 Displacement of ground.

Comparing the measured data above, the closer a point is to the frozen curtain, the more sensitive the ground there is to frost heave. This is manifested not only at the beginning of intermittent freezing, where the closer the distance, the earlier the frost heave stops, but also after freezing is resumed, where the closer the distance, the earlier the frost heave reappears. Therefore, the buried depth of the frozen curtain is the key factor determining the intermittent freezing effect.

The hysteresis of the ground displacement arises because the stop-freezing time in intermittent freezing is very short. During a single stop, the frozen soil has not thawed, or only a small amount has thawed; only after repeated intermittent freezing does the frozen soil degrade significantly, and settlement after thawing is itself a slow process. Therefore, it takes time for the surface deformation to respond to intermittent freezing, and it must be ensured that the frozen soil actually thaws. This causes the delay of thaw settlement at the ground surface.

Moreover, at the end of intermittent freezing, the soil continues to settle for a period because the thawing deformation has not yet been fully released. As shown in Figure 10, when the intermittent freezing stops, the soil thaws to a certain extent and the freezing front retreats by h1. As the stop time is extended, the soil continues to settle after thawing. Then the freezing system is reactivated; however, it takes some time for the low-temperature brine to affect the freezing front, after which the frozen front extends its thickness by h2 again but remains h3 away from the original frozen front. When the frost heave h2 in the refreezing area is less than the thaw settlement h3 above it, the ground surface will maintain its subsidence trend, but the subsidence rate will decrease. This also delays the reappearance of frost heave at the ground surface.

Figure 10 Schematic diagram of the intermittent freezing process.
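The argument above reduces to a simple bookkeeping rule: after a stop/run cycle the surface keeps settling as long as the renewed frost heave h2 is smaller than the thaw settlement h3 accumulated above the retreated front. The sketch below is only an illustrative reading of Figure 10; the millimetre values passed in are hypothetical, not measured data.

```python
def surface_trend(h2_refreeze_heave_mm: float, h3_thaw_settlement_mm: float) -> str:
    """Qualitative surface response after one intermittent-freezing cycle (Figure 10).

    h2_refreeze_heave_mm  -- heave produced when the front re-advances by h2
    h3_thaw_settlement_mm -- settlement released by thawing over the distance h3
    """
    net = h2_refreeze_heave_mm - h3_thaw_settlement_mm
    if net < 0:
        return f"surface keeps settling (net {net:.1f} mm), at a reduced rate"
    if net > 0:
        return f"surface heaves again (net {net:+.1f} mm)"
    return "surface is roughly stationary"

# Hypothetical values for illustration only.
print(surface_trend(2.0, 5.0))   # h2 < h3: settlement continues
print(surface_trend(6.0, 5.0))   # h2 > h3: heave resumes
```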
## 4. Discussion

### 4.1. Evolvement Law of Frozen Curtain Thickness

According to the analysis in Section 3.3, the position of the freezing front is closely related to the frost heave. Therefore, the temperature measurement data are used to calculate the position of the freezing front on the outer side of the top of the frozen curtain. The temperatures of C12 and C7, which are closest to the DB1 and DB3 areas, are selected for this calculation. The frozen curtain thickness is calculated as [26]

$$m_1(x,y)=\frac{1}{2}\ln\!\left[2\left(\cosh\frac{2\pi y}{l}-\cos\frac{2\pi x}{l}\right)\right],\tag{1}$$

$$\xi=\frac{l}{\pi}\cdot\frac{T(x,y)\,\ln\dfrac{l}{2\pi r_0}+T_{CT}\,m_1(x,y)}{T_{CT}-T(x,y)},\tag{2}$$

where $T_{CT}$ is the temperature of the freezing pipe surface, $\xi$ is the thickness of the frozen curtain, $r_0$ is the radius of the freezing pipe, $x$ and $y$ are the coordinates of the temperature measuring point relative to the row of freezing pipes, and $l$ is the spacing of the freezing pipes. The x-direction is the direction along the line connecting the freezing pipes, and the y-direction is perpendicular to it. The relevant parameters used in this calculation are listed in Table 2.

Table 2 Parameters for calculation of the frozen curtain.

| Parameter | Value | Parameter | Value |
|---|---|---|---|
| T_CT | -28°C | r_0 | 0.054 m |
| l of C12 | 1.05 m | l of C7 | 1.15 m |
| m_1 of C12 | 2.6 | m_1 of C7 | 2.17 |

Since this formula applies only after the frozen curtain has closed, the calculation of the frozen curtain thickness starts from stage 2. The calculated frozen curtain thickness and the ground frost heave are plotted in Figure 11. According to the figure, the influence of the stage 3 temperature control on the thickness of the frozen curtain is negligible: in this stage, the frozen curtain calculated from C12 and C7 maintains its original thickness. During the intermittent freezing of stage 4, the frozen curtain thaws noticeably.

Figure 11 Scheme of frozen curtain thickness and ground displacement.

Because the frozen curtain around C12 is farther from the ground surface, it is less disturbed by the surface temperature, and the frozen soil there thaws more slowly. There were two obvious thaw episodes in stage 4, from February 5 to February 9 and from February 11 to February 17, during which the frozen curtain thinned by 11 mm and 7.3 mm, respectively. The soil above C7 is shallower, and the frozen curtain there thawed three times during stage 4, from February 5 to February 9, February 11 to February 16, and February 21 to February 26, thinning by 20.6 mm, 14.9 mm, and 6.6 mm, respectively. In the other periods, the frozen curtain continued to expand or remained stable, and the change of the freezing front verifies the hysteresis assumption made for the ground displacement in Section 3.3 (compare the shaded areas in Figure 11).

Based on the stage 3 data, the scheme of controlling frost heave through a small increase of the brine temperature was not successful. The intermittent freezing scheme adopted in stage 4, by contrast, effectively controlled the frost heave, although within stage 4 the control was better in the early part and poorer later on. This indicates that factors such as the brine temperature and the stop/run durations are the key parameters of intermittent freezing. In this case, the treatment of stopping refrigeration while keeping the brine circulating cannot effectively control the brine temperature, so the movement of the freezing front cannot be accurately controlled.
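For a concrete numerical reading of Equations (1) and (2) with the Table 2 parameters, a minimal sketch is given below. Two caveats: the formulas were reconstructed here from the surrounding definitions and may differ in detail from the original reference [26], and the point temperature passed to `curtain_thickness` is a hypothetical value, not a measurement from pipes C12 or C7.

```python
import math

def m1(x: float, y: float, l: float) -> float:
    """Geometric factor of Eq. (1) for a single row of freezing pipes with spacing l."""
    return 0.5 * math.log(2.0 * (math.cosh(2.0 * math.pi * y / l)
                                 - math.cos(2.0 * math.pi * x / l)))

def curtain_thickness(T_point: float, m1_point: float, l: float,
                      r0: float = 0.054, T_ct: float = -28.0) -> float:
    """Frozen-curtain thickness xi from Eq. (2), given the temperature T_point (degC)
    measured at a point whose geometric factor is m1_point."""
    return (l / math.pi) * (T_point * math.log(l / (2.0 * math.pi * r0))
                            + T_ct * m1_point) / (T_ct - T_point)

# Sanity check on Eq. (1): midway between two pipes at y = 0, m1 reduces to ln 2.
assert abs(m1(1.05 / 2.0, 0.0, 1.05) - math.log(2.0)) < 1e-12

# Table 2 parameters: pipe spacing l and geometric factor m1 for pipes C12 and C7.
pipes = {"C12": {"l": 1.05, "m1": 2.6}, "C7": {"l": 1.15, "m1": 2.17}}

T_measured = -5.0  # hypothetical point temperature (degC), for illustration only
for name, p in pipes.items():
    xi = curtain_thickness(T_measured, p["m1"], p["l"])
    print(f"{name}: estimated frozen-curtain thickness = {xi:.2f} m")
```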
### 4.2. The Average Temperature of the Frozen Curtain

The average temperature of the frozen curtain is a key indicator of its strength, and the soil temperature field changes significantly during the intermittent freezing process. According to Hu's analytical solution [27, 28], the average temperature of the frozen curtain is calculated as

$$\bar{T}=T_{CT}\,\frac{\pi\xi/l-\ln 2}{\ln\dfrac{l}{2\pi r_0}+\pi\xi/l}.\tag{3}$$

According to this formula, the average temperature of the frozen curtain around C7 and C12 is shown in Figure 12. In stage 2, the average temperature dropped significantly; however, affected by the stoppage at the end of stage 2, there was a rapid rise. In stage 3, a downward trend formed again after the freezing system was turned back on. Affected by intermittent freezing, the average temperature rose rapidly in stage 4. In stage 5, with the normal operation of the freezing system, the average temperature dropped steadily again.

Figure 12 The average temperature in different zones.

According to the average temperature curve of stage 4, the variation of the average temperature during the first and second intermittent freezing cycles matches the stop and run times. In these two cycles, the temperature dropped slightly in the first two days but then dropped rapidly (especially in the second cycle). For the third to fifth cycles, the temperature continued to rise because of the short switching interval between stop and run; however, the temperature rise rate was significantly lower than in the first and second cycles. Taking the average temperature around C7 as an example, the temperature rise rate of the first intermittent freezing cycle was 0.4°C/d and that of the second cycle was 0.34°C/d, while the average rise rate of the next three cycles fell to 0.11°C/d (with the running time included in the calculation).

### 4.3. Optimization of Intermittent Freezing

In the above case, the intermittent freezing method consisted of continuing to circulate the brine after refrigeration stopped. This method does not control the brine temperature, and the slow temperature rise of the brine adds to the hysteresis of the frozen curtain thickness. At the same time, this method cannot accurately control the movement of the freezing front. In order to reduce the hysteresis reflected by the frozen curtain and realize precise controllability of intermittent freezing, this manuscript designs a dual-circuit liquid supply scheme; the corresponding freezing pipe and freezing system transformation is shown in Figure 13.

Figure 13 The dual-circuit liquid supply scheme.

In the new scheme, each group of freezing pipes has two inlets and two outlets, which are connected to the freezing system and the temperature control system, respectively. Two brine systems are set up at the freezing station: one is a conventional freezing system, and the other is a brine temperature control system. The optimized intermittent freezing scheme is used as follows:

(Step 1) When the ground displacement reaches the prewarning value, preparations for intermittent freezing are carried out. Start the temperature control system and bring the hot brine tank to -5°C.

(Step 2) Close the freezing system circulation pipeline, start the temperature control system pipeline, and perform the first high-temperature cycle in the freezing system.
At the same time, stop the brine circulation in the freezing system pipeline and the brine tank so that the brine in that circuit stays at a low temperature. The duration of this stage is 3~4 days.

(Step 3) Adjust the pipeline valves, start the low-temperature freezing system, and carry out the low-temperature cycle.

(Step 4) Transfer to the normal intermittent freezing stage, so that the brine circulates repeatedly between the design intermittent freezing temperature Tu and the design minimum brine temperature Td.

The above method can effectively control the brine temperature during the entire intermittent freezing process, and the continuous switching of the dual-circulation system enables rapid and precise changes of the brine temperature. In order to compare the behavior before and after the optimization, a numerical model based on the design of the above case was built, as shown in Figure 14. The parameters used in the calculation are listed in Table 3. The brine temperature was set to -28°C in the early stage, and the two schemes were adopted in the later stage. The brine temperature curves and frozen curtain thicknesses of the two schemes are shown in Figure 15.

Figure 14 The numerical model and boundary conditions.

Table 3 Parameters for numerical simulation.

| Material | Density (kg/m3) | Heat conductivity coefficient (W/(m·°C)) | Specific heat (J/(kg·°C)) | Latent heat (kJ/kg) |
|---|---|---|---|---|
| Soils | 1571 | 4.3 | 816 | — |
| Water | 1000 | 0.6 | 4200 | 334.88 |
| Ice | 917 | 2.2 | 2100 | — |

Figure 15 The comparison of different freezing schemes.

As shown in the figure, the two intermittent freezing schemes do not affect the thickness of the frozen wall in the first cycle; at this stage, mainly the temperature field distribution inside the frozen soil is adjusted. After the first cycle of intermittent freezing, the frozen curtain behaves differently: before optimization it continues to expand slowly, whereas after optimization it remains within a stable range and does not change. Since there is a brine exchange pipeline between the optimized temperature control system and the freezing system, the high-temperature state of intermittent freezing can be strictly controlled. Figure 16 shows the development of the frozen curtain at different Tu. As shown in the figure, the expansion trend of the frozen curtain is governed by Tu, and therefore the frost heave and the bearing capacity are controlled as well. Consequently, the precise and controllable intermittent freezing method can also be used in the excavation stage; the difference in frozen curtain thickness generally appears from the end of the second intermittent freezing cycle.

Figure 16 The comparison of different Tu.
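The switching logic of Steps 1–4 can be summarized as a small state machine that alternates the pipe supply between the warm temperature-control circuit (held near the design intermittent temperature Tu) and the conventional low-temperature circuit (held near Td) once the ground-displacement prewarning value has been reached. The sketch below is a hypothetical outline of that logic under the stated assumptions; the class, thresholds, and sample readings are illustrative and are not taken from the project's actual control documents.

```python
from dataclasses import dataclass

@dataclass
class DualCircuitController:
    """Hypothetical bang-bang selector for the dual-circuit liquid supply scheme.

    T_u: design intermittent (upper) brine temperature, served by the warm circuit.
    T_d: design minimum brine temperature, served by the conventional cold circuit.
    """
    T_u: float = -5.0
    T_d: float = -28.0
    circuit: str = "freezing"  # "freezing" (cold) or "temperature control" (warm)

    def select(self, brine_temp: float, heave_mm: float, prewarning_mm: float) -> str:
        """Choose which circuit supplies the freezing pipes for the next interval."""
        if heave_mm < prewarning_mm:
            self.circuit = "freezing"              # heave acceptable: keep freezing
        elif brine_temp <= self.T_d:
            self.circuit = "temperature control"   # cold limit reached: warm cycle (Step 2)
        elif brine_temp >= self.T_u:
            self.circuit = "freezing"              # warm limit reached: cold cycle (Steps 3-4)
        return self.circuit

# Illustrative use with made-up monitoring samples (brine temp in degC, heave in mm).
ctrl = DualCircuitController()
for temp, heave in [(-28.0, 8.0), (-28.0, 12.0), (-20.0, 12.5), (-5.0, 12.0), (-14.0, 11.0)]:
    print(f"T={temp:6.1f} degC, heave={heave:4.1f} mm -> supply: {ctrl.select(temp, heave, 10.0)}")
```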
In order to compare the difference before and after the optimization, a numerical calculation model based on the design of the above case was built; the model and its boundary conditions are shown in Figure 14, and the parameters used in the calculation are listed in Table 3. The brine temperature was set to -28°C in the early stage, and the two intermittent freezing schemes (before and after optimization) were applied in the later stage. The brine temperature curves and frozen curtain thicknesses of the two schemes are shown in Figure 15.

Figure 14 The numerical model and boundary condition.

Table 3 Parameters for numerical simulation.

| Material | Density (kg/m3) | Heat conductivity coefficient (W/(m·°C)) | Specific heat (J/(kg·°C)) | Latent heat (kJ/kg) |
| --- | --- | --- | --- | --- |
| Soils | 1571 | 4.3 | 816 | — |
| Water | 1000 | 0.6 | 4200 | 334.88 |
| Ice | 917 | 2.2 | 2100 | — |

Figure 15 The comparison of different freezing schemes.

As shown in the figure, the two intermittent freezing schemes do not affect the thickness of the frozen wall during the first cycle; at this stage, mainly the temperature field distribution inside the frozen soil is adjusted. After the first cycle of intermittent freezing, the behaviour of the frozen curtain differs: before optimization the frozen curtain continues to expand slowly, whereas after optimization the frozen wall remains within a stable range and does not change.

Since there is a brine exchange pipeline between the optimized temperature control system and the freezing system, the high-temperature state of intermittent freezing can be strictly controlled. Figure 16 shows the development of the frozen curtain at different Tu. As shown in the figure, the expansion trend of the frozen curtain is governed by Tu, and the frost heave and the bearing capacity are therefore controlled as well. The precise and controllable intermittent freezing method can thus also be used during the excavation stage, noting that the difference in frozen curtain thickness between the schemes generally appears from the end of the second intermittent freezing cycle onwards.

Figure 16 The comparison of different Tu.

## 5. Conclusion

This article focuses on frost heave control in large-scale, shallow buried underground excavation projects. Comparative analysis shows that the intermittent freezing method has a wide application range and a small footprint while saving energy effectively; it is therefore suitable for frost heave control in large-scale freezing projects. Furthermore, a systematic analysis of the application of intermittent freezing in engineering is carried out, and the following conclusions are drawn:

(1) Engineering practice has proved that intermittent freezing can effectively control surface deformation, but with an obvious hysteresis. This is mainly caused by the inability to change the brine temperature in time and by the thermal diffusion efficiency of the frozen soil. In addition, intermittent freezing significantly weakens the average temperature and thickness of the frozen wall. It is therefore necessary to design an accurate and controllable intermittent freezing method.

(2) Aiming at the problem of inaccurate brine temperature control, a dual-circuit liquid supply scheme is proposed. Accurate temperature control in the intermittent freezing stage is achieved through the transformation of the freezing system and the freezing pipes. According to numerical calculations, the new scheme can effectively control the position of the freezing front and thus control the frost heave while ensuring the bearing capacity of the frozen curtain, which improves the applicability of the intermittent freezing scheme in the excavation stage.

--- *Source: 1004735-2022-06-03.xml*
1004735-2022-06-03_1004735-2022-06-03.md
52,121
A Case Study of Energy-Saving and Frost Heave Control Scheme in Artificial Ground Freezing Project
Song Zhang; Xiao-min Zhou; Jiwei Zhang; Tiecheng Sun; Wenzhu Ma; Yong Liu; Ning Yang
Geofluids (2022)
Engineering & Technology
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2022/1004735
1004735-2022-06-03.xml
2022
# Diagnosis of Chronic Kidney Disease Using Effective Classification Algorithms and Recursive Feature Elimination Techniques

**Authors:** Ebrahime Mohammed Senan; Mosleh Hmoud Al-Adhaileh; Fawaz Waselallah Alsaade; Theyazn H. H. Aldhyani; Ahmed Abdullah Alqarni; Nizar Alsharif; M. Irfan Uddin; Ahmed H. Alahmadi; Mukti E Jadhav; Mohammed Y. Alzahrani

**Journal:** Journal of Healthcare Engineering (2021)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2021/1004767

---

## Abstract

Chronic kidney disease (CKD) is among the top 20 causes of death worldwide and affects approximately 10% of the world's adult population. CKD is a disorder that disrupts normal kidney function. Due to the increasing number of people with CKD, effective measures for its early diagnosis are required. The novelty of this study lies in developing a diagnosis system to detect chronic kidney disease. The study assists experts in exploring preventive measures for CKD through early diagnosis using machine learning techniques. It evaluates a dataset collected from 400 patients containing 24 features. The mean and the mode were used to replace missing numerical and nominal values, respectively. To choose the most important features, Recursive Feature Elimination (RFE) was applied. Four classification algorithms were applied in this study: support vector machine (SVM), k-nearest neighbors (KNN), decision tree, and random forest. All the classification algorithms achieved promising performance. The random forest algorithm outperformed all the other applied algorithms, reaching an accuracy, precision, recall, and F1-score of 100% for all measures. CKD is a serious, life-threatening disease with high rates of morbidity and mortality. Therefore, artificial intelligence techniques are of great importance in the early detection of CKD, supporting experts and doctors in early diagnosis so that progression to kidney failure can be avoided.

---

## Body

## 1. Introduction

Chronic kidney disease (CKD) has received much attention due to its high mortality rate. Chronic diseases have become a threat to developing countries, according to the World Health Organization (WHO) [1]. CKD is a kidney disorder that is treatable in its early stages but causes kidney failure in its late stages. In 2016, chronic kidney disease caused the death of 753 million people worldwide, of whom 336 million were males and 417 million were females [2]. It is called a "chronic" disease because the kidney damage begins gradually and lasts for a long time, affecting the functioning of the urinary system. The accumulation of waste products in the blood leads to other health problems, associated with symptoms such as high and low blood pressure, diabetes, nerve damage, and bone problems, which in turn lead to cardiovascular disease. Risk factors for CKD include diabetes, high blood pressure, and cardiovascular disease (CVD) [3]. CKD patients suffer from complications, especially in the late stages, which damage the nervous and immune systems. In developing countries, patients may reach the late stages, in which case they must undergo dialysis or kidney transplantation. Medical experts assess kidney disease through the glomerular filtration rate (GFR), which describes kidney function. GFR is estimated from information such as age, gender, blood test results, and other patient factors [4].
Regarding the GFR value, doctors can classify CKD into five stages. Table 1 shows the different stages of kidney disease development with the corresponding GFR levels.

Table 1 The stages of development of CKD.

| Stage | Description | Glomerular filtration rate (GFR) (mL/min/1.73 m²) | Treatment stage |
|---|---|---|---|
| 1 | Kidney function is normal | ≥90 | Observation, blood pressure control |
| 2 | Kidney damage is mild | 60–89 | Observation, blood pressure control and risk factors |
| 3 | Kidney damage is moderate | 30–59 | Observation, blood pressure control and risk factors |
| 4 | Kidney damage is severe | 15–29 | Planning for end-stage renal failure |
| 5 | Established kidney failure | ≤15 | Treatment choices |

Early diagnosis and treatment of chronic kidney disease will prevent its progression to kidney failure. The best way to treat chronic kidney disease is to diagnose it in the early stages; discovering it in its late stages leads to kidney failure, which requires continuous dialysis or kidney transplantation to maintain a normal life. In the medical diagnosis of chronic kidney disease, two tests are used to detect CKD: a blood test to check the glomerular filtration rate or a urine test to check albumin. Due to the increasing number of chronic kidney patients, the scarcity of specialist physicians, and the high costs of diagnosis and treatment, especially in developing countries, there is a need for computer-assisted diagnostics to help physicians and radiologists support their diagnostic decisions. Artificial intelligence techniques have played a role in the health sector and medical image processing, where machine learning and deep learning techniques have been applied to disease prediction and early-stage diagnosis. Artificial neural network (ANN) approaches have played a basic role in the early diagnosis of CKD. Machine learning algorithms are used for the early diagnosis of CKD, and the ANN and SVM algorithms are among the most widely used techniques. These techniques have great advantages in several diagnostic fields, including medical diagnosis. The ANN algorithm works like human neurons: it can learn how to operate once properly trained and is able to generalize and solve future problems (test data) [5]. The SVM algorithm, by contrast, depends on experience and examples to assign labels to the classes; it basically separates the data by a line that achieves the maximum distance between the class data [6]. Many factors affect kidney performance and induce CKD, such as diabetes, blood pressure, heart disease, certain foods, and family history. Figure 1 presents some factors affecting chronic kidney disease.

Figure 1 Factors affecting chronic kidney disease.

Pujari et al. [7] presented a system for detecting the stages of CKD through ultrasonography (USG) images. The algorithm works to identify fibrotic cases during different periods. Ahmed et al. [8] proposed a fuzzy expert system to determine whether the urinary system is good or bad. Khamparia et al. [9] studied a stacked autoencoder model to extract the characteristics of CKD and used Softmax to classify the final class. Kim et al. [10] proposed a genetic algorithm (GA) based on neural networks in which the weight vectors were optimized by the GA to train the NN; the system surpasses traditional neural networks for CKD diagnosis. Vasquez-Morales et al. [11] presented a model based on neural networks to predict whether a person is at risk of developing CKD. Almansour et al. [12] diagnosed a CKD dataset using ANN and SVM algorithms.
ANN and SVM reached an accuracy of 99.75% and 97.75%, respectively. Rady and Anwar [13] applied probabilistic neural networks (PNN), multilayer perceptron (MLP), SVM, and radial basis function (RBF) algorithms to diagnose CKD dataset. The PNN algorithm outperformed the MLP, SVM, and RBF algorithms. Kunwar et al. [14] applied two algorithms—naive Bayes and artificial neural networks (ANN)—to diagnose a UCI dataset for CKD. Naive Bayes algorithm outperformed ANN. The accuracy of the naive Bayes algorithm was 100%, while the ANN accuracy was 72.73%. Wibawa et al. [15] applied correlation-based feature selection (CFS) for feature selection, and AdaBoost for ensemble learning was applied to improve CKD diagnosis. The KNN, naive Bayes, and SVM algorithms were applied for CKD dataset diagnosis. Their system achieved the best accuracy when implementing a hybrid between KNN with CFS and AdaBoost by 98.1%. Avci et al. [16] used WEKA software to diagnose the UCI dataset for CKD. The dataset was evaluated using NB, K-Star, SVM, and J48 classifiers. The J48 algorithm outperformed the rest of the algorithms with an accuracy of 99%. Chiu et al. [17] built intelligence models using neural network algorithms to classify CKD. The models included a back-propagation network (BPN), generalized feed forward neural networks (GRNN), and modular neural network (MNN) for the early detection of CKD. The authors proposed hybrid models between the GA and the three mentioned models. Shrivas et al. [18] applied the Union Based Feature Selection Technique (UBFST) to choose the most important features. The selected features were diagnosed by several techniques of machine learning. The aim of the study was to reduce diagnostic time and obtain high diagnostic accuracy. Kunwar et al. [14] used Artificial Neural Network (ANN) and Naive Bayes to evaluate a UCI dataset of 400 patients. The experiment was implemented with RapidMiner tool. Naive Bayes reached a diagnostic accuracy of 100% better than ANN, which reached a diagnostic accuracy of 72.73%. Elhoseny et al. [19] presented a system for healthcare to diagnose CKD through Density Based Feature Selection (DFS) and also a method of Ant Colony Optimization. DFS removes unrelated features that have weak association with the target feature. Abdelaziz et al. [20] presented healthcare service (HCS) system, applying Parallel Particle Swarm Optimization (PPSO), to optimize selection of Virtual Machines (VMs). Then, a new model with linear regression (LR) and neural network (NN) was applied to evaluate the performance of their VMs for diagnosing CKD. Xiong et al. [21] proposed the Las Vegas Wrapper Feature Selection method (LVW-FS) to extract the most important vital features. Ravizza et al. [22] applied a model to test diabetes related to chronic kidney disease. To reduce the dimensions of high data, the Chi-Square statistical method was applied. The model predicts the state of the kidney through some features such as glucose, age, rate of albumin, etc. Sara et al. [23] applied two methods, namely, Hybrid Wrapper and Filter-Based FS (HWFFS) and Feature Selection (FS), to reduce the dimensions of the dataset and select the features associated with CKD strongly. The features extracted from the two methods were then combined, and the hybrid features were classified by using SVM classifier.The contribution of the current study lies in using Recursive Feature Elimination (RFE) technique with machine learning algorithms to develop system for detecting chronic kidney diseases. 
The contributions of this paper are summarized as follows:

(i) We used an integrated model to select the most significant representative features by using the Recursive Feature Elimination (RFE) algorithm.
(ii) Four machine learning algorithms, namely, SVM, KNN, decision tree, and random forest, were used to diagnose CKD with promising accuracy.
(iii) Highly efficient machine learning techniques for the diagnosis of chronic kidney disease can be popularized with the help of expert physicians.

## 2. Materials and Methods

A series of experiments was conducted using the machine learning algorithms SVM, KNN, decision tree, and random forest to evaluate the CKD dataset. Figure 2 shows the general structure of CKD diagnosis in this paper. In preprocessing, the mean method was used to compute the missing numerical values, and the mode method was used to compute the missing nominal values. The features most strongly associated with CKD diagnosis were then selected using the RFE algorithm, and these selected features were fed into classifiers for disease diagnosis. In this study, four classifiers were applied to diagnose CKD: SVM, KNN, decision tree, and random forest. All classifiers showed promising results for classifying a record as CKD or a normal kidney.

Figure 2 The proposed system for the diagnosis of CKD.

### 2.1. Dataset

The CKD dataset was collected from 400 patients from the University of California, Irvine Machine Learning Repository [24]. The dataset comprises 24 features divided into 11 numeric features and 13 categorical features, in addition to the class feature with the values “ckd” and “notckd” used for classification. The features include age, blood pressure, specific gravity, albumin, sugar, red blood cells, pus cell, pus cell clumps, bacteria, blood glucose random, blood urea, serum creatinine, sodium, potassium, hemoglobin, packed cell volume, white blood cell count, red blood cell count, hypertension, diabetes mellitus, coronary artery disease, appetite, pedal edema, and anemia. The diagnostic class contains two values: ckd and notckd. All features contained missing values except for the diagnostic feature. The dataset is unbalanced because it contains 250 cases of the “ckd” class (62.5%) and 150 cases of “notckd” (37.5%).

### 2.2. Preprocessing

The dataset contained outliers and noise, so it had to be cleaned up in a preprocessing stage. The preprocessing stage included estimating missing values and eliminating noise, such as outliers, as well as normalization and checking for unbalanced data. Some measurements may be missed when patients undergo tests, thereby causing missing values. The dataset contained 158 complete instances, and the remaining instances had missing values. The simplest way to handle missing values is to ignore the record, but this is inappropriate for a small dataset; instead, algorithms can be used to compute the missing values. The missing values of numerical features can be computed with a statistical measure such as the mean, median, or standard deviation, whereas the missing values of nominal features can be computed using the mode method, in which the missing value is replaced by the most common value of the feature. In this study, missing numerical features were replaced using the mean method, and the mode method was applied to replace missing nominal features.
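As an illustration of the imputation step described above, the following is a minimal sketch assuming the UCI CKD file has been loaded into a pandas DataFrame; the file name, missing-value markers, and label column name are assumptions, not details taken from the original study.

```python
import pandas as pd

# Load the CKD dataset (file name assumed; the UCI archive distributes it as an ARFF/CSV file).
df = pd.read_csv("chronic_kidney_disease.csv", na_values=["?", "\t?"])

# Split columns by type: numeric features get mean imputation, nominal features get mode imputation.
numeric_cols = df.select_dtypes(include="number").columns
nominal_cols = [c for c in df.columns if c not in numeric_cols and c != "class"]

df[numeric_cols] = df[numeric_cols].fillna(df[numeric_cols].mean())
for col in nominal_cols:
    df[col] = df[col].fillna(df[col].mode().iloc[0])  # most common value of the feature

print(df.isna().sum().sum())  # expected: 0 missing values remain in the feature columns
```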
Table 2 shows the statistical analysis of the numerical features in the dataset (mean, standard deviation, maximum, and minimum), and Table 3 shows the statistical analysis of the nominal features. Numerical features are values that can be measured and are either discrete or continuous.

Table 2 Statistical analysis of the dataset of numerical features.

| Features | Mean | Standard deviation | Max | Min |
|---|---|---|---|---|
| Age | 51.483 | 17.21 | 90 | 2 |
| Blood glucose random | 148.037 | 76.583 | 490 | 22 |
| Serum creatinine | 3.072 | 4.512 | 76 | 0.4 |
| Blood pressure | 76.469 | 13.756 | 180 | 50 |
| Blood urea | 57.426 | 49.987 | 391 | 1.5 |
| Potassium | 4.627 | 2.92 | 47 | 2.5 |
| Packed cell volume | 38.884 | 8.762 | 54 | 9 |
| Sodium | 137.529 | 9.908 | 163 | 4.5 |
| Hemoglobin | 12.526 | 2.815 | 17.8 | 3.1 |
| White blood cell count | 8406.12 | 2823.35 | 26400 | 2200 |
| Red blood cell count | 4.707 | 0.89 | 8 | 2.1 |

Table 3 Statistical analysis of the dataset of nominal features.

| Features | Label | Count |
|---|---|---|
| Albumin | 0 | 245 |
|  | 1 | 44 |
|  | 2 | 43 |
|  | 3 | 43 |
|  | 4 | 24 |
|  | 5 | 1 |
| Specific gravity | 1.005 | 7 |
|  | 1.01 | 84 |
|  | 1.015 | 75 |
|  | 1.02 | 153 |
|  | 1.025 | 81 |
| Sugar | 0 | 339 |
|  | 1 | 13 |
|  | 2 | 18 |
|  | 3 | 14 |
|  | 4 | 13 |
|  | 5 | 3 |
| Pus cell | Normal | 324 |
|  | Abnormal | 76 |
| Red blood cells | Normal | 353 |
|  | Abnormal | 47 |
| Bacteria | Present | 22 |
|  | Not present | 378 |
| Pus cell clumps | Present | 42 |
|  | Not present | 358 |
| Diabetes mellitus | Yes | 137 |
|  | No | 263 |
| Hypertension | Yes | 147 |
|  | No | 253 |
| Edema | Yes | 76 |
|  | No | 324 |
| Coronary artery disease | Yes | 34 |
|  | No | 366 |
| Anemia | Yes | 60 |
|  | No | 340 |
| Appetite | Good | 318 |
|  | Poor | 82 |

### 2.3. Features Selection

After computing the missing values, the important features having a strong, positive correlation with the diagnostic target must be identified. Extracting the feature vector eliminates features that are useless or irrelevant for prediction and that would otherwise prevent the construction of a robust diagnostic model [25]. In this study, we used the RFE method to extract the most important features for prediction. The Recursive Feature Elimination (RFE) algorithm is very popular due to its ease of use and configuration and its effectiveness in selecting the features of a training dataset that are relevant to predicting the target variable while eliminating weak features. The RFE method selects the most significant features by finding a high correlation between specific features and the target (labels). Table 4 shows the most significant features according to RFE; the albumin feature has the highest correlation (17.99%), followed by the hemoglobin feature (14.34%), the packed cell volume feature (12.91%), and the serum creatinine feature (12.09%). The RFECV plot of the number of features in the dataset against the cross-validated score, which visualizes the selected features, is presented in Figure 3.

Table 4 The importance of predictive variables in diagnosing CKD.

| Features | Priority ratio (%) |
|---|---|
| al | 17.99 |
| hemo | 14.34 |
| pcv | 12.91 |
| sc | 12.09 |
| rc | 7.51 |
| bu | 6.56 |
| sg | 6.08 |
| pcv | 5.60 |
| htn | 4.64 |
| bgr | 3.48 |
| dm | 3.20 |
| pe | 1.25 |
| wc | 1.01 |
| sod | 0.92 |
| rbc | 0.91 |
| bp | 0.39 |
| su | 0.35 |
| appet | 0.28 |
| ba | 0.18 |
| age | 0.18 |
| cad | 0.09 |
| pcc | 0.06 |
| pot | 0.00 |
| ane | 0.00 |

Figure 3 Number of features vs. cross-validated score.

### 2.4. Classification

Data mining techniques have been used to define new and understandable patterns to construct classification templates [26]. Supervised and unsupervised learning techniques require the construction of models based on prior analysis and are used in medical and clinical diagnostics for classification and regression [27]. The four popular machine learning algorithms used are SVM, KNN, decision tree, and random forest, which give the best diagnostic results. Machine learning techniques build predictive/classification models through two stages: the training phase, in which a model is constructed from a set of training data with the expected outputs, and the validation stage, which estimates the quality of the trained models on a validation dataset without the expected outputs. All four algorithms are supervised algorithms that are used to solve classification and regression problems.
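As a rough illustration of the RFE-based feature selection described in Section 2.3, the sketch below ranks the features with scikit-learn's RFE wrapped around a random forest. The DataFrame `df` comes from the earlier imputation sketch; the one-hot encoding, the label encoding, and the choice of keeping 10 features are assumptions for illustration, not the study's tuned settings.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

X = pd.get_dummies(df.drop(columns=["class"]))            # one-hot encode nominal features
y = (df["class"].astype(str).str.strip() == "ckd").astype(int)  # 1 = ckd, 0 = notckd (assumed encoding)

# Recursively eliminate the weakest features until the chosen number remains.
selector = RFE(estimator=RandomForestClassifier(n_estimators=100, random_state=0),
               n_features_to_select=10, step=1)
selector.fit(X, y)

selected = X.columns[selector.support_]
print("Selected features:", list(selected))
print("Ranking (1 = kept):", dict(zip(X.columns, selector.ranking_)))
```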
#### 2.4.1. Support Vector Machine Classifier

The SVM algorithm primarily creates a line to separate the dataset into classes, enabling it to decide which class the test data belongs to. The line, or decision boundary, is called a hyperplane. The algorithm works in two modes: linear and nonlinear. Linear SVM is used when the dataset comprises two classes and is separable. When the dataset is not separable, a nonlinear SVM is applied, where the algorithm converts the original coordinate space into a separable space. There can be multiple hyperplanes, and the best hyperplane is the one with the maximum margin between the data points. The data points closest to the hyperplane are called support vectors.

$$K(X, X') = \exp\left(-\frac{\lVert X - X' \rVert^{2}}{2\sigma^{2}}\right), \tag{1}$$

where $X$ and $X'$ are input data, $\lVert X - X' \rVert^{2}$ denotes the squared distance between the input features, and $\sigma$ is a free parameter. The Radial Basis Function (RBF) kernel was employed for classifying the data.

#### 2.4.2. k-Nearest Neighbour Classifier

The KNN algorithm works on the similarity between new and stored data points (training points) and classifies the new test point into the most similar class among the available classes. The KNN algorithm is nonparametric and is called a lazy learning algorithm, meaning that it does not learn from the training dataset but rather stores it. When classifying new data (test data), it classifies the new point based on the value of k, using the Euclidean distance to measure the distance between the new point and the stored training points. The new point is assigned to the class with the maximum number of neighbors. The Euclidean distance function ($D_i$) is applied to find the nearest neighbors in the feature vector:

$$D_i = \sqrt{(x_1 - x_2)^{2} + (y_1 - y_2)^{2}}, \tag{2}$$

where $x_1$, $x_2$, $y_1$, and $y_2$ are variables of the input data.

#### 2.4.3. Decision Tree Classifier

A decision tree algorithm is based on a tree structure. The root node represents the entire dataset, the internal nodes represent the features, the branches represent the decision rules, and the leaf nodes represent the outcome. A decision tree contains two types of nodes: decision nodes, which have additional branches, and leaf nodes, which do not. Decisions are performed following the given features. The decision tree compares the feature in the root node with the corresponding feature of the record (real dataset), and based on the comparison, the algorithm takes a decision and moves to the next node. The algorithm then compares the features in the second node with those in the subnodes, and the process continues until it reaches a leaf node.
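To make equations (1) and (2) concrete, here is a small NumPy sketch of the RBF kernel and the Euclidean distance; the sample vectors and the value of sigma are made up purely for illustration.

```python
import numpy as np

def rbf_kernel(x, x_prime, sigma=1.0):
    """Equation (1): K(X, X') = exp(-||X - X'||^2 / (2 * sigma^2))."""
    sq_dist = np.sum((x - x_prime) ** 2)
    return np.exp(-sq_dist / (2.0 * sigma ** 2))

def euclidean_distance(p, q):
    """Equation (2): D = sqrt((x1 - x2)^2 + (y1 - y2)^2), generalized to any dimension."""
    return np.sqrt(np.sum((p - q) ** 2))

a = np.array([1.0, 2.0])
b = np.array([4.0, 6.0])
print(euclidean_distance(a, b))      # 5.0
print(rbf_kernel(a, b, sigma=2.0))   # exp(-25 / 8) ≈ 0.0439
```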
#### 2.4.4. Random Forest Classifier

The random forest algorithm works according to the principle of ensemble learning, combining several classifiers to improve model performance and solve a complex problem. As the name suggests, it is a classifier that builds a number of decision trees on subsets of the dataset and averages their outputs to improve the prediction. Instead of relying on a single decision tree, the random forest algorithm takes the prediction from each tree and uses the majority vote to decide the final outcome. The more trees, the higher the accuracy, and this helps prevent overfitting. Since the algorithm contains several decision trees predicting the class of a dataset, some trees may predict the correct output while others may not. Therefore, there are two assumptions for a high-accuracy prediction. First, the feature variables must contain actual values so that the algorithm can predict accurate results instead of guessing. Second, the correlation between the predictions of the individual trees should be very low.

The pseudocode of the random forest is as follows:

(i) Choose the number of trees to generate, e.g., K.
(ii) For each k (1 ≤ k ≤ K):
(iii) generate a feature vector Θk, where Θk represents the input data drawn when creating the tree samples;
(iv) construct the tree h(x, Θk),
(v) employing any decision tree algorithm.
(vi) Each tree casts one vote for class y.
(vii) The class y is assigned by choosing the class with the maximum votes.
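The pseudocode above corresponds roughly to the following sketch, which grows K trees on bootstrap samples and takes a majority vote. This is a simplified illustration, not the authors' implementation; it assumes `X_train`, `y_train`, and `X_test` are NumPy arrays with integer-encoded labels (e.g., 0 = notckd, 1 = ckd).

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def random_forest_predict(X_train, y_train, X_test, K=100, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X_train)
    trees = []
    for _ in range(K):                                       # (i)-(ii) grow K trees
        idx = rng.integers(0, n, size=n)                     # (iii) bootstrap sample Theta_k
        tree = DecisionTreeClassifier(max_features="sqrt")   # (iv)-(v) standard decision tree on the sample
        tree.fit(X_train[idx], y_train[idx])
        trees.append(tree)
    votes = np.array([t.predict(X_test) for t in trees])     # (vi) each tree casts one vote per sample
    # (vii) the majority vote per test sample decides the final class
    return np.array([np.bincount(col.astype(int)).argmax() for col in votes.T])
```

In practice the same behavior is usually obtained directly from scikit-learn's RandomForestClassifier; the explicit loop is shown only to mirror steps (i)–(vii).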
## 3. Experiment Environment Setup

This section presents the setup and results of the developed system.

### 3.1. Environment Setup

The system was developed using the following environment. Table 5 shows the environment setup of the developed system.

Table 5 Environment setup of the proposed system.

| Resource | Details |
|---|---|
| CPU | Core i5 Gen6 |
| RAM | 8 GB |
| GPU | 4 GB |
| Software | Python |

### 3.2. Evaluation Metrics

Evaluation metrics were used to evaluate the performance of the four classifiers. One of these measures is the confusion matrix, from which the accuracy, precision, recall, and F1-score are extracted by counting the correctly classified samples (TP and TN) and the incorrectly classified samples (FP and FN), as shown in the following equations [28]:

$$\text{accuracy} = \frac{TN + TP}{TN + TP + FN + FP} \times 100\%, \tag{3}$$

$$\text{precision} = \frac{TP}{TP + FP} \times 100\%, \tag{4}$$

$$\text{recall} = \frac{TP}{TP + FN} \times 100\%, \tag{5}$$

$$F1\text{-score} = \frac{2 \times \text{precision} \times \text{recall}}{\text{precision} + \text{recall}}, \tag{6}$$

where TN is True Negative, TP is True Positive, FN is False Negative, and FP is False Positive.

### 3.3. Splitting Dataset

The dataset was divided into 75% for training and 25% for testing and validation. Table 6 shows the split of the data.

Table 6 Splitting dataset.

| Dataset | Numbers |
|---|---|
| Training | 300 patients |
| Testing and validation | 100 patients |
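As a small illustration of the 75%/25% split and the metrics in equations (3)–(6), the sketch below fits one classifier and computes the four measures from its confusion matrix. It assumes the encoded feature matrix `X` and label vector `y` from the earlier feature-selection sketch; the random forest and its settings are illustrative defaults.

```python
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix

# 75% training, 25% testing/validation, as described in Section 3.3.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
y_pred = clf.predict(X_test)

tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
accuracy  = (tp + tn) / (tp + tn + fp + fn) * 100            # Eq. (3)
precision = tp / (tp + fp) * 100                             # Eq. (4)
recall    = tp / (tp + fn) * 100                             # Eq. (5)
f1        = 2 * precision * recall / (precision + recall)    # Eq. (6)
print(f"accuracy={accuracy:.2f}%  precision={precision:.2f}%  recall={recall:.2f}%  F1={f1:.2f}%")
```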
## 4. Results

The random forest algorithm classified all positive and negative samples correctly: all 250 positive samples (TP) and all 150 negative samples (TN) were classified correctly. The SVM, KNN, and decision tree algorithms classified the positive samples correctly at rates of 94.74%, 97.37%, and 98.68%, respectively, that is, with error rates of 5.26%, 2.63%, and 1.32%, respectively. Table 7 shows the results obtained from the four classifiers. The random forest algorithm outperformed the rest of the classifiers, reaching an accuracy, precision, recall, and F1-score of 100% for all measures. It was followed by the decision tree algorithm, which reached an accuracy, precision, recall, and F1-score of 99.17%, 100%, 98.68%, and 99.34%, respectively. The KNN algorithm achieved an accuracy, precision, recall, and F1-score of 98.33%, 100%, 97.37%, and 98.67%, respectively. Finally, the SVM algorithm scored an accuracy, precision, recall, and F1-score of 96.67%, 92%, 94.74%, and 97.30%, respectively.

The performance of the proposed system was also evaluated against several previous related studies, as shown in Table 8. It is noted that the existing studies obtained lower accuracy; the accuracies of the existing studies range between 96.8% and 66.3%, while the proposed system obtained an accuracy of 100% with the random forest method. Overall, the proposed system achieves the best results compared with the existing systems.

Table 7 Results of diagnosing CKD using four machine learning algorithms.

| Classifiers | SVM | KNN | Decision tree | Random forest |
|---|---|---|---|---|
| Accuracy % | 96.67 | 98.33 | 99.17 | 100.00 |
| Precision % | 92.00 | 100.00 | 100.00 | 100.00 |
| Recall % | 94.74 | 97.37 | 98.68 | 100.00 |
| F1-score % | 97.30 | 98.67 | 99.34 | 100.00 |

Twenty-four numerical and nominal features were introduced from 400 patients with CKD. Because some tests were omitted for some patients, imputation methods were applied to solve this problem: the mean method was used for missing numerical values, and the mode method was used for missing nominal values. Figure 4 shows the correlation between the different features, both positive and negative. There is a positive correlation, for example, between specific gravity and red blood cell count, packed cell volume, and hemoglobin; between sugar and blood glucose random; between blood urea and serum creatinine; and between hemoglobin and red blood cell count and packed cell volume. There is also a negative correlation, for example, between albumin and blood urea on the one hand and red blood cell count, packed cell volume, and hemoglobin on the other, and between serum creatinine and sodium.

Figure 4 Correlation between different features.
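A correlation map like Figure 4 can be produced from the imputed data along the following lines; this is a minimal sketch assuming the DataFrame `df` from the preprocessing sketch, and the plotting choices (colormap, figure size) are illustrative.

```python
import matplotlib.pyplot as plt

corr = df.select_dtypes(include="number").corr()   # pairwise Pearson correlation of numeric features

fig, ax = plt.subplots(figsize=(8, 7))
im = ax.imshow(corr, cmap="coolwarm", vmin=-1, vmax=1)
ax.set_xticks(range(len(corr.columns)), corr.columns, rotation=90)
ax.set_yticks(range(len(corr.columns)), corr.columns)
fig.colorbar(im, ax=ax, label="correlation")
plt.tight_layout()
plt.show()
```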
### 4.1. Results and Discussion

The dataset was randomly divided into 75% for training and 25% for testing and validation. The Recursive Feature Elimination method was applied to eliminate irrelevant features and select the most relevant subset. The selected features were then processed by the classifiers for the diagnosis of CKD. A comparative analysis between the proposed system and existing approaches is presented in Table 8; the proposed system achieves promising results. We used the RFE algorithm to find the relationship between each feature and the target, to prioritize the features, and to give each feature a percentage based on its correlation with the target feature. Figure 5 displays the performance of the proposed system against existing systems: the accuracy of the existing systems lies between 95.84% and 66.3%, while the accuracy of our systems lies between 97.3% (SVM) and 100% (random forest).

Table 8 Comparison of the performance of our proposed system with previous studies.

| Previous studies | Accuracy % | Precision % | Recall % | F1-score % |
|---|---|---|---|---|
| Hore et al. [29] | 92.54 | 85.71 | 96 | 90.56 |
| Vasquez-Morales et al. [11] | 92 | 93 | 90 | 91 |
| Rady and Anwar [13] | 95.84 | 84.06 | 93.55 | 88.55 |
| Elhoseny et al. [19] | 85 | 88 | 88 | — |
| Ogunleye and Wang [30] | 96.8 | 87 | 93 | — |
| Khan et al. [31] | 95.75 | 96.2 | 95.8 | 95.8 |
| Chittora et al. [32] | 90.73 | 83.34 | 93 | 88.05 |
| Jongbo et al. [33] | 89.2 | 97.72 | 97.8 | — |
| Harimoorthy and Thangavelu [34] | 66.3 | 65.9 | 65.9 | — |
| Proposed model (random forest) | 100 | 100 | 100 | 100 |
| Proposed model (decision tree) | 99.34 | 98.68 | 100 | 99.17 |
| Proposed model (KNN) | 98.33 | 100 | 97.37 | 98.67 |
| Proposed model (SVM) | 97.3 | 94.74 | 92 | 96.67 |

Figure 5 Comparison of the systems' performance on diagnostic accuracy in the two datasets.
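For completeness, a hedched way to train and score the four classifiers compared in Tables 7 and 8 on the same split is sketched below; it reuses `X_train`, `X_test`, `y_train`, and `y_test` from the earlier splitting sketch, and the hyperparameters are illustrative defaults, not the authors' tuned settings.

```python
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

models = {
    "SVM (RBF)": SVC(kernel="rbf"),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Random forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

for name, model in models.items():
    y_pred = model.fit(X_train, y_train).predict(X_test)
    print(f"{name:14s} "
          f"acc={accuracy_score(y_test, y_pred):.4f}  "
          f"prec={precision_score(y_test, y_pred):.4f}  "
          f"rec={recall_score(y_test, y_pred):.4f}  "
          f"f1={f1_score(y_test, y_pred):.4f}")
```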
## 5. Conclusion

This study provided insight into the diagnosis of CKD so that patients can tackle their condition and receive treatment in the early stages of the disease. The dataset was collected from 400 patients and contains 24 features. The dataset was divided into 75% training and 25% testing and validation. The dataset was processed to remove outliers and to replace missing numerical and nominal values using the mean and mode statistical measures, respectively. The RFE algorithm was applied to select the features most strongly representative of CKD. The selected features were fed into the classification algorithms SVM, KNN, decision tree, and random forest. The parameters of all classifiers were tuned to perform the best classification, so all algorithms reached promising results. The random forest algorithm outperformed all other algorithms, achieving an accuracy, precision, recall, and F1-score of 100% for all measures. The system was examined and evaluated through statistical analysis, and the SVM, KNN, and decision tree algorithms reached accuracies of 96.67%, 98.33%, and 99.17%, respectively.

--- *Source: 1004767-2021-06-09.xml*
1004767-2021-06-09_1004767-2021-06-09.md
46,360
Diagnosis of Chronic Kidney Disease Using Effective Classification Algorithms and Recursive Feature Elimination Techniques
Ebrahime Mohammed Senan; Mosleh Hmoud Al-Adhaileh; Fawaz Waselallah Alsaade; Theyazn H. H. Aldhyani; Ahmed Abdullah Alqarni; Nizar Alsharif; M. Irfan Uddin; Ahmed H. Alahmadi; Mukti E Jadhav; Mohammed Y. Alzahrani
Journal of Healthcare Engineering (2021)
Medical & Health Sciences
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2021/1004767
1004767-2021-06-09.xml
--- ## Abstract Chronic kidney disease (CKD) is among the top 20 causes of death worldwide and affects approximately 10% of the world adult population. CKD is a disorder that disrupts normal kidney function. Due to the increasing number of people with CKD, effective prediction measures for the early diagnosis of CKD are required. The novelty of this study lies in developing the diagnosis system to detect chronic kidney diseases. This study assists experts in exploring preventive measures for CKD through early diagnosis using machine learning techniques. This study focused on evaluating a dataset collected from 400 patients containing 24 features. The mean and mode statistical analysis methods were used to replace the missing numerical and the nominal values. To choose the most important features, Recursive Feature Elimination (RFE) was applied. Four classification algorithms applied in this study were support vector machine (SVM),k-nearest neighbors (KNN), decision tree, and random forest. All the classification algorithms achieved promising performance. The random forest algorithm outperformed all other applied algorithms, reaching an accuracy, precision, recall, and F1-score of 100% for all measures. CKD is a serious life-threatening disease, with high rates of morbidity and mortality. Therefore, artificial intelligence techniques are of great importance in the early detection of CKD. These techniques are supportive of experts and doctors in early diagnosis to avoid developing kidney failure. --- ## Body ## 1. Introduction Chronic kidney disease (CKD) has received much attention due to its high mortality rate. Chronic diseases have become a concern threatening developing countries, according to the World Health Organization (WHO) [1]. CKD is a kidney disorder treatable in its early stages, but it causes kidney failure in its late stages. In 2016, chronic kidney disease caused the death of 753 million people worldwide, where the number of males died was 336 million, while the number of females died was 417 million [2]. It is called “chronic” disease because the kidney disease begins gradually and lasts for a long time, which affects the functioning of the urinary system. The accumulation of waste products in the blood leads to the emergence of other health problems, which are associated with several symptoms such as high and low blood pressure, diabetes, nerve damage, and bone problems, which lead to cardiovascular disease. Risk factors for CKD patients include diabetes, blood pressure, and cardiovascular disease (CVD) [3]. CKD patients suffer from side effects, especially in the late stages, which damage the nervous and immune system. In developing countries, patients may reach the late stages, so they must undergo dialysis or kidney transplantation. Medical experts determine kidney disease through glomerular filtration rate (GFR), which describes kidney function. GFR is based on information such as age, blood test, gender, and other factors suffered by the patient [4]. Regarding the GFR value, doctors can classify CKD into five stages. Table 1 shows the different stages of kidney disease development with GFR levels.Table 1 The stages of development of CKD. 
StageDescriptionGlomerular filtration rate (GFR) (mL/min/1.73 m2)Treatment stage1Kidney function is normal≥90Observation, blood pressure control2Kidney damage is mild60–89Observation, blood pressure control and risk factors3Kidney damage is moderate30–59Observation, blood pressure control and risk factors4Kidney damage is severe15–29Planning for end-stage renal failure5Established kidney failure≤ 15Treatment choicesEarly diagnosis and treatment of chronic kidney disease will prevent its progression to kidney failure. The best way to treat chronic kidney disease is to diagnose it in the early stages, but discovering it in its late stages will lead to kidney failure, which requires continuous dialysis or kidney transplantation to maintain a normal life. In the medical diagnosis of chronic kidney disease, two medical tests are used to detect CKD, which are by a blood test to check the glomerular filtrate or by a urine test to check albumin. Due to the increasing number of chronic kidney patients, the scarcity of specialist physicians, and the high costs of diagnosis and treatment, especially in developing countries, there is a need for computer-assisted diagnostics to help physicians and radiologists in supporting their diagnostic decisions. Artificial intelligence techniques have played a role in the health sector and medical image processing, where machine learning and deep learning techniques have been applied in the processes of disease prediction and disease diagnosis in the early stages. Artificial intelligence (ANN) approaches have played a basic role in the early diagnosis of CKD. Machine learning algorithms are used for the early diagnosis of CKD. The ANN and SVM algorithms are among the most widely used technologies. These technologies have great advantages in diagnosing several fields, including medical diagnosis. The ANN algorithm works like human neurons, which can learn how to operate once properly trained, and its ability to generalize and solve future problems (test data) [5]. However, SVM algorithm depends on experience and examples to assign labels to the class. SVM algorithm basically separates the data by a line that achieves the maximum distance between the class data [6]. Many factors affect kidney performance, which induce CKD, like diabetes, blood pressure, heart disease, some kind of food, and family history. Figure 1 presents some factors affecting chronic kidney disease.Figure 1 Factors affecting chronic kidney disease.Pujari et al. [7] presented a system for detecting the stages of CKD through ultrasonography (USG) images. The algorithm works to identify fibrotic cases during different periods. Ahmed et al. [8] proposed a fuzzy expert system to determine whether the urinary system is good or bad. Khamparia et al. [9] studied a stacked autoencoder model to extract the characteristics of CKD and used Softmax to classify the final class. Kim et al. [10] proposed a genetic algorithm (GA) based on neural networks in which the weight vectors were optimized by GA to train NN. The system surpasses traditional neural networks for CKD diagnosis. Vasquez-Morales et al. [11] presented a model based on neural networks to predict whether a person is at risk of developing CKD. Almansour et al. [12] diagnosed a CKD dataset using ANN and SVM algorithms. ANN and SVM reached an accuracy of 99.75% and 97.75%, respectively. 
Rady and Anwar [13] applied probabilistic neural networks (PNN), multilayer perceptron (MLP), SVM, and radial basis function (RBF) algorithms to diagnose CKD dataset. The PNN algorithm outperformed the MLP, SVM, and RBF algorithms. Kunwar et al. [14] applied two algorithms—naive Bayes and artificial neural networks (ANN)—to diagnose a UCI dataset for CKD. Naive Bayes algorithm outperformed ANN. The accuracy of the naive Bayes algorithm was 100%, while the ANN accuracy was 72.73%. Wibawa et al. [15] applied correlation-based feature selection (CFS) for feature selection, and AdaBoost for ensemble learning was applied to improve CKD diagnosis. The KNN, naive Bayes, and SVM algorithms were applied for CKD dataset diagnosis. Their system achieved the best accuracy when implementing a hybrid between KNN with CFS and AdaBoost by 98.1%. Avci et al. [16] used WEKA software to diagnose the UCI dataset for CKD. The dataset was evaluated using NB, K-Star, SVM, and J48 classifiers. The J48 algorithm outperformed the rest of the algorithms with an accuracy of 99%. Chiu et al. [17] built intelligence models using neural network algorithms to classify CKD. The models included a back-propagation network (BPN), generalized feed forward neural networks (GRNN), and modular neural network (MNN) for the early detection of CKD. The authors proposed hybrid models between the GA and the three mentioned models. Shrivas et al. [18] applied the Union Based Feature Selection Technique (UBFST) to choose the most important features. The selected features were diagnosed by several techniques of machine learning. The aim of the study was to reduce diagnostic time and obtain high diagnostic accuracy. Kunwar et al. [14] used Artificial Neural Network (ANN) and Naive Bayes to evaluate a UCI dataset of 400 patients. The experiment was implemented with RapidMiner tool. Naive Bayes reached a diagnostic accuracy of 100% better than ANN, which reached a diagnostic accuracy of 72.73%. Elhoseny et al. [19] presented a system for healthcare to diagnose CKD through Density Based Feature Selection (DFS) and also a method of Ant Colony Optimization. DFS removes unrelated features that have weak association with the target feature. Abdelaziz et al. [20] presented healthcare service (HCS) system, applying Parallel Particle Swarm Optimization (PPSO), to optimize selection of Virtual Machines (VMs). Then, a new model with linear regression (LR) and neural network (NN) was applied to evaluate the performance of their VMs for diagnosing CKD. Xiong et al. [21] proposed the Las Vegas Wrapper Feature Selection method (LVW-FS) to extract the most important vital features. Ravizza et al. [22] applied a model to test diabetes related to chronic kidney disease. To reduce the dimensions of high data, the Chi-Square statistical method was applied. The model predicts the state of the kidney through some features such as glucose, age, rate of albumin, etc. Sara et al. [23] applied two methods, namely, Hybrid Wrapper and Filter-Based FS (HWFFS) and Feature Selection (FS), to reduce the dimensions of the dataset and select the features associated with CKD strongly. The features extracted from the two methods were then combined, and the hybrid features were classified by using SVM classifier.The contribution of the current study lies in using Recursive Feature Elimination (RFE) technique with machine learning algorithms to develop system for detecting chronic kidney diseases. 
The contributions of this paper are summarized as follows:(i) We used integrated model to select the most significant representative features by using the Recursive Feature Elimination (RFE) algorithm(ii) Four machine learning algorithms, namely, SVM, KNN, Decision Tree, and Random Forest, were used to diagnose CKD with promising accuracy(iii) Highly efficient machine learning techniques for the diagnosis of chronic kidney disease can be popularized with the help of expert physicians ## 2. Materials and Methods A series of experiments were conducted using machine learning algorithms: SVM, KNN, decision tree, and random forest to evaluate CKD dataset. Figure2 shows the general structure of CKD diagnosis in this paper. In preprocessing, the mean method was used to compute the missing numerical values, and the mode method was used to compute the missing nominal values. The features of importance associated with the features of importance for CKD diagnosis were selected using the RFE algorithm. These selected features were fed into classifiers for disease diagnosis. In this study, four classifiers were applied to diagnose CKD: SVM, KNN, decision tree, and random forest. All classifiers showed promising results for diagnosing a dataset into CKD or a normal kidney.Figure 2 The proposed system for the diagnosis of CKD. ### 2.1. Dataset The CKD dataset was collected from 400 patients from the University of California, Irvine Machine Learning Repository [24]. The dataset comprises 24 features divided into 11 numeric features and 13 categorical features, in addition to the class features, such as “ckd” and “notckd” for classification. Features include age, blood pressure, specific gravity, albumin, sugar, red blood cells, pus cell, pus cell clumps, bacteria, blood glucose random, blood urea, serum creatinine, sodium, potassium, hemoglobin, packed cell volume, white blood cell count, red blood cell count, hypertension, diabetes mellitus, coronary artery disease, appetite, pedal edema, and anemia. The diagnostic class contains two values: ckd and notckd. All features contained missing values except for the diagnostic feature. The dataset is unbalanced because it contains 250 cases of “ckd” class by 62.5% and 150 cases of “notckd” by 37.5%. ### 2.2. Preprocessing The dataset contained outliers and noise, so it must be cleaned up in a preprocessing stage. The preprocessing stage included estimating missing values and eliminating noise, such as outliers, normalization, and checking of unbalanced data. Some measurements may be missed when patients are undergoing tests, thereby causing missing values. The dataset contained 158 completed instances, and the remaining instances had missing values. The simplest method to handle missing values is to ignore the record, but it is inappropriate with small dataset. We can use algorithms to compute missing values instead of removing records. The missing values for numerical features can be computed through one of the statistical measures, such as mean, median, and standard deviation. However, the missing values of nominal features can be computed using the mode method, in which the missing value is replaced by the most common value of the features. In this study, the missing numerical features were replaced by the mean method, and a mode method was applied to replace the missing nominal features. Table2 shows the statistical analysis of the dataset, such as mean and standard deviation; max and min were introduced for the numerical features in the dataset. 
Table 3 shows statistical analysis of numerical feature. While numerical features are the values that can be measured and have two types, either separate or continuous.Table 2 Statistical analysis of the dataset of numerical features. FeaturesMeanStandard deviationMaxMinAge51.48317.21902Blood glucose random148.03776.58349022Serum creatinine3.0724.512760.4Blood pressure76.46913.75618050Blood urea57.42649.9873911.5Potassium4.6272.92472.5Packed cell volume38.8848.762549Sodium137.5299.9081634.5Hemoglobin12.5262.81517.83.1White blood cell count8406.122823.35264002200Red blood cell count4.7070.8982.1Table 3 Statistical analysis of the dataset of nominal features. FeaturesLabelCountAlbumin024514424334342451Specific gravity1.00571.01841.015751.021531.02581Sugar033911321831441353Pus cellNormal324Abnormal76Red blood cellsNormal353Abnormal47BacteriaPresent22Not present378Pus cell clumpsPresent42Not present358Diabetes mellitusYes137No263HypertensionYes147No253EdemaYes76No324Coronary artery diseaseYes34No366AnemiaYes60No340AppetiteGood318Poor82 ### 2.3. Features Selection After computing the missing values, identifying the important features having a strong and positive correlation with features of importance for disease diagnosis is required. Extracting the vector features eliminates useless features for prediction and those that are irrelevant, which prevents the construction of a robust diagnostic model [25]. In this study, we used the RFE method to extract the most important features of a prediction. The Recursive Feature Elimination (RFE) algorithm is very popular due to its ease of use and configurations and its effectiveness in selecting features in training datasets relevant to predicting target variables and eliminating weak features. The RFE method is used to select the most significant features by finding high correlation between specific features and target (labels). Table 4 shows the most significant features according to RFE; it is noted that albumin feature has highest correction (17.99%), featured by 14.34%, then the packed cell volume feature by 12.91%, and the serum creatinine feature by 12.09%. RFECV plots the number of features in the dataset along with a cross-validated score and visualizes the selected features is presented in Figure 3.Table 4 The importance of predictive variables in diagnosing CKD. FeaturesPriority ratio (%)al17.99hemo14.34pcv12.91sc12.09rc7.51bu6.56sg6.08pcv5.60htn4.64bgr3.48dm3.20pe1.25wc1.01sod0.92rbc0.91bp0.39su0.35appet0.28ba0.18age0.18cad0.09pcc0.06pot0.00ane0.00Figure 3 Number of features vs. cross-validated score. ### 2.4. Classification Data mining techniques have been used to define new and understandable patterns to construct classification templates [26]. Supervised and unsupervised learning techniques require the construction of models based on prior analysis and are used in medical and clinical diagnostics for classification and regression [27]. Four popular machine learning algorithms used are SVM, KNN, decision tree, and random forest, which give the best diagnostic results. Machine learning techniques work to build predictive/classification models through two stages: the training phase, in which a model is constructed from a set of training data with the expected outputs, and the validation stage, which estimates the quality of the trained models from the validation dataset without the expected output. All algorithms are supervised algorithms that are used to solve classification and regression problems. #### 2.4.1. 
### 2.4. Classification

Data mining techniques have been used to define new and understandable patterns to construct classification templates [26]. Supervised and unsupervised learning techniques require the construction of models based on prior analysis and are used in medical and clinical diagnostics for classification and regression [27]. The four popular machine learning algorithms used here are SVM, KNN, decision tree, and random forest, which give the best diagnostic results. Machine learning techniques build predictive/classification models in two stages: the training phase, in which a model is constructed from a set of training data with the expected outputs, and the validation stage, which estimates the quality of the trained model on a validation dataset without the expected output. All four algorithms are supervised algorithms that can be used to solve classification and regression problems.

#### 2.4.1. Support Vector Machine Classifier

The SVM algorithm primarily creates a line to separate the dataset into classes, enabling it to decide to which class a test sample belongs. The line or decision boundary is called a hyperplane. The algorithm comes in two variants: linear and nonlinear. Linear SVM is used when the dataset comprises two classes and is separable. When the dataset is not linearly separable, a nonlinear SVM is applied, where the algorithm maps the original coordinate space into a separable space. There can be multiple hyperplanes, and the best hyperplane is the one with the maximum margin between the data points. The data points closest to the hyperplane are called support vectors. The Radial Basis Function (RBF) kernel was employed for the classification data:

$$K(X, X') = \exp\left(-\frac{\lVert X - X' \rVert^{2}}{2\sigma^{2}}\right), \tag{1}$$

where $X$ and $X'$ are input data, $\lVert X - X' \rVert^{2}$ denotes the squared Euclidean distance between the input features, and $\sigma$ is a free parameter.

#### 2.4.2. k-Nearest Neighbour Classifier

The KNN algorithm works on the similarity between new and stored data points (training points) and classifies a new test point into the most similar class among the available classes. The KNN algorithm is nonparametric and is called a lazy learning algorithm, meaning that it does not learn from the training dataset but rather stores it. When classifying a new data point (test data), it classifies the new data based on the value of k, using the Euclidean distance to measure the distance between the new point and the stored training points. The new point is assigned to the class with the maximum number of neighbors. The Euclidean distance function ($D_i$) was applied to find the nearest neighbors in the feature vector:

$$D_i = \sqrt{(x_1 - x_2)^{2} + (y_1 - y_2)^{2}}, \tag{2}$$

where $x_1$, $x_2$, $y_1$, and $y_2$ are variables for the input data.

#### 2.4.3. Decision Tree Classifier

A decision tree algorithm is based on a tree structure. The root node represents the entire dataset, the internal nodes represent the features, the branches represent the decision rules, and the leaf nodes represent the outcome. A decision tree contains two types of nodes: decision nodes, which have additional branches, and leaf nodes, which do not. Decisions are made according to the given features. The decision tree compares the feature in the root node with the feature values of the record (real dataset), and based on the comparison, the algorithm makes a decision and moves to the next node. The algorithm then compares the features in the second node with those in the subnodes, and the process continues until a leaf node is reached.

#### 2.4.4. Random Forest Classifier

The random forest algorithm works according to the principle of ensemble learning, combining several classifiers to improve model performance and solve a complex problem. As its name suggests, it is a classifier that contains a number of decision trees built on subsets of the dataset, and an average is taken to improve the prediction. Instead of relying on a single decision tree, the random forest algorithm takes predictions from each decision tree and relies on the majority vote to predict the final outcome. The greater the number of trees, the higher the accuracy, and this prevents the overfitting problem. Since the algorithm contains several decision trees to predict the class of a dataset, some trees may predict the correct output while others may not. Therefore, there are two assumptions for the high accuracy of a prediction.
First, the feature variables must contain actual values so that the algorithm can predict accurate results instead of guessing. Second, the correlation between the predictions of the individual trees should be very low.

The pseudocode of the random forest algorithm is as follows (a concrete scikit-learn sketch follows this list):

(i) Choose the number of trees to generate, e.g., K.
(ii) While k (1 < k < K):
(iii) Generate a feature vector ΘK, where ΘK represents the input data sampled for building the corresponding tree.
(iv) Construct the tree h(x, ΘK),
(v) employing any decision tree algorithm.
(vi) Each tree casts one vote for class y.
(vii) The class y is assigned by choosing the class with the maximum number of votes.
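The following minimal sketch shows how the four classifiers of Section 2.4 can be instantiated and trained with scikit-learn, continuing the earlier sketches; the hyperparameters (RBF kernel, k = 5, 100 trees) and the 75/25 split are illustrative assumptions rather than the paper's tuned settings:

```python
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

# `X`, `y`, and `selected` come from the RFE sketch; the 75/25 split mirrors Section 3.3.
X_train, X_test, y_train, y_test = train_test_split(
    X[selected], y, test_size=0.25, stratify=y, random_state=0)

models = {
    "SVM": SVC(kernel="rbf"),                                  # RBF kernel, equation (1)
    "KNN": KNeighborsClassifier(n_neighbors=5),                # Euclidean distance, equation (2)
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Random forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: test accuracy = {model.score(X_test, y_test):.4f}")
```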
## 3. Experiment Environment Setup

This section describes the experimental setup of the developed system.

### 3.1. Environment Setup

The system was developed using the following environment. Table 5 shows the environment setup of the developed system.

Table 5 Environment setup of the proposed system.

| Resource | Details |
| --- | --- |
| CPU | Core i5 Gen6 |
| RAM | 8 GB |
| GPU | 4 GB |
| Software | Python |

### 3.2. Evaluation Metrics

Evaluation metrics were used to evaluate the performance of the four classifiers. One of these measures is the confusion matrix, from which the accuracy, precision, recall, and F1-score are derived by counting the correctly classified samples (TP and TN) and the incorrectly classified samples (FP and FN), as shown in the following equations [28]:

$$\text{accuracy} = \frac{TN + TP}{TN + TP + FN + FP} \times 100\%, \tag{3}$$

$$\text{precision} = \frac{TP}{TP + FP} \times 100\%, \tag{4}$$

$$\text{recall} = \frac{TP}{TP + FN} \times 100\%, \tag{5}$$

$$\text{F1-score} = \frac{2 \times \text{precision} \times \text{recall}}{\text{precision} + \text{recall}} \times 100\%, \tag{6}$$

where TN is True Negative, TP is True Positive, FN is False Negative, and FP is False Positive.

### 3.3. Splitting Dataset

The dataset was divided into 75% for training and 25% for testing and validation. Table 6 shows the split of the data.

Table 6 Splitting dataset.

| Dataset | Numbers |
| --- | --- |
| Training | 300 patients |
| Testing and validation | 100 patients |
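As a hedged illustration of how the metrics in equations (3)–(6) can be computed for each trained model (continuing the earlier sketches and using scikit-learn's metric functions rather than hand-written formulas):

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, confusion_matrix)

# `models`, `X_test`, and `y_test` come from the classifier sketch above.
for name, model in models.items():
    y_pred = model.predict(X_test)
    tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
    print(f"{name}: TP={tp}, TN={tn}, FP={fp}, FN={fn}")
    print(f"  accuracy  = {accuracy_score(y_test, y_pred) * 100:.2f}%")
    print(f"  precision = {precision_score(y_test, y_pred) * 100:.2f}%")
    print(f"  recall    = {recall_score(y_test, y_pred) * 100:.2f}%")
    print(f"  F1-score  = {f1_score(y_test, y_pred) * 100:.2f}%")
```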
## 4. Results

The random forest algorithm classified all positive and negative samples correctly: all 250 positive samples (TP) and all 150 negative samples (TN) were classified correctly. The SVM, KNN, and Decision Tree algorithms classified the positive samples (TP) at rates of 94.74%, 97.37%, and 98.68%, respectively, that is, with an error (FN) of 5.26%, 2.63%, and 1.32%, respectively. Table 7 shows the results obtained from the four classifiers. The random forest algorithm outperformed the rest of the classifiers, reaching an accuracy, precision, recall, and F1-score of 100% for all measures. It was followed by the decision tree algorithm, which reached an accuracy, precision, recall, and F1-score of 99.17%, 100%, 98.68%, and 99.34%, respectively. Then came the KNN algorithm, with an accuracy, precision, recall, and F1-score of 98.33%, 100%, 97.37%, and 98.67%, respectively. Finally, the SVM algorithm scored 96.67%, 92%, 94.74%, and 97.30% for accuracy, precision, recall, and F1-score, respectively.

The performance of the proposed system was also evaluated against several previous related studies, as shown in Table 8. It is noted that the existing studies obtained lower accuracy: the accuracy of the existing studies ranges between 96.8% and 66.3%, while the proposed system obtained an accuracy of 100% with the random forest method. Thus, the proposed system achieves optimal results compared with the existing systems.

Table 7 Results of diagnosing CKD using four machine learning algorithms.

| Classifiers | SVM | KNN | Decision tree | Random forest |
| --- | --- | --- | --- | --- |
| Accuracy % | 96.67 | 98.33 | 99.17 | 100.00 |
| Precision % | 92.00 | 100.00 | 100.00 | 100.00 |
| Recall % | 94.74 | 97.37 | 98.68 | 100.00 |
| F1-score % | 97.30 | 98.67 | 99.34 | 100.00 |

Twenty-four numerical and nominal features were collected from 400 patients with CKD. Because some tests were omitted for some patients, computation methods were applied to solve this problem: the mean method was used for missing numerical values, and the mode method was used for missing nominal values. Figure 4 shows the correlation between the different features, both positive and negative. There is a positive correlation, for example, between specific gravity and red blood cell count, packed cell volume, and hemoglobin; between sugar and blood glucose random; between blood urea and serum creatinine; and between hemoglobin and red blood cell count and packed cell volume. There is also a negative correlation, for example, between albumin and blood urea on the one hand and red blood cell count, packed cell volume, and hemoglobin on the other, and between serum creatinine and sodium.

Figure 4 Correlation between different features.

### 4.1. Results and Discussion

The dataset is randomly divided into 75% for training and 25% for testing and validation.
The Recursive Feature Elimination method was used to eliminate irrelevant features and select the relevant feature subset. The selected features were then processed by the classifiers for the diagnosis of CKD. A comparative analysis between the proposed system and existing approaches is presented in Table 8. It is noted that the proposed system achieved promising results. We used the RFE algorithm to find the strongest relationships between each feature and the target, prioritizing the features and assigning each feature a percentage based on its correlation with the target feature. Figure 5 displays the performance of the proposed system against existing systems: the accuracy of the existing systems lies between 95.84% and 66.3%, while the accuracy of our models lies between 100% with random forest and 97.3% with SVM.

Table 8 Comparison of the performance of our proposed system with previous studies. Previous studiesAccuracy %Precision %Recall %F1-score %Hore et al. [29]92.5485.719690.56Vasquez-Morales et al. [11]92939091Rady and Anwar [13]95.8484.0693.5588.55Elhoseny et al. [19]858888Ogunleye and Wang [30]96.88793Khan et al. [31]95.7596.295.895.8Chittora et al. [32]90.7383.349388.05Jongbo et al. [33]89.297.7297.8Harimoorthy and Thangavelu [34]66.365.965.9Proposed model (random forest)100100100100Proposed model (decision tree)99.3498.6810099.17Proposed model (KNN)98.3310097.3798.67Proposed model (SVM)97.394.749296.67

Figure 5 Comparison of the system's performance on diagnostic accuracy in the two datasets.

## 5. Conclusion

This study provided insight into the diagnosis of CKD so that patients can tackle their condition and receive treatment in the early stages of the disease. The dataset was collected from 400 patients and contains 24 features. The dataset was divided into 75% training and 25% testing and validation.
The dataset was processed to remove outliers and to replace missing numerical and nominal values using the mean and mode statistical measures, respectively. The RFE algorithm was applied to select the most strongly representative features of CKD. The selected features were fed into the classification algorithms: SVM, KNN, decision tree, and random forest. The parameters of all classifiers were tuned to perform the best classification, so all algorithms reached promising results. The random forest algorithm outperformed all other algorithms, achieving an accuracy, precision, recall, and F1-score of 100% for all measures. The system was examined and evaluated through statistical analysis of the classification results, and the SVM, KNN, and decision tree algorithms achieved accuracies of 96.67%, 98.33%, and 99.17%, respectively. --- *Source: 1004767-2021-06-09.xml*
# Effect of Oxytocin Combined with Different Volume of Water Sac in High-Risk Term Pregnancies

**Authors:** Hanna Mi; Na Sun
**Journal:** Evidence-Based Complementary and Alternative Medicine (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1004816

---

## Abstract

Objective. The study estimated the impacts of water sacs of different capacities combined with oxytocin (OXT) on pregnant women with high-risk term pregnancies. Methods. Women with high-risk term pregnancies who received OXT were enrolled for labor induction using a 30 mL (group A), 80 mL (group B), or 150 mL (group C) water sac, followed by comparisons regarding the success rate of labor induction, cesarean section rate, duration from induced labor to labor, duration of the first stage of labor, postpartum blood loss, the incidence of adverse reactions, and the assessment of cervical ripening using the Bishop Score. In addition, neonatal weight, Apgar score, psychological status, and patient satisfaction were compared among these groups. Results. Compared with group A, the success rate of induced labor was higher in groups B and C, with a lower cesarean section rate and a shorter duration from induced labor to labor, while the duration of the first stage of labor in group B was the shortest among the three groups. The amount of postpartum hemorrhage decreased stepwise from groups A to B to C. In addition, groups A and B showed a reduced incidence of adverse reactions compared with group C, whereas the highest level of cervical ripening was revealed in group C and the highest patient satisfaction in group B. Conclusion. The usage of an 80 mL water sac combined with OXT in high-risk term pregnancy has ideal induction effects, which can guarantee maternal cervical maturity and shorten the time of the first stage of labor.

---

## Body

## 1. Introduction

Pregnant women suffer from various acute and chronic diseases and pregnancy complications, as well as adverse environmental and social factors, which can lead to fetal death, intrauterine growth retardation, congenital malformation, premature birth, neonatal diseases, and so on; such a pregnancy course is called a high-risk pregnancy [1]. In recent years, high-risk pregnancy has become increasingly common in the clinic, accounting for 8–12% of all pregnancies, an approximately four-fold increase compared to 2010 [2]. High-risk pregnancy not only poses a great threat to the newborn but may also lead to maternal death due to shock and massive hemorrhage during delivery [3]. Therefore, in the clinic, it is necessary to focus on monitoring and additional targeted treatment for such pregnant women, so as to ensure maternal and child safety.

In clinical practice, induced uterine contractions are often used to help the fetus escape from an adverse intrauterine environment and reduce the occurrence of adverse pregnancy outcomes [4]. For normal pregnant women, labor can be induced by a variety of methods such as low-dose oxytocin (OXT), prostaglandins, mifepristone, and pulse therapy [5]. However, due to the limitations of their own diseases, the only drug for induction in high-risk pregnant women is low-dose OXT—a therapy that can promote uterine contractions but has no significant effect on cervical dilation [6,7].
Therefore, there is an urgent need to find a more effective way to provide more effective protection for such pregnant women.Water sac is an emerging technology in recent years, which can promote the softening and maturity of the cervix [8]. The placement of a water sac in the internal opening of the cervix can help artificially peel off the placenta and mechanically compress the cervix [9]. At present, some studies believe that water sac induction is safe for women with high-risk pregnancy, and it is expected to be a breakthrough to solve the problem of induced labor in high-risk pregnancy [10,11]. However, some other evidence has pointed out that the use of water sac may disrupt the normal state of the cervix of the mother [12]. Due to the current lack of authoritative and unified application guides, the use of water sac in high-risk pregnancies remains controversial.This study compares the impacts of water sac of different capacities combined with OXT on postpartum cervical status of high-risk term pregnant women, aiming at providing more effective protection for maternal and child life safety and providing a more comprehensive reference for the subsequent application of water-sac induction of labor. ## 2. Data and Methods ### 2.1. Research Subjects A total of 165 cases of high-risk term pregnancies who visited our hospital between January 2019 and March 2020 were selected as the research subjects for retrospective analysis. Among them, 54 cases were induced by 30 mL water sac (group A), 61 cases by 80 mL (group B), and 50 cases by 150 mL (group C). This study was carried out in strict accordance with the Declaration of Helsinki, and all the research subjects provided informed consent. ### 2.2. Eligibility Criteria Patients who were in line with the diagnostic criteria of high-risk pregnancy [13] and water-sac-induced labor indications [14] with singleton pregnancy and fetal presentation were enrolled. In contrast, pregnant women with premature rupture of membranes, vaginitis, liver and kidney dysfunction, or fetal cardiac distress that need to stop pregnancy immediately were ruled out. ### 2.3. Treatment All pregnant women received routine examinations, including B-ultrasound and fetal heart monitoring. At the same time, low-dose OXT (H34022979, An’ hui Hongye Pharmaceutical Co., Ltd., China) was given as follows to induce labor. Day 1: OXT 2.5U was added into 500 mL of 0.5% glucose injection and then intravenously dripped. The infusion rate was appropriately adjusted according to the pregnant women’s contractions to keep the contractions effective. Day 2: the parturient with a Bishop Score (BS) less than 6, which indicated an unripe cervix of patients [15], was given an OXT drip (same dose as day 1). Day 3: pregnant women with BS > 6 were subjected to artificial rupture of membranes, and those with BS < 6 were given OXT (same as day 1). No delivery after 3 days meant failure of induced labor and cesarean section was used instead. On this basis, patients in groups A, B, and C were induced by 30, 80, and 150 mL water sac, respectively, with the procedures as follows. First, in a lithotomy position, the vulva of the pregnant woman was routinely disinfected. The cervix was then exposed and disinfected with iodophor cotton balls. The front end of the water sac was inserted into the cervical canal, and 30, 80, or 150 mL of normal saline was injected slowly. The tail end of the water sac tube was ensured to be above the internal cervical orifice and the catheter was fixed on the inner thigh. 
After placing the water sac, the fetal heart rate and the maternal physical symptoms were closely observed. The water sac was removed 20 h later, and an artificial rupture of membranes was performed based on the BS of the parturient.

### 2.4. Determination of Labor Induction Efficacy

Regular contractions lasting more than 30 seconds within 12 hours after treatment, with the BS increased by > 3 points, were considered remarkably effective. Effective was indicated if there were no regular contractions after treatment until some time after the removal of the water sac, with the BS increased by 1–3 points. No regular contractions posttreatment and no change or increase in the BS was deemed ineffective. The induction success rate of induced labor = (remarkably effective + effective) cases/total cases × 100%. Moreover, the duration from induced labor to labor, the duration of the first stage of labor, the postpartum hemorrhage amount, and the cesarean section rate were recorded. BS results were recorded before induced labor (T0), at 2 h (T1) and 12 h (T2) after induced labor, as well as at 2 h postpartum (T3).

### 2.5. Evaluation of Neonatal Status

Neonatal status was assessed via newborn weight, as well as using the Apgar Score (AS) [16] at 1 min and 5 min after birth, with the score positively associated with the neonatal status.

### 2.6. Assessment of Mental State

Before and after delivery, the maternal psychological state was evaluated by the Self-rating Depression/Anxiety Scale (SDS/SAS) [17,18]. The standard cut-off is 50 points, with 50–59, 60–69, and > 69 being mild, moderate, and severe anxiety, respectively.

### 2.7. Determination of Adverse Reactions and Patient Satisfaction

The incidence of adverse reactions (ARs) was recorded between the application of the water sac for labor induction and discharge. Patient satisfaction was assessed with a self-made nursing questionnaire (10-point scale), with 10, 7–9, 4–6, and 1–3 points being very satisfied, satisfied, needs improvement, and dissatisfied, respectively. Satisfaction = (very satisfied + satisfied) cases/total cases × 100%.

### 2.8. Statistical Methods

SPSS 22.0 software (SPSS, Inc., Chicago, IL, USA) was used for statistical analysis, and differences with P<0.05 were considered significant. The measurement data (mean ± SD) and enumeration data (n (%)) were analyzed via one-way analysis of variance (ANOVA) followed by Tukey's honest significant difference (HSD) test and the χ2 test, respectively.
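As a rough illustration of the statistical comparisons described in Section 2.8 (the study used SPSS; the Python equivalents below are an assumption for readers without SPSS, and the continuous-outcome values are synthetic placeholders, not study data):

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# Synthetic stand-ins for a continuous outcome (e.g., first-stage duration) in the 3 groups.
group_a = rng.normal(6.0, 1.5, 54)
group_b = rng.normal(5.2, 1.6, 61)
group_c = rng.normal(6.5, 1.7, 50)

# One-way ANOVA across the three groups.
f_stat, p_val = stats.f_oneway(group_a, group_b, group_c)
print(f"ANOVA: F = {f_stat:.3f}, p = {p_val:.3f}")

# Tukey HSD post-hoc comparison.
values = np.concatenate([group_a, group_b, group_c])
labels = ["A"] * len(group_a) + ["B"] * len(group_b) + ["C"] * len(group_c)
print(pairwise_tukeyhsd(values, labels))

# Chi-square test on enumeration data, here the induction success/failure counts of Table 2.
success_failure = np.array([[33, 21], [49, 12], [41, 9]])
chi2, p, dof, expected = stats.chi2_contingency(success_failure)
print(f"Chi-square: chi2 = {chi2:.3f}, p = {p:.3f}")
```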
## 3. Results

### 3.1. Comparison of Clinical Baseline Data

As shown in Table 1, comparisons among the three groups were conducted on baseline data regarding age, body mass index (BMI), gestational weeks, family history of disease, primipara status, and household register, with no statistically significant differences (all P>0.05).

Table 1 Comparison of clinical baseline data among the three groups.

| Item | Group A (n = 54) | Group B (n = 61) | Group C (n = 50) | F or χ2 | P |
| --- | --- | --- | --- | --- | --- |
| Age | 28.30 ± 3.31 | 27.90 ± 4.50 | 28.68 ± 4.11 | 0.515 | 0.598 |
| BMI (kg/m2) | 26.26 ± 2.54 | 25.79 ± 2.82 | 26.35 ± 2.57 | 0.502 | 0.606 |
| Gestational weeks | 39.67 ± 0.70 | 39.66 ± 0.89 | 39.36 ± 0.96 | 2.144 | 0.121 |
| Family history of disease |  |  |  | 0.431 | 0.806 |
| Yes | 8 (14.81) | 10 (16.39) | 6 (12.00) |  |  |
| No | 46 (85.19) | 51 (83.61) | 44 (88.00) |  |  |
| Primipara |  |  |  | 0.769 | 0.681 |
| No | 21 (38.89) | 19 (31.15) | 17 (34.00) |  |  |
| Yes | 33 (61.11) | 42 (68.85) | 33 (66.00) |  |  |
| Household register |  |  |  | 0.720 | 0.698 |
| City | 36 (66.67) | 36 (59.02) | 31 (62.00) |  |  |
| Rural | 18 (33.33) | 25 (40.98) | 19 (38.00) |  |  |

### 3.2. Comparison of the Delivery Situation in High-Risk Term Pregnancies among the Three Groups

The effect of labor induction was compared, and the results are shown in Table 2. We found a similar success rate of labor induction in group B (80.33%) and group C (82.00%) (P>0.05), higher than that of group A (61.11%) (P<0.05). Similarly, the cesarean section rate did not differ significantly between group B (19.67%) and group C (18.00%) (P>0.05) and was lower in both when compared to group A (38.89%) (P<0.05). By comparing the delivery situation of the three groups (Figure 1), we found that there was no difference in the duration from induced labor to labor between groups B and C (P>0.05), which was shorter than that of group A (P<0.05). The duration of the first stage of labor was the shortest in group B (5.19 ± 1.65 h) among the three groups, followed by group A and group C (P<0.05). The amount of postpartum hemorrhage decreased stepwise from groups A to B to C (P<0.05).

Table 2 Comparison of labor inducement effects.

| Group | Remarkably effective | Effective | Ineffective | Induction success rate | Cesarean section rate |
| --- | --- | --- | --- | --- | --- |
| Group A (n = 54) | 20 (37.04) | 13 (24.07) | 21 (38.89) | 61.11% | 38.89% |
| Group B (n = 61) | 34 (55.74) | 15 (24.59) | 12 (19.67) | 80.33%∗ | 19.67%∗ |
| Group C (n = 50) | 34 (68.00) | 7 (14.00) | 9 (18.00) | 82.00%∗ | 18.00%∗ |

χ2 = 7.675; P = 0.022. Note: ∗ means P<0.05 compared with group A.

Figure 1 Comparison of the delivery situation in high-risk term pregnancies among the three groups. Note: patients were given a low dose of oxytocin (OXT) combined with a 30 mL (group A, n = 54), 80 mL (group B, n = 61), or 150 mL (group C, n = 50) water sac. (a–c) Comparison of the duration from induced labor to labor (a), the duration of the first stage of labor (b), and the postpartum hemorrhage volume (c) among the three groups. ∗P<0.05.

### 3.3. Comparison of Maternal Cervical Status among the Three Groups

The results of BS scores before and after induced labor in the three groups are shown in Table 3. The three groups showed no difference in BS scores at T0 and T3 (P>0.05), while at T1, the BS score was similar in groups B and C (P>0.05) and higher compared with group A (P<0.05). At T2, the BS scores of the three groups from low to high were group A, group B, and group C (P<0.05).
In all three groups, the BS score was the lowest at T0, increased continuously from T1 to T2, and reached its highest value at T3 (P<0.05).

Table 3 Changes of Bishop score during delivery in three groups of parturients.

| Group | T0 | T1 | T2 | T3 | F | P |
| --- | --- | --- | --- | --- | --- | --- |
| Group A (n = 54) | 3.59 ± 1.06 | 6.15 ± 0.76& | 7.54 ± 1.02&@ | 8.89 ± 0.86&@% | 318.0 | < 0.001 |
| Group B (n = 61) | 3.46 ± 1.13 | 6.70 ± 0.86∗& | 7.97 ± 0.75∗&@ | 8.98 ± 0.81&@% | 434.6 | < 0.001 |
| Group C (n = 50) | 3.48 ± 1.25 | 6.72 ± 1.17∗& | 8.32 ± 0.47∗#&@ | 9.12 ± 0.85&@% | 323.2 | < 0.001 |
| F | 0.207 | 6.531 | 12.920 | 0.988 |  |  |
| P | 0.814 | 0.002 | < 0.001 | 0.375 |  |  |

Note: BS results were recorded before induced labor (T0), at 2 h (T1) and 12 h (T2) after induced labor, as well as at 2 h postpartum (T3). ∗ means P<0.05 compared with group A, # means P<0.05 compared with group B, & means P<0.05 compared with T0 in the same group, @ means P<0.05 compared with T1 in the same group, and % means P<0.05 compared with T2 in the same group.

### 3.4. Comparison of Neonatal Status among the Three Groups

No neonatal asphyxia or physiological defects occurred in the three groups, nor were there any notable differences among the three groups in neonatal weight and Apgar scores at 1 min and 5 min after birth (P>0.05, Figure 2).

Figure 2 Comparison of neonatal status among the three groups. Note: patients were given a low dose of oxytocin (OXT) combined with a 30 mL (group A, n = 54), 80 mL (group B, n = 61), or 150 mL (group C, n = 50) water sac. (a, b) Comparison of the neonatal weight (a) and the Apgar scores (b) among the three groups.

### 3.5. Comparison of Maternal Mental State among the Three Groups

SAS and SDS score results are detailed in Figure 3. The two scores differed insignificantly among the three groups before and after childbirth (P>0.05) and were lower after childbirth compared with those before delivery (P<0.05).

Figure 3 Comparison of maternal mental state among the three groups. Note: patients were given a low dose of oxytocin (OXT) combined with a 30 mL (group A, n = 54), 80 mL (group B, n = 61), or 150 mL (group C, n = 50) water sac. (a, b) Comparison of the scores of the self-rating anxiety scale (SAS) (a) and self-rating depression scale (SDS) (b) among the three groups.

### 3.6. Comparison of ARs and Patient Satisfaction among the Three Groups

The statistics of ARs (Table 4) revealed no obvious difference in the incidence of ARs between groups A and B (P>0.05), both lower than that in group C (18.00%) (P<0.05). Finally, nursing satisfaction was surveyed, and the results are presented in Table 5. The nursing satisfaction was 91.80% in group B, higher than that of groups A and C (P<0.05).

Table 4 Incidence of maternal adverse reactions (ARs) in three groups.

| Group | Umbilical cord shedding | Strong cervical contractions | Cervical tear | ARs |
| --- | --- | --- | --- | --- |
| Group A (n = 54) | 1 (1.85) | 1 (1.85) | 0 (0.0) | 3.70% |
| Group B (n = 61) | 2 (3.28) | 1 (1.64) | 1 (1.64) | 6.56% |
| Group C (n = 50) | 3 (6.00) | 4 (8.00) | 2 (4.00) | 18.00%∗# |

χ2 = 7.172; P = 0.028. Note: ∗ means P<0.05 compared with group A and # means P<0.05 compared with group B.

Table 5 Nursing satisfaction of three groups of puerperae.

| Group | Very satisfied | Satisfied | Needs improvement | Dissatisfied | Total satisfaction |
| --- | --- | --- | --- | --- | --- |
| Group A (n = 54) | 24 (44.44) | 18 (33.33) | 6 (11.11) | 6 (11.11) | 77.78% |
| Group B (n = 61) | 39 (63.93) | 17 (28.87) | 4 (6.56) | 1 (1.64) | 91.80%∗ |
| Group C (n = 50) | 20 (40.00) | 17 (34.00) | 7 (14.00) | 6 (12.00) | 74.00%# |

χ2 = 9.066; P = 0.011. Note: ∗ means P<0.05 compared with group A and # means P<0.05 compared with group B.
## 4. Discussion

At present, induction of labor for pregnant women in the third trimester has become a common practice in obstetrics and gynecology [19]. Among them, high-risk women in full-term pregnancy particularly warrant induction of labor, due to their various functional obstacles, so as to ensure the life safety of mothers and newborns [20].
In previous studies, we have found that low-dose OXT combined with water sac can increase the vaginal delivery rate of term pregnant women [21], but its application in high-risk pregnant women is still rare. As an emerging technique for inducing labor in recent years, water sac was reported to achieve the synthesis and to release the local endogenous prostaglandins in the cervix by dilating the cervix, thereby realizing labor induction [22]. Because of high safety, water sacs are favored by obstetrics and gynecology [23]. However, at present, there is still a great controversy in the selection of water sac capacity, so this study has important reference significance for the future clinical application of water sac.Herein, we compared the delivery status of 3 groups of high-risk pregnancies using 30, 80, and 150 mL water sac, respectively. First, we can see that groups B and C had better induced labor effects and lower cesarean section rate than group A, indicating that 80 and 150 mL water sacs have better induced labor effects for high-risk pregnant women, being consistent with the research results of Delaney et al. [24]. Second, less duration of induced labor to labor was determined in groups B and C than in group A, which once again emphasizes better effects of 80 and 150 mL water sac. However, we found that the duration of the first stage of labor and postpartum hemorrhage volume in group C were the highest among the three groups, with an obviously higher incidence of ARs, suggesting low safety of the 150 mL water sac in high-risk pregnancy. As we all know, the mechanical stimulation of the water sac to the maternal cervix can react on the pituitary gland, induce OXT secretion, and accelerate patients’ uterine contractions [22]. Therefore, we speculate that the difference among the three groups may be due to the small size of the 30 mL water sac that has weak stimulation on the maternal cervical canal, so the cervical maturity of the parturient is low and the effect of induced labor is not good. The 150 mL water sac, as the one with the largest capacity used in this study, has the strongest stimulation to the cervical canal, which can promote the cervical maturation faster and shorten the time of induced labor. But on the other hand, too big a water sac may bring greater pain to the maternal, which is not conducive to the subsequent delivery. Besides, the large capacity of the water sac can induce great mechanical damage to the parturient and easily cause complications such as cervical laceration, which leads to intensified pain in the parturient in the first stage of labor, as well as maternal hormone disorders that affect the regular contractions of the parturient, resulting in the prolongation of the first stage of labor [25]. Moreover, as the 150 mL water sac is placed in a high position of the uterus, it may move during maternal exercise, resulting in increased cervical compression and consequently leading to cervical laceration, umbilical cord prolapse, and other complications. Therefore, the safety of the 150 mL water sac is worse than that of the other two kinds. Then, we compared the BS scores of three groups of parturient during delivery, which also clearly showed the fastest cervical maturity in group C after the application of water sac, and the reason may be consistent with our above inference. 
The 80 mL water sac therefore induces labor effectively in high-risk pregnancies while maintaining a higher safety profile, so we consider it to have broader applicability. In addition, postpartum depression, a common maternal mental illness after delivery, seriously affects the health of mothers, their families, and newborns [26, 27]. Monitoring changes in mental state before and after childbirth is therefore a key task in current obstetric care [28]. SAS and SDS scores did not differ among the three groups, which again underscores the positive significance of water-sac-induced labor in high-risk pregnancies. Likewise, neonatal outcomes differed little among the three sac sizes, indicating that all of them provide a relatively reliable safeguard for newborn safety. Finally, the nursing satisfaction survey showed the highest satisfaction in group B, supporting the 80 mL water sac as the most suitable option for high-risk pregnant women; we hypothesize that the lower satisfaction in the other two groups reflects the poorer induction effect in group A and the poorer safety in group C. Although this study analyzed the effect of water sacs of different capacities on labor induction in high-risk pregnancies, it has several limitations. The number of cases was small, and the observation period was too short to evaluate the long-term prognosis of mothers and newborns. In addition, water-sac induction should be compared with other induction methods in future work to provide a more comprehensive reference for its clinical application.

## 5. Conclusion

The use of an 80 mL water sac combined with OXT in high-risk full-term pregnancy provides ideal induction effects and high safety, effectively promoting maternal cervical ripening and shortening the first stage of labor, and is well suited to wider clinical adoption.

--- *Source: 1004816-2022-07-06.xml*
# The Novel Application of Three-Dimensional Printing Assisted Patient-Specific Instrument Osteotomy Guide in the Precise Osteotomy of Adult Talipes Equinovarus **Authors:** Yuan-Wei Zhang; Mu-Rong You; Xiao-Xiang Zhang; Xing-Liang Yu; Liang Zhang; Liang Deng; Zhe Wang; Xie-Ping Dong **Journal:** BioMed Research International (2021) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2021/1004849 --- ## Abstract Objective. This current research is aimed at assessing clinical efficacy and prognosis of three-dimensional (3D) printing assisted patient-specific instrument (PSI) osteotomy guide in precise osteotomy of adult talipes equinovarus (ATE). Methods. We included a total of 27 patients of ATE malformation (including 12 males and 15 females) from June 2014 to June 2018 in the current research. The patients were divided into the routine group (n=12) and 3D printing group (n=15) based on different operative methods. The parameters, including the operative time, intraoperative blood loss, complications, time to obtain bony fusion, functional outcomes based on American Orthopedic Foot and Ankle Society (AOFAS), and International Congenital Clubfoot Study group (ICFSG) scoring systems between the two groups were observed and recorded regularly. Results. The 3D printing group exhibits superiorities in shorter operative time, less intraoperative blood loss, higher rate of excellent, and good outcomes presented by ICFSG score at last follow-up (P<0.001, P<0.001, P=0.019) than the routine group. However, there was no significant difference exhibited in the AOFAS score at the last follow-up and total rate of complications between the two groups (P=0.136, P=0.291). Conclusion. Operation assisted by 3D printing PSI osteotomy guide for correcting the ATE malformation is novel and feasible, which might be an effective method to polish up the precise osteotomy of ATE malformation and enhance the clinical efficacy. --- ## Body ## 1. Introduction Talipes equinovarus (TE) is a kind of complicated whole lower limb deformity, which is mainly manifested as the plantar flexion and posterior talipes varus deformities, and in severe cases, it is often combined with anterior talipes adduction and arch elevation [1, 2]. The etiology of TE is often divided into primary and secondary, including the factor of specific gene deletion for the primary TE and the factors of congenital musculoskeletal malformations, neuromuscular diseases, trauma, infection, and burns for secondary TE [3, 4]. Meanwhile, the pathological changes of adult talipes equinovarus (ATE) malformation mainly include severe soft tissue contractures and bone and joint deformities [5, 6]. Hence, the overall complicated causes and abnormal deformity manifestations make the treatment require the comprehensive consideration of multiple factors. Currently, the nonsurgical treatment of TE has been generally recognized, and the Ponseti therapy has been widely applied in children with TE [7]. However, there is still no unified standard for the treatment strategy of ATE malformation, and the routine surgical methods mainly include Achilles tendon lengthening and balance muscle force of internal and external inversion, and the triple arthrodesis (TA) is also one of the most ideal surgical methods that is worthy of consideration [8, 9].Since the TA was first proposed and applied in 1923 [10], it has gradually become one of the most important surgical methods for the correction of ATE malformation. 
However, ATE being a complex deformity, some of the patients after TA may be left with inferior outcomes, persistent deformities, and a variable degree of recurrence [11, 12]. Recently, with the in-depth understanding of ATE malformation, more and more scholars believe that the ATE malformation is a kind of three-dimensional and multidirectional anatomical relationship disorder [13, 14]. However, the pathogenic factors of ATE malformation are complicated, and the degree of deformities is not uniform. Moreover, the ideal treatment strategy for ATE malformation has not yet reached a consensus currently, which has presented a great challenge for surgeons to a certain extent [15]. During the operation, it is hard for operators to accurately adjust the osteotomy direction and angle of each dimension, so it often needs to repeatedly adjust or determine the correction strategy only based on subjective assumptions, which eventually leads to a huge deviation from the preoperative planning, and the prognosis will be unoptimistic. Hence, it is vital to develop an appropriate and personalized treatment plan for different etiology, deformity location, and degree of ATE malformation for the excellent functional reconstruction [16, 17]. With regard to this, in view of the unique advantages of three-dimensional (3D) printing in the personalized design and flexible application, it has been widely used in several orthopedic subspecialties, such as spine, joint, trauma, bone tumor, and orthopedics [18, 19]. Specifically, for one thing, 3D printing is particularly suitable for individualized customization and rapid manufacturing, which can greatly meet the special needs of orthopedic doctors for the implants. For another thing, 3D printing can also accurately and conveniently assist the routine operations in orthopedic surgeries, such as the osteotomy, reduction, and fixation, which improve the efficiency and quality of surgery to a certain extent [18, 20]. Thus, in order to further explore the novel surgical approach and evaluate the clinical efficacy of 3D printing assisted PSI osteotomy guide in accurate osteotomy of ATE malformation, this current research compared the routine ATE deformity correction surgeries with the operation assisted by PSI osteotomy guide and further evaluated the prognosis. ## 2. Methods ### 2.1. Patients This retrospective research included a total of 27 patients (12 males and 15 females) of ATE malformation admitted to Jiangxi Provincial People’s Hospital Affiliated to Nanchang University from June 2014 to June 2018. Therein, the inclusion criteria were mainly summarized as the patients diagnosed as the TE by preoperative physical examination and imaging examinations such as X-ray, computed tomography (CT), and magnetic resonance imaging (MRI). The exclusion criteria included (1) patients under 16 years old, whose bone development of the lower limbs had not yet been finalized; (2) the ulcer reached the bone level and formed the chronic osteomyelitis; (3) patients who cannot tolerate surgery due to severe organic diseases; and (4) patients who cannot undergo regular follow-up. After admission, we have performed a randomized division for included individuals to two different groups (routine group and 3D printing group). Specifically, 12 patients were enrolled into the routine group and 15 patients were enrolled into the 3D printing group, and further underwent the corresponding surgical methods, regardless of the severity of deformity. 
All patients included in this research have signed the informed consents, and the research was approved by Jiangxi Provincial People’s Hospital Affiliated to Nanchang University. ### 2.2. PSI Osteotomy Guide Fabrication and Simulation Operation All of the patients were routinely examined by CT scan and anteroposterior and lateral X-rays of malformed lower limb at the time of admission. The CT scanning data of 15 patients in the 3D printing group were gathered by dual-source 64-slice spiral CT system (Siemens, Munich, Germany). The exact scanning parameters of CT system included the voltage of 120 KV and the pitch of 0.625 mm. Then, the data were further imported into the Mimics 19.0 software (Materialise, Leuven, Belgium) in Digital Imaging and Communications in Medicine (DICOM) format by the professional orthopedic 3D medical engineers, so as to reconstruct the 3D model of malformed lower limb. On one hand, the osteotomy calculations were conducted by the professional orthopedic surgeons in our treatment team, and the specific observational indicators, such as the corrections of the deformities of talipes varus and arch elevation, were considered to achieve the desired corrections. On the other hand, with the close cooperation of orthopedic surgeons and orthopedic 3D medical engineers in our treatment team, the design thinking was conceived by the orthopedic surgeons and communicated with the 3D medical engineers face to face for practical operation, while the dedicated software operation was carried out by the 3D medical engineers. Furthermore, in order to correct the deformities of talipes varus and arch elevation caused by ATE malformation, the osteotomy planes were defined by the orthopedic surgeons at different joint surfaces, respectively, and the consistent shape of guides was established individually. Meanwhile, the data of Kirschner wire guide holes on each guide were also improved, respectively. Then, the further simulated osteotomy was performed, and the osteotomy surfaces of each joint were aligned to make the lower limb correct to the neutral position. After Boolean operation (through the operation of union, difference, and intersection of more than two objects, the new form of items can be obtained), the boundaries of guides were further trimmed, and all guide holes were penetrated through, thereby completing design and production of guides. Finally, the obtained data were saved in STereoLithography (STL) format and further imported to the 3D printer (Waston Med, Inc., Changzhou, Jiangsu, China) to print out the target models and the PSI osteotomy guides, with the materials of photosensitive resin (Figure1).Figure 1 Design and fabrication processes of the 3D printing models and guides. (a) According to the imported CT data, the deformed structure of the ankle joint was simulated in Mimics19.0 software. (b) 1 : 1 reference model printed based on the simulation result. (c) In order to correct the deformity of arch elevation, the osteotomy planes were defined on the joint surfaces. (d) The simulated osteotomy for correcting the deformity of arch elevation was performed at the defined osteotomy planes, and the reduction was performed after osteotomy (frontal view). (e) The result of reduction after the simulated osteotomy (lateral view). (f) In order to correct the deformity of talipes varus, the osteotomy planes were defined on the joint surfaces. 
(g) The simulated osteotomy for correcting the deformity of talipes varus was performed at defined osteotomy planes, and the reduction was performed after osteotomy. (h) The osteotomy model and guides fabricated by the 3D printing technique. (a)(b)(c)(d)(e)(f)(g)(h) ### 2.3. Surgical Procedures Senior surgeons in the same treatment group completed all the surgical procedures of all patients in this research. Under the general anesthesia or epidural, the patient was placed in the supine position, and the blood circulation of proximal thigh was blocked by the pneumatic tourniquet. Then, an arc-shaped incision was made from the dorsolateral side of the foot to about 2 cm below the lateral malleolus, and the flap was further freed to expose the tissues of ATE malformation. Then, according to the preoperative physical and imaging examinations, the degree of inversion and soft tissue contracture was measured, and the soft tissue release was performed, including the Achilles tendon lengthening, the extension of posterior tibial tendon, and the subcutaneous release of medial plantar, so as to maximize the muscle strength and obtain the balance. Regardless of the severity of deformity, the determination of osteotomy lines and the placement of PSI osteotomy guide were directly according to the outcomes of preoperative design and simulation in 3D printing group. Subsequently, two 2.0 mm Kirschner wires were then drilled into the model along with the guide hole to set each guide. After ensuring that the PSI osteotomy guide was matched and firmly fixed, the pendulum saw was used to carry out the precise osteotomy along with the PSI osteotomy guide (Figure2). However, the determination of osteotomy lines was mainly based on preoperative planning and intraoperative attempts in the routine group, and the main procedures included correcting the deformities of talipes varus and arch elevation. After confirming the orthopedic osteotomy was satisfactory, the talocalcaneal and calcaneocuboid joints were fixed with hollow screws, and the talonavicular joint was fixed with door-shaped screw. Ultimately, the reconstruction plates (Dongya Med, Inc., Shenyang, Liaoning, China) were selectively used for the shaping and fixation according to the specific degree of deformities.Figure 2 Intraoperative photographs of operation assisted by the 3D printing assisted PSI osteotomy guide. (a) The arc-shaped incision was made from the dorsolateral side of the foot to about 2 cm below the lateral malleolus. (b) and (c) The PSI osteotomy guide was fixed with the preset Kirschner wire guide holes on the binding surfaces, and the precise osteotomy was performed under the guidance of guides. (d) The reconstruction plate was used for the shaping and fixation of ATE malformation. (a)(b)(c)(d) ### 2.4. Postoperative Managements There was no significant difference in postoperative managements between the two groups. After operation, the ankle joint of the affected side was fixed in 90° metatarsal flexion with lower leg plaster, and the affected limb was raised to observe the peripheral blood supply and skin sensation. Furthermore, the postoperative anterior and lateral X-rays and CT of the affected ankle joint were reexamined regularly. If the osteotomy end failed to heal at 9 months after the operation, it was regarded as the osteotomy end nonunion. Until the bony fusion was confirmed by the postoperative imaging examination, the plaster was demolished and the patients were instructed to begin partial rehabilitation training. 
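To make the data flow of Section 2.2 above (CT acquisition, 3D reconstruction, and export of a printable model) more concrete, the following is a minimal sketch using open-source tools rather than the authors' Mimics workflow. The folder name, the Hounsfield threshold, and the libraries (pydicom, scikit-image, numpy-stl) are assumptions for illustration; osteotomy-plane definition, Boolean trimming, and guide-hole design are not reproduced here.

```python
# Rough open-source stand-in for the CT-to-printable-model step of Section 2.2.
# Assumed inputs: a folder of CT DICOM slices. Not the authors' pipeline.
from pathlib import Path

import numpy as np
import pydicom
from skimage import measure
from stl import mesh  # provided by the numpy-stl package


def load_ct_volume(dicom_dir: str) -> np.ndarray:
    """Stack a folder of CT slices into a 3D volume of Hounsfield units."""
    slices = [pydicom.dcmread(p) for p in Path(dicom_dir).glob("*.dcm")]
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))  # order along z
    hu = [s.pixel_array * float(s.RescaleSlope) + float(s.RescaleIntercept) for s in slices]
    return np.stack(hu).astype(np.float32)


def export_bone_stl(volume: np.ndarray, out_path: str, hu_threshold: float = 300.0) -> None:
    """Extract an approximate bone surface by HU thresholding and save it as STL."""
    verts, faces, _, _ = measure.marching_cubes(volume, level=hu_threshold)
    surface = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
    for i, tri in enumerate(faces):
        surface.vectors[i] = verts[tri]  # each face becomes one triangle
    surface.save(out_path)


if __name__ == "__main__":
    vol = load_ct_volume("ankle_ct_dicom/")  # hypothetical DICOM folder name
    export_bone_stl(vol, "ankle_model.stl")
```

In practice, the voxel spacing from the DICOM headers would also be passed to the meshing step so that the printed model is dimensionally accurate at 1 : 1 scale.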
In terms of the follow-up, patients in two groups were all regularly followed up within 1, 3, 6, 12, 24, and 36 months after the surgery. During follow-ups, the incision, skin healing, range of ankle joint motion, complications, and the time to obtain bony fusion were also regularly observed and recorded. ### 2.5. Parameter Assessment The demographic data and ATE malformation characteristics between the two groups were recorded. Moreover, the parameters, including operative time, intraoperative blood loss, and range of ankle joint motion, were further observed and recorded. At the last follow-up, the functional outcomes of ATE malformation were evaluated by AOFAS [21] and ICFSG scoring systems [22]. In the ICFSG scoring system with a total score of 60, 0 is regarded as normal, 0-5 is regarded as excellent, 6-15 is regarded as good, 16-30 is regarded as fair, and >30 is regarded as poor. ### 2.6. Statistical Analysis Data presented in this current research were statistically analyzed by the SPSS 24.0 software (SPSS, Inc., Chicago, USA) and exhibited as mean ± standard deviation (SD) or count (percentage). The statistical methods of Chi-squared test, Fisher exact test, and Student’s t test were applied in this research. The independent t test was used to assess the continuous variables, and the chi-square test or Fisher exact test was used to assess the categorical variables of different parameters collected between the two groups. P value < 0.05 was represented as statistically significant.
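As a rough illustration of the outcome grading in Section 2.5 and the between-group tests in Section 2.6, the sketch below uses SciPy in place of SPSS. The continuous values are randomly generated placeholders (loosely parameterized on Table 2) and will not reproduce the reported P values; only the complication counts in the 2x2 table follow Table 3.

```python
# Illustrative sketch only: open-source stand-in (SciPy) for the SPSS analyses
# described in Sections 2.5-2.6. Numeric inputs below are placeholders.
import numpy as np
from scipy import stats


def icfsg_grade(score: int) -> str:
    """Map an ICFSG total score (0-60) to the grade bands given in Section 2.5."""
    if score <= 5:
        return "excellent"  # 0 is normal; 0-5 counts as excellent
    if score <= 15:
        return "good"
    if score <= 30:
        return "fair"
    return "poor"


rng = np.random.default_rng(0)
# Continuous outcome (e.g., operative time, min): independent two-sample t test.
routine = rng.normal(122.9, 18.3, size=12)       # hypothetical routine-group values
printing_3d = rng.normal(96.3, 14.2, size=15)    # hypothetical 3D-printing-group values
t_stat, p_time = stats.ttest_ind(routine, printing_3d)

# Categorical outcome (any complication): chi-square with Fisher exact fallback.
# Rows = groups, columns = [complication, no complication]; counts follow Table 3.
table = np.array([[2, 10],
                  [2, 13]])
chi2, p_chi2, _, expected = stats.chi2_contingency(table)
if (expected < 5).any():
    # Fisher's exact test is preferred when expected cell counts are small.
    _, p_compl = stats.fisher_exact(table)
else:
    p_compl = p_chi2

print(f"ICFSG 12 -> {icfsg_grade(12)}")          # 'good', as in the typical case
print(f"operative time: t = {t_stat:.2f}, P = {p_time:.4f}")
print(f"complications: P = {p_compl:.4f}")
```

The Fisher fallback mirrors the footnotes to Tables 2 and 3, where exact tests are reported for cells with small counts.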
At the last follow-up, the functional outcomes of the ATE malformation were evaluated with the AOFAS [21] and ICFSG [22] scoring systems. In the ICFSG scoring system, with a total score of 60, 0 is regarded as normal, 0-5 as excellent, 6-15 as good, 16-30 as fair, and >30 as poor. ## 2.6. Statistical Analysis Data in this research were statistically analyzed with SPSS 24.0 software (SPSS, Inc., Chicago, USA) and are expressed as mean ± standard deviation (SD) or count (percentage). The chi-squared test, Fisher exact test, and Student's t test were applied: the independent t test was used for continuous variables, and the chi-squared test or Fisher exact test was used for categorical variables collected in the two groups. A P value < 0.05 was considered statistically significant. ## 3. Results ### 3.1. Demographic Data and ATE Malformation Characteristics Table 1 presents the demographic data and ATE malformation characteristics of the two groups. There was no significant difference between the two groups in age, gender, side of ATE malformation, or cause of ATE malformation (P > 0.05 for all).

Table 1: Comparisons of demographic data and ATE malformation characteristics between the two groups.

| Characteristics | Routine group (n=12) | 3D printing group (n=15) | P value |
| --- | --- | --- | --- |
| Mean age (range), years | 54.7±12.3 (45-68) | 56.1±13.2 (43-69) | 0.579 |
| Gender, n (%) | | | 0.743 |
| Male | 5 (41.7) | 7 (46.7) | |
| Female | 7 (58.3) | 8 (53.3) | |
| ATE malformation side, n (%) | | | 0.285 |
| Left | 8 (66.7) | 9 (60.0) | |
| Right | 4 (33.3) | 6 (40.0) | |
| Causes of ATE malformation, n (%) | | | 0.641 |
| Neglected treatment of CTE | 12 (100) | 15 (100) | |
| Recurrence of CTE after treatment | 0 | 0 | |
| Sequelae of polio | 0 | 0 | |
| Trauma | 0 | 0 | |

Note: CTE: congenital talipes equinovarus; ATE: adult talipes equinovarus.

### 3.2. Clinical Data and Functional Outcomes Clinical data and functional outcomes of the two groups are summarized in Table 2. Operative time in the 3D printing group (96.3±14.2 min) was significantly shorter than in the routine group (122.9±18.3 min) (P<0.001). Intraoperative blood loss also differed significantly between the 3D printing group (98.6±18.7 ml) and the routine group (126.5±23.2 ml) (P<0.001). However, there was no significant difference in follow-up time, time to obtain bony fusion, range of ankle joint motion, or AOFAS score at the last follow-up (all P > 0.05). For the ICFSG score at the last follow-up, 5 patients were rated excellent, 9 good, and 1 fair in the 3D printing group, while 3 patients were rated excellent, 6 good, and 3 fair in the routine group. The rate of excellent and good outcomes in the 3D printing group was 93.3%, higher than that in the routine group (75%, P=0.019). The types and numbers of operations performed in each group are listed in the supplementary table (available here).
Table 2: Comparison of clinical data and functional outcomes between the two groups of ATE malformation.

| Parameter | Routine group (n=12) | 3D printing group (n=15) | P value |
| --- | --- | --- | --- |
| Operative time, min | 122.9±18.3 | 96.3±14.2 | <0.001 |
| Intraoperative blood loss, ml | 126.5±23.2 | 98.6±18.7 | <0.001 |
| Follow-up time, month | 26.1±7.6 | 25.3±6.9 | 0.352 |
| Time to obtain bony fusion, week | 13.3±3.1 | 12.6±2.7 | 0.243 |
| Range of ankle joint motion at last follow-up, ° | | | |
| Dorsiflexion | 23.6±3.4 | 24.2±3.8 | 0.371 |
| Plantarflexion | 26.8±3.7 | 27.4±2.9 | 0.253 |
| Inversion | 24.3±3.1 | 25.5±3.5 | 0.162 |
| Eversion | 27.3±3.4 | 28.1±3.2 | 0.527 |
| AOFAS score at last follow-up, point | 77.8±9.1 | 78.5±8.5 | 0.136 |
| ICFSG score at last follow-up, n (%) | | | |
| Excellent | 3 (25.0) | 5 (33.3) | 0.257 |
| Good | 6 (50.0) | 9 (60.0) | 0.632 |
| Fair | 3 (25.0) | 1 (6.7) | 0.876a |
| Poor | 0 | 0 | — |
| Rate of excellent and good outcomes, % | 75.0 | 93.3 | 0.019a |

a: P value for continuity-corrected chi-squared test. Note: ICFSG: International Congenital Clubfoot Study Group; AOFAS: American Orthopedic Foot and Ankle Society; ATE: adult talipes equinovarus.

### 3.3. Complications As shown in Table 3, the total complication rates in the 3D printing group and the routine group were 13.3% (2/15) and 16.7% (2/12), respectively, with no significant difference (P=0.291).

Table 3: Comparison of complications between the two groups of ATE malformation.

| Complications | Routine group (n=12) | 3D printing group (n=15) | P value |
| --- | --- | --- | --- |
| Superficial infection | 0 | 1 (6.7) | 1.000a |
| Deep infection | 1 (8.3) | 0 | 1.000a |
| Skin necrosis | 0 | 0 | — |
| Nerve injury | 0 | 0 | — |
| Vascular injury | 0 | 0 | — |
| Osteotomy end nonunion | 0 | 0 | — |
| Ankle stiffness | 1 (8.3) | 1 (6.7) | 1.000a |
| Total | 2 (16.7) | 2 (13.3) | 0.291 |

Values are expressed as n (%); a: P value for Fisher's exact test. Note: ATE: adult talipes equinovarus.

### 3.4. Typical Case A 57-year-old female had suffered from congenital talipes equinovarus (CTE) on the right side for 50 years. The patient had not paid sufficient attention to it and had walked on the dorsum of the foot for 20 years. She was admitted to our hospital because of right foot pain and limited activity lasting more than half a year (Figure 3). Physical examination revealed that the right ankle joint was stiff, the subtalar joint had no range of motion, and the knee tendon reflex was hyperactive. After osteotomy assisted by the PSI osteotomy guide, tibiotalocalcaneal arthrodesis was performed, and postoperative imaging indicated that the osteotomy end was well aligned and the fixation was reliable. At the 12-week follow-up, the osteotomy end had obtained bony fusion (Figure 4). At the last follow-up, the AOFAS score was 81 points and the ICFSG score was 12 points, which was evaluated as a good outcome. Figure 3: A 57-year-old female who had suffered from CTE on the right side for 50 years. (a) The patient had not paid sufficient attention to the CTE and had walked on the dorsum of the foot for 20 years. (b) Preoperative appearance; physical examination revealed that the right ankle joint was stiff, the subtalar joint had no range of motion, and the knee tendon reflex was hyperactive. (c) and (d) Preoperative X-ray and 3D reconstruction CT scanning of the right ankle joint indicating the severe ATE malformation. Figure 4: Postoperative imaging examinations and general appearance of the right ankle joint. (a) and (b) Postoperative X-rays indicating that the osteotomy end was well aligned and the fixation was reliable. (c) At 12 weeks postoperatively, bony fusion of the osteotomy end confirmed by CT scanning. (d) General appearance of the right ankle joint at the last follow-up.
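To make the ICFSG grading and the between-group comparisons reported above concrete, the sketch below re-derives examples of such analyses from the summary data in Tables 2 and 3, assuming Python with scipy is available. The authors performed their analysis in SPSS 24.0, so this is only an illustration, and P values from the exact test variants they used (for example, the continuity-corrected chi-squared test) may differ slightly from those shown here.

```python
# Illustrative re-computation of the kinds of comparisons reported above.
from scipy import stats


def icfsg_grade(score):
    # ICFSG functional grading (total score 0-60, lower is better; 0 is normal).
    if score <= 5:
        return "excellent"
    if score <= 15:
        return "good"
    if score <= 30:
        return "fair"
    return "poor"


# Continuous variable from summary statistics (Table 2): operative time, min.
t, p = stats.ttest_ind_from_stats(mean1=122.9, std1=18.3, nobs1=12,
                                  mean2=96.3, std2=14.2, nobs2=15)
print(f"operative time: t = {t:.2f}, P = {p:.4f}")

# Categorical variable (Table 3): total complications, 2/12 vs. 2/15.
odds, p = stats.fisher_exact([[2, 10], [2, 13]])
print(f"total complications: Fisher exact P = {p:.3f}")

# The typical case scored 12 ICFSG points, which maps to 'good'.
print(icfsg_grade(12))
```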
## 4. Discussion This study demonstrated the safety, accuracy, and reliability of the 3D printing assisted PSI osteotomy guide for correcting ATE malformation. Compared with the routine group, the 3D printing group showed a shorter operative time, less intraoperative blood loss, and a higher rate of excellent and good outcomes on the ICFSG score at the last follow-up. The operation assisted by the PSI osteotomy guide for correcting ATE malformation is novel and feasible and might become an effective method to improve the precision of osteotomy for ATE malformation and to improve clinical efficacy. In this research, we compared the clinical efficacy and prognosis of routine ATE deformity correction surgery with operation assisted by the PSI osteotomy guide in patients with ATE malformation. In the 3D printing group, the personalized PSI osteotomy guide fabricated by the 3D printing technique could be fitted closely to each joint surface. In all 15 patients, the osteotomy was completed successfully in a single attempt, avoiding repeated intraoperative adjustments and trials and contributing to a more standardized and simpler operation.
This also reasonably explains why operative time and intraoperative blood loss in the 3D printing group were lower than in the routine group (P<0.001). In addition, the PSI osteotomy guide can be fixed through the preset Kirschner wire guide holes on its matching surfaces, which effectively prevents the pendulum saw from slipping and deviating during the operation. Hence, with rigorous design and standardized operation, accurate osteotomy of the ATE malformation can be achieved, and the clinical result was equivalent to the preoperative plan. The 3D printing group also exhibited a higher rate of excellent and good outcomes than the routine group on the ICFSG score at the last follow-up (93.3% versus 75.0%, P=0.019). Although the 3D printing assisted PSI osteotomy guide has been widely applied in several clinical settings, such as cubitus varus deformity, developmental dysplasia of the hip (DDH), spinal scoliosis, hallux valgus, and other deformities [20, 23–25], its application in the correction of ATE malformation has rarely been reported. Previously, Windisch et al. [26] applied the 3D printing technique to fabricate a physical model of ATE malformation at fourfold magnification so that surgeons could accurately analyze all bone and joint deformities and perform preoperative planning, but this application exploited only the most basic role of 3D printing. Gozar et al. [16] and Barker et al. [27] applied computer modeling analysis to the correction of TE malformation and obtained satisfactory short-term outcomes but lacked a long-term prognostic comparison with a routine group. In this research, there was no significant difference between the two groups in the total rate of complications (13.3% vs. 16.7%, P=0.291). This outcome is consistent with the results of Zhang et al. [28], who used the 3D printing technique for patients with complicated ankle fractures, indicating that 3D printing offers no apparent superiority in avoiding complications. Furthermore, at the last follow-up, there was no significant difference in the range of ankle joint motion, including dorsiflexion, plantarflexion, inversion, and eversion, between the two groups (all P > 0.05). This may be attributed to the similar time to bony fusion and the similar rehabilitation programs in the two groups, and both groups showed relatively good postoperative functional outcomes. It is also worth noting that the mismatch between the ICFSG and AOFAS scores in the functional outcome evaluation is not related to a difference in the range of ankle joint motion; it might instead be related to differences in the time of removing the plaster and beginning rehabilitation training. Finally, it is essential to point out the drawbacks of this research. First, the mean follow-up time was about 2 years; at the last follow-up, the prognosis of some patients with ATE malformation (3 patients in the routine group and 1 patient in the 3D printing group) was rated as only fair, and longer follow-up observation is needed in the future. Second, this is a retrospective study with a relatively small sample size.
Multicenter prospective studies with larger sample sizes are still needed to further verify the clinical efficacy and prognosis of the 3D printing assisted PSI osteotomy guide for the correction of ATE malformation. Moreover, a specific comparison of pre- and postoperative radiological lower limb alignment parameters between the two study groups would also be of great interest and should be addressed in future research. ## 5. Conclusions Clinical application of the 3D printing assisted PSI osteotomy guide for correcting ATE malformation is safe, precise, and dependable. The 3D printing group showed a shorter operative time, less intraoperative blood loss, and a higher rate of excellent and good outcomes on the ICFSG score at the last follow-up than the routine group. The operation assisted by the 3D printed PSI osteotomy guide for correcting ATE malformation is novel and feasible and might be an effective method to improve the precision of osteotomy for ATE malformation and enhance clinical efficacy. --- *Source: 1004849-2021-12-02.xml*
--- ## Abstract Objective. This current research is aimed at assessing clinical efficacy and prognosis of three-dimensional (3D) printing assisted patient-specific instrument (PSI) osteotomy guide in precise osteotomy of adult talipes equinovarus (ATE). Methods. We included a total of 27 patients of ATE malformation (including 12 males and 15 females) from June 2014 to June 2018 in the current research. The patients were divided into the routine group (n=12) and 3D printing group (n=15) based on different operative methods. The parameters, including the operative time, intraoperative blood loss, complications, time to obtain bony fusion, functional outcomes based on American Orthopedic Foot and Ankle Society (AOFAS), and International Congenital Clubfoot Study group (ICFSG) scoring systems between the two groups were observed and recorded regularly. Results. The 3D printing group exhibits superiorities in shorter operative time, less intraoperative blood loss, higher rate of excellent, and good outcomes presented by ICFSG score at last follow-up (P<0.001, P<0.001, P=0.019) than the routine group. However, there was no significant difference exhibited in the AOFAS score at the last follow-up and total rate of complications between the two groups (P=0.136, P=0.291). Conclusion. Operation assisted by 3D printing PSI osteotomy guide for correcting the ATE malformation is novel and feasible, which might be an effective method to polish up the precise osteotomy of ATE malformation and enhance the clinical efficacy. --- ## Body ## 1. Introduction Talipes equinovarus (TE) is a kind of complicated whole lower limb deformity, which is mainly manifested as the plantar flexion and posterior talipes varus deformities, and in severe cases, it is often combined with anterior talipes adduction and arch elevation [1, 2]. The etiology of TE is often divided into primary and secondary, including the factor of specific gene deletion for the primary TE and the factors of congenital musculoskeletal malformations, neuromuscular diseases, trauma, infection, and burns for secondary TE [3, 4]. Meanwhile, the pathological changes of adult talipes equinovarus (ATE) malformation mainly include severe soft tissue contractures and bone and joint deformities [5, 6]. Hence, the overall complicated causes and abnormal deformity manifestations make the treatment require the comprehensive consideration of multiple factors. Currently, the nonsurgical treatment of TE has been generally recognized, and the Ponseti therapy has been widely applied in children with TE [7]. However, there is still no unified standard for the treatment strategy of ATE malformation, and the routine surgical methods mainly include Achilles tendon lengthening and balance muscle force of internal and external inversion, and the triple arthrodesis (TA) is also one of the most ideal surgical methods that is worthy of consideration [8, 9].Since the TA was first proposed and applied in 1923 [10], it has gradually become one of the most important surgical methods for the correction of ATE malformation. However, ATE being a complex deformity, some of the patients after TA may be left with inferior outcomes, persistent deformities, and a variable degree of recurrence [11, 12]. Recently, with the in-depth understanding of ATE malformation, more and more scholars believe that the ATE malformation is a kind of three-dimensional and multidirectional anatomical relationship disorder [13, 14]. 
However, the pathogenic factors of ATE malformation are complicated, and the degree of deformities is not uniform. Moreover, the ideal treatment strategy for ATE malformation has not yet reached a consensus currently, which has presented a great challenge for surgeons to a certain extent [15]. During the operation, it is hard for operators to accurately adjust the osteotomy direction and angle of each dimension, so it often needs to repeatedly adjust or determine the correction strategy only based on subjective assumptions, which eventually leads to a huge deviation from the preoperative planning, and the prognosis will be unoptimistic. Hence, it is vital to develop an appropriate and personalized treatment plan for different etiology, deformity location, and degree of ATE malformation for the excellent functional reconstruction [16, 17]. With regard to this, in view of the unique advantages of three-dimensional (3D) printing in the personalized design and flexible application, it has been widely used in several orthopedic subspecialties, such as spine, joint, trauma, bone tumor, and orthopedics [18, 19]. Specifically, for one thing, 3D printing is particularly suitable for individualized customization and rapid manufacturing, which can greatly meet the special needs of orthopedic doctors for the implants. For another thing, 3D printing can also accurately and conveniently assist the routine operations in orthopedic surgeries, such as the osteotomy, reduction, and fixation, which improve the efficiency and quality of surgery to a certain extent [18, 20]. Thus, in order to further explore the novel surgical approach and evaluate the clinical efficacy of 3D printing assisted PSI osteotomy guide in accurate osteotomy of ATE malformation, this current research compared the routine ATE deformity correction surgeries with the operation assisted by PSI osteotomy guide and further evaluated the prognosis. ## 2. Methods ### 2.1. Patients This retrospective research included a total of 27 patients (12 males and 15 females) of ATE malformation admitted to Jiangxi Provincial People’s Hospital Affiliated to Nanchang University from June 2014 to June 2018. Therein, the inclusion criteria were mainly summarized as the patients diagnosed as the TE by preoperative physical examination and imaging examinations such as X-ray, computed tomography (CT), and magnetic resonance imaging (MRI). The exclusion criteria included (1) patients under 16 years old, whose bone development of the lower limbs had not yet been finalized; (2) the ulcer reached the bone level and formed the chronic osteomyelitis; (3) patients who cannot tolerate surgery due to severe organic diseases; and (4) patients who cannot undergo regular follow-up. After admission, we have performed a randomized division for included individuals to two different groups (routine group and 3D printing group). Specifically, 12 patients were enrolled into the routine group and 15 patients were enrolled into the 3D printing group, and further underwent the corresponding surgical methods, regardless of the severity of deformity. All patients included in this research have signed the informed consents, and the research was approved by Jiangxi Provincial People’s Hospital Affiliated to Nanchang University. ### 2.2. PSI Osteotomy Guide Fabrication and Simulation Operation All of the patients were routinely examined by CT scan and anteroposterior and lateral X-rays of malformed lower limb at the time of admission. 
The CT scanning data of 15 patients in the 3D printing group were gathered by dual-source 64-slice spiral CT system (Siemens, Munich, Germany). The exact scanning parameters of CT system included the voltage of 120 KV and the pitch of 0.625 mm. Then, the data were further imported into the Mimics 19.0 software (Materialise, Leuven, Belgium) in Digital Imaging and Communications in Medicine (DICOM) format by the professional orthopedic 3D medical engineers, so as to reconstruct the 3D model of malformed lower limb. On one hand, the osteotomy calculations were conducted by the professional orthopedic surgeons in our treatment team, and the specific observational indicators, such as the corrections of the deformities of talipes varus and arch elevation, were considered to achieve the desired corrections. On the other hand, with the close cooperation of orthopedic surgeons and orthopedic 3D medical engineers in our treatment team, the design thinking was conceived by the orthopedic surgeons and communicated with the 3D medical engineers face to face for practical operation, while the dedicated software operation was carried out by the 3D medical engineers. Furthermore, in order to correct the deformities of talipes varus and arch elevation caused by ATE malformation, the osteotomy planes were defined by the orthopedic surgeons at different joint surfaces, respectively, and the consistent shape of guides was established individually. Meanwhile, the data of Kirschner wire guide holes on each guide were also improved, respectively. Then, the further simulated osteotomy was performed, and the osteotomy surfaces of each joint were aligned to make the lower limb correct to the neutral position. After Boolean operation (through the operation of union, difference, and intersection of more than two objects, the new form of items can be obtained), the boundaries of guides were further trimmed, and all guide holes were penetrated through, thereby completing design and production of guides. Finally, the obtained data were saved in STereoLithography (STL) format and further imported to the 3D printer (Waston Med, Inc., Changzhou, Jiangsu, China) to print out the target models and the PSI osteotomy guides, with the materials of photosensitive resin (Figure1).Figure 1 Design and fabrication processes of the 3D printing models and guides. (a) According to the imported CT data, the deformed structure of the ankle joint was simulated in Mimics19.0 software. (b) 1 : 1 reference model printed based on the simulation result. (c) In order to correct the deformity of arch elevation, the osteotomy planes were defined on the joint surfaces. (d) The simulated osteotomy for correcting the deformity of arch elevation was performed at the defined osteotomy planes, and the reduction was performed after osteotomy (frontal view). (e) The result of reduction after the simulated osteotomy (lateral view). (f) In order to correct the deformity of talipes varus, the osteotomy planes were defined on the joint surfaces. (g) The simulated osteotomy for correcting the deformity of talipes varus was performed at defined osteotomy planes, and the reduction was performed after osteotomy. (h) The osteotomy model and guides fabricated by the 3D printing technique. (a)(b)(c)(d)(e)(f)(g)(h) ### 2.3. Surgical Procedures Senior surgeons in the same treatment group completed all the surgical procedures of all patients in this research. 
Under the general anesthesia or epidural, the patient was placed in the supine position, and the blood circulation of proximal thigh was blocked by the pneumatic tourniquet. Then, an arc-shaped incision was made from the dorsolateral side of the foot to about 2 cm below the lateral malleolus, and the flap was further freed to expose the tissues of ATE malformation. Then, according to the preoperative physical and imaging examinations, the degree of inversion and soft tissue contracture was measured, and the soft tissue release was performed, including the Achilles tendon lengthening, the extension of posterior tibial tendon, and the subcutaneous release of medial plantar, so as to maximize the muscle strength and obtain the balance. Regardless of the severity of deformity, the determination of osteotomy lines and the placement of PSI osteotomy guide were directly according to the outcomes of preoperative design and simulation in 3D printing group. Subsequently, two 2.0 mm Kirschner wires were then drilled into the model along with the guide hole to set each guide. After ensuring that the PSI osteotomy guide was matched and firmly fixed, the pendulum saw was used to carry out the precise osteotomy along with the PSI osteotomy guide (Figure2). However, the determination of osteotomy lines was mainly based on preoperative planning and intraoperative attempts in the routine group, and the main procedures included correcting the deformities of talipes varus and arch elevation. After confirming the orthopedic osteotomy was satisfactory, the talocalcaneal and calcaneocuboid joints were fixed with hollow screws, and the talonavicular joint was fixed with door-shaped screw. Ultimately, the reconstruction plates (Dongya Med, Inc., Shenyang, Liaoning, China) were selectively used for the shaping and fixation according to the specific degree of deformities.Figure 2 Intraoperative photographs of operation assisted by the 3D printing assisted PSI osteotomy guide. (a) The arc-shaped incision was made from the dorsolateral side of the foot to about 2 cm below the lateral malleolus. (b) and (c) The PSI osteotomy guide was fixed with the preset Kirschner wire guide holes on the binding surfaces, and the precise osteotomy was performed under the guidance of guides. (d) The reconstruction plate was used for the shaping and fixation of ATE malformation. (a)(b)(c)(d) ### 2.4. Postoperative Managements There was no significant difference in postoperative managements between the two groups. After operation, the ankle joint of the affected side was fixed in 90° metatarsal flexion with lower leg plaster, and the affected limb was raised to observe the peripheral blood supply and skin sensation. Furthermore, the postoperative anterior and lateral X-rays and CT of the affected ankle joint were reexamined regularly. If the osteotomy end failed to heal at 9 months after the operation, it was regarded as the osteotomy end nonunion. Until the bony fusion was confirmed by the postoperative imaging examination, the plaster was demolished and the patients were instructed to begin partial rehabilitation training. In terms of the follow-up, patients in two groups were all regularly followed up within 1, 3, 6, 12, 24, and 36 months after the surgery. During follow-ups, the incision, skin healing, range of ankle joint motion, complications, and the time to obtain bony fusion were also regularly observed and recorded. ### 2.5. 
Parameter Assessment The demographic data and ATE malformation characteristics between the two groups were recorded. Moreover, the parameters, including operative time, intraoperative blood loss, and range of ankle joint motion, were further observed and recorded. At the last follow-up, the functional outcomes of ATE malformation were evaluated by AOFAS [21] and ICFSG scoring systems [22]. In the ICFSG scoring system with a total score of 60, 0 is regarded as normal, 0-5 is regarded as excellent, 6-15 is regarded as good, 16-30 is regarded as fair, and >30 is regarded as poor. ### 2.6. Statistical Analysis Data presented in this current research were statistically analyzed by the SPSS 24.0 software (SPSS, Inc., Chicago, USA) and exhibited asmean±standarddeviation (SD) or count (percentage). The statistical methods of Chi-squared test, Fisher exact test, and Student’s t test were applied in this research. The independent t test was used to assess the continuous variables, and the chi-square test or Fisher exact test was used to assess the categorical variables of different parameters collected between the two groups. P value < 0.05 was represented as statistically significant. ## 2.1. Patients This retrospective research included a total of 27 patients (12 males and 15 females) of ATE malformation admitted to Jiangxi Provincial People’s Hospital Affiliated to Nanchang University from June 2014 to June 2018. Therein, the inclusion criteria were mainly summarized as the patients diagnosed as the TE by preoperative physical examination and imaging examinations such as X-ray, computed tomography (CT), and magnetic resonance imaging (MRI). The exclusion criteria included (1) patients under 16 years old, whose bone development of the lower limbs had not yet been finalized; (2) the ulcer reached the bone level and formed the chronic osteomyelitis; (3) patients who cannot tolerate surgery due to severe organic diseases; and (4) patients who cannot undergo regular follow-up. After admission, we have performed a randomized division for included individuals to two different groups (routine group and 3D printing group). Specifically, 12 patients were enrolled into the routine group and 15 patients were enrolled into the 3D printing group, and further underwent the corresponding surgical methods, regardless of the severity of deformity. All patients included in this research have signed the informed consents, and the research was approved by Jiangxi Provincial People’s Hospital Affiliated to Nanchang University. ## 2.2. PSI Osteotomy Guide Fabrication and Simulation Operation All of the patients were routinely examined by CT scan and anteroposterior and lateral X-rays of malformed lower limb at the time of admission. The CT scanning data of 15 patients in the 3D printing group were gathered by dual-source 64-slice spiral CT system (Siemens, Munich, Germany). The exact scanning parameters of CT system included the voltage of 120 KV and the pitch of 0.625 mm. Then, the data were further imported into the Mimics 19.0 software (Materialise, Leuven, Belgium) in Digital Imaging and Communications in Medicine (DICOM) format by the professional orthopedic 3D medical engineers, so as to reconstruct the 3D model of malformed lower limb. 
On one hand, the osteotomy calculations were conducted by the professional orthopedic surgeons in our treatment team, and the specific observational indicators, such as the corrections of the deformities of talipes varus and arch elevation, were considered to achieve the desired corrections. On the other hand, with the close cooperation of orthopedic surgeons and orthopedic 3D medical engineers in our treatment team, the design thinking was conceived by the orthopedic surgeons and communicated with the 3D medical engineers face to face for practical operation, while the dedicated software operation was carried out by the 3D medical engineers. Furthermore, in order to correct the deformities of talipes varus and arch elevation caused by ATE malformation, the osteotomy planes were defined by the orthopedic surgeons at different joint surfaces, respectively, and the consistent shape of guides was established individually. Meanwhile, the data of Kirschner wire guide holes on each guide were also improved, respectively. Then, the further simulated osteotomy was performed, and the osteotomy surfaces of each joint were aligned to make the lower limb correct to the neutral position. After Boolean operation (through the operation of union, difference, and intersection of more than two objects, the new form of items can be obtained), the boundaries of guides were further trimmed, and all guide holes were penetrated through, thereby completing design and production of guides. Finally, the obtained data were saved in STereoLithography (STL) format and further imported to the 3D printer (Waston Med, Inc., Changzhou, Jiangsu, China) to print out the target models and the PSI osteotomy guides, with the materials of photosensitive resin (Figure1).Figure 1 Design and fabrication processes of the 3D printing models and guides. (a) According to the imported CT data, the deformed structure of the ankle joint was simulated in Mimics19.0 software. (b) 1 : 1 reference model printed based on the simulation result. (c) In order to correct the deformity of arch elevation, the osteotomy planes were defined on the joint surfaces. (d) The simulated osteotomy for correcting the deformity of arch elevation was performed at the defined osteotomy planes, and the reduction was performed after osteotomy (frontal view). (e) The result of reduction after the simulated osteotomy (lateral view). (f) In order to correct the deformity of talipes varus, the osteotomy planes were defined on the joint surfaces. (g) The simulated osteotomy for correcting the deformity of talipes varus was performed at defined osteotomy planes, and the reduction was performed after osteotomy. (h) The osteotomy model and guides fabricated by the 3D printing technique. (a)(b)(c)(d)(e)(f)(g)(h) ## 2.3. Surgical Procedures Senior surgeons in the same treatment group completed all the surgical procedures of all patients in this research. Under the general anesthesia or epidural, the patient was placed in the supine position, and the blood circulation of proximal thigh was blocked by the pneumatic tourniquet. Then, an arc-shaped incision was made from the dorsolateral side of the foot to about 2 cm below the lateral malleolus, and the flap was further freed to expose the tissues of ATE malformation. 
Then, according to the preoperative physical and imaging examinations, the degree of inversion and soft tissue contracture was measured, and the soft tissue release was performed, including the Achilles tendon lengthening, the extension of posterior tibial tendon, and the subcutaneous release of medial plantar, so as to maximize the muscle strength and obtain the balance. Regardless of the severity of deformity, the determination of osteotomy lines and the placement of PSI osteotomy guide were directly according to the outcomes of preoperative design and simulation in 3D printing group. Subsequently, two 2.0 mm Kirschner wires were then drilled into the model along with the guide hole to set each guide. After ensuring that the PSI osteotomy guide was matched and firmly fixed, the pendulum saw was used to carry out the precise osteotomy along with the PSI osteotomy guide (Figure2). However, the determination of osteotomy lines was mainly based on preoperative planning and intraoperative attempts in the routine group, and the main procedures included correcting the deformities of talipes varus and arch elevation. After confirming the orthopedic osteotomy was satisfactory, the talocalcaneal and calcaneocuboid joints were fixed with hollow screws, and the talonavicular joint was fixed with door-shaped screw. Ultimately, the reconstruction plates (Dongya Med, Inc., Shenyang, Liaoning, China) were selectively used for the shaping and fixation according to the specific degree of deformities.Figure 2 Intraoperative photographs of operation assisted by the 3D printing assisted PSI osteotomy guide. (a) The arc-shaped incision was made from the dorsolateral side of the foot to about 2 cm below the lateral malleolus. (b) and (c) The PSI osteotomy guide was fixed with the preset Kirschner wire guide holes on the binding surfaces, and the precise osteotomy was performed under the guidance of guides. (d) The reconstruction plate was used for the shaping and fixation of ATE malformation. (a)(b)(c)(d) ## 2.4. Postoperative Managements There was no significant difference in postoperative managements between the two groups. After operation, the ankle joint of the affected side was fixed in 90° metatarsal flexion with lower leg plaster, and the affected limb was raised to observe the peripheral blood supply and skin sensation. Furthermore, the postoperative anterior and lateral X-rays and CT of the affected ankle joint were reexamined regularly. If the osteotomy end failed to heal at 9 months after the operation, it was regarded as the osteotomy end nonunion. Until the bony fusion was confirmed by the postoperative imaging examination, the plaster was demolished and the patients were instructed to begin partial rehabilitation training. In terms of the follow-up, patients in two groups were all regularly followed up within 1, 3, 6, 12, 24, and 36 months after the surgery. During follow-ups, the incision, skin healing, range of ankle joint motion, complications, and the time to obtain bony fusion were also regularly observed and recorded. ## 2.5. Parameter Assessment The demographic data and ATE malformation characteristics between the two groups were recorded. Moreover, the parameters, including operative time, intraoperative blood loss, and range of ankle joint motion, were further observed and recorded. At the last follow-up, the functional outcomes of ATE malformation were evaluated by AOFAS [21] and ICFSG scoring systems [22]. 
In the ICFSG scoring system with a total score of 60, 0 is regarded as normal, 0-5 is regarded as excellent, 6-15 is regarded as good, 16-30 is regarded as fair, and >30 is regarded as poor. ## 2.6. Statistical Analysis Data presented in this current research were statistically analyzed by the SPSS 24.0 software (SPSS, Inc., Chicago, USA) and exhibited asmean±standarddeviation (SD) or count (percentage). The statistical methods of Chi-squared test, Fisher exact test, and Student’s t test were applied in this research. The independent t test was used to assess the continuous variables, and the chi-square test or Fisher exact test was used to assess the categorical variables of different parameters collected between the two groups. P value < 0.05 was represented as statistically significant. ## 3. Results ### 3.1. Demographic Data and ATE Malformation Characteristics Table1 presents the demographic data and ATE malformation characteristics of two groups in this research. However, there was no significant difference between the two groups in age, gender, ATE malformation side, and causes of ATE malformation (P>0.05 for all).Table 1 Comparisons of demographic data and ATE malformation characteristics between two groups. CharacteristicsRoutine group (n=12)3D printing group (n=15)P valueMean age (range), years54.7±12.3 (45-68)56.1±13.2 (43-69)0.579Gender,n (%)0.743Male5 (41.7)7 (46.7)Female7 (58.3)8 (53.3)ATE malformation side,n (%)0.285Left8 (66.7)9 (60.0)Right4 (33.3)6 (40.0)Causes of ATE malformation,n (%)0.641Neglected treatment of CTE12 (100)15 (100)Recurrence of CTE after treatment00Sequelae of polio00Trauma00Note: CTE: congenital talipes equinovarus; ATE: adult talipes equinovarus. ### 3.2. Clinical Data and Functional Outcomes Clinical data and functional outcomes of the two groups are summarized in Table2. In terms of operative time, (96.3±14.2min) in the 3D printing group was significantly less than (122.9±18.3min) in the routine group (P<0.001). In terms of intraoperative blood loss, the difference between the 3D printing group (98.6±18.7ml) and the routine group (126.5±23.2ml) was statistically significant (P<0.001). However, there was no significant difference in follow-up time, time to obtain bony fusion, range of ankle joint motion, and AOFAS score at last follow-up (P all > 0.05). As for the ICFSG score at last follow-up, 5 patients were evaluated as excellent, 9 patients were good, and 1 patient was fair in the 3D printing group. In the routine group, 3 patients were evaluated as excellent, 6 patients were good, and 3 patients were fair. The rate of excellent and good outcomes of the 3D printing group was 93.3%, which was higher than that of the routine group (75%, P=0.019). In addition, types and number of operations performed in each group were exhibited in supplementary table (available here).Table 2 Comparison of clinical data and functional outcomes between two groups of ATE malformation. 
Routine group (n=12)3D printing group (n=15)P valueOperative time, min122.9±18.396.3±14.2<0.001Intraoperative blood loss, ml126.5±23.298.6±18.7<0.001Follow-up time, month26.1±7.625.3±6.90.352Time to obtain bony fusion, week13.3±3.112.6±2.70.243Range of ankle joint motion at last follow-up °Dorsal expansion23.6±3.424.2±3.80.371Plantarflexion26.8±3.727.4±2.90.253Inversion24.3±3.125.5±3.50.162Eversion27.3±3.428.1±3.20.527AOFAS score at last follow-up, point77.8±9.178.5±8.50.136ICFSG score at last follow-up,n (%)Excellent3 (25.0)5 (33.3)0.257Good6 (50.0)9 (60.0)0.632Fair3 (25.0)1 (6.7)0.876aPoor00—Rate of excellent and good outcomes, %75.093.30.019aP value for continuity-corrected chi-squared test. Note: ICFSG: International Congenital Clubfoot Study Group; AOFAS: American Orthopedic Foot and Ankle Society; ATE: adult talipes equinovarus. ### 3.3. Complications As shown in Table3, the total rate of complications of the 3D printing group and routine group was 13.3% (2/15) and 16.7% (2/12), respectively, and there was no significant difference (P=0.291).Table 3 Comparison of complications between two groups of ATE malformation. ComplicationsRoutine group (n=12)3D printing group (n=15)P valueSuperficial infection01 (6.7)1.000aDeep infection1 (8.3)01.000aSkin necrosis00—Nerve injury00—Vascular injury00—Osteotomy end nonunion00—Anklebone stiffness1 (8.3)1(6.7)1.000aTotal2 (16.7)2 (13.3)0.291Values are expressed asn (%); aP value for Fisher’s exact test. Note: ATE: adult talipes equinovarus. ### 3.4. Typical Case Female, 57 years old, who has suffered from congenital talipes equinovarus (CTE) in right side for 50 years. The patient has not paid enough attention to it and walked on the back of foot for 20 years. Later, the patient was admitted to our hospital for more than half a year due to the right foot pain and limited activity (Figure3). Physical examination revealed that the right ankle joint was stiff, the subtalar joint had no range of motion, and the knee tendon reflex was hyperactive. After the treatment of osteotomy assisted by PSI osteotomy guide, further tibiotalocalcaneal arthrodesis was then performed, and the postoperative imaging examination indicated that the osteotomy end was well aligned and the fixation effect was reliable. At the follow-up of 12 weeks, the osteotomy end obtained the bony fusion (Figure 4). At the last follow-up, the AOFAS score was evaluated as 81 points, and the ICFSG score was 12 points, which was evaluated as good level.Figure 3 Female, 57 years old, who has suffered from CTE in right side for 50 years. (a) The patient has not paid enough attention to the CTE and walked on the back of foot for 20 years. (b) Preoperative appearance, and the physical examination revealed that the right ankle joint was stiff, the subtalar joint had no range of motion, and the knee tendon reflex was hyperactive. (c) and (d) Preoperative X-ray and 3D reconstruction CT scanning of right ankle joint indicated the severe ATE malformation of the patient. (a)(b)(c)(d)Figure 4 Postoperative imaging examinations and general appearance of right ankle joint. (a) and (b) Postoperative X-rays indicated that the osteotomy end was well aligned and the fixation effect was reliable. (c) At 12 weeks postoperatively, the osteotomy end obtained the bony fusion confirmed by CT scanning. (d) The general appearance of the right ankle joint at the last follow-up. (a)(b)(c)(d) ## 3.1. 
Demographic Data and ATE Malformation Characteristics Table1 presents the demographic data and ATE malformation characteristics of two groups in this research. However, there was no significant difference between the two groups in age, gender, ATE malformation side, and causes of ATE malformation (P>0.05 for all).Table 1 Comparisons of demographic data and ATE malformation characteristics between two groups. CharacteristicsRoutine group (n=12)3D printing group (n=15)P valueMean age (range), years54.7±12.3 (45-68)56.1±13.2 (43-69)0.579Gender,n (%)0.743Male5 (41.7)7 (46.7)Female7 (58.3)8 (53.3)ATE malformation side,n (%)0.285Left8 (66.7)9 (60.0)Right4 (33.3)6 (40.0)Causes of ATE malformation,n (%)0.641Neglected treatment of CTE12 (100)15 (100)Recurrence of CTE after treatment00Sequelae of polio00Trauma00Note: CTE: congenital talipes equinovarus; ATE: adult talipes equinovarus. ## 3.2. Clinical Data and Functional Outcomes Clinical data and functional outcomes of the two groups are summarized in Table2. In terms of operative time, (96.3±14.2min) in the 3D printing group was significantly less than (122.9±18.3min) in the routine group (P<0.001). In terms of intraoperative blood loss, the difference between the 3D printing group (98.6±18.7ml) and the routine group (126.5±23.2ml) was statistically significant (P<0.001). However, there was no significant difference in follow-up time, time to obtain bony fusion, range of ankle joint motion, and AOFAS score at last follow-up (P all > 0.05). As for the ICFSG score at last follow-up, 5 patients were evaluated as excellent, 9 patients were good, and 1 patient was fair in the 3D printing group. In the routine group, 3 patients were evaluated as excellent, 6 patients were good, and 3 patients were fair. The rate of excellent and good outcomes of the 3D printing group was 93.3%, which was higher than that of the routine group (75%, P=0.019). In addition, types and number of operations performed in each group were exhibited in supplementary table (available here).Table 2 Comparison of clinical data and functional outcomes between two groups of ATE malformation. Routine group (n=12)3D printing group (n=15)P valueOperative time, min122.9±18.396.3±14.2<0.001Intraoperative blood loss, ml126.5±23.298.6±18.7<0.001Follow-up time, month26.1±7.625.3±6.90.352Time to obtain bony fusion, week13.3±3.112.6±2.70.243Range of ankle joint motion at last follow-up °Dorsal expansion23.6±3.424.2±3.80.371Plantarflexion26.8±3.727.4±2.90.253Inversion24.3±3.125.5±3.50.162Eversion27.3±3.428.1±3.20.527AOFAS score at last follow-up, point77.8±9.178.5±8.50.136ICFSG score at last follow-up,n (%)Excellent3 (25.0)5 (33.3)0.257Good6 (50.0)9 (60.0)0.632Fair3 (25.0)1 (6.7)0.876aPoor00—Rate of excellent and good outcomes, %75.093.30.019aP value for continuity-corrected chi-squared test. Note: ICFSG: International Congenital Clubfoot Study Group; AOFAS: American Orthopedic Foot and Ankle Society; ATE: adult talipes equinovarus. ## 3.3. Complications As shown in Table3, the total rate of complications of the 3D printing group and routine group was 13.3% (2/15) and 16.7% (2/12), respectively, and there was no significant difference (P=0.291).Table 3 Comparison of complications between two groups of ATE malformation. 
ComplicationsRoutine group (n=12)3D printing group (n=15)P valueSuperficial infection01 (6.7)1.000aDeep infection1 (8.3)01.000aSkin necrosis00—Nerve injury00—Vascular injury00—Osteotomy end nonunion00—Anklebone stiffness1 (8.3)1(6.7)1.000aTotal2 (16.7)2 (13.3)0.291Values are expressed asn (%); aP value for Fisher’s exact test. Note: ATE: adult talipes equinovarus. ## 3.4. Typical Case Female, 57 years old, who has suffered from congenital talipes equinovarus (CTE) in right side for 50 years. The patient has not paid enough attention to it and walked on the back of foot for 20 years. Later, the patient was admitted to our hospital for more than half a year due to the right foot pain and limited activity (Figure3). Physical examination revealed that the right ankle joint was stiff, the subtalar joint had no range of motion, and the knee tendon reflex was hyperactive. After the treatment of osteotomy assisted by PSI osteotomy guide, further tibiotalocalcaneal arthrodesis was then performed, and the postoperative imaging examination indicated that the osteotomy end was well aligned and the fixation effect was reliable. At the follow-up of 12 weeks, the osteotomy end obtained the bony fusion (Figure 4). At the last follow-up, the AOFAS score was evaluated as 81 points, and the ICFSG score was 12 points, which was evaluated as good level.Figure 3 Female, 57 years old, who has suffered from CTE in right side for 50 years. (a) The patient has not paid enough attention to the CTE and walked on the back of foot for 20 years. (b) Preoperative appearance, and the physical examination revealed that the right ankle joint was stiff, the subtalar joint had no range of motion, and the knee tendon reflex was hyperactive. (c) and (d) Preoperative X-ray and 3D reconstruction CT scanning of right ankle joint indicated the severe ATE malformation of the patient. (a)(b)(c)(d)Figure 4 Postoperative imaging examinations and general appearance of right ankle joint. (a) and (b) Postoperative X-rays indicated that the osteotomy end was well aligned and the fixation effect was reliable. (c) At 12 weeks postoperatively, the osteotomy end obtained the bony fusion confirmed by CT scanning. (d) The general appearance of the right ankle joint at the last follow-up. (a)(b)(c)(d) ## 4. Discussion To sum up, this study demonstrated the safety, accuracy, and reliability of 3D printing assisted PSI osteotomy guide for correcting the ATE malformation. Compared with the routine group, the 3D printing group exhibits the superiorities of shorter operative time, less intraoperative blood loss, higher rate of excellent, and good outcomes expressed by the ICFSG score at last follow-up. Moreover, the operation assisted by the PSI osteotomy guide for correcting the ATE malformation is novel and feasible, which might become an effective method to polish up the precise osteotomy of ATE malformation and polish up the clinical efficacy.In this current research, we compared the clinical efficacy and prognosis of routine ATE deformity correction surgeries and operation assisted by PSI osteotomy guide for the treatment of ATE malformation patients. In the 3D printing group, the personalized PSI osteotomy guide fabricated by 3D printing technique can be closely fitted to each joint surface. In all 15 patients, the osteotomy was performed successfully in one time, avoiding the repeated adjustments and tests during operation, which contributes to a more standardized and simpler operation. 
Meanwhile, it also reasonably explained that the parameters of operative time and intraoperative blood loss in the 3D printing group were less than that in the routine group (P<0.001). In addition, the PSI osteotomy guide can be fixed with the preset Kirschner wire guide holes on the binding surfaces, which can effectively prevent the pendulum saw from slipping during the process of operation and producing deviation. Hence, after the processes of rigorous design and standardized operation, the accurate osteotomy of ATE malformation can be realized, and the clinical efficacy was equivalent to the preoperative planning. Meanwhile, the 3D printing group also exhibited a higher rate of excellent and good outcomes than the routine group in ICFSG score at last follow-up (93.3% versus 75.0%, P=0.019).In addition, although the 3D printing assisted PSI osteotomy guide has been diffusely applied in several clinical studies, such as the cubitus varus deformity, developmental dysplasia of the hip (DDH), spinal scoliosis, hallux valgus, and other deformities [20, 23–25], while its application in the correction of ATE malformation has been rarely reported. Previously, Windisch et al. [26] applied 3D printing technique to fabricate a physical model of ATE malformation with a magnification of 4 times for surgeons to accurately analyze all bone and joint deformities and perform preoperative planning, but this application only played the most basic role of 3D printing technique. Moreover, Gozar et al. [16] and Barker et al. [27] applied computer modeling analysis technique to the correction of TE malformation and obtained the satisfactory short-term outcomes, but still lacked a long-term prognostic comparison with the routine group. In this research, there was no significant difference between the two groups in total rate of complications (13.3% vs. 16.7%, P=0.291). This outcome was consistent with the research results of Zhang et al. [28] used the 3D printing technique for the patients with complicated ankle fractures, indicating that the 3D printing technique presents no apparent superiorities in avoiding complications. Furthermore, at the last follow-up, there was no significant difference in the range of ankle joint motion, including the dorsal expansion, plantarflexion, inversion, and eversion, between the two groups (P all > 0.05). This may be attributed to the similar time of obtaining bony fusion and taking the similar programs of rehabilitation training between the two groups, and both groups presented relatively excellent postoperative functional outcomes. In addition, it is worth noting that the mismatch between the ICFSG and AOFAS scores in the functional outcome evaluation is not correlated to difference in range of ankle joint motion, including the dorsal expansion, plantarflexion, inversion, and eversion. This outcome might be relevant to the difference in the time of demolishing the plaster and beginning the rehabilitation training.Ultimately, it is essential to further point out and recognize the drawbacks of this research. For one thing, the mean follow-up time of this research was about 2 years. At the last follow-up, the prognosis of partial patients with ATE malformation (3 patients in routine group and 1 patient in 3D printing group) was evaluated as the fair, and it is essential to conduct further follow-up observation for a longer time in future. For another thing, this current research is the retrospective study with relatively small sample size. 
To further verify the clinical efficacy and prognosis of the 3D printing-assisted PSI osteotomy guide for the correction of ATE malformation, multicenter prospective studies with larger sample sizes are still needed. Moreover, a specific comparison of pre- and postoperative radiological lower-limb alignment parameters between the two study groups would also be of great interest and should be addressed in future research.

## 5. Conclusions

Clinical application of the 3D printing-assisted PSI osteotomy guide for correcting ATE malformation is safe, precise, and dependable. The 3D printing group showed a shorter operative time, less intraoperative blood loss, and a higher rate of excellent and good outcomes on the ICFSG score at the last follow-up than the routine group. The operation assisted by the 3D printing PSI osteotomy guide for correcting ATE malformation is novel and feasible and might be an effective method to improve the precision of the osteotomy and enhance the clinical efficacy.

---
*Source: 1004849-2021-12-02.xml*
2021
# Myopia: Mechanisms and Strategies to Slow Down Its Progression

**Authors:** Andrea Russo; Alessandro Boldini; Davide Romano; Giuseppina Mazza; Stefano Bignotti; Francesco Morescalchi; Francesco Semeraro
**Journal:** Journal of Ophthalmology (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1004977

---

## Abstract

This topical review aimed to update and clarify the behavioral, pharmacological, surgical, and optical strategies that are currently available to prevent and reduce myopia progression. Myopia is the commonest ocular abnormality; renewed interest is driven by its high and increasing prevalence, especially, but not only, in the Asian population, and by its progressive nature in children. The growing global prevalence seems to be associated with both genetic and environmental factors, such as spending more time indoors and using digital devices, particularly during the coronavirus disease 2019 pandemic. Various options have been assessed to prevent or reduce myopia progression in children. In this review, we assess the effects of several types of measures, including spending more time outdoors, optical interventions such as bifocal/progressive spectacle lenses, soft bifocal/multifocal/extended depth of focus/orthokeratology contact lenses, refractive surgery, and pharmacological treatments. All of these options for controlling myopia progression in children have various degrees of efficacy: atropine has been associated with the highest efficacy; orthokeratology and peripheral defocus contact and spectacle lenses with moderate efficacy; and bifocal or progressive addition spectacles and increased outdoor activities with lower efficacy.

---

## Body

## 1. Introduction

Myopia is the most widespread refractive error and is principally due to an increased axial length of the eyeball. In myopia, the image of a distant object is formed anterior to the retinal plane, leading to blurred vision, which requires correction for clear vision. Uncorrected myopia impairs patients' quality of life, affects school performance, and limits employability. Even corrected myopia may be responsible for serious complications such as staphyloma (outpouching of the back wall of the eye), glaucoma, cataract, choroidal neovascularization, retinal tears, schisis, and detachment; together, these complications have substantial economic implications for public health. Hence, many researchers and ophthalmologists have focused on myopia development and treatment.

A global increase in myopia cases has garnered renewed interest. In 2000, myopia affected 1.4 billion people worldwide, and by 2050 the number is estimated to reach 4.8 billion [1]. Myopia cases are increasing in both Asian and Western countries. A higher prevalence has been reported among schoolchildren in East Asia, Singapore, China, Taiwan, and South Korea [2, 3]. A recent meta-analysis including 61,946 adults showed that in Europe, myopia increased from 17.8% (95% confidence interval (CI): 17.6–18.1) in people born between 1910 and 1939 to 23.5% (95% CI: 23.2–23.7) in those born between 1940 and 1979 (P = 0.03) [4]. A significant difference in myopia incidence based on sex was found in most studies; however, the Correction of Myopia Evaluation Trial (COMET) study suggested that males showed slower progression [5]. Further, among females, myopia progression differed around menarche. A study by Xu et al.
in China reported a 13% higher risk of myopia in premenarche girls after adjusting for exact age and behavioral risk factors [6].

Many etiological studies have assessed the role of both genetic and environmental factors in the development of myopia. Studies have reported a greater risk of myopia development in children with myopic parents. The Northern Ireland Childhood Errors of Refraction (NICER) study showed that the risk of myopia was 2.91 and 7.79 times higher in children with one and two myopic parents, respectively [7]. Another study reported a 7.6%, 14.9%, and 43.6% myopia risk in children with none, one, and two myopic parents, respectively [8].

Myopia can be classified as syndromic and nonsyndromic. A known genetic factor has been implicated in the genesis and development of syndromic myopia (such as Marfan syndrome or congenital stationary night blindness). Nonsyndromic myopia has no clear association with a single genetic mutation; however, polymorphisms in different genes are associated with nonsyndromic myopia. A recent genome-wide association study by the CREAM consortium found 24 loci associated with myopia, which increase the myopia risk up to tenfold.

Many studies have suggested that the environment plays a pivotal role in the development of nonsyndromic myopia; associations have been found with time spent in outdoor activities or near work, use of LED lamps for homework, population density, socioeconomic status, and use of video terminals. To control the deterioration of visual acuity, studies in recent decades have tested several methods, such as the use of anticholinergic drugs, correction of the refractive error, multifocal spectacles or contact lenses, orthokeratology, and refractive surgery.

The growing interest in understanding myopia is justified by the possibility of stopping or slowing the disease through concrete mitigation strategies or new therapies. This review provides a critical analysis of the association between myopia development and environmental factors and analyzes the available strategies to reduce myopia evolution in children.

## 2. Outdoor Time and Near Work

Many studies have focused on the relationship between myopia development and progression and environmental factors such as near work, outdoor activities, sports practice, and use of technological devices. Most of these studies have suggested an inverse relationship between myopia and outdoor activities/sports and a direct relationship between myopia and near work. Eppenberg and Sturm, aiming to assess the protective role of outdoor light exposure in the incidence and prevalence of myopia, recently summarized data from two cross-sectional studies, seven prospective cohort studies, and three intervention studies published between October 2008 and January 2019. The articles represent data from 32,381 participants between 6 and 18 years of age. Five of the nine cross-sectional studies found an inverse association [9]. Further, studies by Dirani and Sun revealed a significantly lower incidence of myopia in patients who reported longer outdoor time (reported odds ratios (OR) of 0.90 (95% CI: 0.84–0.96, P = 0.004) and 0.74 (95% CI: 0.53–0.92, P < 0.001), respectively). Dirani et al. also reported that the mean time spent playing outdoor sports was longer among subjects without myopia (0.85 h/day, SD 0.80) than among those with myopia (0.72 h/day, SD 0.82) (P = 0.007). Outdoor activities were associated with a lower prevalence of myopia; conversely, indoor sports were not.
The data support the role of overall outdoor activity, as compared with sports alone, in reducing the incidence of myopia [10, 11].

Jones-Jordan et al. examined 514 children and found that nonmyopic children engaged in a significantly greater amount of sports and outdoor activities than myopic children (11.65 (SD 6.97) vs. 7.98 (SD 6.54) hours per week; P < 0.001) [12]. Conversely, a cohort study by Jacobsen et al. suggested that physical activity per se is inversely associated with a refractive change toward myopia (P = 0.015) [13]. A systematic review of physical activity, comprising data from 263 studies, identified a solid association between more physical activity and lower myopia, but no evidence of physical activity as an independent risk factor for myopia was obtained. Hence, based on current evidence, outdoor time remains the most important factor [14].

Chen et al. reported a later onset of myopia in people who spent more time outside. Guggenheim and Saxena confirmed these data (OR = 0.90 (95% CI: 0.45–0.96) and 0.54 (95% CI: 0.37–0.79; P = 0.002)) [15, 16]. Wu et al. showed a slower myopic shift in children who were encouraged to spend more time outside (OR 0.46 (95% CI: 0.28–0.77); P = 0.003) [17]. However, studies by Jones-Jordan et al., Ma et al., and Hsu et al. [12, 18, 19] reported no association between myopia and time spent outdoors. A recent school-based, prospective, cluster-randomized trial assessed the relationship between time spent outdoors and myopia onset/progression. A total of 6,295 children were randomized into a control group (n = 2,037), test group I (n = 2,329, 40 minutes of outdoor time/day), or test group II (n = 1,929, 80 minutes of outdoor time/day). The study failed to demonstrate any significant association between time spent outdoors and myopia development or progression [20]. Jones-Jordan et al., like He et al., did not observe any retardation of myopia development in children who spent more time outdoors [12, 20].

Many studies have identified an inverse association between myopia development and progression and outdoor exposure; however, contrasting evidence has also emerged. This may be due to biases. First, the data on near work, outdoor activities, and related parameters in almost all published studies were obtained from questionnaires and lacked uniformity. Moreover, the results of the questionnaires were influenced by geography, culture, cognitive ability, and memory bias. The refraction data might also have been influenced by measurement bias: complete cycloplegic refraction was obtained in only some of the studies, and different drugs were used (tropicamide vs. cyclopentolate); therefore, these refraction results could not be considered reliable for statistical analyses.

Nevertheless, existing evidence supports this association. The mechanism through which outdoor exposure may lower the incidence of myopia is explained by different hypotheses. Sunlight peaks at a wavelength of 550 nm, which roughly corresponds to the peak sensitivity of the human eye, whereas indoor light peaks at a longer wavelength. Thus, most of the light received by the eye indoors is focused behind the retinal plane and might create a situation similar to that of a negative lens; this phenomenon has been shown to stimulate global eye growth and myopia [21]. Another hypothesis focuses on the importance of dopamine release stimulated by sunlight.
Animal models (one-day-old white Australorp cockerels) were used to verify the effect of a translucent diffuser placed over the eye, with the birds kept on a 12:12 light/dark cycle. These birds developed excessive axial length and myopia; however, if the diffuser was removed for 3 hours during the light period, the axial length did not increase. In birds wearing a diffuser, intravitreal injection of dopamine blocked axial growth, whereas dopamine antagonists exerted the opposite effects [22, 23].

Myopia development and progression have also been associated with higher educational levels and near work. The latter is considered a group of activities performed at short working distances, such as reading, studying, computer use, playing videogames, or watching TV. Schoolchildren spend a lot of time in near-vision activities, and this could be regarded as a risk factor for myopia development. To study the effect of near work, a meta-analysis was conducted on the available literature published between April 1, 1989, and May 1, 2014, with a total of 10,384 participants aged 6–18 years. The results showed a pooled OR of 1.14 (95% CI: 1.08–1.20), suggesting that near activities are associated with myopia. A subgroup analysis based on the definition of near work found that children who performed more near work were more likely to be myopic (OR = 1.85; 95% CI: 1.3–2.62; I² = 85%) and that the odds of myopia increased by 2% (OR = 1.02; 95% CI: 1.01–1.03; I² = 42.8%) for every diopter-hour increase of near work per week [24].

The Generation R Study conducted in Rotterdam tested the relationship between computer use and myopia development. This study comprised a total of 5,074 children born in Rotterdam between 2002 and 2006. Data on computer use and outdoor exposure were acquired at the ages of three, six, and nine years using a questionnaire; reading time and reading distance were assessed at nine years of age. Statistical analysis showed a significant association between computer use at the age of three years and myopia at six and nine years (OR = 1.005, 95% CI: 1.002–1.010; OR = 1.009, 95% CI: 1.002–1.0017). The cumulative time of computer use in infancy was significantly correlated with myopia at nine years (OR = 1.005, 95% CI: 1.001–1.009). In the same study, reading time at the age of nine years was significantly associated with myopia at nine years and with axial elongation. The study found that the effect of near-vision activities decreased with longer outdoor exposure (Figure 1) [25].

Figure 1: Odds ratios for near-activity risk and mean outdoor time on myopia at the age of 9 years. Near-activity risk tertiles represent the combined risk of computer use, reading, and reading distance. Outdoor time was classified into <7, 7–14, and >14 hours per week. The subset with low near-work risk and >14 hours per week of outdoor exposure was the reference subset (adapted from the study by Enthoven et al.).

A prospective study by Oner et al. found that only reading and writing had a negative association with annual myopic progression (r = −0.362, P = 0.010), while computer use, watching television, and outdoor activities had no correlation with the annual myopia evolution rate. Different near-vision activities could affect myopia risk differently depending on light levels, word sizes, and working distances [26]. According to Pärssinen and Lyyra, a correlation was found between time spent on reading or near work and myopia [27]. Conversely, the studies of Tan et al.
reported no statistically significant associations between myopia progression and near activities in children [28, 29]. The contrasting evidence could be due to differences in the age of the participants in the groups analyzed. While accommodation and convergence occurring after prolonged near work have been proposed as mechanisms for the development of myopia, a strong association between accommodation and myopia has not been found [27]. Forced hyperopic defocus, however, has been shown to be a significant stimulus for eye growth in experimental studies [30].

The coronavirus disease 2019 (COVID-19) pandemic, a problem affecting people worldwide since the beginning of 2020, has changed people's habits and led to an increase in the use of digital devices owing to lockdown measures. To establish whether increased digital device use raises the incidence of myopia, Wong et al. reviewed studies published on the association between PC, tablet, or smartphone use and myopia. They found that the current evidence is inconclusive, but most of it suggests a higher risk of myopia in people spending more time on digital screens. They argued that the COVID-19 pandemic period could potentially aggravate myopia by increasing exposure to digital devices and that the usage of digital devices might have a long-term negative impact [31]. To limit the consequences, the American Ministry of Education recommends spending less than 20 minutes per day on electronic homework and prohibiting phones and tablets in classrooms [32].

Interestingly, exposure to red light (650 nm wavelength) at home with a desktop light-therapy device has recently been shown to be effective in myopia control. At the 12-month follow-up visit, the group given red-light therapy had a 70% reduction in myopia progression, and 32% of patients in this group also had an axial length shortening of ≥0.05 mm [33]. Further double-masked, placebo-controlled studies are needed to understand the long-term efficacy and safety, possible rebound effects, and optimal treatment strategies, beyond the potential underlying mechanisms.

## 3. Pharmacological Strategies

### 3.1. Atropine

Atropine, a nonselective muscarinic antagonist drug, is known for its potential myopia-inhibiting capacity. Initially, since accommodation was considered an important factor in myopia progression, atropine was used because of its cycloplegic effect. However, animal studies have revealed that the effect of atropine might be mediated by nonaccommodative mechanisms [34, 35]. Atropine has affinity for all five subtypes of muscarinic acetylcholine receptors (M1–M5), which are distributed in different ocular tissues and scleral fibroblasts [36]. Several studies have shown that mAChR antagonists inhibit scleral proliferation in mice and humans and subsequently inhibit axial elongation of the eye [37]. Nonetheless, the exact mechanism by which atropine exerts its suppressive action on myopia has not been established. Some studies have demonstrated an increase in retinal dopamine after instillation of atropine and postulated that dopamine may stimulate the release of nitric oxide as part of the signaling chain [38]. Recently, Barathi et al. suggested that GABAergic-mediated signaling is involved, while Carr et al.
described a possible implication of α2 adrenergic receptors [39, 40]. Prepas proposed that pupil dilatation induced by antimuscarinic drugs leads to increased UV exposure, which controls scleral growth through collagen cross-linking [41]. However, this hypothesis is contradicted by the lack of myopia control after instillation of tropicamide [42].

Several randomized clinical trials have shown that 1% and 0.5% atropine are effective in slowing myopia progression [42–45]. The Atropine in the Treatment of Myopia (ATOM) study was a randomized, double-masked, placebo-controlled trial conducted in Singapore with over 400 children aged 6 to 12 years. For two years, 1% atropine eye drops were instilled, followed by a one-year suspension. The results after two years demonstrated a 77% reduction in progression of myopia as compared with the control group (−0.28 ± 0.92 diopters (D) compared with −1.20 ± 0.69 D in the placebo group, P < 0.001), but no change in the axial length compared with baseline (−0.02 ± 0.35 mm) [43]. During the washout phase, the suspension of treatment caused a rebound effect in both refraction and axial length in the eyes treated with atropine, but the final progression was lower in the atropine-treated group than in the control group [46]. Moreover, 1% atropine caused side effects such as photophobia, blurred vision, and reduced accommodation. The safety profile of a high dosage of atropine is a major concern in clinical practice, and reduced accommodation may require children to wear bifocal or progressive lenses to read. Recent clinical trials have confirmed that atropine is effective in controlling myopic progression, with a dose-related effect.

In a two-year study conducted by Shih et al., 200 Taiwanese children were treated with 0.5%, 0.25%, or 0.1% atropine. After two years, myopia progression was reduced by 61%, 49%, and 42%, respectively, as compared with children treated with tropicamide in the control group (−0.04 ± 0.63 D/Y, 0.45 ± 0.55 D/Y, and 0.47 ± 0.91 D/Y in the 0.5%, 0.25%, and 0.1% atropine groups, respectively, versus −1.06 ± 0.61 D in the control group) [42].

The ATOM 2 study evaluated the efficacy and side effects of lower doses of atropine on myopic progression (0.5%, 0.1%, and 0.01% atropine instilled for 24 months followed by a 12-month washout phase). The authors demonstrated a dose-related effect, with higher doses leading to greater inhibition of myopia progression (−0.30 ± 0.60 D, −0.38 ± 0.60 D, and −0.49 ± 0.63 D in the 0.5%, 0.1%, and 0.01% atropine groups, respectively; P = 0.02 between the 0.01% and 0.5% groups, P = 0.05 between the other concentrations) [47]. However, after suspension of treatment, there was a greater rebound effect in the eyes treated with higher concentrations of atropine, whereas only a slight increase was observed in the 0.01% group. After 36 months, myopia progression in the 0.01% group was −0.72 ± 0.72 D, while in the 0.5% and 0.1% groups it was −1.15 ± 0.81 D and −1.04 ± 0.83 D, respectively (P < 0.001) [48]. The authors concluded that the lowest (0.01%) concentration seems to be the safest choice, causing fewer adverse effects than higher concentrations while retaining similar efficacy [47].

In a recent study of low-concentration atropine for myopia control (LAMP), Yam et al. compared 0.05%, 0.025%, and 0.01% atropine eye drops and described a dose-related effect on myopia progression.
Atropine (0.05%) was the most effective in limiting both spherical equivalent progression and axial elongation [49]. After two years, the efficacy of 0.05% atropine was double that of 0.01% atropine [50]. Regarding combined treatment with atropine and multifocal or bifocal lenses, studies found a lower rate of myopic progression with both 1% and 0.5% atropine plus multifocal or bifocal lenses compared with placebo plus single-vision lenses [42, 50]. The most recent report from the same study (LAMP, Phase 3), covering the third year of use, confirmed that atropine treatment achieved a better effect across all concentrations compared with the washout regimen. In particular, 0.05% atropine remained the optimal concentration over 3 years in the study population. The differences in rebound effects were clinically small across all three studied atropine concentrations. Stopping treatment at an older age and at a lower concentration was associated with a smaller rebound: the older the subject's age, the smaller the rebound effect. This might be explained by the slower inherent physiological progression in older children, as previously demonstrated by the results of the LAMP study Phases 1 and 2 [51].

In conclusion, studies have shown that atropine eye drops, alone or in combination with other treatments, are useful in reducing myopic progression, although mild side effects were described, including pupil dilation, photophobia, and near blur. To date, atropine treatment has been adopted in Asian countries such as Taiwan and Singapore.

### 3.2. Pirenzepine

Several studies have demonstrated that pirenzepine, a selective M1 muscarinic receptor antagonist, is effective in controlling the progression of myopia in children [52–54]. A study conducted on myopic Asian children treated with a pirenzepine 2% gel twice daily found a 44% reduction in myopic progression compared with the control group. A parallel-group, placebo-controlled, double-masked, randomized trial conducted by Siatkowski et al. found a 41% reduction in myopic progression in children treated with a 2% pirenzepine gel compared with placebo (0.58 D vs. 0.99 D after two years), but the difference in axial length between the study groups was not statistically significant. This United States-based clinical trial found that pirenzepine was well tolerated, with mild to moderate adverse effects [53]. However, pirenzepine is not currently available as a treatment option.

### 3.3. 7-Methylxanthine

7-Methylxanthine, a nonselective adenosine antagonist, has been adopted as a treatment option only in Denmark. Oral administration of 7-methylxanthine increases scleral collagen fibril diameter and amino acid content and thickens the sclera in rabbits [55]. A trial evaluated the effect of 400 mg of 7-methylxanthine once a day in children compared with a placebo group. The results revealed a modest effect on myopia progression in children with moderate axial growth rates at baseline (22%), but no effect in individuals with highly progressive myopia. The treatment seemed safe, with no ocular or systemic side effects [56]. Currently, 7-methylxanthine is a nonregistered drug in Denmark. Evaluations conducted in animals [57, 58] and humans have shown potential efficacy; however, further studies are needed.
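The percentage reductions quoted for the pharmacological trials in this section (for instance, the 77% figure from the ATOM study) follow directly from the mean progression reported in the treated and control arms. A minimal sketch of that arithmetic in Python, using the ATOM values quoted above as the illustration (the function name is illustrative, not from the cited studies):

```python
# Minimal sketch (illustrative only): relative reduction in myopia progression,
# computed from mean progression in a treated group vs. a control group.
def percent_reduction(treated_progression_d: float, control_progression_d: float) -> float:
    """Relative reduction (%) = (1 - treated / control) * 100, both in diopters of progression."""
    return (1.0 - treated_progression_d / control_progression_d) * 100.0

# ATOM values quoted above: 0.28 D progression (1% atropine) vs. 1.20 D (placebo) over two years.
print(f"{percent_reduction(0.28, 1.20):.0f}% reduction")  # ~77%, as reported for ATOM
```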
## 4. Surgical Strategies

Refractive surgery was first used in a pediatric population in the 1990s [59], with the aim of improving vision in a selected group of visually impaired children [60]. In the adult population, refractive surgery is used to achieve the best uncorrected vision possible.

Amblyopia is a reduction in visual acuity, without an organic cause, due to visual deprivation or abnormal interaction between the two eyes and the brain. In a population-based cross-sectional study [61], amblyopia accounted for 33% of monocular visual impairment in children. The most frequent cause of amblyopia is anisometropia. Myopic anisometropia of more than 2 D results in an increased incidence of amblyopia and reduced stereopsis, and anisometropia greater than 6 D is amblyogenic in all children [62]. Moreover, a higher degree of anisometropia hampers amblyopia therapy and leads to a worse visual outcome [63].

Glasses, contact lenses, and patching are the most common options for treating pediatric high refractive errors associated with amblyopia. However, children may refuse conventional therapy for different reasons. If a significant refractive difference exists between the two eyes, the use of spectacles may result in aniseikonia and interfere with good stereopsis. Correction with glasses, especially for high refractive errors, may lead to a narrower field of view, prismatically induced aberrations, and social stigma. Contact lenses offer a better quality of vision and a larger field of view but are associated with poor compliance due to intolerance and difficulty of insertion and removal [64]. In a study by Paysse, factors associated with failure of traditional therapy were age >6 years, poor compliance, inadequate parental understanding, initial visual acuity of 20/200 or lower, and astigmatism >1.5 D [65]. Children with craniofacial and/or ear abnormalities, hearing aids, or neurobehavioral disorders may be averse to wearing spectacles; these children can develop very poor vision in the amblyopic eye because conventional treatment is more challenging [66]. Moreover, some studies have shown that only about two-thirds of cases of anisometropic amblyopia achieve good visual outcomes when treated with conventional methods [65, 67, 68]. If myopic anisometropia is more than 6 D, the chance of achieving a best-corrected visual acuity of 20/40 or better is only 25% [63]. The role of refractive surgery in the treatment of anisometropic amblyopia in children is still unclear.
Options include laser vision correction, such as photorefractive keratectomy (PRK), laser-assisted subepithelial keratectomy (LASEK), and laser-assisted in situ keratomileusis (LASIK), or phakic intraocular lens implantation (anterior or posterior chamber). PRK, LASEK, and LASIK have yielded successful refractive and visual acuity outcomes in children with high myopic anisometropia and amblyopia who are noncompliant with traditional treatment [59, 69–82].

Nucci and Drack evaluated the safety and efficacy of refractive surgery to supplement optical correction in children with unilateral high myopia. A total of 14 eyes of 14 children aged 9–14 years received surgery (11 PRK and three LASIK). The preoperative best-corrected visual acuity was 20/147, while that at 20 months was 20/121. The average preoperative refraction (spherical equivalent) was −7.96 ± 2.16 D, and the average refraction at 20 months was −0.67 ± 0.68 D. Only minimal corneal haze was reported [73]. Autarata and Rehurek evaluated the results of PRK for high myopic anisometropia and contact lens intolerance in 21 children aged 7–15 years. The mean preoperative and postoperative refractions were −8.93 ± 1.39 D and −1.66 ± 0.68 D, respectively (P < 0.05). Nine eyes gained one line of best-corrected visual acuity, and five eyes gained two lines. No significant complications were observed. The authors concluded that PRK is safe and effective over a four-year follow-up period [83]. Phillips et al. treated myopic anisometropia with LASIK in five patients between 8 and 19 years of age and evaluated the results over 18 months. The mean preoperative refractive error was −9.05 D, the mean postoperative refractive error was −1.17 D, and two of five patients gained one line of vision [84]. In an analysis of 17 case series published by Daoud et al., 298 patients were treated with PRK, LASEK, or LASIK for severe myopic anisometropia. Follow-up ranged from 12 to 36 months. The patients' preoperative refraction was between −14.9 and −6 D, and age varied between 0.8 and 19 years. The authors found an improvement in best-corrected visual acuity from 20/30–20/400 preoperatively to 20/26–20/126 postoperatively. Improved binocular vision after surgery was found in 64% of patients in six of the largest studies analyzed [64]. Interestingly, several studies report an increased level of stereopsis after excimer refractive surgery [80, 81, 85].

Paysse evaluated the long-term visual acuity and refractive outcome in 11 children who underwent PRK for the treatment of anisometropic amblyopia. She reported a long-term reduction in the refractive error with increased visual acuity, and stereoacuity improved in 55% of testable children [80]. Astle et al. found an improvement in best-corrected visual acuity in 63.6% of children treated with LASEK; positive stereopsis was present in 39.4% of patients preoperatively and 87.9% postoperatively [81]. In a retrospective study, Magli et al. evaluated the use of PRK in the treatment of 18 myopic anisometropic children. Best-corrected visual acuity improved after surgery (from 20/70 to 20/50), and the level of stereopsis increased in two of the 18 patients [85]. Excimer laser surgery has also been used successfully to treat high bilateral myopic amblyopia. In a case study published by Astle et al., 11 patients aged 1–17 years were treated with LASEK. The average spherical equivalent was −8 D preoperatively and −1.2 D postoperatively.
The average best-corrected visual acuity was 20/80 preoperatively and 20/50 postoperatively [76]. Tychsen reported that nine patients between 3 and 16 years of age were treated with LASEK; after surgery, uncorrected acuity improved in all eyes, with improvement in behavior and environmental visual interaction [86].

Corneal haze is the predominant complication of ablative refractive surgery. In a meta-analysis [87], LASIK patients had a lower rate of postsurgical haze than PRK patients (5.3% vs. 8.5%, respectively). Postsurgical haze is more common in children than in adults, given that children have a stronger inflammatory response. Long-term corticosteroids and mitomycin C have been recommended to reduce the incidence of postsurgical haze [88].

Patient cooperation may be challenging in children. During laser or intraocular refractive surgery in the adult population, the patient is asked to fixate on the operating light or laser target. Cooperation varies in children, who may not be able to fixate, and general anesthesia might be required; adolescents, however, are often able to fixate [84]. Some studies have investigated the use of different anesthesia protocols during excimer laser surgery [89, 90]. However, according to Brown [91], given that the patient's line of sight is determined by the desire to actively fixate on an object, an unconscious patient is not able to direct the fovea toward a target. Corneal refractive surgery should be centered on the intersection between the patient's line of sight and the cornea, while the laser firing axis is centered on the surgeon's line of sight. Tilting the laser firing axis relative to the patient's line of sight could result in optically asymmetric ablation. The best timing for refractive surgery is debatable, but studies suggest that the best results are obtained when it is performed early [87]. However, eye modifications such as changes in axial growth and lens thickness can affect the long-term outcomes of early surgery. In laser refractive surgery, possible corneal biomechanical changes over time must also be considered [92]. In young children, corneal strength has not been established, but there is evidence that corneal strength increases with age [93].

Another concern is myopic regression. Most of it occurs during the first year after surgery, with less regression over the following 2–3 years [80]. Daoud et al. observed a myopic regression of 1 D/year on average in children treated for myopic anisometropic amblyopia [64]. For these reasons, authors suggest overcorrecting and targeting slight hyperopia in myopic corrections [92].

Another surgical option for children with high refractive errors and amblyopia is phakic intraocular lens implantation. The phakic intraocular lens was first used in the pediatric population in 1999 [94]. There are two types of FDA-approved phakic intraocular lenses: an anterior chamber phakic intraocular lens called Verisyse (Ophtec BV) in the United States, similar to the Artisan phakic intraocular lens in Europe and Asia, and a posterior chamber phakic intraocular lens called the Visian Implantable Collamer Lens (ICL) (Staar Surgical Co).
The Visian ICL is implanted between the iris and the natural lens, with the haptics located in the ciliary sulcus. Indications for ICL implantation in the pediatric population are high anisometropia, myopia, or hyperopia noncompliant with conventional treatment; bilateral high ametropia noncompliant with conventional treatment; and high refractive amblyopia associated with neurobehavioral disorders [95, 96]. In recent years, several studies have been published on the use of anterior chamber phakic intraocular lenses for the treatment of refractive errors in children. These studies documented an improvement in uncorrected visual acuity, and the surgery was well tolerated [97–99]. In a study conducted by Pirouzian et al., six pediatric patients with anisometropic myopic amblyopia underwent Verisyse anterior chamber phakic intraocular lens implantation. Patients were aged 5–11 years, and none were compliant with glasses or contact lenses. The results showed an improvement in best-corrected visual acuity from less than 20/400 to a mean of 20/70 postoperatively, an increase in stereopsis, and minimal side effects [97]. One of the most important concerns was potential long-term endothelial cell loss. For this reason, guidelines approve phakic intraocular lenses only when the anterior chamber depth is more than 3.2 mm. In the studies of Pirouzian et al. and Ip et al., the endothelial cell loss rate after 3–5 years of follow-up was between 6.5% and 15.2% [99, 100]. However, as with visual acuity, the endothelial count is difficult to measure in all children, and the real cell loss cannot be accurately assessed in these studies.

Since 2013, different authors have reported their experience with posterior chamber phakic intraocular lenses in children, showing an improvement in corrected and uncorrected visual acuity [101–103]. In a large 2017 case series, Tychsen et al. published the results of Visian phakic intraocular lens implantation in 40 eyes of 23 children with high anisometropia and amblyopia. About 57% of the patients had a neurobehavioral disorder. Best-corrected visual acuity improved from 20/74 preoperatively to 20/33 postoperatively. Uncorrected visual acuity improved 25-fold, which is relevant given that children with neurobehavioral disorders are intolerant of glasses. Moreover, 85% of the children had improved social performance [103]. Complications in the above-mentioned studies were related to lens position, including pupillary block from an insufficiently patent peripheral iridotomy and pigment dispersion from the lens rubbing on the posterior iris [101–103].

There are several advantages of phakic intraocular lenses compared with laser refractive surgery: the procedure is reversible, there is less risk of refractive regression over time, and laser surgery carries a risk of corneal haze. Nevertheless, further studies are needed on the long-term effects of phakic intraocular lenses on endothelial cells, the risk of cataract formation, and angle-closure glaucoma.

Despite evidence of efficacy and short-term safety, many questions about refractive surgery in children have not yet been answered. The major concerns to be explored are the lack of pediatric nomograms, the role of anesthesia, the lack of evidence regarding the effect of eye growth on long-term outcomes, the instability of the refractive error in children, susceptibility to trauma, and the lack of evidence of long-term safety.
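A small technical note on the acuity figures quoted in this section: non-standard Snellen values such as 20/147 or 20/74 typically arise because group means are computed on the logMAR scale and then converted back to Snellen notation. The sketch below illustrates that convention with hypothetical acuities; this is an assumption about how such averages are usually computed, not something stated in the cited studies.

```python
# Minimal sketch (assumption: group-mean acuities like 20/147 come from averaging on the
# logMAR scale and converting the mean back to Snellen notation).
import math

def snellen_to_logmar(denominator: float, numerator: float = 20.0) -> float:
    """logMAR = log10(MAR), with MAR = denominator / numerator for 20/x notation."""
    return math.log10(denominator / numerator)

def logmar_to_snellen_denominator(logmar: float, numerator: float = 20.0) -> float:
    """Convert a logMAR value back to the denominator of 20/x Snellen notation."""
    return numerator * (10 ** logmar)

# Hypothetical example: averaging 20/100 and 20/200 on the logMAR scale.
acuities = [100, 200]
mean_logmar = sum(snellen_to_logmar(d) for d in acuities) / len(acuities)
print(f"mean acuity ≈ 20/{logmar_to_snellen_denominator(mean_logmar):.0f}")  # ≈ 20/141, not 20/150
```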
## 5. Optical Strategies

Several strategies have been attempted to control the progression of myopia optically, including undercorrection and overcorrection. In China, two studies aimed to evaluate the progression of myopia in uncorrected eyes. In the first study, by Hu and Guo [104], 90 participants were divided into three groups: uncorrected, monocularly corrected, or binocularly corrected. The results showed that over a 12-month follow-up, the uncorrected patients had faster myopia progression (−0.95 ± 0.12 D) than those who were fully corrected (−0.50 ± 0.15 D). However, this study had some limitations: the selection procedure and age were not specified, and the groups were not well matched. In another study, Sun et al. [105] evaluated a cohort of 121 twelve-year-old Chinese children. In the first year, myopia progression was less in the uncorrected group (−0.39 ± 0.48 D) than in the fully corrected group (−0.57 ± 0.36 D; P = 0.03). This difference remained significant even after adjusting for baseline spherical equivalent refraction, age at myopia onset, height, parental myopia, and time spent in outdoor and indoor activities (−0.39 ± 0.06 D vs. −0.58 ± 0.06 D, P < 0.01). Lastly, Ong et al. [106] reported no difference in myopic progression over a three-year period among myopic children who wore fully corrected glasses full-time, part-time, or not at all.

### 5.1. Undercorrection of Myopia

Undercorrection is one of the optical strategies proposed to slow the progression of myopia. It is based on the rationale that in undercorrected eyes, the accommodative response for near vision is reduced [107]. In fact, in animal models (chicks, tree shrews, marmosets, and infant monkeys) [21, 108, 109], myopic defocus, in which the retinal image is formed in front of the retina, was capable of inhibiting eyeball elongation and the associated myopic progression. Tokoro and Kabe [110] found that in a population aged 7–15 years, the rate of myopia progression was lower with undercorrection (−0.54 ± 0.39 D) than with full correction, whether worn full-time (−0.75 ± 0.27 D) or part-time (−0.62 ± 0.32 D). This study had several limitations, including a small sample size, limited statistical analysis, and concurrent use of pharmacological intervention for myopia control. In the study by Li et al. [111], the study population consisted of 12-year-old Chinese children; 120 patients were undercorrected, and 133 patients were fully corrected. At one year, no statistically significant difference was observed between the two groups. However, a regression analysis showed a significant association when the refractive error, rather than the axial length, was considered: the progression of myopia decreased with an increasing amount of undercorrection (R² = 0.02; P = 0.02). However, to achieve a reduction in myopia progression of 0.25 D, undercorrection of more than 1.50 D was required. In the studies by Adler and Millodot [107] and Koomson et al. [112], undercorrection did not produce a statistically significant reduction in myopia progression. Adler and Millodot found that in a cohort of 48 children aged 6–15 years, undercorrection by 0.50 D was associated with myopia progression of 0.17 D when compared with full correction. Koomson et al. enrolled 150 Ghanaian children who were divided into two groups (n = 75 each). The first group was undercorrected by 0.50 D, while the second group was fully corrected.
At two years, myopia had progressed at the same rate in both groups (−0.54 ± 0.26 D in the fully corrected group vs. −0.50 ± 0.22 D in the undercorrected group; P = 0.31). Conversely, three studies have reported that undercorrection causes a more rapid progression of myopia. Chung et al. [113] reported that 47 children undercorrected by 0.75 D had greater progression of myopia than 47 children who were fully corrected (−1.00 D vs. −0.77 D; P < 0.01); however, axial elongation was smaller in the undercorrected eyes (0.58 mm vs. 0.65 mm; P = 0.04). Chen [114] designed a study in which 77 fully corrected eyes were compared with 55 undercorrected eyes; the two groups were matched for age, sex, and refractive error. At 12 months, the undercorrected (−0.25 to −0.50 D) group exhibited significantly greater myopic progression (−0.60 D vs. −0.52 D; no standard deviation, standard error, or 95% confidence interval was reported). Vasudevan et al. [115] retrospectively examined myopia progression records from the USA in relation to the level of undercorrection versus full correction of myopia and found that greater undercorrection was associated with greater progression of myopia (P < 0.01).

In all these scenarios, both eyes were corrected, either undercorrected or fully corrected. However, two studies evaluated the rate of progression of myopia when only one eye was corrected. In a population of 18 children aged 11 years, Phillips [116] noticed that undercorrection of the nondominant eye was associated with slower progression of myopia than in the dominant eye, which was fully corrected; the intereye difference was 0.36 D/y (P = 0.002). However, Hu and Guo [104] reported the opposite result, in which undercorrection of one eye in myopic children was associated with faster progression than in fully corrected eyes (−0.67 ± 0.22 D vs. −0.50 ± 0.15 D).

Unfortunately, considering all human trials, the evidence supporting undercorrection as a feasible strategy for slowing the progression of myopia is weak. Moreover, many pediatric practitioners suggest that the goal is to attain optimal vision, which can be achieved by full correction.

### 5.2. Overcorrection of Myopia

In a case-control study by Goss [117], 36 children aged 7–15 years were overcorrected by 0.75 D and matched with control individuals randomly selected from the files of a university optometry clinic. The rate of progression differed between the groups but not statistically significantly: −0.49 D/year in the overcorrected group versus −0.47 D/year in the control group.

### 5.3. Bifocal and Multifocal Lenses

The rationale for using bifocal or multifocal lenses to slow the progression of myopia is based on two theories. The first, proven in animal models [108, 118], is based on central and peripheral hyperopic retinal defocus caused by a large accommodative lag [119, 120], which is defined as the difference between the required accommodative demand and the actual accommodative response. A large accommodative lag causes a hyperopic retinal defocus, which, in the case of central defocus, stimulates axial elongation. Furthermore, in the case of peripheral defocus, the eye globe seems to acquire a more prolate shape.
However, this stimulus is nullified by short periods of clear vision [21]; therefore, whether transient hyperopic retinal blur can lead to the onset and/or progression of myopia remains unclear. The second theory assumes that during accommodation, mechanical tension is created by the crystalline lens or ciliary body. On the one hand, this tension restricts equatorial ocular expansion, causing accelerated axial elongation; on the other hand, as the ciliary-choroidal tension increases, the effort needed to accommodate increases as well. This probably leads to a further increase in accommodative lag in children, which is a consequence rather than a cause of myopia [121–125]. Regarding the association between myopia in children and accommodative lag, it has been reported that:

(1) Compared with emmetropic children, myopic children generally show insufficient accommodation with larger accommodative lags, even before the development of myopia [120, 123, 126, 127].
(2) In myopic children, a larger accommodative lag correlates with faster myopia progression [128].

Unfortunately, as with the undercorrection approach, no consensus exists regarding the use of bifocal or multifocal lenses to slow the progression of myopia. This is mainly due to the standard near addition power used in the trials, typically between +1.00 D and +2.00 D, chosen so that interindividual differences are nullified, which may even cause overcorrection in some cases.

The COMET study was a randomized, multicenter clinical trial in which 469 children aged 6–11 years were enrolled and divided into two groups: the first group was assigned progressive addition lenses (with a +2.00 D addition) and the second group single-vision lenses. At three years, the difference between the progressive addition lens group and the control group was 0.20 ± 0.08 D in refraction and 0.11 ± 0.03 mm in axial elongation. Even though statistically significant, these differences were considered clinically insignificant [129]. The same conclusions were reached in the COMET 2 study [130]. A total of 180 children aged 8–12 years with spherical equivalent refraction from −0.75 D to −2.50 D and near esophoria of ≥2 prism diopters were enrolled. An additional inclusion criterion was a high accommodative lag, initially set to at least 0.50 D (accommodative response less than 2.50 D for a 3.00 D demand) and subsequently restricted further to at least 1.00 D. A total of 110 children completed the three-year study; the progression of myopia was −0.87 D in the group treated with progressive addition lenses (+2.00 D) versus −1.15 D in the single-vision lens group. Despite being statistically significant, the authors considered the results clinically insignificant.

Cheng et al. [131] evaluated the use of bifocal and prismatic bifocal lenses. One hundred thirty-five Chinese-Canadian children aged 8–13 years with myopia progression of at least 0.50 D in the preceding year were randomly assigned to one of three treatments: single-vision lenses (control, n = 41), +1.50 D executive bifocals (n = 48), or +1.50 D executive bifocals with 3-Δ base-in prism in the near segment of each lens. At the three-year follow-up, the progression of myopia in terms of diopters and axial elongation was highest in children treated with single-vision lenses (−2.06 D and 0.82 mm) compared with those treated with bifocal (−1.25 D and 0.57 mm) or prismatic bifocal lenses (−1.01 D and 0.54 mm).
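The accommodative-lag criterion used in COMET 2 can be made concrete with simple arithmetic: accommodative demand (in diopters) is the reciprocal of the viewing distance in meters, and the lag is the demand minus the measured accommodative response. A minimal sketch of that calculation (the function names are illustrative, not taken from the study):

```python
# Minimal sketch: accommodative demand and lag, matching the COMET 2 inclusion criterion above.
def accommodative_demand(viewing_distance_m: float) -> float:
    """Accommodative demand in diopters = 1 / viewing distance in meters."""
    return 1.0 / viewing_distance_m

def accommodative_lag(demand_d: float, response_d: float) -> float:
    """Lag = demand minus the measured accommodative response (both in diopters)."""
    return demand_d - response_d

demand = accommodative_demand(1 / 3.0)  # viewing at ~33 cm gives a 3.00 D demand
lag = accommodative_lag(demand, 2.50)   # a response of 2.50 D to a 3.00 D demand
print(f"demand = {demand:.2f} D, lag = {lag:.2f} D")  # 3.00 D demand, 0.50 D lag
```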
Furthermore, in children with high accommodative lags (>1.00 D), no difference was observed in myopia control between bifocal and prismatic bifocal lenses. Instead, in children who showed low accommodative lags (≤1.00 D), greater benefits were observed with prismatic bifocal lenses. According to the authors, this could be explained by the prismatic design, because prisms may reduce the convergence demand and the lens-induced exophoria.

Currently, research is moving from the correction of the hyperopic shift to the induction of peripheral myopic defocus. The rationale is based on two findings:

(1) Visual signals derived from the peripheral retina are stronger than those originating from the central retina [132, 133].
(2) Optical defocus in the peripheral retina governs ocular growth: peripheral hyperopic defocus stimulates axial elongation of the eye, while the opposite effect is demonstrated with peripheral myopic defocus (Figure 2) [134–140].

Figure 2: Peripheral hyperopic defocus (red arrow) might lead to axial elongation. Myopic defocus (green arrow) can be achieved with orthokeratology, contact lenses, laser refractive surgery, and spectacle lenses (defocus incorporated multiple segment lenses and Apollo progressive addition lenses).

Two types of spectacle lenses can induce peripheral myopic defocus: defocus incorporated multiple segment (DIMS) lenses and Apollo progressive addition lenses (Apollo PALs, Apollo Eyewear, River Grove, IL, USA). DIMS lenses [141] are custom-made plastic spectacle lenses. Each lens includes a central optical zone (9 mm in diameter) for correcting the distance refractive error and an annular multifocal zone (33 mm in diameter) with multiple segments of relative positive power (+3.50 D); the diameter of each segment is 1.03 mm. Lam et al. [141] evaluated DIMS lenses versus single-vision lenses in 160 children. The results indicated that myopia progressed 52% more slowly in the DIMS group than in the single-vision group (−0.41 ± 0.06 D vs. −0.85 ± 0.08 D; mean difference −0.44 ± 0.09 D, P < 0.001). Moreover, axial elongation was 62% less in children in the DIMS group (0.21 ± 0.02 mm) than in the single-vision group (0.55 ± 0.02 mm; mean difference 0.34 ± 0.04 mm, P < 0.001). These preliminary results were confirmed after a 3-year follow-up, showing that the myopia control effect was sustained in the third year in children who had used the DIMS spectacles in the previous 2 years and was also seen in children switching from single-vision to DIMS lenses [142]. Interestingly, in a study by Zhang et al. [143], baseline relative peripheral refraction (RPR) was assessed as a variable affecting the myopia control effect in myopic children wearing DIMS lenses. The authors concluded that DIMS lenses slowed down myopia progression and that myopia control was better in children with baseline hyperopic RPR than in children with myopic RPR. This may partially explain why the efficacy of DIMS technology varies among myopic children and supports the need for customized myopic defocus to optimize myopia control.
Indeed, similar results were found in animal studies, showing that a greater hyperopic defocus leads to more myopia progression while inducing myopic defocus retarded myopia progression [144]. Outcomes in infant monkeys and chicks advocated that spatial resolution at the anatomic level of the optical pathway could modulate overall eye growth [145]. Animal studies using contact lenses with embedded myopic defocus found that myopia progression could be slowed by 20% to 60% [146, 147].The Apollo progressive addition lenses comprise an asymmetrical myopic defocus design with a 3 myopic defocus zone, including a +2.50 D full-positive power superior zone, an 80% full myopic defocus power nasal zone, and a 60% full myopic defocus power temporal zone. Currently, a prospective, multicenter, randomized controlled trial, promoted by Li, is ongoing to evaluate the possible efficacy of the defocus incorporated multiple segment and Apollo progressive addition lenses [148]. ### 5.4. Contact Lenses and Orthokeratology in Myopia Control As previously reported, a theory for eye elongation suggests that axial elongation is caused by peripheral retinal hyperopic defocus [105, 135, 149, 150].This theory has led researchers to consider that reducing peripheral hyperopic defocus or inducing peripheral myopic defocus with bifocal, progressive, or multifocal lenses may help prevent myopic progression. In animal models, evidence suggests that the imposition of hyperopic or myopic defocus with negative or positive power lenses, respectively, can influence eye growth and lead to compensatory refractive changes: hyperopic defocus leads to longer and more myopic eyes and myopic defocus leads to shorter and more hyperopic eyes [151–156].This supports the theory of slowing down axial elongation with optical treatments that correct distance vision while achieving simultaneous myopic defocus.The reduction of peripheral retinal hyperopic defocus by contact lenses represents a new and interesting area of research that could be an effective intervention in myopia control. Effective contact lens options for myopia control include multifocal, extended depth of focus (EDOF), and orthokeratology contact lenses. ### 5.5. Single-Vision Rigid Gas-Permeable and Soft Contact Lenses Single-vision lenses intend to correct the refractive error and are not prescribed for myopia control [149, 150]. Over several decades, there have been suggestions that gas-permeable contact lenses (not orthokeratology design) can slow myopia progression in children, but these studies have shown important limitations in their study design [157–160]. Nevertheless, well-conducted studies have recently demonstrated that gas-permeable contact lenses have no effect on the progression of myopia in children [160], even among children who use them regularly. These lenses temporarily flatten the corneal curvature without affecting axial elongation.Although Atchison [161] has revealed that spherical contact lenses produce more peripheral myopic shift than spherically surfaced spectacle lenses, some prospective randomized studies did not find any differences in the myopia progression rate between soft contact lenses and spectacle wearers [162, 163]. However, other studies have tried to compare rigid with soft contact lenses. Katz et al. [160] found no difference in myopia progression or axial elongation over a period of two years between children wearing gas-permeable and soft single-vision contact lenses. Walline et al. 
[162] reported no difference in the amount of axial elongation between gas-permeable and soft single-vision contact lens wearers. ### 5.6. Soft Bifocal, Peripheral Gradient, and EDOF Contact Lenses Three different promising types of contact lenses for myopia control in children have been studied: bifocal concentric lenses, peripheral gradient lenses, and EDOF contact lenses (Figure3).Figure 3 Single-vision contact lenses (CLs) provide a peripheral hyperopic defocus. A peripheral myopic defocus can be achieved with peripheral gradient CL, bifocal CL, and EDOF CL.The first two multifocal contact lens designs include a central area for correcting myopia. However, bifocal concentric lenses use a concentric zone of rings with positive power addition to concurrently impose peripheral myopic defocus, and peripheral gradient lenses produce constant peripheral myopization defocus that increases gradually from the central optic axis toward the periphery [164]. The third type is based on the EDOF theory, which was designed to incorporate and manipulate selective higher-order aberrations (mainly spherical aberration) to achieve the global retinal image quality that was optimized for points at and anterior to the retina and degraded for points posterior to the retina. It was hypothesized that a poor image quality posterior to the retina prevents axial elongation [165].Demonstrating the propensity for slowing both refractive and axial length myopia progression by around 30%–50% [166, 167], these contact lens options have the capability of correcting myopia as well as providing a treatment strategy for myopia control. In contrast, spectacle lens alternatives have shown less effective success for myopia control [168] except in one specific prismatic bifocal design [131] and a novel multisegment defocus design [141]. Moreover, in clinical studies, contact lenses provide better lens centration and are less affected by eye movements than spectacle lenses [135].Data from two recent clinical pilot studies showed that adding myopic defocus to the distance correction reduced myopia progression by an average of 0.27 D/year after one year [147, 169], which is slightly better than the effect seen at one year using progressive addition lenses or bifocal lenses [129, 130, 170–172].MiSight 1 day is a daily replacement of hydrophilic soft bifocal contact lenses approved by the FDA for correction of nearsightedness and slows its progression in children, aged 8 to 12 years, with a refraction of −0.75 to −4.00 D (spherical equivalent) and astigmatism less than or equal to 0.75 D at the beginning of treatment. MiSight’s Activ Control™ technology is based on an optic zone concentric ring design. Concentric zones of the alternating distance and near power produce two focal planes, allowing for the correction of the refractive error and 2.00 D of simultaneous myopic retinal defocus. A two-year randomized clinical trial [164] showed lesser progression and axial elongation in the MiSight group than in the single-vision spectacle group.Several studies [147, 164, 169, 173–178] published between 2011 and 2016 showed a reduction of 38.0% in myopia progression and 37.9% in axial elongation with multifocal soft contact lenses. In 2014, Benavente-Perez et al. [135] showed the effect of soft bifocal contact lenses on eye growth and the refractive state of 30 juvenile marmosets by imposing hyperopic and myopic defocus on their peripheral retina. 
Each marmoset wore one of three investigational annular bifocal contact lens designs in their right eye and a plano contact lens in the left eye as a control for 10 weeks. The three types of lenses had a plano center zone (1.5 mm or 3 mm) and +5 D or −5 D in the periphery (referred to as +5 D/1.5 mm, +5 D/3 mm, and −5 D/3 mm). The results were compared with untreated, single-vision positive and negative, and +5/−5 D multizone lens-reared marmosets. Eyes treated with positive power in the periphery showed to grow significantly less than untreated eyes and eyes with multizone contact lenses, supporting the use of bifocal contact lenses as an effective treatment for myopia control. Moreover, the treatment effect was associated with the size of the peripheral treatment zone as well as with the peripheral refractive state and the eye growth rate before the treatment started.The bifocal lenses In nearsighted kids (BLINK) randomized clinical trial [179] has recently determined the role of soft multifocal lenses in slowing myopia progression in children, comparing high-add power (+2.50 D) with medium-add power (+1.50 D) and single-vision contact lenses. A total of 294 children with −0.75 D to −5.00 D of spherical component myopia and less than 1.00 D of astigmatism were enrolled, with a three-year follow-up. Adjusted three-year myopia progression was −0.60 D for high-add power, −0.89 D for medium-add power, and −1.05 D for single-vision contact lenses. This demonstrated that treatment with high-add power multifocal contact lenses significantly reduced the rate of eye elongation compared with medium-add power multifocal and single-vision contact lenses. However, further research is required to understand the clinical importance of these data.EDOF contact lenses were tested in a three-year prospective, double-blind trial [165] that demonstrated their efficacy in slowing myopia progression. A total of 508 children with the cycloplegic spherical equivalent −0.75 to −3.50 were enrolled and randomized in one of the five groups: one group with single vision, two groups with bifocal, and two groups with EDOF contact lenses (configured to offer EDOF of up to +1.75 D and +1.25 D). At two years, the two groups of EDOF lenses slowed myopia by 32% and 26% and reduced axial length elongation by 25% and 27%, respectively. However, efficacy was not significantly different between the bifocal and EDOF lens groups. ### 5.7. Orthokeratology (Ortho-K) Lenses Orthokeratology (ortho-k) is defined as a “reduction, modification, or elimination of a refractive error by programmed application of contact lenses [180].” It refers to the application of a rigid contact lens at night to induce temporary changes in the corneal epithelium shape, allowing for clear, unaided daytime vision. Wesley and Jessen in the 1950s casually observed spectacle blur experienced by patients after wearing hard contact lenses. This blurring was subsequently related to lens-induced epithelial reshaping, which was then utilized for therapeutic purposes [181].Studies have shown that myopic orthokeratology lenses produce a flattening of the central cornea and a steepening of the midperipheral cornea, accompanied by changes in the epithelial thickness (Figure4) [182–184].Figure 4 Epithelium remodeling is achieved with orthokeratology. Central corneal flattening is accompanied by a midperipheral steepening (tangential map, (a)), due to accumulation of the epithelium (epithelial thickness map, (b)). 
(a)(b)Although these lenses were designed for refractive error correction, studies have revealed a secondary advantage of slowing myopic progression [149] by creating peripheral myopic defocus secondary to epithelial reshaping. A number of studies have shown a 30 %–71% reduction in axial elongation compared with the control [150, 185, 186].Other studies and meta-analyses have revealed a 40%–60% mean reduction in the rate of refractive change compared with controls using spectacles to correct myopia [168, 187–194]. In one of the first trials, the retardation of myopia in orthokeratology study [195], axial elongation was reported to be slowed by an average of 43%.In a second trial, the high myopia-partial reduction orthokeratology study [196], highly myopic individuals were enrolled and randomly assigned into partial reduction orthokeratology and single-vision spectacle groups. The first group needed to wear single-vision spectacles to correct residual refractive errors during the day. In this group, the axial elongation was 63% less than that of the second group. More recently, orthokeratology and gas-permeable lenses have been compared with a novel experimental study design [197]. Patients were fitted with overnight orthokeratology in one eye and traditional rigid gas-permeable lenses for daytime wear in the contralateral eye. The lenses were worn for six months. After a washout period of 2 weeks, lens-eye combinations were reversed and wearing lens was continued further for six months. The results revealed no increases in axial elongation over either the first or second six-month period for eyes with orthokeratology, compared with an increase in 0.04 mm and 0.09 mm, respectively, in eyes with gas-permeable lenses.A recent one-year retrospective study by Na and Yoo [198] investigated myopic progression in children with myopic anisometropia who underwent orthokeratology treatment in their myopic eye and no correction in their emmetropic eye. The results showed statistically significant reduction in axial length elongation in the treated eye (0.07 ± 0.21 mm, P = 0.038) as compared with the control eye (0.36 ± 0.23 mm, P < 0.001).Zhang and Chen [199] in a retrospective study compared the effect of toric versus spherical design orthokeratology lenses on myopia progression in children with moderate-to-high astigmatism (cylinder >1.5 D). Toric orthokeratology wearers had a 55.6% slower rate of axial elongation than that of the spherical group. Some studies have tried to assess the effects of combined treatments, such as orthokeratology lenses and atropine. Studies by Wan et al. [200] and Kinoshita et al. [201] found improvement in myopia control by combining the two strategies compared with orthokeratology monotherapy.Although orthokeratology has a significant effect on slowing axial elongation, the results vary among individuals. Some patients show little or no myopic progression, while others continue to progress. Some studies [202–207] have shown that better myopia control is positively associated with a higher degree of baseline myopia, older age of the myopia onset and at initiation of treatment, larger pupil size, and a smaller resulting central optical zone (more peripheral myopia induced by a ring of steepening outside the treatment zone).Cheung et al. [186] suggest that ideal candidates for orthokeratology might be children around 6–9 years of age with fast myopic progression (increase in the axial length of ≥0.20 mm/7 months or spherical equivalent of ≥1 diopter/year). 
### 5.1. Undercorrection of Myopia

Undercorrection is one of the optical strategies proposed to slow the progression of myopia. It is based on the rationale that in undercorrected eyes, the accommodative response for near vision is reduced [107]. In fact, in animal models (chicks, tree shrews, marmosets, and infant monkeys) [21, 108, 109], a myopic defocus, in which the retinal image is formed in front of the retina, was capable of inhibiting eyeball elongation and the associated myopic progression.

Tokoro and Kabe [110] found that in a population aged 7–15 years, the rate of myopia progression was lower with undercorrection (−0.54 ± 0.39 D) than with full correction, either with full-time wear (−0.75 ± 0.27 D) or with part-time wear (−0.62 ± 0.32 D). This study had several limitations, including a small sample size, limited statistical analysis, and the concurrent use of pharmacological intervention for myopia control.

In the study by Li et al. [111], the study population consisted of 12-year-old Chinese children. One hundred twenty patients were undercorrected and 133 patients were fully corrected; at one year, no statistically significant difference was observed between the two groups. However, a regression analysis showed a significant association when the refractive error, rather than the axial length, was considered: the progression of myopia decreased with an increasing amount of undercorrection (R² = 0.02; P = 0.02). However, to achieve a reduction in myopia progression of 0.25 D, undercorrection of more than 1.50 D was required.

In the studies by Adler and Millodot [107] and Koomson et al. [112], undercorrection did not produce a statistically significant difference in myopia progression. Adler and Millodot found that in a cohort of 48 children aged 6–15 years, undercorrection by 0.50 D was associated with 0.17 D greater myopia progression compared with full correction.

Koomson et al. enrolled 150 Ghanaian children who were divided into two groups (n = 75). The first group was undercorrected by 0.50 D, while the second group was fully corrected. At two years, myopia had progressed at the same rate in both groups (−0.54 ± 0.26 D in the fully corrected group vs −0.50 ± 0.22 D in the undercorrected group; P = 0.31). Conversely, three studies have reported that undercorrection causes a more rapid progression of myopia.

Chung et al. [113] reported that 47 children undercorrected by 0.75 D had a greater progression of myopia compared with the 47 children who were fully corrected (−1.00 D vs −0.77 D; P < 0.01); however, the axial elongation was smaller in the undercorrected eyes (0.58 mm vs 0.65 mm; P = 0.04).

Chen [114] designed a study in which 77 fully corrected eyes were compared with 55 undercorrected eyes. The two groups were matched for age, sex, and refractive error. Over a 12-month interval, the undercorrected (−0.25 to −0.50 D) group exhibited significantly greater myopic progression (−0.60 D vs −0.52 D; no standard deviation, standard error, or 95% confidence interval was reported).

Vasudevan et al. [115] retrospectively examined myopia progression records from the USA in relation to the level of undercorrection versus full correction of myopia.
They found that greater undercorrection was associated with greater progression of myopia (P < 0.01).

In all these scenarios, both eyes were corrected, either undercorrected or fully corrected. However, two studies evaluated the rate of myopia progression when only one of the eyes was corrected.

In a population of 18 children aged 11 years, Phillips [116] noticed that undercorrection of the nondominant eye was associated with slower progression of myopia compared with the fully corrected dominant eye. The intereye difference was 0.36 D/y (P = 0.002).

However, Hu and Guo [104] reported opposite results, in which undercorrection of one eye in myopic children was associated with faster progression than in the fully corrected eyes (−0.67 ± 0.22 D vs −0.50 ± 0.15 D).

Unfortunately, considering all human trials, the evidence supporting undercorrection as a feasible means of slowing the progression of myopia is weak. Moreover, many pediatric practitioners suggest that the goal is to attain optimal vision, which can be achieved by full correction.

### 5.2. Overcorrection of Myopia

In a case-control study by Goss [117], 36 children aged 7–15 years were overcorrected by 0.75 D and matched with control individuals randomly selected from the files of a university optometry clinic. The rate of progression differed between the groups but not statistically significantly: −0.49 D/year in the overcorrected group versus −0.47 D/year in the control group.

### 5.3. Bifocal and Multifocal Lenses

The rationale for using bifocal or multifocal lenses to slow the progression of myopia is based on two theories. The first one, demonstrated in animal models [108, 118], is based on central and peripheral hyperopic retinal defocus caused by a large accommodative lag [119, 120], which is defined as the residual refractive error given by the difference between the accommodative demand and the accommodative response. A large accommodative lag causes a hyperopic retinal defocus which, when central, stimulates axial elongation; when the defocus is peripheral, the eye globe seems to acquire a more prolate shape. However, this stimulus is nullified by short periods of clear vision [21]; therefore, whether transient hyperopic retinal blur can lead to the onset and/or progression of myopia remains unclear.

The second theory assumes that during accommodation, mechanical tension is created by the crystalline lens or ciliary body. On the one hand, this tension restricts equatorial ocular expansion, causing accelerated axial elongation; on the other hand, as the ciliary-choroidal tension increases, the effort needed to accommodate increases as well. This probably leads to a further increase in accommodative lags in children, which would then be a consequence rather than a cause of myopia [121–125]. Regarding the association between myopia in children and accommodative lags, it has been reported that (1) compared to emmetropic children, myopic children generally show insufficient accommodation with larger accommodative lags, even before the development of myopia [120, 123, 126, 127], and (2) in myopic children, a larger accommodative lag correlates with faster myopia progression [128].

Unfortunately, as seen with the undercorrection approach, no consensus exists regarding the use of bifocal or multifocal lenses to slow the progression of myopia.
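To make the accommodative-lag arithmetic used above concrete, the following minimal sketch (Python, with assumed example values) computes the lag as accommodative demand minus accommodative response; the 0.50 D and 1.00 D cut-offs mirror the COMET 2-style inclusion criteria discussed below and are used here only for illustration.

```python
# Illustrative sketch: accommodative lag = accommodative demand - accommodative response.
# The demand and response values are assumed examples; the 0.50 D and 1.00 D cut-offs
# echo the COMET 2-style criteria (response < 2.50 D for a 3.00 D demand -> lag >= 0.50 D).

def accommodative_lag(demand_d: float, response_d: float) -> float:
    """Return the accommodative lag in diopters (positive = under-accommodation)."""
    return demand_d - response_d

def classify_lag(lag_d: float) -> str:
    """Label a lag using the illustrative 0.50 D / 1.00 D thresholds."""
    if lag_d >= 1.00:
        return "high lag (>= 1.00 D)"
    if lag_d >= 0.50:
        return "elevated lag (>= 0.50 D)"
    return "within typical range"

if __name__ == "__main__":
    demand = 3.00    # D, near target at about 33 cm (assumed example)
    response = 2.40  # D, assumed measured accommodative response
    lag = accommodative_lag(demand, response)
    print(f"Lag for a {demand:.2f} D demand with a {response:.2f} D response: "
          f"{lag:.2f} D -> {classify_lag(lag)}")
```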
This lack of consensus is mainly due to the use of a standard near addition power in the trials, typically between +1.00 D and +2.00 D, so that interindividual differences are not accounted for, possibly even causing overcorrection in some cases.

The COMET study was a randomized, multicenter clinical trial in which 469 children, aged 6–11 years, were enrolled and divided into two groups: the first group was assigned to progressive addition lenses (with a +2.00 D addition) and the second group to single-vision lenses. At three years, the difference between the progressive addition lens and control groups was 0.20 ± 0.08 D in refraction and 0.11 ± 0.03 mm in axial elongation. Although statistically significant, these differences were considered clinically insignificant [129].

The same conclusions were obtained in the COMET 2 study [130]. A total of 180 children aged 8–12 years with spherical equivalent refraction from −0.75 D to −2.50 D and near esophoria ≥2 prism-diopters were enrolled. An additional inclusion criterion was a high accommodative lag, initially set to at least 0.50 D (accommodative response less than 2.50 D for a 3.00 D demand) and subsequently restricted further to at least 1.00 D. A total of 110 children completed the three-year study; the progression of myopia was −0.87 D in the group treated with progressive addition lenses (+2.00 D) versus −1.15 D in the single-vision lens group. Nevertheless, despite being statistically significant, the authors considered the results to be clinically insignificant.

Cheng et al. [131] evaluated the use of bifocal and prismatic bifocal lenses. One hundred thirty-five Chinese-Canadian children, aged 8–13 years with myopia progression of at least 0.50 D in the preceding year, were randomly assigned to one of three treatments: single-vision lenses (control, n = 41), +1.50 D executive bifocals (n = 48), or +1.50 D executive bifocals with a 3-Δ base-in prism in the near segment of each lens. At the three-year follow-up, the progression of myopia in terms of both diopters and axial elongation was highest in children treated with single-vision lenses (−2.06 D and 0.82 mm) compared with those treated with bifocal (−1.25 D and 0.57 mm) or prismatic bifocal lenses (−1.01 D and 0.54 mm). Furthermore, in children with high accommodative lags (>1.00 D), no difference in myopia control was observed between bifocal and prismatic bifocal lenses. Instead, in children who showed low lags of accommodation (≤1.00 D), greater benefit was observed with prismatic bifocal lenses. According to the authors, this added benefit of the prismatic design may arise because the base-in prisms reduce the convergence demand and compensate for the lens-induced exophoria at near.

Currently, research is moving from the correction of the hyperopic shift to the induction of myopic peripheral defocus. The rationale is based on two findings: (1) visual signals derived from the peripheral retina are stronger than those originating from the central retina [132, 133], and (2) optical defocus in the peripheral retina governs ocular growth: peripheral hyperopic defocus stimulates axial elongation of the eye, while the opposite effect is demonstrated with peripheral myopic defocus (Figure 2) [134–140].

Figure 2: Peripheral hyperopic defocus (red arrow) might lead to axial elongation.
A myopic defocus (green arrow) can be achieved with orthokeratology, contact lenses, laser refractive surgery, and spectacle lenses (defocus incorporated multiple segment lenses and Apollo progressive addition lenses).

Two types of spectacle lenses can induce peripheral myopic defocus: defocus incorporated multiple segment (DIMS) lenses and Apollo progressive addition lenses (Apollo PALs, Apollo Eyewear, River Grove, IL, USA). DIMS lenses [141] are custom-made plastic spectacle lenses. Each lens includes a central optical zone (9 mm in diameter) for correcting the distance refractive error and an annular multifocal zone with multiple segments (33 mm in diameter) with a relative positive power (+3.50 D). The diameter of each segment is 1.03 mm. Lam et al. [141] evaluated the use of DIMS versus single-vision lenses in 160 children. The results indicated that myopia progressed 52% more slowly in the DIMS group than in the single-vision group (−0.41 ± 0.06 D in the DIMS group vs −0.85 ± 0.08 D in the single-vision group; mean difference −0.44 ± 0.09 D, P < 0.001). Moreover, axial elongation was 62% lower in the DIMS group (0.21 ± 0.02 mm) than in the single-vision group (0.55 ± 0.02 mm); mean difference 0.34 ± 0.04 mm, P < 0.001. These preliminary results were confirmed after a 3-year follow-up, showing that the myopia control effect was sustained in the third year in children who had used the DIMS spectacles in the previous 2 years and was also observed in the children switching from single-vision to DIMS lenses [142]. Interestingly, in a study by Zhang et al. [143], baseline relative peripheral refraction (RPR) was assessed as a predictor of the myopia control effect in myopic children wearing DIMS lenses. The authors concluded that DIMS lenses slowed down myopia progression and that myopia control was better in children with baseline hyperopic RPR than in children with myopic RPR. This may partially explain why the efficacy of DIMS technology varies among myopic children and advocates the need for customized myopic defocus to optimize the myopia control effect in individual patients. Indeed, similar results were found in animal studies, showing that greater hyperopic defocus leads to more myopia progression while induced myopic defocus retards it [144]. Outcomes in infant monkeys and chicks suggested that spatial resolution at the anatomic level of the optical pathway could modulate overall eye growth [145]. Animal studies using contact lenses with embedded myopic defocus found that myopia progression could be slowed by 20% to 60% [146, 147].

The Apollo progressive addition lenses comprise an asymmetrical myopic defocus design with three myopic defocus zones, including a +2.50 D full-positive power superior zone, an 80% full myopic defocus power nasal zone, and a 60% full myopic defocus power temporal zone. Currently, a prospective, multicenter, randomized controlled trial, promoted by Li, is ongoing to evaluate the possible efficacy of the defocus incorporated multiple segment and Apollo progressive addition lenses [148].
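The relative-reduction percentages quoted above for DIMS lenses follow directly from the reported group means; the short sketch below (Python) reproduces that arithmetic with the values as quoted. It is an illustration of the calculation only, not a reanalysis of the trial data.

```python
# Illustrative sketch: relative reduction in progression for a treatment vs. a control arm,
# using the group means quoted above for DIMS vs. single-vision lenses (Lam et al.).
# Simple arithmetic on published means; not a reanalysis of the original trial.

def relative_reduction(control: float, treatment: float) -> float:
    """Percentage reduction of |treatment| relative to |control|."""
    return (abs(control) - abs(treatment)) / abs(control) * 100.0

if __name__ == "__main__":
    # Refractive progression (D): DIMS -0.41 vs. single vision -0.85.
    print(f"Refractive slowing: {relative_reduction(-0.85, -0.41):.0f}%")  # ~52%
    # Axial elongation (mm): DIMS 0.21 vs. single vision 0.55.
    print(f"Axial slowing:      {relative_reduction(0.55, 0.21):.0f}%")    # ~62%
```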
### 5.4. Contact Lenses and Orthokeratology in Myopia Control

As previously reported, one theory of eye elongation suggests that axial elongation is driven by peripheral retinal hyperopic defocus [105, 135, 149, 150]. This theory has led researchers to consider that reducing peripheral hyperopic defocus or inducing peripheral myopic defocus with bifocal, progressive, or multifocal lenses may help prevent myopic progression. In animal models, evidence suggests that the imposition of hyperopic or myopic defocus with negative or positive power lenses, respectively, can influence eye growth and lead to compensatory refractive changes: hyperopic defocus leads to longer and more myopic eyes, and myopic defocus leads to shorter and more hyperopic eyes [151–156]. This supports the strategy of slowing down axial elongation with optical treatments that correct distance vision while achieving simultaneous myopic defocus.

The reduction of peripheral retinal hyperopic defocus by contact lenses represents a new and interesting area of research that could provide an effective intervention for myopia control. Effective contact lens options for myopia control include multifocal, extended depth of focus (EDOF), and orthokeratology contact lenses.

### 5.5. Single-Vision Rigid Gas-Permeable and Soft Contact Lenses

Single-vision lenses are intended to correct the refractive error and are not prescribed for myopia control [149, 150]. Over several decades, there have been suggestions that gas-permeable contact lenses (not of orthokeratology design) can slow myopia progression in children, but these studies have shown important limitations in their design [157–160]. More recently, well-conducted studies have demonstrated that gas-permeable contact lenses have no effect on the progression of myopia in children [160], even among children who use them regularly. These lenses temporarily flatten the corneal curvature without affecting axial elongation.

Although Atchison [161] has shown that spherical contact lenses produce more peripheral myopic shift than spherically surfaced spectacle lenses, some prospective randomized studies did not find any differences in the myopia progression rate between soft contact lens and spectacle wearers [162, 163]. Other studies have compared rigid with soft contact lenses. Katz et al. [160] found no difference in myopia progression or axial elongation over a period of two years between children wearing gas-permeable and soft single-vision contact lenses. Walline et al. [162] reported no difference in the amount of axial elongation between gas-permeable and soft single-vision contact lens wearers.

### 5.6. Soft Bifocal, Peripheral Gradient, and EDOF Contact Lenses

Three promising types of contact lenses for myopia control in children have been studied: bifocal concentric lenses, peripheral gradient lenses, and EDOF contact lenses (Figure 3).

Figure 3: Single-vision contact lenses (CLs) provide a peripheral hyperopic defocus. A peripheral myopic defocus can be achieved with peripheral gradient CLs, bifocal CLs, and EDOF CLs.

The first two multifocal contact lens designs include a central area for correcting myopia: bifocal concentric lenses use concentric rings with positive power addition to concurrently impose peripheral myopic defocus, whereas peripheral gradient lenses produce a constant peripheral myopization defocus that increases gradually from the central optic axis toward the periphery [164].
The third type is based on the EDOF concept and is designed to incorporate and manipulate selected higher-order aberrations (mainly spherical aberration) so that the global retinal image quality is optimized for points at and anterior to the retina and degraded for points posterior to the retina. It was hypothesized that poor image quality posterior to the retina prevents axial elongation [165].

Demonstrating the propensity for slowing both refractive and axial length myopia progression by around 30%–50% [166, 167], these contact lens options can correct myopia while also providing a treatment strategy for myopia control. In contrast, spectacle lens alternatives have shown less success for myopia control [168], except for one specific prismatic bifocal design [131] and a novel multisegment defocus design [141]. Moreover, in clinical studies, contact lenses provide better lens centration and are less affected by eye movements than spectacle lenses [135].

Data from two recent clinical pilot studies showed that adding myopic defocus to the distance correction reduced myopia progression by an average of 0.27 D/year after one year [147, 169], which is slightly better than the effect seen at one year using progressive addition or bifocal lenses [129, 130, 170–172].

MiSight 1 day is a daily-replacement hydrophilic soft bifocal contact lens approved by the FDA to correct nearsightedness and slow its progression in children aged 8 to 12 years with a refraction of −0.75 to −4.00 D (spherical equivalent) and astigmatism of less than or equal to 0.75 D at the beginning of treatment. MiSight's Activ Control™ technology is based on an optic zone concentric ring design. Concentric zones of alternating distance and near power produce two focal planes, allowing for the correction of the refractive error and 2.00 D of simultaneous myopic retinal defocus. A two-year randomized clinical trial [164] showed less progression and axial elongation in the MiSight group than in the single-vision spectacle group.

Several studies [147, 164, 169, 173–178] published between 2011 and 2016 showed a reduction of 38.0% in myopia progression and 37.9% in axial elongation with multifocal soft contact lenses. In 2014, Benavente-Perez et al. [135] showed the effect of soft bifocal contact lenses on eye growth and the refractive state of 30 juvenile marmosets by imposing hyperopic and myopic defocus on their peripheral retina. Each marmoset wore one of three investigational annular bifocal contact lens designs in the right eye and a plano contact lens in the left eye as a control for 10 weeks. The three types of lenses had a plano center zone (1.5 mm or 3 mm) and +5 D or −5 D in the periphery (referred to as +5 D/1.5 mm, +5 D/3 mm, and −5 D/3 mm). The results were compared with those of untreated, single-vision positive and negative, and +5/−5 D multizone lens-reared marmosets. Eyes treated with positive power in the periphery grew significantly less than untreated eyes and eyes with multizone contact lenses, supporting the use of bifocal contact lenses as an effective treatment for myopia control.
Moreover, the treatment effect was associated with the size of the peripheral treatment zone as well as with the peripheral refractive state and the eye growth rate before treatment started.

The Bifocal Lenses In Nearsighted Kids (BLINK) randomized clinical trial [179] recently evaluated the role of soft multifocal lenses in slowing myopia progression in children, comparing high-add power (+2.50 D) with medium-add power (+1.50 D) and single-vision contact lenses. A total of 294 children with −0.75 D to −5.00 D of spherical component myopia and less than 1.00 D of astigmatism were enrolled, with a three-year follow-up. Adjusted three-year myopia progression was −0.60 D for high-add power, −0.89 D for medium-add power, and −1.05 D for single-vision contact lenses. Treatment with high-add power multifocal contact lenses thus significantly reduced the rate of eye elongation compared with medium-add power multifocal and single-vision contact lenses. However, further research is required to understand the clinical importance of these data.

EDOF contact lenses were tested in a three-year prospective, double-blind trial [165] that demonstrated their efficacy in slowing myopia progression. A total of 508 children with a cycloplegic spherical equivalent of −0.75 to −3.50 D were enrolled and randomized into one of five groups: one group with single-vision lenses, two groups with bifocal lenses, and two groups with EDOF contact lenses (configured to offer EDOF of up to +1.75 D and +1.25 D). At two years, the two EDOF lens groups showed 32% and 26% slower myopia progression and 25% and 27% less axial elongation, respectively. However, efficacy was not significantly different between the bifocal and EDOF lens groups.

### 5.7. Orthokeratology (Ortho-K) Lenses

Orthokeratology (ortho-k) is defined as the "reduction, modification, or elimination of a refractive error by programmed application of contact lenses" [180]. It refers to the application of a rigid contact lens at night to induce temporary changes in the shape of the corneal epithelium, allowing clear, unaided daytime vision. In the 1950s, Wesley and Jessen incidentally observed the spectacle blur experienced by patients after wearing hard contact lenses. This blurring was subsequently related to lens-induced epithelial reshaping, which was then exploited for therapeutic purposes [181].

Studies have shown that myopic orthokeratology lenses produce a flattening of the central cornea and a steepening of the midperipheral cornea, accompanied by changes in epithelial thickness (Figure 4) [182–184].

Figure 4: Epithelial remodeling achieved with orthokeratology. Central corneal flattening is accompanied by a midperipheral steepening (tangential map, (a)), due to accumulation of the epithelium (epithelial thickness map, (b)).

Although these lenses were designed for refractive error correction, studies have revealed a secondary advantage of slowing myopic progression [149] by creating peripheral myopic defocus secondary to epithelial reshaping. A number of studies have shown a 30%–71% reduction in axial elongation compared with controls [150, 185, 186]. Other studies and meta-analyses have revealed a 40%–60% mean reduction in the rate of refractive change compared with controls using spectacles to correct myopia [168, 187–194].
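Because the trials summarized above report cumulative progression over different follow-up lengths (one, two, or three years), rough comparisons are easier on an annualized basis. The sketch below (Python) performs this naive conversion with the BLINK figures quoted above and contrasts the result with the 0.27 D/year figure reported by the pilot studies; it assumes linear progression, which is only an approximation, and is purely illustrative.

```python
# Illustrative sketch: convert a cumulative progression difference into a rough D/year figure
# so trials with different follow-up lengths can be compared. Myopia progression is not
# strictly linear over time, so this is an approximation; values are those quoted above.

def annualized_benefit(treated_d: float, control_d: float, years: float) -> float:
    """Mean yearly reduction in progression (D/year) of the treated vs. the control arm."""
    return (treated_d - control_d) / years

if __name__ == "__main__":
    # BLINK adjusted 3-year progression: high-add -0.60 D vs. single vision -1.05 D.
    blink = annualized_benefit(-0.60, -1.05, 3.0)
    print(f"BLINK high-add vs. single vision: {blink:.2f} D/year less progression")  # ~0.15
    # Pilot studies of added myopic defocus reported ~0.27 D/year less progression at 1 year.
    print("Pilot studies (1 year):           0.27 D/year less progression (as reported)")
```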
In one of the first trials, the Retardation of Myopia in Orthokeratology study [195], axial elongation was reported to be slowed by an average of 43%.

In a second trial, the High Myopia–Partial Reduction Orthokeratology study [196], highly myopic individuals were enrolled and randomly assigned to partial reduction orthokeratology or single-vision spectacle groups. The first group needed to wear single-vision spectacles during the day to correct the residual refractive error. In this group, axial elongation was 63% less than that of the second group. More recently, orthokeratology and gas-permeable lenses have been compared with a novel experimental study design [197]. Patients were fitted with overnight orthokeratology in one eye and traditional rigid gas-permeable lenses for daytime wear in the contralateral eye. The lenses were worn for six months. After a washout period of 2 weeks, the lens-eye combinations were reversed and lens wear was continued for a further six months. The results revealed no increase in axial elongation over either the first or the second six-month period for eyes with orthokeratology, compared with increases of 0.04 mm and 0.09 mm, respectively, in eyes with gas-permeable lenses.

A recent one-year retrospective study by Na and Yoo [198] investigated myopic progression in children with myopic anisometropia who underwent orthokeratology treatment in their myopic eye and no correction in their emmetropic eye. The results showed a statistically significant reduction in axial elongation in the treated eye (0.07 ± 0.21 mm, P = 0.038) as compared with the control eye (0.36 ± 0.23 mm, P < 0.001).

In a retrospective study, Zhang and Chen [199] compared the effect of toric versus spherical design orthokeratology lenses on myopia progression in children with moderate-to-high astigmatism (cylinder >1.5 D). Toric orthokeratology wearers had a 55.6% slower rate of axial elongation than the spherical group. Some studies have tried to assess the effects of combined treatments, such as orthokeratology lenses and atropine. Studies by Wan et al. [200] and Kinoshita et al. [201] found improved myopia control when combining the two strategies compared with orthokeratology monotherapy.

Although orthokeratology has a significant effect on slowing axial elongation, the results vary among individuals. Some patients show little or no myopic progression, while others continue to progress. Some studies [202–207] have shown that better myopia control is positively associated with a higher degree of baseline myopia, an older age at myopia onset and at initiation of treatment, a larger pupil size, and a smaller resulting central optical zone (more peripheral myopia induced by a ring of steepening outside the treatment zone). Cheung et al. [186] suggest that ideal candidates for orthokeratology might be children around 6–9 years of age with fast myopic progression (increase in axial length of ≥0.20 mm/7 months or spherical equivalent progression of ≥1 diopter/year). Moreover, several studies have shown that children are sufficiently mature to safely and successfully wear different types of contact lenses, such as soft [208, 209] and orthokeratology lenses [191, 192].
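As a quick aside on the candidate-selection criterion quoted above from Cheung et al., the short sketch below (Python) simply annualizes the axial-length cut-off (≥0.20 mm per 7 months). It is arithmetic for orientation only, not an independent clinical threshold.

```python
# Illustrative sketch: annualize the fast-progression criterion quoted above
# (axial elongation >= 0.20 mm per 7 months) to an approximate mm/year rate.

def annualize(elongation_mm: float, months: float) -> float:
    """Convert an elongation observed over `months` into an approximate yearly rate."""
    return elongation_mm * 12.0 / months

if __name__ == "__main__":
    rate = annualize(0.20, 7.0)
    print(f"0.20 mm per 7 months ~= {rate:.2f} mm/year")  # ~0.34 mm/year
```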
## 6. Conclusions

The rapid increase in the prevalence of myopia, especially in Asian and Western countries, has made it a significant public health concern. In fact, high myopia (≥5 D or axial length ≥26 mm) is associated with an increased risk of vision-threatening complications such as retinal detachment, choroidal neovascularization, primary open-angle glaucoma, and early-onset cataract. Many studies have suggested the involvement of both genetic and environmental factors in the development of myopia. The genetic pool is associated with both syndromic and nonsyndromic forms of myopia, whereas the environment plays an important role in nonsyndromic forms. However, we are still far from understanding its complex pathogenesis.

Various options have been assessed to prevent or slow myopia progression in children.

Environmental modifications, such as spending more time outdoors, can decrease the risk of the onset of myopia. In fact, many studies have identified an inverse association of myopia onset and progression with outdoor exposure and a direct association with near work. However, contrasting evidence has also emerged, perhaps because of biases such as recall and measurement bias.

Optical interventions such as bifocal/progressive spectacle lenses, soft bifocal/multifocal/EDOF contact lenses, and orthokeratology lenses show a moderate reduction in the myopia progression rate compared with single-vision lenses. All of these options seem to reduce hyperopic peripheral defocus, which is a stimulus for axial elongation, thus promoting myopic peripheral defocus and slowing axial elongation.

Regarding spectacle lenses, promising results have been obtained with defocus incorporated multiple segment lenses and progressive addition lenses. However, further studies are needed to confirm these findings. Conversely, undercorrection of the myopic refractive error does not slow the progression of nearsightedness. In fact, several studies have revealed no difference in progression with undercorrection, and others have reported an increase in myopia progression compared with full correction; thus, full correction of myopia is currently recommended, with optimal vision as the main aim.

Gas-permeable and soft single-vision contact lenses are prescribed solely to correct the refractive error because many studies have shown no effect on axial elongation or myopia control.

Refractive surgery may be an interesting option for treating amblyogenic anisometropia in children who refuse conventional therapy. Despite its successful outcomes in refraction and visual acuity, the use of refractive surgery in these individuals remains unclear, mainly because of the need for anesthesia, susceptibility to trauma, the lack of pediatric nomograms, the instability of the refractive error, and the lack of evidence of long-term safety. Further studies are needed to better explore the role of refractive surgery in this area.

Currently, pharmacological treatment with atropine is the most researched and effective strategy for myopia control. In particular, low-concentration atropine (0.01%) is known to maintain its efficacy in myopia control with a lower rate of side effects. Interestingly, data from studies on the effects of combined treatments, such as low-concentration atropine (0.01%) plus orthokeratology lenses or low-concentration atropine plus soft bifocal contact lenses (the Bifocal and Atropine in Myopia, BAM, study), suggest that the combination may be superior to monotherapy.
However, the BAM study is still ongoing, and no results have yet been published.

In summary, all these options for controlling myopia progression in children exhibit varying degrees of efficacy, as shown in the literature. Compared with single-vision spectacles as the control, atropine exhibits the highest efficacy; orthokeratology lenses and peripheral defocus contact and spectacle lenses have moderate efficacy, whereas bifocal or progressive addition spectacles and increased outdoor activities show lower efficacy [185].

---
*Source: 1004977-2022-06-14.xml*
--- ## Abstract This topical review aimed to update and clarify the behavioral, pharmacological, surgical, and optical strategies that are currently available to prevent and reduce myopia progression. Myopia is the commonest ocular abnormality; reinstated interest is associated with high and increasing prevalence, especially but not, in the Asian population and progressive nature in children. The growing global prevalence seems to be associated with both genetic and environmental factors such as spending more time indoor and using digital devices, particularly during the coronavirus disease 2019 pandemic. Various options have been assessed to prevent or reduce myopia progression in children. In this review, we assess the effects of several types of measures, including spending more time outdoor, optical interventions such as the bifocal/progressive spectacle lenses, soft bifocal/multifocal/extended depth of focus/orthokeratology contact lenses, refractive surgery, and pharmacological treatments. All these options for controlling myopia progression in children have various degrees of efficacy. Atropine, orthokeratology/peripheral defocus contact and spectacle lenses, bifocal or progressive addition spectacles, and increased outdoor activities have been associated with the highest, moderate, and lower efficacies, respectively. --- ## Body ## 1. Introduction Myopia is the most widespread refractive error and is principally due to the increasing axial length of the eyeball. In myopia, the distant object’s image is formed anterior to the retinal plane, leading to blurred vision, which requires correction for clear vision. Noncorrected myopia impairs the patients’ quality of life, affects school performance, and limits employability. Even corrected myopia may be responsible for serious complications such as staphyloma (outpouching of the back wall of the eye), glaucoma, cataract, choroidal neovascularization, retinal tears, schisis, and detachment; these complications together account for great economic implications for public health. Hence, many researchers and ophthalmologists have focused on myopia development and treatment.A global increase in myopia cases has garnered renewed interest. In 2000, myopia affected 1.4 billion people worldwide, while in 2050, the number is estimated to reach 4.8 billion [1]. Myopia cases are increasing in Asian and Western countries. Higher prevalence has been reported among schoolchildren in East Asia, Singapore, China, Taiwan, and South Korea [2, 3]. A recent meta-analysis including 61,946 adults showed that in Europe, myopia increased from 17.8% (95% confidence interval (CI): 17.6–18.1) to 23.5% (95% CI: 23.2–23.7) in people born between 1910 and 1939 in comparison to those born between 1940 and 1979 (P = 0.03) [4]. A significant difference in the myopia incidences based on sex was found in most studies; however, the Correction of Myopia Evaluation Trial (COMET) study suggested that males showed slower progression [5]. Further, among females, myopia progressed differently at menarche. A study by Xu et al. in China reported a 13% higher risk of myopia in premenarche girls when adjusted for the exact age and behavioral risk factors [6].Many etiological studies have assessed the role of both genetic and environmental factors in the development of myopia. Studies have reported a greater risk of myopia development in children with myopic parents. 
The Northern Ireland Childhood Errors of Refraction (NICER) study showed that the risk of myopia recurrence was 2.91 and 7.79 times more in children with one and two myopic parents, respectively [7]. Another study reported a 7.6, 14.9, and 43.6% myopia risk in children with none, one, and two myopic parents, respectively [8].Myopia can be classified as syndromic and nonsyndromic. A known genetic factor has been implicated in genesis and development of syndromic myopia (such as Marfan syndrome or congenital stationary night blindness). Nonsyndromic myopia has no clear association with a genetic mutation; however, polymorphisms in different genes are associated with nonsyndromic myopia. A recent genome-wide association study named CREAM found 24 loci associated with myopia, which increase the myopia risk up to 10 folds.Many studies have suggested that the environment plays a pivotal role in the development of nonsyndromic myopia forms; associations have been found with time spent in outdoor activities or near work, use of LED lamps for homework, population density, socioeconomic status, and use of video terminals. To control the deterioration of visual acuity, studies in recent decades tested several methods such as the use of anticholinergic drugs, correction of refractive error, multifocal spectacles or contact lenses, orthokeratology, and refractive surgery.The growing interest in understanding myopia is justified due to possibility of stopping or slowing the disease through concrete mitigation strategies or new therapies. This review provides a critical analysis of the association between myopia development and environmental factors and analyzes the available strategies to reduce myopia evolution in children. ## 2. Outdoor Time and Near Work Many studies have focused on the relationship between myopia development and progression and environmental factors such as near work, outdoor activities, sports practice, and use of technological devices. Most of these studies have suggested its inverse relationship with outdoor activities/sports and direct relationship with near work. Eppenberg and Sturm aiming to assess the protective role of outdoor light exposure in the incidence and prevalence of myopia recently summarized data from two cross-sectional studies, seven prospective cohort studies, and three intervention studies published between October 2008 and January 2019. The articles represent data of 32,381 participants between 6 and 18 years of age. Five of the nine cross-sectional studies found an inverse association [9]. Further, studies by Dirani and Sun revealed a significantly lower incidence of myopia in patients who reported a longer outdoor time (the reported odds ratio (OR), 0.90 (95% CI: 0.84–0.96, P = .004) and 0.74 (95% CI: 0.53–0.92, P < 0.001), respectively). Dirani et al. also reported that the mean amount of time of playing outdoor sports resulted to be longer among subjects without myopia (0.85 h/day, SD 0.80) than among those with myopia (0.72 h/day, SD = 0.82) (P = 0.007). Outdoor activities were associated with a lower prevalence of myopia; conversely, indoor sports were not. The data support the role of the overall outdoor activity as compared to sports alone in reducing the incidence of myopia [10, 11].Jones-Jordan et al. 
examined 514 children and found that nonmyopic children were engaged in a significantly greater amount of sports and outdoor activities than the myopic ones (11.65 (SD 6.97) vs 7.98 (SD 6.54)) hours per week (P < 0.001) [12].Conversely, a cohort study by Jacobsen et al. suggested that physical activity per sec is inversely associated with a refractive change toward myopia (P = 0.015) [13].A systematic review assessing the correlation of physical activity, comprising the data from 263 studies, identified a solid relationship of more physical activity and lower myopia, but no evidence of physical activity as an independent risk factor for myopia was obtained. Hence, as per evidence, outdoor time remains the most important factor [14].Chen et al. reported a later onset of myopia in people who spent more time outside. Guggenheim and Saxena confirmed this data (the relative risk reported was OR = 0.90 (95% CI: 0.45–0.96) andR = 0.54 (95% CI: 0.37–0.79; P = 0.002)) [15, 16]. Wu et al. showed a slower myopic shift in children who were encouraged to spend more time outside. (OR 0.46 (95% CI: 0.28–0.77); P = 0.003) [17]. However, studies by Jordan-Jones et al. Ma et al., and Hsu et al. [12, 18, 19] reported no association between myopia and time spent outdoors.A recent school-based, prospective, cluster-randomized trial was conducted to assess the relationship between time spent outdoors and the myopia onset/progression. A total of 6,295 children were randomized into a control group (n = 2,037), test group I (n = 2,329, 40 minutes outdoor time/day), or test group II (n = 1,929, 80 minutes outdoor time/day). The study failed to demonstrate any significant association between the time spent outdoor and myopia development or progression [20]. Jones-Jordan et al. did not observe any retardation in myopia development in children who spent more time outdoors, as reported by He et al. [12, 20].Many studies have identified an inverse association between myopia development and progression and outdoor exposure; however, contrasting evidence has also emerged. This may be due to biases. First, the data on near work, outdoor activities, and related parameters in almost all published studies were obtained from questionnaires and lacked uniformity. Moreover, the results of the questionnaires were influenced by geography, culture, cognitive ability, and memory bias. The refraction data might have been influenced by measurement bias. Complete cycloplegic refraction was obtained in only a part of the studies by using different drugs (tropicamide vs. cyclopentolate); therefore, these refraction results could not be considered reliable for statistical analyses.Nevertheless, existing evidence supports this association. The mechanism through which outdoor exposure may be responsible for lowering the incidence of myopia is explained by different hypotheses. Sunlight peaks at a wavelength of 550 nm, resulting roughly to the peak of sensitivity of the human eye. Indoor light peaks at a longer wavelength. Thus, most of the light beams received by the eye are focused behind the retina plane and might cause a situation similar to that of a negative lens. This phenomenon has proven to stimulate global growth in myopia [21].Another hypothesis focused on the importance of dopamine release stimulated by sunlight. Animal models (one-day-old white Australorp cockerels) were used to verify the effect of a translucent diffuser placed over the eye and kept on a 12 : 12 light/dark cycle. 
These birds exhibited excessive axial length causing myopia; however, if the diffuser was removed for 3 hours during the light period, the axial length did not grow. In birds wearing a diffuser, intravitreal injection of dopamine blocked axial growth. Dopamine antagonists exerted the opposite effects [22, 23].Myopia development and progression have been associated with higher educational levels and near work. The latter is considered a group of activities performed at short working distances such as reading, studying, computer use, playing videogames, or watching TV. School children spend a lot of time in near vision activities, and this could be regarded as a risk factor for myopia development. To study the effect of near work, a meta-analysis was conducted comprising the available literature published between April 1, 1989, and May 1, 2014, with a total of 10,384 participants aged 6–18 years. Results showed a pooled OR of 1.14 (95% CI: 1.08–1.20), advocating that near activities are associated with myopia. A subgroup analysis based on the definition of near work found that children who performed more near work were more likely to be myopic (OR = 1.85; 95% CI: 1.3–2.62;I2 85%) and that the odds ratio of myopia increased by 2% (OR = 1.02; 95% CI: 1.01–1.03; I2 42.8%) for every diopter-hour increase of near work per week [24].The Generation R Study conducted in Rotterdam tested the relationship between computer use and myopia development. This study comprised a total of 5074 children born in Rotterdam between 2002 and 2006. Data on computer use and outdoor exposure were acquired at the age of three, six, and nine years using a questionnaire; reading time and reading distance were assessed at nine years of age. Statistical analysis showed a significant association between computer use at the age of 3 years and myopia at six and nine years (OR = 1.005, 95% CI: 1.002–1.010; OR = 1.009, 95% CI: 1.002–1.0017). The cumulative time of the computer use in infancy was significantly correlated with myopia at nine years (OR = 1.005, 95% CI: 1.001–1.009). In the same study, reading time at the age of nine years was significantly associated with myopia at nine years and axial elongation. The study found that the effect of near vision activities decreases longer outdoor exposure (Figure1) [25].Figure 1 Odds ratios for near activity risk and the mean outdoor time on myopia at the age of 9 years. Near activities risk tertiles represent the combined risk of the computer use, reading, and reading distance. The outdoor time was classified into <7, 7–14, and >14 hours per week. The subset with low near risk and >14 hours per week of outdoor exposure was the reference subset (adapted from the study by Enthoven et al.).A prospective study by Oner et al. found that only reading and writing had a negative association with annual myopic progression (r = −0.362, P = 0.010), while computer use, watching television, and outdoor activities had no correlation with the annual myopia evolution rate. Different near vision activities could differently affect myopia risk at different light levels, word sizes, and working distances [26].According to Pӓrssinen and Lyyra, a correlation was found between time spent on reading or near work and myopia [27]. Conversely, the studies of Tan et al. reported no statistically significant associations between myopia progression and near activities in children [28, 29]. 
While accommodation and convergence occurring after prolonged near work have been proposed as mechanisms for the development of myopia, a strong association between accommodation and myopia has not been found [27]. Forced hyperopic defocus has been shown to be a significant stimulus for eye growth in experimental studies [30]. The coronavirus disease (COVID-19) pandemic, affecting people worldwide since the beginning of 2020, has changed people's habits and, owing to lockdown measures, led to an increase in the use of digital devices. To establish whether increased digital device use raises the incidence of myopia, Wong et al. reviewed the published studies on the association between PC, tablet, or smartphone use and myopia. They found that current evidence is inconclusive, but most of it suggests a higher risk of myopia in people who spend more time on digital screens. They argued that the COVID-19 pandemic could potentially aggravate myopia by increasing exposure to digital devices, and that the use of digital devices might have a long-term negative impact [31]. To limit these consequences, the Chinese Ministry of Education recommends spending less than 20 minutes per day on electronic homework and prohibits phones and tablets in classrooms [32]. Interestingly, exposure to red light (650 nm wavelength) at home with a desktop light therapy device has recently been shown to be effective in myopia control. At the 12-month follow-up visit, the group given red light therapy had a 70% reduction in myopia progression, and 32% of patients in this group also had ≥0.05 mm of axial length shortening [33]. Further double-masked, placebo-controlled studies are needed to clarify the long-term efficacy and safety, possible rebound effects, and optimal treatment strategies, as well as the potential underlying mechanisms.

## 3. Pharmacological Strategies

### 3.1. Atropine

Atropine, a nonselective muscarinic antagonist drug, is known for its potential myopia-inhibiting capacity. Initially, since accommodation was considered an important factor in myopia progression, atropine was used because of its cycloplegic effect.
However, this hypothesis disagrees with the lack of myopic progression control after instillation of tropicamide [42]. Several randomized clinical trials have shown that 1% and 0.5% atropine are effective in slowing myopia progression [42–45]. The Atropine in the Treatment of Myopia (ATOM) study was a randomized, double-masked, placebo-controlled trial conducted in Singapore with over 400 children aged 6 to 12 years. For two years, 1% atropine eye drops were instilled, followed by a one-year suspension. The results after two years demonstrated a 77% reduction in the progression of myopia compared with the control group (−0.28 ± 0.92 diopters (D) compared with −1.20 ± 0.69 D in the placebo group; P < 0.001), but no change in axial length compared with baseline (−0.02 ± 0.35 mm) [43]. During the washout phase, the suspension of treatment caused a rebound effect in both refraction and axial length in the eyes treated with atropine, but the final progression was still lower in the atropine-treated group than in the control group [46]. Moreover, 1% atropine caused side effects such as photophobia, blurred vision, and reduced accommodation. The safety profile of a high dosage of atropine is therefore a major concern in clinical practice, and reduced accommodation may require children to wear bifocal or progressive lenses to read. Recent clinical trials have confirmed that atropine is effective in controlling myopic progression with a dose-related effect. In a two-year study conducted by Shih et al., 200 Taiwanese children were treated with 0.5%, 0.25%, or 0.1% atropine. After two years, there was a reduction in myopia progression of 61%, 49%, and 42%, respectively, as compared with children treated with tropicamide in the control group (−0.04 ± 0.63 D/Y, −0.45 ± 0.55 D/Y, and −0.47 ± 0.91 D/Y in the 0.5%, 0.25%, and 0.1% atropine groups, respectively, in comparison to the control group (−1.06 ± 0.61 D)) [42]. The ATOM 2 study evaluated the efficacy and side effects of lower doses of atropine on myopic progression (0.5%, 0.1%, and 0.01% atropine instilled for 24 months, followed by a 12-month washout phase). The authors demonstrated a dose-related effect, with higher doses leading to greater inhibition of myopia progression (−0.30 ± 0.60 D, −0.38 ± 0.60 D, and −0.49 ± 0.63 D in the 0.5%, 0.1%, and 0.01% atropine groups, respectively; P = 0.02 between the 0.01% and 0.5% groups; P = 0.05 between the other concentrations) [47]. However, after suspension of treatment, there was a greater rebound effect in the eyes treated with higher concentrations of atropine, whereas only a slight increase was observed in the 0.01% group. After 36 months, myopia progression in the 0.01% group was −0.72 ± 0.72 D, while in the 0.5% and 0.1% groups it was −1.15 ± 0.81 D and −1.04 ± 0.83 D, respectively (P < 0.001) [48]. The authors concluded that the lowest (0.01%) concentration seems to be the safest choice, causing fewer adverse effects than higher formulations while retaining similar efficacy [47]. In a recent study of low-concentration atropine for myopia control (the LAMP study), Yam et al. compared 0.05%, 0.025%, and 0.01% atropine eye drops and described a dose-related effect on myopia progression. Atropine 0.05% was the most effective in limiting both spherical equivalent progression and axial elongation [49]. After two years, the efficacy of 0.05% atropine was double that of 0.01% atropine [50].
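The percentage reductions quoted for these trials are generally relative reductions in mean progression versus the control arm. As a minimal worked example, assuming the conventional definition of relative reduction (the formula itself is not stated in the cited reports), the ATOM figures above give:

```latex
% Relative reduction in mean myopic progression (conventional definition, assumed):
\[
\text{reduction} \;=\; \frac{\Delta SE_{\text{control}} - \Delta SE_{\text{atropine}}}{\Delta SE_{\text{control}}}
\;=\; \frac{1.20\,\mathrm{D} - 0.28\,\mathrm{D}}{1.20\,\mathrm{D}} \;\approx\; 0.77 \;(\approx 77\%).
\]
```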
Regarding combined treatment with both atropine and multifocal or bifocal lenses, studies found a lower rate of myopic progression with both 1% and 0.5% atropine plus multifocal and bifocal lenses compared with placebo plus single-vision lenses [42, 50]. The most recent report from the same study (LAMP, Phase 3), covering the third year of use, confirmed that atropine treatment achieved a better effect across all concentrations compared with the washout regimen. In particular, 0.05% atropine remained the optimal concentration over 3 years in the study population. The differences in rebound effects were clinically small across all three studied atropine concentrations. Stopping treatment at an older age and at a lower concentration is associated with a smaller rebound: the older the subject's age, the smaller the rebound effect. This might be explained by the slower inherent physiological progression of children at older ages, as previously demonstrated by the results of the LAMP study Phases 1 and 2 [51]. In conclusion, results from these studies show that atropine eye drops, alone or in combination with other treatments, are useful in reducing myopic progression, although mild side effects were described, including pupil dilation, photophobia, and near blur. To date, atropine treatment has been adopted in Asian countries, such as Taiwan and Singapore.

### 3.2. Pirenzepine

Several studies have demonstrated that pirenzepine, a selective M1 muscarinic receptor antagonist, is effective in controlling the progression of myopia in children [52–54]. A study conducted on myopic Asian children treated with a pirenzepine 2% gel twice daily found a 44% reduction in myopic progression compared with the control group. A parallel-group, placebo-controlled, double-masked, randomized trial conducted by Siatkowski et al. found a 41% reduction in myopic progression in children treated with a 2% pirenzepine gel compared with placebo (0.58 D vs. 0.99 D after two years), but the difference in axial length between the study groups was statistically insignificant. The United States-based clinical trial found that pirenzepine was well tolerated, with mild to moderate adverse effects [53]. However, pirenzepine is not currently available as a treatment option.

### 3.3. 7-Methylxanthine

7-Methylxanthine, a nonselective adenosine antagonist, has been adopted as a treatment option only in Denmark. Oral administration of 7-methylxanthine increases scleral collagen fibril diameter and amino acid content and thickens the sclera in rabbits [55]. A trial evaluated the effect of 400 mg 7-methylxanthine once a day in children compared with a placebo group. The results revealed a modest effect on myopia progression in children with moderate axial growth rates at baseline (22%), but no effect in individuals with high-progressing myopia. The treatment seemed safe, with no ocular or systemic side effects [56]. Currently, 7-methylxanthine is a nonregistered drug in Denmark. Evaluations conducted in animals [57, 58] and humans have shown potential efficacy; however, further evaluations are needed.
## 4. Surgical Strategies

Refractive surgery was first used in a pediatric population in the 1990s [59], with the aim of improving vision in a selected group of visually impaired children [60]. In the adult population, refractive surgery is used to achieve the best uncorrected vision possible. Amblyopia is a reduction in visual acuity without an organic cause, resulting from visual deprivation or abnormal interaction between the two eyes and the brain. In a population-based cross-sectional study [61], amblyopia accounted for 33% of monocular visual impairment in children. The most frequent cause of amblyopia is anisometropia. Myopic anisometropia of more than 2 D results in an increased incidence of amblyopia and reduced stereopsis. Anisometropia greater than 6 D is amblyogenic in all children [62]. Moreover, a higher degree of anisometropia hampers amblyopia therapy and leads to a worse visual outcome [63]. Glasses, contact lenses, and patching are the most common options for treating pediatric high refractive errors associated with amblyopia. However, children may refuse conventional therapy for different reasons. If a significant refractive difference exists between the two eyes, the use of spectacles may result in aniseikonia and interfere with good stereopsis. Correction with glasses, especially for high refractive errors, may lead to a narrower field of view, prismatically induced aberrations, and social stigma. Contact lenses offer a better quality of vision and a larger field of view but are associated with poor compliance due to intolerance and difficulty of insertion and removal [64]. In a study by Paysse, factors associated with failure of traditional therapy were age >6 years, poor compliance, inadequate parental understanding, initial visual acuity of 20/200 or lower, and presence of astigmatism >1.5 D [65]. Children with craniofacial and/or ear abnormalities, hearing aids, or neurobehavioral disorders may be averse to wearing spectacles. These children can develop very poor vision in the amblyopic eyes because conventional treatment is more challenging [66]. Moreover, some studies have shown that only about two-thirds of cases with anisometropic amblyopia achieve good visual outcomes if treated with conventional methods [65, 67, 68]. If myopic anisometropia is more than 6 D, the chance of achieving a best-corrected visual acuity of 20/40 or better is only 25% [63]. The role of refractive surgery in the treatment of anisometropic amblyopia in children is still not clearly defined.
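As a rough quantitative note on why spectacle correction of large anisometropia is problematic, a commonly cited rule of thumb (given here only for illustration, not a figure from the studies referenced above) is that spectacle-corrected anisometropia induces on the order of 1% of aniseikonia per diopter of interocular difference, whereas image size disparities above roughly 3%–5% are generally poorly tolerated:

```latex
% Rule-of-thumb estimate (assumed for illustration): ~1% aniseikonia per diopter
% of spectacle-corrected anisometropia.
\[
\text{aniseikonia} \;\approx\; 1\%/\mathrm{D} \times \Delta R,
\qquad
\Delta R = 6\,\mathrm{D} \;\Rightarrow\; \text{aniseikonia} \approx 6\%,
\]
% well beyond the roughly 3%-5% usually considered compatible with comfortable
% binocular fusion; contact lens or corneal surgical correction largely avoids this.
```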
Options include laser vision correction such as photorefractive keratectomy (PRK), laser-assisted subepithelial keratectomy (LASEK), laser-assisted in situ keratomileusis (LASIK), or phakic intraocular lens implantation (anterior or posterior chamber). PRK, LASEK, and LASIK yield successful refractive and visual acuity outcomes in children with high myopic anisometropia and amblyopia who are noncompliant with traditional treatment [59, 69–82]. Nucci and Drack evaluated the safety and efficacy of refractive surgery in children with unilateral high myopia to supplement optical correction. A total of 14 eyes of 14 children aged 9–14 years received surgery (11 PRK and three LASIK). The preoperative best-corrected visual acuity was 20/147, while that at 20 months was 20/121. Average preoperative and postoperative refraction (spherical equivalent) was −7.96 ± 2.16 D and −0.67 ± 0.68 D at 20 months, respectively. Only minimal corneal haze was reported [73]. Autrata and Rehurek evaluated the results of PRK for high myopic anisometropia and contact lens intolerance in 21 children aged 7–15 years. The mean preoperative and postoperative refraction was −8.93 ± 1.39 D and −1.66 ± 0.68 D, respectively (P < 0.05). A total of nine eyes gained one line of best-corrected visual acuity, and five eyes gained two lines. No significant complications were observed. The authors concluded that PRK is safe and effective over a four-year follow-up period [83]. Phillips et al. treated myopic anisometropia with LASIK in five patients between 8 and 19 years of age and evaluated the results over 18 months. The mean preoperative refractive error was −9.05 D, while the mean postoperative refractive error was −1.17 D, and two of five patients gained one line of vision [84]. In an analysis of 17 case series published by Daoud et al., 298 patients were treated with PRK, LASEK, and LASIK for severe myopic anisometropia. Follow-up ranged from 12 to 36 months. Patients' preoperative refraction was between −14.9 and −6 D, and age varied between 0.8 and 19 years. The authors found an improvement in best-corrected visual acuity from 20/30–20/400 preoperatively to 20/26–20/126 postoperatively. Improved binocular vision after surgery was found in 64% of patients in the six largest studies analyzed [64]. Interestingly, several studies reveal an increased level of stereopsis after excimer refractive surgery [80, 81, 85]. Paysse evaluated the long-term visual acuity and refractive outcome in 11 children who underwent PRK for the treatment of anisometropic amblyopia. She reported a long-term reduction in the refractive error with increased visual acuity. Stereoacuity improved in 55% of testable children [80]. Astle et al. found an improvement in best-corrected visual acuity in 63.6% of children treated with LASEK. Positive stereopsis was present in 39.4% of patients preoperatively and 87.9% postoperatively [81]. In a retrospective study, Magli et al. evaluated the use of PRK in the treatment of 18 myopic anisometropic children. Best-corrected visual acuity improved after surgery (from 20/70 to 20/50), and the level of stereopsis increased in two of 18 patients [85]. Excimer laser surgery has also been successfully used to treat high bilateral myopic amblyopia. In a case series published by Astle et al., 11 patients aged 1–17 years were treated with LASEK. The average spherical equivalent was −8 D preoperatively and −1.2 D postoperatively.
The average best-corrected visual acuity was 20/80 preoperatively and 20/50 postoperatively [76]. Tychsen reported that nine patients between 3 and 16 years of age were treated with LASEK. After surgery, uncorrected acuity improved in all eyes, with improvement in behavior and environmental visual interaction [86]. Corneal haze is the predominant complication of ablative refractive surgery. In a meta-analysis [87], LASIK patients had lower rates of postsurgical haze than PRK patients (5.3% vs. 8.5%, respectively). Postsurgical haze is more common in children than in adults, given that children have a stronger inflammatory response. Long-term corticosteroids and mitomycin C have been recommended to reduce the incidence of postsurgical haze [88]. Patient cooperation may be challenging in the case of children. During laser or intraocular refractive surgery in the adult population, the patient is asked to fixate on the operating light or laser target. Cooperation varies in children, as they may not be able to fixate, and general anesthesia might be required; however, adolescents are often able to fixate [84]. Some studies have investigated the use of different anesthesia protocols during excimer laser surgery [89, 90]. However, according to Brown [91], given that the patient's line of sight is determined by the desire to actively fixate on an object, an unconscious patient is not able to direct the fovea toward a target. Corneal refractive surgery should be centered on the intersection between the patient's line of sight and the cornea, while the laser firing axis is centered on the surgeon's line of sight. Tilting the laser firing axis relative to the patient's line of sight could result in optically asymmetric ablation. The best timing for performing refractive surgery is debatable, but studies suggest that the best results are obtained when surgery is performed early [87]. However, ocular changes such as axial growth and changes in lens thickness can affect the long-term outcomes of early surgery. In laser refractive surgery, possible corneal biomechanical changes over time must be considered [92]. In young children, corneal strength has not been fully characterized, but there is evidence that corneal strength increases with age [93]. Another concern is myopic regression: most of it occurs during the first year after surgery, with less regression over the following 2–3 years [80]. Daoud et al. observed a myopic regression of 1 D/year on average in children treated for myopic anisometropic amblyopia [64]. For these reasons, authors suggest overcorrecting and targeting slight hyperopia in myopic corrections [92]. Another option for surgery in children with high refractive errors and amblyopia is phakic intraocular lens implantation. The phakic intraocular lens was first used in the pediatric population in 1999 [94]. There are two types of FDA-approved phakic intraocular lenses: an anterior chamber phakic intraocular lens called Verisyse (Ophtec BV) in the United States, similar to the Artisan phakic intraocular lens used in Europe and Asia, and a posterior chamber phakic intraocular lens called the Visian Implantable Collamer Lens (ICL) (Staar Surgical Co).
The Visian ICL is implanted between the iris and the natural lens, with the haptics located in the ciliary sulcus. Indications for ICL implantation in the pediatric population are high anisometropia, myopia, or hyperopia noncompliant with conventional treatment; bilateral high ametropia noncompliant with conventional treatment; and high refractive amblyopia associated with neurobehavioral disorders [95, 96]. In recent years, several studies have been published on the use of anterior chamber phakic intraocular lenses for the treatment of refractive errors in children. These studies documented an improvement in uncorrected visual acuity, and surgery was well tolerated [97–99]. In a study conducted by Pirouzian et al., six pediatric patients with anisometropic myopic amblyopia underwent Verisyse anterior chamber phakic intraocular lens implantation. Patients were aged 5–11 years, and none of them were compliant with glasses or contact lenses. Results showed an improvement in best-corrected visual acuity from less than 20/400 to a mean of 20/70 postoperatively, an increase in stereopsis, and minimal side effects [97]. One of the most important concerns is potential long-term endothelial cell loss. For this reason, guidelines approve phakic intraocular lenses only when the anterior chamber depth is more than 3.2 mm. In the studies of Pirouzian et al. and Ip et al., the endothelial cell loss rate after 3–5 years of follow-up was between 6.5% and 15.2% [99, 100]. However, as with visual acuity, the endothelial count is difficult to measure in all children, and the real cell loss cannot be accurately assessed in these studies. Since 2013, different authors have reported their experience with posterior chamber phakic intraocular lenses in children, with results showing an improvement in corrected and uncorrected visual acuity [101–103]. In a large case series published in 2017, Tychsen et al. reported the results of Visian phakic intraocular lens implantation in 40 eyes of 23 children with high anisometropia and amblyopia. About 57% of the patients had a neurobehavioral disorder. Best-corrected visual acuity improved from 20/74 preoperatively to 20/33 postoperatively. Uncorrected visual acuity improved 25-fold, which is relevant given that children with neurobehavioral disorders are intolerant of glasses. Moreover, 85% of the children had improved social performance [103]. Complications in the above-mentioned studies were related to lens position, including pupillary block from an insufficiently patent peripheral iridotomy and pigment dispersion from the lens rubbing on the posterior iris [101–103]. There are several advantages of phakic intraocular lenses over laser refractive surgery: the procedure is reversible, there is less risk of refractive regression over time, and laser surgery carries a risk of corneal haze. Nevertheless, further studies are needed on the long-term effects of phakic intraocular lenses on endothelial cells, the risk of cataract formation, and angle-closure glaucoma. Despite evidence of efficacy and short-term safety, many questions about refractive surgery in children have not yet been answered. The major concerns to be explored are the lack of pediatric nomograms, the role of anesthesia, the lack of evidence regarding the effect of eye growth on long-term outcomes, the instability of the refractive error in children, susceptibility to trauma, and the lack of evidence of long-term safety.
## 5. Optical Strategies

Several strategies have been attempted to optically control the progression of myopia, including undercorrection and overcorrection. In China, two studies aimed to evaluate the progression of myopia in uncorrected eyes. In the first study, by Hu and Guo [104], 90 participants were divided into three groups: uncorrected, monocularly corrected, or binocularly corrected. The results showed that over a 12-month follow-up, the uncorrected patients had faster progression of myopia (−0.95 ± 0.12 D) than those who were fully corrected (−0.50 ± 0.15 D). However, this study had some limitations: the selection procedure and age were not specified, and the groups were not well matched. In another study, Sun et al. [105] evaluated a cohort of 121 twelve-year-old Chinese children. In the first year, myopia progression was lower in the uncorrected group (−0.39 ± 0.48 D) than in the fully corrected group (−0.57 ± 0.36 D; P = 0.03). This difference remained significant even after adjusting for baseline spherical equivalent refraction, age of myopia onset, height, parental myopia, and time spent in outdoor and indoor activities (−0.39 ± 0.06 D vs. −0.58 ± 0.06 D, P < 0.01). Lastly, Ong et al. [106] reported no difference in myopic progression over a three-year period among myopic children who wore fully corrective glasses full-time, part-time, or not at all.

### 5.1. Undercorrection of Myopia

Undercorrection is one of the optical strategies proposed to slow the progression of myopia. It is based on the rationale that in undercorrected eyes, the accommodative response for near vision is reduced [107]. In fact, in animal models (chicks, tree shrews, marmosets, and infant monkeys) [21, 108, 109], a myopic defocus, in which the retinal image is formed in front of the retina, was capable of inhibiting eyeball elongation and the associated myopic progression. Tokoro and Kabe [110] found that in a population aged 7–15 years, the rate of myopia progression was lower with undercorrection (−0.54 ± 0.39 D) than with full correction, whether worn full-time (−0.75 ± 0.27 D) or part-time (−0.62 ± 0.32 D). This study had several limitations, including a small sample size, limited statistical analysis, and the concurrent use of pharmacological intervention for myopia control. In the study by Li et al. [111], the study population consisted of 12-year-old Chinese children. One hundred twenty patients were undercorrected, and 133 patients were fully corrected; at one year, no statistically significant difference was observed between the two groups. However, a regression analysis showed a significant association when the refractive error, rather than the axial length, was considered: the progression of myopia decreased with an increasing amount of undercorrection (R2 = 0.02; P = 0.02). Nonetheless, to achieve a reduction in myopia progression of 0.25 D, undercorrection of more than 1.50 D was required. In the studies by Adler and Millodot [107] and Koomson et al. [112], undercorrection did not produce a statistically significant reduction in myopia progression. Adler and Millodot found that in a cohort of 48 children aged 6–15 years, undercorrection by 0.50 D was associated with 0.17 D more myopia progression than full correction. Koomson et al. enrolled 150 Ghanaian children who were divided into two groups (n = 75). The first group was undercorrected by 0.50 D, while the second group was fully corrected.
At two years, myopia progressed at the same rate in both groups (−0.54 ± 0.26 D in the fully corrected group vs. −0.50 ± 0.22 D in the undercorrected group; P = 0.31). Conversely, three studies have reported that undercorrection causes more rapid progression of myopia. Chung et al. [113] reported that 47 children undercorrected by 0.75 D had greater progression of myopia compared with the 47 children who were fully corrected (−1.00 D vs. −0.77 D; P < 0.01); however, axial elongation was smaller in the undercorrected eyes (0.58 mm vs. 0.65 mm; P = 0.04). Chen [114] designed a study in which 77 fully corrected eyes were compared with 55 undercorrected eyes. The two groups were matched for age, sex, and refractive error. At a 12-month interval, the undercorrected (−0.25 to −0.50 D) group exhibited significantly greater myopic progression (−0.60 D vs. −0.52 D; no standard deviation, standard error, or 95% confidence interval was reported). Vasudevan et al. [115] retrospectively examined myopia progression records from the USA in relation to the level of undercorrection versus full correction of myopia. They found that greater undercorrection was associated with greater progression of myopia (P < 0.01). In all these scenarios, both eyes were corrected, either undercorrected or fully corrected; however, two studies evaluated the rate of progression of myopia when only one of the eyes was corrected. In a population of 18 children aged 11 years, Phillips [116] noticed that undercorrection of the nondominant eye was associated with slower progression of myopia compared with the dominant eye, which was fully corrected; the intereye difference was 0.36 D/y (P = 0.002). However, Hu and Guo [104] reported the opposite result, in which undercorrection of one eye in myopic children was associated with faster progression than in fully corrected eyes (−0.67 ± 0.22 D vs. −0.50 ± 0.15 D). Unfortunately, considering all human trials, the evidence supporting undercorrection as a feasible strategy for slowing the progression of myopia is weak. Moreover, many pediatric practitioners suggest that the goal is to attain optimal vision, which can be achieved with full correction.

### 5.2. Overcorrection of Myopia

In a case-control study by Goss [117], 36 children aged 7–15 years were overcorrected by 0.75 D and matched with control individuals randomly selected from the files of a university optometry clinic. The rate of progression differed between the groups but not statistically significantly: −0.49 D/year in the overcorrected group versus −0.47 D/year in the control group.

### 5.3. Bifocal and Multifocal Lenses

The rationale for using bifocal or multifocal lenses to slow the progression of myopia is based on two theories. The first, demonstrated in animal models [108, 118], is based on central and peripheral hyperopic retinal defocus caused by a large accommodative lag [119, 120], which is defined as the difference between the accommodative demand and the accommodative response (see the worked example below). A large accommodative lag causes a hyperopic retinal defocus, which, when central, stimulates axial elongation. Furthermore, in the case of peripheral defocus, the eye globe seems to acquire a more prolate shape. However, this stimulus is nullified by short periods of clear vision [21]; therefore, whether transient hyperopic retinal blur can lead to the onset and/or progression of myopia remains unclear.
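As a minimal worked illustration of the accommodative lag defined above, the accommodative demand of a near target can be approximated (for a distance-corrected eye, a simplification assumed here for illustration) as the reciprocal of the working distance in meters:

```latex
% Accommodative demand (thin-lens approximation for a distance-corrected eye) and lag:
\[
\text{demand} \approx \frac{1}{d\,[\mathrm{m}]},
\qquad
\text{lag} = \text{demand} - \text{response}.
\]
% Example: a reading target at 33 cm gives a demand of about 3.00 D; a measured
% accommodative response of 2.50 D then gives a lag of 0.50 D, the magnitude used,
% for instance, as the initial inclusion threshold in the COMET 2 trial described below.
\[
d = 0.33\,\mathrm{m} \;\Rightarrow\; \text{demand} \approx 3.00\,\mathrm{D};
\quad
\text{response} = 2.50\,\mathrm{D} \;\Rightarrow\; \text{lag} = 0.50\,\mathrm{D}.
\]
```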
The second theory assumes that during accommodation, mechanical tension is created by the crystalline lens or ciliary body. On the one hand, this tension restricts equatorial ocular expansion, causing accelerated axial elongation; on the other hand, as the ciliary-choroidal tension increases, the effort needed to accommodate increases as well. This probably leads to a further increase in accommodative lag in children, which would then be a consequence rather than a cause of myopia [121–125]. Regarding the association between myopia in children and accommodative lag, it has been reported that (1) compared with emmetropic children, myopic children generally show insufficient accommodation with larger accommodative lags, even before the development of myopia [120, 123, 126, 127], and (2) in myopic children, a larger accommodative lag correlates with faster myopia progression [128]. Unfortunately, as with the undercorrection approach, no consensus exists regarding the use of bifocal or multifocal lenses to slow the progression of myopia. This is mainly due to the standard near addition power used in the trials, typically between +1.00 D and +2.00 D, which ignores interindividual differences and may even cause overcorrection in some cases. The COMET study was a randomized, multicenter clinical trial in which 469 children aged 6–11 years were enrolled and divided into two groups: the first group was assigned progressive addition lenses (with a +2.00 D addition) and the second group single-vision lenses. At three years, the difference between the progressive addition lens group and the control group was 0.20 ± 0.08 D in refraction and 0.11 ± 0.03 mm in axial elongation. Although statistically significant, these differences were considered clinically insignificant [129]. The same conclusions were obtained in the COMET 2 study [130]. A total of 180 children aged 8–12 years with a spherical equivalent refraction from −0.75 D to −2.50 D and near esophoria of ≥2 prism diopters were enrolled. An additional inclusion criterion was a high accommodative lag, initially set to at least 0.50 D (accommodative response less than 2.50 D for a 3.00 D demand) and subsequently restricted further to at least 1.00 D. A total of 110 children completed the three-year study; the progression of myopia was −0.87 D in the group treated with progressive addition lenses (+2.00 D) versus −1.15 D in the single-vision lens group. Nevertheless, despite the results being statistically significant, the authors considered them clinically insignificant. Cheng et al. [131] evaluated the use of bifocal and prismatic bifocal lenses. One hundred thirty-five Chinese-Canadian children aged 8–13 years, with myopia progression of at least 0.50 D in the preceding year, were randomly assigned to one of three treatments: single-vision lenses (control, n = 41), +1.50 D executive bifocals (n = 48), or +1.50 D executive bifocals with 3-Δ base-in prism in the near segment of each lens. At the three-year follow-up, the progression of myopia in terms of diopters and axial elongation was highest in children treated with single-vision lenses (−2.06 D and 0.82 mm) compared with those treated with bifocal (−1.25 D and 0.57 mm) or prismatic bifocal lenses (−1.01 D and 0.54 mm).
Furthermore, in children with high accommodative lags (>1.00 D), no difference in myopia control was observed between bifocal and prismatic bifocal lenses. Instead, in children who showed low lags of accommodation (≤1.00 D), greater benefits were observed with prismatic bifocal lenses. According to the authors, this could be explained by the prisms in the prismatic bifocal lenses reducing convergence demand and lens-induced exophoria. Currently, research is moving from the correction of the hyperopic shift to the induction of myopic peripheral defocus. The rationale is based on two findings: (1) visual signals derived from the peripheral retina are stronger than those originating from the central retina [132, 133], and (2) optical defocus in the peripheral retina governs ocular growth: peripheral hyperopic defocus stimulates axial elongation of the eye, while the opposite effect is demonstrated with peripheral myopic defocus (Figure 2) [134–140]. Figure 2: Peripheral hyperopic defocus (red arrow) might lead to axial elongation. A myopic defocus (green arrow) can be achieved with orthokeratology, contact lenses, laser refractive surgery, and spectacle lenses (defocus incorporated multiple segment lenses and Apollo progressive addition lenses). Two types of spectacle lenses can induce peripheral myopic defocus: defocus incorporated multiple segment (DIMS) lenses and Apollo progressive addition lenses (Apollo PALs, Apollo Eyewear, River Grove, IL, USA). DIMS lenses [141] are custom-made plastic spectacle lenses. Each lens includes a central optical zone (9 mm in diameter) for correcting the distance refractive error and an annular multifocal zone (33 mm in diameter) containing multiple segments of relative positive power (+3.50 D); the diameter of each segment is 1.03 mm. Lam et al. [141] evaluated the use of DIMS versus single-vision lenses in 160 children. The results indicated that myopia progressed 52% more slowly in the DIMS group than in the single-vision group (−0.41 ± 0.06 D in the DIMS group vs. −0.85 ± 0.08 D in the single-vision group; mean difference −0.44 ± 0.09 D, P < 0.001). Moreover, axial elongation was 62% lower in children in the DIMS group (0.21 ± 0.02 mm) than in the single-vision group (0.55 ± 0.02 mm; mean difference 0.34 ± 0.04 mm, P < 0.001). These preliminary results were confirmed after a 3-year follow-up, showing that the myopia control effect was sustained in the third year in children who had used the DIMS spectacles in the previous 2 years and was also observed in children switching from single-vision to DIMS lenses [142]. Interestingly, in a study by Zhang et al. [143], baseline relative peripheral refraction (RPR) was assessed as a variable influencing the myopia control effect in myopic children wearing DIMS lenses. The authors concluded that DIMS lenses slowed down myopia progression and that myopia control was better in children with baseline hyperopic RPR than in children with myopic RPR. This may partially explain why the efficacy of DIMS technology varies among myopic children and advocates the need for customized myopic defocus to optimize myopia control effects.
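For clarity, relative peripheral refraction as used in the study above is conventionally the difference between the off-axis and the foveal (central) refraction. The sketch below states this definition with illustrative numbers; the specific values are assumed for the example and are not taken from Zhang et al.

```latex
% Relative peripheral refraction (RPR), conventional definition:
\[
\mathrm{RPR}(\theta) \;=\; SE_{\text{peripheral}}(\theta) - SE_{\text{central}} .
\]
% Illustrative (assumed) example: a central refraction of -3.00 D with a refraction of
% -2.00 D at 30 degrees of eccentricity gives RPR = +1.00 D, i.e., a relatively
% hyperopic periphery, the profile for which better myopia control with DIMS lenses
% was reported.
\[
SE_{\text{central}} = -3.00\,\mathrm{D},\quad
SE_{\text{peripheral}}(30^{\circ}) = -2.00\,\mathrm{D}
\;\Rightarrow\; \mathrm{RPR} = +1.00\,\mathrm{D}.
\]
```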
Indeed, similar results were found in animal studies, showing that greater hyperopic defocus leads to more myopia progression, while inducing myopic defocus retards myopia progression [144]. Outcomes in infant monkeys and chicks suggested that spatial resolution at the anatomic level of the optical pathway could modulate overall eye growth [145]. Animal studies using contact lenses with embedded myopic defocus found that myopia progression could be slowed by 20% to 60% [146, 147]. The Apollo progressive addition lenses comprise an asymmetrical myopic defocus design with three myopic defocus zones, including a +2.50 D full-positive power superior zone, an 80% full myopic defocus power nasal zone, and a 60% full myopic defocus power temporal zone. Currently, a prospective, multicenter, randomized controlled trial, promoted by Li, is ongoing to evaluate the possible efficacy of the defocus incorporated multiple segment and Apollo progressive addition lenses [148].

### 5.4. Contact Lenses and Orthokeratology in Myopia Control

As previously reported, one theory for eye elongation suggests that axial elongation is caused by peripheral retinal hyperopic defocus [105, 135, 149, 150]. This theory has led researchers to consider that reducing peripheral hyperopic defocus or inducing peripheral myopic defocus with bifocal, progressive, or multifocal lenses may help prevent myopic progression. In animal models, evidence suggests that the imposition of hyperopic or myopic defocus with negative or positive power lenses, respectively, can influence eye growth and lead to compensatory refractive changes: hyperopic defocus leads to longer and more myopic eyes, and myopic defocus leads to shorter and more hyperopic eyes [151–156]. This supports the strategy of slowing axial elongation with optical treatments that correct distance vision while achieving simultaneous myopic defocus. The reduction of peripheral retinal hyperopic defocus by contact lenses represents a new and interesting area of research that could be an effective intervention for myopia control. Effective contact lens options for myopia control include multifocal, extended depth of focus (EDOF), and orthokeratology contact lenses.

### 5.5. Single-Vision Rigid Gas-Permeable and Soft Contact Lenses

Single-vision lenses are intended to correct the refractive error and are not prescribed for myopia control [149, 150]. Over several decades, there have been suggestions that gas-permeable contact lenses (not of orthokeratology design) can slow myopia progression in children, but the supporting studies have shown important limitations in their design [157–160]. Nevertheless, well-conducted studies have recently demonstrated that gas-permeable contact lenses have no effect on the progression of myopia in children [160], even among children who use them regularly. These lenses temporarily flatten the corneal curvature without affecting axial elongation. Although Atchison [161] has shown that spherical contact lenses produce more peripheral myopic shift than spherically surfaced spectacle lenses, some prospective randomized studies did not find any differences in the myopia progression rate between soft contact lens and spectacle wearers [162, 163]. However, other studies have tried to compare rigid with soft contact lenses. Katz et al. [160] found no difference in myopia progression or axial elongation over a period of two years between children wearing gas-permeable and soft single-vision contact lenses. Walline et al.
[162] reported no difference in the amount of axial elongation between gas-permeable and soft single-vision contact lens wearers.

### 5.6. Soft Bifocal, Peripheral Gradient, and EDOF Contact Lenses

Three different promising types of contact lenses for myopia control in children have been studied: bifocal concentric lenses, peripheral gradient lenses, and EDOF contact lenses (Figure 3). Figure 3: Single-vision contact lenses (CLs) provide a peripheral hyperopic defocus. A peripheral myopic defocus can be achieved with peripheral gradient CLs, bifocal CLs, and EDOF CLs. The first two multifocal contact lens designs include a central area for correcting myopia: bifocal concentric lenses use concentric rings of positive power addition to concurrently impose peripheral myopic defocus, whereas peripheral gradient lenses produce a constant peripheral myopic defocus that increases gradually from the central optic axis toward the periphery [164]. The third type is based on the EDOF concept and is designed to incorporate and manipulate selected higher-order aberrations (mainly spherical aberration) so that global retinal image quality is optimized for points at and anterior to the retina and degraded for points posterior to the retina. It was hypothesized that poor image quality posterior to the retina prevents axial elongation [165]. Having demonstrated the ability to slow both refractive and axial length progression of myopia by around 30%–50% [166, 167], these contact lens options can correct myopia while also providing a treatment strategy for myopia control. In contrast, spectacle lens alternatives have been less effective for myopia control [168], except for one specific prismatic bifocal design [131] and a novel multisegment defocus design [141]. Moreover, in clinical studies, contact lenses provide better lens centration and are less affected by eye movements than spectacle lenses [135]. Data from two recent clinical pilot studies showed that adding myopic defocus to the distance correction reduced myopia progression by an average of 0.27 D/year after one year [147, 169], which is slightly better than the effect seen at one year using progressive addition lenses or bifocal lenses [129, 130, 170–172]. MiSight 1 day is a daily-replacement hydrophilic soft bifocal contact lens approved by the FDA for the correction of nearsightedness and the slowing of its progression in children aged 8 to 12 years with a refraction of −0.75 to −4.00 D (spherical equivalent) and astigmatism of 0.75 D or less at the beginning of treatment. MiSight's ActivControl™ technology is based on an optic zone concentric ring design. Concentric zones of alternating distance and near power produce two focal planes, allowing for the correction of the refractive error together with 2.00 D of simultaneous myopic retinal defocus. A two-year randomized clinical trial [164] showed less progression and axial elongation in the MiSight group than in the single-vision spectacle group. Several studies [147, 164, 169, 173–178] published between 2011 and 2016 showed a reduction of 38.0% in myopia progression and 37.9% in axial elongation with multifocal soft contact lenses. In 2014, Benavente-Perez et al. [135] showed the effect of soft bifocal contact lenses on eye growth and the refractive state of 30 juvenile marmosets by imposing hyperopic and myopic defocus on their peripheral retina.
Each marmoset wore one of three investigational annular bifocal contact lens designs in the right eye and a plano contact lens in the left eye as a control for 10 weeks. The three types of lenses had a plano center zone (1.5 mm or 3 mm) and +5 D or −5 D in the periphery (referred to as +5 D/1.5 mm, +5 D/3 mm, and −5 D/3 mm). The results were compared with untreated, single-vision positive and negative, and +5/−5 D multizone lens-reared marmosets. Eyes treated with positive power in the periphery were shown to grow significantly less than untreated eyes and eyes with multizone contact lenses, supporting the use of bifocal contact lenses as an effective treatment for myopia control. Moreover, the treatment effect was associated with the size of the peripheral treatment zone as well as with the peripheral refractive state and the eye growth rate before the treatment started. The Bifocal Lenses In Nearsighted Kids (BLINK) randomized clinical trial [179] recently examined the role of soft multifocal lenses in slowing myopia progression in children, comparing high-add power (+2.50 D) with medium-add power (+1.50 D) and single-vision contact lenses. A total of 294 children with −0.75 D to −5.00 D of spherical component myopia and less than 1.00 D of astigmatism were enrolled, with a three-year follow-up. Adjusted three-year myopia progression was −0.60 D for high-add power, −0.89 D for medium-add power, and −1.05 D for single-vision contact lenses. This demonstrated that treatment with high-add power multifocal contact lenses significantly reduced the rate of eye elongation compared with medium-add power multifocal and single-vision contact lenses. However, further research is required to understand the clinical importance of these data. EDOF contact lenses were tested in a three-year prospective, double-blind trial [165] that demonstrated their efficacy in slowing myopia progression. A total of 508 children with a cycloplegic spherical equivalent of −0.75 to −3.50 D were enrolled and randomized into one of five groups: one group with single-vision lenses, two groups with bifocal lenses, and two groups with EDOF contact lenses (configured to offer EDOF of up to +1.75 D and +1.25 D). At two years, the two groups of EDOF lenses slowed myopia by 32% and 26% and reduced axial length elongation by 25% and 27%, respectively. However, efficacy was not significantly different between the bifocal and EDOF lens groups.

### 5.7. Orthokeratology (Ortho-K) Lenses

Orthokeratology (ortho-k) is defined as a "reduction, modification, or elimination of a refractive error by programmed application of contact lenses [180]." It refers to the application of a rigid contact lens at night to induce temporary changes in the shape of the corneal epithelium, allowing for clear, unaided daytime vision. In the 1950s, Wesley and Jessen incidentally observed spectacle blur experienced by patients after wearing hard contact lenses. This blurring was subsequently related to lens-induced epithelial reshaping, which was then utilized for therapeutic purposes [181]. Studies have shown that myopic orthokeratology lenses produce a flattening of the central cornea and a steepening of the midperipheral cornea, accompanied by changes in epithelial thickness (Figure 4) [182–184]. Figure 4: Epithelial remodeling is achieved with orthokeratology. Central corneal flattening is accompanied by midperipheral steepening (tangential map, (a)), due to accumulation of epithelium (epithelial thickness map, (b)).
Although these lenses were designed for refractive error correction, studies have revealed a secondary advantage of slowing myopic progression [149] by creating peripheral myopic defocus secondary to epithelial reshaping. A number of studies have shown a 30%–71% reduction in axial elongation compared with controls [150, 185, 186]. Other studies and meta-analyses have revealed a 40%–60% mean reduction in the rate of refractive change compared with controls using spectacles to correct myopia [168, 187–194]. In one of the first trials, the Retardation of Myopia in Orthokeratology study [195], axial elongation was reported to be slowed by an average of 43%. In a second trial, the High Myopia–Partial Reduction Orthokeratology study [196], highly myopic individuals were enrolled and randomly assigned to partial reduction orthokeratology or single-vision spectacle groups. The first group needed to wear single-vision spectacles to correct residual refractive errors during the day. In this group, axial elongation was 63% less than that of the second group. More recently, orthokeratology and gas-permeable lenses have been compared using a novel experimental study design [197]. Patients were fitted with overnight orthokeratology in one eye and traditional rigid gas-permeable lenses for daytime wear in the contralateral eye. The lenses were worn for six months. After a washout period of 2 weeks, the lens-eye combinations were reversed and lens wear was continued for a further six months. The results revealed no increase in axial elongation over either the first or second six-month period for eyes with orthokeratology, compared with an increase of 0.04 mm and 0.09 mm, respectively, in eyes with gas-permeable lenses. A recent one-year retrospective study by Na and Yoo [198] investigated myopic progression in children with myopic anisometropia who underwent orthokeratology treatment in their myopic eye and no correction in their emmetropic eye. The results showed a statistically significant reduction in axial length elongation in the treated eye (0.07 ± 0.21 mm, P = 0.038) as compared with the control eye (0.36 ± 0.23 mm, P < 0.001). In a retrospective study, Zhang and Chen [199] compared the effect of toric versus spherical design orthokeratology lenses on myopia progression in children with moderate-to-high astigmatism (cylinder >1.5 D). Toric orthokeratology wearers had a 55.6% slower rate of axial elongation than the spherical group. Some studies have tried to assess the effects of combined treatments, such as orthokeratology lenses and atropine. Studies by Wan et al. [200] and Kinoshita et al. [201] found improved myopia control by combining the two strategies compared with orthokeratology monotherapy. Although orthokeratology has a significant effect on slowing axial elongation, the results vary among individuals. Some patients show little or no myopic progression, while others continue to progress. Some studies [202–207] have shown that better myopia control is positively associated with a higher degree of baseline myopia, older age at myopia onset and at initiation of treatment, larger pupil size, and a smaller resulting central optical zone (more peripheral myopia induced by a ring of steepening outside the treatment zone). Cheung et al. [186] suggest that ideal candidates for orthokeratology might be children around 6–9 years of age with fast myopic progression (increase in axial length of ≥0.20 mm/7 months or spherical equivalent of ≥1 diopter/year).
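To relate the two fast-progression criteria quoted above, the axial threshold can be annualized and converted to an approximate refractive change using the commonly cited approximation of roughly 2–3 D of myopic shift per millimetre of axial growth in children; the 2.7 D/mm factor below is an assumption for illustration, not a value given by Cheung et al.

```latex
% Annualizing the axial criterion and converting with an assumed ~2.7 D/mm factor:
\[
\frac{0.20\,\mathrm{mm}}{7\,\text{months}} \times 12\,\frac{\text{months}}{\text{year}} \approx 0.34\,\mathrm{mm/year},
\qquad
0.34\,\mathrm{mm/year} \times 2.7\,\mathrm{D/mm} \approx 0.9\,\mathrm{D/year},
\]
% i.e., the axial and refractive criteria correspond to roughly the same rate of progression.
```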
## 5.1. Undercorrection of Myopia

Undercorrection is one of the optical strategies proposed to slow the progression of myopia. It is based on the rationale that in undercorrected eyes, the accommodative response for near vision is reduced [107]. In fact, in animal models (chicks, tree shrews, marmosets, and infant monkeys) [21, 108, 109], a myopic defocus, in which the retinal image is formed in front of the retina, was capable of inhibiting eyeball elongation and the associated myopic progression.

Tokoro and Kabe [110] found that in a population aged 7–15 years, the rate of myopia progression was lower with undercorrection (−0.54 ± 0.39 D) than with full correction, whether worn full-time (−0.75 ± 0.27 D) or part-time (−0.62 ± 0.32 D). This study had several limitations, including a small sample size, limited statistical analysis, and the concurrent use of pharmacological intervention for myopia control.

In the study by Li et al. [111], the study population consisted of 12-year-old Chinese children. One hundred twenty patients were undercorrected and 133 patients were fully corrected; at one year, no statistically significant difference was observed between the two groups. However, a regression analysis showed a significant association if the refractive error, not the axial length, was considered. In this case, the progression of myopia decreased with an increasing amount of undercorrection (R2 = 0.02; P = 0.02). However, in order to reduce myopia progression by 0.25 D, undercorrection of more than 1.50 D was required.

In the studies by Adler and Millodot [107] and Koomson et al. [112], undercorrection did not produce a statistically significant reduction in myopia progression. Adler and Millodot found that in a cohort of 48 children aged 6–15 years, undercorrection by 0.50 D was associated with myopia progression of 0.17 D when compared to full correction. Koomson et al. enrolled 150 Ghanaian children who were divided into two groups (n = 75). The first group was undercorrected by 0.50 D, while the second group was fully corrected. At two years, myopia progressed at the same rate in both groups (−0.54 ± 0.26 D in the fully corrected group vs −0.50 ± 0.22 D in the undercorrected group; P = 0.31).

Conversely, three studies have reported that undercorrection causes a more rapid progression of myopia. Chung et al. [113] reported that 47 children undercorrected by 0.75 D had a greater progression of myopia compared with the 47 children who were fully corrected (−1.00 D vs −0.77 D; P < 0.01); however, the axial elongation was smaller in the undercorrected eyes (0.58 mm vs 0.65 mm; P = 0.04). Chen [114] designed a study in which 77 fully corrected eyes were compared to 55 undercorrected eyes. The two groups were matched for age, sex, and refractive error. At a 12-month interval, the undercorrected (−0.25 to −0.50 D) group exhibited a significantly greater myopic progression (−0.60 D vs −0.52 D; no standard deviation, standard error, or 95% confidence interval was reported). Vasudevan et al. [115] retrospectively examined myopia progression rate records from the USA and the level of undercorrection of myopia versus full correction.
They found that greater undercorrection was associated with a greater progression of myopia (P < 0.01).

In all these scenarios, both eyes were corrected, either undercorrected or fully corrected. However, two studies evaluated the rate of progression of myopia when only one eye was corrected. In a population of 18 children aged 11 years, Phillips [116] noticed that undercorrection of the nondominant eye was associated with a slower progression of myopia compared to that in the dominant eye, which was fully corrected. The intereye difference was 0.36 D/y (P = 0.002). However, Hu and Guo [104] reported the opposite result: undercorrection of one eye in myopic children was associated with a faster progression than in the fully corrected eyes (−0.67 ± 0.22 D vs −0.50 ± 0.15 D).

Unfortunately, considering all human trials, the evidence supporting undercorrection as a feasible means of slowing the progression of myopia is low. Moreover, many pediatric practitioners suggest that the goal is to attain optimal vision, which can be achieved by full correction.

## 5.2. Overcorrection of Myopia

In a case-control study by Goss [117], 36 children aged 7–15 years were overcorrected by 0.75 D and matched with control individuals randomly selected from the files of a university optometry clinic. The rate of progression differed between the groups but not significantly: −0.49 D/year in the overcorrected group versus −0.47 D/year in the control group.

## 5.3. Bifocal and Multifocal Lenses

The rationale for using bifocal or multifocal lenses to slow the progression of myopia is based on two theories. The first, proven in animal models [108, 118], is based on central and peripheral hyperopic retinal defocus caused by a large accommodative lag [119, 120], defined as the residual refractive error corresponding to the difference between the accommodative demand and the accommodative response. A large accommodative lag causes a hyperopic retinal defocus; a central defocus stimulates axial elongation, while in the case of peripheral defocus, the eye globe seems to acquire a more prolate shape. However, this stimulus is nullified by short periods of clear vision [21]; therefore, whether transient hyperopic retinal blur can lead to the onset and/or progression of myopia remains unclear.

The second theory assumes that during accommodation, mechanical tension is created by the crystalline lens or ciliary body. On the one hand, this tension restricts equatorial ocular expansion, causing accelerated axial elongation; on the other hand, as the ciliary-choroidal tension increases, the effort needed to accommodate increases as well. This probably leads to a further increase in accommodative lag in children, which is a consequence rather than a cause of myopia [121–125]. Regarding the association between myopia in children and accommodative lag, it has been reported that:

(1) Compared to emmetropic children, myopic children generally show insufficient accommodation with larger accommodative lags, even before the development of myopia [120, 123, 126, 127].

(2) In myopic children, a larger accommodative lag correlates with a faster myopia progression [128].

Unfortunately, as with the undercorrection approach, no consensus exists regarding the use of bifocal or multifocal lenses to slow the progression of myopia.
This is mainly because the trials used a standard near-addition power, typically between +1.00 D and +2.00 D, which ignores interindividual differences and may even overcorrect some children.

The COMET study was a randomized, multicenter clinical trial in which 469 children aged 6–11 years were enrolled and divided into two groups: the first group was assigned to progressive addition lenses (with a +2.00 D addition) and the second group to single-vision lenses. At three years, the difference between the progressive addition lens group and the control group was 0.20 ± 0.08 D in refraction and 0.11 ± 0.03 mm in axial elongation. Even though statistically significant, these differences were considered clinically insignificant [129].

The same conclusions were reached in the COMET 2 study [130]. A total of 180 children aged 8–12 years with a spherical equivalent refraction from −0.75 D to −2.50 D and near esophoria ≥2 prism diopters were enrolled. An additional inclusion criterion was a high accommodative lag, initially set to at least 0.50 D (accommodative response less than 2.50 D for a 3.00 D demand) and subsequently restricted further to at least 1.00 D. A total of 110 children completed the study in three years; the progression of myopia was −0.87 D in the group treated with progressive addition lenses (+2.00 D) versus −1.15 D in the single-vision lens group. Despite being statistically significant, the authors considered the results clinically insignificant.

Cheng et al. [131] attempted to evaluate the use of bifocal and prismatic bifocal lenses. One hundred thirty-five Chinese-Canadian children aged 8–13 years with myopia progression of at least 0.50 D in the preceding year were randomly assigned to one of three treatments: single vision (control, n = 41), +1.50 D executive bifocals (n = 48), and +1.50 D executive bifocals with 3-Δ base-in prism in the near segment of each lens. At the three-year follow-up, the progression of myopia in terms of diopters and axial elongation was highest in children treated with single-vision lenses (−2.06 D and 0.82 mm) compared to those treated with bifocal (−1.25 D and 0.57 mm) or prismatic bifocal lenses (−1.01 D and 0.54 mm). Furthermore, in children with high accommodative lags (>1.00 D), no difference was observed in myopia control between bifocal and prismatic bifocal lenses. Instead, in children who showed low lags of accommodation (≤1.00 D), greater benefits were observed with prismatic bifocal lenses. According to the authors, this may be because the base-in prisms reduce the convergence demand and the lens-induced exophoria.

Currently, research is moving from the correction of the hyperopic shift to the induction of myopic peripheral defocus. The rationale is based on two findings:

(1) Visual signals derived from the peripheral retina are stronger than those originating from the central retina [132, 133].

(2) Optical defocus in the peripheral retina governs ocular growth: peripheral hyperopic defocus stimulates axial elongation of the eye, while the opposite effect is demonstrated with peripheral myopic defocus (Figure 2) [134–140].

Figure 2 Peripheral hyperopic defocus (red arrow) might lead to axial elongation.
A myopic defocus (green arrow) can be achieved with orthokeratology, contact lenses, laser refractive surgery, and spectacle lenses (defocus incorporated multiple segment lenses and Apollo progressive addition lenses).

Two types of spectacle lenses can induce peripheral myopic defocus: defocus incorporated multiple segment (DIMS) lenses [141] and Apollo progressive addition lenses (Apollo PALs, Apollo Eyewear, River Grove, IL, USA).

DIMS lenses are custom-made plastic spectacle lenses. Each lens includes a central optical zone (9 mm in diameter) for correcting the distance refractive error and an annular multifocal zone with multiple segments (33 mm in diameter) with a relative positive power (+3.50 D). The diameter of each segment is 1.03 mm. Lam et al. [141] evaluated the use of DIMS versus single-vision lenses in 160 children. The results indicated that myopia progressed 52% more slowly in the DIMS group than in the single-vision group (−0.41 ± 0.06 D in the DIMS group and −0.85 ± 0.08 D in the single-vision group; mean difference −0.44 ± 0.09 D, P < 0.001). Moreover, axial elongation was 62% lower in the DIMS group (0.21 ± 0.02 mm) than in the single-vision group (0.55 ± 0.02 mm; mean difference 0.34 ± 0.04 mm, P < 0.001). These preliminary results were confirmed after a 3-year follow-up, showing that the myopia control effect was sustained in the third year in children who had used the DIMS spectacles in the previous 2 years and was also seen in the children switching from single-vision to DIMS lenses [142]. Interestingly, in a study by Zhang et al. [143], baseline relative peripheral refraction (RPR) was assessed as a variable influencing the myopia control effect in myopic children wearing DIMS lenses. The authors concluded that DIMS lenses slowed down myopia progression and that myopia control was better for children with baseline hyperopic RPR than for children with myopic RPR. This may partially explain why the efficacy of DIMS technology varies among myopic children and advocates the need for customized myopic defocus to optimize the myopia control effect. Indeed, similar results were found in animal studies, showing that a greater hyperopic defocus leads to more myopia progression, while inducing myopic defocus retards myopia progression [144]. Outcomes in infant monkeys and chicks suggested that spatial resolution at the anatomic level of the optical pathway could modulate overall eye growth [145]. Animal studies using contact lenses with embedded myopic defocus found that myopia progression could be slowed by 20% to 60% [146, 147].

The Apollo progressive addition lenses comprise an asymmetrical myopic defocus design with three myopic defocus zones: a +2.50 D full-positive-power superior zone, a nasal zone with 80% of the full myopic defocus power, and a temporal zone with 60% of the full myopic defocus power. Currently, a prospective, multicenter, randomized controlled trial, promoted by Li, is ongoing to evaluate the possible efficacy of the defocus incorporated multiple segment and Apollo progressive addition lenses [148].

## 5.4. Contact Lenses and Orthokeratology in Myopia Control
As previously reported, a theory of eye elongation suggests that axial elongation is caused by peripheral retinal hyperopic defocus [105, 135, 149, 150]. This theory has led researchers to consider that reducing peripheral hyperopic defocus or inducing peripheral myopic defocus with bifocal, progressive, or multifocal lenses may help prevent myopic progression. In animal models, evidence suggests that the imposition of hyperopic or myopic defocus with negative or positive power lenses, respectively, can influence eye growth and lead to compensatory refractive changes: hyperopic defocus leads to longer and more myopic eyes, and myopic defocus leads to shorter and more hyperopic eyes [151–156]. This supports the theory of slowing down axial elongation with optical treatments that correct distance vision while achieving simultaneous myopic defocus. The reduction of peripheral retinal hyperopic defocus by contact lenses represents a new and interesting area of research that could be an effective intervention in myopia control. Effective contact lens options for myopia control include multifocal, extended depth of focus (EDOF), and orthokeratology contact lenses.

## 5.5. Single-Vision Rigid Gas-Permeable and Soft Contact Lenses

Single-vision lenses are intended to correct the refractive error and are not prescribed for myopia control [149, 150]. Over several decades, there have been suggestions that gas-permeable contact lenses (not of orthokeratology design) can slow myopia progression in children, but the supporting studies had important limitations in their design [157–160]. Well-conducted studies have recently demonstrated that gas-permeable contact lenses have no effect on the progression of myopia in children [160], even among children who use them regularly. These lenses temporarily flatten the corneal curvature without affecting axial elongation. Although Atchison [161] has shown that spherical contact lenses produce a greater peripheral myopic shift than spherically surfaced spectacle lenses, some prospective randomized studies did not find any differences in the myopia progression rate between soft contact lens and spectacle wearers [162, 163]. Other studies have compared rigid with soft contact lenses. Katz et al. [160] found no difference in myopia progression or axial elongation over a period of two years between children wearing gas-permeable and soft single-vision contact lenses. Walline et al. [162] reported no difference in the amount of axial elongation between gas-permeable and soft single-vision contact lens wearers.

## 5.6. Soft Bifocal, Peripheral Gradient, and EDOF Contact Lenses

Three different promising types of contact lenses for myopia control in children have been studied: bifocal concentric lenses, peripheral gradient lenses, and EDOF contact lenses (Figure 3).

Figure 3 Single-vision contact lenses (CLs) provide a peripheral hyperopic defocus. A peripheral myopic defocus can be achieved with peripheral gradient CL, bifocal CL, and EDOF CL.

The first two multifocal contact lens designs include a central area for correcting myopia. Bifocal concentric lenses use concentric zones of rings with positive power addition to concurrently impose peripheral myopic defocus, whereas peripheral gradient lenses produce a peripheral myopic defocus that increases gradually from the central optical axis toward the periphery [164].
The third type is based on the EDOF theory and was designed to incorporate and manipulate selected higher-order aberrations (mainly spherical aberration) to achieve a global retinal image quality that is optimized for points at and anterior to the retina and degraded for points posterior to the retina. It was hypothesized that a poor image quality posterior to the retina prevents axial elongation [165].

These contact lens options slow both refractive and axial myopia progression by around 30%–50% [166, 167], so they correct myopia while also providing a treatment strategy for myopia control. In contrast, spectacle lens alternatives have shown less success for myopia control [168], except for one specific prismatic bifocal design [131] and a novel multisegment defocus design [141]. Moreover, in clinical studies, contact lenses provide better lens centration and are less affected by eye movements than spectacle lenses [135]. Data from two recent clinical pilot studies showed that adding myopic defocus to the distance correction reduced myopia progression by an average of 0.27 D/year after one year [147, 169], which is slightly better than the effect seen at one year using progressive addition or bifocal lenses [129, 130, 170–172].

MiSight 1 day is a daily-disposable hydrophilic soft bifocal contact lens approved by the FDA to correct nearsightedness and slow its progression in children aged 8 to 12 years with a refraction of −0.75 to −4.00 D (spherical equivalent) and astigmatism of 0.75 D or less at the beginning of treatment. MiSight’s Activ Control™ technology is based on an optic-zone concentric ring design. Concentric zones of alternating distance and near power produce two focal planes, allowing for the correction of the refractive error and 2.00 D of simultaneous myopic retinal defocus. A two-year randomized clinical trial [164] showed less progression and axial elongation in the MiSight group than in the single-vision spectacle group. Several studies [147, 164, 169, 173–178] published between 2011 and 2016 showed a reduction of 38.0% in myopia progression and 37.9% in axial elongation with multifocal soft contact lenses.

In 2014, Benavente-Perez et al. [135] showed the effect of soft bifocal contact lenses on eye growth and the refractive state of 30 juvenile marmosets by imposing hyperopic and myopic defocus on their peripheral retina. Each marmoset wore one of three investigational annular bifocal contact lens designs in its right eye and a plano contact lens in the left eye as a control for 10 weeks. The three types of lenses had a plano center zone (1.5 mm or 3 mm) and +5 D or −5 D in the periphery (referred to as +5 D/1.5 mm, +5 D/3 mm, and −5 D/3 mm). The results were compared with untreated, single-vision positive and negative, and +5/−5 D multizone lens-reared marmosets. Eyes treated with positive power in the periphery were shown to grow significantly less than untreated eyes and eyes with multizone contact lenses, supporting the use of bifocal contact lenses as an effective treatment for myopia control.
Moreover, the treatment effect was associated with the size of the peripheral treatment zone as well as with the peripheral refractive state and the eye growth rate before the treatment started.

The Bifocal Lenses In Nearsighted Kids (BLINK) randomized clinical trial [179] recently assessed the role of soft multifocal lenses in slowing myopia progression in children, comparing high-add power (+2.50 D) with medium-add power (+1.50 D) and single-vision contact lenses. A total of 294 children with −0.75 D to −5.00 D of spherical component myopia and less than 1.00 D of astigmatism were enrolled, with a three-year follow-up. Adjusted three-year myopia progression was −0.60 D for high-add power, −0.89 D for medium-add power, and −1.05 D for single-vision contact lenses. This demonstrated that treatment with high-add power multifocal contact lenses significantly reduced the rate of eye elongation compared with medium-add power multifocal and single-vision contact lenses. However, further research is required to understand the clinical importance of these data.

EDOF contact lenses were tested in a three-year prospective, double-blind trial [165] that demonstrated their efficacy in slowing myopia progression. A total of 508 children with a cycloplegic spherical equivalent of −0.75 to −3.50 D were enrolled and randomized to one of five groups: one group with single-vision, two groups with bifocal, and two groups with EDOF contact lenses (configured to offer EDOF of up to +1.75 D and +1.25 D). At two years, the two groups of EDOF lenses slowed myopia by 32% and 26% and reduced axial length elongation by 25% and 27%, respectively. However, efficacy was not significantly different between the bifocal and EDOF lens groups.

## 5.7. Orthokeratology (Ortho-K) Lenses

Orthokeratology (ortho-k) is defined as a “reduction, modification, or elimination of a refractive error by programmed application of contact lenses” [180]. It refers to the application of a rigid contact lens at night to induce temporary changes in the shape of the corneal epithelium, allowing for clear, unaided daytime vision. In the 1950s, Wesley and Jessen incidentally observed the spectacle blur experienced by patients after wearing hard contact lenses. This blurring was subsequently related to lens-induced epithelial reshaping, which was then utilized for therapeutic purposes [181]. Studies have shown that myopic orthokeratology lenses produce a flattening of the central cornea and a steepening of the midperipheral cornea, accompanied by changes in the epithelial thickness (Figure 4) [182–184].

Figure 4 Epithelium remodeling is achieved with orthokeratology. Central corneal flattening is accompanied by a midperipheral steepening (tangential map, (a)), due to accumulation of the epithelium (epithelial thickness map, (b)).

Although these lenses were designed for refractive error correction, studies have revealed a secondary advantage of slowing myopic progression [149] by creating peripheral myopic defocus secondary to epithelial reshaping. A number of studies have shown a 30%–71% reduction in axial elongation compared with the control [150, 185, 186]. Other studies and meta-analyses have revealed a 40%–60% mean reduction in the rate of refractive change compared with controls using spectacles to correct myopia [168, 187–194].
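As a note on how such percentages are typically derived (the calculation is not stated explicitly in the trials above), the relative reduction is the difference in progression between the control and treated groups expressed as a fraction of the control value; using the three-year BLINK refraction figures quoted earlier as a worked example:

$$\text{relative reduction} = \frac{\lvert \Delta R_{\text{control}} \rvert - \lvert \Delta R_{\text{treated}} \rvert}{\lvert \Delta R_{\text{control}} \rvert} = \frac{1.05\ \text{D} - 0.60\ \text{D}}{1.05\ \text{D}} \approx 43\%.$$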
In one of the first trials, the retardation of myopia in orthokeratology study [195], axial elongation was reported to be slowed by an average of 43%. In a second trial, the high myopia-partial reduction orthokeratology study [196], highly myopic individuals were enrolled and randomly assigned to partial-reduction orthokeratology and single-vision spectacle groups. The first group needed to wear single-vision spectacles to correct the residual refractive error during the day. In this group, the axial elongation was 63% less than that of the second group.

More recently, orthokeratology and gas-permeable lenses have been compared in a novel experimental study design [197]. Patients were fitted with overnight orthokeratology in one eye and traditional rigid gas-permeable lenses for daytime wear in the contralateral eye. The lenses were worn for six months. After a washout period of 2 weeks, the lens-eye combinations were reversed and lens wear was continued for a further six months. The results revealed no increase in axial elongation over either the first or the second six-month period for eyes with orthokeratology, compared with an increase of 0.04 mm and 0.09 mm, respectively, in eyes with gas-permeable lenses.

A recent one-year retrospective study by Na and Yoo [198] investigated myopic progression in children with myopic anisometropia who underwent orthokeratology treatment in their myopic eye and no correction in their emmetropic eye. The results showed a statistically significant reduction in axial length elongation in the treated eye (0.07 ± 0.21 mm, P = 0.038) as compared with the control eye (0.36 ± 0.23 mm, P < 0.001). In a retrospective study, Zhang and Chen [199] compared the effect of toric versus spherical design orthokeratology lenses on myopia progression in children with moderate-to-high astigmatism (cylinder >1.5 D). Toric orthokeratology wearers had a 55.6% slower rate of axial elongation than the spherical group. Some studies have tried to assess the effects of combined treatments, such as orthokeratology lenses and atropine. Studies by Wan et al. [200] and Kinoshita et al. [201] found an improvement in myopia control when combining the two strategies compared with orthokeratology monotherapy.

Although orthokeratology has a significant effect on slowing axial elongation, the results vary among individuals. Some patients show little or no myopic progression, while others continue to progress. Some studies [202–207] have shown that better myopia control is positively associated with a higher degree of baseline myopia, older age at myopia onset and at treatment initiation, larger pupil size, and a smaller resulting central optical zone (more peripheral myopia induced by a ring of steepening outside the treatment zone). Cheung et al. [186] suggest that ideal candidates for orthokeratology might be children around 6–9 years of age with fast myopic progression (increase in the axial length of ≥0.20 mm/7 months or spherical equivalent of ≥1 diopter/year). Moreover, several studies have shown that children are sufficiently mature to safely and successfully wear different types of contact lenses, such as soft [208, 209] and orthokeratology lenses [191, 192].

## 6. Conclusions

The rapid increase in the prevalence of myopia, especially in Asian and Western countries, has made it a significant public health concern.
In fact, high myopia (≥5 D or axial length ≥26 mm) is associated with an increased risk of vision-threatening complications such as retinal detachment, choroidal neovascularization, primary open-angle glaucoma, and early-onset cataract. Many studies have suggested the involvement of both genetic and environmental factors in the development of myopia. The genetic pool is associated with both syndromic and nonsyndromic forms of myopia, whereas the environment plays an important role in the nonsyndromic forms. However, we are far from fully understanding its complex pathogenesis.

Various options have been assessed to prevent or slow myopia progression in children. Environmental modifications, such as spending more time outdoors, can decrease the risk of the onset of myopia. In fact, many studies have identified an inverse association between outdoor exposure and myopia onset and progression, and a direct association with near work. However, contrasting evidence has also emerged, perhaps because of many biases, such as recall and measurement bias.

Optical interventions such as bifocal/progressive spectacle lenses, soft bifocal/multifocal/EDOF contact lenses, and orthokeratology lenses show a moderate reduction in the myopia progression rate compared to single-vision lenses. All of these options seem to reduce hyperopic peripheral defocus, which is a stimulus for axial elongation, thus promoting myopic peripheral defocus and slowing axial elongation. Regarding spectacle lenses, promising results are derived from the use of defocus incorporated multiple segment lenses and progressive addition lenses; however, further studies are needed to confirm these findings. Conversely, undercorrection of the myopic refractive error does not slow the progression of nearsightedness. In fact, several studies have revealed no difference in progression with undercorrection, and others have reported an increase in myopia progression compared with full correction; thus, the full correction of myopia is currently recommended, with optimal vision as the main aim. Gas-permeable and soft single-vision contact lenses are prescribed solely to correct the refractive error, because many studies have shown no effect on axial elongation and myopia control.

Refractive surgery may be an interesting option for treating amblyogenic anisometropia in children who refuse conventional therapy. Despite its successful outcomes in refraction and visual acuity, the use of refractive surgery in these individuals remains unclear, mainly because of the need for anesthesia, susceptibility to trauma, lack of pediatric nomograms, instability of the refractive error, and lack of evidence of long-term safety. Further studies are needed to better explore the role of refractive surgery in this area.

Currently, pharmacological treatment with atropine is the most researched and effective strategy for myopia control. In particular, low-concentration atropine (0.01%) is known to maintain its efficacy on myopia control with a lower rate of side effects. Interestingly, data from studies on the effects of combined treatments, such as low-concentration atropine (0.01%) plus orthokeratology lenses or low-concentration atropine plus soft bifocal contact lenses (Bifocal and Atropine in Myopia, BAM study), suggest that the combination may be superior to monotherapy.
However, the BAM study is still ongoing, and no results have yet been published.

In summary, all these options for controlling myopia progression in children exhibit varying degrees of efficacy, as shown in the literature. Compared with single-vision spectacles as the control, atropine exhibits the highest efficacy; orthokeratology and peripheral defocus contact and spectacle lenses have moderate efficacy, whereas bifocal or progressive addition spectacles and increased outdoor activities show lower efficacy [185].

---

*Source: 1004977-2022-06-14.xml*
2022
# Intelligent Writing System for Patients with Upper Limb Disabilities

**Authors:** Mihaela Hnatiuc; Domnica Alpetri; Muhamad Arif; Dragos Vicoveanu; Iuliana Chiuchisan; Oana Geman
**Journal:** Journal of Sensors (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1005061

---

## Abstract

Subjects with neurological problems and disorders can be rehabilitated in dedicated recovery centers and at home. The paper presents a smart writing system that uses dedicated movements to rehabilitate specific hand impairments. The system is based on a smart pen equipped with vibration, acceleration, and gyroscope sensors that transmit data remotely to a computer through an ESP32 Arduino board via Bluetooth. The system is designed to serve in applications such as recognizing the writing, the drawings, or the emotional and physiological states of the writer. The data are preprocessed by extracting characteristic features, which are then used in a prediction and classification system based on machine learning (ML) algorithms. The paper proposes a new method based on the dynamic processing of certain geometric shapes, which are recognized and analyzed to diagnose the mobility of the subjects' hands. Using K-means clustering and support vector machine (SVM) classification, the results obtained from the data analysis on the TensorFlow platform have an accuracy of over 70%. Due to the adaptation algorithms used, the system can be customized by learning the hand movement of the subject.

---

## Body

## 1. Introduction

With the growing use of computers in society, human-computer interaction has become part of everyday life. One of the existing barriers in human-computer interaction, according to [1], is the way subjects interact with the high flow of information. The solution is to develop "natural" computer interaction techniques, similar to those by which people interact with each other. Through nonverbal forms of communication, made by moving the hands, gestures become part of a computer dialog language used for information exchange [2]. Motion detection simplifies many processes, reducing power consumption and real-time response time. The most common motion sensors are the accelerometer, gyroscope, and magnetometer, used to measure linear and rotational movements. For example, the MPU 6050 is a motion sensor used in movement identification applications that has an accelerometer and a gyroscope on a single chip, called an inertial measurement unit (IMU) [3, 4]. An online handwriting solution, where data are acquired through depth sensors, is presented in [5]. Users can write in the air, and the algorithms can recognize the characters in real time using the proposed representation. The method uses an effective fingertip tracking approach and reduces the need for pen-up/pen-down switching, obtaining a 97.59% recognition accuracy for character recognition.

A new method of designing and implementing an infrared sensor system, InfraNotes, is described in [6]; it automatically records the notes written on the board by detecting and analyzing the teacher's hand movements. Compared to existing techniques, the system does not need special accessories, such as sensor-enabled writing surfaces or video recording infrastructure. Similar research is presented in [7], using the Kinect sensor, where it is possible to recognize numeric and alphabetical characters written by hand in the air.
The Kinect sensor can capture motion without the sensor device being attached to the user's body. Most people are not used to writing in the air and are not familiar with the Kinect sensor, so it takes some time to get used to both. Once the user is accustomed to the sensor, the average recognition rates are 95.0% and 98.9% for numeric and alphabetical characters, respectively. Studies on finger movement identification have been carried out to recognize hand-drawn shapes [8–10]. In [11], a support vector machine (SVM) combined with a hand gesture identifier (Figure 1) is used in the observation phase to identify the data segments containing handwritten characters. The recognition stage uses hidden Markov models (HMM) to generate a text representation from the motion sensor data. Individual letters are processed using HMMs and concatenated into words. The system can continuously recognize sentences based on a free vocabulary. A statistical language model is used to improve recognition performance and to restrict the search space.

Figure 1 Gesture identifier device [11].

A Wi-Fi data acquisition system for e-health research has recently been described in [12]. A fairly complex system with both hardware and software components has been developed and tested to allow physiological tremors to be investigated. The device includes a Wi-Fi module and a 3-axis acceleration sensor, and the data received from the sensors are processed in an application developed in the LabVIEW environment. The signal acquired from the acceleration sensors is processed with a Butterworth filter, and an FFT (fast Fourier transform) analysis is performed. The data acquisition system can communicate directly with a router and transmit data to a server or smartphone. The software architecture of the system is presented in detail, considering the different situations in which it is used. A group of healthy volunteer subjects was monitored during trials, and data analysis was performed online using the developed programs. Comparative data for signal amplitude and frequency for each axis of the accelerometers are presented in the paper, following different scenarios. A low-power Wi-Fi-based system has been experimentally developed and tested, designed to monitor physiological and pathological tremors. The prototype delivers performance features such as low power consumption, long battery life, Wi-Fi data transmission, and relatively small size.

The identification of human emotions using inertial sensors is addressed in another study, presented in [13]. The task of recognizing a subject's condition from writing is actively investigated because of the high variety of possible types of writing (in terms of its appearance) when the subject is in different emotional states. In [14], a tool has been developed to effectively classify the emotions and gestures of a human subject as "typical" or "atypical" during certain types of interaction. Two types of neural network architectures have been used to classify human gestures and emotions captured with infrared cameras. These architectures can be used in parallel (for faster and more robust processing), or only a convolutional neural network (CNN) can be used to process the characteristic features.
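For orientation only, the tremor-analysis chain described for the Wi-Fi system in [12] (Butterworth filtering of the accelerometer signal followed by FFT analysis) can be sketched offline as follows; the sampling rate, filter order, and cut-off frequencies are illustrative assumptions, not values taken from the cited work.

```python
# Illustrative sketch of a Butterworth + FFT tremor analysis (assumed parameters).
import numpy as np
from scipy.signal import butter, filtfilt

def tremor_spectrum(acc_axis, fs=100.0, low=2.0, high=20.0, order=4):
    """Band-pass one accelerometer axis and return its one-sided amplitude spectrum."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, acc_axis)               # zero-phase Butterworth filtering
    spectrum = np.abs(np.fft.rfft(filtered))          # FFT amplitude spectrum
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fs)
    return freqs, spectrum

# Synthetic example: a 9 Hz tremor-like oscillation buried in noise.
fs = 100.0
t = np.arange(0, 10, 1 / fs)
acc = 0.05 * np.sin(2 * np.pi * 9 * t) + 0.01 * np.random.randn(t.size)
freqs, spectrum = tremor_spectrum(acc, fs=fs)
print("Dominant frequency: %.1f Hz" % freqs[np.argmax(spectrum)])
```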
Therefore, two types of modular hierarchical architectures are applied: one for the task of recognizing emotions and the other for human actions, used to solve a real problem, namely, the classification of human behavior as "typical" or "atypical" relative to a particular task. Various projects have been developed that use the inertial sensors in mobile phones to identify a person by their gait [15–17]. Other applications are based on the idea of developing an intelligent pen that can identify the person's emotions [18, 19] and can also be used as a medical recovery tool.

Taking into account the research presented above, this paper describes an intelligent writing system based on sensor-based handwriting recognition methods that allow the continuous observation and recognition of a drawing, considered a special type of gesture. Three drawing gestures are recognized: horizontal lines, vertical lines, and circles. The gestures can be used to develop interfaces that support the recovery of people with hand disabilities for integration into the activities of daily living. Gestures are identified using the relevant signal segments in the continuous data stream.

The paper is organized into four sections. The first section presents the state of the art, and the second section describes the methods and algorithms used in signal processing. The third section presents the details of the intelligent writing system, and the results and discussion are presented in the fourth section, followed by the conclusions.

## 2. Standalone Models

Based on the type of techniques, the independent models were divided into two categories: statistics and machine learning (ML).

### 2.1. Statistics

Several conventional statistical time series forecasting models have been widely used to solve nonlinear time series problems and have shown outstanding performance. In this paper, we focus on recent approaches such as the Savitzky-Golay (SG) filter, a noise elimination method widely used in different domains. The SG filter is a digital filter with two design parameters, the window length and the filter order. As the length of the window increases, the estimation variance decreases, but the bias error increases at the same time [20]. The signals are preprocessed statistically to extract the characteristics used to identify the type of hand movement. After the preprocessing step, the data are normalized using the standard min-max procedure:

$$x_i^{\text{norm}} = \frac{x_i - x_{\min}}{x_{\max} - x_{\min}}. \tag{1}$$

One of the parameters used in signal processing is entropy. If an experiment is conducted only once, producing event $i$, it provides a quantity of information $I_i$. If the experiment is repeated several times and each possible event occurs with probability $p_i$, the total information obtained depends on the information provided by each event [21]. Thus, in $N$ repetitions of the experiment, with $n_i$ the frequency of occurrence of event $i$ and

$$\sum_i n_i = N, \tag{2}$$

the amount of information for each event is

$$I_i = -\log p_i. \tag{3}$$

The total information is

$$I_t = \sum_i n_i I_i, \tag{4}$$

and the average information is

$$I_m = \sum_i \frac{n_i}{N} I_i. \tag{5}$$

If $N$ is very large (theoretically, $N \to \infty$), $\lim_{N \to \infty} n_i/N = p_i$; therefore,

$$I_m = \sum_i p_i \log \frac{1}{p_i} = -\sum_i p_i \log p_i. \tag{6}$$

The average information, denoted by $H$, is called the Shannon informational entropy because its expression is analogous to the Boltzmann formula for thermodynamic entropy:
$$S = -k \sum_{i=1}^{n} p_i \log p_i. \tag{7}$$

The difference between the maximum and the actual informational entropy of an experiment, denoted by $R$, is called the absolute redundancy, $R = H_{\max} - H$. In practice, the relative redundancy $R_r$ is often used:

$$R_r = \frac{R}{H_{\max}} = 1 - \frac{H}{H_{\max}}. \tag{8}$$

In statistics, the standard deviation and the variance are used to measure dispersion. The coefficient of variation (CV), defined as the ratio of the standard deviation to the mean, is used to measure relative dispersion. The coefficient of variation has no unit of measurement and is useful for comparing the variability between groups of observations. Many researchers have proposed confidence intervals for the coefficient of variation [22]. Other parameters are the skewness and kurtosis coefficients described in the diagram illustrated in Figure 2.

Figure 2 Brief description of the kurtosis and skewness algorithms.

The clustering algorithms are based on K-means, the Silhouette score, the Calinski-Harabasz (CH) index, and the Davies-Bouldin (DB) index [21]. The Calinski-Harabasz index validates the number of classes based on the average of the squared sums of the values within and between the classes. The index considers the separation based on the maximum distance between the centers of the classes and measures the compactness based on the sum of the distances between the objects and their cluster center. The Silhouette (S) index [13] validates the clustering performance based on the pairwise difference of between- and within-cluster distances; in addition, the optimal cluster number is determined by maximizing the value of this index. The Davies-Bouldin (DB) index is calculated as follows: for each cluster C, the similarities between C and all other clusters are computed, and the highest value is assigned to C as its cluster similarity; the DB index is then obtained by averaging all the cluster similarities. The smaller the index, the better the grouping result.

As signal characteristics, the standard deviation, the maximum peak, and the minimum peak prominence were calculated. These results were analyzed only as preliminary results and were not used in the classification.

### 2.2. Machine Learning

A sequential neural network has been tested for the data classification system. In ML, the radial basis function (RBF) kernel is a popular kernel function used in various kernel-based learning algorithms. In particular, it is commonly used in support vector machine (SVM) classification [23]. The RBF kernel on two samples $x$ and $x'$, represented as feature vectors in some input space, is defined in [23, 24] as

$$K(x, x') = \exp\left(-\frac{\lVert x - x' \rVert^2}{2\sigma^2}\right), \tag{9}$$

where $\lVert x - x' \rVert^2$ is the squared Euclidean distance between the two feature vectors and $\sigma$ is a free parameter. An equivalent definition involves the parameter

$$\gamma = \frac{1}{2\sigma^2}; \tag{10}$$

therefore,

$$K(x, x') = \exp\left(-\gamma \lVert x - x' \rVert^2\right). \tag{11}$$

Since the value of the RBF kernel decreases with distance and ranges between zero (in the limit) and one (when $x = x'$), it has a ready interpretation as a similarity measure. The feature space of the kernel has an infinite number of dimensions. For $\sigma = 1$, its expansion is

$$\exp\left(-\tfrac{1}{2}\lVert x - x' \rVert^2\right)
= \exp\left(x^{\mathsf T}x' - \tfrac{1}{2}\lVert x \rVert^2 - \tfrac{1}{2}\lVert x' \rVert^2\right)
= \exp\left(x^{\mathsf T}x'\right)\exp\left(-\tfrac{1}{2}\lVert x \rVert^2\right)\exp\left(-\tfrac{1}{2}\lVert x' \rVert^2\right)
= \sum_{j=0}^{\infty} \frac{\left(x^{\mathsf T}x'\right)^{j}}{j!}\exp\left(-\tfrac{1}{2}\lVert x \rVert^2\right)\exp\left(-\tfrac{1}{2}\lVert x' \rVert^2\right)
= \sum_{j=0}^{\infty} \;\sum_{n_1+\cdots+n_k=j} \exp\left(-\tfrac{1}{2}\lVert x \rVert^2\right)\frac{x_1^{n_1}\cdots x_k^{n_k}}{\sqrt{n_1!\cdots n_k!}}\, \exp\left(-\tfrac{1}{2}\lVert x' \rVert^2\right)\frac{{x'_1}^{n_1}\cdots {x'_k}^{n_k}}{\sqrt{n_1!\cdots n_k!}}. \tag{12}$$

The accuracy score shows the proportion of true positives and true negatives among all data points, so it is useful when the data set is balanced.
The F1 score is the harmonic mean of precision and recall, both of which depend on the false positives and the false negatives, so it is useful when the data set is not balanced [25]. The spectral entropy (SE) of a signal is a measure of its spectral power distribution; the concept is based on the Shannon entropy, or information entropy.
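As an illustration of how the Section 2.1 quantities can be computed in practice, the sketch below (assuming NumPy/SciPy; the window length, polynomial order, and histogram bin count are illustrative choices, not the authors' values) applies the Savitzky-Golay filter, the min-max normalization of Eq. (1), the Shannon entropy of Eq. (6) estimated from the amplitude histogram, and the skewness, kurtosis, and peak descriptors.

```python
# Sketch of the statistical feature extraction described in Section 2.1 (assumed parameters).
import numpy as np
from scipy.signal import savgol_filter, find_peaks
from scipy.stats import skew, kurtosis

def extract_features(raw, window=31, polyorder=3, bins=64):
    smoothed = savgol_filter(raw, window_length=window, polyorder=polyorder)

    # Min-max normalization, Eq. (1).
    x = (smoothed - smoothed.min()) / (smoothed.max() - smoothed.min())

    # Shannon entropy, Eq. (6), estimated from the amplitude histogram.
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / counts.sum()
    entropy = -np.sum(p * np.log2(p))

    # Peak descriptors: height of the largest peak and smallest prominence found.
    peaks, props = find_peaks(x, prominence=0.05)
    max_peak = x[peaks].max() if peaks.size else 0.0
    min_prominence = props["prominences"].min() if peaks.size else 0.0

    return {
        "entropy": entropy,
        "std": x.std(),
        "skewness": skew(x),
        "kurtosis": kurtosis(x),
        "max_peak": max_peak,
        "min_prominence": min_prominence,
    }

# Example on a synthetic accelerometer-like trace.
t = np.linspace(0, 10, 1000)
features = extract_features(np.sin(2 * np.pi * 1.5 * t) + 0.1 * np.random.randn(t.size))
print(features)
```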
## 3. The Proposed System

The system proposed in this paper is based on sensors and an acquisition board placed on a pen to identify the hand movement. The smart system uses a vibration sensor to identify the hand movements and an inertial sensor with an accelerometer and gyroscope (MPU 6050) to identify the movements in the writing process. The signal acquisition is performed with an Arduino ESP-32 board that collects data from the sensors and transmits it to the computer for storage and analysis. The Arduino ESP-32 runs at a frequency of 80 MHz, and communication with the sensors is made via the I2C serial port (Figures 3(a) and 3(b)). The acquisition board communicates with the computer via Bluetooth.

Figure 3 The intelligent writing system: (a) lateral view and (b) top view.

The MPU6050 sensor is a device used to monitor the movements on the three coordinate axes. It combines the gyroscope signals (angular speed) on the three axes with the accelerometer signals on the three axes.
The SW-420 vibration sensor transmits two logic signals (low or high, i.e., 0 logic or 1 logic) depending on the motion. If the motion is sudden, the sensor sends a high signal (1 logic), and if the movement is slow, the signal is low. The lateral view of the system is shown in Figure 4. The inertial sensor (IMU) is positioned under the vibration sensor on the writing tool and is responsible for recording the spatial motion in 3D coordinates.

Figure 4 Lateral view of the intelligent writing system with 3D axis representation.

The vibration sensor data are not used in the classification of the signals; when the signal is "1 logic," the system indicates that the subject has begun to draw. The trajectory used for the tests is presented in Figure 5, and the signals resulting from testing the system are shown in Figure 6. The vertical lines, horizontal lines, and circles are identified separately, and then the shapes in the drawing are identified.

Figure 5 The trajectory of the pen: horizontal and vertical lines 7 cm long and a circle 7 cm in diameter.

Figure 6 The signals from the accelerometer, gyroscope, and vibration sensors during the drawing of the circle and the horizontal and vertical lines.

Figure 6 illustrates the signals acquired from the accelerometer, gyroscope, and vibration sensors during the drawing of the circle and the horizontal and vertical lines. The OX axis signals have higher amplitudes and were chosen for the analysis. The signals acquired from the inertial accelerometer and gyroscope sensor during the drawing of a 5 cm horizontal line, with slight breaks between the drawings, are presented in Figure 7. The largest amplitude belongs to the accelerometer and gyroscope signals on the OY axis. The OX and OY signals are synchronous; the OZ signals are not significant in this test.

Figure 7 Representation of the signals from (a) the accelerometer and (b) the gyroscope on the three axes for a horizontal line of 5 cm.

Analyzing the signal presented in Figure 8, it can be observed that, during the drawing of a 7 cm horizontal line, the signal shows a recurring pattern in which two small amplitudes are followed by a larger one. The signals are never identical because a person's movements are not usually regular. The signals were acquired by testing the intelligent writing tool on the following drawings: (1) horizontal lines of 5, 10, 15, and 20 cm; (2) vertical lines of 5, 10, 15, and 20 cm; and (3) circles of 5 and 10 cm in diameter.

Figure 8 Accelerometer signal on the OX axis during the drawing of the 7 cm horizontal line.

The following statistical parameters have also been used: entropy, standard deviation, skewness, kurtosis, peak detection, maximum peak prominence, minimum peak prominence, minimum peaks, and half-width of the maximum peaks. The signals are filtered using the Savitzky-Golay (SG) filter.
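A minimal sketch of how the binary vibration channel could be used to isolate drawing episodes from rest, as described above, is given below; the array names and the minimum segment length are illustrative assumptions, not taken from the authors' implementation.

```python
# Sketch: isolate "drawing" segments using the SW-420 binary output (assumed data layout).
import numpy as np

def drawing_segments(acc_x, vib, min_samples=50):
    """Return accelerometer segments recorded while the vibration channel equals 1."""
    acc_x = np.asarray(acc_x, dtype=float)
    vib = np.asarray(vib, dtype=int)
    edges = np.diff(np.concatenate(([0], vib, [0])))   # rising/falling edges of vib
    starts = np.where(edges == 1)[0]
    stops = np.where(edges == -1)[0]
    # Keep only segments long enough to correspond to an actual stroke.
    return [acc_x[s:e] for s, e in zip(starts, stops) if e - s >= min_samples]

# Example: a single 0.8 s stroke inside 2 s of recording at 100 Hz.
vib = np.zeros(200, dtype=int)
vib[60:140] = 1
acc_x = np.random.randn(200)
segments = drawing_segments(acc_x, vib)
print(len(segments), "drawing segment(s), lengths:", [len(s) for s in segments])
```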
The identification of the writing process and the resting periods is achieved by separating the signals acquired during drawing from those acquired at rest (Figure 9).

Figure 9 Signals from the accelerometer on the OX axis: (a) the noisy and filtered signals for the horizontal line of 5 cm length; (b) the normalized and filtered signals with the Savitzky-Golay filter for the horizontal line of 5 cm length; (c) the noisy and filtered signals for the vertical line of 15 cm length; (d) the normalized and filtered signals with the Savitzky-Golay filter for the vertical line of 15 cm length; (e) the noisy and filtered signals for the horizontal line of 20 cm length; (f) the normalized and filtered signals with the Savitzky-Golay filter for the horizontal line of 20 cm length; (g) the noisy and filtered signals for the circle of 5 cm in diameter; and (h) the normalized and filtered signals with the Savitzky-Golay filter for the circle of 5 cm in diameter. Note: skewness ACCY: skewness coefficient of the accelerometer signal, OY axis; kurtosis ACCY: kurtosis coefficient of the accelerometer signal, OY axis; skewness ACCX: skewness coefficient of the accelerometer signal, OX axis; kurtosis ACCX: kurtosis coefficient of the accelerometer signal, OX axis; entropy ACCY: entropy coefficient of the accelerometer signal, OY axis; entropy ACCX: entropy coefficient of the accelerometer signal, OX axis; standard deviation ACCY: standard deviation of the accelerometer signal, OY axis; standard deviation ACCX: standard deviation of the accelerometer signal, OX axis.

Histograms were computed to observe the distribution of the signals (Figure 10). The standard deviation of the signals collected from the accelerometer on the OX axis is in the range of 0.1 to 0.2.

Figure 10 The distribution of the signals.

It can be observed that the entropy for both the OX and OY accelerations is about 6. The OX and OY signals are symmetrical, and the OX signal is flatter than the OY one. The standard deviation is around 0.1, which reflects the spread of the values. The correlation matrix shows how strongly the characteristics extracted from the analyzed signals are related. The entropies on the OX and OY axes are correlated (0.92), which is also apparent from the signal shape, confirming that the signals are synchronized. The characteristics of the OX axis signals are correlated with a coefficient of -0.6. In order to draw the geometric figures, 71 files were used, with an average of 1000 samples per file.

After applying K-means and the Silhouette index, it was observed that all the signals (feature parameters) can be classified into three classes (Figure 11). The Silhouette score has the highest value for these three classes. The number of classes used for trace identification was 3, corresponding to the classes resulting from the clustering algorithm.

Figure 11 Silhouette score resulting from data clustering, presented for a class number from 1 to 10, for the following features: (a) Silhouette score: skewness ACCY and kurtosis ACCY; (b) Silhouette score: skewness ACCY, kurtosis ACCY, and kurtosis ACCX; (c) Silhouette score: entropy ACCY and kurtosis ACCY; and (d) Silhouette score: entropy ACCX, entropy ACCY, and kurtosis ACCY.

The data analysis shows that the most relevant parameters for the classification are entropy ACCY and kurtosis ACCY, because the Silhouette score is the largest and the CH and DB indexes are the lowest (Table 1). The number of samples used in the classification was 68.
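A minimal sketch of the cluster-number selection just described, assuming scikit-learn, is shown below; `X` stands for the per-recording feature matrix (e.g., entropy ACCY and kurtosis ACCY), and the random data used here are only placeholders for the 68 samples mentioned above.

```python
# Sketch: validate the number of K-means clusters with Silhouette, CH, and DB indices.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import (silhouette_score,
                             calinski_harabasz_score,
                             davies_bouldin_score)

def evaluate_cluster_counts(X, k_values=range(2, 11)):
    results = []
    for k in k_values:
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        results.append({
            "k": k,
            "silhouette": silhouette_score(X, labels),          # higher is better
            "calinski_harabasz": calinski_harabasz_score(X, labels),
            "davies_bouldin": davies_bouldin_score(X, labels),  # lower is better
        })
    return results

# Placeholder features standing in for the 68 samples (2 features per sample).
X = np.random.rand(68, 2)
for row in evaluate_cluster_counts(X, k_values=[2, 3, 4, 5]):
    print(row)
```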
Table 1 Algorithm performance analysis using K-means.

| Feature set | Class | Centroid | Inertia | Silhouette score | Calinski-Harabasz index | Davies-Bouldin index |
|---|---|---|---|---|---|---|
| Skewness ACCY and kurtosis ACCY | Horizontal line | (0.028, 19.63) | 1388.8 | 0.56 | 175.96 | 0.57 |
| | Vertical line | (0.084, 6.47) | | | | |
| | Circle | (-2.86, 35.81) | | | | |
| Entropy ACCY and kurtosis ACCY | Horizontal line | (6.65, 6.47) | 910.16 | 0.63 | 265.8 | 0.41 |
| | Vertical line | (6.23, 19.90) | | | | |
| | Circle | (6.18, 36.54) | | | | |
| Skewness ACCX, kurtosis ACCY, and kurtosis ACCX | Horizontal line | (-7.781, -3.46, 0.047) | n/a | 0.48 | 73.36 | 0.70 |
| | Vertical line | (13.90, -2.35, 0.47) | | | | |
| | Circle | (0.176, 14.32, -0.929) | | | | |
| Skewness ACC, kurtosis ACCY, and kurtosis ACCX | Horizontal line | (-10.796, -0.1, 0.01) | n/a | 0.63 | 264.62 | 0.42 |
| | Vertical line | (2.64, 0.15, 0.016) | | | | |
| | Circle | (19.28, -0.16, 0.015) | | | | |

The inertia, Silhouette, Calinski-Harabasz, and Davies-Bouldin values are computed per feature set rather than per class.

The correlation matrix is illustrated in Figure 12 and shows how closely the parameters extracted from the signals are related. The entropy on the OX and OY axes is correlated (0.92), which also follows from the signal shape, confirming that these two signals are synchronized. The characteristics of the signals on the OX axis are correlated with a coefficient of -0.6. In the classification, 71 files were used, with an average of 1000 samples per file, to draw the geometrical figures. Figure 12 The correlation matrix of the statistical parameters.

## 4. Results and Discussion

Different machine learning methods, such as decision trees, SVM, neural networks, the k-nearest neighbor method, and the Bayesian classifier, can be used for classification. The SVM is one of the most widely used methods because it limits overfitting and is relatively robust to noise. However, a nonlinear classification problem cannot be resolved by a linear classifier. It is possible to transform the nonlinear dataset in a way that allows it to be classified linearly; one approach to this problem is to use kernels based on different types of functions (e.g., polynomial). Several methods can be used for classification; in this paper, the data have been classified using an SVM classifier for drawing identification (Table 2). Feature sets 2 and 4 give the best results in classification, with an accuracy of more than 60 percent. The SVM was used with polynomial and RBF kernels. In both cases, the best classification accuracy was obtained with the entropy ACCY and kurtosis ACCY data for Gamma = 0.7 and C = 0.9. Gamma is a scalar that defines how much influence a single training point has. Precision in each class is defined as the number of correctly classified samples divided by the total number of samples assigned to that class. Recall in each class is the ratio of the number of correctly classified samples to the number of positive samples in that class. The F1 score in each class combines precision and recall for that class. For the F1 score, it can be observed that the horizontal lines are best classified using both SVC models, with polynomial and RBF kernels, for the entropy ACCY and kurtosis ACCY characteristics. If we analyze the number of shapes used in the identification, it can be observed that for 12 horizontal lines, the F1 score is 0.7, which demonstrates that drawings can be identified using the proposed intelligent writing sensor-based system. These signals can be translated into a graphical mode that can be viewed remotely by a therapist.
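A minimal sketch of this classification step with scikit-learn follows, reusing the Gamma = 0.7 and C = 0.9 values quoted above; the feature matrix X, labels y, and train/test split are hypothetical placeholders rather than the paper's data.

```python
# Hypothetical sketch: SVM classification of the drawings with polynomial and RBF kernels,
# reusing gamma = 0.7 and C = 0.9; X and y are invented placeholders, not the study data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import classification_report

X = np.random.rand(68, 2)             # e.g. [entropy_ACCY, kurtosis_ACCY] per recording
y = np.random.randint(0, 3, size=68)  # 0 = horizontal line, 1 = vertical line, 2 = circle

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

for kernel in ("poly", "rbf"):
    clf = SVC(kernel=kernel, gamma=0.7, C=0.9).fit(X_train, y_train)
    # Per-class precision, recall, and F1 score, plus the overall accuracy
    print(kernel)
    print(classification_report(y_test, clf.predict(X_test), labels=[0, 1, 2],
                                target_names=["horizontal", "vertical", "circle"],
                                zero_division=0))
```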
Table 2 Algorithm performance analysis.

| Kernel and feature set | Class | Precision | Recall | F1 score | Accuracy |
|---|---|---|---|---|---|
| SVM with polynomial kernel, entropy ACCX and kurtosis ACCY | Horizontal line | 0.73 | 0.67 | 0.7 | 0.57 |
| | Vertical line | 0.5 | 0.38 | 0.43 | |
| | Circle | 0.25 | 1 | 0.4 | |
| SVM with polynomial kernel, entropy ACCX and kurtosis ACCY | Horizontal line | 0.75 | 0.75 | 0.75 | 0.71 |
| | Vertical line | 1 | 0.5 | 0.67 | |
| | Circle | 0.5 | 1 | 0.67 | |
| SVM with RBF kernel, entropy ACCX and kurtosis ACCY | Horizontal line | 0.56 | 0.83 | 0.67 | 0.64 |
| | Vertical line | 0.75 | 0.43 | 0.55 | |
| | Circle | 1 | 1 | 1 | |
| SVM with RBF kernel, entropy ACCY and kurtosis ACCY | Horizontal line | 0.56 | 0.83 | 0.67 | 0.71 |
| | Vertical line | 0.75 | 0.43 | 0.55 | |
| | Circle | 1 | 1 | 1 | |

The accuracy value is the overall accuracy of each kernel/feature-set configuration.

In the testing process, four data feature sets are used for clustering: (1) "skewness ACCY" and "kurtosis ACCY"; (2) "entropy ACCY" and "kurtosis ACCY"; (3) "kurtosis ACCX," "skewness ACCY," and "kurtosis ACCY"; and (4) "entropy ACCX," "entropy ACCY," and "kurtosis ACCY." For classification, the entropy ACCY and kurtosis ACCY features were chosen. A simple feed-forward, fully connected artificial neural network has been developed. The classification task was implemented using the TensorFlow 2.0 library along with the Keras module. The network has two dense hidden layers of the "relu" type; together with the output layer, the total number of parameters is 323. The output layer uses the "softmax" activation function and has three neurons, each representing a class: horizontal line, vertical line, and circle. The first hidden layer has 20 neurons and the second 10 neurons. The training is done for 200 epochs, reaching a loss of 0.59 and an accuracy of 0.79, with an average run time of 7 ms per epoch. A loss of 0.6737 and an accuracy of 0.7142 were obtained during system testing (Figure 13). In future research, other network types, such as CNNs, will be tested, since the results obtained for identifying the drawn object already exceed 50% classification accuracy. Figure 13 The loss function and the accuracy represented against the epoch number.
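A minimal sketch of the described network is given below, assuming a three-feature input (which reproduces the reported total of 323 trainable parameters); the optimizer, loss function, and placeholder training data are assumptions, not details taken from the paper.

```python
# Hypothetical sketch of the described feed-forward network: two dense hidden layers
# (20 and 10 "relu" neurons) and a 3-neuron "softmax" output, trained for 200 epochs.
# A 3-feature input gives 3*20+20 + 20*10+10 + 10*3+3 = 323 trainable parameters.
# Optimizer, loss, and the placeholder data below are assumptions.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(3,)),                      # e.g. entropy_ACCX, entropy_ACCY, kurtosis_ACCY
    tf.keras.layers.Dense(20, activation="relu"),
    tf.keras.layers.Dense(10, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),  # horizontal line, vertical line, circle
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

X_train = np.random.rand(48, 3).astype("float32")   # placeholder feature vectors
y_train = np.random.randint(0, 3, size=48)          # placeholder class labels

model.fit(X_train, y_train, epochs=200, verbose=0)
model.summary()  # reports the 323 trainable parameters
```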
Compared with other studies, where hand gesture identification/classification was performed with image processing techniques, our approach is simpler to implement while keeping the subject anonymous, since it relies only on sensors and signal processing algorithms. Comparing the results obtained by the smart pen in [26] with the results obtained with the system developed in our laboratory, the first system identifies writing characteristics and tremor changes with age, whereas our system supports the rehabilitation of the subjects' hand function through specific drawings indicated by the therapist.

## 5. Conclusions

The present paper describes a handwriting recognition method based on inertial sensors that allows continuous observation and recognition of a drawing constructed from simple geometric figures. The system architecture comprises hardware and software elements that detect a subject's normal (i.e., physiological) hand movements by capturing the proposed writing gestures and classifying them correctly with various algorithms. The results obtained after applying the signal processing techniques lead us in two study directions. Firstly, they give the observer a good classification of the signals based on a set of statistical parameters for different geometric traces (i.e., simple or combined). Secondly, the system can provide a valuable clinical tool for following the evolution of hand movement during a recovery process. Therefore, we intend to improve the current system with an acquisition board that sends the data to the cloud, where they will be processed and the results made available to the therapists. The classification can be further improved, and deep learning methods will therefore be used. Both physiological and pathological data will be analyzed to achieve good accuracy in sketch identification. The proposed system can be used in medical rehabilitation clinics. Because the system is based on adaptive algorithms, the therapist may ask the subjects to draw certain figures according to the required recovery needs.

--- *Source: 1005061-2022-04-15.xml*
# Gut Microbiota and Inflammatory Cytokine Changes in Patients with Ankylosing Spondylitis

**Authors:** Bin Liu; Zhenghua Ding; Junhui Xiong; Xing Heng; Huafu Wang; Weihua Chu

**Journal:** BioMed Research International (2022)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2022/1005111

---

## Abstract

Ankylosing spondylitis (AS) is a chronic inflammatory disease characterized by sacroiliac joint lesions and ascending spinal involvement. The aim of this work was to investigate the gut microbiota profile and proinflammatory cytokines in AS patients. The gut microbiota of AS patients was clearly different from that of healthy human controls. 16S rRNA sequencing analysis demonstrated a changed microbial diversity in the AS patients: the abundance of Cyanobacteria, Deinococcota, Patescibacteria, Actinobacteriota, and Synergistota at the phylum level increased significantly in AS, while the relative abundance of Acidobacteriota, Bdellovibrionota, Campylobacterota, Chloroflexi, Gemmatimonadota, Myxococcota, Nitrospirota, Proteobacteria, and Verrucomicrobiota declined in AS patients. ELISA results for the markers of inflammation in the AS patients revealed increased concentrations of proinflammatory cytokines such as IL-23, IL-17, and IFN-γ. Our findings support the view that the intestinal microbiota are altered in AS together with an inflammatory status, which indicates that the gut microbiota could be a potential target for ankylosing spondylitis therapy.

---

## Body

## 1. Introduction

Ankylosing spondylitis (AS) is a chronic, progressive disease which mainly involves the sacroiliac joints, paraspinal soft tissue, spinous processes, and peripheral joints and can also present with extraarticular manifestations such as anterior uveitis [1, 2]. Spinal deformity and ankylosis can occur in severe cases. The prevalence rate in China is about 0.29%, the male to female ratio is roughly 2~3 : 1, the peak age of onset is 20~30 years, and the disease is rare above the age of 40 years and below the age of 8 years [3, 4]. Recent studies have shown that there is a certain relationship between ankylosing spondylitis and the gut microbiota [5–7]. The human body is home to trillions of microorganisms, and the number of microbial cells in the human body is roughly equivalent to that of human cells [8]. Among them, a large proportion of microorganisms live in our digestive tract and constitute our intestinal microbiome. Research on the correlation between human health and intestinal microorganisms continues to grow, and the correlation between intestinal microecological imbalance and the occurrence and development of diseases has attracted more and more attention. Gut microbiome imbalance is involved in the process of many immune-related diseases [9]. As research continues, it has been found that the gut microbiota plays an important role in the development of ankylosing spondylitis [10–12]. In this study, we compared the alterations of the gut microbiota and inflammatory cytokines in AS patients and healthy human controls.

## 2. Results

### 2.1. Alterations of Inflammatory Cytokines in AS Patients

As shown in Figure 1, the serum levels of IFN-γ, IL-17, and IL-23 in the AS patient group were significantly increased when compared with the healthy controls. These findings indicate that the levels of proinflammatory factors were increased in AS patients. However, the concentrations of TNF-α and IL-1 were significantly reduced compared to those of the healthy control (HC) group.
No significant change was detected in IL-25 levels between the AS and HC groups. Figure 1 The serum levels of IL-17, IFN-γ, IL-25, IL-23, IL-1, and TNF-α were determined by ELISA. Data are expressed as means ± standard deviation from 8 persons per group. (a) IL-17; (b) IFN-γ; (c) IL-25; (d) IL-23; (e) IL-1; (f) TNF-α. ∗∗P<0.01 and ∗P<0.05 vs. control.

### 2.2. Analysis of Gut Microbiota

With high-throughput sequencing, 805,215 high-quality reads were acquired after filtering and merging the reads from all samples and were used to construct OTUs. At 97% similarity, the amplicons were clustered into 1295 OTUs. Based on the number of observed genera, alpha diversity was shown to differ significantly between AS patients and HCs. The relative abundance of Cyanobacteria, Deinococcota, Patescibacteria, Actinobacteriota, and Synergistota at the phylum level increased in AS, while the relative abundance of Acidobacteriota, Bdellovibrionota, Campylobacterota, Chloroflexi, Gemmatimonadota, Myxococcota, Nitrospirota, Proteobacteria, and Verrucomicrobiota declined significantly (P<0.05) (Figure 2(a)). At the genus level, AS patients had an increased relative abundance of Prevotella_9, Alistipes, Lachnospiraceae, Parabacteroides, and Ruminococcus (P<0.05). However, the relative abundance of Dialister, Bifidobacterium, Veillonella, Anaerostipes, and Escherichia-Shigella declined in AS (P<0.05) (Figure 2(b)). The community differences between the AS and HC groups were further analyzed at the phylum and genus levels using the linear discriminant analysis effect size (LEfSe) method and linear discriminant analysis (LDA) (Figures 2(c) and 2(d)). Figure 2 Gut microbial communities are significantly different between AS patients and healthy controls at the phylum (a) and genus (b) levels, and LEfSe analysis on the phylogenetic tree in cladogram format (c) and for LDA scores (d).
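As a hedged illustration of how alpha diversity can be summarized per sample from a genus-level count table, the sketch below computes the number of observed genera and the Shannon index with NumPy; the count matrix and group sizes are invented placeholders, not the sequencing data of this study.

```python
# Hypothetical sketch: per-sample alpha diversity from a genus-level count table.
# The count matrix is invented (8 AS + 8 HC samples, 120 genera), not the study data.
import numpy as np

rng = np.random.default_rng(0)
counts = rng.poisson(5, size=(16, 120))
groups = np.array(["AS"] * 8 + ["HC"] * 8)

observed = (counts > 0).sum(axis=1)  # number of observed genera per sample

rel = counts / counts.sum(axis=1, keepdims=True)
shannon = -(rel * np.log(np.where(rel > 0, rel, 1))).sum(axis=1)  # Shannon index per sample

print("mean observed genera:",
      observed[groups == "AS"].mean(), "(AS) vs", observed[groups == "HC"].mean(), "(HC)")
print("mean Shannon index:",
      shannon[groups == "AS"].mean(), "(AS) vs", shannon[groups == "HC"].mean(), "(HC)")
```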
## 3. Discussion

The gut microbiota promotes the development and progression of AS through mechanisms such as increased intestinal permeability and altered intestinal mucosal immunity. Patients with AS have a unique gut microbiota pattern that may activate autoimmunity. Proinflammatory cytokines such as IL-23, IL-17, IL-10, IFN-γ, IL-6, and TNF-α are important in the progression of AS, and the IL-23/IL-17 immune axis has been shown to be an important factor in the immunopathogenesis of AS [13]. In this study, we found that the levels of IL-23, IL-17, and IFN-γ increased significantly in AS patients, while the levels of IL-1 and TNF-α decreased, and there was no significant change in IL-25 levels. However, our results were not consistent with the results of Sveaas et al. [14], who concluded that the levels of IL-17 and IL-23 were significantly reduced in AS patients. The level of TNF-α has been reported to be significantly decreased in patients with AS [15]. Similarly, a significant decrease in the IL-1 level has been reported in AS patients [16]. Also, the IFN-γ level has been reported to be significantly increased in AS patients [17]. There are limited reports concerning IL-25 levels in AS patients, but several studies have reported significantly increased levels of IL-6 in AS patients [18, 19]. It was found that Firmicutes, Bacteroides, Proteobacteria, and Actinobacteria were the four major microbiota at the phylum level in both AS patients and the healthy controls; the abundance of Actinobacteria increased, whereas the abundance of Proteobacteria was significantly lower in AS patients than in controls, along with Acidobacteriota, Bdellovibrionota, Campylobacterota, Chloroflexi, Gemmatimonadota, Myxococcota, Nitrospirota, and Verrucomicrobiota. Our results are partially consistent with the previous study by Wen et al. [20], which concluded that Actinobacteria was significantly higher and Verrucomicrobia was lower in AS patients. However, at the genus level, Bifidobacterium and Prevotellaceae, including Prevotella melaninogenica, Prevotella copri, and Prevotella sp. C561, had a higher abundance in AS patients. In addition, when compared with HCs, the gut microbiota of AS patients demonstrated an increase in the abundance of Lachnospiraceae, Ruminococcaceae, Rikenellaceae, Bacteroidaceae, and Porphyromonadaceae and a decrease in the abundance of Veillonellaceae and Prevotellaceae [21].
Some previous studies have also demonstrated an increase in Klebsiella in patients with AS [22]. Dysbiosis of the gut microbiota is closely related to the occurrence and development of AS, and probiotic supplementation and modification of the dietary structure may help alleviate the condition and symptoms of patients. However, current studies still present several issues, such as the lack of large samples and the lack of long-term follow-up controlled trials. The results of these studies vary widely, and the species of harmful and beneficial bacteria are not yet fully defined. How the intestinal microbiota regulates the immune system of AS patients still needs to be further explored, which may open up new avenues for the treatment of AS.

## 4. Conclusions

The richness and diversity of the gut microbiota in AS patients were compared with those of healthy human controls. The results indicate that the gut microbiota might participate in the pathogenesis of AS by modulating inflammatory cytokines. We found that the gut microbiota in patients with AS shows a specific alteration, with an increase in some bacterial taxa accompanied by a decrease in others. Our results are consistent with previous reports. This finding indicates that the gut microbiota could be a potential target for ankylosing spondylitis therapy.

## 5. Methods

### 5.1. Study Participants

In this study, a total of 16 participants were recruited, including 8 AS patients and 8 healthy controls matched in age and sex. AS patients were diagnosed at Lishui People's Hospital and recruited between August 2020 and March 2021. All AS patients met the ACR/EULAR identification criteria for AS [23]. Healthy controls were recruited from the health screening centers of Lishui People's Hospital. Each participant provided informed consent, and the research was approved by the institutional ethics committee of Lishui People's Hospital (ethics number: 2021-342).

### 5.2. Sample Collection

Fresh fecal samples were collected from participants using sterile boxes, transported to the laboratory immediately, and then stored at −80°C until use. Blood samples were collected and centrifuged (3000 g for 20 min); the serum was then collected for cytokine analysis.

### 5.3. Detection of Serum Cytokines

Enzyme-linked immunosorbent assay (ELISA) kits (Nanjing Jiancheng Institute of Biotechnology, China) were used to analyze the inflammatory cytokine levels (IL-1, IL-17, IL-25, TNF-α, and IFN-γ). A Bio-Rad microplate reader (Bio-Rad Laboratories, USA) was used to measure the optical density at the wavelength specified by the manufacturer.

### 5.4. Gut Microbiome Analysis

Total microbial DNA was extracted from the fecal samples using the E.Z.N.A.® Stool DNA Kit (Omega Bio-Tek, Norcross, GA, U.S.) according to the manufacturer's protocol. The V4–V5 region of the bacterial 16S ribosomal RNA gene was amplified by PCR. The PCR products were purified and quantified, and all purified PCR products were pooled. Library preparation, Illumina sequencing, and bioinformatics analysis were conducted as described by Li et al. [24].

### 5.5. Statistical Analysis

SPSS software (version 22.0, IBM SPSS Inc., USA) was employed to analyze the data. Statistical analysis of differences between the groups was performed using Student's t-test. p<0.05 indicated statistically significant differences.
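As an illustration of the statistical comparison described above (the study used SPSS), the sketch below runs a two-sample Student's t-test with SciPy on invented cytokine values; the numbers are placeholders, not the measured data.

```python
# Hypothetical sketch: two-sample Student's t-test comparing a serum cytokine level
# between AS patients and healthy controls (the study used SPSS; SciPy shown here).
# The values below are invented placeholders, not the measured data.
from scipy import stats

il17_as = [52.1, 48.7, 55.3, 60.2, 47.9, 58.4, 51.0, 49.6]   # AS patients (n = 8)
il17_hc = [31.4, 35.2, 29.8, 33.1, 30.5, 34.7, 32.0, 28.9]   # healthy controls (n = 8)

t_stat, p_value = stats.ttest_ind(il17_as, il17_hc)  # Student's t-test (equal variances assumed)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("The difference is statistically significant (p < 0.05)")
```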
--- *Source: 1005111-2022-08-19.xml*
# Fisetin Attenuates Arsenic-Induced Hepatic Damage by Improving Biochemical, Inflammatory, Apoptotic, and Histological Profile: In Vivo and In Silico Approach

**Authors:** Muhammad Umar; Saima Muzammil; Muhammad Asif Zahoor; Shama Mustafa; Asma Ashraf; Sumreen Hayat; Muhammad Umar Ijaz
**Journal:** Evidence-Based Complementary and Alternative Medicine (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1005255

---

## Abstract

Arsenic (As) is a toxic metalloid and human carcinogen that may cause hepatotoxicity. Fisetin (3,3′,4′,7-tetrahydroxyflavone) is a phytoflavonoid that shows diverse therapeutic activities. This study aimed to examine the remedial potential of fisetin against As-instigated hepatotoxicity in adult male rats. To accomplish this aim, albino rats (N = 48) were evenly divided into 4 groups: control group, As (10 mg/kg) group, fisetin (2.5 mg/kg) + As (10 mg/kg) group, and fisetin (2.5 mg/kg) group. After one month of treatment, the biochemical profile, total protein content (TPC), hepatic serum enzymes, inflammatory as well as pro- and antiapoptotic markers, and the histopathological profile of hepatic tissues were estimated. As administration disrupted the biochemical profile by decreasing the activities of the antioxidant enzymes catalase (CAT), superoxide dismutase (SOD), and glutathione reductase (GSR), as well as the glutathione (GSH) content, while escalating the levels of reactive oxygen species (ROS) and thiobarbituric acid reactive substances (TBARS). TPC was also considerably reduced after exposure to As. Furthermore, As markedly raised the levels of liver serum enzymes such as aspartate transaminase (AST), alkaline phosphatase (ALP), and alanine transaminase (ALT), as well as the levels of inflammatory markers, i.e., nuclear factor-κB (NF-κB), tumor necrosis factor-α (TNF-α), interleukin-1β (IL-1β), and interleukin-6 (IL-6), and the activity of cyclo-oxygenase-2 (COX-2). Besides, it lowered the level of the antiapoptotic marker (Bcl-2) and upregulated the levels of proapoptotic markers (Bax, Caspase-3, and Caspase-9). Additionally, As exposure led to histopathological damage in hepatic tissues. However, fisetin administration remarkably alleviated all the depicted hepatic damages. For further verification, several docked complexes were screened using GOLD version 5.3.0. Based on docking fitness and GOLD score, the ranking order of the receptor proteins with the fisetin compound is superoxide dismutase, interleukin, aspartate aminotransferase, alkaline phosphatase, TNF-α, alanine transaminase, cyclo-oxygenase-2, antiapoptotic protein, and glutathione reductase. Of these, three receptor proteins, superoxide dismutase, interleukin, and aspartate aminotransferase, showed the best interaction with the fisetin compound. The in vivo and in silico outcomes of the current study demonstrate that fisetin could potentially ameliorate As-instigated hepatotoxicity.

---

## Body

## 1. Introduction

Arsenic (As) is a noxious metalloid that is ranked first by the United States (US) Agency for Toxic Substances and Disease Registry as well as the US Environmental Protection Agency [1] and has affected nearly 200 million people globally [2]. The most reported types of As-instigated damage in humans include skin diseases (viz.
hyperkeratosis and hyperpigmentation), cancers of the skin or epithelial tissues, and respiratory tract, gastrointestinal tract, liver, kidney, central nervous system, cardiovascular, and reproductive complications, thereby enhancing the rates of morbidity and mortality [3]. Humans are exposed to arsenic via inhalation, skin contact with As-contaminated products, and polluted drinking water (H2O) [4]. Moreover, As toxicity depends on the chemical nature of the arsenicals (arsenic-containing compounds), which exist in both organic and inorganic forms with differently charged cations (e.g., As³⁺ and As⁵⁺) [5]. Overall, the inorganic form of As is more toxic than the organic form of this metal [6]. After absorption through the lungs and the gut, As is delivered into the bloodstream, where about 99% of it binds to red blood cells in the circulating fluid, which eventually transport it to other parts of the body [7], and it accumulates in different organs, i.e., the lungs, liver, kidney, and heart [8]. The liver is a vital organ that tends to retain higher concentrations of As [9]. One of the most widely proposed mechanisms of As-instigated toxicity is oxidative stress (OS) [10]. OS can cause mitochondrial dysfunction via fibrosis (TGF-β/Smad pathway), inflammation (NF-κB, TNF-α, IL-1, and IL-6), apoptosis (AKT-PKB, PI3/AKT, AKT/ERK, MAPK, PKCδ-JNK, and p53 pathways), and necrosis [11]. Besides, As intoxication has been shown to weaken the antioxidant defense and damage several macromolecules (deoxyribonucleic acid (DNA), proteins, and lipids), leading to membrane, cell, and tissue dysfunction [12]. Moreover, As exposure may lead to inflammation, which also results in liver damage. Thus, given the numerous sources of As exposure and their damaging impacts on human health, especially on the liver, studies on therapies against As-induced toxicities are needed.

The advantage of using in silico methods for drug design is that it takes less time and money to find novel targets. Several biological problems have been addressed using in silico techniques, which can characterize interacting molecules and predict three-dimensional (3D) structures. To ascertain how various target proteins interact with the studied chemical, an in silico investigation was carried out. In this instance, fisetin (3,3′,4′,7-tetrahydroxyflavone) is a phytoflavonoid that is abundant in multiple dietary sources such as apple, persimmon, grape, strawberry, cucumber, and onion; its content ranges from 2 to 160 µg/g, with an estimated average daily intake of 0.4 mg [13]. It shows a broad range of therapeutic activities, including antioxidant [14], anticarcinogenic [13], anti-inflammatory [15], neuroprotective [15], and cardioprotective effects [14]. To date, no report on the ameliorative potential of fisetin against arsenic-provoked hepatotoxicity is available. Therefore, the present investigation was designed to explore the remedial potency of fisetin against As-instigated hepatotoxicity in rats.

## 2. Materials and Methods

### 2.1. Chemicals

As and fisetin were purchased from Sigma-Aldrich (Germany).

### 2.2. Animals

Sexually mature male albino rats (n = 48) weighing 150 ± 30 g were housed 12 per steel cage in the animal breeding and rearing house of the University of Agriculture, Faisalabad. All rats were provided with tap water ad libitum and standard chow under a 12 h light/dark photoperiod at temperatures between 23 and 26°C.
Rats were kept in accordance with the European Union protocol (CEE Council 86/609) for animal care and experimentation.

## 3. Experimental Protocol

Albino rats (N = 48) were allocated into 4 groups (n = 12) and administered the following orally: control group (treated with normal saline), As group (10 mg/kg b.wt. of As), cotreated group (10 mg/kg b.wt. of As and 2.5 mg/kg b.wt. of fisetin), and fisetin-only group (2.5 mg/kg b.wt. of fisetin). The entire experimental trial was conducted for thirty days. After one month of treatment, hepatic tissues were excised, weighed, and stored until further analysis.

### 3.1. Biochemical Assay and TPC

In the hepatic tissues, the activity of CAT was ascertained according to the methodology described by Chance and Maehly [16]. SOD activity was measured by following the process of Kakkar et al. [17]. GSR activity was determined according to the protocol of Carlberg and Mannervik [18]. GSH content was measured via the technique designed by Jollow et al. [19]. The protocol of Hayashi et al. [20] was used to estimate the level of ROS. The level of TBARS was assessed by following the technique of Iqbal et al. [21]. The TPC of hepatic tissues was quantified according to the Lowry method as modified by Peterson [22].

### 3.2. Liver Serum Enzymes

The levels of ALT, AST, and ALP were determined using commercial kits (Wiesbaden, Germany).

### 3.3. Inflammation

The levels of TNF-α, NF-κB, IL-6, and IL-1β and the activity of COX-2 were estimated with ELISA kits according to the manufacturer's guidance (BioTek, Winooski, VT, United States of America (USA)).

### 3.4. Apoptosis

The levels of Bcl-2, Bax, Caspase-3, and Caspase-9 were estimated with ELISA kits purchased from Cusabio Technology LLC, Houston, TX, USA.

### 3.5. Histopathology

For histopathological analysis, hepatic tissues were first rinsed in chilled 0.9% saline, fixed in 10% formalin solution, subsequently dehydrated in ascending concentrations of alcohol, and embedded in paraffin wax. Paraffin-embedded 5 µm sections were then cut with a microtome, stained with hematoxylin-eosin (H&E), and observed under a Leica LB microscope at 400X [23].

### 3.6. Statistical Analysis

Results are presented in the tables as mean ± standard error (SE). The data were analyzed by ANOVA followed by Tukey’s test using Minitab software. Results were considered significant at p < 0.05.

### 3.7. In Silico Analysis

#### 3.7.1. Ligand Preparation

The two-dimensional (2D) structure of the fisetin phytocompound was retrieved from PubChem (https://pubchem.ncbi.nlm.nih.gov) and processed in ChemDraw Ultra 12.0 and Chem 3D Pro for ionization, minimization, and optimization of the ligand. A force-field minimization module was used to obtain the lowest-energy conformer of the ligand.
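As an illustration of this ligand-preparation step, the sketch below uses the open-source RDKit toolkit rather than the ChemDraw Ultra 12.0 / Chem 3D Pro workflow the authors describe; the SMILES string is an assumed representation of fisetin and should be checked against PubChem CID 5281614.

```python
# Minimal RDKit sketch (not the authors' workflow) for preparing a low-energy 3D ligand.
from rdkit import Chem
from rdkit.Chem import AllChem

fisetin_smiles = "O=C1C(O)=C(c2ccc(O)c(O)c2)Oc2cc(O)ccc12"  # assumed SMILES for 3,3',4',7-tetrahydroxyflavone
mol = Chem.MolFromSmiles(fisetin_smiles)
mol = Chem.AddHs(mol)                         # add explicit hydrogens before 3D embedding
AllChem.EmbedMolecule(mol, randomSeed=42)     # generate an initial 3D conformer
AllChem.MMFFOptimizeMolecule(mol)             # force-field minimization to a low-energy conformer
Chem.MolToMolFile(mol, "fisetin_ligand.mol")  # save the prepared ligand for docking
```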
#### 3.7.2. Receptor Preparation

To assess the molecular docking, optimum-resolution X-ray structures of the proteins were obtained from the Protein Data Bank (RCSB PDB, https://www.rcsb.org) and processed with the protein preparation wizard of Maestro (GOLD v5.3.0). This module processed each protein by adding hydrogen atoms to the structure, removing solvent molecules (H2O), creating disulfide bonds, assigning bond orders, filling missing side chains and loops, and generating protonation states at cellular pH (7.4 ± 0.5) using the Epik tool. Following the processing of the protein structures, the structures with PDB IDs 5YTO [24], 1ILR [25], 6WNG (10.2210/pdb6WNG/PDB), 1ANJ [26], 5YOY [27], IBDO [28], 51F9 [29], 6FSO [30], and 6TJL (10.2210/pdb6TJL/PDB) were optimized using GOLD at pH 7.0, and the OPLS3e force field was used to perform restrained minimization for energy minimization and optimization of the protein structural geometry.

#### 3.7.3. Molecular Docking

The docking studies were carried out using the molecular docking software of the CCDC (https://www.ccdc.cam.ac.uk). Docking simulations were carried out with the Lamarckian genetic algorithm (LGA) and the Solis and Wets local search approach. The initial position, orientation, and torsion of the ligand molecules were determined at random. Each docking experiment was filtered from ten distinct runs that were set to stop after a maximum of 1.5 Å evaluations. Molecular docking experiments were used to investigate the potential binding/interaction between the proteins and the ligand. Table S1 illustrates the binding affinity (kcal/mol) of the fisetin (CID 5281614) phytocompound with the different receptor proteins. Three-dimensional structures of the receptor proteins alkaline phosphatase (PDB ID 1ANJ), alanine transaminase (PDB ID IBDO), cyclooxygenase-2 (PDB ID 51F9), interleukin (PDB ID 1ILR), TNF-α (PDB ID 5YOY), superoxide dismutase (PDB ID 5YTO), antiapoptotic protein (PDB ID 6FSO), glutathione reductase (PDB ID 6TJL), and aspartate aminotransferase (PDB ID 6WNG) were acquired from the Protein Data Bank (PDB). Docking calculations were carried out with GOLD version 5.3.0, and BIOVIA Discovery Studio (http://www.3dsbiovia.com) was used for modeling and visualization.
## 4. Results
### 4.1. Effect of Fisetin on Biochemical Assay and TPC

The activities of CAT, SOD, and GSR, as well as the GSH level and TPC, were substantially (p < 0.05) reduced after As intoxication, while the levels of ROS and TBARS were raised compared with the untreated group. Conversely, fisetin supplementation with As remarkably (p < 0.05) elevated the activities of CAT, SOD, and GSR as well as the GSH content and TPC, while considerably (p < 0.05) lowering the levels of ROS and TBARS in the cotreated group compared with the As-induced group. Nonetheless, a nonsignificant variation was observed between the fisetin-only administered and the untreated rats (Table 1).

Table 1: Outcomes of the biochemical analysis along with total protein content.

| Parameter | Control | Arsenic | Arsenic + Fisetin | Fisetin |
|---|---|---|---|---|
| CAT (U/mg protein) | 7.62 ± 0.09a | 3.58 ± 0.26b | 6.77 ± 0.06a | 6.99 ± 0.07a |
| SOD (U/mg tissue) | 6.34 ± 0.12a | 3.29 ± 0.09b | 5.65 ± 0.06a | 6.19 ± 0.11a |
| GSR (nM NADPH oxidized/min/mg tissue) | 3.03 ± 0.06a | 1.62 ± 0.16b | 2.66 ± 0.06a | 2.96 ± 0.05a |
| GSH (nM/min/mg protein) | 15.87 ± 0.21a | 8.99 ± 0.121b | 14.98 ± 0.07a | 15.17 ± 0.09a |
| ROS (U/mg tissue) | | | | |
| TBARS (nM/min/mg tissue) | 14.47 ± 0.2a | 22.750 ± 0.27b | 15.91 ± 0.07a | 15.16 ± 0.19a |
| Total protein (µg/mg tissue) | 4.04 ± 0.08a | 1.88 ± 0.06b | 3.85 ± 0.04a | 4.02 ± 0.07a |

Different superscripts indicate a considerable difference at p < 0.05.

### 4.2. Effect of Fisetin on ALT, ALP, and AST

The outcomes of the study showed that the hepatic serum levels of AST, ALP, and ALT were substantially (p < 0.05) raised in the As-induced group compared to the control group (Table 2). Nevertheless, fisetin treatment substantially (p < 0.05) lowered these hepatic enzymes in the cotreated group compared to the As-intoxicated group. Moreover, a nonsignificant variation was seen between the fisetin-only treated and the control groups.

Table 2: Levels of liver serum enzymes.

| Parameter | Control | Arsenic | Arsenic + Fisetin | Fisetin |
|---|---|---|---|---|
| ALP (U/L) | 67.00 ± 2.5a | 176.33 ± 5.21b | 101.7 ± 1.77a | 86.67 ± 2.97a |
| ALT (U/L) | 42.33 ± 4.4a | 243.00 ± 7.24b | 76.33 ± 6.94a | 54.00 ± 3.79a |
| AST (U/L) | 65.00 ± 2.9a | 279.67 ± 11.5b | 106.3 ± 2.41a | 78.33 ± 5.05a |

Different superscripts indicate a considerable difference at p < 0.05.

### 4.3. Effect of Fisetin on Inflammatory Markers

As exposure substantially (p < 0.05) raised the levels of IL-1β, TNF-α, NF-κB, and IL-6 and the activity of COX-2 in the As-induced group compared to the control group (Table 3). Nonetheless, fisetin supplementation markedly (p < 0.05) diminished these inflammatory indices in the cotreated (As + fisetin) group compared to the As group. There was a nonsignificant variation between the fisetin-only treated and the control groups.

Table 3: Levels of inflammatory markers.

| Parameter | Control | Arsenic | Arsenic + Fisetin | Fisetin |
|---|---|---|---|---|
| NF-κB (ng/g tissue) | 12.53 ± 0.64a | 64.38 ± 0.94b | 17.97 ± 1.00a | 12.16 ± 0.67a |
| TNF-α (ng/g tissue) | 6.31 ± 0.53a | 17.76 ± 1.02b | 8.73 ± 0.50a | 6.26 ± 0.56a |
| IL-1β (ng/g tissue) | 23.89 ± 1.13a | 87.38 ± 1.28b | 29.57 ± 1.09a | 23.84 ± 1.01a |
| IL-6 (ng/g tissue) | 4.62 ± 0.45a | 22.64 ± 2.00b | 7.11 ± 0.98a | 4.59 ± 0.37a |
| COX-2 (ng/g tissue) | 23.39 ± 0.80a | 65.75 ± 2.19b | 29.10 ± 1.29a | 23.36 ± 0.70a |

Different superscripts indicate a considerable difference at p < 0.05.
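As a purely illustrative calculation from the group means reported in Table 3, the fold change of each inflammatory marker in the arsenic group relative to the control can be computed as follows.

```python
# Illustrative arithmetic from the group means reported in Table 3 (ng/g tissue).
table3_means = {            # marker: (control mean, arsenic mean)
    "NF-κB": (12.53, 64.38),
    "TNF-α": (6.31, 17.76),
    "IL-1β": (23.89, 87.38),
    "IL-6":  (4.62, 22.64),
    "COX-2": (23.39, 65.75),
}
for marker, (control, arsenic) in table3_means.items():
    print(f"{marker}: {arsenic / control:.1f}-fold increase vs. control")
# NF-κB ~5.1-fold, TNF-α ~2.8-fold, IL-1β ~3.7-fold, IL-6 ~4.9-fold, COX-2 ~2.8-fold
```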
### 4.4. Effect of Fisetin on Antiapoptotic and Proapoptotic Markers

To ascertain the probable antiapoptotic activity of fisetin, a property underlying its protective effect against As-instigated apoptosis in hepatic tissues, we estimated the alterations in the levels of the antiapoptotic marker Bcl-2 and the proapoptotic markers Bax, Caspase-3, and Caspase-9 (Table 4). The results showed that As induction considerably (p < 0.05) decreased the antiapoptotic marker while increasing the proapoptotic inducers in the As-intoxicated rats compared with the control rats. Nevertheless, fisetin cotreatment substantially (p < 0.05) restored the level of the above-stated antiapoptotic marker while reducing the levels of the proapoptotic markers in the cotreated group as contrasted with the arsenic-induced group. However, a nonsignificant alteration was noticed between the mean values of the fisetin-only treated and the untreated rats.

Table 4: Levels of proapoptotic and antiapoptotic markers.

| Marker | Control | Arsenic | Arsenic + Fisetin | Fisetin |
|---|---|---|---|---|
| Bcl-2 | 14.49 ± 0.65a | 6.10 ± 0.98b | 12.34 ± 0.28a | 14.57 ± 0.69a |
| Bax | 2.58 ± 0.25a | 7.52 ± 0.34b | 2.92 ± 0.21a | 2.55 ± 0.12a |
| Caspase-3 | 1.73 ± 0.09a | 10.58 ± 0.57b | 2.74 ± 0.31a | 1.71 ± 0.12a |
| Caspase-9 | 4.70 ± 0.18a | 15.24 ± 0.79b | 5.81 ± 0.25a | 4.69 ± 0.21a |

Different superscripts indicate a considerable difference at p < 0.05.

### 4.5. Effect of Fisetin on Histopathology

Figure 1 shows the comparative changes in the histopathological profile. As exposure caused necrosis, sinusoid dilation, and apoptosis of hepatocytes, along with central venule disruption, in the As-induced rats compared to the control rats (Figures 1(b) and 1(a)). Nonetheless, fisetin supplementation substantially (p < 0.05) mitigated the prevalence and intensity of the histopathological impairments, showing diminished dilation of sinusoids with no necrotic cells or central venule disruption, and restored the classic architecture of the liver cells in the coadministered (As + fisetin) group compared to the As group (Figures 1(b) and 1(c)). In the fisetin-only treated rats, the histological architecture was similar to that of the control group (Figures 1(d) and 1(a)).

Figure 1: Protective impact of fisetin on arsenic-deteriorated histopathology (hematoxylin-eosin, 40X). (a) Control group; (b) arsenic group (50 mg/kg); (c) arsenic (50 mg/kg) + fisetin group (50 mg/kg); (d) fisetin group (50 mg/kg). Central venule (CV); Kupffer cells (KC); hepatocytes (H); sinusoids (S); nucleus (N).
## 5. Discussion

Arsenic poisoning is a worldwide health predicament. Chronic As intoxication has been strongly linked with several disorders and health problems in humans [31]. The overgeneration of intracellular ROS after As exposure mediates multiple alterations in cell functioning by changing signaling molecules or by provoking direct oxidative damage to macromolecules [32]. Thus, antioxidants with remarkable free radical scavenging properties can alleviate As-instigated toxicities [33]. Therefore, the current investigation was designed to estimate the antioxidant potency of fisetin, a flavonoid with diverse pharmacological properties, against As-induced hepatotoxicity in rats.

The outcomes of the current research revealed that As intoxication considerably reduced the activities of SOD, CAT, and GSR, the GSH content, and TPC while escalating the levels of ROS and TBARS. The body's defense system is made up of enzymatic and nonenzymatic antioxidants, which act swiftly to neutralize free radicals [34]. SOD, GPx, and CAT are enzymatic antioxidants, while GSH is a nonenzymatic antioxidant [35]. CAT and GPx convert hydrogen peroxide (H2O2) into water [36], whereas SOD carries out the conversion of the superoxide anion (O2•−) into H2O2 [37], and reduced GSH functions as an electron donor in these redox reactions [38]. GSH is maintained by GSR, which regenerates reduced GSH from oxidized GSSG for the continuous functioning of GPx [39]. However, an excessive escalation of ROS resulting from a deficient antioxidant defense, or the collapse of the cell's buffering system to retain redox balance, leads to OS, which consequently triggers numerous modifications of biomolecules and ultimately leads to disease conditions [40]. The oxidative damage to lipids is known as lipid peroxidation (LPO) [41]. LPO, in turn, may cause damage that affects membrane integrity as well as fluidity and permeability [42].
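For reference, the standard stoichiometry of the antioxidant-enzyme reactions summarized above can be written as follows (textbook reactions, not taken from the cited assay methods):

```latex
\begin{align*}
\text{SOD:} &\quad 2\,\mathrm{O_2^{\bullet-}} + 2\,\mathrm{H^+} \longrightarrow \mathrm{H_2O_2} + \mathrm{O_2}\\
\text{CAT:} &\quad 2\,\mathrm{H_2O_2} \longrightarrow 2\,\mathrm{H_2O} + \mathrm{O_2}\\
\text{GPx:} &\quad 2\,\mathrm{GSH} + \mathrm{H_2O_2} \longrightarrow \mathrm{GSSG} + 2\,\mathrm{H_2O}\\
\text{GSR:} &\quad \mathrm{GSSG} + \mathrm{NADPH} + \mathrm{H^+} \longrightarrow 2\,\mathrm{GSH} + \mathrm{NADP^+}
\end{align*}
```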
However, fisetin provision remarkably alleviated the above-stated biochemical alterations by enhancing the activities of the antioxidant defense and the total protein content and by lowering the levels of ROS and TBARS. This curative effect of fisetin may be due to the presence of a hydroxyl group on its A-ring, which positions it at the lipid–water interface of the membrane and gives it free radical scavenging activity comparable to that of other flavonoids such as quercetin. Thus, it inhibited LPO by preventing the further diffusion of reactive oxygen species into the hydrophobic lipid core [8].

In the current investigation, As exposure caused a remarkable increment in ALP, ALT, and AST levels, indicating damage to hepatic tissues. As documented earlier, these enzymes exist in hepatocytes, where their serum levels are ordinarily low. However, when liver cells are damaged, their membranes become more permeable, and as a result the enzymes are liberated into the blood [43]. Our outcomes are in harmony with the results of Un et al. [44], who reported similar findings following As treatment. In the current research, however, fisetin oral gavage remarkably reduced the levels of hepatic serum enzymes, which may be due to its antioxidant potential.

Inflammation is a defensive response of the body, provoked by internal stimuli, i.e., stressed, impaired, or defectively functioning tissues, as well as by external sources, i.e., reactive chemicals, allergens, microbes, and ROS [45]. This inflammatory process leads to elevated cell membrane permeability and vasodilatation, which promotes the migration of leukocytes and the release of inflammatory mediators [46]. NF-κB is among the fundamental inflammatory mediators that are triggered instantly in response to internal or external cellular stimulants, ultimately increasing the levels of TNF-α [46], IL-1β [47], and IL-6 [46] and the activity of COX-2 [47]. The outcomes of the present investigation showed that As induction substantially boosted the levels of IL-1β, TNF-α, NF-κB, and IL-6 and the activity of COX-2. However, fisetin coadministration with As remarkably lowered the elevated levels of these inflammatory markers, which demonstrates its anti-inflammatory property.

Apoptosis, a cell death mechanism that helps to eradicate undesired cells, is accomplished by intrinsic (mitochondrial) and extrinsic (death receptor) pathways [48]. In the current investigation, we assessed apoptosis by estimating the levels of Bax, Caspase-3, Caspase-9, and Bcl-2. The outcomes showed that As exposure lowered the level of Bcl-2 while boosting the levels of Bax, Caspase-3, and Caspase-9. Bax and Bcl-2 are proteins of the Bcl-2 family. Bcl-2 promotes cellular longevity by preventing the opening of the MPT (mitochondrial permeability transition) pore complex and defending against Cytochrome c liberation, whereas Bax activates the MPT pore and regulates the discharge of Cytochrome c into the cytosol [49], which activates Caspase-9, which in turn cleaves Caspase-3 [50], eventually leading to apoptosis [51]. Caspases are cysteine proteases that cleave some 100 distinct target proteins and provoke apoptosis [52]. Thus, the antiapoptotic/proapoptotic Bcl-2/Bax ratio regulates apoptosis [53]. Nevertheless, fisetin mitigated this hepatocyte apoptosis by downregulating the proapoptotic markers and upregulating the antiapoptotic marker in the rat liver.
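As a simple worked example of this ratio argument, the Bcl-2/Bax ratios computed from the group means in Table 4 are shown below (illustrative arithmetic only).

```python
# Bcl-2/Bax ratio from the group means in Table 4; a ratio < 1 marks a proapoptotic shift.
table4_means = {                  # group: (Bcl-2 mean, Bax mean)
    "Control":           (14.49, 2.58),
    "Arsenic":           (6.10, 7.52),
    "Arsenic + Fisetin": (12.34, 2.92),
    "Fisetin":           (14.57, 2.55),
}
for group, (bcl2, bax) in table4_means.items():
    print(f"{group}: Bcl-2/Bax = {bcl2 / bax:.2f}")
# Control ~5.62, Arsenic ~0.81, Arsenic + Fisetin ~4.23, Fisetin ~5.71
```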
Our outcomes verify the antiapoptotic potential of fisetin.

The outcomes of the present research demonstrated that As administration induced intense histopathological impairments in the hepatic tissues. The reason behind these toxic histological alterations is LPO, which eventually leads to inflammation and apoptosis in hepatocytes. The As-induced hepatic injuries include central venule disruption, apoptosis of hepatocytes, necrosis, and sinusoid dilation. Our results are compatible with those of Al-Forkan et al. [54], who studied the retention mechanism of As in organs and its effects on liver enzymes, hematology, and histology. However, fisetin treatment remarkably ameliorated the histopathological damage caused by As. Fisetin may have restored the histopathological profile owing to its free radical quenching, anti-inflammatory, and antiapoptotic attributes.

### 5.1. Docking Analysis

ChemDraw Ultra 12.0 and Chem 3D Pro were utilized in the GOLD docking workflow for the energy minimization of ligands, in accordance with the method adopted by Andleeb et al. (2020). To assess the efficacy of these receptors, the molecular docking analysis examined the molecular interactions of the fisetin molecule with different receptor proteins. The coordinate crystal structures of the receptor proteins were obtained from the Protein Data Bank (PDB). Each of these structures (resolution up to 2.70 Å) was then loaded one at a time into the GOLD suite version 5.3.0 for docking. Using GOLD 5.3.0, several docked complexes were screened based on docking fitness and GOLD score, and the GOLD program identified the compound interacting most effectively with each receptor. The results were evaluated for binding compatibility based on docking score and fitness, and the ligand pose with the highest binding affinity to the receptor molecule was selected as the best candidate.

The three receptor proteins 5YTO, 1ILR, and 6WNG demonstrated the best interaction with the fisetin compound, with GOLD fitness values of 77.99, 68.50, and 60.35 and GOLD docking scores of -9.29, -8.96, and -8.90, respectively. These interactions included the formation of hydrogen bonds (MET A:1, ASP A:880, SER A:108, GLU A:25, GLN A:148, and PRO A:190). These three receptor proteins displayed a very strong association with the fisetin compound and can be regarded as possible receptor molecules that may be useful as indicators of inflammation, antiapoptotic activity, and antioxidant activity. With GOLD fitness values of 52.44, 53.42, 38.15, and 62.30, GOLD docking scores of -8.62, -8.42, -7.78, and -7.80, and hydrogen-bond interactions (SER A:99, ALA A:321, ASP A:98, ASP A:135, ASP A:728, PRO A:134, GLN A:148, PRO A:190, ALA A:192, ARG A:239, PRO A:187, THR A:181, ASN A:351, HIS A:357, THR A:175, and PHE A:179), 1ANJ, 5YOY, IBDO, and 51F9 demonstrated moderate binding affinity. The interactions of 6FSO and 6TJL were the least favorable, with fitness values of 53.04 and 50.93 and docking scores of -7.19 and -7.32, respectively. The receptor proteins are ranked with the fisetin compound in the following order: 5YTO > 1ILR > 6WNG > 1ANJ > 5YOY > IBDO > 51F9 > 6FSO > 6TJL. The best poses generated in Discovery Studio are depicted in Figure 2 as 2D depictions of the interactions between the proteins and the ligand.

Figure 2: In silico molecular docking analysis of 2D and 3D interactions between fisetin and the screened receptor proteins (a) 5YTO, (b) 1ILR, and (c) 6WNG.
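The reported GOLD fitness values and docking scores can also be tabulated and sorted programmatically; the sketch below (illustrative only) orders the receptors by docking score, which reproduces the top three interactions named above, while the paper's final ranking additionally weighs the fitness values.

```python
# Reported GOLD results for fisetin against each receptor: (fitness, docking score).
gold_results = {
    "5YTO": (77.99, -9.29), "1ILR": (68.50, -8.96), "6WNG": (60.35, -8.90),
    "1ANJ": (52.44, -8.62), "5YOY": (53.42, -8.42), "IBDO": (38.15, -7.78),
    "51F9": (62.30, -7.80), "6FSO": (53.04, -7.19), "6TJL": (50.93, -7.32),
}
# Sort by docking score (more negative = stronger predicted binding).
for pdb_id, (fitness, score) in sorted(gold_results.items(), key=lambda kv: kv[1][1]):
    print(f"{pdb_id}: fitness = {fitness:.2f}, docking score = {score:.2f}")
```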
## 6. Conclusion

Arsenic administration in adult male albino rats elevated serum liver enzyme levels and inflammatory and apoptotic indicators and worsened the histopathological profile. It also disturbed the activities of the enzymatic antioxidants, TPC, and the levels of ROS and TBARS, an imbalance that resulted in hepatic dysfunction. Nevertheless, owing to its underlying antioxidant, antiapoptotic, and anti-inflammatory potential, fisetin therapy significantly reduced the arsenic-induced deficits in all of the aforementioned measures. [55]

--- *Source: 1005255-2022-10-20.xml*
# Fisetin Attenuates Arsenic-Induced Hepatic Damage by Improving Biochemical, Inflammatory, Apoptotic, and Histological Profile: In Vivo and In Silico Approach

**Authors:** Muhammad Umar; Saima Muzammil; Muhammad Asif Zahoor; Shama Mustafa; Asma Ashraf; Sumreen Hayat; Muhammad Umar Ijaz
**Journal:** Evidence-Based Complementary and Alternative Medicine (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1005255
--- ## Abstract Arsenic (As) is a toxic metalloid and human carcinogen that may cause hepatotoxicity. Fisetin (3, 3′, 4′, 7-tetrahydroxyflavone) is a phytoflavonoid, which shows diverse therapeutic activities. This study aimed to examine the remedial potential of fisetin against As-instigated hepatotoxicity in adult male rats. To accomplish this aim, albino rats (N = 48) were evenly classified into 4 groups: control group, As (10 mg/kg) group, fisetin (2.5 mg/kg) + As (10 mg/kg) group, and fisetin (2.5 mg/kg) group. After one month of treatment, biochemical assay, total protein content (TPC), hepatic serum enzymes, inflammatory as well as pro- or anti-apoptotic markers, and histopathological profile of hepatic tissues were estimated. As administration disordered the biochemical profile by decreasing activities of antioxidant enzymes i.e., catalase (CAT), superoxide dismutase (SOD), glutathione reductase (GSR), and glutathione (GSH) content while escalating the levels of reactive oxygen species (ROS), and thiobarbituric acid reactive substances (TBARS). TPC was also considerably reduced after exposure to As. Furthermore, As markedly raised the levels of liver serum enzymes such as aspartate transaminase (AST), alkaline phosphatase (ALP), and alanine transaminase (ALT) as well as the levels of inflammatory markers, i.e., nuclear factor- κB (NF-κB), tumor necrosis- α (TNF-α), interleukin-1β (IL-1β), interleukin-6 (IL-6), and cyclo-oxygenase-2 (COX-2) activity. Besides, it lowered the level of antiapoptotic markers (Bcl-2) and upregulated the levels of proapoptotic markers (Bax, Caspase-3, and Caspase-9). Additionally, As exposure led to histopathological damage in hepatic tissues. However, fisetin administration remarkably alleviated all the depicted hepatic damages. For further verification, the screening of several dock complexes was performed by using the GOLD 5.3.0 version. Based on docking fitness and GOLD score, the ranking order of receptor proteins with fisetin compound is superoxide dismutase, interleukin, aspartate aminotransferase, alkaline phosphatase, TNF-alpha, alanine transaminase, cyclo-oxygenase 2, antiapoptotic, and glutathione reductase. Out of these three receptor proteins superoxide dismutase, interleukin, and aspartate aminotransferase showed the best interaction with the fisetin compound. In vivo and in silico outcomes of the current study demonstrated that fisetin could potentially ameliorate As-instigated hepatotoxicity. --- ## Body ## 1. Introduction Arsenic (As) is a noxious metalloid, which is ranked 1st by the United States (US) Agency for Disease Registry and Toxic Substances as well as US Environmental Protection Agency [1] that affected nearly 200 million people globally [2]. The most reported types of As-instigated damages in humans include skin diseases (viz. hyperkeratosis, hyper-pigmentation), skin or epithelial tissues cancers; respiratory tract, gastrointestinal tract, liver, kidney, central nervous system, cardiovascular, and reproductive complexities, thereby enhancing the rate of morbidity and mortality [3]. Humans get exposed to arsenic via inhalation, skin contact with As-contaminated products, and polluted drinking water (H2O) [4]. Moreover, As toxicity depends on the chemical nature of arsenicals (arsenic-comprising compounds), which exist in both organic and inorganic forms with differently charged cations (e.g., As3þ and As5þ) [5]. 
Overall, the inorganic form of As is more toxic than the organic form of this metal [6].After absorption from the lungs, As is delivered by the gut into the bloodstream where it (99% of arsenic) binds with red blood cells in circulating fluid, which eventually transports it to other parts of the body [7] and accumulates in different organs, i.e., lungs, liver, kidney, and heart [8]. The liver is a vital organ that tends to retain higher concentrations of As [9]. One of the most generally putative mechanisms to describe As-instigated toxicities is oxidative stress (OS) [10]. OS can cause mitochondrial dysfunction via fibrosis (TGF-β/Smad pathway), inflammation (NF-ĸB, TNF-α, IL-1, and IL-6), apoptosis (AKT-PKB, PI3/AKT, AKT/ERK, MAPK, PKCδ-JNK, and p53 pathways), and necrosis [11]. Besides, As intoxication was shown to weaken the antioxidant defense and damage several macromolecules (deoxyribonucleic acid (DNA), proteins, and lipids), which led to the foundation of the membrane, cell, and tissue dysfunction [12]. Moreover, As exposure may lead to inflammation which also results in liver damage. Thus, after scrutinizing the numerous sources of As exposure and their damaging impacts on human health, especially on the liver, a study on therapies against As-induced toxicities is needed.The advantage of using in silico methods for drug design is that it takes less time and money to find novel targets. Several biological issues have been resolved using in silico techniques that can characterize interacting molecules and forecast three-dimensional (3D) structures. To ascertain how various target proteins interact with the discovered chemical, in silico investigation was carried out. In this instance, fisetin (3,3′,4′,7-tetrahydroxyflavone) is a phytoflavonoid, which profoundly exists in multiple dietary sources such as apple, persimmon, grape, strawberry, cucumber, onion, and its quantities range from 2 to 160 mg/g with an average everyday consumption estimation of 0.4 mg [13]. It shows a broad range of therapeutic activities that include antioxidant [14], anticarcinogenic [13], anti-inflammatory [15], neuroprotective [15], and cardioprotective effects [14]. Up till now, the ameliorative potential of fisetin against arsenic-provoked hepatotoxicity is not available. So, the present investigation proposed to explore the remedial potency of fisetin against As-instigated hepatotoxicity in rats. ## 2. Materials and Methods ### 2.1. Chemicals As and fisetin were purchased from Germany (Sigma-Aldrich). ### 2.2. Animals Sexually mature male albino rats (n = 48) weighing 150 ± 30 g were kept in 12 rats per cage (made of steel) in the animal breeding as well as rearing house of the University of Agriculture, Faisalabad. All the rats were provided with tap water ad libitum as well as standard chow and photoperiod of 12 h light/dark cycle at temperature ranges between 23 and 26°C. Rats were kept in subordination with the European Union protocol (CEE Council 86/609) of animal care and experimentation. ## 2.1. Chemicals As and fisetin were purchased from Germany (Sigma-Aldrich). ## 2.2. Animals Sexually mature male albino rats (n = 48) weighing 150 ± 30 g were kept in 12 rats per cage (made of steel) in the animal breeding as well as rearing house of the University of Agriculture, Faisalabad. All the rats were provided with tap water ad libitum as well as standard chow and photoperiod of 12 h light/dark cycle at temperature ranges between 23 and 26°C. 
Rats were kept in subordination with the European Union protocol (CEE Council 86/609) of animal care and experimentation. ## 3. Experimental Protocol Albino rats (N = 48) were allocated into 4 groups (N = 12) and administered orally the following: control group (Treated with normal saline), As group (10 mg/kg. b. wt. Of As), cotreated group (10 mg/kg b.wt. Of As and 2.5 mg/kg. b.wt. Of fisetin), and only fisetin administered group (2.5 mg/kg.b.wt. Of fisetin). The entire experimental trial was conducted for thirty days. After one month of treatment, hepatic tissues were excised, weighed, and kept till additional analysis. ### 3.1. Biochemical Assay and TPC In the hepatic tissues, the activity of CAT was ascertained according to the methodology described by Chance and Maehly [16]. SOD activity was measured by following the process of Kakkar et al. [17]. GSR activity was determined according to the protocol of Carlberg and Mannervik [18]. GSH content was measured via the technique designed by Jollow et al. [19]. Hayashi et al. [20] protocol was used to estimate the level of ROS. The level of TBARS was assessed by following the technique of Iqbal et al. [21]. The TPC of hepatic tissues was quantified according to the Lowry method as modified by Peterson [22]. ### 3.2. Liver Serum Enzymes The levels of ALT, AST, and ALP were determined in accordance with the commercial kits purchased from Wiesbaden, Germany. ### 3.3. Inflammation The levels of TNF-α, NF-κB, IL-6, IL-1β, and COX-2 activity were estimated with an ELISA kit as per the company's guidance, BioTek, Winooski, VT, United States of America (USA). ### 3.4. Apoptosis The levels of Bcl-2, Bax, Caspase-3, and Caspase-9 were estimated with the help of ELISA kits bought from Cusabio Technology Llc, Houston, TX, USA. ### 3.5. Histopathology For histopathological analysis, initially, hepatic tissues were cleaned in 0.9% chilled saline and placed in 10% formalin solution, subsequently desiccated in mounting concentrations of alcohol, and embedded in paraffin wax. After that, paraffin-encased 5-µm slices were pruned via microtome, and staining was done with the help of hematoxylin-eosin (H&E) stain and observed below the Leica LB microscope at 400X [23]. ### 3.6. Statistical Analysis The results mean ± standard error (SE) was presented in the tables after applying ANOVA accompanied by Tukey’s test to interpret the entire data with the help of Minitab software. Results were declared meaningful atp < 0.05. ### 3.7. In Silico Analysis #### 3.7.1. Ligand Preparation The two-dimensional (2D) configuration of fisetin phytocompound retrieved from PubChem (https://pubchem.ncbi.nlm.nih.gov) and treated in the ChemDraw ultra 12.0 and Chem 3D Pro for ionization, minimization, and optimization of ligands. Force field via the module for minimization and optimization of ligands having the lowest energy conformer of the ligand. #### 3.7.2. Receptor Preparation In order to assess the molecular docking, optimum resolution X-ray structures of proteins were obtained from the Protein Databank (RCSB PDB) (https://www.rcsb.org) and underwent the Protein preparation wizard of Maestro (Gold v 5.3.0). This module processed the protein by the addition of hydrogen atoms to the protein structure, removing solvent molecules (H2O), creating disulfide bonds, assigning bond orders, filling missing side chains as well as loops, and generating a protonation state at the cellular level pH (7.4 ± 0.5) using the Epik tool of protein structures for ligands. 
Following the processing of protein structures, the PDB ID of this 5YTO [24], 1ILR [25], 6WNG (10.2210/pdb6WNG/PDB), 1ANJ [26], 5YOY [27], IBDO [28], 51F9 [29], 6FSO [30], and 6TJL (10.2210/pdb6TJL/PDB) structures were optimized using GOLD at pH 7.0, and the OPLS3e force field was used to perform restrained minimization for energy minimization and protein structural geometry optimization. #### 3.7.3. Molecular Docking The docking studies were carried out with the use of molecular docking software parameters (https://www.ccdc.cam.ac.uk). Docking simulations were carried out by the Lamarckian genetic algorithm (LGA) and the Solis and Wets local search approach. The initial position, orientation, and torsion of the ligand molecules were determined at random. Every docking experiment was filtered from ten distinct runs that were set to stop followed by a maximum of 1.5 Å assessments.Molecular docking experiments were used to investigate the potential binding/interaction between proteins and ligands. TableS1 illustrates the binding affinity (kcal/mol) of the fisetin (5281614) phytocompound with different receptor proteins. Three-dimensional structures of receptor proteins, alkaline phosphate (PDB ID 1ANJ), alanine transaminase (PDB ID, IBDO), cyclooxygenase-2 (PDB ID 51F9), interleukin (PDB ID, 1ILR), TNF-a (PDB ID, 5YOY), superoxide dismutase (PDB ID, 5YTO), antiapoptotic (PDB ID, 6FSO), glutathione reductase (PDB ID, 6TJL), and aspartate aminotransferase (PDB ID, 6WNG) were acquired from the PDB database (Protein Data Bank). Docking calculations were carried out with GOLD version 5.3.0 and BIOVIA discovery studio (http://www.3dsbiovia.com) for modeling and visualization. The initial position, orientation, and torsion of the ligand molecules were determined at random. Every docking experiment was extracted from a total of ten distinct runs that were set to end after a maximum of 1.5 Å evaluations. ## 3.1. Biochemical Assay and TPC In the hepatic tissues, the activity of CAT was ascertained according to the methodology described by Chance and Maehly [16]. SOD activity was measured by following the process of Kakkar et al. [17]. GSR activity was determined according to the protocol of Carlberg and Mannervik [18]. GSH content was measured via the technique designed by Jollow et al. [19]. Hayashi et al. [20] protocol was used to estimate the level of ROS. The level of TBARS was assessed by following the technique of Iqbal et al. [21]. The TPC of hepatic tissues was quantified according to the Lowry method as modified by Peterson [22]. ## 3.2. Liver Serum Enzymes The levels of ALT, AST, and ALP were determined in accordance with the commercial kits purchased from Wiesbaden, Germany. ## 3.3. Inflammation The levels of TNF-α, NF-κB, IL-6, IL-1β, and COX-2 activity were estimated with an ELISA kit as per the company's guidance, BioTek, Winooski, VT, United States of America (USA). ## 3.4. Apoptosis The levels of Bcl-2, Bax, Caspase-3, and Caspase-9 were estimated with the help of ELISA kits bought from Cusabio Technology Llc, Houston, TX, USA. ## 3.5. Histopathology For histopathological analysis, initially, hepatic tissues were cleaned in 0.9% chilled saline and placed in 10% formalin solution, subsequently desiccated in mounting concentrations of alcohol, and embedded in paraffin wax. After that, paraffin-encased 5-µm slices were pruned via microtome, and staining was done with the help of hematoxylin-eosin (H&E) stain and observed below the Leica LB microscope at 400X [23]. ## 3.6. 
Statistical Analysis The results mean ± standard error (SE) was presented in the tables after applying ANOVA accompanied by Tukey’s test to interpret the entire data with the help of Minitab software. Results were declared meaningful atp < 0.05. ## 3.7. In Silico Analysis ### 3.7.1. Ligand Preparation The two-dimensional (2D) configuration of fisetin phytocompound retrieved from PubChem (https://pubchem.ncbi.nlm.nih.gov) and treated in the ChemDraw ultra 12.0 and Chem 3D Pro for ionization, minimization, and optimization of ligands. Force field via the module for minimization and optimization of ligands having the lowest energy conformer of the ligand. ### 3.7.2. Receptor Preparation In order to assess the molecular docking, optimum resolution X-ray structures of proteins were obtained from the Protein Databank (RCSB PDB) (https://www.rcsb.org) and underwent the Protein preparation wizard of Maestro (Gold v 5.3.0). This module processed the protein by the addition of hydrogen atoms to the protein structure, removing solvent molecules (H2O), creating disulfide bonds, assigning bond orders, filling missing side chains as well as loops, and generating a protonation state at the cellular level pH (7.4 ± 0.5) using the Epik tool of protein structures for ligands. Following the processing of protein structures, the PDB ID of this 5YTO [24], 1ILR [25], 6WNG (10.2210/pdb6WNG/PDB), 1ANJ [26], 5YOY [27], IBDO [28], 51F9 [29], 6FSO [30], and 6TJL (10.2210/pdb6TJL/PDB) structures were optimized using GOLD at pH 7.0, and the OPLS3e force field was used to perform restrained minimization for energy minimization and protein structural geometry optimization. ### 3.7.3. Molecular Docking The docking studies were carried out with the use of molecular docking software parameters (https://www.ccdc.cam.ac.uk). Docking simulations were carried out by the Lamarckian genetic algorithm (LGA) and the Solis and Wets local search approach. The initial position, orientation, and torsion of the ligand molecules were determined at random. Every docking experiment was filtered from ten distinct runs that were set to stop followed by a maximum of 1.5 Å assessments.Molecular docking experiments were used to investigate the potential binding/interaction between proteins and ligands. TableS1 illustrates the binding affinity (kcal/mol) of the fisetin (5281614) phytocompound with different receptor proteins. Three-dimensional structures of receptor proteins, alkaline phosphate (PDB ID 1ANJ), alanine transaminase (PDB ID, IBDO), cyclooxygenase-2 (PDB ID 51F9), interleukin (PDB ID, 1ILR), TNF-a (PDB ID, 5YOY), superoxide dismutase (PDB ID, 5YTO), antiapoptotic (PDB ID, 6FSO), glutathione reductase (PDB ID, 6TJL), and aspartate aminotransferase (PDB ID, 6WNG) were acquired from the PDB database (Protein Data Bank). Docking calculations were carried out with GOLD version 5.3.0 and BIOVIA discovery studio (http://www.3dsbiovia.com) for modeling and visualization. The initial position, orientation, and torsion of the ligand molecules were determined at random. Every docking experiment was extracted from a total of ten distinct runs that were set to end after a maximum of 1.5 Å evaluations. ## 3.7.1. Ligand Preparation The two-dimensional (2D) configuration of fisetin phytocompound retrieved from PubChem (https://pubchem.ncbi.nlm.nih.gov) and treated in the ChemDraw ultra 12.0 and Chem 3D Pro for ionization, minimization, and optimization of ligands. 
Force field via the module for minimization and optimization of ligands having the lowest energy conformer of the ligand. ## 3.7.2. Receptor Preparation In order to assess the molecular docking, optimum resolution X-ray structures of proteins were obtained from the Protein Databank (RCSB PDB) (https://www.rcsb.org) and underwent the Protein preparation wizard of Maestro (Gold v 5.3.0). This module processed the protein by the addition of hydrogen atoms to the protein structure, removing solvent molecules (H2O), creating disulfide bonds, assigning bond orders, filling missing side chains as well as loops, and generating a protonation state at the cellular level pH (7.4 ± 0.5) using the Epik tool of protein structures for ligands. Following the processing of protein structures, the PDB ID of this 5YTO [24], 1ILR [25], 6WNG (10.2210/pdb6WNG/PDB), 1ANJ [26], 5YOY [27], IBDO [28], 51F9 [29], 6FSO [30], and 6TJL (10.2210/pdb6TJL/PDB) structures were optimized using GOLD at pH 7.0, and the OPLS3e force field was used to perform restrained minimization for energy minimization and protein structural geometry optimization. ## 3.7.3. Molecular Docking The docking studies were carried out with the use of molecular docking software parameters (https://www.ccdc.cam.ac.uk). Docking simulations were carried out by the Lamarckian genetic algorithm (LGA) and the Solis and Wets local search approach. The initial position, orientation, and torsion of the ligand molecules were determined at random. Every docking experiment was filtered from ten distinct runs that were set to stop followed by a maximum of 1.5 Å assessments.Molecular docking experiments were used to investigate the potential binding/interaction between proteins and ligands. TableS1 illustrates the binding affinity (kcal/mol) of the fisetin (5281614) phytocompound with different receptor proteins. Three-dimensional structures of receptor proteins, alkaline phosphate (PDB ID 1ANJ), alanine transaminase (PDB ID, IBDO), cyclooxygenase-2 (PDB ID 51F9), interleukin (PDB ID, 1ILR), TNF-a (PDB ID, 5YOY), superoxide dismutase (PDB ID, 5YTO), antiapoptotic (PDB ID, 6FSO), glutathione reductase (PDB ID, 6TJL), and aspartate aminotransferase (PDB ID, 6WNG) were acquired from the PDB database (Protein Data Bank). Docking calculations were carried out with GOLD version 5.3.0 and BIOVIA discovery studio (http://www.3dsbiovia.com) for modeling and visualization. The initial position, orientation, and torsion of the ligand molecules were determined at random. Every docking experiment was extracted from a total of ten distinct runs that were set to end after a maximum of 1.5 Å evaluations. ## 4. Results ### 4.1. Effect of Fisetin on Biochemical Assay and TPC The activity of CAT, SOD, GSR, as well as GSH level and TPC, was substantially (p < 0.05) reduced after As intoxication, while the concentration of ROS and level of TBARS were raised as matched with the untreated group. Conversely, fisetin supplementation with As remarkably (p < 0.05) elevated the activity of CAT, SOD, GSR, and GSH content as well as TPC, while considerably (p < 0.05) lowered the levels of ROS and TBARS in the cotreated group as contrasted with the As-induced group. Nonetheless, nonsignificant variation was witnessed among rats of the fisetin-only administrated and the untreated rats (Table 1).Table 1 Presents the outcomes of biochemical analysis along with total protein content. 
| Parameter | Control | Arsenic | Arsenic + Fisetin | Fisetin |
| --- | --- | --- | --- | --- |
| CAT (U/mg protein) | 7.62 ± 0.09 a | 3.58 ± 0.26 b | 6.77 ± 0.06 a | 6.99 ± 0.07 a |
| SOD (U/mg tissue) | 6.34 ± 0.12 a | 3.29 ± 0.09 b | 5.65 ± 0.06 a | 6.19 ± 0.11 a |
| GSR (nM NADPH oxidized/min/mg tissue) | 3.03 ± 0.06 a | 1.62 ± 0.16 b | 2.66 ± 0.06 a | 2.96 ± 0.05 a |
| GSH (nM/min/mg protein) | 15.87 ± 0.21 a | 8.99 ± 0.121 b | 14.98 ± 0.07 a | 15.17 ± 0.09 a |
| ROS (U/mg tissue) | | | | |
| TBARS (nM/min/mg tissue) | 14.47 ± 0.2 a | 22.750 ± 0.27 b | 15.91 ± 0.07 a | 15.16 ± 0.19 a |
| Total protein (µg/mg tissues) | 4.04 ± 0.08 a | 1.88 ± 0.06 b | 3.85 ± 0.04 a | 4.02 ± 0.07 a |

Values in a row with different superscripts differ significantly at p < 0.05.

### 4.2. Effect of Fisetin on ALT, ALP, and AST

As shown in Table 2, the serum levels of AST, ALP, and ALT were substantially (p < 0.05) raised in the As-induced group compared with the control group. Nevertheless, fisetin treatment substantially (p < 0.05) lowered these hepatic enzymes in the cotreated group compared with the As-intoxicated group. Moreover, a nonsignificant variation was seen between the fisetin-only treated and the control groups.

Table 2. Levels of liver serum enzymes.

| Parameter | Control | Arsenic | Arsenic + Fisetin | Fisetin |
| --- | --- | --- | --- | --- |
| ALP (U/L) | 67.00 ± 2.5 a | 176.33 ± 5.21 b | 101.7 ± 1.77 a | 86.67 ± 2.97 a |
| ALT (U/L) | 42.33 ± 4.4 a | 243.00 ± 7.24 b | 76.33 ± 6.94 a | 54.00 ± 3.79 a |
| AST (U/L) | 65.00 ± 2.9 a | 279.67 ± 11.5 b | 106.3 ± 2.41 a | 78.33 ± 5.05 a |

Values in a row with different superscripts differ significantly at p < 0.05.
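The letter superscripts in Tables 1 and 2 come from the one-way ANOVA followed by Tukey's test described in Section 3.6, which the authors ran in Minitab. As a rough illustration of that workflow only, the sketch below reruns the same kind of comparison in Python; the replicate values are invented placeholders (the paper reports only means ± SE), so the procedure, not the numbers, is what reflects the study.

```python
# Minimal sketch of the one-way ANOVA + Tukey HSD comparison described in
# Section 3.6. The replicate values below are invented placeholders and do not
# reproduce the study's raw data.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

groups = {
    "Control":           [7.5, 7.7, 7.6, 7.7],   # e.g., CAT activity (U/mg protein)
    "Arsenic":           [3.4, 3.8, 3.5, 3.6],
    "Arsenic + Fisetin": [6.7, 6.8, 6.8, 6.7],
    "Fisetin":           [7.0, 7.0, 6.9, 7.1],
}

# Overall test: is there any difference among the four treatment groups?
f_stat, p_value = f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4g}")

# Post hoc test: which pairs of groups differ (alpha = 0.05)?
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```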
### 4.3. Effect of Fisetin on Inflammatory Markers

As shown in Table 3, As exposure substantially (p < 0.05) raised the levels of IL-1β, TNF-α, NF-κB, and IL-6 and the activity of COX-2 in the As-induced group compared with the control group. Nonetheless, fisetin supplementation markedly (p < 0.05) diminished these inflammatory indices in the cotreated (As + fisetin) group compared with the As group. The variation between the fisetin-only treated and the control groups was nonsignificant (p > 0.05).

Table 3. Levels of inflammatory markers.

| Parameter | Control | Arsenic | Arsenic + Fisetin | Fisetin |
| --- | --- | --- | --- | --- |
| NF-κB (ng/g tissue) | 12.53 ± 0.64 a | 64.38 ± 0.94 b | 17.97 ± 1.00 a | 12.16 ± 0.67 a |
| TNF-α (ng/g tissue) | 6.31 ± 0.53 a | 17.76 ± 1.02 b | 8.73 ± 0.50 a | 6.26 ± 0.56 a |
| IL-1β (ng/g tissue) | 23.89 ± 1.13 a | 87.38 ± 1.28 b | 29.57 ± 1.09 a | 23.84 ± 1.01 a |
| IL-6 (ng/g tissue) | 4.62 ± 0.45 a | 22.64 ± 2.00 b | 7.11 ± 0.98 a | 4.59 ± 0.37 a |
| COX-2 (ng/g tissue) | 23.39 ± 0.80 a | 65.75 ± 2.19 b | 29.10 ± 1.29 a | 23.36 ± 0.70 a |

Values in a row with different superscripts differ significantly at p < 0.05.

### 4.4. Effect of Fisetin on Antiapoptotic and Proapoptotic Markers

To ascertain the probable antiapoptotic activity of fisetin, which underlies its protective effect against As-instigated apoptosis in hepatic tissues, we estimated the alterations in the level of the antiapoptotic marker Bcl-2 and of the proapoptotic markers Bax, Caspase-3, and Caspase-9 (Table 4). As-induction considerably (p < 0.05) decreased the antiapoptotic index, whereas it increased the proapoptotic inducers in the As-intoxicated rats compared with the control rats. Nevertheless, fisetin cotreatment substantially (p < 0.05) restored the level of the antiapoptotic marker while reducing the levels of the proapoptotic markers in the cotreated group compared with the arsenic-induced group. However, a nonsignificant alteration was noticed between the mean values of the fisetin-only treated and the untreated rats.

Table 4. Levels of proapoptotic and antiapoptotic markers.

| Parameter | Control | Arsenic | Arsenic + Fisetin | Fisetin |
| --- | --- | --- | --- | --- |
| Bcl-2 | 14.49 ± 0.65 a | 6.10 ± 0.98 b | 12.34 ± 0.28 a | 14.57 ± 0.69 a |
| Bax | 2.58 ± 0.25 a | 7.52 ± 0.34 b | 2.92 ± 0.21 a | 2.55 ± 0.12 a |
| Caspase-3 | 1.73 ± 0.09 a | 10.58 ± 0.57 b | 2.74 ± 0.31 a | 1.71 ± 0.12 a |
| Caspase-9 | 4.70 ± 0.18 a | 15.24 ± 0.79 b | 5.81 ± 0.25 a | 4.69 ± 0.21 a |

Values in a row with different superscripts differ significantly at p < 0.05.
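Because the discussion below interprets apoptosis through the anti-/proapoptotic balance, it can help to make that ratio explicit. The short sketch below simply divides the group means reported in Table 4; it is plain arithmetic on the published means, not an additional analysis from the paper, and the uncertainties (± SE) are not propagated.

```python
# Bcl-2/Bax ratio computed from the group means in Table 4 (means only).
bcl2 = {"Control": 14.49, "Arsenic": 6.10, "Arsenic + Fisetin": 12.34, "Fisetin": 14.57}
bax  = {"Control": 2.58,  "Arsenic": 7.52, "Arsenic + Fisetin": 2.92,  "Fisetin": 2.55}

for group in bcl2:
    ratio = bcl2[group] / bax[group]
    print(f"{group:>18}: Bcl-2/Bax = {ratio:.2f}")

# Roughly 5.6 (Control), 0.8 (Arsenic), 4.2 (Arsenic + Fisetin), 5.7 (Fisetin):
# arsenic collapses the antiapoptotic balance, and fisetin co-treatment largely restores it.
```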
### 4.5. Effect of Fisetin on Histopathology

Figure 1 shows the comparative changes in the histopathological profile. As exposure caused necrosis, sinusoid dilation, and apoptosis of hepatocytes, along with central venule disruption, in the As-induced rats compared with the control rats (Figures 1(b) and 1(a)). Nonetheless, fisetin supplementation substantially (p < 0.05) reduced the prevalence and intensity of these histopathological impairments, with diminished sinusoid dilation, no necrotic cells or central venule disruption, and recovery of the typical architecture of liver cells in the coadministered (As + fisetin) group compared with the As group (Figures 1(b) and 1(c)). In the fisetin-only treated rats, the histological architecture was similar to that of the control group (Figures 1(d) and 1(a)).

Figure 1. Protective effect of fisetin on arsenic-deteriorated liver histopathology (hematoxylin-eosin, 40X). (a) Control group; (b) arsenic group (50 mg/kg); (c) arsenic (50 mg/kg) + fisetin group (50 mg/kg); (d) fisetin group (50 mg/kg). Central venule (CV); Kupffer cells (KC); hepatocytes (H); sinusoids (S); nucleus (N).

## 5. Discussion

Arsenic poisoning is a worldwide health predicament. Chronic As intoxication has been profoundly linked with several disorders and health problems in humans [31]. The overgeneration of intracellular ROS after As exposure mediates multiple alterations in cell functioning, by altering signaling molecules or by provoking direct oxidative damage to macromolecules [32]. Thus, antioxidants with remarkable free radical scavenging properties can alleviate As-instigated toxicities [33]. Therefore, the current investigation was designed to estimate the protective potency of fisetin, a flavonoid with diverse pharmacological properties, against As-induced hepatotoxicity in rats.

Outcomes of the current research revealed that As intoxication considerably reduced the activities of SOD, CAT, and GSR, the GSH content, and TPC, while escalating the levels of ROS and TBARS. The body's defense system is made up of antioxidants that may be enzymatic or nonenzymatic, which act swiftly to neutralize free radicals [34]. SOD, GPx, and CAT are enzymatic antioxidants, while GSH is a nonenzymatic antioxidant [35]. CAT and GPx convert hydrogen peroxide (H2O2) into water [36], whereas the conversion of superoxide anion (O2-) into H2O2 is carried out by SOD [37], and reduced GSH acts as the electron donor in these redox reactions [38]. The GSH pool is maintained by GSR, which regenerates reduced GSH from oxidized GSSG for the continued functioning of GPx [39]. However, an excessive escalation of ROS, resulting from a deficient antioxidant defense or a collapse of the cells' redox-buffering capacity, leads to OS, which in turn commences numerous modifications of biomolecules and ultimately leads to disease conditions [40]. The oxidative damage to lipids is known as LPO [41]. LPO, in turn, may lead to damage that affects membrane integrity as well as fluidity and permeability [42]. However, fisetin provision remarkably alleviated the above-stated biochemical alterations by enhancing the activities of the antioxidant defense and the total protein content and lowering the levels of ROS and TBARS. This curative effect of fisetin may be due to the presence of a hydroxyl group on its A-ring, which positions it at the lipid-water interface of the membrane and confers free radical scavenging activity similar to that of other flavonoids such as quercetin. Thus, it inhibited LPO by preventing the further diffusion of reactive oxygen species into the lipid hydrophobic core [8].

In the current investigation, As exposure caused a remarkable increment in ALP, ALT, and AST levels, indicating damage to hepatic tissues. As documented earlier, these enzymes reside in hepatocytes, and their serum levels are ordinarily low; however, when liver cells are damaged, their membranes become more permeable and the enzymes are liberated into the blood [43]. Our outcomes are in harmony with the results of Un et al. [44], who reported similar findings following As treatment.
However, in the current research, fisetin oral gavage remarkably reduced the levels of hepatic serum enzymes, which may be due to its antioxidant potential.Inflammation is the reflective response of the body's defense system, which is provoked by internal, i.e., stressed, impaired, or defective functioning of tissues, as well as external sources, i.e., reactive chemicals, allergens, microbes, and ROS [45]. This inflammatory process leads to elevated cell membrane permeability and vasodilatation that causes the nuclear translocation of different leukocytes and inflammatory markers [46]. NF-κB is among the fundamental inflammatory mediators which get triggered instantly in response to the internal or external cellular stimulant, which ultimately increases the levels of TNF-α [46], IL-1β [47], IL-6 [46] and activity of COX-2 [47]. Outcomes of the present investigation showed that As-induction substantially boosted the levels of IL-1β, TNF-α, NF-κB, IL-6, and COX-2 activity. However, fisetin coadministration with As remarkably lowered the elevated levels of inflammatory markers, which showed its anti-inflammatory property.Apoptosis, a cell death mechanism, which helps to eradicate undesired cells, is accomplished by intrinsic (mitochondrial) and extrinsic (death receptor) pathways [48]. In the current investigation, we assessed apoptosis by estimating the level of Bax, Caspase-3, Caspase-9, and Bcl-2. Outcomes showed that As exposure lowered the level of Bcl-2 while boosting the levels of Bax, Caspase-3, and Caspase-9. Bax and Bcl-2 are proteins that are related to the Bcl-2 family. Bcl-2 promotes cellular longevity by stabilizing the opening of the MPT (mitochondrial-permeability-transition) pore complex and defends against Cytochrome c liberation, whereas Bax activates MPT pore and regulates the discharge of Cytochrome c into the cytosol [49], which activate Caspase-9 that cleaves Caspase-3 [50], that eventually leads to apoptosis [51]. As evident, Caspases are cysteine proteases that cut 100 distinct target proteins and provoke apoptosis [52]. Thus, the anti- or pro-apoptotic Bcl-2/Bax ratio regulates apoptosis [53]. Nevertheless, fisetin mitigated these hepatocytes' apoptosis via down-and-upregulating the levels of pro- or anti-apoptotic markers, respectively, in the rat liver. Our outcomes verify the antiapoptotic potential of fisetin.The outcomes of the present research demonstrated that As administration induced intense histopathological impairments in the hepatic tissues. The reason behind these toxic histological alterations is LP, which eventually leads to inflammation and apoptosis in hepatocytes. As-induced hepatic injuries include central venule disruption, apoptosis of hepatocytes, necrosis, and sinusoid dilation. Our results are compatible with Al-Forkan et al. [54], who studied the retention mechanism of As in organs and its effect on liver enzymes, hematology, and histology. However, fisetin treatment remarkably ameliorated the histopathological damages caused by As. Fisetin restored the impairments of its histopathological profile may be due to its free radical quenching, anti-inflammatory, and antiapoptotic, attributes. ### 5.1. Docking Analysis ChemDraw Ultra 12.0 and Chem 3D Pro were utilized in GOLD docking for the energy minimization of ligands in accordance with the method adopted by Andleeb et al. (2020). 
In order to assess the efficacy of these receptors, the molecular docking analysis examined the molecular interactions of the fisetin molecule with different receptor proteins. The coordinate crystal structures of the receptor proteins were obtained from the Protein Data Bank (PDB). Each of these structures (with resolutions of up to 2.70 Å) was then loaded one at a time into the GOLD suite version 5.3.0 for docking, and the resulting dock complexes were screened on the basis of GOLD fitness and docking score. In this way the GOLD program identified the compound interacting most effectively with each receptor, the results were evaluated for binding compatibility, and the ligand with the highest binding affinity to a receptor was considered the best candidate.

The three receptor proteins 5YTO, 1ILR, and 6WNG demonstrated the best interaction with the fisetin compound, with GOLD fitness values of 77.99, 68.50, and 60.35 and GOLD docking scores of -9.29, -8.96, and -8.90, respectively. These interactions included the formation of hydrogen bonds (MET A:1, ASP A:880, SER A:108, GLU A:25, GLN A:148, and PRO A:190). These three receptor proteins displayed a very strong association with fisetin and can be regarded as possible receptor molecules that may be useful as indicators of inflammation, antiapoptotic activity, and antioxidant activity. With GOLD fitness values of 52.44, 53.42, 38.15, and 62.30, GOLD docking scores of -8.62, -8.42, -7.78, and -7.80, and hydrogen-bond interactions (SER A:99, ALA A:321, ASP A:98, ASP A:135, ASP A:728, PRO A:134, GLN A:148, PRO A:190, ALA A:192, ARG A:239, PRO A:187, THR A:181, ASN A:351, HIS A:357, THR A:175, and PHE A:179), 1ANJ, 5YOY, IBDO, and 51F9 demonstrated moderate binding affinity. The interactions of 6FSO and 6TJL were the least favorable, with fitness values of 53.04 and 50.93 and docking scores of -7.19 and -7.32. The receptor proteins are ranked with the fisetin compound in the following order: 5YTO > 1ILR > 6WNG > 1ANJ > 5YOY > IBDO > 51F9 > 6FSO > 6TJL. The best poses, generated in Discovery Studio, are depicted in Figure 2 as 2D depictions of the interactions between proteins and ligands.

Figure 2. In silico molecular docking analysis of 2D and 3D interactions between fisetin and the screened receptor proteins: (a) 5YTO, (b) 1ILR, and (c) 6WNG.
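As a small bookkeeping exercise, the snippet below tabulates the GOLD fitness values and docking scores quoted above and orders the complexes by docking score alone (more negative = stronger). This single-criterion ordering is an assumption made here for illustration; the ranking reported in the paper weighs fitness and score together, so the two orders agree closely but not exactly.

```python
# GOLD results for fisetin against each receptor, as reported in Section 5.1:
# {PDB ID: (GOLD fitness, GOLD docking score)}. Sorting by score alone is an
# illustrative choice, not the paper's combined ranking criterion.
gold_results = {
    "5YTO": (77.99, -9.29), "1ILR": (68.50, -8.96), "6WNG": (60.35, -8.90),
    "1ANJ": (52.44, -8.62), "5YOY": (53.42, -8.42), "IBDO": (38.15, -7.78),
    "51F9": (62.30, -7.80), "6FSO": (53.04, -7.19), "6TJL": (50.93, -7.32),
}

by_score = sorted(gold_results.items(), key=lambda kv: kv[1][1])  # most negative first
for rank, (pdb_id, (fitness, score)) in enumerate(by_score, start=1):
    print(f"{rank}. {pdb_id}: fitness = {fitness:.2f}, docking score = {score:.2f}")
```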
## 6. Conclusion

Arsenic administration in adult male albino rats elevated serum liver enzyme levels and inflammatory and apoptotic indicators and worsened the histopathological profile. It also disturbed the activities of the enzymatic antioxidants, TPC, and the levels of ROS and TBARS, an imbalance that resulted in hepatic dysfunction. Nevertheless, owing to its underlying antioxidant, antiapoptotic, and anti-inflammatory potential, fisetin therapy significantly reduced the arsenic-induced deficits in all of the aforementioned measures. [55]

--- *Source: 1005255-2022-10-20.xml*
# Anatomical and Biochemical Traits Related to Blue Leaf Coloration ofSelaginella uncinata **Authors:** Lin Li; Lulu Yang; Aihua Qin; Fangyi Jiang; Limei Chen; Rongyan Deng **Journal:** Journal of Healthcare Engineering (2022) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2022/1005449 --- ## Abstract Selaginella uncinata shows particularly rare blue leaves. Previous research has shown that structural interference by the cell wall of adaxial epidermal cells imparts blue coloration in leaves of S. uncinata; the objective of this study was to see whether anthocyanins might additionally contribute to this color, as changes in pH, and conjugation with metals and other flavonoids is also known to result in blue coloration in plants. We compared anatomical and biochemical traits of shade-grown (blue) S. uncinata leaves to high light (red) leaves of the same species and also to a non-blue (green) leaves of a congeneric S. kraussiana. By examining the anatomical structure, we found that the shape of adaxial epidermis of S. uncinata leaves was convex or lens-shaped on the lateral view and irregular circles with smooth embossment on the top view. These features were different from those of the abaxial and adaxial epidermis of S. kraussiana. We suspect that these structures increase the proportion of incident light entering the cell, deepening the leaf color, and therefore may be related to blue leaf color in S. uncinata. By examining biochemical traits, we found little difference in leaf pH value among the leaf types; all leaves contained several metal ions such as Mg, Fe, Mn, and copigments such as flavones. However, because there was no anthocyanin in blue S. uncinata leaves, we concluded that blue coloration in S. uncinata leaves is not caused by the three hypotheses of blue coloration: alkalization of the vacuole pH, metal chelation, or copigmentation with anthocyanins, but it may be related to the shape of the leaf adaxial epidermis. --- ## Body ## 1. Introduction Colorful leaves are attractive features that characterize ornamental plants. Red, purple, yellow, and variegated leaves are common, whereas blue leaves are particularly rare.Selaginella uncinata, a fern species that is adapted to the shaded conditions, is such a blue leaf plant. It has a blue upper and green lower surface. There have been some reports related to S. uncinata focusing on structural anatomy [1], developmental anatomy [2], cell genetics [3], chloroplast genome [4–6], and chemicals and medicine [7–13]. Our previous research showed that the leaf color of S. uncinata appears normally blue in the shade, while it changes to red under full light exposure [14], and this color change under high light corresponded with a reduction in chlorophyll and anthocyanin and an increase in carotenoids, resulting in a dominant orange color. Based on the transcriptome sequencing and quantitative real-time polymerase chain reaction (qRT-PCR) analysis, we concluded that the primary pathway of pigment metabolism in S. uncinata may be the chlorophyll metabolism pathway rather than the anthocyanin biosynthesis pathway [15]. According to the research of leaf coloration mechanism, leaf coloration in plants can be due to either pigments or structural coloration. 
These two groups differ in their appearance—pigmented colors look the same from all angles, while structural colors appear with different hues when viewed from different angles, a unique attribute of structural color called iridescence [16].There have been very few studies on the production of blue leaves. Hébant and Lee [1] found that the iridescent blue color of S. uncinata and S. willdenowii is caused by thin-film interference (a physical effect). In other blue iridescent plants, the iridescent ultrastructural basis is relevant to their adaxial epidermis, but they are different in detail. In Diplazium tomentosum, Lindsaea lucida, and Danaea nodosa, the iridescent ultrastructure is that in the uppermost cell walls of the adaxial epidermis, the arrangement of multiple layers of cellulose microfibrils is helicoidal [17, 18]. In Begonia pavonina, Phyllagathis rotundifolia, and Elaeocarpus angustifolius, blue coloration is due to the parallel lamellae in specialized plastids (iridoplasts) adjacent to the abaxial wall of the adaxial epidermis [18, 19], while in Trichomanes elegans, it results from the remarkably uniform thickness and arrangement of grana in specialized chloroplasts adjacent to the adaxial wall of the adaxial epidermis [17]. In these studies, it was not possible to extract blue pigment from the study material, such that in all cases, blue iridescence was considered to be a structural color.However, according to studies of blue flowers, pH in the vacuole [20–22], metal chelation [23], and copigmentation [24, 25] may also be related to blue coloration. It seems clear that blue leaves in S. uncinata have a structural mechanism, but whether it is also affected by any or all of these three factors remains unknown. In our previous study, we detected low content of anthocyanins in S. uncinata [14]. The objective of this study was to further investigate the possibility that anthocyanins may contribute to blue coloration in S. uncinata, by examining leaf pH, metal ions, and pigment composition, in addition to anatomical structure. ## 2. Material and Methods ### 2.1. Plant Material We conducted tests on three leaf types: blueS. uncinata leaves grown under a sunshade net (light intensity: 65–105 umol m−2 s−1), green S. kraussiana leaves grown under the same conditions, and red S. uncinata leaves grown in full exposure (light intensity: 500–520 umol m−2 s−1) [14]. There were 6 POTS for each leaf type and 3 replicates for a total of 54 POTS. All plants were 6 months old, given normal water and fertilizer management, and cultivated in the nursery of the Forestry College, Guangxi University, Nanning, China. Mature normal leaves were selected randomly in different directions from various individuals when sampling. ### 2.2. Methods We compared observations within species (blue and redS. uncinata leaves) and between species (blue S. uncinata and green S. kraussiana leaves). The traits examined included morphology, color parameters, leaf paraffin transverse sections, freehand sections, and scanning electron microscopy (SEM) photomicrographs to determine the structural mechanism. We also compared reported leaf pH values, and metal ion, anthocyanin, and flavonoid content to examine the physiological and biochemical mechanisms related to blue coloration. ### 2.3. Anatomical and Morphological Traits #### 2.3.1. 
Morphological Traits and Leaf Color Parameters

Morphological observations and measurements included leaf type, leaf texture, leaf color on the adaxial and abaxial sides, leaf size, and leaf thickness. The procedure was repeated 6 times, and the results were averaged. Fresh leaves were taken, and the leaf color in the middle of the upper epidermis was measured with the Royal Horticultural Society Colour Chart (RHSCC) and a general colorimeter (NR10QC, 3nh, Shenzhen). Under daylight conditions, the lightness (L*) and the two chromatic components a* and b* of the CIE L*a*b* color space were measured. Chroma (C*) and hue angle (h) were then calculated from the equations C* = √(a*² + b*²) and h = arctan(b*/a*). The procedure was repeated 5 times, and the results were averaged.

#### 2.3.2. Microscopic Observation of Leaf Transverse Sections

The transverse sections of leaves were prepared according to Li [26]. Leaves were collected and fixed in a formalin-acetic acid-alcohol (FAA; absolute ethyl alcohol : glacial acetic acid, 3 : 1) solution for 30 min; fixed samples were washed three times in 50% ethanol and dehydrated through a series of ethanol concentrations: 60% (30 min), 70% (30 min), 85% (30 min), 95% (5 min), and 100% (5 min, twice). The ethanol in the dehydrated samples was then replaced with xylol and paraffin, and the samples were embedded and cut into sections of 8–14 μm thickness using a fully motorized rotary microtome (Leica RM2245, Germany). The sections were stained with Safranin Fast Green, washed with 50% ethanol, and then observed with a digital microscope (×10) (Nikon Eclipse E100, Japan).

#### 2.3.3. Microscopic Observation of Leaf Epidermal Cells

Freehand sections were prepared for observations of leaf epidermal cell shape. We rinsed 1.5 × 2 cm leaf samples with distilled water, put them into a 1 : 1 solution of glacial acetic acid and 30% hydrogen peroxide, and then placed them in a 60°C incubator for 2–3 h. The samples were rinsed with distilled water, and peels (at least 2 mm long) of the upper and lower surfaces were made with fine-tipped tweezers from the central area of a single leaf, mounted in water, stained with Safranin for 30–60 s, washed, and observed with a digital microscope (Nikon Eclipse E100) [27].

#### 2.3.4. Scanning Electron Microscopy

The three-dimensional structure of the leaf epidermis was observed by SEM (Hitachi S-3400N, Japan). Samples of 1.5 × 2 cm were cut from the middle of leaves of each of the three leaf types, fixed with 2.5% glutaraldehyde solution for 2 h at room temperature, rinsed with 0.1 mol L−1 phosphate saline buffer, dehydrated through an increasing alcohol series, and then the alcohol was replaced with isoamyl acetate. The samples were dried naturally, cut into appropriate sizes, and coated using a sputter coater. They were subsequently observed and photographed using SEM [28].

### 2.4. Physiological and Biochemical Traits

#### 2.4.1. Measurement of Leaf pH

The three samples were collected and rinsed, and 5 g of leaves was weighed and cut into pieces. We then added 50 ml distilled water, shook the samples for 10 min after soaking for 12 h, and measured the pH of the solution at 30°C using a pH meter (PHS-25, Hongyi, Shanghai) [29]. The procedure was repeated 3 times, and the results were averaged.
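The chroma and hue-angle conversion in Section 2.3.1 is a standard transformation of CIE L*a*b* readings. As a small worked example (the a*/b* inputs below are illustrative placeholders, not values measured in this study), it can be computed as follows; using atan2 keeps the hue angle in the correct quadrant when a* is negative, as it typically is for green to blue-green leaves.

```python
# Chroma (C*) and hue angle (h) from CIE L*a*b* colorimeter readings, as in
# Section 2.3.1. The a*/b* values below are illustrative placeholders only.
import math

def chroma_hue(a_star: float, b_star: float) -> tuple[float, float]:
    """Return (C*, h in degrees) for one CIE L*a*b* measurement."""
    chroma = math.sqrt(a_star ** 2 + b_star ** 2)          # C* = sqrt(a*^2 + b*^2)
    hue = math.degrees(math.atan2(b_star, a_star)) % 360   # h = arctan(b*/a*), quadrant-safe
    return chroma, hue

# Example: a bluish leaf reading (negative a* = greenish, negative b* = bluish).
c_star, h = chroma_hue(-8.0, -12.0)
print(f"C* = {c_star:.1f}, h = {h:.1f} degrees")
```

#### 2.4.2.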
#### 2.4.2. Leaf Metal Ion Measurements

Dried leaves were ground into a fine powder, and a 0.6 g sample of dried material was digested in 5 ml of concentrated HNO3 and 1 ml of H2O2, followed by treatment in a high-performance microwave digestion unit (CEM Mars, Matthews, NC, USA). The settings used were as follows (time (minutes)/power (watts)/temperature (°C)): 5/1,200/120, 10/1,200/160, and 20/1,200/180. After complete digestion and acid removal, the samples were diluted with ultrapure water for measurement. The procedure was repeated 3 times, and the results were averaged. Sample solutions were analyzed for elements by ICP-MS (NexION 350X, PerkinElmer, Waltham, MA, USA). The parameters for analysis were as follows: plasma power, 1,400 W; plasma flow, 18 l/min; auxiliary flow, 1.8 l/min; and sampling depth, 7.5 mm [30].

#### 2.4.3. Anthocyanin Analysis

Only blue leaf samples were analyzed for anthocyanin components using ultra-performance liquid chromatography (UPLC). The extraction methods were as previously described [31], with some modifications. Anthocyanins were extracted from 0.5 g of freeze-dried leaf powder from blue leaves in 25 ml of 2% formic acid/methyl alcohol for 24 h at 4°C. The supernatant was removed and stored under the same conditions. The extraction was repeated once. The two extracts were combined and subjected to rotary evaporation at 30°C until the anthocyanins were dry. We then added a moderate amount of 2% formic acid solution to dissolve the residue and ethyl acetate to extract the anthocyanins in the aqueous phase. A 20 μL sample was quantified by UPLC-triple-time-of-flight/mass spectrometry (TOF/MS) (Acquity Ultra, Waters, Milford, MA, USA) at a flow rate of 0.8 ml/min and a column temperature of 30°C using a 4.6 × 100 mm C18 column and a linear gradient of solvent A (0.1% formic acid/water (v/v)) in solvent B (acetonitrile) for 30 min. Detection was performed by absorption at 520 nm. The gradient settings were as follows: 0 min, 10% B; 5 min, 10% B; 20 min, 40% B; 25 min, 100% B; and 30 min, 10% B.

#### 2.4.4. Flavonoid Analysis

The extraction methods previously described by Zhu et al. [27] were used with some modifications. Flavonoids were extracted from 1.0 g of freeze-dried leaf powder from each of the three samples with 2% formic acid/methyl alcohol, after oscillation in an ultrasonic cleaner for 20 min at 20°C, and clarified by centrifugation at 12,235 × g for 10 min. The supernatant was then collected. The extraction was repeated twice, and the total extraction volume was 25 ml. After the combined extract was filtered with a 0.22 μm nylon microporous filter, the solution was tested. The UPLC conditions were as follows: a 5 μL sample was quantified by UPLC-Triple-TOF/MS (Acquity Ultra, Waters) at a flow rate of 0.8 ml/min and a column temperature of 30°C using a 4.6 × 100 mm C18 column and a linear gradient of solvent A (0.1% formic acid/water (v/v)) in solvent B (0.1% formic acid/acetonitrile (v/v)) for 38 min. Detection was performed by absorption at 280 nm. The gradient settings were as follows: 0 min, 5% B; 2 min, 5% B; 25 min, 50% B; 35 min, 95% B; 37 min, 95% B; and 38 min, 5% B.
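For readers who wish to work with these elution programs numerically, the gradient above can be read as a piecewise-linear profile of mobile-phase B over time. The following is a minimal sketch under that assumption; the breakpoints are the flavonoid gradient listed above, while the function name is illustrative and not part of any instrument software.

```python
# Minimal sketch: the flavonoid UPLC gradient expressed as (time_min, %B) breakpoints,
# with linear interpolation between them. Breakpoints follow Section 2.4.4; the
# helper function is illustrative, not part of any instrument API.
GRADIENT = [(0.0, 5.0), (2.0, 5.0), (25.0, 50.0), (35.0, 95.0), (37.0, 95.0), (38.0, 5.0)]

def percent_b(t_min: float) -> float:
    """Return the interpolated mobile-phase B fraction (%) at time t_min."""
    if t_min <= GRADIENT[0][0]:
        return GRADIENT[0][1]
    for (t0, b0), (t1, b1) in zip(GRADIENT, GRADIENT[1:]):
        if t0 <= t_min <= t1:
            return b0 + (b1 - b0) * (t_min - t0) / (t1 - t0)
    return GRADIENT[-1][1]

print(percent_b(10.0))  # about 20.7% B, partway up the 2-25 min ramp
```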
MS was performed on a UPLC-Triple-TOF 5600 Plus system (AB Sciex, Framingham, MA, USA) equipped with an electrospray ionization (ESI) source. The optimal MS conditions were as follows: scan range, m/z 100–1500. The experiment was conducted in negative ion mode, with a source voltage of −4.5 kV and a source temperature of 550°C. The pressure of both gas 1 (air) and gas 2 (air) was set to 50 psi. The pressure of the curtain gas (N2) was set to 35 psi. The maximum allowed mass error was set to ±2 ppm. The collision energy was 40 V, with a collision energy spread of 20 V. Exact mass calibration was performed automatically before each analysis using the automated calibration delivery system.
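To make the colour-parameter step of Section 2.3.1 concrete, the sketch below computes chroma and hue angle from one pair of a∗/b∗ readings. It is an illustration only: the input values are hypothetical, and atan2 is used so the hue angle falls in the correct quadrant, which the bare arctan(b∗/a∗) form does not guarantee.

```python
import math

def chroma_hue(a_star: float, b_star: float) -> tuple[float, float]:
    """CIELAB chroma C* and hue angle h (degrees) from a* and b*.

    Section 2.3.1 gives C* = (a*^2 + b*^2)^(1/2) and h = arctan(b*/a*);
    atan2 is used here to place h in the correct quadrant (0-360 degrees).
    """
    c_star = math.hypot(a_star, b_star)                      # (a*^2 + b*^2)^(1/2)
    h_deg = math.degrees(math.atan2(b_star, a_star)) % 360.0
    return c_star, h_deg

# Hypothetical a*/b* readings, not values from Table 2:
c, h = chroma_hue(-0.9, -1.3)
print(f"C* = {c:.2f}, h = {h:.1f} degrees")
```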
## 3. Results

### 3.1. Anatomical and Morphological Traits

#### 3.1.1. Morphological Traits and Leaf Color Parameters

Blue S. uncinata leaves were soft and thin; newly formed leaves were grass green. The adaxial side of mature leaves was glaucous, showing iridescence (Figure 1(a)), whereas the abaxial side was green (Figure 1(b)). Red S. uncinata leaves were hard and crisp; newly formed leaves were also grass green, and the adaxial and abaxial sides of the mature leaf were red (Figures 1(c) and 1(d)). S. kraussiana leaves were thick and soft; newly formed leaves and the adaxial and abaxial sides of mature leaves were all green (Figures 1(e) and 1(f)).

Figure 1: Appearance of the three leaf types. (a) Adaxial sides of blue Selaginella uncinata leaves; (b) abaxial sides of blue S. uncinata leaves; (c) adaxial sides of red S. uncinata leaves; (d) abaxial sides of red S. uncinata leaves; (e) adaxial sides of S. kraussiana leaves; and (f) abaxial sides of S. kraussiana leaves.

The measurement data for the three samples are shown in Table 1.
The results indicated that red leaves were significantly thicker than blue leaves, whereas the area of blue leaves was significantly greater than that of red leaves.

Table 1: Morphological comparison of the three leaf types.

| Leaf type | Leaf texture | Leaf thickness (μm) | Leaf length (mm) | Leaf width (mm) | LW−1 |
| --- | --- | --- | --- | --- | --- |
| Blue S. uncinata leaves | Soft and thin | 48.53 ± 3.56B | 3.25 ± 0.61C | 1.73 ± 0.44C | 1.9 |
| Red S. uncinata leaves | Hard and crisp | 84.34 ± 4.62C | 3.09 ± 0.63A | 1.53 ± 0.27B | 2.0 |
| S. kraussiana leaves | Thick and soft | 35.73 ± 3.05A | 3.20 ± 0.32B | 1.37 ± 0.34A | 2.4 |

Note: data analysis used Duncan's method; leaf thickness, length, and width are mean ± standard deviation (n = 6); A, B, and C indicate significant differences at the P = 0.05 level in the SNK test.

The color parameters for the three samples are shown in Table 2. The results indicated that the hue angles (h) of all three are near 0° and fall in the red-purple region. The chroma (C∗) of blue leaves was significantly greater than that of red leaves.

Table 2: Color parameters of the three leaf types.

| Leaf type | RHSCC (A) | L∗ | a∗ | b∗ | C∗ | h/° |
| --- | --- | --- | --- | --- | --- | --- |
| Blue S. uncinata leaves | 120 | 88.87 ± 0.83B | −0.85 ± 0.21A | −1.31 ± 0.10A | 1.57 ± 0.19B | 1.01 ± 0.09A |
| Red S. uncinata leaves | 59 | 81.35 ± 6.34A | 0.00 ± 0.35B | −0.46 ± 0.73B | 0.83 ± 0.26A | 0.69 ± 1.27A |
| S. kraussiana leaves | 141 | 85.61 ± 3.90AB | −0.42 ± 0.48AB | −0.97 ± 0.39AB | 1.09 ± 0.53AB | 0.66 ± 1.23A |

Note: data analysis used Duncan's method; L∗, a∗, and b∗ are mean ± standard deviation (n = 5); A and B indicate significant differences at the P = 0.05 level in the SNK test.

#### 3.1.2. Anatomical Structure of Leaf Transverse Sections

The anatomical structure was observed in paraffin-embedded leaf transverse sections. The three leaf types shared some similar features: the epidermis is covered by a cuticle; between the upper and lower epidermis lie irregularly shaped mesophyll cells; and there is no obvious differentiation of palisade and spongy tissue in the mesophyll. The mesophyll cells contain large chloroplasts that are long and narrow, with a moniliform distribution. Cell arrangement is loose, with large gaps between the cells, forming well-developed aeration tissue in the mesophyll.

The shapes of the adaxial and abaxial epidermal cells of blue and red S. uncinata leaves differed from those of S. kraussiana. In blue leaves, the adaxial and abaxial epidermal cells also differed from each other: in lateral view, the adaxial epidermis consisted of convex or lens-shaped cells, whereas the abaxial epidermis consisted of long cylindrical cells. In blue leaves, chloroplasts were distributed in the upper and lower epidermal cells and in the mesophyll cells, and were located mainly at the bottom of the upper epidermal cells (Figure 2(a)). The shape of the adaxial and abaxial epidermal cells in red leaves was similar to that of blue leaves; however, the cells were much more closely aligned, and chloroplasts were significantly reduced (Figure 2(b)). The adaxial and abaxial epidermal cells in S. kraussiana were long and cylindrical in lateral view. Chloroplasts were distributed in the upper and lower epidermal cells and in the mesophyll cells, but primarily in the mesophyll cells (Figure 2(c)).

Figure 2: Leaf paraffin transverse sections of the three leaf types. (a) Blue S. uncinata leaf; (b) red S. uncinata leaf; and (c) S. kraussiana.

### 3.2. The Shape of Epidermal Cells

We used freehand sections to observe the epidermal cells from above and SEM photomicrographs to observe their three-dimensional shape.
We found that the shapes of the adaxial and abaxial epidermal cells in blue S. uncinata differed: in top view, the adaxial epidermal cells were irregular circles with smooth embossment, whereas the abaxial epidermal cells were long, wavy, irregular strips with elongated embossment (Figure 3(e)). The shape of the adaxial and abaxial epidermal cells in red leaves was similar to that of blue leaves (Figure 3(f)). The adaxial and abaxial epidermal cells in S. kraussiana were both shaped as irregular long strips with elongated embossment in top view (Figure 3(g)).

Figure 3: Leaf epidermal cell shape of the three leaf types. (a) Freehand section photomicrographs of the adaxial epidermis; (b) freehand section photomicrographs of the abaxial epidermis; (c) SEM photomicrographs of the adaxial epidermis; (d) SEM photomicrographs of the abaxial epidermis; (e) blue S. uncinata leaf; (f) red S. uncinata leaf; and (g) S. kraussiana.

### 3.3. Physiological and Biochemical Traits

#### 3.3.1. Leaf pH

Leaf pH values were in the range of 4.5–5.0 (Table 3), in the order blue S. uncinata leaves > red S. uncinata leaves > S. kraussiana leaves. The pH value of blue S. uncinata leaves was significantly greater than that of the other two.

Table 3: Leaf pH values for the three leaf types.

| Leaf type | pH |
| --- | --- |
| Blue S. uncinata leaves | 4.63 ± 0.03B |
| Red S. uncinata leaves | 4.55 ± 0.01A |
| S. kraussiana leaves | 4.52 ± 0.01A |

Note: pH values are means ± standard deviation (n = 3); A and B indicate significant differences at the P = 0.05 level in the SNK test.

#### 3.3.2. Leaf Metal Ion Content

Mg ions were very abundant in all three leaf types (>3000 mg/kg). The Ca, Mn, Fe, Zn, and Al ion contents were also relatively high (55–1200 mg/kg). Cu ion content was low (<9 mg/kg), and Cd ion content was the lowest, ranging from 0.03 to 0.04 mg/kg (Table 4).

Table 4: Metal ion contents (mg/kg) of the three leaf types.

| Sample name | Cd | Mg | Ca | Mn | Fe | Cu | Zn | Al |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Blue S. uncinata leaves | 0.04 ± 0.01B | 4311.71 ± 340.96B | 598.88 ± 41.69A | 115.12 ± 8.63A | 421.11 ± 22.45B | 8.62 ± 0.63B | 55.04 ± 4.30A | 592.45 ± 14.47B |
| Red S. uncinata leaves | 0.03 ± 0.00A | 8134.86 ± 227.00C | 919.65 ± 23.63B | 139.09 ± 3.70B | 308.02 ± 8.97A | 5.61 ± 1.23A | 58.00 ± 3.26A | 316.21 ± 8.44A |
| S. kraussiana leaves | 0.04 ± 0.00AB | 3464.87 ± 77.35A | 1152.27 ± 19.28C | 386.18 ± 10.13C | 583.86 ± 7.83C | 6.20 ± 0.61A | 117.93 ± 1.69B | 806.34 ± 4.39C |

Note: metal ion contents are mean ± standard deviation (n = 3); A, B, and C indicate significant differences at the P = 0.05 level in the SNK test.

#### 3.3.3. Anthocyanin Analysis

The extract solution from blue S. uncinata leaves contained no anthocyanins (Figure 4); no anthocyanin ion peaks were observed by mass spectrometry. This result is consistent with our previously published conclusion that the primary pathway of pigment metabolism in S. uncinata may not be the anthocyanin biosynthesis pathway, but rather the chlorophyll metabolism pathway [15].

Figure 4: Chromatogram (520 nm) and total ion flow diagrams for the anthocyanin extract from blue S. uncinata leaves. Note: the peaks near 28 min are solvent background peaks.

#### 3.3.4. Flavonoid Analysis

There was good flavonoid separation among the three leaf types (Figures 5–7). We compared the peaks on the chromatograms and the results of the mass spectrogram, total ion flow diagram, and fragment ion mass spectrum analysis with the SciFinder and Reaxys databases and finally inferred 15, 20, and 9 types of flavonoids in blue S. uncinata leaves, red S. uncinata leaves, and S.
kraussiana leaves, respectively (Figure 8 and Table 5).Figure 5 Ultraviolet (280 nm) chromatogram and total ion flow diagrams of flavonoid extracting solution for blueS. uncinata leaves.Figure 6 Ultraviolet (280 nm) chromatogram and total ion flow diagrams of flavonoid extracting solution for redS. uncinata leaves.Figure 7 Ultraviolet (280 nm) chromatogram and total ion flow diagrams of flavonoid extracting solution forS. kraussiana leaves.Figure 8 Venn diagram of flavonoids in the three leaf types.Table 5 Flavonoids in the three leaf types. Leaf typesPeaks No.TR (min)ESIMS (m/z)Molecular weightMolecular formulaTentative identificationPeak areaThe relative contentB19.87473, 443, 353563.1369C26H28O146-C-arabinosyl-8-C-68824.440.39R29.89563.13977-glucosylapigenin316976.012.02GB211.03473, 443, 353533.1287C25H26O13Apigenin 6,8-di-C-α-L-arabinopyranoside288726.711.63R511.04533.12921666593.1410.64G111.03533.1288108417.911.26B313.07287431.0959C21H20O10Genistin48655.930.27R613.04431.0971207658.771.33GB418.71117, 151285.0407C15H10O5Genistein105164.600.59RGB521.10331, 375, 417, 443537.0793C30H18O10Amentoflavone439318.232.48R721.07537.08081547602.409.88G221.08537.08171374459.9715.92B621.18307, 375539.0951C30H20O102”,3”-dihydroamentoflavone529271.522.99R821.15539.096947536.700.30GB721.97331, 375, 417, 443537.0805C30H18O10Robustaflavone596373.363.37R1021.94537.0817Tetrahydroamentoflavone702769.454.49G421.95537.0816640841.647.42B822.13311, 455541.1114C30H22O102,3,2”,3”-1160921.656.55R1122.09541.1119455568.282.91GB922.39307, 375539.0960C30H20O102,3-Dihydrorobustaflavone767712.864.33R1222.35539.0970462224.522.95G522.35539.097043695.790.51B1022.61307, 375539.0966C30H20O102,3-Dihydroamentoflavone330804.831.87R1322.58539.0967305323.381.95GB1122.88311, 455541.1117C30H22O102,3,2”,3”-Tetrahydro robustaflavone683116.303.86R1422.83541.1129183554.591.95GB1223.39389, 431, 457551.0960C31H20O10Bilobetin135656.040.77R1523.39551.0977637558.864.07G623.33551.0968892667.0310.34B1324.16511, 435, 403555.1273C31H24O107”-O-methyl-2,3,2”,3”-tetrahydrohinokiflavone604123.983.41R1624.08555.129882994.890.53GB1424.38389, 431551.0971C31H20O10Robustaflavone 4′-methyl ether790017.414.46R1824.36551.0988974541.476.22G824.36551.0971128115.651.48B1527.29533, 519565.1118C32H22O104′,7”-di-O-methylamentoflavone754645.714.26R2027.26565.113856949.140.36G927.29565.113083280.330.96R18.50473, 383, 353593.1502C26H28O146,8-C-diglucosylapigenin52957.810.34R310.15473, 443, 353533.1294C25H26O13Apigenin 6,8-di-C-α-L-arabinopyranoside164986.791.05R410.60473, 443, 353533.1292C25H26O13Apigenin 6,8-di-C-α-L-arabinopyranoside198479.061.27R921.54405, 473567.0924C31H20O113‴-O-methylamentoflavone282241.091.80R1724.16509553.1126C31H22O10Dihydrobilobetin912746.555.83R1924.84401, 433553.1143C31H20O10Dihydrorobustaflavone 4'-Methyl ether691000.504.41G321.56405, 473539.0963C31H20O113‴-O-methylamentoflavone20205.960.23G724.18389, 431551.0962C31H20O10Robustaflavone 4'-methyl ether465924.155.40Note: B: blueSelaginella uncinata leaf; R: red Selaginella uncinata leaf; G: Selaginella kraussiana; the relative content refers to the proportion of the compound that it occupies in all of the peak materials of the sample. #### 3.3.5. 
Comparison of Flavonoid Composition in the Three Leaf Types

Seven compounds were common to the three leaf types: apigenin 6,8-di-C-α-L-arabinopyranoside, amentoflavone, robustaflavone, 2,3-dihydrorobustaflavone, bilobetin, robustaflavone 4′-methyl ether, and 4′,7″-di-O-methylamentoflavone.

We found that the vast majority of the flavonoids (14 compounds) in blue and red leaves are shared; the component specific to blue leaves was genistein, which has an antioxidant effect. Components specific to red leaves were 6,8-C-diglucosylapigenin, apigenin 6,8-di-C-α-L-arabinopyranoside, an apigenin 6,8-di-C-α-L-arabinopyranoside isomer, 3‴-O-methylamentoflavone, dihydrobilobetin, and dihydrorobustaflavone 4′-methyl ether. These components are likely to play an adaptive role in the high-light environments in which red S. uncinata grows.

The results show that blue S. uncinata leaves and S. kraussiana leaves shared 7 flavonoids. The components specific to blue leaves were 6-C-arabinosyl-8-C-glucosylapigenin, genistin, genistein, 2″,3″-dihydroamentoflavone, 2,3,2″,3″-tetrahydroamentoflavone, 2,3-dihydroamentoflavone, 2,3,2″,3″-tetrahydrorobustaflavone, and 7″-O-methyl-2,3,2″,3″-tetrahydrohinokiflavone. Among these, genistin and genistein are isoflavones. The components specific to S. kraussiana were 3‴-O-methylamentoflavone and a robustaflavone 4′-methyl ether isomer.

#### 3.3.6. Comparison of Blue S. uncinata Leaf Flavonoid Composition with Published Values

Zheng et al. [9] identified seven types of flavonoids using high-performance liquid chromatography (HPLC), modern spectroscopy, and nuclear magnetic resonance (NMR). Four of these were consistent with our results: amentoflavone, 2″,3″-dihydroamentoflavone, 2,3,2″,3″-tetrahydroamentoflavone, and 2,3-dihydroamentoflavone. Yi et al. [32] also identified 2,3-dihydroamentoflavone. Of the five flavonoids identified in [33] by Sephadex LH-20 chromatography, UV, and MS, those in common with our findings were amentoflavone, robustaflavone, and 7″-O-methyl-2,3,2″,3″-tetrahydrohinokiflavone. However, there was no constituent in common between the study of Wu et al. [34] and this study. In conclusion, most flavonoids identified in blue S. uncinata leaves were similar to those found in previous studies, while others differed slightly in structure.
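The "relative content" column of Table 5 is described in the table note as the proportion that each compound's peak area contributes to the summed peak area of the sample. Under that assumption, the normalization can be sketched as below; the peak areas used here are hypothetical placeholders, not values from Table 5.

```python
# Minimal sketch of the Table 5 "relative content" normalization, assuming it is
# each compound's peak area divided by the total peak area of the sample.
# Peak areas here are hypothetical placeholders, not data from this study.
peak_areas = {
    "amentoflavone": 4.4e5,
    "robustaflavone": 6.0e5,
    "bilobetin": 1.4e5,
}

total_area = sum(peak_areas.values())
relative_content = {name: 100.0 * area / total_area for name, area in peak_areas.items()}

for name, pct in sorted(relative_content.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {pct:.2f}% of total peak area")
```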
## 3.3.5. Comparison of Flavonoid Composition in the Three Leaf Types

There are 7 compounds common to the three leaf types: apigenin 6,8-di-C-α-L-arabinopyranoside, amentoflavone, robustaflavone, 2,3-dihydrorobustaflavone, bilobetin, robustaflavone 4′-methyl ether, and 4′,7″-di-O-methylamentoflavone. We find that the vast majority of the flavonoids present in blue and red leaves (14 compounds) are shared; the only component specific to blue leaves was genistein, which has an antioxidant effect. Components specific to red leaves were 6,8-C-diglucosylapigenin, apigenin 6,8-di-C-α-L-arabinopyranoside, an apigenin 6,8-di-C-α-L-arabinopyranoside isomeride, 3‴-O-methylamentoflavone, dihydrobilobetin, and dihydrorobustaflavone 4′-methyl ether. These components are likely to play an adaptive role for S. uncinata in high-light environments.

The results show that most of the flavonoids (7 compounds) in blue S. uncinata leaves and S. kraussiana leaves are shared. The components specific to blue leaves were 6-C-arabinosyl-8-C-glucosylapigenin, genistin, genistein, 2″,3″-dihydroamentoflavone, 2,3,2″,3″-tetrahydroamentoflavone, 2,3-dihydroamentoflavone, 2,3,2″,3″-tetrahydrorobustaflavone, and 7″-O-methyl-2,3,2″,3″-tetrahydrohinokiflavone. Among these, genistin and genistein are isoflavones. The components specific to S. kraussiana were 3‴-O-methylamentoflavone and a robustaflavone 4′-methyl ether isomeride.

## 3.3.6. Comparison of Blue S. uncinata Leaf Flavonoid Composition with Published Values

Zheng et al. [9] identified seven flavonoids using high-performance liquid chromatography (HPLC), modern spectroscopy, and nuclear magnetic resonance (NMR). Four of these were consistent with our results: amentoflavone, 2″,3″-dihydroamentoflavone, 2,3,2″,3″-tetrahydroamentoflavone, and 2,3-dihydroamentoflavone. Yi et al. [32] also identified 2,3-dihydroamentoflavone. Of the five flavonoids detected in [33] by Sephadex LH-20 chromatography, UV, and MS, those in common with our findings were amentoflavone, robustaflavone, and 7″-O-methyl-2,3,2″,3″-tetrahydrohinokiflavone. However, there was no constituent common to the study of Wu et al. [34] and this study. In conclusion, the flavonoids identified in blue S. uncinata leaves were similar to those found in previous studies; others differed slightly in structure.
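The shared/specific counts discussed in Section 3.3.5 follow directly from set operations on the per-leaf compound lists in Table 5. The Python sketch below reproduces the blue-versus-green comparison as an illustration only; the compound names are transcribed from Table 5 in abbreviated ASCII form and are not part of the original analysis.

```python
# Sketch: derive shared and leaf-specific flavonoids by set operations
# (compound names abbreviated from Table 5).

blue = {
    "6-C-arabinosyl-8-C-glucosylapigenin", "apigenin 6,8-di-C-arabinopyranoside",
    "genistin", "genistein", "amentoflavone", "2'',3''-dihydroamentoflavone",
    "robustaflavone", "2,3,2'',3''-tetrahydroamentoflavone", "2,3-dihydrorobustaflavone",
    "2,3-dihydroamentoflavone", "2,3,2'',3''-tetrahydrorobustaflavone", "bilobetin",
    "7''-O-methyl-2,3,2'',3''-tetrahydrohinokiflavone", "robustaflavone 4'-methyl ether",
    "4',7''-di-O-methylamentoflavone",
}
green = {
    "apigenin 6,8-di-C-arabinopyranoside", "amentoflavone", "3'''-O-methylamentoflavone",
    "robustaflavone", "2,3-dihydrorobustaflavone", "bilobetin",
    "robustaflavone 4'-methyl ether isomeride", "robustaflavone 4'-methyl ether",
    "4',7''-di-O-methylamentoflavone",
}

shared = blue & green        # compounds found in both leaf types
blue_only = blue - green     # blue-specific compounds
green_only = green - blue    # S. kraussiana-specific compounds
print(len(shared), len(blue_only), len(green_only))   # expected: 7 8 2
```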
## 4. Discussion

### 4.1. The Adaptability of Leaf Morphology and Anatomical Structure to Environmental Conditions

Leaf thickness is an important indicator of plant shade tolerance. Thinner leaves allow fern chloroplasts to absorb light energy more fully, improving the photosynthetic efficiency of ferns and making them better adapted to shaded environments [35]. The morphological traits and measurement data show that the features of blue S. uncinata leaves, including increased leaf area and decreased leaf thickness, are adaptations to low-light, weak-intensity environments. Many researchers consider that shade-tolerant trees have a greater ability to change their leaf anatomical structure [36]. Leaf anatomical traits such as the wax coating, the shape of epidermal cells, epidermal thickness, and epidermal hairs play an important role in light absorption, even determining the light use efficiency of the plant [37]. Deep-shade plants change the morphological and physiological traits of their cells and chloroplasts to fit low-light conditions [38]. In the paraffin sections of the three leaf types, we observed no clear differentiation of the palisade and spongy tissues in the mesophyll, and the mesophyll cells were irregularly distributed. This trait helps reduce the projection loss of light quanta, allowing the plant to fully utilize limited light for photosynthesis and to accumulate organic matter, thus adapting to the shaded environment [35]. Previous research has shown that fern mesophyll cells possess more intercellular space, forming well-developed aeration tissue that can store gases for photosynthesis and respiration and make up for deficiencies in gas absorption, which is also an adaptation to a shaded environment [35]. The chloroplast is the main site of photosynthesis; in ferns, this organelle appears long and narrow and is distributed in a moniliform pattern, reducing the number of photons penetrating the leaf and raising the utilization rate of light quanta under weak-light conditions, thereby improving the efficiency of leaf photosynthesis [35].

Plant chloroplasts typically occur in mesophyll cells, but through paraffin section observation, we found that chloroplasts in blue S. uncinata leaves occur mainly in epidermal cells and are larger. This result is consistent with that of Hébant and Lee [1], who examined transverse leaf sections of S. uncinata by light microscopy, but differs from that of Sheue et al. [39], who found several chloroplasts in the mesophyll cells and the ventral epidermal cells but only one single giant chloroplast (bizonoplast, BP) per dorsal epidermal cell. The preferential localization of chloroplasts in the lower part of the epidermal cells in S. uncinata would allow more light to penetrate and reach the mesophyll cells [39], which is an adaptation of plants to weak-light environments.

### 4.2. Effects of the Shape of Leaf Epidermal Cells on Blue Coloration in S. uncinata

The shape of petal epidermal cells has a great influence on the formation of flower color. Noda et al. [40] found that a conical shape of the petal epidermal cells enhances light absorption and thus intensifies color, while a flat shape reflects more incident light and thus lightens color. Quintana et al. [41] found that in Anagallis, the epidermis contains anthocyanins, and most epidermal cells are flat, with dome-shaped and conical cells in the outer layer. Mudalige et al. [42] inspected the perianths of 34 Dendrobium Sw. species and hybrids to clarify the relationship among pigment distribution, the shape of the upper epidermal cells, color intensity, perception, and visual texture. Four types of epidermal cell shapes were identified in these Dendrobium flowers: flat, dome-shaped, elongated dome-shaped, and papillate [42]. Using SEM, Yue [43] observed that the epidermal cell shapes of 17 monocotyledon flowers could be grouped into five classes: conical, flat, oval, strip-shaped, and irregular mosaic. That study suggested that convex epidermal cells increase the refraction of light, making petal color appear deeper, and that bulging cells appear more conducive to pigment accumulation, whereas flat epidermal cells decrease the effect, making the color appear lighter.

By comparing leaf paraffin transverse sections, freehand sections, and SEM photomicrographs, we found that the shape of the adaxial epidermis of S. uncinata leaves was not only different from that of the abaxial epidermis but also different from that of the adaxial epidermis of S. kraussiana. The cells appeared convex or lens-shaped in lateral view and as irregular circles with smooth embossment in top view. This result corresponds with that of Hébant and Lee [1], who examined the convexly curved upper epidermal cells of S. willdenowii by SEM. This structure increases the proportion of incident light entering the cell and deepens the leaf color [40, 43], and it may therefore be related to blue leaf coloration. According to Hébant and Lee [1], the blue color of S. uncinata results from thin-film interference. Which contributes more to blue coloration, thin-film interference or the convex or lens-shaped epidermal cells, and do they act in a complementary way? This requires more intensive research.
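The thin-film interference explanation of Hébant and Lee [1] can be made concrete with a back-of-the-envelope calculation: a thin layer whose refractive index is higher than that of its surroundings reflects most strongly at wavelengths satisfying 2nd = (m − 1/2)λ at normal incidence. The Python sketch below uses hypothetical values (n ≈ 1.45 for a cellulose-rich wall lamella, d ≈ 80 nm) purely to illustrate how such a layer can selectively reinforce blue wavelengths; these numbers are not measurements from this study.

```python
# Illustrative only: reflection maxima for a single thin film whose refractive
# index exceeds that of the media on both sides, at normal incidence
# (constructive reflection when 2*n*d = (m - 1/2) * wavelength).
# The n and d values are hypothetical, not measurements from this study.

def reflection_maxima(n: float, d_nm: float, orders=(1, 2)):
    """Return the reflected wavelengths (nm) reinforced by a film of index n and thickness d."""
    return [2.0 * n * d_nm / (m - 0.5) for m in orders]

print(reflection_maxima(n=1.45, d_nm=80.0))  # first order ~464 nm, i.e., blue light
```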
### 4.3. Effects of the pH on Blue Coloration in S. uncinata

The color of plant leaves is affected to some extent by the pH within vacuoles, which has a great influence on the coloration of anthocyanins, with varying effects among plant species. Tang et al. [44] found that pH affected anthocyanin synthesis and stability: the degradation of anthocyanins was accelerated by increasing pH during the process of red leaves turning green in Loropetalum chinense var. rubrum. Studies on the relationship between leaf pigment content and leaf color change in Liquidambar formosana have shown that a reduction in pH was one reason their leaves turned red [45]. Research by Shi [46] indicated that Prunus cerasifera leaf pigment appeared red in a medium with pH < 5, and the stronger the acidity, the redder the pigment; the red color was stable when the pH ranged from 4 to 5, the solution turned green at pH > 5, and the stronger the alkalinity, the greener the pigment. Most studies have indicated that anthocyanins present a stable red color when the pH of the vacuole is low, and an unstable blue appears as the pH increases.

The vacuole is the largest organelle in the mature leaf cell, and the pH of leaf juice is often used to approximate the pH of the vacuole [47]. We used this method in our experiment, and the results indicated that the pH of blue leaves was greater than that of red leaves. This result is consistent with previous findings that the pH of blue flowers was greater than that of red flowers in Hydrangea macrophylla [48] and Pharbitis nil (Linn.) Choisy [22]. However, the difference between the two pH values was only 0.08, and the difference between blue S. uncinata leaves and S. kraussiana leaves was only 0.11, whereas that between blue and red cultivars of H. macrophylla was 0.8 [48] and that between the blue full-bloom stage and the red burgeoning stage of P. nil (Linn.) Choisy was 1.1 [22]. Therefore, we conclude that blue leaf coloration in S. uncinata is not related to alkalization of the vacuolar pH.

### 4.4. Effects of Metal Ion Content on Blue Coloration in S. uncinata

#### 4.4.1. Effects of Metal Ion Content on Anthocyanin Coloration

Anthocyanins can combine with metal ions and flavonoids, in a stoichiometric ratio or not, to assemble into metal pigment complexes [49], and these complexes can affect the coloration of plant leaves. Studies of metal–anthocyanin complexes shifting color toward blue concentrate primarily on Mg, Al, Fe [23–25, 48, 50], Ca [30], and Mn [30]. Metal ions have a stabilizing and protective effect on anthocyanins, and the pigments are often chelated if the cell sap contains metal ions such as Al, Fe, Mg, or Mo. In particular, chelated anthocyanins change their color to some degree and often tend toward purple [51]. In our experiment, the Mg, Ca, Mn, Fe, Zn, and Al ion contents were all relatively high in the three leaf types; however, there was no anthocyanin in blue S. uncinata leaves, so we concluded that blue leaf coloration in S. uncinata is unrelated to metal chelation with anthocyanins.

#### 4.4.2. Effects of Metal Ion Content on Chlorophyll

Metal ions are inserted into protoporphyrin IX, which is the branching point between chlorophyll synthesis and heme and plant pigment synthesis: Mg ions, under the catalysis of the Mg ion chelating enzyme (CHLH), are inserted into protoporphyrin IX, forming the chlorophyll branch; Fe ions, under the catalysis of the Fe ion chelating enzyme (FECH), are inserted into protoporphyrin IX, forming the heme and plant pigment branch. At the branch point, CHLH and FECH compete for protoporphyrin IX [52]. Mg is part of the molecular composition of chlorophyll, and chlorophyll formation is affected if it is lacking; the concentration of Mg2+ also influences the activity of CHLH [53].
Fe is necessary for protochlorophyllide formation; when Fe is scarce, Mg-protoporphyrin IX and Mg-protoporphyrin IX methyl ester accumulate, and protochlorophyllide cannot form chlorophyll [53, 54]. Chlorophyll synthesis is also affected by the contents of Cu, Mn, and other ions. In our experiment, the Cu content was not high, but those of Mg, Fe, and Mn were very high, particularly that of Mg, which reached 4311.7 mg/kg in blue leaves, 1.24 times that in S. kraussiana leaves. We speculate that such high Mg levels may be associated with chlorophyll synthesis in S. uncinata.

### 4.5. Effects of Anthocyanin and Copigment on Blue Coloration in S. uncinata

Copigments, often flavones and flavonols, belong to the two branches of the flavonoid metabolic pathway [55]. Combined with anthocyanins, they can stabilize the pigments, and the complexes they form influence the coloration of anthocyanins to some degree [56]. Research by Li [57] found that the copigmentation effect turned purple or pink delphinidin-based flowers blue. Malvidin-3-glucoside is the basic anthocyanin of Primula sinensis; the flower appears purple when it is combined with flavonol but garnet when it is not [58]. Under certain conditions, the larger the molar ratio of flavonols to anthocyanins, the more significant the copigmentation effect.

In our research, we detected anthocyanins in a preliminary experiment by measuring absorbance values using an enzyme-linked immunosorbent assay (ELISA). Although the overall content was low, with the highest content being only 1.2 pigment units [14], we initially speculated that blue leaf coloration in S. uncinata might be related to delphinidin among the anthocyanins, or to copigmentation with anthocyanins. However, we did not detect anthocyanins in blue leaves using liquid chromatography-MS (this accords with the conclusion that the chlorophyll metabolism pathway, rather than the anthocyanin biosynthesis pathway, may be the primary pigment metabolism pathway of S. uncinata [15]), although copigments such as flavones were present. If anthocyanins are not present, copigmentation with flavones cannot occur. Therefore, we infer that blue leaf coloration in S. uncinata was not caused by copigmentation of anthocyanins.
## 5. Conclusion

Through comparison of leaf paraffin transverse sections, freehand sections, and SEM photomicrographs, we found that the adaxial epidermal cells of S. uncinata leaves were convex or lens-shaped in lateral view and appeared as irregular circles with smooth embossment in top view. These shapes were different from those of the abaxial epidermis and of the adaxial epidermis of S. kraussiana leaves. We speculate that these structures increase the proportion of incident light entering the cell, deepening the leaf color, and may therefore be related to blue leaf coloration.

Through comparison of the leaf pH, metal ion contents, anthocyanins, and flavonoids of the three leaf types with previously published values, we found that leaf pH was similar among the leaf types and that all the leaves contained high levels of metal ions such as Mg, Fe, and Mn, as well as copigments such as flavones. However, because no anthocyanin was present in blue S. uncinata leaves, we conclude that blue leaf coloration in S. uncinata is not related to the three hypotheses of blue coloration: alkalization of vacuole pH, metal chelation, and copigmentation with anthocyanins.

--- *Source: 1005449-2022-02-24.xml*
# Anatomical and Biochemical Traits Related to Blue Leaf Coloration of Selaginella uncinata

**Authors:** Lin Li; Lulu Yang; Aihua Qin; Fangyi Jiang; Limei Chen; Rongyan Deng

**Journal:** Journal of Healthcare Engineering (2022)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2022/1005449
---

## Abstract

Selaginella uncinata shows particularly rare blue leaves. Previous research has shown that structural interference by the cell wall of adaxial epidermal cells imparts blue coloration in leaves of S. uncinata; the objective of this study was to see whether anthocyanins might additionally contribute to this color, as changes in pH and conjugation with metals and other flavonoids are also known to result in blue coloration in plants. We compared the anatomical and biochemical traits of shade-grown (blue) S. uncinata leaves with high-light (red) leaves of the same species and with non-blue (green) leaves of the congeneric S. kraussiana. By examining the anatomical structure, we found that the adaxial epidermal cells of S. uncinata leaves were convex or lens-shaped in lateral view and appeared as irregular circles with smooth embossment in top view. These features were different from those of the abaxial epidermis and of the adaxial epidermis of S. kraussiana. We suspect that these structures increase the proportion of incident light entering the cell, deepening the leaf color, and therefore may be related to blue leaf color in S. uncinata. By examining biochemical traits, we found little difference in leaf pH among the leaf types; all leaves contained several metal ions, such as Mg, Fe, and Mn, and copigments such as flavones. However, because there was no anthocyanin in blue S. uncinata leaves, we conclude that blue coloration in S. uncinata leaves is not caused by the three hypotheses of blue coloration (alkalization of the vacuole pH, metal chelation, or copigmentation with anthocyanins) but may be related to the shape of the leaf adaxial epidermis.

---

## Body

## 1. Introduction

Colorful leaves are attractive features that characterize ornamental plants. Red, purple, yellow, and variegated leaves are common, whereas blue leaves are particularly rare. Selaginella uncinata, a fern species adapted to shaded conditions, is such a blue-leaved plant: it has a blue upper and a green lower leaf surface. There have been some reports on S. uncinata focusing on structural anatomy [1], developmental anatomy [2], cell genetics [3], the chloroplast genome [4–6], and chemistry and medicine [7–13]. Our previous research showed that the leaf color of S. uncinata is normally blue in the shade, while it changes to red under full light exposure [14], and this color change under high light corresponded with a reduction in chlorophyll and anthocyanin and an increase in carotenoids, resulting in a dominant orange color. Based on transcriptome sequencing and quantitative real-time polymerase chain reaction (qRT-PCR) analysis, we concluded that the primary pathway of pigment metabolism in S. uncinata may be the chlorophyll metabolism pathway rather than the anthocyanin biosynthesis pathway [15]. According to research on leaf coloration mechanisms, leaf color in plants can be due to either pigments or structural coloration. These two groups differ in their appearance: pigmented colors look the same from all angles, while structural colors appear with different hues when viewed from different angles, a unique attribute of structural color called iridescence [16].

There have been very few studies on the production of blue leaves. Hébant and Lee [1] found that the iridescent blue color of S. uncinata and S. willdenowii is caused by thin-film interference (a physical effect).
In other blue iridescent plants, the ultrastructural basis of iridescence is also related to the adaxial epidermis, but the details differ. In Diplazium tomentosum, Lindsaea lucida, and Danaea nodosa, the iridescent ultrastructure lies in the uppermost cell walls of the adaxial epidermis, where multiple layers of cellulose microfibrils are arranged helicoidally [17, 18]. In Begonia pavonina, Phyllagathis rotundifolia, and Elaeocarpus angustifolius, blue coloration is due to parallel lamellae in specialized plastids (iridoplasts) adjacent to the abaxial wall of the adaxial epidermis [18, 19], while in Trichomanes elegans, it results from the remarkably uniform thickness and arrangement of grana in specialized chloroplasts adjacent to the adaxial wall of the adaxial epidermis [17]. In these studies, it was not possible to extract blue pigment from the study material, so in all cases blue iridescence was considered a structural color.

However, according to studies of blue flowers, the pH in the vacuole [20–22], metal chelation [23], and copigmentation [24, 25] may also be related to blue coloration. It seems clear that blue leaves in S. uncinata have a structural mechanism, but whether the color is also affected by any or all of these three factors remains unknown. In our previous study, we detected a low content of anthocyanins in S. uncinata [14]. The objective of this study was to further investigate the possibility that anthocyanins contribute to blue coloration in S. uncinata by examining leaf pH, metal ions, and pigment composition, in addition to anatomical structure.

## 2. Material and Methods

### 2.1. Plant Material

We conducted tests on three leaf types: blue S. uncinata leaves grown under a sunshade net (light intensity: 65–105 μmol m−2 s−1), green S. kraussiana leaves grown under the same conditions, and red S. uncinata leaves grown in full exposure (light intensity: 500–520 μmol m−2 s−1) [14]. There were 6 pots for each leaf type and 3 replicates, for a total of 54 pots. All plants were 6 months old, given normal water and fertilizer management, and cultivated in the nursery of the Forestry College, Guangxi University, Nanning, China. Mature, normal leaves were selected randomly in different directions from various individuals when sampling.

### 2.2. Methods

We compared observations within species (blue and red S. uncinata leaves) and between species (blue S. uncinata and green S. kraussiana leaves). The traits examined included morphology, color parameters, leaf paraffin transverse sections, freehand sections, and scanning electron microscopy (SEM) photomicrographs, to determine the structural mechanism. We also compared leaf pH values and metal ion, anthocyanin, and flavonoid contents to examine the physiological and biochemical mechanisms related to blue coloration.

### 2.3. Anatomical and Morphological Traits

#### 2.3.1. Morphological Traits and Leaf Color Parameters

Morphological observations and measurements included leaf type, leaf texture, leaf color on the adaxial and abaxial sides, leaf size, and leaf thickness. The procedure was repeated 6 times, and the results were averaged. Fresh leaves were taken, and the leaf color in the middle of the upper epidermis was measured with the Royal Horticultural Society Colour Chart (RHSCC) and a general colorimeter (NR10QC, 3nh, Shenzhen). Under daylight conditions, the lightness (L∗) and the two chromatic components a∗ and b∗ of the CIE L∗a∗b∗ color coordinates were measured. Based on the equations C∗ = (a∗² + b∗²)^(1/2) and h = arctan(b∗/a∗), the chroma (C∗) and hue angle (h) were calculated. The procedure was repeated 5 times, and the results were averaged.
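For illustration, the chroma and hue-angle calculation described above can be written as a short Python sketch. The a∗ and b∗ inputs are the mean values for blue S. uncinata leaves reported later in Table 2; the use of atan2 (which resolves the quadrant) is our choice for the sketch and not necessarily what the colorimeter software does.

```python
import math

def chroma_hue(a_star: float, b_star: float):
    """Compute CIELAB chroma C* and hue angle h (degrees) from a* and b*."""
    chroma = math.sqrt(a_star ** 2 + b_star ** 2)           # C* = (a*^2 + b*^2)^(1/2)
    hue = math.degrees(math.atan2(b_star, a_star)) % 360.0   # quadrant-aware hue angle
    return chroma, hue

# Mean a* and b* for blue S. uncinata leaves (Table 2)
c_star, h_deg = chroma_hue(-0.85, -1.31)
print(f"C* = {c_star:.2f}, h = {h_deg:.1f} deg")
```

Note that the h values printed in Table 2 (around 1.0 for blue leaves) appear consistent with a plain arctan(b∗/a∗) expressed in radians rather than with a quadrant-resolved hue angle in degrees.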
#### 2.3.2. Microscopic Observation of Leaf Transverse Sections

The transverse sections of leaves were prepared according to Li [26]. Leaves were collected and fixed in a formalin–acetic acid–alcohol (FAA; absolute ethyl alcohol : glacial acetic acid, 3 : 1) solution for 30 min; the fixed samples were washed three times in 50% ethanol and dehydrated through a series of ethanol concentrations: 60% (30 min), 70% (30 min), 85% (30 min), 95% (5 min), and 100% (5 min, twice). The ethanol in the dehydrated samples was then replaced with xylol and paraffin, and the samples were embedded and cut into sections of 8–14 μm thickness using a fully motorized rotary microtome (Leica RM2245, Germany). The sections were stained with safranin–fast green, washed with 50% ethanol, and then observed with a digital microscope (×10) (Nikon Eclipse E100, Japan).

#### 2.3.3. Microscopic Observation of Leaf Epidermal Cells

Freehand sections were prepared for observation of epidermal cell shape. We rinsed 1.5 × 2 cm leaf samples with distilled water, put them into a 1 : 1 solution of glacial acetic acid and 30% hydrogen peroxide, and then placed them in a 60°C incubator for 2–3 h. The samples were rinsed with distilled water, and peels (at least 2 mm long) of the upper and lower surfaces were made with fine-tipped tweezers from the central area of a single leaf, mounted in water, stained with safranin for 30–60 s, washed, and observed with a digital microscope (Nikon Eclipse E100) [27].

#### 2.3.4. Scanning Electron Microscopy

The three-dimensional structure of the leaf epidermis was observed by SEM (Hitachi S-3400N, Japan). Samples of 1.5 × 2 cm were cut from the middle of leaves of each of the three leaf types, fixed with 2.5% glutaraldehyde solution for 2 h at room temperature, rinsed with 0.1 mol L−1 phosphate buffered saline, and dehydrated through an increasing alcohol series, after which the alcohol was replaced with isoamyl acetate. The samples were dried naturally, cut into appropriate sizes, and coated using a sputter coater. They were subsequently observed and photographed using SEM [28].

### 2.4. Physiological and Biochemical Traits

#### 2.4.1. Measurement of Leaf pH

The three samples were collected and rinsed, and 5 g of leaves was weighed and cut into pieces. We then added 50 ml of distilled water, shook the samples for 10 min after soaking for 12 h, and measured the pH of the solution at 30°C using a pH meter (PHS-25, Hongyi, Shanghai) [29]. The procedure was repeated 3 times, and the results were averaged.

#### 2.4.2. Leaf Metal Ion Measurements

Dried leaves were ground into a fine powder, and a 0.6 g sample of dried material was digested in 5 ml of concentrated HNO3 and 1 ml of H2O2, followed by treatment in a high-performance microwave digestion unit (CEM Mars, Matthews, NC, USA). The settings were as follows (time in minutes/power in watts/temperature in °C): 5/1,200/120, 10/1,200/160, and 20/1,200/180. After complete digestion and acid removal, the samples were diluted with ultrapure water for measurement. The procedure was repeated 3 times, and the results were averaged. Sample solutions were analyzed for elements by ICP-MS (NexION 350X, PerkinElmer, Waltham, MA, USA). The parameters for analysis were as follows: plasma power, 1,400 W; plasma flow, 18 l/min; auxiliary flow, 1.8 l/min; and sampling depth, 7.5 mm [30].
#### 2.4.3. Anthocyanin Analysis

Only blue leaf samples were analyzed for anthocyanin components, using ultra-performance liquid chromatography (UPLC). The extraction methods were as previously described [31], with some modifications. Anthocyanins were extracted from 0.5 g of freeze-dried leaf powder from blue leaves in 25 ml of 2% formic acid/methyl alcohol for 24 h at 4°C. The supernatant was removed and stored under the same conditions, and the extraction was repeated once. The two extracts were merged and subjected to rotary evaporation at 30°C until the anthocyanins were dry. We then added a moderate amount of 2% formic acid solution to dissolve the residue, and ethyl acetate was used to extract the anthocyanins from the aqueous phase.

A 20 μL sample was quantified by UPLC–triple-time-of-flight mass spectrometry (TOF/MS) (Acquity Ultra, Waters, Milford, MA, USA) at a flow rate of 0.8 ml/min and a column temperature of 30°C, using a 4.6 × 100 mm C18 column and a linear gradient of solvent A (0.1% formic acid/water (v/v)) in solvent B (acetonitrile) for 30 min. Detection was performed by absorption at 520 nm. The gradient settings were as follows: 0 min, 10% B; 5 min, 10% B; 20 min, 40% B; 25 min, 100% B; and 30 min, 10% B.

#### 2.4.4. Flavonoid Analysis

The extraction methods previously described by Zhu et al. [27] were followed with some modifications. Flavonoids were extracted from 1.0 g of freeze-dried leaf powder from each of the three leaf types in 2% formic acid/methyl alcohol, after oscillation in an ultrasonic cleaner for 20 min at 20°C, and the extracts were clarified by centrifugation at 12,235 × g for 10 min. The supernatant was then collected. The extraction was repeated twice, and the total extraction volume was 25 ml. After the combined extract was filtered through a 0.22 μm nylon microporous filter, the solution was tested.

The UPLC conditions were as follows: a 5 μL sample was quantified by UPLC-Triple-TOF/MS (Acquity Ultra, Waters) at a flow rate of 0.8 ml/min and a column temperature of 30°C, using a 4.6 × 100 mm C18 column and a linear gradient of solvent A (0.1% formic acid/water (v/v)) in solvent B (0.1% formic acid/acetonitrile (v/v)) for 38 min. Detection was performed by absorption at 280 nm. The gradient settings were as follows: 0 min, 5% B; 2 min, 5% B; 25 min, 50% B; 35 min, 95% B; 37 min, 95% B; and 38 min, 5% B.

MS was performed on a UPLC-Triple-TOF 5600 Plus system (AB Sciex, Framingham, MA, USA) equipped with an electrospray ionization (ESI) source. The optimal MS conditions were as follows: scan range, m/z 100–1500. The experiment was conducted in negative ion mode, with a source voltage of −4.5 kV and a source temperature of 550°C. The pressure of both gas 1 (air) and gas 2 (air) was set to 50 psi, and the pressure of the curtain gas (N2) was set to 35 psi. The maximum allowed error was set to ±2 ppm. The collision energy was 40 V, with a collision energy spread of 20 V. Exact mass calibration was performed automatically before each analysis using the automated calibration delivery system.
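The binary gradient programs in Sections 2.4.3 and 2.4.4 are defined by a few (time, %B) breakpoints with linear ramps between them. The short Python sketch below simply interpolates the proportion of solvent B at an arbitrary time for the flavonoid gradient; it is an illustration of how such a program is read, not part of the instrument method.

```python
# Flavonoid UPLC gradient from Section 2.4.4 as (time in min, % solvent B) breakpoints.
GRADIENT = [(0, 5), (2, 5), (25, 50), (35, 95), (37, 95), (38, 5)]

def percent_b(t: float, program=GRADIENT) -> float:
    """Linearly interpolate % solvent B at time t (min) between gradient breakpoints."""
    if t <= program[0][0]:
        return float(program[0][1])
    for (t0, b0), (t1, b1) in zip(program, program[1:]):
        if t0 <= t <= t1:
            return b0 + (b1 - b0) * (t - t0) / (t1 - t0)
    return float(program[-1][1])

print(percent_b(10.0))   # ~20.7% B, partway through the 2-25 min ramp
```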
## 3. Results

### 3.1. Anatomical and Morphological Traits

#### 3.1.1. Morphological Traits and Leaf Color Parameters

Blue S. uncinata leaves were soft and thin; newly formed leaves were grass green. The adaxial side of mature leaves was glaucous, showing iridescence (Figure 1(a)), whereas the abaxial side was green (Figure 1(b)). Red S. uncinata leaves were hard and crisp; newly formed leaves were also grass green, and the adaxial and abaxial sides of the mature leaf were red (Figures 1(c) and 1(d)). S. kraussiana leaves were thick and soft; newly formed leaves and the adaxial and abaxial sides of mature leaves were all green (Figures 1(e) and 1(f)).

Figure 1 Appearance of the three leaf types. (a) Adaxial sides of blue Selaginella uncinata leaves; (b) abaxial sides of blue S. uncinata leaves; (c) adaxial sides of red S. uncinata leaves; (d) abaxial sides of red S. uncinata leaves; (e) adaxial sides of S. kraussiana leaves; and (f) abaxial sides of S. kraussiana leaves.

The measurement data for the three samples are shown in Table 1. The results indicated that the thickness of red leaves was significantly greater than that of blue leaves, whereas the area of blue leaves was significantly greater than that of red leaves.

Table 1 Morphological comparison of the three leaf types.

| Leaf type | Leaf texture | Leaf thickness (μm) | Length (mm) | Width (mm) | L/W |
| --- | --- | --- | --- | --- | --- |
| Blue S. uncinata leaves | Soft and thin | 48.53 ± 3.56B | 3.25 ± 0.61C | 1.73 ± 0.44C | 1.9 |
| Red S. uncinata leaves | Hard and crisp | 84.34 ± 4.62C | 3.09 ± 0.63A | 1.53 ± 0.27B | 2.0 |
| S. kraussiana leaves | Thick and soft | 35.73 ± 3.05A | 3.20 ± 0.32B | 1.37 ± 0.34A | 2.4 |

Note: data were analyzed by Duncan's method; leaf thickness, length, and width are mean value ± standard deviation (n = 6); A, B, and C indicate significant differences at the P = 0.05 level in the SNK test.

The color parameters for the three samples are shown in Table 2. The results indicated that the hue angles (h) are all near 0° and belong to the red-purple area. The chroma (C∗) of blue leaves was significantly greater than that of red leaves.

Table 2 Color parameters of the three leaf types.

| Leaf type | RHSCC (A) | L∗ | a∗ | b∗ | C∗ | h/° |
| --- | --- | --- | --- | --- | --- | --- |
| Blue S. uncinata leaves | 120 | 88.87 ± 0.83B | −0.85 ± 0.21A | −1.31 ± 0.10A | 1.57 ± 0.19B | 1.01 ± 0.09A |
| Red S. uncinata leaves | 59 | 81.35 ± 6.34A | 0.00 ± 0.35B | −0.46 ± 0.73B | 0.83 ± 0.26A | 0.69 ± 1.27A |
| S. kraussiana leaves | 141 | 85.61 ± 3.90AB | −0.42 ± 0.48AB | −0.97 ± 0.39AB | 1.09 ± 0.53AB | 0.66 ± 1.23A |

Note: data were analyzed by Duncan's method; L∗, a∗, and b∗ are mean value ± standard deviation (n = 5); A and B indicate significant differences at the P = 0.05 level in the SNK test.
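The letter groupings in Tables 1 and 2 come from a one-way analysis of variance followed by SNK/Duncan multiple comparisons at P = 0.05. The Python sketch below shows only the ANOVA step with invented replicate values; the post hoc letter assignment used in the paper requires a dedicated multiple-comparison routine and is not reproduced here.

```python
# Minimal sketch of the one-way ANOVA step behind the significance letters in
# Tables 1 and 2. The replicate thickness values below are hypothetical; the
# SNK/Duncan post hoc grouping used in the paper is not implemented here.
from scipy import stats

blue_thickness  = [48.1, 52.0, 45.5, 47.9, 50.2, 47.5]   # hypothetical n = 6 replicates
red_thickness   = [84.0, 88.9, 79.8, 85.1, 86.3, 81.9]
green_thickness = [35.0, 38.7, 32.6, 36.1, 37.4, 34.6]

f_stat, p_value = stats.f_oneway(blue_thickness, red_thickness, green_thickness)
print(f"F = {f_stat:.1f}, p = {p_value:.2e}")  # p < 0.05 -> the group means differ
```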
## 3. Results

### 3.1. Anatomical and Morphological Traits

#### 3.1.1. Morphological Traits and Leaf Color Parameters

Blue S. uncinata leaves were soft and thin; newly formed leaves were grass green. The adaxial side of mature leaves was glaucous, showing iridescence (Figure 1(a)), whereas the abaxial side was green (Figure 1(b)). Red S. uncinata leaves were hard and crisp; newly formed leaves were also grass green, and the adaxial and abaxial sides of the mature leaf were red (Figures 1(c) and 1(d)). S. kraussiana leaves were thick and soft; newly formed leaves and the adaxial and abaxial sides of mature leaves were all green (Figures 1(e) and 1(f)).

Figure 1 Appearance of the three leaf types. (a) Adaxial sides of blue Selaginella uncinata leaves; (b) abaxial sides of blue S. uncinata leaves; (c) adaxial sides of red S. uncinata leaves; (d) abaxial sides of red S. uncinata leaves; (e) adaxial sides of S. kraussiana leaves; and (f) abaxial sides of S. kraussiana leaves.

The measurement data for the three samples are shown in Table 1. The results indicated that the thickness of red leaves was significantly greater than that of blue leaves, whereas the area of blue leaves was significantly greater than that of red leaves.

Table 1 Morphological comparison of the three leaf types.

| Leaf type | Leaf texture | Leaf thickness (μm) | Leaf length (mm) | Leaf width (mm) | L/W |
|---|---|---|---|---|---|
| Blue S. uncinata leaves | Soft and thin | 48.53 ± 3.56B | 3.25 ± 0.61C | 1.73 ± 0.44C | 1.9 |
| Red S. uncinata leaves | Hard and crisp | 84.34 ± 4.62C | 3.09 ± 0.63A | 1.53 ± 0.27B | 2.0 |
| S. kraussiana leaves | Thick and soft | 35.73 ± 3.05A | 3.20 ± 0.32B | 1.37 ± 0.34A | 2.4 |

Note: data analysis used Duncan's method; leaf thickness, length, and width are given as mean ± standard deviation (n = 6); A, B, and C indicate significant differences at the P = 0.05 level in the SNK test.

The color parameters for the three samples are shown in Table 2. The results indicated that their hue angles (h) are all near 0° and belong to the red-purple area. The chroma (C*) of blue leaves was significantly greater than that of red leaves.

Table 2 Color parameters of the three leaf types.

| Leaf type | RHSCC (A) | L* | a* | b* | C* | h/° |
|---|---|---|---|---|---|---|
| Blue S. uncinata leaves | 120 | 88.87 ± 0.83B | −0.85 ± 0.21A | −1.31 ± 0.10A | 1.57 ± 0.19B | 1.01 ± 0.09A |
| Red S. uncinata leaves | 59 | 81.35 ± 6.34A | 0.00 ± 0.35B | −0.46 ± 0.73B | 0.83 ± 0.26A | 0.69 ± 1.27A |
| S. kraussiana leaves | 141 | 85.61 ± 3.90AB | −0.42 ± 0.48AB | −0.97 ± 0.39AB | 1.09 ± 0.53AB | 0.66 ± 1.23A |

Note: data analysis used Duncan's method; L*, a*, and b* are given as mean ± standard deviation (n = 5); A and B indicate significant differences at the P = 0.05 level in the SNK test.

#### 3.1.2. Anatomical Structure of Leaf Transverse Section

The anatomical structure was observed in paraffin-embedded leaf transverse sections. The three leaf types shared some similar features: the epidermis is covered by the cuticle; between the upper and lower epidermis are irregularly shaped mesophyll cells; and there is no obvious differentiation of palisade and spongy tissue in the mesophyll. The mesophyll cells contain large chloroplasts, which are long and narrow and show a moniliform distribution. Cell arrangement is loose, with large gaps between the cells, forming well-developed aeration tissue among the mesophyll cells.

The shapes of the leaf adaxial and abaxial epidermal cells in blue and red S. uncinata were different from those in S. kraussiana. In blue leaves, the adaxial and abaxial epidermal cells also differed from each other: the adaxial epidermis consisted of convex or lens-shaped cells, whereas the abaxial epidermis consisted of long cylindrical cells in lateral view. In blue leaves, chloroplasts were distributed in the upper and lower epidermal cells and in the mesophyll cells, mainly at the bottom of the upper epidermal cells (Figure 2(a)). The shape of the adaxial and abaxial epidermal cells in red leaves was similar to that of blue leaves; however, the cells were much more closely aligned, and chloroplasts were significantly reduced (Figure 2(b)). The adaxial and abaxial epidermal cells in S. kraussiana were long and cylindrical in lateral view. Chloroplasts were distributed in the upper and lower epidermal cells and in the mesophyll cells, but primarily in the mesophyll cells (Figure 2(c)).

Figure 2 Leaf paraffin transverse sections of the three leaf types. (a) Blue S. uncinata leaf; (b) red S. uncinata leaf; and (c) S. kraussiana leaf.

### 3.2. The Shape of Epidermal Cells

We used freehand sections to observe the epidermal cells from the upper face and SEM photomicrographs to observe the three-dimensional shape of the epidermal cells. We found that the shapes of the leaf adaxial and abaxial epidermal cells in blue S. uncinata were different: in top view, the adaxial epidermal cells were irregular circles with smooth embossment, whereas the abaxial epidermal cells were long, wavy, irregular strips with elongated embossment (Figure 3(e)). The shape of the adaxial and abaxial epidermal cells in red leaves was similar to that of blue leaves (Figure 3(f)). The adaxial and abaxial epidermal cells in S. kraussiana were both shaped as irregular long strips with elongated embossment in top view (Figure 3(g)).

Figure 3 Leaf epidermal cell shape of the three leaf types. (a) Freehand section photomicrographs of the adaxial epidermis; (b) freehand section photomicrographs of the abaxial epidermis; (c) SEM photomicrographs of the adaxial epidermis; (d) SEM photomicrographs of the abaxial epidermis; (e) blue S. uncinata leaf; (f) red S. uncinata leaf; and (g) S. kraussiana leaf.
### 3.3. Physiological and Biochemical Traits

#### 3.3.1. Leaf pH

Leaf pH values were in the range of 4.5–5.0 (Table 3), in the order blue S. uncinata leaves > red S. uncinata leaves > S. kraussiana leaves. The pH value of blue S. uncinata leaves was significantly greater than that of the other two.

Table 3 Leaf pH values for the three leaf types.

| Leaf type | pH |
|---|---|
| Blue S. uncinata leaves | 4.63 ± 0.03B |
| Red S. uncinata leaves | 4.55 ± 0.01A |
| S. kraussiana leaves | 4.52 ± 0.01A |

Note: pH values are means ± standard deviation (n = 3); A and B indicate significant differences at the P = 0.05 level in the SNK test.

#### 3.3.2. Leaf Metal Ion Content

Mg ions were very abundant in all three leaf types (>3000 mg/kg). The Ca, Mn, Fe, Zn, and Al ion contents were also relatively high (55–1200 mg/kg). There was little Cu (<9 mg/kg). Cd content was the lowest, ranging from 0.03 to 0.04 mg/kg (Table 4).

Table 4 Metal ion contents (mg/kg) of the three leaf types.

| Sample name | Cd | Mg | Ca | Mn | Fe | Cu | Zn | Al |
|---|---|---|---|---|---|---|---|---|
| Blue S. uncinata leaves | 0.04 ± 0.01B | 4311.71 ± 340.96B | 598.88 ± 41.69A | 115.12 ± 8.63A | 421.11 ± 22.45B | 8.62 ± 0.63B | 55.04 ± 4.30A | 592.45 ± 14.47B |
| Red S. uncinata leaves | 0.03 ± 0.00A | 8134.86 ± 227.00C | 919.65 ± 23.63B | 139.09 ± 3.70B | 308.02 ± 8.97A | 5.61 ± 1.23A | 58.00 ± 3.26A | 316.21 ± 8.44A |
| S. kraussiana leaves | 0.04 ± 0.00AB | 3464.87 ± 77.35A | 1152.27 ± 19.28C | 386.18 ± 10.13C | 583.86 ± 7.83C | 6.20 ± 0.61A | 117.93 ± 1.69B | 806.34 ± 4.39C |

Note: metal ion content = mean ± standard deviation (n = 3); A, B, and C indicate significant differences at the P = 0.05 level in the SNK test.

#### 3.3.3. Anthocyanin Analysis

The extract solution from blue S. uncinata leaves contained no anthocyanins (Figure 4): no anthocyanin ion peaks were observed by mass spectrometry. This result is in line with our previously published conclusion that the primary pathway of pigment metabolism in S. uncinata might not be the anthocyanin biosynthesis pathway, but rather the chlorophyll metabolism pathway [15].

Figure 4 Chromatogram (520 nm) and total ion flow diagrams for the anthocyanin extract solution from blue S. uncinata leaves. Note: the peaks near 28 min are solvent background peaks.

#### 3.3.4. Flavonoid Analysis

There was good flavonoid separation among the three leaf types (Figures 5–7). We compared the peaks on the chromatograms and the results of the mass spectrogram, total ion flow diagram, and fragment ion mass spectrum analysis against the SciFinder and Reaxys databases and finally inferred 15, 20, and 9 types of flavonoids in blue S. uncinata leaves, red S. uncinata leaves, and S. kraussiana leaves, respectively (Figure 8 and Table 5).

Figure 5 Ultraviolet (280 nm) chromatogram and total ion flow diagrams of the flavonoid extract solution for blue S. uncinata leaves.

Figure 6 Ultraviolet (280 nm) chromatogram and total ion flow diagrams of the flavonoid extract solution for red S. uncinata leaves.

Figure 7 Ultraviolet (280 nm) chromatogram and total ion flow diagrams of the flavonoid extract solution for S. kraussiana leaves.

Figure 8 Venn diagram of flavonoids in the three leaf types.

Table 5 Flavonoids in the three leaf types.
Leaf typesPeaks No.TR (min)ESIMS (m/z)Molecular weightMolecular formulaTentative identificationPeak areaThe relative contentB19.87473, 443, 353563.1369C26H28O146-C-arabinosyl-8-C-68824.440.39R29.89563.13977-glucosylapigenin316976.012.02GB211.03473, 443, 353533.1287C25H26O13Apigenin 6,8-di-C-α-L-arabinopyranoside288726.711.63R511.04533.12921666593.1410.64G111.03533.1288108417.911.26B313.07287431.0959C21H20O10Genistin48655.930.27R613.04431.0971207658.771.33GB418.71117, 151285.0407C15H10O5Genistein105164.600.59RGB521.10331, 375, 417, 443537.0793C30H18O10Amentoflavone439318.232.48R721.07537.08081547602.409.88G221.08537.08171374459.9715.92B621.18307, 375539.0951C30H20O102”,3”-dihydroamentoflavone529271.522.99R821.15539.096947536.700.30GB721.97331, 375, 417, 443537.0805C30H18O10Robustaflavone596373.363.37R1021.94537.0817Tetrahydroamentoflavone702769.454.49G421.95537.0816640841.647.42B822.13311, 455541.1114C30H22O102,3,2”,3”-1160921.656.55R1122.09541.1119455568.282.91GB922.39307, 375539.0960C30H20O102,3-Dihydrorobustaflavone767712.864.33R1222.35539.0970462224.522.95G522.35539.097043695.790.51B1022.61307, 375539.0966C30H20O102,3-Dihydroamentoflavone330804.831.87R1322.58539.0967305323.381.95GB1122.88311, 455541.1117C30H22O102,3,2”,3”-Tetrahydro robustaflavone683116.303.86R1422.83541.1129183554.591.95GB1223.39389, 431, 457551.0960C31H20O10Bilobetin135656.040.77R1523.39551.0977637558.864.07G623.33551.0968892667.0310.34B1324.16511, 435, 403555.1273C31H24O107”-O-methyl-2,3,2”,3”-tetrahydrohinokiflavone604123.983.41R1624.08555.129882994.890.53GB1424.38389, 431551.0971C31H20O10Robustaflavone 4′-methyl ether790017.414.46R1824.36551.0988974541.476.22G824.36551.0971128115.651.48B1527.29533, 519565.1118C32H22O104′,7”-di-O-methylamentoflavone754645.714.26R2027.26565.113856949.140.36G927.29565.113083280.330.96R18.50473, 383, 353593.1502C26H28O146,8-C-diglucosylapigenin52957.810.34R310.15473, 443, 353533.1294C25H26O13Apigenin 6,8-di-C-α-L-arabinopyranoside164986.791.05R410.60473, 443, 353533.1292C25H26O13Apigenin 6,8-di-C-α-L-arabinopyranoside198479.061.27R921.54405, 473567.0924C31H20O113‴-O-methylamentoflavone282241.091.80R1724.16509553.1126C31H22O10Dihydrobilobetin912746.555.83R1924.84401, 433553.1143C31H20O10Dihydrorobustaflavone 4'-Methyl ether691000.504.41G321.56405, 473539.0963C31H20O113‴-O-methylamentoflavone20205.960.23G724.18389, 431551.0962C31H20O10Robustaflavone 4'-methyl ether465924.155.40Note: B: blueSelaginella uncinata leaf; R: red Selaginella uncinata leaf; G: Selaginella kraussiana; the relative content refers to the proportion of the compound that it occupies in all of the peak materials of the sample. #### 3.3.5. Comparison of Flavonoid Composition in the Three Leaf Types There are 7 common compounds in the three leaf types: apigenin 6,8-di-C-α-L-arabinopyranoside, amentoflavone, robustaflavone, 2,3-dihydrorobustaflavone, bilobetin, robustaflavone 4′-methyl ether, and 4′,7“-di-O-methylamentoflavone.We find that the vast majority of the flavonoids present (14 compounds) in blue and red leaves are similar; the component specific to blue leaves was genistein, which has an antioxidant effect. Components specific to red leaves were 6,8-C-diglucosylapigenin, apigenin 6,8-di-C-α-L-arabinopyranoside, apigenin 6,8-di-C-α-L-arabinopyranoside isomeride, 3-O-methylamentoflavone, dihydrobilobetin, and dihydrorobustaflavone 4'-methyl ether. These components are likely to play an adaptive role in high light intensity S. 
uncinata environments.

The results also show that blue S. uncinata leaves and S. kraussiana leaves share most of their flavonoids (7 compounds). The components specific to blue leaves were 6-C-arabinosyl-8-C-glucosylapigenin, genistin, genistein, 2″,3″-dihydroamentoflavone, 2,3,2″,3″-tetrahydroamentoflavone, 2,3-dihydroamentoflavone, 2,3,2″,3″-tetrahydrorobustaflavone, and 7″-O-methyl-2,3,2″,3″-tetrahydrohinokiflavone. Among these, genistin and genistein are isoflavones. The components specific to S. kraussiana were 3-O-methylamentoflavone and robustaflavone 4′-methyl ether isomeride.

#### 3.3.6. Comparison of Blue S. uncinata Leaf Flavonoid Composition with Published Values

Zheng et al. [9] identified seven types of flavonoids using high-performance liquid chromatography (HPLC), modern spectroscopy, and nuclear magnetic resonance (NMR). Four of these were consistent with our results: amentoflavone, 2″,3″-dihydroamentoflavone, 2,3,2″,3″-tetrahydroamentoflavone, and 2,3-dihydroamentoflavone. Yi et al. [32] also identified 2,3-dihydroamentoflavone. Of the five flavonoids identified in [33] through Sephadex LH-20 chromatography, UV, and MS, those in common with our findings were amentoflavone, robustaflavone, and 7″-O-methyl-2,3,2″,3″-tetrahydrohinokiflavone. However, there was no constituent common to the study of Wu et al. [34] and this study. In conclusion, some flavonoids identified in blue S. uncinata leaves were similar to those found in previous studies; others were slightly different in structure.
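The relative content column in Table 5 is defined in the table note as the proportion of a compound's peak area in the summed peak area of all peaks for that sample. A minimal sketch of that normalization is given below; it is illustrative only (the function name and example peak areas are ours), so the printed percentages do not reproduce Table 5, whose denominators include every detected peak in each sample.

```python
# Illustrative sketch, not from the original study: relative content of each
# peak = its area / total peak area of the sample, expressed in percent.
def relative_contents(peak_areas):
    total = sum(peak_areas.values())
    return {name: 100.0 * area / total for name, area in peak_areas.items()}

# Hypothetical peak areas for three compounds in one sample:
areas = {
    "amentoflavone": 439318.23,
    "robustaflavone": 596373.36,
    "bilobetin": 135656.04,
}
for name, pct in relative_contents(areas).items():
    print(f"{name}: {pct:.2f}%")
```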
## 4. Discussion

### 4.1. The Adaptability of Leaf Morphology and Anatomical Structure to Environmental Conditions

Leaf thickness is an important indicator of plant shade tolerance. Thinner leaves allow fern chloroplasts to absorb light energy more fully, improving the photosynthetic efficiency of ferns and making them better adapted to shaded environments [35]. The morphological traits and measurement data show that features of blue S. uncinata leaves, including increased leaf area and decreased leaf thickness, are adaptations to low-light environments and weak light intensity.

Many researchers consider that shade-tolerant trees have a greater ability to change their leaf anatomical structure [36]. Leaf anatomical traits such as the wax coating, the shape of epidermal cells, epidermal thickness, and epidermal hairs play an important role in light absorption and can even determine the light use efficiency of the plant [37]. Deep-shade plants change the morphological and physiological traits of their cells and chloroplasts to fit low-light conditions [38]. In the paraffin sections of the three leaf types, we observed no clear differentiation of the palisade and spongy tissues in the mesophyll, and the mesophyll cells were irregularly distributed. This trait helps reduce the loss of light quanta, allowing the plant to fully utilize limited light for photosynthesis and the accumulation of organic matter, thus adapting to the shaded environment [35].

Previous research has shown that fern mesophyll cells possess more intercellular space, forming well-developed aeration tissue that can store gases for photosynthesis and respiration and make up for deficiencies in gas absorption, which is also an adaptation to a shaded environment [35]. The chloroplast is the main site of photosynthesis; in ferns, this structure appears long and narrow and is arranged in a moniliform pattern, reducing the number of photons that pass through the leaf unused and raising the utilization rate of light quanta in weak-light conditions, thereby improving the efficiency of leaf photosynthesis [35].
Plant chloroplasts typically occur in mesophyll cells, but through paraffin section observation, we found that chloroplasts in blue S. uncinata leaves exist mainly in the epidermal cells and are larger. This result is consistent with that of Hébant and Lee [1], who examined transverse leaf sections of S. uncinata by light microscopy, but differs from that of Sheue et al., who found several chloroplasts in the mesophyll cells and the ventral epidermal cells but only a single giant chloroplast (bizonoplast, BP) per dorsal epidermal cell [39]. The preferential localization of chloroplasts in the lower part of the epidermal cells in S. uncinata would allow more light to penetrate and reach the mesophyll cells [39], which is an adaptation of plants to weak-light environments.

### 4.2. Effects of the Shape of Leaf Epidermal Cells on Blue Coloration in S. uncinata

The shape of petal epidermal cells has a great influence on the formation of flower color. Noda et al. [40] found that a conical shape of petal epidermal cells enhances light absorption and thus intensifies color, while a flat shape reflects more incident light and thus lightens color. Quintana et al. [41] found that in Anagallis, the epidermis contains anthocyanins and most epidermal cells are flat, with dome-shaped and conical cells in the outer layer. Mudalige et al. inspected the perianths of 34 Dendrobium Sw. species and hybrids to clarify the relationship among pigment distribution, the shape of the upper epidermal cells, color intensity, perception, and visual texture [42]. Four types of epidermal cell shapes were identified in these Dendrobium flowers: flat, dome-shaped, elongated dome-shaped, and papillate [42]. Yue [43], using SEM measurements, observed that the epidermal cell shapes of 17 monocotyledon flowers could be grouped into five classes: conical, flat, oval, strip-shaped, and irregular mosaic. That study suggested that convex epidermal cells increase the refraction of light, making petal color appear deeper, and that bulging cells appear to be more conducive to pigmentation, whereas flat epidermal cells decrease this effect, making the color appear lighter.

By comparing leaf paraffin transverse sections, freehand sections, and SEM photomicrographs, we found that the shape of the adaxial epidermis of S. uncinata leaves was not only different from that of the abaxial epidermis but also different from that of the adaxial epidermis of S. kraussiana. The cells appeared convex or lens-shaped in lateral view and as irregular circles with smooth embossment in top view.
The degradation rate of anthocyanins was accelerated by increasing pH during the red-to-green color change in Loropetalum chinense var. rubrum. Studies on the relationship between leaf pigment content and leaf color change in Liquidambar formosana have shown that a reduction in pH was one reason their leaves turned red [45]. Research by Shi [46] indicated that Prunus cerasifera leaf pigment appeared red in a medium with pH < 5, and the stronger the acidity, the redder the pigment; the red color was stable when the pH ranged from 4 to 5, the solution turned green at pH > 5, and the stronger the alkalinity, the greener the pigment. Most research results have indicated that anthocyanins present a stable red color when the vacuolar pH is lower, and an unstable blue color occurs as the pH increases.

The vacuole is the largest organelle in the mature leaf cell. The pH of leaf juice is often used to approximate the pH of the vacuole [47]. We used this method in our experiment, with results indicating that the pH of blue leaves was greater than that of red leaves. This result is consistent with previous reports that the pH of blue flowers was greater than that of red flowers in Hydrangea macrophylla [48] and Pharbitis nil (Linn.) Choisy [22]. However, in terms of the specific values, the difference between the two pH values was only 0.08, and the difference between blue S. uncinata leaves and S. kraussiana leaves was only 0.11, whereas that between blue and red cultivars of H. macrophylla was 0.8 [48] and that between blue flowers in the full-bloom stage and red flowers in the burgeoning stage of P. nil (Linn.) Choisy was 1.1 [22]. Therefore, we conclude that blue leaf coloration in S. uncinata is not related to alkalization of the vacuolar pH.

### 4.4. Effects of Metal Ion Content on Blue Coloration in S. uncinata

#### 4.4.1. Effects of Metal Ion Content on Anthocyanin Coloration

Anthocyanins can combine with metal ions and flavonoids, in a stoichiometric ratio or not, to be assembled into metal-pigment complexes [49], and these complexes can affect the coloration of plant leaves. Several studies of metal-anthocyanin complexes shifting color toward blue have concentrated primarily on Mg, Al, Fe [23–25, 48, 50], Ca [30], and Mn [30]. Metal ions have a stabilizing and protective effect on anthocyanins, and the pigments are often chelated if the cell sap contains metal ions such as Al, Fe, Mg, or Mo. In particular, anthocyanins, whose color changes to some degree, often tend toward purple after chelation [51].

In our experiment, the Mg, Ca, Mn, Fe, Zn, and Al ion contents were all relatively high in the three leaf types; however, there was no anthocyanin in blue S. uncinata leaves, so we concluded that blue leaf coloration in S. uncinata was unrelated to metal chelation with anthocyanins.

#### 4.4.2. Effects of Metal Ion Content on Chlorophyll

Protoporphyrin IX is the branching point between chlorophyll synthesis and the synthesis of heme and other plant pigments. Mg ions, under the catalysis of the Mg-chelating enzyme (CHLH), are inserted into protoporphyrin IX to form the chlorophyll branch; Fe ions, under the catalysis of the Fe-chelating enzyme (FECH), are inserted into protoporphyrin IX to form the heme and plant pigment branch. At the branch point, CHLH and FECH compete for protoporphyrin IX [52].

Mg is part of the molecular composition of chlorophyll; chlorophyll formation will be affected if it is lacking. The concentration of Mg2+ influences the activity of CHLH [53].
Fe is necessary for protochlorophyllide formation; when Fe is deficient, Mg-protoporphyrin IX and Mg-protoporphyrin IX methyl ester accumulate, and protochlorophyllide cannot form chlorophyll [53, 54]. Chlorophyll synthesis is also affected by the content of Cu, Mn, and other ions.

In our experiment, the Cu content was not high, but those of Mg, Fe, and Mn were very high, particularly that of Mg, whose content in blue leaves reached 4311.7 mg/kg, 1.24-fold that in S. kraussiana leaves (Table 4). We speculate that such high Mg levels may be associated with chlorophyll synthesis in S. uncinata.

### 4.5. Effects of Anthocyanin and Copigment on Blue Coloration in S. uncinata

Copigments, often flavones and flavonols, derive from two branches of the flavonoid metabolic pathway [55]. Combined with anthocyanins, they can stabilize the pigments, and the compounds they form will influence the coloration of anthocyanins to some degree [56]. Li [57] found that the copigmentation effect can turn purple or pink delphinidin-pigmented flowers blue. Malvidin-3-glucoside is the basic anthocyanin of Primula sinensis; the flower appears purple when it is combined with flavonol but appears garnet when it is not [58]. Under certain conditions, the larger the molar ratio of flavonols to anthocyanins, the more significant the copigmentation effect.

In our research, we detected anthocyanins in a preliminary experiment by measuring absorbance values using an enzyme-linked immunosorbent assay (ELISA). Although the overall content was low, with the highest content being only 1.2 pigment units [14], we initially speculated that blue leaf coloration in S. uncinata might be related to delphinidin among the anthocyanins, or to a copigment acting with anthocyanins. However, we did not detect anthocyanins in blue leaves using liquid chromatography-MS, which is consistent with the conclusion that, compared with the anthocyanin biosynthesis pathway, the chlorophyll metabolism pathway may be the primary pigment metabolism pathway of S. uncinata [15], even though copigments such as flavones were present. If anthocyanins are not present, copigmentation by flavones cannot occur. Therefore, we infer that blue leaf coloration in S. uncinata was not caused by copigmentation of anthocyanins.
This trait contributes to reducing projection loss of quantum light, allowing the plant to fully utilize limited light to carry out photosynthesis and accumulate organic matter, thus adapting to the shaded environment [35].Previous research has shown that fern mesophyll cells possess more intercellular space, forming well-developed aeration tissue that can be used to store gases for photosynthesis and respiration, to make up for deficiencies in gas absorption, which is also an adaptation to a shaded environment [35]. The chloroplast is the main site of photosynthesis; in ferns, this structure appears long and narrow and is distributed as a moniliform, reducing the amount of photons penetrating the leaf and raising the utilization rate of quantum light in weak-light conditions, to improve the efficiency of leaf photosynthesis [35]. Plant chloroplasts typically exist in mesophyll cells, but through paraffin section observation, we found that chloroplasts in blue S. uncinata leaves exist mainly in epidermal cells and are larger. This result is consistent with that of Hébant and Lee [1], who examined transverse leaf sections of S. uncinata by light microscopy, but is different from those of Sheue et al., and they found several chloroplasts in the mesophyll cells and the ventral epidermal cells, while only one single giant chloroplast (bizonoplast, BP) per dorsal epidermal cell [39]. The preferential localization of chloroplasts in the lower part of the epidermal cells in S. uncinata would allow more light to penetrate and reach mesophyll cells [39], which is an adaptation of plants to weak-light environments. ## 4.2. Effects of the Shape of Leaf Epidermal Cells on Blue Coloration inS. uncinata The shape of petal epidermal cells has a great influence on the formation of flower color. Noda et al. [40] found that a conical shape in the epidermal cells of petal was believed to enhance light absorption and thus intensified its color, while the flat shape could reflect more incident light and thus lightened its color. Quintana et al. [41] found that in Anagallis, the epidermis contains anthocyanins, and most epidermal cells are flat, with dome-shaped and conical cells in the outer layer. Mudalige et al. inspected the perianths of 34 Dendrobium Sw. species and hybrids to clarify the relevance of pigment distribution, the shape of upper epidermal cells, color intensity, perception, and visual texture [42]. Four types of epidermal cell shapes were identified in these Dendrobium flowers: flat, dome-shaped, elongated dome-shaped, and papillate [42]. Yue [43] observed using SEM measurements that the epidermal cell shapes of 17 monocotyledon flowers could be grouped into five classes: conical, flat, oval, strip-shaped, and irregular mosaic. That study suggested that convex epidermal cells increased the refraction of light, making petal color appear deeper, and that bulging cells appeared to be more conducive to pigment, whereas flat epidermal cells decreased the effect, making their color appear lighter.By comparing leaf paraffin transverse sections, freehand sections, and SEM photomicrographs, we found that the shape of the adaxial epidermis ofS. uncinata leaves was not only different from the abaxial epidermis, but also different from the adaxial epidermis of S. kraussiana. The shape appeared convex or lens-shaped on the lateral view and irregular circles with smooth embossment on the top view. 
This result corresponds with those of Hébant and Lee [1], who examined the convexly curved upper epidermal cells of S. willdenowii by SEM. The structure increases the proportion of incident light entering the cell, deepens the leaf color [40, 43], and therefore may be related to blue leaf coloration.According to Hébant and Lee [1], the blue color of S. uncinata results from thin-film interference. Which contributes more to blue coloration? Is it thin-film interference or convex or lens-shaped epidermal cells? Do they work complementary? It needs more intensive research. ## 4.3. Effects of the pH on Blue Coloration inS. uncinata The color of plant leaves is affected to some extent by the pH within vacuoles, which has a great influence on the coloration of anthocyanins, with varying performance among different plant species. Tang et al. [44] found that the pH affected anthocyanin synthesis and stability. The degradation rate of anthocyanins has been accelerated by increasing the pH during the process of red turning green in Loropetalum chinense var. rubrum. Studies on the relationship between leaf pigment content and leaf color change in Liquidambar formosana have shown that a reduction in pH was one reason their leaves turned red [45]. Research by Shi [46] indicated that Prunus cerasifera leaf color appeared red in a medium with pH <5, and the stronger the acidity, the more red the pigment. Red color was stable when the pH ranged from 4 to 5, and the solution turned green at pH >5. The stronger the alkalinity, the more green the pigment. Most research results have indicated that anthocyanins present stable red when the pH of the vacuole is lower, and unstable blue occurs as the pH increases.The vacuole is the largest organelle in the mature leaf cell. The pH of leaf juice is often used to approximate the pH of the vacuole [47]. We used this method in our experiment, with results indicating that the pH of blue leaves was greater than that of red leaves. This result is consistent with previous results that the pH of blue flowers was greater than that of red flowers, in Hydrangea macrophylla [48] and Pharbitis nil (Linn.) Choisy [22]. However, for the specific value, the difference between the two pH values was only 0.08, and the difference between those of blue S. uncinata leaves and S. kraussiana leaves is only 0.11, whereas that between blue and red cultivars of H. macrophylla was 0.8 [48] and that between blue in the full-bloom stage and red in the burgeoning stage of P. nil (Linn.) Choisy was 1.1 [22]. Therefore, we conclude that blue leaf coloration in S. uncinata has no concern with alkalization of the pH in the vacuole. ## 4.4. Effects of Metal Ion Content on Blue Coloration inS. uncinata ### 4.4.1. Effects of Metal Ion Content on Anthocyanin Coloration Anthocyanins can be combined with metal ions and flavonoids in a stoichiometric ratio or not, to be assembled into metal pigment complexes [49], and these complexes can affect the coloration of plant leaves. There have been several studies of metal anthocyanins making color tend toward bluish, and these concentrate primarily on Mg, Al, Fe [23–25, 48, 50], Ca [30], and Mn [30]. Metal ions have a stable and protective effect on anthocyanins, and the pigments are often chelated if the cell sap contains metal ions such as Al, Fe, Mg, or Mo. 
In particular, anthocyanins, which change their color to some degree, often tend toward purple after chelation [51]. In our experiment, Mg, Ca, Mn, Fe, Zn, and Al ion contents were all relatively high in the three leaf types; however, there was no anthocyanin in blue S. uncinata leaves, so we concluded that blue leaf coloration in S. uncinata was unrelated to metal chelation with anthocyanin. ### 4.4.2. Effects of Metal Ion Content on Chlorophyll Protoporphyrin IX is the branching point between chlorophyll synthesis and heme and plant pigment synthesis: Mg ions, under the catalysis of a Mg ion chelating enzyme (CHLH), are inserted into protoporphyrin IX, forming the chlorophyll branch; Fe ions, under the catalysis of a Fe ion chelating enzyme (FECH), are inserted into protoporphyrin IX, forming the heme and plant pigment branch. At the branch point, CHLH and FECH compete for protoporphyrin IX [52]. Mg is part of the molecular composition of chlorophyll; chlorophyll formation will be affected if it is lacking. The concentration of Mg2+ influences the activity of CHLH [53]. Fe is necessary for protochlorophyllide formation; Mg-protoporphyrin IX and Mg-protoporphyrin IX methyl ester accumulate when Fe is deficient, and protochlorophyllide cannot form chlorophyll [53, 54]. Chlorophyll synthesis is also affected by the content of Cu, Mn, and other ions. In our experiment, the Cu content was not high, but those of Mg, Fe, and Mn were very high, particularly that of Mg, with content reaching 4311.7 mg/kg, which was 1.24-fold higher in blue leaves than in red leaves. We speculate that such high Mg levels may be associated with chlorophyll synthesis in S. uncinata. 
## 4.5. Effects of Anthocyanin and Copigment on Blue Coloration in S. uncinata Copigments, often flavones and flavonols, derive from the two branches of the flavonoid metabolic pathway [55]. Combined with anthocyanins, they can stabilize pigments, and the compounds they form will influence the coloration of anthocyanins to some degree [56]. Research by Li [57] found that the effect of copigments turned purple or pink delphinidin flowers blue. Malvidin-3-glucoside is the basic anthocyanin of Primula sinensis; the flower appears purple when it is combined with flavonol but appears garnet when it is not [58]. Under certain conditions, the larger the molar ratio of flavonols to anthocyanins, the more significant the copigmentation effect. In our research, we detected anthocyanins in a preliminary experiment by measuring absorbance values using an enzyme-linked immunosorbent assay (ELISA). Although the overall content was low, and the highest content was only 1.2 pigment units [14], we initially speculated that blue leaf coloration in S. uncinata may be related to delphinidin among the anthocyanins, or to a copigment with anthocyanins. However, we did not detect anthocyanins in blue leaves using liquid chromatography-MS (consistent with the conclusion that, compared with the anthocyanin biosynthesis pathway, the chlorophyll metabolism pathway may be the primary pigment metabolism pathway of S. uncinata [15]), although copigments such as flavones were present. If anthocyanins are not present, the copigmentation of flavones cannot occur. Therefore, we infer that blue leaf coloration in S. uncinata was not caused by copigmentation of anthocyanins. ## 5. Conclusion Through comparison of leaf paraffin transverse sections, freehand sections, and SEM photomicrographs, we found that the shape of the adaxial epidermis of S. uncinata leaves was convex or lens-shaped in lateral view and appeared as irregular circles with smooth embossment in top view. These shapes were different from those of the abaxial epidermis and of the adaxial epidermis of S. kraussiana leaves. We speculate that these structures increase the proportion of incident light entering the cell, deepening the leaf color, and therefore may be related to blue leaf coloration. Through comparison of previously published values of leaf pH, metal ion content, anthocyanins, and flavonoids with those of the three leaf types in our study, we found that leaf pH was similar among the leaf types and that the leaves all contained high levels of metal ions such as Mg, Fe, and Mn, and copigments such as flavones. However, because there was no anthocyanin present in blue S. uncinata leaves, we conclude that blue leaf coloration in S. uncinata was not explained by the three hypotheses of blue coloration: alkalization of vacuolar pH, metal chelation, and copigmentation with anthocyanins. --- *Source: 1005449-2022-02-24.xml*
2022
# Prevalence of S. mansoni Infection and Associated Risk Factors among School Children in Guangua District, Northwest Ethiopia **Authors:** Belaynesh Tazebew; Denekew Temesgen; Mastewal Alehegn; Desalew Salew; Molalign Tarekegn **Journal:** Journal of Parasitology Research (2022) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2022/1005637 --- ## Abstract Background. Schistosomiasis is one of the neglected tropical diseases and is prevalent in the tropics. It causes morbidity and mortality in developing countries, including Ethiopia. This study aimed to determine the prevalence of S. mansoni infection and associated risk factors in two schools of Guangua district, northwest Ethiopia. Methods. A cross-sectional study design was employed. Four hundred twenty-two participants were selected. Data were collected through observation and interviews with a structured questionnaire. Stool specimens were collected and examined using the two-slide Kato-Katz method. The data were analyzed using SPSS version 23. Logistic regression was fitted for analysis. Variables with p value <0.25 in the univariate logistic regression analysis were entered into the multivariable logistic regression model. Those with p value <0.05 were identified as significantly associated risk factors. To assure the quality of the data, training was given for data collectors and supervisors, and the tools were pretested on 5% of the sample size. Results. 404 (95.7%) school children were enrolled in the study. The overall prevalence of S. mansoni was 12.6%. School children in the age group 5-9 years old (AOR (95% CI): 22.27 (3.70-134.01), p=0.001), age group 10-14 years old (AOR (95% CI): 4.58 (1.14-18.42), p=0.032), grade levels 5-8 (AOR (95% CI): 14.95 (4.297-52.03), p=0.001), those who swim frequently (AOR (95% CI): 11.35 (2.33-55.33), p=0.003), and those who cultivate near the irrigation area (AOR (95% CI): 7.10 (2.31-21.80), p=0.001) were significantly associated with high risk of S. mansoni infection. Conclusion and Recommendation. From the findings of the current study, it can be concluded that the prevalence of Schistosoma mansoni in the study area is relatively high. Age of fourteen years and younger, swimming in the river, and irrigation practice were the main risk factors for S. mansoni infection. Thus, therapeutic interventions as well as health education are desirable. --- ## Body ## 1. Introduction Schistosomiasis or bilharzia is an acute and chronic parasitic disease caused by a blood fluke (trematode worm) belonging to the genus Schistosoma [1]. There are five schistosome species causing the disease, namely, S. haematobium, S. mansoni, S. japonicum, S. mekongi, and S. intercalatum [2]. The most clinically important species are Schistosoma mansoni and Schistosoma haematobium [1]. S. mansoni causes intestinal schistosomiasis, and it is endemic in sub-Saharan Africa [3]. It is predominant in most parts of the country [4]. In Ethiopia, a number of epidemiological studies have shown that intestinal schistosomiasis due to S. mansoni infection is widely distributed in several localities of the country, with prevalence as high as 90% in school children [5]. In Ethiopia, about 5.01 million people are thought to be infected with schistosomiasis and 37.5 million to be at risk of infection [6].
S. mansoni infection is transmitted through contact with fresh water polluted with human excreta containing Schistosoma eggs: the eggs hatch in fresh water and release free-swimming miracidia, which infect the aquatic snail Biomphalaria pfeifferi. Biomphalaria pfeifferi is the intermediate host in which S. mansoni completes part of its life cycle before releasing cercariae into the water, and humans can be infected during contact with water for various domestic purposes [7, 8]. The infection is more widespread in poor rural communities, particularly in places where fishing and agricultural activities are dominant [9]. Domestic activities such as washing clothes and fetching water from contaminated rivers and lakes are the main risk factors for S. mansoni infection and put children at particular risk. Poor hygiene and recreational activities like swimming and fishing also increase the risk of infection in children [10]. In Ethiopia, S. mansoni infection is one of the prevalent parasitic diseases reported across many regions, causing considerable morbidity. In northern Ethiopia, in Alamata district, studies revealed a 73.9% prevalence of S. mansoni infection [11]. Higher rates of infection have been observed in males than in females because of males' more frequent water contact [12], owing to their higher participation in bathing, swimming, and irrigation activities [13]. Effective control of the disease requires determining its prevalence rate and identifying risk factors of infection in high-risk population groups [12]. The study site was appropriate for this research because of the presence of irrigated farming along the river, an environment suitable for the intermediate host of S. mansoni. Efforts have been made to document the distribution of S. mansoni infection in nearly all corners of the region. However, it cannot be said that the distribution of the disease is fully mapped, as there are recent discoveries of new transmission foci, possibly associated with the expansion of water development projects and human movement [14]. Studies that indicate the prevalence of S. mansoni infection and other intestinal parasites in different areas are crucial for identifying communities at high risk for parasitic infections and for formulating suitable prevention and control measures. The current study aimed to determine the prevalence of S. mansoni infection and associated risk factors among school children in two settings of Guangua district, northwest Ethiopia. The findings will help in strengthening the information available so far and encourage policy makers to design effective strategies to combat S. mansoni infection. ## 2. Materials and Methods ### 2.1. Study Area and Design A cross-sectional study was conducted from February to May 2018 to determine the prevalence of S. mansoni infection and associated risk factors among school children in Guangua district, northwest Ethiopia. Guangua district is found in Agew Awi Zone, Amhara region. Guangua is bordered on the south and west by the Benshangul-Gumuz region, on the north by Dangla, on the northwest by Faggeta Lekoma and Banja Shekudad, and on the east by Ankasha Guagua; the Dura River, a tributary of the Abay River, defines part of its western border. The district has 20 rural kebeles and 64 primary schools, with an elevation of 1650 m above sea level. The average annual rainfall is 1896.6 mm, with an average temperature of 25°C. The total population of the district is 142,947. 
The economic base of the majority of the population (93.7%) of the district is agriculture. ### 2.2. Sample Size Determination and Sampling Techniques A sample size of 422 was determined using the single population proportion formula [15]. An assumed Schistosoma mansoni prevalence (p=0.5), level of confidence (z=1.96), and precision (d=0.05) were considered. In addition, a 10% nonresponse rate was added. Multistage sampling was employed to select the study subjects. The study was conducted in Anguay kebele, which has two schools. During the study period, the total number of children attending Kibi and Gichgich was 1650 and 921, respectively. The sample size was allocated to the two schools in proportion to their total numbers. This kebele was selected purposively because it is near the irrigation site on the Dura River. The students were stratified by grade level from 1 to 8 in both schools. The study participants were selected by systematic sampling in each class, using class rosters as the sampling frame. ### 2.3. Inclusion and Exclusion Criteria All school children who gave consent to participate in the study and who had no history of taking anthelminthic drugs at the time of data collection or within the previous three months were included in the study. Children who were absent on the day of data collection or who did not give consent were excluded. ### 2.4. Variables The dependent variable is the S. mansoni infection status. The independent variables were sociodemographic status, water contact habits, defecation practice and latrine availability, shoe wearing habits, irrigation practices, availability of dams, distance between their homes and water bodies, and knowledge about S. mansoni infection, its transmission, and prevention methods. ### 2.5. Data Collection #### 2.5.1. Data Collection Tool A structured interview questionnaire was used to collect data on socio-demographic characteristics and associated risk factors of S. mansoni infection. The questionnaire was first developed in English, translated into the local language, Amharic, and then translated back to English to check consistency. #### 2.5.2. Sample Collection and Examination A labeled, clean, dry, and leak-proof stool cup was used to collect a stool specimen of about 3 g from each student with an applicator stick, and it was preserved in 10% formalin solution. The stool samples were then transported to Bahir Dar University Biomedical Research Laboratory and processed by the Kato-Katz technique, using a fixed quantity of 41.7 mg of sieved stool on a holed template. The preparations were mounted on slides and covered with malachite green-saturated cellophane [9]. Finally, the smeared slides were examined under a microscope at 10× and 40× magnification for detection of eggs of the parasite. ### 2.6. Data Analysis Collected data were double entered and analyzed using SPSS version 23 software. Descriptive statistics were used to calculate relative frequencies and percentages of the variables. Chi-squared tests (χ2) were used to determine associations between variables and to test the statistical significance of differences. Logistic regression analysis was performed to examine associations between variables. Odds ratios (OR) were calculated with 95% confidence intervals (CI). 
Variables with p values <0.25 in the univariate test were selected for multivariable logistic regression analysis to identify the most important predictors of Schistosoma mansoni infection [16]. Associations were considered statistically significant when p values were less than 0.05. ### 2.7. Data Quality Control All the necessary reagents, chemicals, and instruments were checked with known positive and negative samples before processing and examination of the study participants' samples. Training was given to data collectors and supervisors. The specimens were also checked for serial number, quantity, and procedure of collection. Before the actual data collection, a pretest was conducted on 5% of the sample size, drawn from individuals who were not part of the study population, to ensure the validity of the data collection tool. The smear samples were reexamined by other laboratory experts, who were blinded to the first examination results. ### 2.8. Ethical Considerations Ethical approval was obtained from the research and community service coordinating office of the Science College, Bahir Dar University. Permission to conduct the study was obtained from the school administration/school director office after explaining the purpose and objective of the study. Informed written and oral consent was also obtained from the parents/guardians of the children or their homeroom teachers. All the data obtained from each study participant were kept confidential. Children found positive for Schistosoma mansoni were treated according to the WHO standard procedure. Participants found infected during the study were provided a prescription to obtain the drug at a nearby pharmacy. 
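The sample size in Section 2.2 follows the standard single population proportion formula, n = z²p(1−p)/d², inflated by the stated 10% non-response allowance. As a minimal illustrative sketch (not code from the study; the function name and rounding choice are assumptions), the reported figure can be approximately reproduced as follows.

```python
import math

def single_proportion_sample_size(p=0.5, z=1.96, d=0.05, nonresponse=0.10):
    """Single population proportion formula with a non-response allowance."""
    n = (z ** 2) * p * (1 - p) / (d ** 2)   # 384.16 for p=0.5, z=1.96, d=0.05
    return math.ceil(n * (1 + nonresponse))  # inflate by 10% for non-response

print(single_proportion_sample_size())  # 423
```

Depending on whether rounding is applied before or after the non-response inflation, this yields 422 or 423, consistent with the 422 participants reported.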
## 3. Results ### 3.1. Socio-Demographic Characteristics of Study Participants A total of 422 school children were invited to participate; among these, 404 (95.7%) individuals were enrolled in the study, and the remaining 18 individuals refused to participate. Of the total subjects, 227 (56.2%) were male and 177 (43.8%) were female. The highest proportion of participants, 185 (45.8%), were within the age range of 10-14 years. About 269 (66.6%) participants were from grades 1-4, and the remaining 135 (33.4%) were from grade levels 5-8. The parents of the majority of students (297 (73.5%)) were uneducated (Table 1).

Table 1: Socio-demographic characteristics of the study participants in Gichgich and Kibi, Guangua district, northwest Ethiopia (2019).

| Variable | Category | Frequency | Percent |
| --- | --- | --- | --- |
| Sex | Male | 227 | 56.2 |
| | Female | 177 | 43.8 |
| Age | 5-9 | 131 | 32.4 |
| | 10-14 | 185 | 45.8 |
| | 15-19 | 88 | 21.8 |
| Grade level | 1-4 | 269 | 66.6 |
| | 5-8 | 135 | 33.4 |
| Parent education status | Uneducated | 297 | 73.5 |
| | Educated | 107 | 26.5 |

### 3.2. Prevalence of S. mansoni Infection The overall prevalence of S. mansoni among the study participants was 51 (12.6%), while the remaining 353 (87.4%) were negative for S. mansoni infection (Figure 1). In addition to S. mansoni, polyparasitism was also observed during stool sample examination, including E. histolytica 42 (10.4%), G. lamblia 4 (1.0%), and hookworm 4 (1.0%) (Figure 2). Figure 1: Prevalence of Schistosoma mansoni infection among school children in Gichgich and Kibi, Guangua district, northwest Ethiopia (2019). Figure 2: Prevalence of S. mansoni infection and other common intestinal parasites among study participants in Anguay kebele, Guangua district, northwest Ethiopia (2018). ### 3.3. Risk Factors of S. mansoni In Chi-square analysis, no statistically significant differences in S. 
mansoni infection were observed among the categories of the variable, age, parent education status, body shower practices in rivers, and knowledge about S. mansoni infection and study site. However, the remaining variables were significantly associated with high risk of S. mansoni infection. S. mansoni infection was detected across all categories of the variable with varied prevalence rates of infection. The prevalence of S. mansoni infection was higher among males 36 (15.9%) than females 15 (8.5%) participants. The prevalence of S. mansoni infection was higher 28 (15.1%) among the study participants in the age group 10-14 years followed by age group of 5-9 years 16 (12.2%) and 15-19 years 7(8.0%). Students who attend grades 5-8 (22.2%) were highly infected by S. mansoni compared to those grades 1-4 (7.8%) (Table 2).Table 2 Chi-square analysis of association ofSchistosoma mansoni infection with socio demographic and associated risk factors of study participants in Gichgich and Kibi, Guangua district, northwest Ethiopia (2019). VariablesCategoryNumberexamined n(%)S. mansoni infectionχ2, pPositiveNegativeSexMale227 (56.2)36 (15.9)191 (84.1)4.917, 0.027Female177 (43.8)15 (8.5)162 (91.5)Age5-9131 (32.4)16 (12.2)115 (87.8)2.817, 0.24410-14185 (45.8)28 (15.1)157 (84.9)15-1988 (21.8)7 (8.0)81 (92.0)Grade level1-4269 (66.6)21 (7.8)248 (92.2)16.935,0.0015-8135 (33.4)30 (22.2)105 (77.8)Parent education statusUneducated254 (62.9)43 (14.5)254 (85.5)3.496, 0.062Educated107 (73.5)8 (7.5)99 (92.5)Washing clothes and utensils in the riverYes304 (75.2)45 (14.8)259 (85.2)5.286, 0.021No100 (24.8)6 (6.0)94 (94.0)Frequency of swimming in the riversAlways76 (18.8)29 (38.2)47 (61.8)55.348, 0.001Some times264 (65.3)18 (6.8)246 (93.2)Not at all64 (15.8)4 (6.3)60 (93.8)Body shower practices in in riversYes332 (82.2)42 (12.7)290 (87.3)0.001, 0.972No72 (17.8)9 (12.5)63 (87.5)Crossing the water bodies by therespondents’ way to and from schoolYes164 (40.6)37 (22.6)127 (77.4)24.715, 0.001No240 (59.4)14 (5.8)226 (94.2)Irrigation practicesYes189 (46.8)43 (22.8)146 (77.2)33.024, 0.001No215 (53.2)8 (3.7)207 (96.3)Presence of dams in their localityYes161 (39.9)28 (17.4)133 (82.6)5.516, 0.019No243 (60.1)23 (9.5)220 (90.5)The distance between their homesand water bodiesNear (<1 KM)199 (49.3)37 (18.6)162 (81.4)12.669, 0.001Far (≥1 KM)205 (50.7)14 (6.8)191 (93.2)Shoe wearing habitsYes171 (42.3)13 (7.6)158 (92.4)6.778, 0.009No233 (57.7)38 (16.3)195 (83.7)Presence of latrine in their homesYes159 (39.4)9 (5.7)150 (94.3)11.526, 0.001No245 (60.6)42 (17.1)203 (82.9)Open defecationYes250 (61.9)43 (17.2)207 (82.8)12.452, 0.001No154 (38.1)8 (5.2)146 (94.8)Knowledge aboutS. mansoniinfection and its transmission methodYes151 (37.4)10 (6.6)141 (93.4)7.873, 0.005No253 (62.6)41 (16.2)212 (83.8)Knowledge about the burdenof S. mansoni infection in their areaYes153 (37.9)9 (5.9)144 (94.1)10.147, 0.001No251 (62.1)42 (16.7)209 (83.3)Knowledge aboutS. mansoniinfection preventionYes160 (39.6)14 (8.8)146 (91.3)3.604, 0.058No244 (60.4)37 (15.2)207 (84.8)Study siteGichgich201 (49.8)29 (14.4)172 (85.6)1.180, 0.277Kibi203 (50.2)22 (10.8)181 (89.2)Statistically significant atp<0.05.The majority of 304 (75.2%) participants washes their clothes and utensils in the River, amongst those 45 (14.8%) were positive forS. mansoni infection. With regard to swimming habits in the rivers, the prevalence of S. mansoni infection was higher in those who swim always 29 (38.2%) and followed by some times 18 (6.8%) and not at all 4 (6.3%). 
In this study, respondents who cross the water bodies (37 (22.6%)), who were involved at irrigation practices (43 (22.8%)), and who live at areas where there are dams in their locality (28 (17.4%)) were found to be positive for S. mansoni infection. In terms of distance between their homes and water bodies, the highest prevalence was observed in near distance (<1 KM) 37 (18.6%) compared to far distance (≥1 KM) 14 (6.8%). School children who had no the habit of shoe wearing (38 (16.3%)) and absence of latrine in their homes (42 (17.1%)) were more infected with S. mansoni (Table 2).This result showed that more than half (253 (62.6%)) of the study participants did not know about the disease,S. mansoni, and its transmission methods among these (41 (16.2%)) were positive. Sixty one percent of study participants had no knowledge about the burden of S. mansoni infection, of those majority (42 (16.7%)) were affected. More than half (250 (61.9%)) of the respondents defecate in open field. Among S. mansoni positive individuals, majority of them were defecates in open field (43(17.2%)) compared to those who did not defecate in open field (8 (5.2%)) (Table 2). ### 3.4. Multivariate Analyses ofS. mansoni Infection and Its Associated Factors In multivariate analysis, the significant independent predictors ofS. mansoni infection in this study were the age group 5-9 years and age group 10-14 years, grade level 5-8, always swimming in the River, having irrigation practice, and crossing the water bodies, while the remaining variables were not observed to have any significant association with S. mansoni infection (Table 3).Table 3 Multivariate logistic regression analysis ofSchistosoma mansoni prevalence with selected seemingly significant variables in Gichgich and Kibi, Guangua district, northwest Ethiopia (2019). List of variableCategoryS. 
mansoni infectionsCOR (95% CI)p valueAOR (95% CI)pvaluePositiveNegativeSexMale36 (15.9)191 (84.1)2.04 (1.08, 3.85)0.0291.48 (0.574, 3.84)0.416Female15 (8.5)162 (91.5)1.001.00Age5-916 (12.2)115 (87.8)1.61 (0.63, 4.09)0.31722.27 (3.70,134.01)0.001∗10-1428 (15.1)157 (84.9)2.06 (0.86, 4.93)0.1034.58 (1.14, 18.42)0.032∗15-197 (8.0)81 (92.0)1.001.00Grade level5-821 (7.8)248 (92.2)3.37 (1.85, 6.16)0.00114.95 (4.297,52.03)0.001∗1-430 (22.2)105 (77.8)1.001.00Parent education statusUneducated43 (14.5)254 (85.5)2.30 (0.95, 4.61)0.0702.77 (0.96, 7.96)0.059Educated8 (7.5)99 (92.5)1.001.00Washing clothes andutensils in the riverYes45 (14.8)259 (85.2)2.72 (1.13, 6.59)0.0262.50 (0.71, 8.87)0.156No6 (6.0)94 (94.0)1.001.00Frequency of swimming inthe riversAlways29 (38.2)47 (61.8)9.26 (3.04,28.17)0.00111.35 (2.33, 55.33)0.003∗Some times18 (6.8)246 (93.2)1.10 (0.36, 3.36)0.8701.48 (0.33, 6.70)0.613Not at all4 (6.3)60 (93.8)1.001.00Crossing the water bodiesby the respondents’ way to and from schoolYes37 (22.6)127 (77.4)4.70 (2.45, 9.03)0.0014.17 (1.70, 10.21)0.002∗No14 (5.8)226 (94.2)1.001.00Irrigation practicesYes43 (22.8)146 (77.2)7.62 (3.48,16.68)0.0017.10 (2.31, 21.80)0.001∗No8 (3.7)207 (96.3)1.001.00Presence of dams in theirlocalityYes28 (17.4)133 (82.6)2.01 (1.11, 3.64)0.0200.67 (0.25, 1.77)0.417No23 (9.5)220 (90.5)1.001.00The distance between theirhomes and water bodiesNear (<1 KM)37 (18.6)162 (81.4)3.12 (1.63, 5.97)0.0011.67 (0.63, 4.44)0.303Far (≥1 KM)14 (6.8)191 (93.2)1.001.00Shoe wearing habitsYes13 (7.6)158 (92.4)0.42 (0.22, 0.82)0.0110.41 (0.16, 1.80)0.076No38 (16.3)195 (83.7)1.001.00Presence of latrine in theirhomesYes9 (5.7)150 (94.3)0.29 (0.14, 0.61)0.0010.33 (0.06, 1.84)0.204No42 (17.1)203 (82.9)1.001.00Open defecationYes43 (17.2)207 (82.8)3.79 (1.73, 8.30)0.0012.39 (0.41, 13.90)0.333No8 (5.2)146 (94.8)1.001.00Knowledge aboutS. mansoni infection and its transmission methodYes10 (6.6)141 (93.4)0.37 (0.18, 0.76)0.0070.70 (0.26, 1.90)0.485No41 (16.2)212 (83.8)1.001.00Knowledge about theburden of S. mansoniinfection in their areaYes9 (5.9)144 (94.1)0.31 (0.15, 0.66)0.0020.391 (0.14, 1.06)0.064No42 (16.7)209 (83.3)1.001.00Knowledge aboutS. mansoni infection preventionYes14 (8.8)146 (91.3)0.54 (0.28, 1.03)0.0610.46 (0.18, 1.18)0.106No37 (15.2)207 (84.8)1.001.00COR = crude odd ratio; sig. at p≤0.25; AOR∗ = adjusted odd ratio; sig. at p≤0.05. In this study, the likelihood of S. mansoni infection among participants in the 5-9 years age group was significantly higher, about 22 times (AOR=22.27, 95% CI 3.70-134.01, p=0.001). The odds of S. mansoni infection were also significantly higher, about four times, in the age group of 10-14 years (AOR=4.58, 95% CI 1.14-18.42, p=0.032). Regarding the grade level of the study participants, those in grades 5-8 were at about fifteen times higher risk of S. mansoni infection than those enrolled in grade levels 1-4, and this was statistically significant (AOR=14.95, 95% CI 4.297-52.03, p=0.001) (Table 3). The odds of positive S. mansoni infection were significantly higher, about eleven times, among individuals who always swim in the river compared to those who never swim in the river (AOR=11.35, 95% CI 2.33-55.33, p=0.003). Subjects who practice irrigation were about seven times more likely to be positive for S. mansoni infection (AOR=7.10, 95% CI 2.31-21.80, p=0.001) (Table 3).
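As a quick plausibility check on the crude estimates above (an illustrative sketch, not the authors' analysis script), the 2×2 counts for sex in Table 2 (36/191 infected/uninfected males vs. 15/162 females) reproduce the crude odds ratio of 2.04 in Table 3 and the χ² of 4.917 in Table 2; the adjusted ORs would additionally require fitting the multivariable logistic regression model described in Section 2.6.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Sex vs. S. mansoni status, counts from Table 2 (rows: male, female; cols: positive, negative)
table = np.array([[36, 191],
                  [15, 162]])

# Crude odds ratio as the cross-product ratio of the 2x2 table
cor = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])
print(f"Crude OR (male vs. female): {cor:.2f}")  # ~2.04, matching Table 3

# Pearson chi-square without continuity correction, as reported in Table 2
chi2, p, dof, _ = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}")  # ~4.917, p ~ 0.027
```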
## 4. Discussion The overall prevalence of S. mansoni among study participants in the current study was 12.6%. It is comparable with the findings in an endemic area of the Niger River basin (12.5%) [17] and in Agaie, Niger State (10.17%) [18] of Nigeria, and in Dembia (15.4%) [19] and Wondo District (11.4%) [20] of Ethiopia. However, it was higher than the prevalence among preschool children in Gondar town (5.9%) [21], among school children in Côte d’Ivoire (6.1%) [22], and in the White Nile River Basin of Sudan (5.9%) [23]. The higher prevalence of S. mansoni in the current study could be due to the Dura River, which the local community uses for washing clothes, bathing, and fetching water for domestic purposes. The river may serve as a potential source of S. mansoni infection. Moreover, the weather in the area is relatively warm and humid, which favors the survival and reproduction of the snail. Other possible causes might be differences in children's water contact behavior and in their level of awareness about the prevention and control of S. mansoni infection. The prevalence in the current study was lower than the overall prevalence of S. mansoni infection among school children in districts of north Gondar (Chuahit, Sanja, Debark, and Maksegnit) (33.5%) [24], near rivers in Jimma town (28.7%) [25], in a rural area of Bahir Dar (24.9%) [26], in Jimma Zone (27.6%) [27], in southern Ethiopia (25.8%) [28], and in Mekelle city (23.9%) [12]. It was also much lower than the findings in the Sanja area, Amhara region (82.8%) [29], and Damot Woide district, Wolaita Zone (81.3%) [14]. The variations in prevalence may be due to factors such as differences in water contact habits, toilet utilization, and the ecological distribution of snails in the study area (the presence of a fast-running river (Dura) in the current study may lead to low availability of vector snails, since snails mainly prefer stagnant or slow-moving water bodies). The snail (Biomphalaria pfeifferi) responsible for the transmission of S. mansoni infection is more prevalent in areas 2000 m above sea level [30]. The variations in prevalence in this study might also be due to factors related to the characteristics of intermediate snail hosts. In the current study, polyparasitism was observed in addition to S. mansoni infection. Even where sanitation is practiced, it is not by itself sufficient to prevent schistosome infection [31]. On the other hand, S. mansoni and other parasitic infections will never be a public health problem if there are appropriate improvements in hygiene and sanitation standards [32]. The presence of adequate sanitation does not necessarily guarantee its use. 
The bulk of Schistosoma eggs reach the water directly, usually from children during bathing and swimming. The use of adequate sanitation systems for urine and faeces has reduced schistosomiasis within short periods of time, while it takes longer for other helminths such as Ascaris lumbricoides and Trichuris trichiura [31]. Though studies in two settings of Côte d’Ivoire [22] and in Kenya [33] reported similar prevalence rates for boys and girls, many studies showed contrasting findings in the prevalence of S. mansoni infection by sex [32–34]. The findings of the current study also showed a higher prevalence of S. mansoni infection in males than in females. This is in agreement with studies done in Wolaita Zone of Ethiopia [14] and with findings in Niger [35] and central Sudan [36]. Male children may have greater exposure to cercaria-contaminated water bodies than females while helping their families in outdoor activities such as herding cattle. Unlike a study in the Sanja area of Amhara, Ethiopia [29], which reported a similar prevalence of S. mansoni across the age groups of school children, the current findings showed that students aged 14 years and younger were at a higher risk of acquiring S. mansoni infection than those aged 15-19. This was in line with previous studies done in Ethiopia [29, 36]. It might be due to their playing in the field, which increases the probability of contact with cercaria-contaminated water bodies. In the current study, the prevalence of S. mansoni infection was significantly associated with the frequency of swimming in the river. The odds of positive S. mansoni infection were eleven times higher in individuals who always swim in the river compared to those who never swim. This was in agreement with studies done in Hawassa and Gorgora of Ethiopia [33, 37]. The current study revealed that water contact through irrigation practice and crossing water bodies on the way to school is associated with a high prevalence of S. mansoni infection; this was in line with studies in Wolaita Zone, southern Ethiopia [14], in Gondar town of Ethiopia [21], and in south and central Côte d’Ivoire [38], which reported that water contact at stream crossings and herding cattle near the stream increase the risk of S. mansoni infection. This study also showed that working in an irrigated field is significantly associated with S. mansoni infection. This is in agreement with the findings of studies at the Dudicha and Shesha Kekel localities [39], among different water source users in the Tigray region of Ethiopia [40], and in southeastern Brazil [41]. Distance from water bodies was also associated with S. mansoni infection: children who live near water bodies were more infected with S. mansoni than those who live farther away, in agreement with previous findings of a study in Côte d’Ivoire [42]. This study also showed that the prevalence of S. mansoni infection was associated with shoe wearing habits. This is in line with the findings of a study in Jiga town [43]. This could be due to the agriculture-based economy of the community; living close to water bodies and habitual barefoot water contact increase the chance of contact with water that contains S. mansoni cercariae. ## 5. Conclusion and Recommendation S. mansoni infection was a major problem among school children in the current study. 
Students aged fourteen years and younger, students who frequently swim in the river, those engaged in irrigation practice, and those who cross water bodies were at a higher risk of S. mansoni infection. Hence, it is recommended to focus on raising the awareness of school children about the prevention and control measures for S. mansoni in the locality. In addition, bodies responsible for irrigation practices should work on the regular cleaning of water canals, which favor the reproduction of the snail. --- *Source: 1005637-2022-04-16.xml*
1005637-2022-04-16_1005637-2022-04-16.md
46,730
Prevalence ofS. mansoni Infection and Associated Risk Factors among School Children in Guangua District, Northwest Ethiopia
Belaynesh Tazebew; Denekew Temesgen; Mastewal Alehegn; Desalew Salew; Molalign Tarekegn
Journal of Parasitology Research (2022)
Biological Sciences
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2022/1005637
1005637-2022-04-16.xml
--- ## Abstract Back ground. Schistosomiasis is one of the neglected tropical diseases and is prevalent in tropics. It causes morbidity and mortality in developing countries including Ethiopia. This study aimed to determine the prevalence of S. mansoni infection and associated risk factors among two schools of Guangua district, northwest Ethiopia. Methods. A cross-sectional study design was employed. Four hundred twenty-two participants were selected. Data was collected through observation and interview with structured questionnaire. Stool specimens were collected and examined using two-slide Kato-Katz method. The data were analyzed using SPSS version 23. Logistic regression was fitted for analysis. Variables with p value <0.25 in the univariate logistic regression analysis were entered into the multivariable logistic regression model. Those with <0.05 were identified as significantly associated risk factors. To assure the quality of the data, training was given for data collectors and supervisors, and the tools were pretested on 5% of the sample size. Results. 404 (95.7%) school children were enrolled in the study. The overall prevalence of S. mansoni was 12.6%. School children in the age group 5-9 years old (AOR (95% CI): 22.27 (3.70-134.01), p=0.001), age group 10-14 years old (AOR (95% CI): 4.58 (1.14-18.42), p=0.032), grade levels 5-8 (AOR (95% CL): 14.95 (4.297-52.03), p=0.001),who swim frequently (AOR (95% CI): 11.35 (2.33-55.33), p=0.003), and those who cultivate near the irrigation area (AOR (95% CI): 7.10 (2.31-21.80), p=0.001) were significantly associated with high risk of S. mansoni infection. Conclusion and Recommendation. From the finding of the current study, it can be concluded that the prevalence of Schistosoma mansoni in the study area is relatively high. Age of fourteen and younger years old, swimming in the river, and irrigation practice were the main risk factors of S. mansoni infection. Thus, therapeutic interventions as well as health education are desirable. --- ## Body ## 1. Introduction Schistosomiasis or bilharzia is an acute and chronic parasitic disease caused by a blood fluke (trematode worm) belonging to the genus Schistosoma [1]. There are five schistosome species causing the disease, namely, S. haematobium, S. mansoni, S. japonicum, S. mekongi, and S. intercalatum [2]. The most clinically important species are Schistosoma mansoni and Shistosoma haematobium [1]. S. mansoni causes intestinal schistosomiasis, and it is endemic in sub-Saharan Africa [3]. It is predominant in most parts of the country [4].In Ethiopia, a number of epidemiological studies showed that intestinal schistosomiasis due toS. mansoni infection is widely distributed in several localities of the country with varying magnitudes of prevalence as high as 90% in school children [5]. In Ethiopia, about 5.01 million people are thought to be infected with schistosomiasis and 37.5 million to be at risk of infections [6].S. mansoni infection is transmitted through contact with fresh water polluted with human excreta containing schistosoma eggs, when the egg hatch in fresh water and releasing free swimming miracidia, which infect aquatic snail Biomphalaria pfeifferi. Biomphalaria pfeifferi is an intermediate host for S. mansoni to complete its life cycle and then release cercariae into the water, and human can be infected during contacts with water for various domestic purposes [7, 8]. 
The disease is more widespread in poor rural communities, particularly in places where fishing and agricultural activities are dominant [9]. Domestic activities such as washing clothes and fetching water from contaminated rivers and lakes are the main risk factors for S. mansoni infection and expose children to infection. Poor hygiene and recreational activities like swimming and fishing also increase the risk of infection in children [10]. In Ethiopia, S. mansoni infection is one of the prevalent parasitic diseases reported across many regions, causing considerable morbidity. In northern Ethiopia, in Alamata district, studies revealed a 73.9% prevalence of S. mansoni infection [11]. Higher rates of infection were observed in males than in females due to the more frequent water contact behavior of males [12], as they participate more in bathing, swimming, and irrigation activities [13]. Effective control of the disease requires determining its prevalence rate and identifying risk factors of infection in high-risk population groups [12]. The study site was appropriate for this research because of the irrigated farming practiced along the river, which provides suitable habitat for the intermediate host of S. mansoni. Efforts have been made to document the distribution of S. mansoni infection in nearly all corners of the region. However, it cannot be said that the distribution of the disease is fully mapped out, as there are recent discoveries of new transmission foci, possibly associated with the expansion of water development projects and human movement [14]. Studies that indicate the prevalence of S. mansoni infection and other intestinal parasites in different areas are crucial for identifying communities at high risk of parasitic infections and for formulating suitable prevention and control measures. The current study aimed to determine the prevalence of S. mansoni infection and associated risk factors among school children in two settings of Guangua district, northwest Ethiopia. The findings will help to strengthen the information available so far and encourage policy makers to design effective strategies to combat S. mansoni infection. ## 2. Materials and Methods ### 2.1. Study Area and Design A cross-sectional study was conducted from February to May 2018 to determine the prevalence of S. mansoni infection and associated risk factors among school children in Guangua district, northwest Ethiopia. Guangua district is found in Agew Awi Zone, Amhara region. Guangua is bordered on the south and west by the Benshangul-Gumuz region, on the north by Dangla, on the northwest by Faggeta Lekoma and Banja Shekudad, and on the east by Ankasha Guagua; the Dura River, a tributary of the Abay River, defines part of its western border. The district has 20 rural kebeles and 64 primary schools, with an elevation of 1650 m above sea level. The average annual rainfall is 1896.6 mm, with an average temperature of 25°C. The total population of the district is 142,947. The economic base of the majority of the population (93.7%) of the district is agriculture. ### 2.2. Sample Size Determination and Sampling Techniques A sample size of 422 was determined using the single population proportion formula [15]. A Schistosoma mansoni prevalence of p=0.5, a 95% level of confidence (z=1.96), and a precision of d=0.05 were considered. In addition, a 10% nonresponse rate was added.
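As a check on the reported figure, the single population proportion formula with the stated inputs (p = 0.5, z = 1.96, d = 0.05) reproduces the sample size of 422 once the 10% nonresponse allowance is added:

```latex
n_0 = \frac{z^{2}\,p\,(1-p)}{d^{2}}
    = \frac{(1.96)^{2}\times 0.5 \times 0.5}{(0.05)^{2}}
    \approx 384,
\qquad
n = n_0 \times 1.10 \approx 422.
```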
Multistage sampling was employed to select the study subjects. The study was conducted in Anguay kebele, which has two schools. During the study period, the total number of children attending Kibi and Gichgich was 1650 and 921, respectively. The sample size was allocated to the two schools in proportion to their totals. This kebele was selected purposively as it is close to the irrigation site on the Dura River. The students were stratified by grade level from 1 to 8 in both schools. The study participants were then selected by a systematic sampling technique in each class, using class rosters as the sampling frame. ### 2.3. Inclusion and Exclusion Criteria All school children who gave consent to participate in the study and who had no history of taking anthelminthic drugs during the data collection period or within the last three months were included in the study. Children who were absent on the day of data collection or who did not give consent were excluded. ### 2.4. Variables The dependent variable was S. mansoni infection status. The independent variables were sociodemographic status, water contact habit, defecation practice and latrine availability, shoe wearing habit, irrigation practices, availability of dams, distance between their homes and water bodies, and knowledge about S. mansoni infection, its transmission, and prevention methods. ### 2.5. Data Collection #### 2.5.1. Data Collection Tool A structured interview questionnaire was used to collect data on sociodemographic characteristics and associated risk factors of S. mansoni infection. The questionnaire was first developed in English, translated into the local language, Amharic, and then translated back into English to check consistency. #### 2.5.2. Sample Collection and Examination A labeled, clean, dry, and leak-proof stool cup was used to collect a stool specimen of about 3 g from each student with an applicator stick, and it was preserved in 10% formalin solution. The stool samples were then transported to Bahir Dar University Biomedical Research Laboratory and processed by the Kato-Katz technique, using a fixed quantity of 41.7 mg of sieved stool on a holed template. The preparations were mounted on slides and covered with malachite green-saturated cellophane [9]. Finally, the smeared slides were examined under a microscope at 10× and 40× magnification to detect eggs of the parasite. ### 2.6. Data Analysis Collected data were double entered into and analyzed using SPSS version 23 software. Descriptive statistics were carried out to calculate relative frequencies and percentages of the variables. Chi-squared tests (χ2) were used to determine associations between variables and to test the statistical significance of differences. Logistic regression analysis was performed to examine associations between variables, and odds ratios (OR) were calculated with 95% confidence intervals (CI). Variables with p values <0.25 in the univariate test were selected for multivariate logistic regression analysis to identify the most important predictors of Schistosoma mansoni infection [16] (a small sketch of this two-stage procedure is given at the end of the Methods section). Associations were considered statistically significant when p values were less than 0.05. ### 2.7. Data Quality Control All the necessary reagents, chemicals, and instruments were checked with known positive and negative samples before processing and examination of the study participants' samples. Training was given to data collectors and supervisors.
The specimens were also checked for serial number, quantity, and procedure of collection. Before the actual data collection, a pretest was conducted on 5% of the sample size, drawn from outside the actual study population, to ensure the validity of the data collection tool. The smear samples were reexamined by other laboratory experts who were blinded to the first examination results. ### 2.8. Ethical Considerations Ethical approval was obtained from the research and community service coordinating office of the Science College, Bahir Dar University. Permission to conduct the study was obtained from the school administration/school director's office after explaining the purpose and objective of the study. Informed written and oral consent was also obtained from the parents/guardians of the children or their homeroom teachers. All the data obtained from each study participant were kept confidential. Children found positive for Schistosoma mansoni were treated according to the WHO standard procedure. Participants who were found infected during the study were provided a prescription to obtain the drug at the nearby pharmacy.
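A minimal sketch of the two-stage modelling described in Section 2.6 is given below. It assumes a hypothetical data frame `dat` with a binary outcome `s_mansoni_positive` and the questionnaire variables coded as factors; the variable names are illustrative, not the authors' actual coding, and the analysis in the paper was run in SPSS rather than R.

```r
# Two-stage logistic regression: univariate screening at p < 0.25,
# then a multivariable model reporting adjusted odds ratios (AORs).
vars <- c("sex", "age_group", "grade_level", "swimming_frequency",
          "irrigation_practice", "crossing_water_bodies")   # illustrative subset

# Stage 1: keep variables with any univariate Wald p value below 0.25
keep <- sapply(vars, function(v) {
  fit <- glm(reformulate(v, response = "s_mansoni_positive"),
             data = dat, family = binomial)
  any(coef(summary(fit))[-1, "Pr(>|z|)"] < 0.25)
})

# Stage 2: multivariable model with the retained variables
fit_multi <- glm(reformulate(vars[keep], response = "s_mansoni_positive"),
                 data = dat, family = binomial)

# AORs with Wald 95% confidence intervals
cbind(AOR = exp(coef(fit_multi)), exp(confint.default(fit_multi)))
```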
## 3. Results ### 3.1. Socio-Demographic Characteristics of Study Participants A total of 422 school children were invited to participate; among these, 404 (95.7%) were enrolled in the study, and the remaining 18 refused to participate. Of the total subjects, 227 (56.2%) were male and 177 (43.8%) were female. The highest proportion of participants, 185 (45.8%), were within the age range of 10-14 years. About 269 (66.6%) participants were from grades 1-4, and the remaining 135 (33.4%) were from grade levels 5-8. The parents of the majority of students (297 (73.5%)) were uneducated (Table 1).

Table 1: Socio-demographic characteristics of the study participants in Gichgich and Kibi, Guangua district, northwest Ethiopia (2019).

| Variable | Category | Frequency | Percent |
| --- | --- | --- | --- |
| Sex | Male | 227 | 56.2 |
| | Female | 177 | 43.8 |
| Age | 5-9 | 131 | 32.4 |
| | 10-14 | 185 | 45.8 |
| | 15-19 | 88 | 21.8 |
| Grade level | 1-4 | 269 | 66.6 |
| | 5-8 | 135 | 33.4 |
| Parent education status | Uneducated | 297 | 73.5 |
| | Educated | 107 | 26.5 |

### 3.2. Prevalence of S. mansoni Infection The overall prevalence of S. mansoni among the study participants was 51 (12.6%), while the remaining 353 (87.4%) were negative for S. mansoni infection (Figure 1). In addition to S. mansoni, polyparasitism was also observed during the stool sample examination, including E. histolytica 42 (10.4%), G. lamblia 4 (1.0%), and hookworm 4 (1.0%) (Figure 2). Figure 1: Prevalence of Schistosoma mansoni infection among school children in Gichgich and Kibi, Guangua district, northwest Ethiopia (2019). Figure 2: Prevalence of S. mansoni infection and other common intestinal parasites among study participants in Anguay kebele, Guangua district, northwest Ethiopia (2018). ### 3.3. Risk Factors of S. mansoni In the chi-square analysis, no statistically significant differences in S. mansoni infection were observed among the categories of age, parent education status, body shower practices in rivers, knowledge about S. mansoni infection, and study site. However, the remaining variables were significantly associated with a high risk of S. mansoni infection. S. mansoni infection was detected across all categories of the variables, with varied prevalence rates. The prevalence of S. mansoni infection was higher among male (36 (15.9%)) than female (15 (8.5%)) participants. The prevalence of S. mansoni infection was highest, 28 (15.1%), among the study participants in the age group 10-14 years, followed by the age groups 5-9 years (16 (12.2%)) and 15-19 years (7 (8.0%)).
Students who attend grades 5-8 (22.2%) were more highly infected by S. mansoni than those in grades 1-4 (7.8%) (Table 2).

Table 2: Chi-square analysis of the association of Schistosoma mansoni infection with sociodemographic and associated risk factors of study participants in Gichgich and Kibi, Guangua district, northwest Ethiopia (2019).

| Variable | Category | Number examined n (%) | Positive | Negative | χ², p |
| --- | --- | --- | --- | --- | --- |
| Sex | Male | 227 (56.2) | 36 (15.9) | 191 (84.1) | 4.917, 0.027 |
| | Female | 177 (43.8) | 15 (8.5) | 162 (91.5) | |
| Age | 5-9 | 131 (32.4) | 16 (12.2) | 115 (87.8) | 2.817, 0.244 |
| | 10-14 | 185 (45.8) | 28 (15.1) | 157 (84.9) | |
| | 15-19 | 88 (21.8) | 7 (8.0) | 81 (92.0) | |
| Grade level | 1-4 | 269 (66.6) | 21 (7.8) | 248 (92.2) | 16.935, 0.001 |
| | 5-8 | 135 (33.4) | 30 (22.2) | 105 (77.8) | |
| Parent education status | Uneducated | 254 (62.9) | 43 (14.5) | 254 (85.5) | 3.496, 0.062 |
| | Educated | 107 (73.5) | 8 (7.5) | 99 (92.5) | |
| Washing clothes and utensils in the river | Yes | 304 (75.2) | 45 (14.8) | 259 (85.2) | 5.286, 0.021 |
| | No | 100 (24.8) | 6 (6.0) | 94 (94.0) | |
| Frequency of swimming in the rivers | Always | 76 (18.8) | 29 (38.2) | 47 (61.8) | 55.348, 0.001 |
| | Sometimes | 264 (65.3) | 18 (6.8) | 246 (93.2) | |
| | Not at all | 64 (15.8) | 4 (6.3) | 60 (93.8) | |
| Body shower practices in rivers | Yes | 332 (82.2) | 42 (12.7) | 290 (87.3) | 0.001, 0.972 |
| | No | 72 (17.8) | 9 (12.5) | 63 (87.5) | |
| Crossing the water bodies on the respondents' way to and from school | Yes | 164 (40.6) | 37 (22.6) | 127 (77.4) | 24.715, 0.001 |
| | No | 240 (59.4) | 14 (5.8) | 226 (94.2) | |
| Irrigation practices | Yes | 189 (46.8) | 43 (22.8) | 146 (77.2) | 33.024, 0.001 |
| | No | 215 (53.2) | 8 (3.7) | 207 (96.3) | |
| Presence of dams in their locality | Yes | 161 (39.9) | 28 (17.4) | 133 (82.6) | 5.516, 0.019 |
| | No | 243 (60.1) | 23 (9.5) | 220 (90.5) | |
| Distance between their homes and water bodies | Near (<1 km) | 199 (49.3) | 37 (18.6) | 162 (81.4) | 12.669, 0.001 |
| | Far (≥1 km) | 205 (50.7) | 14 (6.8) | 191 (93.2) | |
| Shoe wearing habits | Yes | 171 (42.3) | 13 (7.6) | 158 (92.4) | 6.778, 0.009 |
| | No | 233 (57.7) | 38 (16.3) | 195 (83.7) | |
| Presence of latrine in their homes | Yes | 159 (39.4) | 9 (5.7) | 150 (94.3) | 11.526, 0.001 |
| | No | 245 (60.6) | 42 (17.1) | 203 (82.9) | |
| Open defecation | Yes | 250 (61.9) | 43 (17.2) | 207 (82.8) | 12.452, 0.001 |
| | No | 154 (38.1) | 8 (5.2) | 146 (94.8) | |
| Knowledge about S. mansoni infection and its transmission method | Yes | 151 (37.4) | 10 (6.6) | 141 (93.4) | 7.873, 0.005 |
| | No | 253 (62.6) | 41 (16.2) | 212 (83.8) | |
| Knowledge about the burden of S. mansoni infection in their area | Yes | 153 (37.9) | 9 (5.9) | 144 (94.1) | 10.147, 0.001 |
| | No | 251 (62.1) | 42 (16.7) | 209 (83.3) | |
| Knowledge about S. mansoni infection prevention | Yes | 160 (39.6) | 14 (8.8) | 146 (91.3) | 3.604, 0.058 |
| | No | 244 (60.4) | 37 (15.2) | 207 (84.8) | |
| Study site | Gichgich | 201 (49.8) | 29 (14.4) | 172 (85.6) | 1.180, 0.277 |
| | Kibi | 203 (50.2) | 22 (10.8) | 181 (89.2) | |

Statistically significant at p < 0.05.

The majority of participants (304 (75.2%)) wash their clothes and utensils in the river; among these, 45 (14.8%) were positive for S. mansoni infection. With regard to swimming habits in the rivers, the prevalence of S. mansoni infection was highest in those who always swim (29 (38.2%)), followed by those who swim sometimes (18 (6.8%)) and those who do not swim at all (4 (6.3%)). In this study, respondents who cross the water bodies (37 (22.6%)), who were involved in irrigation practices (43 (22.8%)), and who live in areas where there are dams in their locality (28 (17.4%)) were found to be positive for S. mansoni infection. In terms of distance between their homes and water bodies, the highest prevalence was observed at near distance (<1 km; 37 (18.6%)) compared with far distance (≥1 km; 14 (6.8%)). School children who had no habit of shoe wearing (38 (16.3%)) and who had no latrine in their homes (42 (17.1%)) were more often infected with S. mansoni (Table 2). More than half (253 (62.6%)) of the study participants did not know about the disease, S. mansoni, and its transmission methods; among these, 41 (16.2%) were positive. Sixty-one percent of study participants had no knowledge about the burden of S. mansoni infection, and the majority of those affected (42 (16.7%)) were in this group. More than half (250 (61.9%)) of the respondents defecate in the open field. Among S. mansoni-positive individuals, the majority defecate in the open field (43 (17.2%)) compared with those who do not (8 (5.2%)) (Table 2). ### 3.4. Multivariate Analyses of S. mansoni Infection and Its Associated Factors In the multivariate analysis, the significant independent predictors of S. mansoni infection in this study were the age groups 5-9 years and 10-14 years, grade levels 5-8, always swimming in the river, irrigation practice, and crossing the water bodies, while the remaining variables showed no significant association with S. mansoni infection (Table 3).

Table 3: Multivariate logistic regression analysis of Schistosoma mansoni prevalence with selected seemingly significant variables in Gichgich and Kibi, Guangua district, northwest Ethiopia (2019).

| Variable | Category | Positive | Negative | COR (95% CI) | p value | AOR (95% CI) | p value |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Sex | Male | 36 (15.9) | 191 (84.1) | 2.04 (1.08, 3.85) | 0.029 | 1.48 (0.574, 3.84) | 0.416 |
| | Female | 15 (8.5) | 162 (91.5) | 1.00 | | 1.00 | |
| Age | 5-9 | 16 (12.2) | 115 (87.8) | 1.61 (0.63, 4.09) | 0.317 | 22.27 (3.70, 134.01) | 0.001* |
| | 10-14 | 28 (15.1) | 157 (84.9) | 2.06 (0.86, 4.93) | 0.103 | 4.58 (1.14, 18.42) | 0.032* |
| | 15-19 | 7 (8.0) | 81 (92.0) | 1.00 | | 1.00 | |
| Grade level | 5-8 | 21 (7.8) | 248 (92.2) | 3.37 (1.85, 6.16) | 0.001 | 14.95 (4.297, 52.03) | 0.001* |
| | 1-4 | 30 (22.2) | 105 (77.8) | 1.00 | | 1.00 | |
| Parent education status | Uneducated | 43 (14.5) | 254 (85.5) | 2.30 (0.95, 4.61) | 0.070 | 2.77 (0.96, 7.96) | 0.059 |
| | Educated | 8 (7.5) | 99 (92.5) | 1.00 | | 1.00 | |
| Washing clothes and utensils in the river | Yes | 45 (14.8) | 259 (85.2) | 2.72 (1.13, 6.59) | 0.026 | 2.50 (0.71, 8.87) | 0.156 |
| | No | 6 (6.0) | 94 (94.0) | 1.00 | | 1.00 | |
| Frequency of swimming in the rivers | Always | 29 (38.2) | 47 (61.8) | 9.26 (3.04, 28.17) | 0.001 | 11.35 (2.33, 55.33) | 0.003* |
| | Sometimes | 18 (6.8) | 246 (93.2) | 1.10 (0.36, 3.36) | 0.870 | 1.48 (0.33, 6.70) | 0.613 |
| | Not at all | 4 (6.3) | 60 (93.8) | 1.00 | | 1.00 | |
| Crossing the water bodies on the respondents' way to and from school | Yes | 37 (22.6) | 127 (77.4) | 4.70 (2.45, 9.03) | 0.001 | 4.17 (1.70, 10.21) | 0.002* |
| | No | 14 (5.8) | 226 (94.2) | 1.00 | | 1.00 | |
| Irrigation practices | Yes | 43 (22.8) | 146 (77.2) | 7.62 (3.48, 16.68) | 0.001 | 7.10 (2.31, 21.80) | 0.001* |
| | No | 8 (3.7) | 207 (96.3) | 1.00 | | 1.00 | |
| Presence of dams in their locality | Yes | 28 (17.4) | 133 (82.6) | 2.01 (1.11, 3.64) | 0.020 | 0.67 (0.25, 1.77) | 0.417 |
| | No | 23 (9.5) | 220 (90.5) | 1.00 | | 1.00 | |
| Distance between their homes and water bodies | Near (<1 km) | 37 (18.6) | 162 (81.4) | 3.12 (1.63, 5.97) | 0.001 | 1.67 (0.63, 4.44) | 0.303 |
| | Far (≥1 km) | 14 (6.8) | 191 (93.2) | 1.00 | | 1.00 | |
| Shoe wearing habits | Yes | 13 (7.6) | 158 (92.4) | 0.42 (0.22, 0.82) | 0.011 | 0.41 (0.16, 1.80) | 0.076 |
| | No | 38 (16.3) | 195 (83.7) | 1.00 | | 1.00 | |
| Presence of latrine in their homes | Yes | 9 (5.7) | 150 (94.3) | 0.29 (0.14, 0.61) | 0.001 | 0.33 (0.06, 1.84) | 0.204 |
| | No | 42 (17.1) | 203 (82.9) | 1.00 | | 1.00 | |
| Open defecation | Yes | 43 (17.2) | 207 (82.8) | 3.79 (1.73, 8.30) | 0.001 | 2.39 (0.41, 13.90) | 0.333 |
| | No | 8 (5.2) | 146 (94.8) | 1.00 | | 1.00 | |
| Knowledge about S. mansoni infection and its transmission method | Yes | 10 (6.6) | 141 (93.4) | 0.37 (0.18, 0.76) | 0.007 | 0.70 (0.26, 1.90) | 0.485 |
| | No | 41 (16.2) | 212 (83.8) | 1.00 | | 1.00 | |
| Knowledge about the burden of S. mansoni infection in their area | Yes | 9 (5.9) | 144 (94.1) | 0.31 (0.15, 0.66) | 0.002 | 0.391 (0.14, 1.06) | 0.064 |
| | No | 42 (16.7) | 209 (83.3) | 1.00 | | 1.00 | |
| Knowledge about S. mansoni infection prevention | Yes | 14 (8.8) | 146 (91.3) | 0.54 (0.28, 1.03) | 0.061 | 0.46 (0.18, 1.18) | 0.106 |
| | No | 37 (15.2) | 207 (84.8) | 1.00 | | 1.00 | |

COR = crude odds ratio, significant at p ≤ 0.25; AOR* = adjusted odds ratio, significant at p ≤ 0.05.

In this study, the likelihood of S. mansoni infection among participants in the 5-9 years age group was significant and about 22 times higher (AOR=22.27, 95% CI 3.70-134.01, p=0.001). The odds of S. mansoni infection were also significantly higher, about four times, in the age group 10-14 years (AOR=4.58, 95% CI 1.14-18.42, p=0.032). Regarding grade level, study participants in grades 5-8 were at about fifteen times higher risk of S. mansoni infection than those enrolled in grade levels 1-4, and this was statistically significant (AOR=14.95, 95% CI 4.297-52.03, p=0.001) (Table 3). The odds of S. mansoni infection were significantly higher, about eleven times, among individuals who always swim in the river compared with those who never swim in the river (AOR=11.35, 95% CI 2.33-55.33, p=0.003). Subjects who practice irrigation had about seven times higher odds of S. mansoni infection (AOR=7.10, 95% CI 2.31-21.80, p=0.001) (Table 3).
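As an illustration of how the crude estimates in Table 3 follow from the counts in Table 2, the sketch below recomputes the crude odds ratio and chi-square statistic for sex from the published 2×2 counts (36/191 infected/uninfected males versus 15/162 females); it is a verification aid only, not part of the original analysis.

```r
# 2x2 table of S. mansoni status by sex, taken from Table 2
tab <- matrix(c(36, 191,
                15, 162),
              nrow = 2, byrow = TRUE,
              dimnames = list(sex = c("Male", "Female"),
                              status = c("Positive", "Negative")))

# Crude odds ratio: (36 x 162) / (191 x 15) ~ 2.04, matching the COR in Table 3
crude_or <- (tab["Male", "Positive"] * tab["Female", "Negative"]) /
            (tab["Male", "Negative"] * tab["Female", "Positive"])
crude_or

# Pearson chi-square without continuity correction ~ 4.92, matching Table 2
chisq.test(tab, correct = FALSE)$statistic
```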
## 4. Discussion The overall prevalence of S. mansoni among study participants in the current study was 12.6%. It is comparable with study findings in an endemic area of the Niger River basin (12.5%) [17] and Agaie, Niger State (10.17%) [18] of Nigeria, and in Dembia (15.4%) [19] and Wondo district (11.4%) [20] of Ethiopia. However, it was higher than the prevalence among preschool children in Gondar town (5.9%) [21], among school children in Côte d'Ivoire (6.1%) [22], and in the White Nile River Basin of Sudan (5.9%) [23]. The higher prevalence of S. mansoni in the current study could be due to the existence of the Dura River, which the local community uses for washing clothes, bathing, and fetching water for domestic purposes. The river may serve as a potential source of S. mansoni infection. Moreover, the weather in the area is relatively warm and humid, which favors the survival and reproduction of the snail. Another possible cause might be differences in children's water contact behavior and in the level of awareness about the prevention and control of S. mansoni infection. The prevalence in the current study was lower than the overall prevalence of S. mansoni infection among school children in districts of north Gondar (Chuahit, Sanja, Debark, and Maksegnit) (33.5%) [24], near rivers in Jimma town (28.7%) [25], in a rural area of Bahir Dar (24.9%) [26], in Jimma Zone (27.6%) [27], in southern Ethiopia (25.8%) [28], and in Mekelle city (23.9%) [12]. The prevalence in this study was also much lower than the findings in the Sanja area, Amhara region (82.8%) [29], and Damot Woide district, Wolaita Zone (81.3%) [14]. The variations in prevalence may be due to factors such as differences in water contact habits, toilet utilization, and the ecological distribution of snails in the study area (the presence of a fast-running river (Dura) in the current study may lead to low availability of vector snails, since snails mainly prefer stagnant or slow-moving water bodies). The snail (Biomphalaria pfeifferi) responsible for the transmission of S. mansoni infection is more prevalent in areas 2000 m above sea level [30]. The variations in prevalence in this study might also be due to factors related to the characteristics of the intermediate snail hosts. In the current study, polyparasitism was observed in addition to S. mansoni infection. Even where sanitation is practiced, it is not by itself sufficient to prevent Schistosoma infection [31].
On the other hand, S. mansoni and other parasitic infections would cease to be a public health problem if there were appropriate improvements in hygiene and sanitation standards [32]. The presence of adequate sanitation does not necessarily guarantee its use. The bulk of Schistosoma eggs reach the water directly, usually via children during bathing and swimming. The use of adequate sanitation systems for urine and faeces has reduced schistosomiasis within short periods of time, while it takes longer for other helminths such as Ascaris lumbricoides and Trichuris trichiura [31]. Although studies in two settings of Côte d'Ivoire [22] and in Kenya [33] reported similar prevalence rates for boys and girls, many studies have shown contrasting findings in the prevalence of S. mansoni infection by sex [32–34]. The finding of the current study also showed a higher prevalence of S. mansoni infection in males than in females. It is in agreement with studies done in Wolaita Zone of Ethiopia [14] and with findings in Niger [35] and central Sudan [36]. Male children may have greater exposure to cercariae-contaminated water bodies than females while helping their families in outdoor activities such as herding cattle. Unlike a study in the Sanja area of Amhara, Ethiopia [29], which reported a similar prevalence of S. mansoni across the age groups of school children, the current study found that students aged 14 years and younger were at a higher risk of acquiring S. mansoni infection than those aged 15-19 years. This was in line with previous studies done in Ethiopia [29, 36]. It might be due to their tendency to play outdoors, which increases the probability of contact with cercariae-contaminated water bodies. In the current study, the prevalence of S. mansoni infection was significantly associated with the frequency of swimming in the river. The odds of S. mansoni infection were eleven times higher in individuals who always swim in the river compared with those who never swim, in agreement with studies done in Hawassa and Gorgora of Ethiopia [33, 37]. The current study also revealed that water contact through irrigation practice and crossing the water bodies on the way to school was associated with a high prevalence of S. mansoni infection; this was in line with studies in Wolaita Zone, southern Ethiopia [14], Gondar town [21], and southern and central Côte d'Ivoire [38], which reported that water contact at stream crossings and herding cattle near streams increase the risk of S. mansoni infection. This study also showed that working in an irrigated field is significantly associated with S. mansoni infection, in agreement with the findings of studies at the Dudicha and Shesha Kekel localities [39], among different water source users in the Tigray region of Ethiopia [40], and in southeastern Brazil [41]. Distance from the water bodies was also associated with S. mansoni infection: children who live near the water bodies were more often infected with S. mansoni than those who live far away, in agreement with previous findings from Côte d'Ivoire [42]. This study also showed that the prevalence of S. mansoni infection was associated with shoe wearing habits, in line with the findings of a study in Jiga town [43]. This could be due to the agriculture-based economy of the community; living close to water bodies and habitual barefoot water contact increase the chance of contact with water containing S. mansoni cercariae.
## 5. Conclusion and Recommendation S. mansoni infection was a major problem among school children in the current study. Students aged fourteen years and younger, students who frequently swim in the river, those engaged in irrigation practice, and those who cross the water bodies were at a higher risk of S. mansoni infection. Hence, it is recommended to focus on raising the awareness of the school children about the prevention and control measures for S. mansoni in the locality. In addition, the bodies responsible for irrigation practices should work on the regular cleaning of water canals, which favor the reproduction of the snail. --- *Source: 1005637-2022-04-16.xml*
2022
# Identification of Specific Cell Subpopulations and Marker Genes in Ovarian Cancer Using Single-Cell RNA Sequencing **Authors:** Yan Li; Juan Wang; Fang Wang; Chengzhen Gao; Yuanyuan Cao; Jianhua Wang **Journal:** BioMed Research International (2021) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2021/1005793 --- ## Abstract Objective. Ovarian cancer is the deadliest gynaecological cancer globally. In our study, we aimed to analyze specific cell subpopulations and marker genes among ovarian cancer cells by single-cell RNA sequencing (RNA-seq). Methods. Single-cell RNA-seq data of 66 high-grade serous ovarian cancer cells were employed from the Gene Expression Omnibus (GEO). Using the Seurat package, we performed quality control to remove cells with low quality. After normalization, we detected highly variable genes across the single cells. Then, principal component analysis (PCA) and cell clustering were performed. The marker genes in different cell clusters were detected. A total of 568 ovarian cancer samples and 8 normal ovarian samples were obtained from The Cancer Genome Atlas (TCGA) database. Differentially expressed genes were identified according to ∣log2foldchangeFC∣>1 and adjusted p value <0.05. To explore potential biological processes and pathways, functional enrichment analyses were performed. Furthermore, survival analyses of differentially expressed marker genes were performed. Results. After normalization, 6000 highly variable genes were identified across the single cells. The cells were divided into 3 cell populations, including G1, G2M, and S cell cycles. A total of 1,124 differentially expressed genes were identified in ovarian cancer samples. These differentially expressed genes were enriched in several pathways associated with cancer, such as metabolic pathways, pathways in cancer, and PI3K-Akt signaling pathway. Furthermore, marker genes, STAT1, ANP32E, GPRC5A, and EGFL6, were highly expressed in ovarian cancer, while PMP22, FBXO21, and CYB5R3 were lowly expressed in ovarian cancer. These marker genes were positively associated with prognosis of ovarian cancer. Conclusion. Our findings revealed specific cell subpopulations and marker genes in ovarian cancer using single-cell RNA-seq, which provided a novel insight into the heterogeneity of ovarian cancer. --- ## Body ## 1. Introduction Ovarian cancer is one of the most common gynaecological cancers in the world, with high heterogeneity and poor prognosis [1]. High-grade serous ovarian cancer is the deadliest subtype of ovarian cancer, with up to 80% of patients recurring after initial treatment [2]. Despite advances in treatments such as surgery and chemotherapy, the 5-year survival rate of patients with advanced ovarian cancer remains around 30%-40% [3, 4]. Since ovarian cancer patients are usually diagnosed at an advanced stage, genetic risk prediction and prevention strategies will be an important way to reduce ovarian cancer mortality [5]. Targeted therapies significantly improve the therapeutic effects of patients with ovarian cancer [6]. However, ovarian cancer shows heterogeneity within the tumor that may affect the therapeutic outcomes of targeted therapies. Tumors including ovarian cancer usually consist of heterogeneous cells that are different in many biological features, like morphology, apoptosis, and invasion [7]. 
However, bulk RNA-seq data reflect the average expression levels of different cells and do not reveal the intrinsic expression differences between cell subpopulations. The genetic heterogeneity of ovarian cancer has been confirmed at single-cell resolution. The heterogeneity of gene expression levels greatly affects patients' clinical outcomes [8]. Therefore, understanding the heterogeneity of tumors at the transcriptome level and precisely characterizing gene expression in tumors may help to identify better therapeutic molecular targets [9]. The characterization of heterogeneous tumor features will help to develop more effective molecular targeted therapeutics. The basic unit of cancer is the single cell, shaped by its genetics and epigenetics, and single-cell behavior determines many aspects of cancer biology. Thus, single-cell analysis provides the ultimate resolution for understanding the biology of various diseases [10]. Single-cell RNA-seq has become a promising approach for revealing the clonal genotype and population structure of human cancers. RNA-seq of single cells can be used to analyze the cell types in the tumor microenvironment, tumor heterogeneity, and their clinical significance [11]. Unlike traditional sequencing methods, single-cell sequencing methods provide different types of omics analysis for individual cells, such as genomics and transcriptomics [12]. Among them, single-cell RNA sequencing (scRNA-seq) is capable of measuring gene expression at the single-cell level. Based on classical markers, scRNA-seq reveals the heterogeneity of gene expression in individual cells or cells of the same type [13], rather than simply examining differential expression between two cell populations. In this study, we analyzed the heterogeneity among ovarian cancer cells and identified marker genes by scRNA-seq. ## 2. Materials and Methods ### 2.1. Ovarian Cancer Single-Cell RNA-seq Datasets Single-cell RNA-seq gene expression data of ovarian cancer were obtained from the Gene Expression Omnibus (GEO; https://www.ncbi.nlm.nih.gov/geo/) database under accession number GSE123476. According to the study of Winterhoff et al., 19 cells were excluded due to poor cell morphology, extremely large or small cell size, or evidence of multiple cells in the well. Meanwhile, 7 cells that did not express at least 1,000 of the highly expressed genes were also removed [14]. As a result, 26 low-quality ovarian cancer cells were removed from the 92 cells. The barcode information and single-cell RNA-seq gene expression matrix were extracted for further analyses [14]. ### 2.2. Quality Control Filtering and Data Normalization The gene expression matrix was imported into the Seurat package in R (version 3.1.0; http://satijalab.org/seurat/). Seurat, a tool for single-cell genomics, is used for quality control, analysis, and exploration of single-cell RNA-seq data [15]. In single-cell RNA-seq data, there can be low-quality cells, usually due to the loss of cytoplasmic RNA when cells are disrupted. Because mitochondria are much larger than individual transcript molecules, their transcripts are not easily lost through a broken cell membrane, so damaged cells show an abnormally high proportion of mitochondrial gene counts in the sequencing results. Thus, quality control was performed to remove cells with low quality. After quality control, fragments per kilobase of transcript per million mapped reads (FPKM) values were transformed into log-space.
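The quality-control step described above can be sketched in Seurat as follows. The object and file names are placeholders, and the mitochondrial-percentage and gene-count thresholds shown are illustrative assumptions; the paper relies on the filtering already applied by Winterhoff et al. rather than on specific cut-offs.

```r
library(Seurat)

# Load the FPKM matrix (placeholder file name) and build a Seurat object
fpkm <- as.matrix(read.table("GSE123476_fpkm_matrix.txt",
                             header = TRUE, row.names = 1))
ovc <- CreateSeuratObject(counts = fpkm, project = "HGSOC_scRNA")

# Fraction of expression coming from mitochondrial genes, used to flag broken cells
ovc[["percent.mt"]] <- PercentageFeatureSet(ovc, pattern = "^MT-")

# Diagnostic plots corresponding to the violin/scatter plots described for Figure 1
VlnPlot(ovc, features = c("nFeature_RNA", "nCount_RNA", "percent.mt"), ncol = 3)

# Example filter (illustrative thresholds only)
ovc <- subset(ovc, subset = nFeature_RNA > 1000 & percent.mt < 20)

# Log-transform the FPKM values, as described in Section 2.2
ovc <- NormalizeData(ovc)
```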
### 2.3. Detection of Highly Variable Genes across the Single Cells To remove differences in scale between genes and make the data comparable, the data were normalized with the log-normalize method using the NormalizeData function of the Seurat package. For each gene, the standardized variance across all cells was calculated using the FindVariableFeatures function, and a standardized variance cut-off of 1 was used to identify highly variable genes. The top 20 highly variable genes were identified. ### 2.4. Cell Clustering Analysis Using Seurat Principal component analysis (PCA) is a multivariate statistical method that examines the correlation between different variables. PCA reveals the internal structure among multiple variables through a few principal components; that is, a few principal components are derived from the original variables such that they retain as much of the information in the original variables as possible while being uncorrelated with each other. In our study, PCA was carried out based on the highly variable genes. Using the screened PCs as input, the cell clustering was visualized with Uniform Manifold Approximation and Projection (UMAP) via the RunUMAP function. ### 2.5. Gene Scoring The CellCycleScoring function of the Seurat package was used to score each cell for the G2M and S cell cycle phases based on gene expression levels. We calculated the average expression value of S phase genes and G2/M phase genes for each cell. All genes were divided into bins based on their average expression levels, and control genes were then randomly selected as background from each bin. The average expression levels of these control genes were calculated and subtracted from the average expression levels of the S phase and G2/M phase genes to obtain S.Score and G2M.Score. Cells with S.Score < 0 and G2M.Score < 0 were assigned to G1 phase; otherwise, the phase with the higher score was assigned. The association between cell cluster and cell cycle distribution was examined by Fisher's test. The top ten differentially expressed genes and the cell cycle scores were plotted separately and visualized as heatmaps. ### 2.6. Detection of Marker Genes and Functional Enrichment Analysis Cluster marker genes with |log2 fold change (FC)| ≥ 0.25, an expression ratio in the cell population ≤ 0.25, and p value ≤ 0.05 were identified using the "FindAllMarkers" function in the Seurat package. An expression heatmap was generated for given cells and genes using the DoHeatmap function. The expression level of markers in each cluster was calculated, and the putative identity of each cell cluster was determined. The top 20 markers were plotted for each cluster. To explore potential biological processes and pathways enriched by the markers in each cluster, functional enrichment analyses were performed using the gProfiler package. ### 2.7. Reconstruction of Differentiation Trajectories Using Monocle Analysis The pseudotime estimation analysis of epithelial cancer cells and stromal cells was performed using the Monocle package. A pseudotime plot that can account for both branched and linear differentiation processes was generated based on the top 2000 highly variable marker genes.
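A condensed sketch of the Seurat workflow named in Sections 2.3-2.6 is shown below, continuing from the `ovc` object above. Parameter values follow the text where they are stated (6000 variable genes, 19 PCs, log-FC cut-off of 0.25); the clustering resolution is a default left as an assumption, and the expression-ratio threshold of 0.25 is mapped to Seurat's `min.pct` argument, which is the usual reading of that cut-off.

```r
# Highly variable genes, scaling, and PCA (Sections 2.3-2.4)
ovc <- FindVariableFeatures(ovc, nfeatures = 6000)
ovc <- ScaleData(ovc)
ovc <- RunPCA(ovc, features = VariableFeatures(ovc))

# Resampling test and elbow plot used to choose the number of PCs
ovc <- JackStraw(ovc, num.replicate = 100)
ovc <- ScoreJackStraw(ovc, dims = 1:20)
ElbowPlot(ovc)

# Graph-based clustering and UMAP embedding with the 19 retained PCs
ovc <- FindNeighbors(ovc, dims = 1:19)
ovc <- FindClusters(ovc)                 # default resolution (assumption)
ovc <- RunUMAP(ovc, dims = 1:19)

# Cell-cycle scoring with Seurat's built-in S and G2/M gene lists (Section 2.5)
ovc <- CellCycleScoring(ovc, s.features = cc.genes$s.genes,
                        g2m.features = cc.genes$g2m.genes)

# Cluster marker genes and heatmap (Section 2.6)
markers <- FindAllMarkers(ovc, logfc.threshold = 0.25, min.pct = 0.25)
DoHeatmap(ovc, features = head(markers$gene, 60))
```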
### 2.8. Differential Expression Analysis and Function Enrichment Analysis Using Ovarian Cancer Datasets A total of 593 ovarian cancer samples were obtained from The Cancer Genome Atlas (TCGA) via UCSC Xena (https://tcga.xenahubs.net), including gene expression profiles and clinical information. Supplementary Table 1 lists the IDs of all samples. After removing 17 relapse ovarian cancer samples, 568 ovarian cancer samples and 8 normal ovarian tissue samples were employed for this study. Differential expression analysis was then performed according to |log2FC| > 1 and adjusted p value < 0.05 using the limma package in R. To explore potential biological processes and pathways, functional enrichment analyses of the upregulated and downregulated genes were performed using the gProfiler package in R, including Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) analyses. The GO terms include biological process (BP), cellular component (CC), and molecular function (MF). Terms with p value < 0.05 were considered significantly enriched. ### 2.9. Overall Survival Analysis Marker genes and differentially expressed genes were overlapped. Overall survival and recurrence-free survival analyses of the differentially expressed marker genes were performed. Kaplan-Meier survival curves and log-rank tests were used to evaluate the associations between ovarian cancer prognosis and the expression of these prognostic genes.
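The bulk TCGA comparison in Section 2.8 and the survival analysis in Section 2.9 can be outlined as below. `expr` (a gene-by-sample log2 expression matrix), `clin` (a clinical table with overall-survival time and status columns), and the column names `OS.time`/`OS` are assumed placeholders; STAT1 and the median expression split are used only to illustrate the Kaplan-Meier/log-rank step, not to restate the authors' exact procedure.

```r
library(limma)
library(survival)

# Differential expression between 568 tumours and 8 normal ovarian samples (Section 2.8)
group  <- factor(c(rep("tumor", 568), rep("normal", 8)), levels = c("normal", "tumor"))
design <- model.matrix(~ group)
fit    <- eBayes(lmFit(expr, design))
deg    <- topTable(fit, coef = "grouptumor", number = Inf)
deg    <- deg[abs(deg$logFC) > 1 & deg$adj.P.Val < 0.05, ]   # thresholds from the text

# Kaplan-Meier curve and log-rank test for one differentially expressed marker gene,
# dichotomised at its median expression (an assumption, not stated in the paper)
clin$high_expr <- expr["STAT1", rownames(clin)] > median(expr["STAT1", rownames(clin)])
km_fit <- survfit(Surv(OS.time, OS) ~ high_expr, data = clin)
plot(km_fit, xlab = "Time", ylab = "Overall survival")
survdiff(Surv(OS.time, OS) ~ high_expr, data = clin)          # log-rank test
```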
## 3. Results ### 3.1. Identification of Three Cell Subpopulations across Ovarian Cells Based on Single-Cell RNA-seq In total, 66 ovarian cancer cells were included in this study. Because the dataset and the number of cells were relatively small, all cells were retained without further filtering (Figures 1(a)–1(e)). We then detected 6,000 highly variable genes across the single cells after calculating the mean and the variance-to-mean ratio of each gene. The top 20 highly variable genes, such as LUM, COL3A1, and SPARC, are shown in Figure 2. Figure 1 Quality control filtering to remove cells with low quality. (a) Violin plots showing the counts of genes in each cell. (b) Violin plots of the sum of the expression levels of all genes in each cell. (c) Violin plots of the percentage of mitochondrial genes. (d) Scatter plots of the percentage of mitochondrial genes against the sum of the expression levels of all genes in each cell. (e) Scatter plots of the counts of genes against the sum of the expression levels of all genes in each cell. Figure 2 Detection of highly variable genes across the cells. The x axis represents the average expression, and the y axis represents the standardized variance. To overcome the technical noise in any single feature of scRNA-seq data, the Seurat package was used to cluster cells according to their PCA scores, where each PC represented a "meta-feature" (Figures 3(a) and 3(b)). The JackStraw function was used as a resampling test: a subset of the data (1% by default) was randomly permuted, PCA was rerun to construct a null ("empty") distribution of feature scores, and the process was repeated (Figure 3(c)); "important" PCs were identified as those with low p values. Furthermore, the PCs were ranked by their standard deviation using the ElbowPlot function (Figure 3(d)). Because there was no obvious elbow point, we selected 19 PCs for downstream analysis. After cluster analysis, the cells were divided into 3 populations across the ovarian cancer cells (Figure 3(e)). The numbers of cells in clusters 0, 1, and 2 were 24, 22, and 20, respectively. Supplementary Table 2 lists the cluster assignment of each cell. Figure 3 Individual principal component analysis. (a) The top 30 genes in PC1 and PC2. (b) The correlation between PC1 and PC2. (c) The p values of the PCs. (d) The PCs sorted by standard deviation. (e) UMAP plot showing the three cell clusters.
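The dimensionality reduction and clustering just described correspond closely to the standard Seurat v3 workflow, sketched below under the assumption of a Seurat object `seu` built from the log-transformed FPKM matrix; the clustering resolution is an illustrative guess, while the 19 retained PCs follow the text.

```r
library(Seurat)

# Assumed input: `seu`, a Seurat object containing the 66 log-transformed FPKM profiles.
seu <- FindVariableFeatures(seu, selection.method = "vst")
seu <- ScaleData(seu)
seu <- RunPCA(seu, features = VariableFeatures(seu))

# Resampling test and elbow plot used to judge how many PCs to keep (Figures 3(c) and 3(d))
seu <- JackStraw(seu, num.replicate = 100)
seu <- ScoreJackStraw(seu, dims = 1:20)
ElbowPlot(seu)

# 19 PCs were retained in the paper; the clustering resolution here is an assumption
seu <- FindNeighbors(seu, dims = 1:19)
seu <- FindClusters(seu, resolution = 0.8)
seu <- RunUMAP(seu, dims = 1:19)
DimPlot(seu, reduction = "umap", label = TRUE)   # the three clusters (0, 1, 2) of Figure 3(e)
```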
### 3.2. Analysis of Marker Genes in the Three Cell Subpopulations The top 20 marker genes in the three cell subpopulations are shown in the heatmap in Figure 4(a). We used Seurat to score each cell for S and G2M phase gene expression and to assign a cell cycle phase (G1, G2M, or S); Figure 4(b) shows the cell counts in each phase. By Fisher’s exact test, the distribution of cell cycle phases did not differ significantly among the three cell subpopulations (p = 0.2834). The cell cycle heatmap shows the top ten differentially expressed genes and the cell cycle scores in each cell subpopulation (Figure 4(c)). To explore potential biological processes and pathways, GO and KEGG enrichment analyses were performed (Figure 5). Genes in cluster 0 (Figures 5(a)–5(d)) and cluster 1 (Figures 5(e)–5(h)) were mainly enriched in metabolic processes and pathways, whereas genes in cluster 2 were primarily involved in cancer-related pathways such as the PI3K-Akt pathway and pathways in cancer (Figures 5(i)–5(l)). Overall, the marker genes of the different cell subpopulations were enriched in distinct biological processes and pathways, such as metabolic pathways, pathways in cancer, and the mTOR signaling pathway. Figure 4 Cell cycle analyses. (a) The top 20 marker genes in the three cell subpopulations. (b) Cell cycle phase. (c) Cell cycle heatmap. Figure 5 Function enrichment analyses. (a–d) The enriched KEGG pathways and GO terms, including BP, CC, and MF, in cluster 0. (e–h) The enriched KEGG pathways and GO terms, including BP, CC, and MF, in cluster 1. (i–l) The enriched KEGG pathways and GO terms, including BP, CC, and MF, in cluster 2.
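A compact way to reproduce the phase assignment and the cluster-versus-phase comparison with Seurat's built-in cell cycle gene lists is sketched below; interpreting the reported p = 0.2834 as the output of `fisher.test()` is our reading of the methods, and `seu` is the same assumed object as in the previous sketch.

```r
library(Seurat)

# Seurat ships S and G2M phase gene lists; cells scoring low for both are called G1
seu <- CellCycleScoring(seu,
                        s.features   = cc.genes$s.genes,
                        g2m.features = cc.genes$g2m.genes,
                        set.ident    = FALSE)

# Contingency table of cluster membership versus assigned phase,
# tested with Fisher's exact test (the paper reports p = 0.2834)
phase_by_cluster <- table(Idents(seu), seu$Phase)
fisher.test(phase_by_cluster)

# Heatmap of example genes annotated by phase; the paper plots the top ten
# differentially expressed genes per subpopulation (cf. Figure 4(c))
DoHeatmap(seu, features = head(VariableFeatures(seu), 10), group.by = "Phase")
```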
### 3.3. Reconstruction of Differentiation Trajectories Using the Monocle Package Cell fate decisions and differentiation trajectories were reconstructed with the Monocle package. The pseudotime estimation analysis of epithelial cancer cells and stromal cells was performed based on the top 2,000 highly variable marker genes (Figures 6(a) and 6(b)). Figure 6 Reconstruction of differentiation trajectories in ovarian cancer. (a, b) The trajectory plots in pseudotime of epithelial cancer cells and stromal cells using Monocle analysis. Different colors represent different cell states. ### 3.4. Identification of Differentially Expressed Genes Using TCGA Ovarian Cancer Datasets A total of 1,124 differentially expressed genes with |log2FC| > 1 and adjusted p value < 0.05 were identified between the 568 ovarian cancer samples and 8 normal samples (Figures 7(a) and 7(b)). GO enrichment analysis showed that upregulated genes were primarily enriched in intracellular membrane-bounded organelle, nucleus, nuclear lumen, cytosol, nucleoplasm, cellular nitrogen compound metabolic process, heterocycle metabolic process, cellular aromatic compound metabolic process, and protein metabolic process (Figure 7(c)). Meanwhile, upregulated genes were involved in cell cycle, Herpes simplex virus 1 infection, human papillomavirus infection, human T cell leukemia virus 1 infection, and the PI3K-Akt signaling pathway (Figure 7(d)). Downregulated genes primarily participated in multicellular organism development, plasma membrane, cytosol, vesicle, animal organ development, extracellular exosome, extracellular vesicle, positive regulation of cellular metabolic process, cellular response to organic substance, and positive regulation of nitrogen compound metabolic process (Figure 7(e)). In Figure 7(f), downregulated genes were mainly enriched in the MAPK, metabolic, pathways in cancer, PI3K-Akt, and Ras signaling pathways. Figure 7 Differentially expressed genes of ovarian cancer. (a, b) Volcano plot and heatmap showing the differentially expressed genes with |log2FC| > 1 and adjusted p value < 0.05 between ovarian cancer and normal tissues, respectively. (c, d) GO and KEGG enrichment analysis results of upregulated genes. (e, f) GO and KEGG enrichment analysis results of downregulated genes. ### 3.5. Identification of Differentially Expressed Marker Genes Associated with Prognosis of Ovarian Cancer All marker genes were overlapped with the 1,124 differentially expressed genes in the TCGA samples, and survival analysis was used to identify prognosis-related differentially expressed marker genes. The results showed that the marker genes STAT1, ANP32E, GPRC5A, and EGFL6 were highly expressed in ovarian cancer (Figures 8(a)–8(d)), whereas the marker genes PMP22, FBXO21, and CYB5R3 were expressed at low levels in ovarian cancer (Figures 8(e)–8(g)). The high expression of ANP32E (p=0.031, HR: 0.79 (0.64-0.98)), STAT1 (p=0.005, HR: 0.74 (0.59-0.91)), GPRC5A (p=0.03, HR: 1.27 (1.02-1.57)), EGFL6 (p=0.018, HR: 0.77 (0.62-0.96)), and PMP22 (p=0.043, HR: 1.25 (1.01-1.54)) was significantly associated with better overall survival time than their low expression (Figures 9(a)–9(e)). The high expression of FBXO21 (p=0.027, HR: 0.57 (0.35-0.94)), ANP32E (p=0.007, HR: 0.51 (0.31-0.84)), and CYB5R3 (p=0.015, HR: 1.86 (1.12-3.08)) indicated better recurrence-free survival time compared with their low expression (Figures 9(f)–9(h)). Furthermore, STAT1 had its highest expression in stage II among all stages (Figure 10(a)), and PMP22 had its highest expression in stage III (Figure 10(b)). Figure 8 The differential expression of marker genes associated with prognosis of ovarian cancer. (a) STAT1; (b) ANP32E; (c) GPRC5A; (d) EGFL6; (e) PMP22; (f) FBXO21; (g) CYB5R3. Figure 9 The survival analysis of differentially expressed marker genes in ovarian cancer. (a–e) The overall survival analysis results of ANP32E, STAT1, GPRC5A, EGFL6, and PMP22. (f–h) The recurrence-free survival analysis results of FBXO21, ANP32E, and CYB5R3. Figure 10 The differential expression of STAT1 and PMP22 across different stages in ovarian cancer. (a) STAT1; (b) PMP22.
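To make the prognostic screen concrete, the snippet below sketches how the cluster markers (e.g., the output of Seurat's `FindAllMarkers()`) could be intersected with the TCGA differentially expressed genes, and how a stage-wise comparison of a single gene such as STAT1 might be run. The object names (`cluster_markers`, `deg`, `tcga_expr`, `clinical`) carry over from the earlier sketches, the stage column name is an assumption, and the Kruskal-Wallis test is used purely for illustration; the manuscript does not state which test underlies Figure 10.

```r
# Overlap the single-cell cluster markers with the 1,124 TCGA differentially expressed genes
prognostic_candidates <- intersect(unique(cluster_markers$gene), rownames(deg))

# Illustrative stage-wise comparison for one candidate gene (STAT1)
stat1_df <- data.frame(expr  = as.numeric(tcga_expr["STAT1", ]),
                       stage = clinical$figo_stage)          # stage column name assumed
kruskal.test(expr ~ stage, data = stat1_df)                   # nonparametric test across stages
boxplot(expr ~ stage, data = stat1_df,
        ylab = "STAT1 expression (log2)", xlab = "FIGO stage")
```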
## 4. Discussion The treatment of ovarian cancer is complicated by the heterogeneity of the tumor.
Different histological types of epithelial ovarian cancer have different cell origins, mutation profiles, and prognoses [16, 17]. Even within a single histological type, molecular subtypes with different prognoses can be found. To address these problems, it is necessary to better characterize the heterogeneity of ovarian cancer cells, to find reliable biomarkers, and to develop appropriate targeted therapies. Single-cell RNA sequencing technology can explore intercellular heterogeneity at the single-cell level and reconstruct lineage hierarchies. This method allows an unbiased analysis of the heterogeneity within a population of cells because it reconstructs the transcriptome of each individual cell. Our reanalysis of the ovarian cancer single-cell transcriptome may therefore provide deeper insight into the heterogeneity spectrum of ovarian cancer cells. In total, 66 ovarian cancer cells were included in our study, and quality control was performed with the Seurat package to remove cells of low quality. Proliferation induced by abnormal regulation of the cell cycle is thought to be critical for ovarian cancer progression, and the G1/S transition is the most critical rate-limiting step in cell cycle progression. Some studies have shown that the expression of cell cycle-related genes is significantly associated with poor prognosis in patients with ovarian cancer. Therefore, we examined molecules involved in cell cycle progression to discover new prognostic factors and therapeutic targets. In this study, the 66 ovarian cancer cells were clustered into three subpopulations, and each cell was additionally assigned a cell cycle phase (G1, G2M, or S). The marker genes in each cluster were identified, and KEGG and GO enrichment analyses showed that the marker genes of each cluster were enriched in different biological processes and pathways. Using the ovarian cancer dataset from TCGA, a total of 1,124 differentially expressed genes with |log2FC| > 1 and adjusted p value < 0.05 were identified between 568 ovarian cancer tissues and 8 normal tissues. These differentially expressed genes were mainly enriched in metabolic pathways, pathways in cancer, the PI3K-Akt signaling pathway, and related terms. For example, most ovarian cancer cells are highly proliferative and are therefore highly dependent on glucose metabolism through aerobic glycolysis, the Warburg effect [18, 19]. The PI3K-Akt signaling pathway, which participates in tumor cell proliferation, survival, metabolism, and angiogenesis, is deregulated in various malignancies including ovarian cancer [20, 21]. Intercellular heterogeneity is one of the major drivers of cancer progression [22], and gene variation at the single-cell level can rapidly produce cancer heterogeneity [23]. Prognosis-related differentially expressed marker genes were identified: the expression levels of STAT1, ANP32E, GPRC5A, and EGFL6 were all significantly higher in ovarian cancer tissues than in normal tissues, whereas the expression of PMP22, FBXO21, and CYB5R3 was significantly lower. The expression of ANP32E, STAT1, GPRC5A, EGFL6, and PMP22 was significantly associated with overall survival time in ovarian cancer, and the expression of FBXO21, ANP32E, and CYB5R3 was significantly associated with recurrence-free survival time.
STAT1, a member of the STAT family, has been confirmed to be highly expressed in ovarian cancer [24, 25]. High expression of ANP32E has been associated with better prognosis and contributes to the proliferation and tumorigenesis of triple-negative breast cancer cells [26, 27]. GPRC5A variants may drive the self-renewal of bladder cancer stem cells according to single-cell RNA-seq analysis [28]. EGFL6, a stem cell regulator expressed in ovarian tumor cells and vasculature, may promote the growth and metastasis of ovarian cancer [29, 30], and a previous microarray study found that EGFL6 is upregulated in drug-resistant ovarian cancer cell lines [31]. The expression and function of PMP22 in tumors remain unclear: some studies have suggested that PMP22 is a potential tumor suppressor, whereas others have indicated that it has a potential carcinogenic function [32–35], and its role in the regulation of ovarian cancer has not been reported. Furthermore, there is no report concerning the expression and role of FBXO21 and CYB5R3 in ovarian cancer. Collectively, our study identified specific cell subpopulations and marker genes in ovarian cancer. ## 5. Conclusion In our study, we analyzed the intercellular heterogeneity of ovarian cancer using single-cell RNA sequencing and identified marker genes in each cluster. By combining these findings with the TCGA ovarian cancer dataset, we identified differentially expressed marker genes significantly associated with the prognosis of ovarian cancer, including ANP32E, STAT1, GPRC5A, EGFL6, PMP22, FBXO21, and CYB5R3. --- *Source: 1005793-2021-10-07.xml*
Overall Survival Analysis Marker genes and differentially expressed genes were overlapped. Overall survival and recurrence-free survival analyses of differentially expressed marker genes were performed. Kaplan-Meier survival curves and log-rank tests were performed to evaluate the associations between ovarian cancer prognosis and the expression of these prognostic genes. ## 3. Results ### 3.1. Identification of Three Cell Subpopulations across Ovarian Cells Based on Single-Cell RNA-seq In total, 66 ovarian cancer cells were included in this study. Considering that the amount of data and the number of cells was relatively small, we used all the cells without filtering (Figures1(a)–1(e)). Then, we detected 6000 highly variable genes across the single cells after calculating the mean and the variance to mean ratio of each gene. The top 20 highly variable genes such as LUM, COL3A1, and SPARC are shown in Figure 2.Figure 1 Quality control filtering to remove cells with low quality. (a) Violin plots showing the counts of genes in each cell. (b) Violin plots of the sum of the expression levels of all genes in each cell. (c) Violin plots of the percentage of mitochondrial genes. (d) Scatter plots for the percentage of mitochondrial genes in the sum of the expression levels of all genes in each cell. (e) Scatter plots for the counts of genes in the sum of the expression levels of all genes in each cell. (a)(b)(c)(d)(e)Figure 2 Detection of highly variable genes across the cells.x axis represents the average expression, and y axis represents standardized variance.To overcome the various technical noise in any single feature of scRNA-seq data, the Seurat package was used to cluster cells according to their PCA scores, where each PC represented a “meta-feature” (Figures3(a) and 3(b)). JackStraw function was used to resample the test. We randomly replaced a subset of the data (default was 1%) and rerun PCA to construct an “empty distribution” of feature scores and repeated the process (Figure 3(c)). We identified “important” PCs with low p values. Furthermore, the PCs were sorted based on the standard deviation using ElbowPlot function (Figure 3(d)). Because there was no obvious elbow point, we selected 19 PCs for downstream analysis. After cluster analysis, we divided the cells into 3 cell populations across ovarian cancer cells (Figure 3(e)). The number of cells in clusters 0, 1, and 2 was 24, 22, and 20. Supplementary table 2 listed which cells were in which cluster.Figure 3 Individual principal component analysis. (a) The top 30 genes in PC1 and PC2. (b) The correlation between PC1 and PC2. (c) Thep value of PCs. (d) The PCs were sorted based on the standard deviation. (e) UMAP plots showing the three cell clusters. (a)(b)(c)(d)(e) ### 3.2. Analysis of Marker Genes in the Three Cell Subpopulations The top 20 marker genes in the three cell subpopulations are listed in heatmap (Figure4(a)). We used the Seurat tool to score the marker genes in the G1, G2M, and S cell cycles. Figure 4(b) shows the cell counts in the G1, G2M, and S cell cycles. By Fisher’s test, there was no significant difference between the three cell subpopulations and cells in each cell cycle (p value = 0.2834). Cell cycle heatmap shows the top ten differentially expressed genes and cell cycle scores in each cell subpopulation (Figure 4(c)). To explore potential biological processes and pathways, GO and KEGG enrichment analyses were performed (Figure 5). 
Genes in cluster 1 (Figures 5(a)–5(d)) and cluster 2 (Figures 5(e)–5(h)) were mainly enriched in metabolic processes and pathways. Meanwhile, genes in cluster 2 were primarily involved in cancer-related pathways such as PI3K-Akt pathway and pathways in cancer (Figures 5(i)–5(l)). We found that these marker genes were enriched in different biological processes and pathways in different cell subpopulations such as metabolic pathways, pathways in cancer, and mTOR signaling pathway.Figure 4 Cell cycle analyses. (a) The top 20 marker genes in the three cell subpopulations. (b) Cell cycle phase. (c) Cell cycle heatmap. (a)(b)(c)Figure 5 Function enrichment analyses. (a–d) The enriched KEGG pathways and GO terms including BP, CC, and MF in cluster 0. (e–h) The enriched KEGG pathways and GO terms including BP, CC, and MF in cluster 1. (i–l) The enriched KEGG pathways and GO terms including BP, CC, and MF in cluster 2. (a)(b)(c)(d)(e)(f)(g)(h)(i)(j)(k)(l) ### 3.3. Reconstruction of Differentiation Trajectories Using Monocle Package Cell fate decisions and differentiation trajectories were reconstructed with the Monocle package. The pseudotime estimation analysis of epithelial cancer cells and stromal cells was performed based on the top 2000 highly variable marker genes (Figures6(a) and 6(b)).Figure 6 Reconstruction of differentiation trajectories to ovarian cancer. (a, b) The trajectory plot in pseudotime of epithelial cancer cells and stromal cells using Monocle analysis. Different colors represent different cell states. (a)(b) ### 3.4. Identification of Differentially Expressed Genes Using TCGA Ovarian Cancer Datasets A total of 1,124 differentially expressed genes with∣log2FC∣>1 and adjusted p value <0.05 were identified between 568 ovarian cancer samples and 8 normal samples (Figures 7(a) and 7(b)). GO enrichment analysis results showed that upregulated genes were primarily enriched in intracellular membrane-bounded organelle, nucleus, nuclear lumen, cytosol, nucleoplasm, cellular nitrogen compound metabolic process, heterocycle metabolic process, cellular aromatic compound metabolic process, and protein metabolic process (Figure 7(c)). Meanwhile, upregulated genes were involved in cell cycle, Herpes simplex virus 1 infection, human papillomavirus infection, human T cell leukemia virus 1 infection, and PI3K-Akt signaling pathway (Figure 7(d)). Downregulated genes primarily participated in multicellular organism development, plasma membrane, cytosol, vesicle, animal organ development, extracellular exosome, extracellular vesicle, positive regulation of cellular metabolic process, cellular response to organic substance, and positive regulation of nitrogen compound metabolic process (Figure 7(e)). In Figure 7(f), downregulated genes were mainly enriched in MAPK, metabolic, pathways in cancer, PI3K-Akt, and Ras signaling pathways.Figure 7 Differentially expressed genes of ovarian cancer. (a, b) Volcano plots and heatmap showing the differentially expressed genes with∣log2FC∣>1 and adjusted p value <0.05 between ovarian cancer and normal tissues, respectively. (c, d) GO and KEGG enrichment analysis results of upregulated genes. (e, f) GO and KEGG enrichment analysis results of downregulated genes. (a)(b)(c)(d)(e)(f) ### 3.5. Identification of Differentially Expressed Marker Genes Associated with Prognosis of Ovarian cancer All marker genes were overlapped with 1,124 differentially expressed genes in TCGA samples. 
Survival analysis was used for identifying prognosis-related differentially expressed marker genes. The results showed that marker genes STAT1, ANP32E, GPRC5A, and EGFL6 were highly expressed in ovarian cancer (Figures8(a)–8(d)). Furthermore, marker genes PMP22, FBXO21, and CYB5R3 were lowly expressed in ovarian cancer (Figures 8(e)–8(g)). The high expression of ANP32E (p=0.031, HR: 0.79 (0.64-0.98)), STAT1 (p=0.005, HR: 0.74 (0.59-0.91)), GPRC5A (p=0.03, HR: 1.27 (1.02-1.57)), EGFL6 (p=0.018, HR: 0.77 (0.62-0.96)), and PMP22 (p=0.043, HR: 1.25 (1.01-1.54)) was significantly associated with better overall survival time than their low expression (Figures 9(a)–9(e)). The high expression of FBXO21 (p=0.027, HR: 0.57 (0.35-0.94)), ANP32E (p=0.007, HR: 0.51 (0.31-0.84)), and CYB5R3 (p=0.015, HR: 1.86 (1.12-3.08)) indicated better recurrence-free survival time compared with their low expression (Figures 9(f)–9(h)). Furthermore, we found that STAT1 had the highest expression in stage II among all stages (Figure 10(a)). PMP22 had the highest expression in stage III among all stages (Figure 10(b)).Figure 8 The differential expression of marker genes associated with prognosis of ovarian cancer. (a) STAT1; (b) ANP32E; (c) GPRC5A; (d) EGFL6; (e) PMP22; (f) FBXO21; (g) CYB5R3. (a)(b)(c)(d)(e)(f)(g)Figure 9 The survival analysis of differentially expressed marker genes in ovarian cancer. (a–e) The overall survival analysis results of ANP32E, STAT1, GPRC5A, EGFL6, and PMP22. (f–h) The recurrence-free survival analysis results of FBXO21, ANP32E, and CYB5R3. (a)(b)(c)(d)(e)(f)(g)(h)Figure 10 The differential expression of STAT1 and PMP22 across different stages in ovarian cancer. (a) STAT1; (b) PMP22. (a)(b) ## 3.1. Identification of Three Cell Subpopulations across Ovarian Cells Based on Single-Cell RNA-seq In total, 66 ovarian cancer cells were included in this study. Considering that the amount of data and the number of cells was relatively small, we used all the cells without filtering (Figures1(a)–1(e)). Then, we detected 6000 highly variable genes across the single cells after calculating the mean and the variance to mean ratio of each gene. The top 20 highly variable genes such as LUM, COL3A1, and SPARC are shown in Figure 2.Figure 1 Quality control filtering to remove cells with low quality. (a) Violin plots showing the counts of genes in each cell. (b) Violin plots of the sum of the expression levels of all genes in each cell. (c) Violin plots of the percentage of mitochondrial genes. (d) Scatter plots for the percentage of mitochondrial genes in the sum of the expression levels of all genes in each cell. (e) Scatter plots for the counts of genes in the sum of the expression levels of all genes in each cell. (a)(b)(c)(d)(e)Figure 2 Detection of highly variable genes across the cells.x axis represents the average expression, and y axis represents standardized variance.To overcome the various technical noise in any single feature of scRNA-seq data, the Seurat package was used to cluster cells according to their PCA scores, where each PC represented a “meta-feature” (Figures3(a) and 3(b)). JackStraw function was used to resample the test. We randomly replaced a subset of the data (default was 1%) and rerun PCA to construct an “empty distribution” of feature scores and repeated the process (Figure 3(c)). We identified “important” PCs with low p values. Furthermore, the PCs were sorted based on the standard deviation using ElbowPlot function (Figure 3(d)). 
Because there was no obvious elbow point, we selected 19 PCs for downstream analysis. After cluster analysis, we divided the cells into 3 cell populations across ovarian cancer cells (Figure 3(e)). The number of cells in clusters 0, 1, and 2 was 24, 22, and 20. Supplementary table 2 listed which cells were in which cluster.Figure 3 Individual principal component analysis. (a) The top 30 genes in PC1 and PC2. (b) The correlation between PC1 and PC2. (c) Thep value of PCs. (d) The PCs were sorted based on the standard deviation. (e) UMAP plots showing the three cell clusters. (a)(b)(c)(d)(e) ## 3.2. Analysis of Marker Genes in the Three Cell Subpopulations The top 20 marker genes in the three cell subpopulations are listed in heatmap (Figure4(a)). We used the Seurat tool to score the marker genes in the G1, G2M, and S cell cycles. Figure 4(b) shows the cell counts in the G1, G2M, and S cell cycles. By Fisher’s test, there was no significant difference between the three cell subpopulations and cells in each cell cycle (p value = 0.2834). Cell cycle heatmap shows the top ten differentially expressed genes and cell cycle scores in each cell subpopulation (Figure 4(c)). To explore potential biological processes and pathways, GO and KEGG enrichment analyses were performed (Figure 5). Genes in cluster 1 (Figures 5(a)–5(d)) and cluster 2 (Figures 5(e)–5(h)) were mainly enriched in metabolic processes and pathways. Meanwhile, genes in cluster 2 were primarily involved in cancer-related pathways such as PI3K-Akt pathway and pathways in cancer (Figures 5(i)–5(l)). We found that these marker genes were enriched in different biological processes and pathways in different cell subpopulations such as metabolic pathways, pathways in cancer, and mTOR signaling pathway.Figure 4 Cell cycle analyses. (a) The top 20 marker genes in the three cell subpopulations. (b) Cell cycle phase. (c) Cell cycle heatmap. (a)(b)(c)Figure 5 Function enrichment analyses. (a–d) The enriched KEGG pathways and GO terms including BP, CC, and MF in cluster 0. (e–h) The enriched KEGG pathways and GO terms including BP, CC, and MF in cluster 1. (i–l) The enriched KEGG pathways and GO terms including BP, CC, and MF in cluster 2. (a)(b)(c)(d)(e)(f)(g)(h)(i)(j)(k)(l) ## 3.3. Reconstruction of Differentiation Trajectories Using Monocle Package Cell fate decisions and differentiation trajectories were reconstructed with the Monocle package. The pseudotime estimation analysis of epithelial cancer cells and stromal cells was performed based on the top 2000 highly variable marker genes (Figures6(a) and 6(b)).Figure 6 Reconstruction of differentiation trajectories to ovarian cancer. (a, b) The trajectory plot in pseudotime of epithelial cancer cells and stromal cells using Monocle analysis. Different colors represent different cell states. (a)(b) ## 3.4. Identification of Differentially Expressed Genes Using TCGA Ovarian Cancer Datasets A total of 1,124 differentially expressed genes with∣log2FC∣>1 and adjusted p value <0.05 were identified between 568 ovarian cancer samples and 8 normal samples (Figures 7(a) and 7(b)). GO enrichment analysis results showed that upregulated genes were primarily enriched in intracellular membrane-bounded organelle, nucleus, nuclear lumen, cytosol, nucleoplasm, cellular nitrogen compound metabolic process, heterocycle metabolic process, cellular aromatic compound metabolic process, and protein metabolic process (Figure 7(c)). 
Meanwhile, upregulated genes were involved in cell cycle, Herpes simplex virus 1 infection, human papillomavirus infection, human T cell leukemia virus 1 infection, and PI3K-Akt signaling pathway (Figure 7(d)). Downregulated genes primarily participated in multicellular organism development, plasma membrane, cytosol, vesicle, animal organ development, extracellular exosome, extracellular vesicle, positive regulation of cellular metabolic process, cellular response to organic substance, and positive regulation of nitrogen compound metabolic process (Figure 7(e)). In Figure 7(f), downregulated genes were mainly enriched in MAPK, metabolic, pathways in cancer, PI3K-Akt, and Ras signaling pathways.Figure 7 Differentially expressed genes of ovarian cancer. (a, b) Volcano plots and heatmap showing the differentially expressed genes with∣log2FC∣>1 and adjusted p value <0.05 between ovarian cancer and normal tissues, respectively. (c, d) GO and KEGG enrichment analysis results of upregulated genes. (e, f) GO and KEGG enrichment analysis results of downregulated genes. (a)(b)(c)(d)(e)(f) ## 3.5. Identification of Differentially Expressed Marker Genes Associated with Prognosis of Ovarian cancer All marker genes were overlapped with 1,124 differentially expressed genes in TCGA samples. Survival analysis was used for identifying prognosis-related differentially expressed marker genes. The results showed that marker genes STAT1, ANP32E, GPRC5A, and EGFL6 were highly expressed in ovarian cancer (Figures8(a)–8(d)). Furthermore, marker genes PMP22, FBXO21, and CYB5R3 were lowly expressed in ovarian cancer (Figures 8(e)–8(g)). The high expression of ANP32E (p=0.031, HR: 0.79 (0.64-0.98)), STAT1 (p=0.005, HR: 0.74 (0.59-0.91)), GPRC5A (p=0.03, HR: 1.27 (1.02-1.57)), EGFL6 (p=0.018, HR: 0.77 (0.62-0.96)), and PMP22 (p=0.043, HR: 1.25 (1.01-1.54)) was significantly associated with better overall survival time than their low expression (Figures 9(a)–9(e)). The high expression of FBXO21 (p=0.027, HR: 0.57 (0.35-0.94)), ANP32E (p=0.007, HR: 0.51 (0.31-0.84)), and CYB5R3 (p=0.015, HR: 1.86 (1.12-3.08)) indicated better recurrence-free survival time compared with their low expression (Figures 9(f)–9(h)). Furthermore, we found that STAT1 had the highest expression in stage II among all stages (Figure 10(a)). PMP22 had the highest expression in stage III among all stages (Figure 10(b)).Figure 8 The differential expression of marker genes associated with prognosis of ovarian cancer. (a) STAT1; (b) ANP32E; (c) GPRC5A; (d) EGFL6; (e) PMP22; (f) FBXO21; (g) CYB5R3. (a)(b)(c)(d)(e)(f)(g)Figure 9 The survival analysis of differentially expressed marker genes in ovarian cancer. (a–e) The overall survival analysis results of ANP32E, STAT1, GPRC5A, EGFL6, and PMP22. (f–h) The recurrence-free survival analysis results of FBXO21, ANP32E, and CYB5R3. (a)(b)(c)(d)(e)(f)(g)(h)Figure 10 The differential expression of STAT1 and PMP22 across different stages in ovarian cancer. (a) STAT1; (b) PMP22. (a)(b) ## 4. Discussion The treatment of ovarian cancer is complicated by the heterogeneity of the tumor. Different histological types of epithelial ovarian cancer have different cell origins, different mutation profiles, and different prognosis [16, 17]. Even in a histological type, different molecular subtypes with different prognosis can be found. To solve these problems, it is necessary to better characterize the heterogeneity of these ovarian cancer cells, to find reliable biomarkers, and develop appropriate targeted therapies. 
Single-cell RNA sequencing technology can explore the intercellular heterogeneity at the single-cell level and reconstruct lineage hierarchies. This method allows an unbiased analysis of the heterogeneity profile within a population of cells as it utilizes transcriptome reconstitution from a single cell. Our reanalysis of the ovarian cancer single-cell transcriptome may provide a deeper insight into the heterogeneity spectrum of ovarian cancer cells. In total, 66 ovarian cancer cells were included in our study. To remove cells with low quality, quality control was performed using the Seurat package. Proliferation induced by abnormal regulation of the cell cycle is thought to be critical for ovarian cancer progression. The G1/S phase is the most critical rate-limiting step in cell cycle promotion. Some studies have shown that the expression of cell cycle-related genes is significantly associated with poor prognosis in patients with ovarian cancer. Therefore, we studied molecules involved in cell cycle progression to discover new prognostic factors and therapeutic targets. In this study, the 66 ovarian cancer cells were clustered into three subpopulations and scored for cell cycle phases (G1, G2M, and S). The marker genes in each cluster were identified. To explore potential biological processes and pathways, KEGG and GO enrichment analyses of these marker genes were performed. The results showed that the marker genes in each cluster were enriched in different biological processes and pathways. Using the ovarian cancer dataset from TCGA, a total of 1,124 differentially expressed genes with ∣log2FC∣ > 1 and adjusted p value < 0.05 were identified between 568 ovarian cancer tissues and 8 normal tissues. GO and KEGG enrichment analyses showed that these differentially expressed genes were mainly enriched in metabolic pathways, pathways in cancer, the PI3K-Akt signaling pathway, and the like. For example, most ovarian cancer cells are highly proliferative; therefore, they are highly dependent on glucose metabolism by aerobic glycolysis, the Warburg effect [18, 19]. The PI3K-Akt signaling pathway, which participates in tumor cell proliferation, survival, metabolism, and angiogenesis, is deregulated in various malignant cancers including ovarian cancer [20, 21]. Intercellular heterogeneity is one of the major drivers of cancer progression [22]. Gene variation at the single-cell level can rapidly produce cancer heterogeneity [23]. Prognosis-related differentially expressed marker genes were identified. We found that the expression levels of STAT1, ANP32E, GPRC5A, and EGFL6 were all significantly higher in ovarian cancer tissues compared with normal tissues. Furthermore, PMP22, FBXO21, and CYB5R3 expression was significantly lower in ovarian cancer tissues compared with normal tissues. The low expression of ANP32E, STAT1, GPRC5A, EGFL6, and PMP22 was positively associated with overall survival time of ovarian cancer. The low expression of FBXO21, ANP32E, and CYB5R3 was significantly associated with longer recurrence-free survival time of ovarian cancer. STAT1, a member of the STAT family, has been confirmed to be highly expressed in ovarian cancer [24, 25]. The high expression of ANP32E is associated with better prognosis, contributing to the proliferation and tumorigenesis of triple-negative breast cancer cells [26, 27]. GPRC5A variants may drive self-renewal of bladder cancer stem cells according to single-cell RNA-seq analysis [28].
EGFL6, a stem cell regulator expressed in ovarian tumor cells and vasculature, may induce the growth and metastasis of ovarian cancer [29, 30]. A previous study has found that EGFL6 is upregulated in drug-resistant ovarian cancer cell lines using microarray analysis [31]. The expression and function of PMP22 in tumors remain unclear. Some studies have shown that PMP22 is a potential tumor suppressor, and others have indicated that PMP22 has a potential carcinogenic function in tumors [32–35]. Studies on the role of PMP22 in the regulation of ovarian cancer have not been reported. Furthermore, there is no report concerning the expression and role of FBXO21 and CYB5R3 in ovarian cancer. Collectively, our study identified specific cell subpopulations and marker genes in ovarian cancer. ## 5. Conclusion In our study, we analyzed the intercellular heterogeneity in ovarian cancer using single-cell RNA sequencing and identified marker genes in each cluster. Combining TCGA ovarian cancer dataset, we identified differentially expressed marker genes that were significantly associated with prognosis of ovarian cancer, including ANP32E, STAT1, GPRC5A, EGFL6, PMP22, FBXO21, and CYB5R3. --- *Source: 1005793-2021-10-07.xml*
# Application of Industrial Internet of Things (IIoT) in Crude Oil Production Optimization Using Pump Efficiency Control **Authors:** Ali S. Allahloh; Mohammad Sarfraz; Mejdal Alqahtani; Shafiq Ahmad; Shamsul Huda **Journal:** Wireless Communications and Mobile Computing (2022) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2022/1005813 --- ## Abstract The collapse of oil prices in mid-2014 and early 2016 was the biggest in modern history that witnessed more than a 70% drop in the price to around $40 barrel. It prompted companies to think seriously to maintain profitability. Most companies were able to survive partly by simplifying their operations. Price recovery in 2021 is only 70% of its peak value. Companies are focusing on reducing the cost of operations, increasing production simultaneously, and finding new and different strategies to survive. The current scenario is witnessing strong research focusing on the development of process control for oil and gas upstream and downstream to improve the control and preventive maintenance to reduce operating costs and increase production. This paper presents the Industrial Internet of Things (IIoT) practical solution that improves the oil production rate from the well and increases the average pump efficiency (fillage) to 90%. This paper proposes a mechanism for collecting, storing, and analyzing all required parameters to build valuable charts. These charts’ data help optimize the values and parameters for controller setpoints to prevent pump gas lock problems. An artificial lift is required to lift the oil from the well. In this paper, the sucker rod pump is driven by a gas engine fed by the well’s gas. At the same time, SCADAPack 535E remote terminal unit collects all pump and well parameters such as hydraulic pressure, casing pressure, tubing pressure, and pump speed in stroke per minute (SPM). The remote terminal unit sends the data through a wireless network using a 5 GHz antenna to the main control room. The IIoT platform is designed using a visual basic programming language. Microsoft DDE (Dynamic Data Exchange) and Kepware OPC server were used to work on the received data to monitor, generate charts, and apply the controllers. --- ## Body ## 1. Introduction Intelligent oil and gas production depends on understanding all crude oil well and pump parameters. Ensure the uniform increase of liquid production rate from the crude oil well and avoid problems such as gas lock. Field operators currently record these parameters two or three times a day, giving experts a fuzzy picture. Increasing guesswork due to the dynamic behavior of these parameters, which changes minute after minute, this paper introduces an approach to thoroughly understanding the surface and downhole parameters. Using Industrial Internet of Things technology (IIoT) gives users a clear picture. It reduces the guesswork and optimizes different parameters to maximize the production for the long-term in a stable situation. Over 90% of the wells in the United States are currently being artificially lifted. Beam pumping is the most commonly used method, accounting for over 85% of artificial lift installations. The beam pumping system is mechanically very simple. It consists of a surface unit that transmits the upstroke and downstroke motion to a bottom-hole pump through a sucker rod string (Figure1). While the system is simple, a proper design requires many factors. 
Over the years, formulas have been developed for use in optimum pumping design. However, a good design still depends on experience as a key factor. Figure 1 Sucker rod pump. Energy from the crude oil pumping unit is transmitted to the bottom-hole pump through a sucker rod string. Sucker rod strings operate under cyclic load in erosive and corrosive environments. A good sucker rod design is therefore the most critical part of a successful sucker rod pumping system; it will decrease the pulling cost and increase production. The next step is the surface unit. In this paper, the surface unit's prime mover is a gas engine fed by the well's own gas to reduce cost. The jack pump is controlled by a SCADAPack 535E remote terminal unit, an IIoT device that provides local and remote control, preventing the gas lock problem with an on-off controller and regulating the oil well casing pressure with a PID controller. This keeps the well's liquid level and pump fillage at optimum values. ### 1.1. Literature Review Our work proposes an IIoT-based strategy to control the smart sucker rod pump automatically and intelligently. The development of this strategy aims to ensure the optimization of oil well production using AI models that enable IIoT closed-loop process control. The Industrial Internet of Things found its way to the oil and gas industry via building online intelligent controllers, such as fuzzy logic or neurofuzzy controllers, developing management and control, centralizing the data, or digitalizing the production process for intelligent production purposes [1, 2]. The current research discusses control strategy, efficiency, optimization, multiphase pumping, pump fillage calculation, pumping system simulation, and expert systems to develop the sucker rod pump system. On the other hand, researchers have focused on using the IIoT in oil and gas for management, health, and safety to reduce maintenance costs. Integrating optimization and process control with IIoT is still a gap in this field. Our work introduces a solution that integrates intelligent pump control with an IIoT system. The current research is reviewed below: One work describes a control strategy method for the sucker-rod pumping (SRP) system; this method stands out for its simplicity and low cost in maintenance and investment, and it can be operated over a large range of flow rates with fluids of different compositions and viscosities. Sucker rod pump units require periodic maintenance and adjustments, whether preventive or corrective. Two common procedures are important for the sucker rod pump units: the first is to adjust the counterbalancing of the pumping unit, while the second is to adjust the polished rod stroke length. Stopping oil well production is required to accomplish these procedures [3]. A special design for the sucker rod pumping unit under given conditions was studied and tested. A horizontal wellbore with up to 90 degrees of bore curvature operated by sucker rod pumping units was studied. The target was to increase the volume efficiency, reduce bottom hole pressure, and decrease the gas impact while increasing drainage speed for the best sucker rod pump efficiency [4].
In this work, the author tunes the production potential and controls the oil well rate by understanding all parameters to face the challenge in unconventional reservoirs: the deliverability of the reservoir governing the rate changes with time. The complexity of understanding an artificial lift well’s performance pushes the author to optimize the oil well production gains by changing the operating parameters and then finding the optimization and modeling approaches that affect well performance [5]. This paper discussed and developed a pragmatic and robust technique to design and apply a multiphase sucker rod pump in oil wells with high gas-oil ratios. More specifically, in the design of the sucker rod pump structure according to pump working mechanism in the presence of various liquid contents and high gas-oil ratios, effective solutions were enhancing oil production by enforcing gas evacuation. They designed a unique gas buffer to include both chamber gas and fluid inside and connected it through the slotted liner to prevent the pump from the gas lock. In addition, this gas buffer can be bypassed if the stroke is shortened as the traditional downhole pump [6]. This work shows an advanced approach to optimize the sucker rod pump. While most of the operators still depend on surface dynocards for sucker rod pump diagnostic, the author shows the value to use wave equation mathematical calculations as pump (calculated) dynocards to obtain production insights [7]. This paper shows the importance of knowing the correct value of pump fillage in the oil well control to represent the pump efficiency and optimize the production of a rod pumping well. The downhole card graphical representation is often used to know the pump fillage, which can sometimes be inaccurate. The authors introduced a compute method that used the downhole position to calculate pump fillage. The pump fillage accurate calculation involves the correct location of the transfer point, and it is the point of transferring the load from the standing valve to the traveling valve: a method comprised of four algorithms to locate the transfer point introduced. Correct transfer point location extracted using a combination of these methods, the accurate value of the pump fillage optimizes well production. Application of the method over numerous data sets resulted in a wide coverage range of conditions for optimizing sucker rod pump well assets [8]. The authors discussed how a sucker rod pumped oil well increased the gas to liquid ratio, in addition to the unconventional reservoirs that have high gas to liquid ratio from the beginning of production. To improve the sucker rod pump, the gas production efficiency should be handled. Several methods are used to handle gas production, increasing the sucker rod pump efficiency. The authors focused on three areas, gas separator design, variable speed drives, and the backpressure valves [9]. This work discussed how advanced technologies could overcome many common problems in downhole and surface, such as unconventional oil wells production, high gas oil wells, and sandy oil wells. High-capacity sucker rod pumps with ultralong stroke length maximize the production from heavy crude wells with a high liquid rate with fewer problems of downhole equipment. This technology affects operation costs by reducing the OPEX maintenance and the number of operators required [10]. 
This paper discusses the most common artificial lift technology, the sucker rod pump, and focuses on the efficiency problems caused by incomplete pump fillage. This problem results from a pump capacity that exceeds the rate of production from the well, or from gas lock. High pump fillage means lower cost, and more efficient operations will result. The author also presents using a pump-off controller to control pump run time, keeping the pump displacement in harmony with the wellbore volume and avoiding the shock loading problems that occur when a well runs 24 hours with pump capacity exceeding the wellbore volume [11]. The problem of obtaining downhole data makes monitoring the hydraulic performance of the sucker rod pump difficult. These data, including gas interference, pump fillage, gas locking, sticking valves, fluid pound, equipment failure, rod downstroke compression loading, and reduced production, are difficult to diagnose from the surface. Currently, guesswork and component analysis are the basis of root cause analysis. The authors also develop a sucker rod pump knowledge base [12]. Another work explains expert technology and system applications to diagnose sucker rod pumps. This approach captures expert knowledge that is held by a few individuals and makes it available on a PC, recording it permanently so it can be applied to more difficult problems. The authors developed a rule-based expert system that gives users a clear picture with which to analyze subsurface pump problems. The analysis information obtained by the presented system utilizes auxiliary programs available to users. The need for such a system means growth in production and a reduction in maintenance cost, especially for wells far away from the head office, by reducing the cost of delayed analysis [13]. Other authors discuss digital transformation and IIoT, which are effective for keeping the plant running, reducing maintenance cost, and extending the lifetime of equipment, thereby increasing productivity; they also review IIoT-based project examples that reduce human and equipment costs in the oil and gas field [14, 15]. Others discussed the impact of Industry 4.0 and data-centric operations on oil and gas production, leading to various scenarios about the future of the oil and gas industry [16, 17]. Authors discussed a cyber-physical system for an IoT-based industrial solution such as SCADA to monitor and control critical infrastructure [18]. Data privacy and security in Industrial Internet of Things applications were addressed by enabling user authentication with transfer-learning-empowered blockchain [19]. Authors discussed a machine learning-based malware attack detection protocol for the IoT industrial multimedia environment [20]. A new key management and remote user authentication scheme was proposed for securing 6G-enabled NIB deployed for industrial applications [21]. A novel blockchain-edge framework for industrial IoT networks was proposed; it ensures low-latency services for industrial IoT applications and optimizes network usage, with data integrity, trust, and security ensured in a decentralized way by the blockchain [22].
## 2. IIoT in Process Control Industrial IoT is the application of IoT in process control and manufacturing to ensure data exchange between various instrumentation and control equipment. Figure 2 shows the potential of the IIoT platform in the process control industry: it provides analysis tools for predictive maintenance through device health analysis, and automated tuning recommendations for controllers by analyzing interaction, error, variance, and model to identify tuning needs. Figure 3 shows the IIoT platform layers for the oil and gas plant. These layers are equipment, communications, history, analytics, and reporting. Figure 2 IIoT platform potential. Figure 3 IIoT platform layers. The equipment layer includes the processes, control loops, smart sensing and actuating devices, and traditional sensing and actuating devices. In contrast, the communication or connection layer includes industrial wireless, Profinet, fieldbus, and OPC. The history layer includes all data collected from processes, the maintenance management system, the laboratory information system, and the logistics system. The analytics layer is the most important layer because it is the core of IIoT's strength. It gives the Industrial Internet of Things the power to apply intelligent algorithms such as genetic algorithms, neural networks, fuzzy logic, and neurofuzzy systems to control complex nonlinear and dynamic processes. The analytics layer affects the reporting layer and shows better production results due to smooth operation and high quality of process control and automation. ## 3. Crude Oil Well This paper selected a crude oil well drilled in 2008 in Yemen for the study. We show the effect of the Industrial Internet of Things by applying control algorithms or tuning parameters to ensure maximum production at the lowest cost.
### 3.1. Selected Crude Oil Well History The oil field downstream facility has a small separator used to test every crude oil well individually for 24 hours or more to determine the crude oil production rate. Furthermore, it separates the oil, water, and gas from the crude oil and measures the average daily production rate for oil, water, and gas. Figure 4 shows the crude oil well test history with 43 tests that indicate the maximum and minimum production of crude oil and water. Figure 4 History of selected crude oil well tests with oil and water capacity. ### 3.2. Crude Oil Pumping Control The pump surface unit shown in Figure 5 includes all parts: the main prime mover (gas engine) and the hydraulic jack pump that gives positive displacement to the pump. The SCADAPack 535E controller is used as the control unit. It links wirelessly through the 5 GHz nanostation to the main control room. The IIoT platform is installed to collect the data, monitor the whole operation, draw the charts, and apply the control algorithms. Figure 5 Crude oil pump surface unit. Figure 6 shows the downhole pump installation, including all data such as casing size, depth, tubing and pump string, and rod string. Figure 6 Crude oil pump downhole installation and well bore data. ## 4. Design and Configuration of IIoT System This paper proposes a novel system hardware and software setup that includes an IIoT device, communication network, data acquisition, data log, SCADA, control, and valuable charts. ### 4.1. Proposed Hardware System Figure 7 shows the proposed hardware system. It includes the sucker rod pump, the SCADAPack 535E as IIoT device, a 5 GHz antenna, and a server in the control room that hosts the IIoT platform and SCADA system.
The difficult nature of the environment of oil and gas fields represents a real challenge, considering the scattered locations of oil wells and their distance from the central processing facility, which makes industrial wireless Ethernet the solution for data acquisition, especially nanostation M5 5 GHz antennas that provide a reliable connection for up to 10 km. Figure 7 Proposed hardware system. Figure 8 shows the SCADAPack 535E remote terminal unit and automation controller from Schneider Electric; it is well suited to IIoT applications that need high-speed time stamping and data capture. With an open standard programming environment, the SCADAPack 535E supports standard industrial communication protocols such as Modbus RTU, Modbus TCP, and even DNP3 level 4 with a security suite and data encryption. The SCADAPack 535E works as an agent for the IIoT platform. It collects the data from engine sensors through the Modbus RTU communication protocol. It also collects data from the rod pump and oil wellhead sensors using 4-20 mA I/O. Finally, the SCADAPack 535E sends the data to the control room using the Modbus TCP communication protocol through the nanostation M5 5 GHz antenna. Figure 8 SCADAPack 535E IIoT device. ### 4.2. Proposed Software System Figure 9 shows the proposed software system; in this paper, the Kepware OPC server collects data from the crude oil well controller (SCADAPack 535E) through a 5 GHz wireless link over a TCP/IP network and passes it to two main branches, the SCADA system and the IIoT platform. Figure 9 Proposed software system. The SCADA system monitors and generates valuable charts. At the same time, the IIoT platform uses these data for modeling, controlling and forecasting, and storing optimum parameters in the database. The two branches then send the data to the OFM (Oil Field Manager) software from Schlumberger, France. The Visual Basic programming language was used to build the SCADA system and IIoT platform. Figure 10 shows the proposed software implementation algorithm. Figure 10 Proposed software implementation algorithm. Figure 11 shows the proposed platform working flowchart. It clarifies how the machine learning-based AI models that simulate the experts' responses are built. The proposed platform prepares training datasets by recording all expert responses that achieved the goals and increased pump efficiency. Machine learning builds the prediction models using these training datasets. Figure 11 Proposed IIoT platform working flowchart.
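To make the data path of Sections 4.1 and 4.2 concrete, the following minimal sketch polls a few holding registers from a Modbus TCP device and appends them to a CSV log. It is only an illustration: the IP address, register addresses, and scaling factors are assumptions, not the actual register map of the SCADAPack 535E used in this work, and the pymodbus library stands in for the Kepware OPC server path.

```python
# Illustrative Modbus TCP polling loop (pymodbus 3.x); addresses, IP, and scale
# factors below are placeholders, not the real SCADAPack 535E configuration.
import csv
import time
from datetime import datetime
from pymodbus.client import ModbusTcpClient

RTU_IP = "192.168.1.10"            # assumed address of the wellsite RTU
REGISTERS = {                      # assumed map: name -> (register offset, scale)
    "hydraulic_pressure": (0, 0.1),
    "casing_pressure":    (1, 0.1),
    "tubing_pressure":    (2, 0.1),
    "pump_speed_spm":     (3, 0.01),
}

client = ModbusTcpClient(RTU_IP, port=502)
client.connect()

with open("well_log.csv", "a", newline="") as f:
    writer = csv.writer(f)
    for _ in range(10):                              # poll ten samples at a 1 s scan rate
        rr = client.read_holding_registers(0, count=4)   # read the four registers above
        row = [datetime.now().isoformat()]
        for name, (offset, scale) in REGISTERS.items():
            row.append(rr.registers[offset] * scale)
        writer.writerow(row)
        time.sleep(1.0)

client.close()
```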
### 4.3. Sucker Rod Pump Performance Calculation for Production Maximization Because it is difficult to measure the crude oil level in the well continuously, and the pump must be stopped during the measuring process, pump performance is used to indicate the level and pump fillage. The dynamometer card is used to accomplish this job: the surface card displays the load on the polished rod over the pump cycle. The resulting card shape is a function of many factors, such as speed in strokes per minute, PPU geometry, pump depth, and fluid load on the pump, while the wave equation mathematically models the elastic nature of the rod string (assuming a downhole friction factor) and uses the surface card data to represent what happens at the pump plunger. A dynagraph card represents the forces acting on the pump plunger as it moves upward and downward in the well, capturing and releasing fluid with each stroke. Surface and downhole dynagraph cards measure the load on the polished rod, and this load is plotted against the polished rod position as the pump moves through each stroke cycle. A complete stroke cycle is one upstroke and one downstroke. The controller uses these data to create an x-y plot. By observing the graphs, information about the efficiency of the pump operation can be collected. Rather than being a plot of load vs. time, as shown in Figure 12, a card is a plot of load vs. position, as shown in Figure 13. The ideal card, shown in Figure 13, demonstrates the instantaneous increase in load from Lmin to Lmax. The pump plunger begins its upward stroke, and the load remains constant as it travels to the top. As soon as the pump plunger starts back down, the load instantly falls back to Lmin, where it remains constant as the pump travels to its bottom position again. Figure 12 Load vs. time. Figure 13 Load vs. position. The card shown in Figure 14 shows a dynacard with an ideal upstroke and 30% pump fillage, demonstrating the effect of conditions such as fluid pound. If the traveling valve on the pump opens properly, the load falls instantly to Lmin and remains constant for the entire downstroke (Ptop to Pbottom), and the fluid is transferred from the pump to the tubing. When the pump plunger reaches the bottom, the barrel is empty. Figure 14 Load vs. position. The hydraulic pump begins to lift the entire fluid column to the top again, causing more fluid to be pulled in from the reservoir through the standing valve. However, when a condition such as low fluid level or trapped gas prevents the traveling valve from opening properly as the plunger starts downward, the transfer of the pump's contents to the tubing does not begin at the top of the stroke. The fluid in the tubing descends with the traveling valve, maintaining the load at Lmax, until fluid is encountered or the gas compresses enough to open the traveling valve. Only when the plunger reimmerses in the fluid does the traveling valve open, and fluid transfer then occurs through the traveling valve. The maximum and minimum loads can be expressed as

$$L_{\max} = \frac{62.4\,S_f\,D\,(A_p - A_r)}{144} + \frac{\lambda_s D A_r}{144} + \frac{\lambda_s D A_r}{144}\cdot\frac{S N^2 M}{70471.2}, \qquad (1)$$

$$L_{\min} = \frac{62.4\,S_f\,D\,A_r}{144} + \frac{\lambda_s D A_r}{144} - \frac{\lambda_s D A_r}{144}\cdot\frac{S N^2 M}{70471.2}. \qquad (2)$$

The liquid flow rate Q can be expressed as

$$Q = \frac{0.1484\,A_p\,N\,S_p\,E_v}{B_o}. \qquad (3)$$

The symbols are listed in Table 1.

Table 1 Symbols.

| Symbol | Remark |
| --- | --- |
| Sf | Liquid specific gravity |
| D | Sucker rod string length |
| Ap | Plunger area |
| Ar | Rod area |
| λs | Specific weight of steel |
| M | Machine factor |
| N | Pump speed, strokes per minute |
| Sp | Stroke length |
| Ev | Pump fillage (efficiency) |
| Bo | Formation volume factor |

In this paper, surface load is calculated from the hydraulic pressure and the geometry of the hydraulic system. The position is represented by the position readings obtained by monitoring the hydraulic fluid flow.
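The load and rate expressions (1)–(3) translate directly into code. The short sketch below evaluates them for one set of pump parameters; the numeric values are illustrative placeholders, not data from the well studied here.

```python
# Direct evaluation of equations (1)-(3); all input values below are illustrative
# placeholders, not measurements from the studied well.

def max_min_load(sf, d, ap, ar, lam_s, s, n, m):
    """Maximum/minimum polished rod load per equations (1) and (2)."""
    fluid = 62.4 * sf * d * (ap - ar) / 144.0     # fluid column term
    rods = lam_s * d * ar / 144.0                 # rod string weight term
    accel = s * n ** 2 * m / 70471.2              # acceleration factor
    l_max = fluid + rods + rods * accel
    l_min = 62.4 * sf * d * ar / 144.0 + rods - rods * accel
    return l_max, l_min

def liquid_rate(ap, n, sp, ev, bo):
    """Liquid flow rate Q per equation (3)."""
    return 0.1484 * ap * n * sp * ev / bo

if __name__ == "__main__":
    l_max, l_min = max_min_load(sf=0.95, d=4000.0, ap=2.25, ar=0.785,
                                lam_s=490.0, s=120.0, n=6.0, m=1.0)
    q = liquid_rate(ap=2.25, n=6.0, sp=120.0, ev=0.9, bo=1.1)
    print(f"L_max = {l_max:.0f} lb, L_min = {l_min:.0f} lb, Q = {q:.1f} BPD")
```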
## 5. Results and Experimental Setup The most important parameter that indicates the status of the crude oil well is the fluid level. It must be ensured that this level is above the pump. In the next stage, we determine the critical value of casing pressure that pushes the level down below the pump. Therefore, the first part of the experiments in this paper is measuring the level. After that, the second part is activated, connecting the dedicated IIoT platform to the crude oil well controller and showing the results. These two parts are explained below. ### 5.1. Measuring Fluid Level in Wells Using Echometer Device The most important parameter is the liquid level in the oil well. It is the process variable that needs to be maintained to ensure the fillage of the pump. This parameter is measured by an echometer device that uses an ultrasound gun to measure the number of tubing joints down to the liquid level. The results of the echometer instrument can be read using Total Manager software to calculate the tubing joints to the liquid level in the oil well, as shown in Figure 15. Figure 16 shows the raw signal recorded by the echometer microphone and displays the acoustic gunshot start, the tubing joint reflections, and the fluid level kick (which appears in the zoom window).
A low-pass filter is applied to the signal to make it easier for the operator to read and to determine the tubing joint kicks and the fluid level kick, as shown in Figure 17. Figure 15 Echometer final ultrasound response to count the tube joints. Figure 16 The raw signal recorded by the echometer to count the tube joints. Figure 17 Number of tubing joints to the liquid level in the crude oil well after applying a low-pass filter to the signal. The depth determination screen is shown in Figure 18, where the joints can be counted easily, and zoom tools are used to show the kicks. Figure 18 Depth determination screen.
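The low-pass filtering step described above can be illustrated with a short signal-processing sketch. It assumes the raw echometer trace is available as a NumPy array sampled at a known rate; the sample rate, cutoff frequency, and peak-detection settings are assumptions chosen only for illustration, not the settings of the Total Manager software.

```python
# Illustrative low-pass filtering and kick detection for an acoustic fluid-level
# trace; the sample rate, cutoff, and prominence threshold are assumed values.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_kicks(signal, fs=2000.0, cutoff_hz=50.0, prominence=0.05):
    """Low-pass filter the raw echometer trace and locate joint/fluid-level kicks."""
    b, a = butter(4, cutoff_hz, btype="low", fs=fs)   # 4th-order Butterworth
    smoothed = filtfilt(b, a, signal)                 # zero-phase filtering
    peaks, _ = find_peaks(np.abs(smoothed), prominence=prominence)
    return smoothed, peaks / fs                       # kick times in seconds

if __name__ == "__main__":
    fs = 2000.0
    t = np.arange(0, 10, 1 / fs)
    # Synthetic trace: decaying joint echoes plus noise, purely for demonstration.
    trace = np.random.normal(0, 0.02, t.size)
    for k in range(1, 20):
        trace += 0.3 * np.exp(-k / 10) * np.exp(-((t - 0.45 * k) ** 2) / 1e-4)
    smoothed, kick_times = detect_kicks(trace, fs=fs)
    print("Detected kicks at (s):", np.round(kick_times, 2)[:10])
```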
### 5.2. Performance Enhancement by the Proposed IIoT System Once the system is installed and commissioned, the SCADA system is built, and the IIoT platform is launched. The oil well chart displays fillage (pump efficiency), tubing pressure, jack pump hydraulic pressure, pump speed (strokes per minute), and casing pressure. It depends on the data log recorded by the IIoT platform using a one-second scan rate, stored in CSV files or an ODBC database. Figure 19 shows the one-month oil well chart, which indicates how the gas lock problem affects pump efficiency, which needs to be maintained. For a better understanding, the higher resolution chart for 48 hours is shown in Figure 20; the effect of gas lock on pump fillage without any control action is noticeable. Figure 19 One month crude oil well chart showing essential parameters. Figure 20 48 hours oil well chart without control. Our proposed solution avoids the gas lock effect by stopping the pump for some time to give the reservoir a chance to build up again. This increases the liquid level in the crude oil well. Figure 21 shows the five-day well chart and how the on-off crude oil well pump controller affects pump efficiency. Figure 21 Five-day well chart with pump on-off control. With the proposed method for pump operation as discussed above, the pump fillage efficiency is stable at 90%, as Figure 22 shows. Figure 22 Five-day pump efficiency chart with pump on-off control. For a clearer picture, the higher resolution chart for 48 hours is shown in Figure 23, with continuous stability of pump fillage (efficiency) around 90% after stopping the pump for 6 hours. Figure 23 48 hours oil well chart after 6 hours of shut-down. The other solution to avoid the gas lock problem and maintain the oil well level is to control the casing pressure. The pressure controller maintains the casing pressure at a setpoint taken from the experts in real time via the IIoT platform. Figure 24 shows the effect of casing pressure on the pump efficiency. The higher resolution chart for 48 hours is shown in Figure 25, while Figure 26 shows the higher resolution chart for 2 hours, which gives a deeper view of the behavior of all parameters with continuous stability of pump fillage (efficiency) under control via the IIoT platform. Figure 24 Effect of casing pressure controller. Figure 25 48 hours oil well chart with casing pressure controller. Figure 26 2 hours oil well chart. With the effectiveness of our proposed system visible in the results, the solution should also be tested on other oil wells to confirm its benefit to the production rate. The last three well tests were done before applying the proposed technology in this paper. These tests showed a drop in liquid production rate with unstable pump fillage a short time after the last well workover, indicating that the well needed another workover. The average liquid production rate was 43 BPD oil and 50 BPD water, while the maximum expected is 100 BPD oil. The next test was done after applying the new IIoT system solution, when the pump fillage was stable at 90%, and the liquid production rate increased from 43 to 80 BPD oil. In addition, the water increased from 50 to 110 BPD; the increase in water rate is not an issue as it can be reinjected into the reservoir. The results show an improvement in the liquid production rate of the oil well, nearly doubling production and decreasing the workover frequency, which reduces workover cost and well-off days. Furthermore, the system reflects an accurate picture of sucker rod pump behavior, which helps experts make the right decisions.
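The control strategy described in this section combines an on-off pump controller (to let the fluid level build up) with a PID loop on the casing pressure. A minimal sketch of that logic is given below; the gains, setpoints, and hysteresis band are assumptions for illustration, not the tuned values used on the studied well.

```python
# Minimal sketch of the two control actions described above: on-off pump control
# on fillage and a PID loop on casing pressure. All numeric values are assumed.

class PID:
    def __init__(self, kp, ki, kd, setpoint, dt=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint, self.dt = setpoint, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measurement):
        error = self.setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def pump_on_off(fillage, pump_running, low=0.70, high=0.85):
    """Stop the pump when fillage drops (gas lock risk); restart after build-up."""
    if pump_running and fillage < low:
        return False          # stop and let the fluid level build up
    if not pump_running and fillage > high:
        return True           # fillage recovered, restart the pump
    return pump_running

if __name__ == "__main__":
    casing_pid = PID(kp=2.0, ki=0.1, kd=0.0, setpoint=80.0)   # psi setpoint (assumed)
    running = True
    for fillage, casing_psi in [(0.92, 82.0), (0.65, 95.0), (0.72, 88.0), (0.90, 81.0)]:
        running = pump_on_off(fillage, running)
        valve_cmd = casing_pid.update(casing_psi)
        print(f"fillage={fillage:.2f} running={running} valve_adjust={valve_cmd:+.1f}")
```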
## 6. Conclusion Using the Industrial Internet of Things (IIoT) in the oil and gas industry means intelligent production and opens the door to a new level of optimization and cost-effective production. This paper shows how this technology successfully transferred a clear picture of the sucker rod pumping well. It introduces an approach to understanding all the required parameters, such as pump efficiency, pump fillage, tubing pressure, casing pressure, hydraulic pressure, and pump speed. The clear picture leads to the right expert responses and transfers their experience to AI prediction models. The AI models predict the control parameters based on the experts' responses. The production rate of the oil well studied in this paper increased by 90% when applying this new technology. It prevents the gas lock problem by using an on-off pump controller. It maintains the liquid level in the well by controlling the casing pressure using a PID controller.
At the same time, the experts determine the optimized setpoints for the controller and the lowest pump off-time. The future of oil and gas lies in the Industrial Internet of Things (IIoT), which will optimize upstream and downstream operations to reduce maintenance costs, improve production, increase reliability, and more. --- *Source: 1005813-2022-10-07.xml*
1005813-2022-10-07_1005813-2022-10-07.md
51,372
Application of Industrial Internet of Things (IIoT) in Crude Oil Production Optimization Using Pump Efficiency Control
Ali S. Allahloh; Mohammad Sarfraz; Mejdal Alqahtani; Shafiq Ahmad; Shamsul Huda
Wireless Communications and Mobile Computing (2022)
Engineering & Technology
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2022/1005813
1005813-2022-10-07.xml
--- ## Abstract The collapse of oil prices in mid-2014 and early 2016 was the biggest in modern history that witnessed more than a 70% drop in the price to around $40 barrel. It prompted companies to think seriously to maintain profitability. Most companies were able to survive partly by simplifying their operations. Price recovery in 2021 is only 70% of its peak value. Companies are focusing on reducing the cost of operations, increasing production simultaneously, and finding new and different strategies to survive. The current scenario is witnessing strong research focusing on the development of process control for oil and gas upstream and downstream to improve the control and preventive maintenance to reduce operating costs and increase production. This paper presents the Industrial Internet of Things (IIoT) practical solution that improves the oil production rate from the well and increases the average pump efficiency (fillage) to 90%. This paper proposes a mechanism for collecting, storing, and analyzing all required parameters to build valuable charts. These charts’ data help optimize the values and parameters for controller setpoints to prevent pump gas lock problems. An artificial lift is required to lift the oil from the well. In this paper, the sucker rod pump is driven by a gas engine fed by the well’s gas. At the same time, SCADAPack 535E remote terminal unit collects all pump and well parameters such as hydraulic pressure, casing pressure, tubing pressure, and pump speed in stroke per minute (SPM). The remote terminal unit sends the data through a wireless network using a 5 GHz antenna to the main control room. The IIoT platform is designed using a visual basic programming language. Microsoft DDE (Dynamic Data Exchange) and Kepware OPC server were used to work on the received data to monitor, generate charts, and apply the controllers. --- ## Body ## 1. Introduction Intelligent oil and gas production depends on understanding all crude oil well and pump parameters. Ensure the uniform increase of liquid production rate from the crude oil well and avoid problems such as gas lock. Field operators currently record these parameters two or three times a day, giving experts a fuzzy picture. Increasing guesswork due to the dynamic behavior of these parameters, which changes minute after minute, this paper introduces an approach to thoroughly understanding the surface and downhole parameters. Using Industrial Internet of Things technology (IIoT) gives users a clear picture. It reduces the guesswork and optimizes different parameters to maximize the production for the long-term in a stable situation. Over 90% of the wells in the United States are currently being artificially lifted. Beam pumping is the most commonly used method, accounting for over 85% of artificial lift installations. The beam pumping system is mechanically very simple. It consists of a surface unit that transmits the upstroke and downstroke motion to a bottom-hole pump through a sucker rod string (Figure1). While the system is simple, a proper design requires many factors. Over the years, formulas developed to be used in the optimum pumping design. However, a good design still depends on experience as a key.Figure 1 Sucker rod pump.Energy from the crude oil pumping unit is transmitted to the bottom-hole pump through a sucker rod string. Sucker rod strings operate under cyclic load in erosive and corrosive environments. 
A good sucker rod design is therefore the most critical part of a successful sucker rod pumping system: it decreases the pulling cost and increases production. The next step is the surface unit. In this paper, the surface unit is a gas engine fed by the well gas itself to reduce cost, while the jack pump is controlled by a SCADAPack 535E remote terminal unit. This IIoT device provides local and remote control: an on-off controller prevents the gas lock problem, and a PID controller regulates the oil well casing pressure. Together they keep the well's liquid level and pump fillage at optimum values.

### 1.1. Literature Review

Our work proposes an IIoT-based strategy to control the smart sucker rod pump automatically and intelligently. The development of this strategy aims to ensure the optimization of oil well production using AI models that enable IIoT closed-loop process control. The Industrial Internet of Things found its way into the oil and gas industry via building online intelligent controllers, such as fuzzy logic or neurofuzzy controllers, developing management and control, centralizing data, and digitalizing the production process for intelligent production purposes [1, 2]. The current research discusses control strategy, efficiency, optimization, multiphase operation, pump fillage calculation, pumping system simulation, and expert systems for developing the sucker rod pump system. On the other side, researchers focus on using the IIoT in oil and gas for management, health, and safety to reduce maintenance costs. Integrating optimization and process control with IIoT is still a gap in this field. Our work introduces a solution that integrates intelligent pump control with an IIoT system. The review of the current research is discussed below.

A control strategy method for the sucker rod pumping (SRP) system is discussed in [3]; this method stands out for its simplicity and its low maintenance and investment cost, and it can operate over a large range of flow rates with fluids of different compositions and viscosities. Sucker rod pump units require periodic maintenance and adjustments, whether preventive or corrective. Two common procedures are important for sucker rod pump units: the first is to adjust the counterbalancing of the pumping unit, while the second is to adjust the polished rod stroke length. Stopping oil well production is required to accomplish these procedures [3]. A special design for the sucker rod pumping unit under given conditions was studied and tested: a horizontal wellbore with up to 90 degrees of bore curvature operated by sucker rod pumping units. The target was to increase volumetric efficiency, reduce bottom hole pressure, and decrease the gas impact while increasing drainage speed for the best sucker rod pump efficiency [4]. In another work, the author tunes the production potential and controls the oil well rate by understanding all parameters in order to face the challenge of unconventional reservoirs, where the deliverability of the reservoir governing the rate changes with time. The complexity of understanding an artificial lift well's performance pushes the author to optimize the oil well production gains by changing the operating parameters and then finding the optimization and modeling approaches that affect well performance [5].
In [6], a pragmatic and robust technique was developed to design and apply a multiphase sucker rod pump in oil wells with high gas-oil ratios. More specifically, in the design of the sucker rod pump structure according to the pump working mechanism in the presence of various liquid contents and high gas-oil ratios, the effective solution was to enhance oil production by enforcing gas evacuation. The authors designed a unique gas buffer that contains both chamber gas and fluid and is connected through a slotted liner to protect the pump from gas lock. In addition, this gas buffer can be bypassed if the stroke is shortened, as in a traditional downhole pump [6]. Another work shows an advanced approach to optimizing the sucker rod pump: while most operators still depend on surface dynocards for sucker rod pump diagnostics, the author shows the value of using wave equation mathematical calculations as pump (calculated) dynocards to obtain production insights [7]. A further paper shows the importance of knowing the correct value of pump fillage in oil well control, since it represents the pump efficiency and is used to optimize the production of a rod pumping well. The downhole card graphical representation is often used to determine the pump fillage, which can sometimes be inaccurate. The authors introduced a computational method that uses the downhole position to calculate pump fillage. Accurate pump fillage calculation requires the correct location of the transfer point, which is the point at which the load is transferred from the standing valve to the traveling valve; a method comprising four algorithms to locate the transfer point was introduced. The correct transfer point location is extracted using a combination of these methods, and the accurate value of the pump fillage is then used to optimize well production. Application of the method over numerous data sets covered a wide range of conditions for optimizing sucker rod pump well assets [8]. The authors of [9] discussed how a sucker rod pumped oil well shows an increased gas-to-liquid ratio over time, in addition to unconventional reservoirs that have a high gas-to-liquid ratio from the beginning of production. To improve the sucker rod pump, gas production must be handled efficiently. Several methods are used to handle gas production and increase sucker rod pump efficiency; the authors focused on three areas: gas separator design, variable speed drives, and backpressure valves [9]. Another work discussed how advanced technologies could overcome many common downhole and surface problems, such as unconventional oil well production, high-gas oil wells, and sandy oil wells. High-capacity sucker rod pumps with ultralong stroke length maximize production from heavy crude wells with a high liquid rate and cause fewer downhole equipment problems. This technology affects operating costs by reducing OPEX maintenance and the number of operators required [10]. A further paper discussed the most common artificial lift technology, the sucker rod pump, and focused on the efficiency problems caused by incomplete pump fillage. This problem results from a pump capacity that exceeds the production rate from the well, or from gas lock. High pump fillage results in lower cost and more efficient operations. The author also presents the use of a pump-off controller to control pump run time and keep the pump displacement in harmony with the wellbore inflow, avoiding the shock loading problems that occur on a well running 24 hours a day with pump capacity exceeding the wellbore volume [11].
The difficulty of obtaining downhole data makes it hard to monitor the hydraulic performance of the sucker rod pump. Conditions such as gas interference, pump fillage, gas locking, sticking valves, fluid pound, equipment failure, rod downstroke compression loading, and reduced production are difficult to diagnose from the surface. Currently, guesswork and component analysis are the basis of root cause analysis. The authors of [12] also developed a sucker rod pump knowledge base. In [13], the authors explained expert technology and system applications for diagnosing sucker rod pumps. This approach captures expert knowledge, which is held by a few individuals, and makes it permanently available on a PC to help solve more difficult problems. The authors developed a rule-based expert system that gives users a clear picture for analyzing subsurface pump problems; the analysis information produced by the system is made available to users through auxiliary programs. The need for such a system means growth in production and reduced maintenance cost, especially for wells far away from the head office, by cutting the cost of delayed analysis [13]. Other authors discuss digital transformation and the IIoT, which are effective in keeping the plant running, reducing maintenance cost, and extending equipment lifetime, thereby increasing productivity; they also review IIoT-based project examples that reduce human and equipment costs in the oil and gas field [14, 15]. The impact of Industry 4.0 and data-centric operations on oil and gas production has been discussed, leading to various scenarios for the future of the oil and gas industry [16, 17]. A cyber-physical system for IoT-based industrial solutions such as SCADA to monitor and control critical infrastructure was discussed in [18]. Data privacy and security in Industrial Internet of Things applications were addressed by enabling user authentication with transfer-learning-empowered blockchain [19]. A machine learning-based malware attack detection protocol for the industrial IoT multimedia environment was discussed in [20]. A new key management and remote user authentication scheme was proposed for securing 6G-enabled NIB deployed for industrial applications [21]. A novel blockchain-edge framework for industrial IoT networks was proposed; it ensures low-latency services for industrial IoT applications and optimizes network usage, while data integrity, trust, and security are ensured in a decentralized way by the blockchain [22].
## 2. IIoT in Process Control

Industrial IoT is the application of IoT in process control and manufacturing to ensure data exchange between various instrumentation and control equipment. Figure 2 shows the potential of the IIoT platform in the process control industry: it provides analysis tools for predictive maintenance through device health analysis and automated tuning recommendations for controllers by analyzing the interaction, error, variance, and model and identifying tuning needs. Figure 3 shows the IIoT platform layers for the oil and gas plant. These layers are equipment, communications, history, analytics, and reporting.

Figure 2
IIoT platform potential.

Figure 3
IIoT platform layers.

The equipment layer includes the processes, control loops, smart sensing and actuating devices, and traditional sensing and actuating devices. The communication (connection) layer includes industrial wireless, Profinet, fieldbus, and OPC. The history layer includes all data collected from the processes, the maintenance management system, the laboratory information system, and the logistics system. The analytics layer is the most important layer because it is the core of the IIoT's strength. It gives the Industrial Internet of Things the power to apply intelligent algorithms such as genetic algorithms, neural networks, fuzzy logic, and neurofuzzy controllers to control complex nonlinear and dynamic processes. The analytics layer feeds the reporting layer and shows better production results due to smooth operation and high-quality process control and automation.

## 3. Crude Oil Well

This paper selected a crude oil well drilled in 2008 in Yemen for the study. We show the effect of the Industrial Internet of Things by applying control algorithms and tuning parameters to ensure maximum production at the lowest cost.

### 3.1. Selected Crude Oil Well History

The oil field downstream facility has a small separator used to test each crude oil well individually for 24 hours or more to determine its production rate. The separator splits the crude into oil, water, and gas and measures the average daily production rate of each. Figure 4 shows the crude oil well test history with 43 tests that indicate the maximum and minimum production of crude oil and water.

Figure 4
History of selected crude oil well tests with oil and water capacity.

### 3.2. Crude Oil Pumping Control

The pump surface unit shown in Figure 5 includes the main prime mover (gas engine) and the hydraulic jack pump that gives positive displacement to the pump. The SCADAPack 535E controller is used as the control unit.
It is linked by wireless technology through a 5 GHz NanoStation to the main control room. The IIoT platform is installed to collect the data, monitor the whole operation, draw the charts, and apply the control algorithms.

Figure 5
Crude oil pump surface unit.

Figure 6 shows the downhole pump installation, including data such as casing size, depth, tubing and pump string, and rod string.

Figure 6
Crude oil pump downhole installation and wellbore data.

## 4. Design and Configuration of IIoT System

This paper proposes a novel system hardware and software setup that includes the IIoT device, communication network, data acquisition, data log, SCADA, control, and valuable charts.

### 4.1. Proposed Hardware System

Figure 7 shows the proposed hardware system. It includes the sucker rod pump, the SCADAPack 535E as the IIoT device, a 5 GHz antenna, and a server in the control room that hosts the IIoT platform and SCADA system. The difficult environment of oil and gas fields represents a real challenge, considering the scattered locations of oil wells and their distance from the central processing facility, which makes industrial wireless Ethernet the solution for data acquisition, especially NanoStation M5 5 GHz antennas that provide a reliable connection for up to 10 km.

Figure 7
Proposed hardware system.

Figure 8 shows the SCADAPack 535E remote terminal unit and automation controller from Schneider Electric; it is well suited to IIoT applications that need high-speed time stamping and data capture. With an open standard programming environment, the SCADAPack 535E supports standard industrial communication protocols such as Modbus RTU, Modbus TCP, and even DNP3 level 4 with a security suite and data encryption. The SCADAPack 535E works as an agent for the IIoT platform. It collects the data from the engine sensors through the Modbus RTU communication protocol and from the rod pump and oil wellhead sensors using 4-20 mA I/O. Finally, the SCADAPack 535E sends the data to the control room using the Modbus TCP communication protocol through the NanoStation M5 5 GHz antenna.

Figure 8
SCADAPack 535E IIoT device.
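The paper's platform itself was implemented in Visual Basic with Kepware OPC and Microsoft DDE. Purely as an illustration of the data acquisition step described above, the following Python sketch polls a Modbus TCP device for well parameters at a one-second scan rate and appends them to a CSV log; it assumes pymodbus 3.x, and the IP address, register map, and scaling factors are hypothetical placeholders rather than values from the paper.

```python
# Illustrative only: polling well/pump parameters over Modbus TCP and logging to CSV.
# Assumes pymodbus 3.x; the IP address, register map, and scaling are hypothetical.
import csv
import time
from datetime import datetime

from pymodbus.client import ModbusTcpClient

RTU_IP = "192.168.1.50"          # hypothetical SCADAPack address
REGISTERS = {                     # hypothetical register map: name -> (register index, scale)
    "hydraulic_pressure_psi": (0, 0.1),
    "casing_pressure_psi":    (1, 0.1),
    "tubing_pressure_psi":    (2, 0.1),
    "pump_speed_spm":         (3, 0.01),
}

def poll_once(client):
    """Read the configured holding registers and return a scaled snapshot."""
    rr = client.read_holding_registers(address=0, count=len(REGISTERS), slave=1)
    if rr.isError():
        raise IOError(f"Modbus read failed: {rr}")
    snapshot = {"timestamp": datetime.utcnow().isoformat()}
    for name, (idx, scale) in REGISTERS.items():
        snapshot[name] = rr.registers[idx] * scale
    return snapshot

def log_forever(csv_path="well_log.csv", scan_rate_s=1.0):
    """One-second scan rate, as used by the data log described in the paper."""
    client = ModbusTcpClient(RTU_IP)
    client.connect()
    with open(csv_path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["timestamp", *REGISTERS])
        if f.tell() == 0:
            writer.writeheader()
        while True:
            writer.writerow(poll_once(client))
            f.flush()
            time.sleep(scan_rate_s)

if __name__ == "__main__":
    log_forever()
```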
### 4.2. Proposed Software System

Figure 9 shows the proposed software system. In this paper, the Kepware OPC server collects data from the crude oil well controller (SCADAPack 535E) through a 5 GHz wireless link over a TCP/IP network and passes them to two main branches, the SCADA system and the IIoT platform.

Figure 9
Proposed software system.

The SCADA system monitors the well and generates valuable charts. At the same time, the IIoT platform uses these data for modeling, control, and forecasting, and stores the optimum parameters in the database. The two branches then send the data to the OFM (Oil Field Manager) software from Schlumberger, France. The Visual Basic programming language was used to build the SCADA system and the IIoT platform. Figure 10 shows the proposed software implementation algorithm.

Figure 10
Proposed software implementation algorithm.

Figure 11 shows the proposed platform working flowchart. It clarifies how machine learning-based AI models that simulate the expert's responses are built. The proposed platform prepares training datasets by recording all expert responses that achieved the goals and increased pump efficiency; machine learning then builds the prediction models from these training datasets.

Figure 11
Proposed IIoT platform working flowchart.
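The paper does not give the model details. Purely as a hedged illustration of the Figure 11 workflow (record expert responses that achieved the goals, then train a prediction model on them), the sketch below fits a small scikit-learn regressor that maps logged well conditions to the setpoints chosen by experts; the feature names, the model choice, and the sample numbers are assumptions, not the paper's data.

```python
# Illustrative sketch of the Figure 11 workflow: build a training set from logged expert
# responses and fit a model that predicts controller setpoints from well conditions.
# Features, targets, model choice, and all numbers are assumptions, not the paper's data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Each row: [pump fillage, casing pressure (psi), tubing pressure (psi), pump speed (SPM)]
conditions = np.array([
    [0.55, 180.0, 95.0, 9.0],
    [0.62, 175.0, 96.0, 9.0],
    [0.88, 140.0, 90.0, 8.0],
    [0.91, 135.0, 89.0, 8.0],
    [0.45, 190.0, 97.0, 10.0],
])
# Expert responses that achieved the goals: [casing pressure setpoint (psi), pump off-time (h)]
expert_setpoints = np.array([
    [150.0, 6.0],
    [150.0, 5.0],
    [140.0, 0.0],
    [140.0, 0.0],
    [155.0, 6.0],
])

model = RandomForestRegressor(n_estimators=50, random_state=0)
model.fit(conditions, expert_setpoints)

# Predict setpoints for a newly observed condition (hypothetical reading).
new_condition = np.array([[0.58, 182.0, 95.0, 9.0]])
casing_sp, off_time = model.predict(new_condition)[0]
print(f"Suggested casing pressure setpoint ≈ {casing_sp:.0f} psi, pump off-time ≈ {off_time:.1f} h")
```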
### 4.3. Sucker Rod Pump Performance Calculation for Production Maximization

Because it is difficult to measure the crude oil level in the well continuously, and the pump must be stopped during the measuring process, pump performance is used instead to indicate the level and the pump fillage. The dynamometer card is used to accomplish this: the surface card displays the load on the polished rod over the pump cycle. The card's shape is a function of many factors, such as speed in strokes per minute, PPU geometry, pump depth, and fluid load on the pump, while the wave equation mathematically models the elastic nature of the rod string (assuming a downhole friction factor) and uses the surface card data to represent what happens at the pump plunger.

A dynagraph card represents the forces acting on the pump plunger as it moves upward and downward in the well, capturing and releasing fluid with each stroke. Surface and downhole dynagraph cards measure the load on the polished rod, and this load is plotted against the polished rod position as the pump moves through each stroke cycle. A complete stroke cycle is one upstroke and one downstroke. The controller uses these data to create an x-y plot. By observing the graphs, information about the efficiency of the pump operation can be collected. Rather than being a plot of load vs. time, as shown in Figure 12, a card is a plot of load vs. position, as shown in Figure 13. The ideal card in Figure 13 demonstrates the instantaneous increase in load from Lmin to Lmax. The pump plunger begins its upward stroke, and the load remains constant as it travels to the top. As soon as the pump plunger starts back down, the load instantly falls back to Lmin, where it remains constant as the pump travels to its bottom position again.

Figure 12
Load vs. time.

Figure 13
Load vs. position.

The card shown in Figure 14 is a dynacard with an ideal upstroke and 30% pump fillage, demonstrating the effect of conditions such as fluid pound. If the traveling valve on the pump opens properly, the load falls instantly to Lmin and remains constant for the entire downstroke (Ptop to Pbottom), and the fluid is transferred from the pump to the tubing. When the pump plunger reaches the bottom, the barrel is empty.

Figure 14
Load vs. position.

The hydraulic pump begins to lift the entire fluid column to the top again, causing more fluid to be pulled in from the reservoir through the standing valve. However, when a condition such as a low fluid level or trapped gas prevents the traveling valve from opening properly as the plunger starts downward, the transfer of the contents of the pump to the tubing does not begin at the top of the stroke. The fluid in the tubing descends with the traveling valve, maintaining the load at Lmax, until fluid is encountered or the gas compresses enough to open the traveling valve. Only when the plunger is reimmersed in the fluid does the traveling valve open and fluid transfer occur through it. The maximum and minimum loads can be expressed as

$$L_{\mathrm{max}} = \frac{S_f \cdot 62.4 \cdot D\,(A_p - A_r)}{144} + \frac{\lambda_s\,D\,A_r}{144} + \frac{\lambda_s\,D\,A_r}{144}\cdot\frac{S_p\,N^2\,M}{70471.2}, \tag{1}$$

$$L_{\mathrm{min}} = \frac{S_f \cdot 62.4 \cdot D\,A_r}{144} + \frac{\lambda_s\,D\,A_r}{144} - \frac{\lambda_s\,D\,A_r}{144}\cdot\frac{S_p\,N^2\,M}{70471.2}. \tag{2}$$

The liquid flow rate Q can be expressed as

$$Q = \frac{0.1484\,A_p\,N\,S_p\,E_v}{B_o}. \tag{3}$$

The symbols are listed in Table 1.

Table 1
Symbols.

| Symbol | Remark |
|---|---|
| S_f | Liquid specific gravity |
| D | Sucker rod string length |
| A_p | Plunger area |
| A_r | Rod area |
| λ_s | Specific weight of steel |
| M | Machine factor |
| N | Pump speed, strokes per minute |
| S_p | Stroke length |
| E_v | Pump fillage (efficiency) |
| B_o | Formation volume factor |

In this paper, the surface load is calculated from the hydraulic pressure and the geometry of the hydraulic system. The position is represented by position readings obtained by monitoring the hydraulic fluid flow.
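As a quick numerical check of equations (1)-(3) as reconstructed above, the short Python sketch below simply evaluates the formulas; the example input values are arbitrary placeholders, not data from the studied well, and the use of S_p in the dynamic term follows the symbol table.

```python
# Direct evaluation of equations (1)-(3) as reconstructed above.
# All input values below are arbitrary placeholders, not data from the studied well.
def polished_rod_loads(sf, d_ft, ap_in2, ar_in2, lam_s=490.0, sp_in=100.0, n_spm=8.0, m=1.0):
    """Return (L_max, L_min) in lbf per equations (1) and (2).
    sf: liquid specific gravity, d_ft: rod string length [ft],
    ap_in2/ar_in2: plunger/rod areas [in^2], lam_s: steel specific weight [lb/ft^3],
    sp_in: stroke length [in], n_spm: pump speed [strokes/min], m: machine factor."""
    fluid_load = sf * 62.4 * d_ft * (ap_in2 - ar_in2) / 144.0
    rod_weight = lam_s * d_ft * ar_in2 / 144.0
    dynamic = rod_weight * sp_in * n_spm**2 * m / 70471.2
    l_max = fluid_load + rod_weight + dynamic
    l_min = sf * 62.4 * d_ft * ar_in2 / 144.0 + rod_weight - dynamic
    return l_max, l_min

def surface_rate_bpd(ap_in2, n_spm, sp_in, ev, bo):
    """Equation (3): liquid rate [bbl/day] from pump displacement, fillage Ev, and Bo."""
    return 0.1484 * ap_in2 * n_spm * sp_in * ev / bo

if __name__ == "__main__":
    lmax, lmin = polished_rod_loads(sf=0.9, d_ft=4000, ap_in2=2.25, ar_in2=0.6)
    q = surface_rate_bpd(ap_in2=2.25, n_spm=8.0, sp_in=100.0, ev=0.9, bo=1.1)
    print(f"L_max ≈ {lmax:,.0f} lbf, L_min ≈ {lmin:,.0f} lbf, Q ≈ {q:.0f} BPD")
```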
## 5. Results and Experimental Setup

The most important parameter that indicates the status of the crude oil well is the fluid level, and it must be ensured that this level stays above the pump. In the next stage, we determine the critical value of the casing pressure that pushes the level down below the pump. Therefore, the first part of the experiments in this paper measures the fluid level. After that, the second part is activated by connecting the dedicated IIoT platform to the crude oil well controller and showing the results. These two parts are explained below.

### 5.1. Measuring Fluid Level in Wells Using Echometer Device

The most important parameter is the liquid level in the oil well. It is the process variable that needs to be maintained to ensure the fillage of the pump. This parameter is measured by an echometer device, which uses an ultrasound gun to measure the number of tubing joints down to the liquid level. The results of the echometer instrument can be read using the Total Manager software to calculate the liquid level in the oil well in tubing joints, as shown in Figure 15. Figure 16 shows the raw signal recorded by the echometer microphone and displays the acoustic gunshot start, the tubing joint reflections, and the fluid level kick (which appears in the zoom window). A low-pass filter is applied to the signal to make it easier for the operator to read and to determine the tubing joint kicks and the fluid level kick, as shown in Figure 17.

Figure 15
Echometer final ultrasound response used to count the tube joints.

Figure 16
The raw signal recorded by the echometer to count the tube joints.

Figure 17
Number of tubing joints to the liquid level in the crude oil well, obtained by applying a low-pass filter to the signal.

The depth determination screen is shown in Figure 18, where the joints can be counted easily and zoom tools are used to show the kicks.

Figure 18
Depth determination screen.
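Purely for illustration of the processing described in this subsection (not the Total Manager software itself), the Python sketch below low-pass filters a recorded acoustic trace with a moving average, counts joint reflections as threshold crossings up to a known fluid-level kick, and converts the joint count to a depth. The synthetic trace, sampling layout, threshold, and average joint length are assumptions.

```python
# Illustrative echo-trace processing: moving-average low-pass filter, joint counting,
# and depth estimate. The synthetic trace, threshold, and joint length are assumptions.
import numpy as np

AVG_JOINT_LENGTH_FT = 31.0   # typical tubing joint length (assumed)

def lowpass(signal, window=25):
    """Simple moving-average low-pass filter."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")

def joints_to_level(filtered_trace, fluid_kick_index, threshold):
    """Count tubing-joint reflections as upward threshold crossings before the fluid level kick."""
    segment = np.asarray(filtered_trace[:fluid_kick_index])
    above = segment > threshold
    rising_edges = np.count_nonzero(above[1:] & ~above[:-1])
    return int(rising_edges)

def fluid_level_depth_ft(raw_trace, fluid_kick_index, threshold=0.2):
    filtered = lowpass(np.asarray(raw_trace, dtype=float))
    n_joints = joints_to_level(filtered, fluid_kick_index, threshold)
    return n_joints * AVG_JOINT_LENGTH_FT

# Synthetic example: noise plus a joint echo every 40 samples; fluid kick assumed at sample 4000.
rng = np.random.default_rng(0)
trace = 0.05 * rng.standard_normal(5000)
trace[200:4000:40] += 10.0        # fake joint reflections
print(f"Estimated fluid level: {fluid_level_depth_ft(trace, fluid_kick_index=4000):.0f} ft")
```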
### 5.2. Performance Enhancement by the Proposed IIoT System

Once the system is installed and commissioned, the SCADA system is built, and the IIoT platform is launched. The oil well chart displays fillage (pump efficiency), tubing pressure, jack pump hydraulic pressure, pump speed (strokes per minute), and casing pressure. It is based on the data log recorded by the IIoT platform at a one-second scan rate and stored in CSV files or an ODBC database. Figure 19 shows the one-month oil well chart, which indicates how the gas lock problem affects the pump efficiency that needs to be maintained. For a better view, a higher-resolution 48-hour chart is shown in Figure 20. The drop in pump fillage caused by gas lock, without any control action, is noticeable.

Figure 19
One-month crude oil well chart showing essential parameters.

Figure 20
48-hour oil well chart without control.

Our proposed solution avoids the gas lock effect by stopping the pump for some time to give the reservoir a chance to build up again, which increases the liquid level in the crude oil well. Figure 21 shows the five-day well chart and how the on-off crude oil well pump controller affects pump efficiency.

Figure 21
Five-day well chart with pump on-off control.

With the proposed method of pump operation discussed above, the pump fillage (efficiency) is stable at 90%, as Figure 22 shows.

Figure 22
Five-day pump efficiency chart with pump on-off control.

For a clearer picture, the higher-resolution 48-hour chart is shown in Figure 23, with continuous stability of the pump fillage (efficiency) around 90% after stopping the pump for 6 hours.

Figure 23
48-hour oil well chart after 6 hours of shutdown.

The other solution to avoid the gas lock problem and maintain the oil well level is to control the casing pressure. The pressure controller maintains the casing pressure at a setpoint provided by the experts in real time via the IIoT platform. Figure 24 shows the effect of the casing pressure controller on pump efficiency. The higher-resolution 48-hour chart is shown in Figure 25, while Figure 26 shows a 2-hour chart that gives a detailed view of the behavior of all parameters, with continuous stability of the pump fillage (efficiency) under control via the IIoT platform.

Figure 24
Effect of the casing pressure controller.

Figure 25
48-hour oil well chart with casing pressure controller.

Figure 26
2-hour oil well chart.

With the effectiveness of our proposed system visible in the results, the oil well's production rate should also be tested on other oil wells to confirm the solution's merit. The last three well tests were done before applying the technology proposed in this paper. These tests showed a drop in liquid production rate with unstable pump fillage shortly after the last well workover, which normally indicates that the well needs another workover. The average liquid production rate was 43 BPD of oil and 50 BPD of water, while the expected maximum is 100 BPD of oil. Another test was done after applying the new IIoT system solution, when the pump fillage was stable at 90%: the liquid production rate increased from 43 to 80 BPD of oil, while the water increased from 50 to 110 BPD. The increase in water rate is not an issue, as the water can be reinjected into the reservoir. The results show an improvement in the liquid production rate of the oil well, nearly doubling production and decreasing the workover frequency, thereby reducing workover cost and well-off days. Furthermore, reflecting the true picture of sucker rod pump behavior helps experts make the right decisions.
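The on-off pump control described earlier in this subsection can be summarized in a few lines of logic. The following Python sketch is only an illustration of that behavior under assumed thresholds and timings (fillage threshold, minimum off-time); in the paper the setpoints are provided by experts through the IIoT platform and the controller runs on the SCADAPack 535E.

```python
# Illustrative on-off (pump-off) control logic to avoid gas lock, as described in Section 5.2.
# Threshold and off-time below are assumed example values, not the paper's expert setpoints.
import time

class PumpOffController:
    def __init__(self, fillage_low=0.70, off_time_s=6 * 3600):
        self.fillage_low = fillage_low   # stop the pump when fillage drops below this
        self.off_time_s = off_time_s     # time to let the fluid level build up (e.g., 6 h)
        self.pump_on = True
        self.stopped_at = None

    def step(self, fillage, now=None):
        """Return the desired pump state (True = run) for the latest fillage reading."""
        now = time.time() if now is None else now
        if self.pump_on and fillage < self.fillage_low:
            # Gas lock / fluid pound developing: stop and let the reservoir build up.
            self.pump_on, self.stopped_at = False, now
        elif not self.pump_on and now - self.stopped_at >= self.off_time_s:
            # Off-time elapsed: restart and expect fillage to recover toward ~90%.
            self.pump_on = True
        return self.pump_on

# Example: feed the controller a fillage reading at each scan (times in seconds).
ctrl = PumpOffController()
for t, fillage in [(0, 0.92), (60, 0.65), (3600, 0.0), (6 * 3600 + 120, 0.0), (6 * 3600 + 180, 0.88)]:
    print(t, ctrl.step(fillage, now=t))
```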
## 6. Conclusion

Using the Industrial Internet of Things (IIoT) in the oil and gas industry means intelligent production and opens the door to a new level of optimization and cost-effective operation. This paper shows how this technology successfully conveys a clear picture of the sucker rod pumping well. It introduces an approach to understanding all the required parameters, such as pump efficiency, pump fillage, tubing pressure, casing pressure, hydraulic pressure, and pump speed. The clear picture leads to the right expert responses and transfers this experience to AI prediction models, which predict the control parameters based on the experts' responses. The production rate of the oil well studied in this paper increased by about 90% when applying this new technology. The system prevents the gas lock problem by using an on-off pump controller and maintains the liquid level in the well by controlling the casing pressure with a PID controller, while the experts determine the optimized setpoints for the controllers and the lowest pump off-time. The future of oil and gas lies in the Industrial Internet of Things (IIoT), which will optimize upstream and downstream operations to reduce maintenance costs, improve production, increase reliability, and more.

---
*Source: 1005813-2022-10-07.xml*
2022
# Assessing the Potential of the Strategic Formation of Urban Platoons for Shared Automated Vehicle Fleets

**Authors:** Senlei Wang; Gonçalo Homem de Almeida Correia; Hai Xiang Lin
**Journal:** Journal of Advanced Transportation (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1005979

---

## Abstract

This paper addresses the problem of studying the impacts of the strategic formation of platoons in automated mobility-on-demand (AMoD) systems in future cities. Forming platoons has the potential to improve traffic efficiency, resulting in reduced travel times and energy consumption. However, in the platoon formation phase, coordinating the vehicles at formation locations for forming a platoon may delay travelers. In order to assess these effects, an agent-based model has been developed to simulate an urban AMoD system in which vehicles travel between service points transporting passengers either forming or not forming platoons. A simulation study was performed on the road network of the city of The Hague, Netherlands, to assess the impact on traveling and energy usage by the strategic formation of platoons. Results show that forming platoons could save up to 9.6% of the system-wide energy consumption for the most efficient car model. However, this effect can vary significantly with the vehicle types and strategies used to form platoons. Findings suggest that, on average, forming platoons reduces the travel times for travelers even if they experience delays while waiting for a platoon to be formed. However, delays lead to longer travel times for the travelers with the platoon leaders, similar to what people experience while traveling in highly congested networks when platoon formation does not happen. Moreover, the platoon delay increases as the volume of AMoD requests decreases; in the case of an AMoD system serving only 20% of the commuter trips (by private cars in the case-study city), the average platoon delays experienced by these trips increase by 25%. We conclude that it is beneficial to form platoons to achieve energy and travel efficiency goals when the volume of AMoD requests is high.

---

## Body

## 1. Introduction

Automated vehicles (AVs), also known as self-driving vehicles, bring a unique opportunity for reshaping urban mobility systems, thereby changing the way people travel. Combining electric and automated vehicles with ride-hailing services brings forth new automated mobility-on-demand (AMoD) services in future cities. In AMoD systems, the convergence of vehicle automation, electrification, and shared mobility has the potential to provide safe, economical, efficient, and sustainable urban mobility [1, 2]. However, there are considerable uncertainties about achieving these benefits.

While the large-scale deployment of AMoD systems for urban mobility is still in its infancy, a broad spectrum of research focuses on investigating the potential of operating AMoD systems in different urban or regional application scenarios. One potential application is that AMoD could provide a solution for private car users in urban areas, leading to reduced parking demand, ownership cost, energy consumption, and emissions [3, 4]. Moreover, AMoD services could act as feeders to complement the high-capacity PT systems.
Integrating an AMoD feeder service into the traditional PT system could increase the accessibility of PT services and mitigate traffic externalities (e.g., congestion and emissions) due to increased demand for PT systems [5, 6].

However, taxi-like services offered by AMoD systems in urban areas as competitive alternatives to public transportation (PT) may draw customers away from the traditional PT system. As a result, AMoD services could reduce public transit ridership and cause congestion due to increased vehicle movements (i.e., zero-occupancy movements and movements to serve more demand) [7–9].

Like AMoD services in urban areas, a promising application of AMoD systems is to replace fixed-route and low-frequency buses in areas where demand is scattered and low (e.g., rural areas and industrial parks). Compared to existing bus services, AMoD systems could provide direct services to customers with improved availability and accessibility, and the operating cost of AMoD services in such application areas is much lower than that of conventional bus services [10].

A primary research priority is studying different operational aspects of urban passenger AMoD systems in future cities. Recent advances in vehicle automation have enabled vehicles to drive and connect without human intervention. With the help of connectivity and automation technology, AVs can exchange information for coordinated movements in platoons at closer following distances.

Vehicle platooning has been a popular research theme in recent applications of intelligent transportation systems. The impact of platoon operations on urban traffic has been studied, assuming that AVs are already in platoons. However, intriguing questions arise when introducing vehicle platooning in passenger AMoD systems in which shared, automated, and electric vehicles (SAEVs) provide on-demand services to travelers in urban areas:

(1) What are the impacts of the formation and operation of such urban platoons on the service quality offered to travelers and on traffic efficiency related to road network travel times?

(2) How do changes in traffic conditions caused by platoon operations affect the travel-related energy consumption of traffic participants across the urban road network?

To answer these questions, an agent-based model (ABM) has been developed to provide performance evaluations of forming platoons in urban passenger AMoD systems of the future.

The paper is organized as follows. In Section 2, we summarize the existing literature on platoon operations and the formation of platoons, identify the challenges of forming platoons in urban AMoD systems, and present the main contributions of this paper. Section 3 gives an overview of the modeling framework and discusses the model specifications. A detailed description of the model implementation and its application is provided in Section 4. Section 5 analyzes the simulation results. The main conclusions and policy implications are presented in the final section, and future work directions are recommended.

## 2. Background

Platooning systems have attracted increasing attention with the rapid progress in automated and connected vehicle technologies. Much work has been done to investigate platoon communication technologies and platoon control strategies [11].
Recent literature has focused on platoon planning: at a low level (e.g., trajectory level), detailed platoon maneuvers (e.g., merging and splitting) are designed and simulated [12]; at a high level, planning and optimization of routes and schedules in the platoon formation are studied [13]. Moreover, vehicles with synchronized movement in platoons can have faster reaction times to dangerous situations and fewer human errors, reducing rear-end crashes. For a detailed analysis of platoon safety issues, the reader is referred to the literature reviews by Axelsson [14] and Wang et al. [15]. In this study, we address the problem of forming platoons and assess the travel and energy impact on a future urban mobility system. We herein provide background information about the potential implications of platoon operations on energy consumption and traffic efficiency. Besides, we review the literature on the strategic formation of platoons.

### 2.1. Energy Impact of Platoon Operations on Highways

Platoons of vehicles provide significant potential for energy savings in highway driving. The close-following mechanism can considerably reduce the energy consumed by platoon vehicles to overcome the adverse aerodynamic effect [16]. Several field experiments in research projects, such as the COMPANION project, the PATH platoon research, the SARTRE project, and the Energy ITS project, have been conducted to investigate the potential of platoon operations in reducing energy consumption [17].

### 2.2. Impact of Platoon Operations on Highway and Urban Traffic

Platoon operations can improve highway throughput due to the shorter headways between platoon vehicles [18]. Using communication technologies (e.g., vehicle-to-vehicle or vehicle-to-infrastructure technologies), platoons of vehicles can also smooth out the vehicle-following dynamics on highways [19]. Besides, platoon operations can improve urban road capacity and reduce delays when crossing signalized intersections [20].

### 2.3. The Strategic Platoon Formation on Highways

In the above literature, the energy and traffic studies on platooning systems considered vehicles that are already in platoons and used platoon operations to increase road throughput and reduce energy consumption. Some studies investigated the problem of coordinating vehicles in platoons on highways. Hall and Chin [21] developed different platoon formation strategies to divide vehicles waiting at highway entrance ramps into different groups according to their destinations. Once formed at the highway entrance ramp, platoons remain intact to maximize the platoon driving distance. Saeednia and Menendez [22] studied slow-down and catch-up strategies for merging trucks into a platoon under free-flow traffic. Larsson et al. [23] defined the platoon formation problem as a vehicle routing problem to maximize the fuel savings of platoon vehicles. Studies by Liang et al. [24] and van de Hoef [25] investigated the problem of coordinating many trucks in platoons to maximize fuel savings. In the formation of platoons, trucks can adjust their speed without regard to traffic conditions. Larson et al. [26] developed a distributed control system in which trucks can adjust speed to form platoons to save fuel. Johansson et al. [27] developed two game-theoretic models to study the platoon coordination problem where vehicles can wait at network nodes to form platoons.
In Table 1, we compare the functional components developed in our modeling framework and the associated performance analyses of the AMoD system with those of the studies referred to above.

### 2.4. Challenges for the Platoon Formation in Urban AMoD Systems

The formation of platoons in urban AMoD systems poses several challenges. First, the current state-of-the-art models consider the traffic demand for the platoon formation in an oversimplified way. Travel demand is generated according to trip lengths, destination distributions, and vehicle arrival patterns, and different distributions could be used to generate travel demand while capturing its uncertainty. However, in AMoD systems, the zero-occupancy vehicle trips made to pick up assigned travelers introduce additional uncertainty into the traffic demand on the road network. This uncertainty therefore requires explicit modeling of the interaction between SAEVs and travelers.

Second, existing studies overlook the effect of forming platoons on the travelers inside the platoon vehicles. In the future AMoD system that we are studying, a fleet of SAEVs directly provides on-demand services to travelers between service points. The formation of platoons requires the synchronization of different vehicles at the same location: vehicles may wait for other vehicles to form platoons, causing delays for travelers. The impact of forming platoons on the travelers in the platoon vehicles must therefore be captured.

Third, existing studies investigate the effect of reduced aerodynamic drag via platooning on energy consumption in highway driving. However, due to the higher traffic demand on the urban transport network, the potential for energy efficiency is primarily influenced by traffic conditions rather than by reduced air resistance. Coordinated movements of platoon vehicles could improve traffic throughput; as a result, the energy consumption of traffic participants (SAEVs) will be affected by platoon operations. Moreover, current studies aimed to investigate the traffic impact of platoon vehicles using predefined platoons. Therefore, the impact of forming platoons on travel conditions and on the energy consumption of SAEVs in urban driving needs to be assessed for future scenarios.

Fourth, platoon sizes (the maximum number of vehicles in a platoon) and the maximum time spent forming platoons are not restricted in existing studies. This relaxation can lead to overestimation of platoon driving distances and of the energy savings obtained by forming long platoons. In AMoD systems, forming a long platoon may cost travelers more time when vehicles wait for other vehicles. Setting limits on platoon sizes and on the time spent in the formation can prevent long platoons from disrupting urban traffic and causing long delays for travelers. Therefore, the platoon size restriction and the maximum time spent in the formation of platoons need to be taken into account when coordinating SAEVs into platoons, and the impact of these time and platoon size restrictions on the formation of platoons, on the level of service offered to travelers, and on the energy consumed needs to be studied.

### 2.5. Urban AMoD System Characteristics in Future Cities

The AMoD systems envisaged for the future will probably be available in the 2030s to 2040s, when SAEV fleets have become common and affordable [28, 29]. SAEVs, which in this paper are considered to be purpose-built microvehicles, are intended to cover the entire trips of commuters.
While providing on-demand services for morning commuters in lieu of private cars, SAEVs can be coordinated into platoons at service points. Although purpose-built SAEVs occupy less space, they cannot form platoons just anywhere, because urban driving conditions are characterized by narrow streets and traffic congestion. One idea is to define what in this paper are designated as "service points": platoon formation and dissolution (platoon disassembly) locations across the service area. Examples of service points for the platoon formation in today's urban transportation systems could include public parking garages, public charging stations, petrol stations, empty bus stops, and some parking spaces along the canals in cities.

### 2.6. Research Contributions

This paper aims to develop an agent-based model (ABM) to study the impact of forming platoons in future urban AMoD systems on people's travel and on energy usage. Agent-based modeling is well suited to our research questions. An ABM has the advantage of representing entities at a high resolution; the interaction of entities (e.g., vehicles and travelers) can be captured realistically; and it is flexible enough to model a system at different description levels (e.g., vehicles and the platoons they form), to evaluate different aspects of the system, and to change assumptions (e.g., formation policies) across scenarios. Taking into consideration the limitations of current studies identified above, we summarize the main contributions of this paper as follows.

First, the ABM originally developed in this paper includes a high level of detail. Individual travelers are modeled, and their attributes are initialized according to regional travel demand data and realistic departure time data. The interaction between SAEVs and travel requests is explicitly modeled by a vehicle-to-travelers assignment component, in which SAEV pickup trips and drop-off trips are represented. The modeled interaction between vehicles and travelers captures the uncertainty of traffic demand between areas of origin and destination.

Second, the formation behavior of waiting at service points, defined as the hold-on strategy, is explicitly simulated for platoon leaders and their followers. The platoon formation policies that determine when a group of vehicles leaves a service point as a platoon are the maximum elapsed time of the platoon leader and the maximum platoon size; either policy can trigger the release of a platoon. The ABM simulates the platoon formation operations of vehicles, which allows us to measure the impact of forming platoons on travelers. Moreover, the formed platoons are flexibly represented with specified information (e.g., the platoon route, the vehicle sequence, and the speed) at an aggregate level to model platoon driving and its impact on traffic conditions.

Third, a mesoscopic traffic simulation model is used to represent the traffic dynamics throughout the road network. The mesoscopic traffic simulation model simulates each vehicle's movement, while a macroscopic speed-density relationship is used to govern congestion effects. The traffic simulation model can incorporate the impact of all SAEV trips, including unoccupied pickup trips and occupied drop-off trips, on the traffic over the road network.
Furthermore, the relationship established between road capacity and platoon characteristics is used to assess the impact of formed platoons on traffic conditions.

Fourth, an energy consumption model is linked with the mesoscopic traffic model to efficiently calculate the energy consumed by individual SAEVs for travelers' trips. It can also produce an energy estimate for intended trips, thus ensuring that the assigned SAEVs have sufficient power to complete their journeys.

The travel and energy potential of forming platoons under different formation policies and demand levels in AMoD systems is assessed on the urban road network of the case-study city, The Hague, Netherlands, through a set of defined key performance indicators (KPIs).
## 3. Model Specifications

For building the ABM, we introduce the following main assumptions regarding the platoon formation of SAEVs in AMoD systems:

(i) All travel demand is produced and attracted between locations designated as service points, which are connected to the network nodes. Service points are thus the locations where travelers can be picked up or dropped off by a vehicle. This is reasonable for the situation where many service points are designated in a service area.

(ii) Vehicles wait at service points to form platoons instead of using slow-down and catch-up strategies. The major drawbacks of slow-down and speed-up strategies are that urban traffic flow can be disrupted by slowly driving vehicles and that accelerating vehicles may violate urban road speed limits. Moreover, slow-down and speed-up strategies are difficult to apply in urban driving, which is characterized by one or two lanes per direction and traffic congestion.

(iii) There are enough parking places for SAEVs to form a platoon at the service points. SAEVs are purposely designed to be space-saving microvehicles (with the Renault Twizy as the reference model), and the platoon size is restricted.

(iv) We consider a future scenario in which AMoD services serve all private car trips in an urban area; the usage of conventional vehicles is not considered.

The framework presented in Figure 1 includes a fleet management center and a traffic management center. The fleet management center mainly matches vehicles with travelers and coordinates the formation of platoons. The traffic management center primarily represents the network traffic dynamics and finds the time-dependent shortest routes for vehicles based on the current network traffic conditions. The fleet management and traffic management components capture the interactions between the different system components. The modeling framework can evaluate system performance with regard to the defined KPIs based on realistic travel demand data and the existing road network.

Figure 1: The conceptual simulation framework.

The model assumes that OD trip demand and aggregated departure times are given. The demand generator in the simulation model generates individual travel requests with an origin location, a destination location, and a request time according to the given OD matrix and departure time distribution. Based on real-time information about the travel requests, the vehicle assignment component matches the available vehicles with incoming travel requests. Once the assignment has been made, the information on the traveler's location is sent to the assigned vehicle, and the traveler is notified of the vehicle details. The assigned vehicle is then dispatched to pick up the traveler, and its state transitions from idle to in-service.
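To make the demand-generation step concrete, the following is a minimal Java sketch of how individual travel requests could be sampled from an aggregate OD matrix and a departure-time distribution. It is an illustration rather than the model's AnyLogic implementation, and the class and member names (TravelRequest, odTrips, timeFractions) are our own assumptions.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

/** Minimal sketch: expand an aggregate OD matrix into individual, time-stamped travel requests. */
public class DemandGenerator {

    /** One traveler's request: origin zone, destination zone, and request time in minutes. */
    public record TravelRequest(int originZone, int destZone, double requestTimeMin) {}

    private final Random rng = new Random(42);

    /**
     * @param odTrips       odTrips[o][d] = number of trips from zone o to zone d over the peak period
     * @param timeFractions fraction of trips departing in each 15-minute interval (sums to 1)
     * @param startMin      start of the simulated period in minutes (e.g., 5:30 am = 330)
     */
    public List<TravelRequest> generate(int[][] odTrips, double[] timeFractions, double startMin) {
        List<TravelRequest> requests = new ArrayList<>();
        for (int o = 0; o < odTrips.length; o++) {
            for (int d = 0; d < odTrips[o].length; d++) {
                for (int k = 0; k < odTrips[o][d]; k++) {
                    int interval = sampleInterval(timeFractions);
                    // Uniform departure time within the sampled 15-minute interval.
                    double t = startMin + 15.0 * (interval + rng.nextDouble());
                    requests.add(new TravelRequest(o, d, t));
                }
            }
        }
        requests.sort((a, b) -> Double.compare(a.requestTimeMin(), b.requestTimeMin()));
        return requests;
    }

    /** Sample a departure interval index proportionally to the given fractions. */
    private int sampleInterval(double[] fractions) {
        double u = rng.nextDouble(), cum = 0.0;
        for (int i = 0; i < fractions.length; i++) {
            cum += fractions[i];
            if (u <= cum) return i;
        }
        return fractions.length - 1;
    }
}
```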
The traffic management center provides the time-varying traffic conditions, which form the basis for subsequent route calculations. A mesoscopic traffic simulation model is used to represent traffic patterns over the road network, which are captured by simulating the movement of SAEVs along their routes as they carry out travelers' journeys. The traffic simulation model manages static and dynamic information to determine the current network traffic conditions. The static inputs to the traffic simulation model are the traffic network representation, including links and nodes, traffic capacity, free-flow speed, and road length, while the dynamic information concerns the road segments on which individual vehicles and/or platoon vehicles are traveling. Based on the current network traffic conditions provided by the traffic simulation component, the time-dependent shortest routes between points are computed, each route being an ordered sequence of road segments to be traversed.

The energy consumption model estimates the energy consumption of individual vehicles over the road network; the energy consumption of an individual vehicle is computed as a function of the link travel speed. The charging component is responsible for finding charging points for low-battery vehicles. Vehicles can be charged at every service point after completing a traveler's journey, and the time delay due to the charging operations is taken into account.

The platoon formation component in the fleet management center coordinates in-service vehicles into existing platoons at designated service points according to their destinations. A new platoon can also be initiated when one of the grouped (in-service) vehicles arrives at the formation location. Once a platoon agent is created, it manages the information about the platoon plan, including the platoon route, the number of platoon vehicles, the platoon speed, and the assigned leader and its followers with the determined vehicle sequence. The traffic simulation model in the traffic management center can account for the impact of the operations of formed platoons on traffic dynamics. Figure 2 illustrates the platoon formation and its potential impacts. Detailed descriptions of these functionalities are given in the following sections.

Figure 2: An illustration of the platoon formation and potential impacts.

### 3.1. Energy Consumption of the SAEVs

Existing studies estimate the energy consumption of electric vehicles at the network level as a function of travel distance, which means translating the kilometers driven into an estimate of energy consumed [30, 31]. However, the strong correlation between energy consumption and vehicle speed is not considered. We estimate the energy consumption of SAEVs and account for traffic congestion by making it a function of the experienced travel speed. The energy model is linked to a mesoscopic traffic simulation model in which the effect of forming platoons on traffic conditions is considered; it is thus capable of accounting for the effect of platoon driving. The energy consumption model contains a set of regression models for different vehicle types. These regression models are used to calculate the energy consumption associated with one vehicle traversing a road segment, based on the speed of the vehicle and the length of the road segment. The calculation method is as follows. First, the average speed of an individual SAEV traversing the corresponding road segment is calculated.
Second, the energy consumed by the SAEV per unit distance is estimated using the regression model in equation (1), which describes the relationship between energy consumption and travel speed. Third, the total energy consumption on the route between the origin and destination is calculated as the sum of the energy consumed by the individual SAEV on each road segment, as shown in equation (2):

$$E_i = \alpha + \beta S_i + \gamma S_i^2, \quad (1)$$

where $\alpha$, $\beta$, and $\gamma$ are coefficients, $S_i$ is the travel speed of an individual SAEV traversing road segment $i$, and $E_i$ is the energy consumption per unit distance on that segment. The total energy consumption of each SAEV to complete a pickup trip or drop-off trip is then

$$E_t = \sum_{i=1}^{n} E_i L_i, \quad (2)$$

where $n$ is the total number of road segments between the locations (e.g., between the location of the assigned vehicle and the origin of the traveler, or between the origin of the traveler and his/her destination), $L_i$ is the length of road segment $i$, and $E_t$ is the total energy consumption of an SAEV to complete the pickup or drop-off trip. We estimate the energy consumption of different types of vehicles; each vehicle type corresponds to a regression model derived from laboratory dynamometer tests [32]. The coefficients for the different vehicle types are given in Table 2 in Section 4, where the application of the model is presented.

Table 1: Comparison of the strategic platoon formation studies at the route level. The table compares Hall and Chin [21], Larson et al. [26], Saeednia and Menendez [22], Larsson et al. [23], Liang et al. [24], van de Hoef [25], Johansson et al. [27], and our approach in terms of modeling components (many vehicles; road network level; demand and supply interaction; mixed traffic; platoon policies: platoon sizes and formation time constraints; coordination strategies: speed adjustment (slow down or catch up) and the hold-on strategy) and impact analysis (traffic throughput of platoon vehicles; energy consumption due to traffic and aerodynamics; service level, i.e., waiting and travel times).

### 3.2. Real-Time Vehicle Assignment

The vehicle assignment component assigns available vehicles to serve travelers as travel requests come in; the requests are generated according to the aggregate travel demand (explained in Section 4.2). The vehicle assignment component assigns the nearest available SAEV with enough battery power to serve a traveler to his/her destination. For that to happen, there must be a real-time estimate of how much energy is needed to serve that traveler, computed for each candidate vehicle based on its particular vehicle type.

The process of finding available vehicles for travel requests goes as follows. First, the energy consumption of an individual vehicle to complete the intended trip is estimated based on the energy function; the estimate of energy spent on transporting the intended traveler is calculated using equation (3). Second, based on this estimate, the available vehicles with sufficient remaining battery capacity to undertake the traveler's journey are filtered from the group of idle vehicles. Finally, the vehicle located at the shortest Euclidean distance within the search radius is chosen from the filtered pool of available vehicles:

$$E_e = \eta E_t, \quad (3)$$

where $\eta$ is a safety coefficient used to ensure that the estimated energy for a traveler's intended trip is not less than the actual energy consumed by the vehicle to complete the trip, which may increase if traffic conditions change, and $E_e$ is the estimated energy required by an individual vehicle to complete the trip of a traveler. Equation (3) estimates the energy needed to complete a traveler's trip based on the link travel speeds at the moment the traveler calls the service, whereas the actual energy consumed uses the experienced speeds of the vehicle in equation (2) to calculate the energy spent after completing the traveler's trip. A proper estimate of the energy needed to complete the trip of an intended traveler ensures that the assigned vehicle has sufficient battery capacity to reach the traveler's destination.

Once an available vehicle with sufficient remaining energy is assigned to a traveler, the time-dependent shortest path (lowest duration) from the current vehicle location to the traveler's location is computed. After the vehicle arrives at the pickup location, the time-dependent shortest path from the traveler's location to the destination is determined. The computation of time-dependent shortest routes is based on the Dijkstra algorithm.
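As an illustration of equations (1)–(3), the following Java sketch computes the per-unit-distance consumption for a segment speed, sums it over a route, and applies the safety coefficient η used at assignment time. It is a simplified stand-in for the model's energy component; the class and record names are assumptions, and the example coefficients are the Nissan SV row of Table 2.

```java
import java.util.List;

/** Sketch of the speed-based energy model in equations (1)-(3). */
public class EnergyModel {

    /** A road segment with its length (km) and the vehicle's average traversal speed (km/h). */
    public record Segment(double lengthKm, double speedKmh) {}

    // Regression coefficients for one vehicle type (here, the Nissan SV row of Table 2).
    private final double alpha, beta, gamma;

    public EnergyModel(double alpha, double beta, double gamma) {
        this.alpha = alpha; this.beta = beta; this.gamma = gamma;
    }

    /** Equation (1): energy per unit distance as a quadratic function of speed. */
    public double perUnitDistance(double speedKmh) {
        return alpha + beta * speedKmh + gamma * speedKmh * speedKmh;
    }

    /** Equation (2): total energy of a pickup or drop-off trip, summed over its segments. */
    public double tripEnergy(List<Segment> route) {
        double total = 0.0;
        for (Segment s : route) {
            total += perUnitDistance(s.speedKmh()) * s.lengthKm();
        }
        return total;
    }

    /** Equation (3): assignment-time estimate with safety coefficient eta (eta >= 1). */
    public double estimatedEnergy(List<Segment> route, double eta) {
        return eta * tripEnergy(route);
    }

    public static void main(String[] args) {
        EnergyModel nissanSv = new EnergyModel(479.1, -18.93, 0.7876);
        List<Segment> route = List.of(new Segment(1.2, 35.0), new Segment(0.8, 50.0));
        System.out.printf("Trip energy: %.1f, estimate with eta = 3.05: %.1f%n",
                nissanSv.tripEnergy(route), nissanSv.estimatedEnergy(route, 3.05));
    }
}
```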
### 3.3. Mesoscopic Traffic Simulation

The modeling framework for the proposed system needs to simulate the operations of many vehicles transporting all of the city's private car commuters over a realistic urban road network. A mesoscopic traffic simulation model that includes link movement and node transfer is therefore incorporated into the agent-based modeling framework [33, 34]. The mesoscopic traffic simulation model combines a microscopic representation of individual vehicles with a macroscopic description of the traffic patterns. In the link movement, vehicular movements are simulated, and vehicle speed on a road segment is updated according to an established macroscopic speed-density relationship. A modified Smulders speed-density relationship (equation (4)) is used to update the vehicle speed based on the link density:

$$v(k) = \begin{cases} v_0\left(1 - \dfrac{k}{k_j}\right), & k < k_c, \\[4pt] \gamma\left(\dfrac{1}{k} - \dfrac{1}{k_j}\right), & k \ge k_c, \end{cases} \quad (4)$$

where $k$ is the link traffic density, $v(k)$ is the speed determined by the traffic density $k$, $v_0$ is the free-flow speed, $k_c$ is the link critical density, $k_j$ is the link jam density, and $\gamma$ is a parameter whose value can be derived as $\gamma = v_0 k_c$.

Node transfer means that vehicles transfer between adjacent road segments. A vehicle moving from an upstream link (road segment) to a downstream link follows the defined rules:

(1) The vehicle is at the head of the upstream link queue; in other words, there are no preceding vehicles stacked in the waiting queue.

(2) The number of outflow vehicles is checked to determine whether a vehicle can leave the road segment it is traversing.

(3) The number of storage vehicles is checked to determine whether the downstream link has enough storage units to accommodate the upcoming vehicle.

The mesoscopic traffic simulation model, including link movement and node transfer, provides the required level of detail in estimating the speeds and travel times of individual vehicles on the network while balancing the trade-off between computational cost and traffic model realism. A platoon that includes multiple platoon vehicles is treated as a single platoon entity: the rules for the movement of individual vehicles are applied to individual platoons, and the platoon properties (e.g., the number of platoon vehicles) are taken into account in the node transfer.
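For concreteness, the sketch below expresses the modified Smulders relationship of equation (4) and the three node-transfer checks in plain Java. It is a simplified illustration under the definitions above, not the mesoscopic simulator itself, and the method and field names are assumptions.

```java
/** Sketch of the link speed update (equation (4)) and the node-transfer admissibility checks. */
public class LinkDynamics {

    private final double freeFlowSpeed;   // v0 (km/h)
    private final double criticalDensity; // kc (veh/km)
    private final double jamDensity;      // kj (veh/km)
    private final double gamma;           // derived as v0 * kc

    public LinkDynamics(double v0, double kc, double kj) {
        this.freeFlowSpeed = v0;
        this.criticalDensity = kc;
        this.jamDensity = kj;
        this.gamma = v0 * kc;
    }

    /** Equation (4): speed as a function of the current link density k. */
    public double speed(double k) {
        if (k < criticalDensity) {
            return freeFlowSpeed * (1.0 - k / jamDensity);
        }
        return gamma * (1.0 / k - 1.0 / jamDensity);
    }

    /**
     * Node transfer: a vehicle (or a platoon counted with its size) may move to the downstream link
     * only if (1) it heads the upstream queue, (2) the upstream outflow budget allows it, and
     * (3) the downstream link has enough free storage units.
     */
    public static boolean canTransfer(boolean headOfQueue, int remainingOutflow,
                                      int downstreamFreeStorage, int vehicleOrPlatoonSize) {
        return headOfQueue
                && remainingOutflow >= vehicleOrPlatoonSize
                && downstreamFreeStorage >= vehicleOrPlatoonSize;
    }
}
```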
### 3.4. Traffic Simulation for Platoon Vehicles

In the literature, the strategic platoon formation has been studied while ignoring traffic (as shown in Table 1, the comparison of the platoon formation studies at the route level). We fill this gap by developing a simulation component for the mixed operation of platoon AVs and nonplatoon AVs on top of the mesoscopic traffic simulation. This functional component can capture the traffic impact of forming platoons across the road network. A relationship between road capacity and different penetration rates of platoon AVs is established to assess the impact of the platoon formation on traffic conditions. Chen et al. [35] proposed a formulation that describes the relation between platoon characteristics, including the proportion of platoon vehicles and the intervehicle spacing level, and the macroscopic capacity. The formulation reveals how the single-lane capacity changes for different penetration rates of platoon AVs. The derived macroscopic capacity formulation for mixed traffic solves the problem of determining the macroscopic traffic variables (used in the mesoscopic traffic simulation) based on platoon characteristics. Therefore, the macroscopic capacity formulation can be combined with the mesoscopic traffic simulation, which applies the macroscopic speed-density function to govern the movement of the vehicles in our simulation methodology. The single-lane capacity is expressed as

$$C_c = \frac{C_a}{1 - \dfrac{N}{M+N}\,(1-\alpha)\left(1 - \dfrac{L}{N}\right)}, \quad (5)$$

where $C_a$ denotes the lane capacity when all vehicles travel regularly, $L$ is the number of leaders, $N$ is the total number of platoon vehicles, $M$ is the total number of regular driving vehicles (i.e., AVs that are not in platoons), and $\alpha$ is the ratio of platoon spacing to regular spacing.

Table 2: Coefficients in the regression model for different vehicle types.

| Vehicle type | α | β | γ |
| --- | --- | --- | --- |
| Nissan SV | 479.1 | −18.93 | 0.7876 |
| Kia | 468.6 | −14.63 | 0.6834 |
| Mitsubishi | 840.4 | −55.312 | 1.670 |
| BMW | 618.4 | −31.09 | 0.9916 |
| Ford | 1110 | −96.61 | 2.745 |
| Chevrolet | 701.2 | −35.55 | 1.007 |
| Smart | 890.8 | −43.12 | 1.273 |
| Nissan 2012 | 715.2 | −38.10 | 1.271 |

As shown in equation (5), the capacity $C_c$ depends on the penetration rate of platoon vehicles, $\varphi = N/(M+N)$, and on the number of leaders $L$. A smaller spacing between platoon vehicles allows an increase in the lane capacity, and the lane capacity increases as the penetration rate of platoon vehicles $\varphi$ increases. Moreover, for the same number of platoon vehicles $N$, the more leaders $L$ are created, the smaller the capacity increase.

We use the following definitions of the different critical spacing types according to the operational characteristics of vehicle platooning. The critical spacing when vehicles travel regularly (e.g., AVs that are not in platoons) is defined as $d_a$. We define $d_p = \alpha d_a$, where $0 < \alpha < 1$. We assume that the critical spacing between a platoon vehicle and a regular driving vehicle that is not in a platoon is also $d_a$. Notice that regular driving AVs that are not in platoons follow the regular driving distances of conventional vehicles, while platoon vehicles move at a reduced spacing. The formulation of the capacity of one lane (for one direction) shows how the capacity changes with the penetration rate of platoon vehicles and the number of leaders or platoons (each platoon has one leader). The detailed derivations of equation (5) can be found in the Appendix.
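The following small Java sketch evaluates equation (5) as reconstructed above, adjusting the regular lane capacity Ca for a given number of platoon vehicles N, regular vehicles M, leaders L, and spacing ratio α. It is our own illustration of the formulation attributed to Chen et al. [35], with assumed method and parameter names.

```java
/** Sketch of the mixed-traffic lane capacity adjustment in equation (5). */
public final class PlatoonCapacity {

    private PlatoonCapacity() {}

    /**
     * @param regularCapacity Ca, lane capacity when all vehicles drive regularly (veh/h/lane)
     * @param platoonVehicles N, number of vehicles currently in platoons on the lane
     * @param regularVehicles M, number of vehicles not in platoons
     * @param leaders         L, number of platoon leaders (one per platoon)
     * @param spacingRatio    alpha = dp / da, with 0 < alpha < 1
     */
    public static double adjustedCapacity(double regularCapacity, int platoonVehicles,
                                          int regularVehicles, int leaders, double spacingRatio) {
        if (platoonVehicles == 0) {
            return regularCapacity; // no platoons, capacity unchanged
        }
        double penetration = (double) platoonVehicles / (platoonVehicles + regularVehicles); // phi
        double followerShare = 1.0 - (double) leaders / platoonVehicles;                     // 1 - L/N
        return regularCapacity / (1.0 - penetration * (1.0 - spacingRatio) * followerShare);
    }

    public static void main(String[] args) {
        // Example: 40% of vehicles platooned in platoons of four (L/N = 0.25), alpha = 0.5.
        System.out.println(adjustedCapacity(1200, 40, 60, 10, 0.5)); // roughly 1412 veh/h/lane
    }
}
```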
### 3.5. Platoon Formation Mechanism

Spontaneous or on-the-fly platoon formation without proper prior planning can cause a high frequency of joining and leaving operations by the vehicles, which might disrupt traffic and decrease safety [27, 36]; this type of platooning might also not ensure a high rate of in-platoon driving. In AMoD systems, many SAEVs are assigned to take travelers from place to place (between service points) in urban areas and are therefore continuously routed to different destinations. The platoon formation for a fleet of vehicles that provides on-demand transport is more effective if done in a coordinated way. SAEVs can be coordinated into a platoon using the hold-on strategy while providing direct on-demand service between the service points designated as the platoon formation locations over the AMoD network.

The formation behaviors of platoon-participating vehicles are realistically represented. In coordinating vehicles for the platoon formation, a vehicle can be assigned to an existing platoon as a follower, or it can be connected to other vehicles to initiate a new platoon, either as a platoon leader or as a follower. In the first case, vehicles arriving at a service point are assigned to an existing platoon according to the destinations of the travelers assigned to them. In the second case, there are no existing platoons at the service point, or the arriving vehicles cannot be assigned to an existing platoon; the arriving vehicles are then divided into different groups, and for each group at a service point, the first vehicle to arrive is designated as the platoon leader. Once a platoon leader is assigned, the platoon is initiated.

Algorithm 1: Pseudocode for the formation of platoons.

INPUT: information about a list of arriving vehicles A = {a0, a1, …, am}. The information Z_ai for vehicle ai is represented by the set {ai, ri, oi, di}, where origin oi is the service point that vehicle ai is moving towards, destination di is the next service point, and ri is the shortest route between oi and di.
FOR each arriving vehicle ai in the set A
  Compare the information {ai, oi, di}, i = 1, 2, …, m, to the information Z_pj = {pj, r_pj, o_pj, d_pj}, j = 1, 2, …, n, of the leaders P = {p0, p1, …, pn} of existing platoons
  IF (Z_ai(oi, di) == Z_pj(o_pj, d_pj)) AND (platoon size sj of the platoon pj is not reached)
    Add vehicle ai to the platoon pj as a follower;
    Adjust the vehicle's shortest route ri to the platoon shortest route r_pj;
    Remove vehicle ai from the set A;
  ENDIF
  Continue
  FOR each arriving vehicle aX in the set A
    IF (ai is not connected to aX) AND (ai ≠ aX) AND (Z_ai(oi, di) == Z_aX(oX, dX)) AND (the number of connected vehicles for ai < platoon size V)
      ai and aX are paired, and the connection between ai and aX is established;
      IF aX is not in the destination group d of vehicle ai
        Let the vehicle aX join the destination group d;
      ENDIF
    ENDIF
  ENDFOR
  Remove vehicle ai from the set A;
  Vehicles that are not paired move as individual vehicles;
ENDFOR
OUTPUT: platoons of vehicles and regular driving vehicles that are not in platoons.

The hold-on strategy of the platoon leader is used to organize arriving vehicles (in-service vehicles with passengers) into platoons at a service point according to their destinations. Coordinating empty SAEVs (those assigned to pick up passengers) could increase travelers' out-of-vehicle waiting times, leading to great discomfort; hence, only in-service vehicles are coordinated into platoons. The hold-on time of a platoon leader is the time from the moment the leader starts to wait for other vehicles until the moment the platoon is formed and starts to move. The release of a platoon (the moment when it departs) depends not only on the number of vehicles that it has (there is a maximum number of vehicles in a platoon) but also on the time that the platoon leader has been waiting. That is, the release of a platoon is triggered by reaching either the maximum platoon size or the maximum hold-on (waiting) time of the platoon leader. We denote the time threshold of platoon leaders as T and the maximum number of platoon vehicles as V. The physical constraints of road segments directly set a threshold for the number of vehicles in a platoon. Algorithm 1 details the platoon formation mechanism.
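The release condition described above (maximum hold-on time T of the leader or maximum platoon size V, whichever is reached first) can be stated compactly; the Java fragment below is an illustrative sketch with hypothetical field names.

```java
/** Sketch of the platoon release rule: depart when the leader's hold-on time exceeds T
 *  or the platoon has reached its maximum size V, whichever happens first. */
public class ReleasePolicy {

    private final double maxHoldOnMinutes; // time threshold T of the platoon leader
    private final int maxPlatoonSize;      // maximum number of platoon vehicles V

    public ReleasePolicy(double maxHoldOnMinutes, int maxPlatoonSize) {
        this.maxHoldOnMinutes = maxHoldOnMinutes;
        this.maxPlatoonSize = maxPlatoonSize;
    }

    /**
     * @param leaderWaitMinutes time the leader has been waiting at the service point
     * @param currentSize       number of vehicles (leader plus followers) currently in the platoon
     */
    public boolean shouldRelease(double leaderWaitMinutes, int currentSize) {
        return leaderWaitMinutes >= maxHoldOnMinutes || currentSize >= maxPlatoonSize;
    }
}
```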
The formation approach uses global knowledge about all vehicles arriving at each service point to assign them to an existing or newly created platoon. The vehicle sequence in a platoon is determined by the arrival time of each vehicle at the platoon formation location. The platoon leader makes the decision, on behalf of the followers, to trigger the platoon release. Once a SAEV is assigned to serve a traveler, it knows the origin and destination of the traveler, and its shortest route is calculated using the Dijkstra algorithm. Within a platoon, the followers adjust their routes from their original shortest routes to the shortest route of the platoon leader. A plan is created for each formed platoon, including the platoon ID, the leader and its followers, the platoon route (origin, destination, and road segments), and the vehicle sequence in the formed platoon (see Algorithm 2). Once a platoon arrives at the travelers' destination service point, all platoon vehicles are detached from the platoon (arriving vehicles are grouped according to their destinations, so the vehicles in a platoon share the same destination service point) and then drop off their passengers at that service point; the state of each vehicle transitions from in-service to idle.

Notice that the volume of AMoD services can be high during the morning hours, which leads to many arriving (in-service) vehicles at a service point, and that platoon sizes are restricted in urban driving conditions. Many platoons can therefore be formed by grouping vehicles with the same destination, whereas grouping vehicles with different destinations may cause detours and work against the vehicles that share a destination. Considering the urban formation locations, the driving conditions, and the high demand during peak hours, we did not model the scenario in which a vehicle with a different destination detaches from a platoon to drop off a passenger while the other platooned vehicles continue to the next service points.

Algorithm 2: Pseudocode for determining platoon plans.

INPUT: groups of vehicles
FOR the grouped vehicles in each destination group D
  Determine the leader for the grouped vehicles dk ∈ D;
  Initiate a platoon pk according to the platoon leader's information (location and shortest route);
  Assign the other vehicles in the group to the new platoon as followers;
  Determine the vehicle sequence according to the arrival time;
  Adjust the shortest routes of the followers in pk to the shortest route of the platoon leader r_pk;
ENDFOR
OUTPUT: platoon plans, each including the platoon ID, a leader and its followers, the platoon route, and the vehicle sequence.
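To illustrate the logic of Algorithm 2, the sketch below groups arriving vehicles by destination service point, appoints the earliest arrival as the leader, and records a simple platoon plan. The class names (Vehicle, PlatoonPlan, PlatoonPlanner) and fields are illustrative assumptions rather than the model's actual data structures, and the size limit V is assumed to have been enforced already during grouping (Algorithm 1).

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Sketch of Algorithm 2: build platoon plans from vehicles grouped by destination. */
public class PlatoonPlanner {

    /** An in-service vehicle heading to a destination service point. */
    public record Vehicle(String id, int destinationPoint, double arrivalTime, List<Integer> route) {}

    /** A platoon plan: identifier, leader, ordered followers, and the shared route. */
    public record PlatoonPlan(int platoonId, Vehicle leader, List<Vehicle> followers, List<Integer> route) {}

    public List<PlatoonPlan> buildPlans(List<Vehicle> arrivingVehicles) {
        // Group arriving vehicles by their destination service point.
        Map<Integer, List<Vehicle>> groups = new HashMap<>();
        for (Vehicle v : arrivingVehicles) {
            groups.computeIfAbsent(v.destinationPoint(), k -> new ArrayList<>()).add(v);
        }

        List<PlatoonPlan> plans = new ArrayList<>();
        int nextId = 0;
        for (List<Vehicle> group : groups.values()) {
            // The vehicle sequence follows arrival time; the first arrival becomes the leader.
            group.sort(Comparator.comparingDouble(Vehicle::arrivalTime));
            Vehicle leader = group.get(0);
            List<Vehicle> followers = new ArrayList<>(group.subList(1, group.size()));
            // Followers adopt the leader's shortest route.
            plans.add(new PlatoonPlan(nextId++, leader, followers, leader.route()));
        }
        return plans;
    }
}
```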
Notice that the volume of AMoD services could be high during morning hours, which leads to many arriving vehicles (in-service vehicles) at a service point. Also, there are platoon size restrictions in urban driving conditions. Many platoons can therefore be formed by grouping vehicles with the same destination, whereas organizing vehicles with different destinations into one platoon may cause detours and work against the vehicles that share a destination. Considering the urban formation locations, driving conditions, and the high demand during peak hours, we did not model the scenario where a vehicle with a different destination detaches from a platoon to drop off a passenger while the other platooned vehicles continue to the next service points.

Algorithm 2: Pseudocode for determining platoon plans.

INPUT: groups of vehicles
FOR the grouped vehicles in each destination group D
    Determine the leader for the grouped vehicles dk ∈ D;
    Initiate a platoon pk according to the platoon leader's information (location and shortest route);
    Assign the other vehicles in the group to the new platoon as followers;
    Determine the vehicle sequence according to the arrival time;
    Adjust the shortest routes of the followers in pk to the shortest route of the platoon leader rpk;
ENDFOR
OUTPUT: platoon plans, including the platoon ID, a leader and its followers, a platoon route, and the vehicle sequence
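A compact Java-style sketch of this grouping step is shown below. It assumes hypothetical Vehicle and PlatoonPlan types, mirrors the pseudocode (the earliest arrival becomes the leader and the followers adopt the leader's route), and omits the platoon size limit for brevity.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

/** Illustrative platoon-plan creation per destination group; the types are hypothetical. */
class PlatoonPlanner {

    record Vehicle(String id, String destination, double arrivalTime, List<String> route) {}

    record PlatoonPlan(String platoonId, Vehicle leader, List<Vehicle> followers, List<String> route) {}

    static List<PlatoonPlan> buildPlans(List<Vehicle> arrivals) {
        List<PlatoonPlan> plans = new ArrayList<>();
        // Group the arriving vehicles by their destination service point.
        Map<String, List<Vehicle>> groups = arrivals.stream()
                .collect(Collectors.groupingBy(Vehicle::destination));
        int nextId = 0;
        for (List<Vehicle> group : groups.values()) {
            // The vehicle sequence follows arrival time; the first arrival is the leader.
            List<Vehicle> ordered = new ArrayList<>(group);
            ordered.sort(Comparator.comparingDouble(Vehicle::arrivalTime));
            Vehicle leader = ordered.get(0);
            List<Vehicle> followers = ordered.subList(1, ordered.size());
            // Followers adopt the leader's shortest route; the list order is the vehicle sequence.
            plans.add(new PlatoonPlan("P" + (nextId++), leader, List.copyOf(followers), leader.route()));
        }
        return plans;
    }
}
```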
## 4. Model Application

The detailed conceptual framework is implemented in the AnyLogic multimethod simulation modeling platform and coded in the Java programming language. The data used in the simulation experiment are explained below.

### 4.1. The Topology of the Road Network in The Hague

Figure 3 displays the road network of the Zuidvleugel region (around Rotterdam and The Hague). The blue color indicates the part of the road network that is used for the simulation study, which includes eight districts of The Hague and the towns of Voorburg, Rijswijk, and Wateringen. The dots are the centroids of the traffic analysis zones (TAZs), which are the origins and destinations of all travel requests. The data containing the aggregated OD matrix, the departure time distribution, and information about the study area centroids and the road network are exported from the OmniTRANS transport planning software.

Figure 3 Road network of The Hague in the Zuidvleugel road network.

### 4.2. Detailed Travel Demand

The OD trip table containing a total of 27,452 trips made by cars is used as the input to generate time-dependent travel requests. The OD trip table specifies travel demand between TAZs in the AM peak hours over the study area. The departure time fractions shown in Figure 4 are used to calculate the number of trips between OD pairs per 15-minute time interval from 5:30 am to 10:00 am. A demand generator (see Appendix) generates time-dependent travel requests based on the aggregate travel demand. Individual travel requests are characterized by the origin zone, the destination zone, and the time of the request.

Figure 4 Departure time fractions for 18 time intervals from 5:30 am to 10:00 am.

### 4.3. Simulation Parameters

The traffic parameters provide information about the traffic flow characteristics of the regular driving vehicles (that are not in platoons). In platoon driving, the intervehicle distance (dp) is determined based on field experiments [37, 38]. We test different platoon formation strategies and compare their performance while treating the parameter dp as fixed.

Table 3 Summary of traffic-related parameter values for different road types.

| Road type | Capacity (veh/h/lane) | Free-flow speed (km/h) | Saturation flow (veh/h/lane) | Speed at capacity (km/h) | Jam density (veh/km) |
|---|---|---|---|---|---|
| Urban road 1 | 1200 | 50 | 1200 | 35 | 120 |
| Urban road 2 | 1200 | 50 | 1200 | 35 | 120 |
| Urban road 3 | 1575 | 50 | 1575 | 35 | 120 |
| Urban road 4 | 1600 | 50 | 1600 | 35 | 120 |
| Urban road 5 | 1633 | 50 | 1633 | 35 | 120 |
| Rural road | 1350 | 50 | 1350 | 35 | 120 |
| Local road | 900 | 50 | 900 | 35 | 120 |
| Local road | 900 | 30 | 900 | 25 | 120 |

The vehicle models used for the energy estimation are these commonly sold electric vehicles: Nissan Leaf SV 2013, Kia Soul Electric 2015, Nissan Leaf 2012, BMW i3 BEV 2014, Ford Focus Electric 2013, Mitsubishi i-MiEV 2012, Chevrolet Spark EV 2015, and Smart EV 2014. The coefficients used in equation (1) are adopted from the work in reference [32] (see Table 2).

We assume that SAEVs can be charged rapidly to 80% of the battery capacity in 30 minutes at every service point. All types of SAEVs initially have a battery level of 24 kWh. The value of η used in estimating the energy consumption in equation (3) is determined with a trial-and-error approach: it must be guaranteed that no travelers are stranded due to insufficient battery power of assigned vehicles, so we repeatedly ran the simulation model, increasing the value of η until the estimated energy Ee is sufficient for each assigned vehicle to complete the intended trip (a sketch of this calibration loop follows Table 4). SAEVs are deployed over the designated service points in proportion to the amount of travel demand at the corresponding service point. The 49 TAZs are connected to the road network using zone centroids, and the 49 centroid locations are designated as service points in the urban AMoD system. Table 4 gives a summary of the main model parameters.

Table 4 Summary of the main model parameters.

| Category | Value |
|---|---|
| Perimeter of the study area | 46 km |
| Size of the study area | 139 km² |
| Time step for speed update | 6 seconds |
| Intervehicle distance (dp) in platoons | 6 meters |
| Avg. fleet size per service point (vehicles) for 100% demand | 170 |
| Service points (centroids of the zones) | 49 |
| Road segments | 836 |
| Road nodes | 510 |
| Total travel demand | 27,452 trips |
| Maximum number of platoon vehicles | {2, 4, 6, 8} vehicles |
| Time threshold for platoon leaders | {2, 4, 6, 8} minutes |
| Charging time | 30 minutes |
| Coefficient η | 3.05 |
| Initial battery capacity | 24 kWh |
| Average travel time under light traffic | 18 minutes |
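The trial-and-error calibration of η can be sketched as follows. This is illustrative only: the exact form of equation (3) is not reproduced here, and both the energy estimate and the simulation call are stand-ins for the actual AnyLogic model.

```java
/**
 * Illustrative calibration loop for the coefficient eta used in equation (3).
 * The energy estimate and the simulation call are placeholders, not the paper's exact model.
 */
public class EtaCalibration {

    /** Assumed stand-in for equation (3): estimated trip energy, scaled up by eta as a safety margin. */
    static double estimatedEnergyKWh(double eta, double tripDistanceKm, double avgConsumptionKWhPerKm) {
        return eta * tripDistanceKm * avgConsumptionKWhPerKm;
    }

    /** Placeholder for one full simulation run; returns true if any traveler was stranded. */
    static boolean anyTravelerStranded(double eta) {
        // In the actual study this corresponds to running the AnyLogic simulation with the given eta.
        return eta < 3.0; // dummy condition so the example terminates
    }

    public static void main(String[] args) {
        double eta = 1.0;
        final double step = 0.05;
        // Increase eta until no assigned vehicle runs out of battery before completing its trip.
        while (anyTravelerStranded(eta)) {
            eta += step;
        }
        System.out.printf("Calibrated eta = %.2f%n", eta);
    }
}
```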
## 5. Simulation Results and Discussion

Twenty-five effective simulation scenarios are considered for the following purposes. First, scenarios for platoon formation policies are simulated to investigate how the formation of platoons affects the level of service provided to travelers. Second, demand for AMoD services with or without forming platoons may influence the AMoD service levels provided. Therefore, we design simulation scenarios with different demand levels.
Third, simulation experiments are conducted to evaluate the impact of forming platoons on energy consumption for different car models under different formation policies. Table 5 gives detailed explanations of the main KPIs.

Table 5 Description of the main KPIs.

| Key performance indicator | Description |
|---|---|
| Delay of travelers in platoon vehicles | The average dwell time that platoon vehicles (platoon leaders and platoon followers) spend at formation points without moving. |
| Delay of travelers with platoon leaders (platoon delay for leaders) | The average dwell time that platoon leaders spend at formation locations without moving. |
| Network travel time | The in-vehicle time spent on average by all served travelers when vehicles are traveling from origin to destination. Platoon delays are not included in the network travel time for travelers in platoon vehicles. |
| Platoon travel time | The platoon delays plus the network travel time of travelers in platoon vehicles. |
| Congestion level | How much longer, on average, vehicular trips take during the AM peak hours compared to the average travel time in light traffic conditions. The average travel time in light traffic in the case-study city is estimated based on the travel speed suggested by Ligterink [39]. |
| 90% quantile travel time | The travel time that is longer than 90% of the trips. |
| Percentage of energy savings | The percentage reduction in the energy consumption of all vehicular trips in the platoon scenarios compared to the nonplatoon baseline scenario. |
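Two of these indicators lend themselves to a short illustration. The Java sketch below computes the congestion level and the 90% quantile travel time from a list of trip travel times; the 18-minute light-traffic reference value comes from Table 4, while the method names and sample data are assumptions introduced here.

```java
import java.util.Arrays;

/** Illustrative computation of two KPIs from Table 5; helper names are hypothetical. */
public class KpiSketch {

    /** Congestion level: how much longer trips take, on average, than in light traffic (%). */
    static double congestionLevelPercent(double[] tripTravelTimesMin, double lightTrafficTimeMin) {
        double avg = Arrays.stream(tripTravelTimesMin).average().orElse(lightTrafficTimeMin);
        return (avg / lightTrafficTimeMin - 1.0) * 100.0;
    }

    /** 90% quantile travel time: the travel time longer than 90% of the trips. */
    static double quantile90(double[] tripTravelTimesMin) {
        double[] sorted = tripTravelTimesMin.clone();
        Arrays.sort(sorted);
        int idx = (int) Math.ceil(0.9 * sorted.length) - 1;
        return sorted[Math.max(idx, 0)];
    }

    public static void main(String[] args) {
        double[] times = {22, 25, 31, 40, 19, 28, 35, 55, 24, 30}; // sample trip travel times (minutes)
        System.out.printf("Congestion level: %.1f%%%n", congestionLevelPercent(times, 18.0));
        System.out.printf("90%% quantile travel time: %.1f min%n", quantile90(times));
    }
}
```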
### 5.1. Analysis of Service Levels in Platoon Scenarios

#### 5.1.1. Platoon Delays of Travelers in Platoon Vehicles

We analyze the system's performance with the platoon formation in terms of the platoon delay of travelers in the platoon vehicles at different demand levels. As shown in Table 6, demand for AMoD services (as input) is varied from 100% to 20% of the total private car trips in the study area. Fleet sizes at the different demand levels in Table 6 are scaled down by the same factor as the travel demand. For every demand level, platoon formation policies (T, V) are defined, where T stands for the time threshold and V for the platoon size threshold. We simulate the scenarios with platoon formation policies (T2, V2), (T4, V4), (T6, V6), and (T8, V8), where T2 means the maximum waiting time is 2 minutes and V2 means the maximum platoon size is 2.

Table 6 Average delay of platoon vehicles for different demand levels.

| Demand level | 100% | 80% | 60% | 40% | 20% |
|---|---|---|---|---|---|
| Number of travel requests (trips) | 27,452 | 21,962 | 16,417 | 10,980 | 5,490 |
| Avg. fleet size per service point | 170 | 136 | 102 | 68 | 34 |
| Avg. delay of platoon vehicles (minutes): | | | | | |
| (T2, V2) | 0.66 | 0.66 | 0.66 | 0.66 | 0.66 |
| (T4, V4) | 2.30 | 2.30 | 2.38 | 2.51 | 2.75 |
| (T6, V6) | 3.23 | 3.22 | 3.29 | 3.50 | 3.67 |
| (T8, V8) | 3.67 | 3.67 | 3.87 | 4.01 | 4.62 |

Simulation results in Table 6 show that increasing the values of the two attributes (from (T2, V2) to (T8, V8)) lengthens the platoon delays of travelers in platoon vehicles. Under the platoon formation policy (T8, V8), the platoon delay of travelers inside platoon vehicles is about 3.67 minutes, which is more than five times the platoon delay of travelers under the policy (T2, V2). The results suggest that the formation of platoons can cause long unexpected delays for travelers in the platoon vehicles.

Moreover, the results suggest that the delay of travelers in platoon vehicles tends to increase as the demand level decreases. For example, the delay of travelers in platoon vehicles increases by about 25% when demand falls from 100% to 20% of the total private car trips under the formation policy (T8, V8). When few travelers request AMoD services, travelers in platoon vehicles incur longer delays, whereas a relatively large number of AMoD users leads to smaller platoon delays. There tends to be an inverse relationship between the demand level and the platoon delays.

To examine the platoon delay encountered by travelers in more detail, the delay of travelers with platoon leaders is presented in Table 7. The results indicate that the delays experienced by travelers with platoon leaders are approximately twice those of the other platoon vehicles under the formation policy (T8, V8). That is, travelers who ride with platoon leaders have to wait longer than travelers in the other vehicles of the platoon. The platoon formation therefore has considerably more impact on the level of service provided to travelers with the platoon leaders. Since vehicles in a formed platoon are arranged in order of arrival, the platoon leader arrives earliest at the service point and waits the longest for the other vehicles to form a platoon. The platoon delays become smaller for followers that arrive later.

Table 7 Platoon delays for platoon leaders and platoon vehicles under different operating policies.

| | No platoons | (T2, V2) | (T4, V4) | (T6, V6) | (T8, V8) |
|---|---|---|---|---|---|
| Time threshold (minutes) | 0 | 2 | 4 | 6 | 8 |
| Platoon size threshold (vehicles) | 1 | 2 | 4 | 6 | 8 |
| Avg. delay of platoon leaders (minutes) | 0 | 0.69 | 3.49 | 5.67 | 7.02 |
| Avg. delay of platoon vehicles (minutes) | 0 | 0.66 | 2.30 | 3.23 | 3.67 |

#### 5.1.2. Congestion Levels and Network Travel Times

We investigate the impact of forming platoons on network traffic performance. The network congestion level (explained in Table 5) is used to evaluate travel conditions under different platoon formation scenarios. The congestion levels in the nonplatoon scenarios are used as the baseline for comparison. Moreover, we measure the network travel time of all travelers (in platoons and not in platoons) and the platoon travel times of travelers in the platoon vehicles. Note that the platoon delay is not included in the network travel time, while the platoon travel time is the platoon delay plus the network travel time.

Results in Table 8 show that the platoon formation can reduce the congestion levels and network travel times for all travelers. Compared to the nonplatoon scenario, the formation policy (T2, V2) obtains the smallest reduction in the congestion level (18 percentage points), resulting in a reduction in the network travel time of about 3 minutes. The formation policy (T8, V8) reduces the congestion level by up to 41.61 percentage points, which corresponds to a reduction in the network travel time of about 7 minutes. This is because more vehicles are coordinated in platoons as the values of the two attributes (T, V) in the platoon formation policy are increased. As shown in Table 8, the total number of vehicular trips in platoons rises from 5,564 to 8,056. Figure 5 shows that the number of platoon vehicles circulating in the transportation network increases accordingly (from the policy (T2, V2) to the policy (T8, V8)).
The more vehicles travel in platoons, the more the road capacity increases, and the increased road capacity leads to an improvement in the network travel time.

Table 8 Congestion levels, network travel times, and platoon travel times at the 100% demand level.

| Indicator | Nonplatoon scenario | (T2, V2) | (T4, V4) | (T6, V6) | (T8, V8) |
|---|---|---|---|---|---|
| Congestion level (%) | 53.28 | 35.28 | 20.39 | 13.56 | 11.67 |
| Network travel time for all vehicles (minutes) | 27.59 | 24.35 | 21.67 | 20.44 | 20.10 |
| Total number of vehicular trips in platoons | n/a | 5,564 | 6,899 | 7,611 | 8,056 |
| 90% quantile (network) travel time (minutes) | 70.05 | 59.86 | 51.12 | 44.20 | 43.70 |
| Platoon travel time of travelers in platoon vehicles (minutes) | n/a | 25.01 | 23.97 | 23.67 | 23.77 |
| Platoon travel time of travelers in platoon leaders (minutes) | n/a | 25.04 | 25.16 | 26.11 | 27.12 |

Figure 5 The number of vehicles traveling in platoons on the network over time.

Furthermore, as shown in Figure 6, the number of vehicles circulating in the transportation network decreases as the number of vehicles traveling in platoons (see Figure 5) increases. The formation of platoons thus decreases the number of vehicles circulating in the transportation network. When the number of circulating vehicles decreases, travel conditions improve, and vehicles can travel faster through the road network.

Figure 6 The number of all vehicles circulating in the network over time (in platoons and not in platoons).

As shown in Figure 6, the period during which a high number of vehicles circulates in the transportation network is also shorter in the platoon scenarios than in the scenario without forming platoons, and it becomes shorter as more vehicles travel in platoons. This result suggests that the platoon formation could reduce the duration of urban road congestion.

We compare the 90% quantile travel time in the platoon scenarios to that in the nonplatoon scenario to take a closer look at how the formation of platoons affects network travel times. Shorter 90% quantile travel times imply reductions in the network travel times. Results in Table 8 show that the formation of platoons can reduce the 90% quantile travel times. The 90% quantile travel times are about 44 minutes for the policies (T6, V6) and (T8, V8), roughly 26 minutes less than the 70.05 minutes in the scenario without the formation of platoons. The results indicate that the network travel conditions are significantly improved by the formation of platoons.

Overall, the formation of platoons could reduce the road congestion level and shorten the congestion duration. On average, travelers can travel faster across the urban road network. Moreover, the number of vehicles circulating in the transportation network affects the (network) reliability [40]. Therefore, the platoon formation has the potential to improve travel time reliability.

#### 5.1.3. Platoon Travel Times

The formation of platoons causes platoon delays for travelers in the platoon vehicles while reducing network travel times. We found that the platoon travel time, which includes the platoon delay of travelers in platoons and the network travel time, is shorter than the network travel time in the nonplatoon scenario. Results of simulating a high-demand scenario, in which the AMoD system serves 100% of commuter trips made by private car, show that the formation policies (T6, V6) and (T8, V8) yield platoon travel times more than 1 minute shorter than the in-vehicle travel time of travelers in the nonplatoon scenario (see Table 8).
The reason for this is that the reduction in the network travel times offsets the platoon delays, leading to a shorter platoon travel time. Although the platoon formation can reduce network travel times, travelers in the platoon leaders face longer unexpected delays. This leads to a long platoon travel time (27 minutes) for travelers in the leaders, similar to nonplatoon scenarios where high congestion is present.

Moreover, we found that the formation of platoons cannot improve network travel time in the low-demand scenarios. For example, the 90% quantile (network) travel time is around 13 minutes and is not reduced by the formation of platoons when the demand level is below 60% (see Table 9). This suggests that platoon driving has no effect on traffic when demand is low and only delays travelers in the platoon vehicles.

Table 9 The 90% quantile (network) travel time at different demand levels.

| Demand level | 100% | 80% | 60% | 40% |
|---|---|---|---|---|
| Number of travel requests (trips) | 27,452 | 21,962 | 16,417 | 10,980 |
| Avg. fleet size per service point | 170 | 136 | 102 | 68 |
| 90% quantile (network) travel times (minutes): | | | | |
| Nonplatoon scenario | 70.05 | 31.52 | 15.13 | 13.49 |
| (T2, V2) | 59.86 | 28.34 | 14.10 | 13.50 |
| (T4, V4) | 51.12 | 26.37 | 13.95 | 13.17 |
| (T6, V6) | 44.20 | 20.71 | 13.81 | 13.38 |
| (T8, V8) | 43.70 | 19.67 | 13.89 | 13.41 |

### 5.2. Energy Consumption Analysis with the Platoon Formation

We evaluate the impact of forming platoons on the system-wide energy consumption for different vehicle types. Results in Figure 7 indicate that the formation of platoons can reduce the total energy consumed by all vehicles in the AMoD system. The greatest reduction in total energy consumption ranges from 0.42% for the Kia Soul Electric 2015 to 9.56% for the Ford Focus Electric 2013. Moreover, more savings are achieved when the time threshold (T) and the vehicle size threshold (V) for platoon release are increased. The reason is that more vehicles are coordinated into platoons, so a larger share of vehicles drives in platoons. Less congestion occurs when more platoon vehicles circulate across the transportation network, indicating improvements in traffic efficiency. Therefore, more energy can be saved when platoons are formed.

Figure 7 Total energy savings of AMoD systems for different types of electric vehicles (T represents the time threshold of platoon leaders, and V is the maximum number of platoon vehicles).

Results in Figure 7 show that the energy savings differ across vehicle types when the same formation policy is applied. The maximum saving of up to 9.56% is achieved for the Ford Focus Electric 2013 under the (T8, V8) formation policy, while the Kia Soul Electric 2015 has the lowest energy saving of 0.42%. This is because differences in vehicle energy consumption characteristics lead to different energy savings. The energy consumption model contains a set of regression models corresponding to the different vehicle types. Each regression model, derived from laboratory dynamometer tests, is used to calculate energy consumption as a function of travel speed. In urban driving, vehicles consume more energy per unit distance at lower speeds, and this consumption declines as the vehicle speed increases. However, the modeled energy performance differs across car types: the vehicle type with the sharpest gradient of the modeled energy consumption-speed function sees the biggest reduction in energy consumption for the same increase in vehicle speed.
The Ford Focus Electric 2013 has the steepest decline in its energy consumption-speed function; therefore, when vehicle speeds increase, this vehicle type shows the largest reduction in energy consumption. The Kia Soul Electric 2015, which has the least steep gradient of its energy consumption function, ranks at the bottom. We find that the degree of energy savings strongly depends on the vehicle type as well as the platoon formation policy. Coordinating more vehicles in platoons can significantly improve the energy efficiency for some vehicle types, whereas the improvement for other vehicle types is relatively small because of their energy consumption characteristics.
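The speed dependence described above can be illustrated with a toy version of such a regression. The functional form and coefficients below are invented for illustration only and are not the fitted dynamometer models adopted from reference [32].

```java
import java.util.Map;

/** Toy energy-consumption-versus-speed curves; coefficients are illustrative, not the fitted models of [32]. */
public class EnergySpeedSketch {

    /** Hypothetical per-vehicle-type model: energy per km falls as average speed rises in urban driving. */
    static double energyWhPerKm(double a, double b, double speedKmH) {
        return a + b / speedKmH; // Wh per km; a larger b means stronger sensitivity to speed
    }

    public static void main(String[] args) {
        // Two invented parameter sets: a "steep" curve benefits more from higher speeds than a "flat" one.
        Map<String, double[]> cars = Map.of(
                "steep-curve car", new double[]{110, 2500},
                "flat-curve car", new double[]{150, 600});
        for (var e : cars.entrySet()) {
            double slow = energyWhPerKm(e.getValue()[0], e.getValue()[1], 20);
            double fast = energyWhPerKm(e.getValue()[0], e.getValue()[1], 30);
            System.out.printf("%s: %.0f Wh/km at 20 km/h -> %.0f Wh/km at 30 km/h (saving %.1f%%)%n",
                    e.getKey(), slow, fast, (1 - fast / slow) * 100);
        }
    }
}
```

With these made-up parameters, the steeper curve saves markedly more energy per kilometre for the same speed increase, which is the mechanism behind the differences in savings shown in Figure 7.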
## 6. Conclusions and Recommendations

### 6.1. Main Conclusions

The formation of platoons in an urban AMoD system is complicated by the characteristics of the urban road network (narrow streets and multiple road segments between locations), the platoon formation locations and policies, and the interaction between AMoD service users and SAEVs. The goal of this study is not to develop a very sophisticated method but to show through agent-based simulations how the formation of platoons in AMoD systems affects people's travel and system-wide energy consumption.

Shared AVs could lead to more traffic and longer travel times due to the additional zero-occupancy movements.
In the scenario where SAEVs replace all morning urban commuter trips (100% demand) made by private cars in the case-study city, a high network congestion level of up to 53.28% is observed without the formation of platoons. However, the network travel times and congestion levels improve when platoons are formed. For example, a congestion level of 11% can be achieved under the policy (T8, V8); that is, for 30 minutes of travel time, 3.3 minutes of additional time must be spent during the rush hours. This extra time is far smaller than the extra time spent either in the nonplatoon situation where SAEVs replace the private car trips or in the current situation where private cars are used. In the first situation, travelers spend an extra 15.98 minutes with a 53.28% congestion level; in the second situation, an additional 10 minutes is spent in the case-study city (https://www.tomtom.com/en_gb/traffic-index/). With the formation of platoons, travelers are more likely to reach their destination on time or early thanks to the improvement in the network travel times.

We also find that the 90% quantile travel times are significantly reduced by the formation of platoons. This suggests that the network travel times are improved without causing extremely long travel times when platoons are formed, even though additional (zero-occupancy) movements are generated in AMoD systems.

Simulation results demonstrate that the total number of vehicles circulating in the transportation network is reduced by the formation of platoons, which could lead to improved network travel time and reliability. Furthermore, the improved network travel time and reliability could improve the quality of time spent in the vehicles across the transportation network. In this respect, the platoon formation could improve the quality of service offered to all service users (in platoons and not in platoons) when they travel on the transportation network.

On average, the platoon travel time, including the platoon delay and the network travel time, is less than the network travel time in the nonplatoon scenarios where all morning commuters use the AMoD service. This implies that travelers in the platoon vehicles could reach their destinations faster even if they experience unexpected delays during the formation of platoons, suggesting improved service levels. In this respect, the benefits from network travel time savings may outweigh the cost associated with the platoon delays, and travelers may opt for the AMoD service in response to these service improvements. However, travelers in the platoon leaders experience longer platoon travel times due to longer unexpected platoon delays. In this regard, AMoD service users (morning commuters who previously drove private cars) in the platoon leaders are provided with a low level of service and may be reluctant to use AMoD services.

We find an inverse relationship between platoon delays and demand levels. The platoon delays encountered by travelers in platoon vehicles are small in a high-demand scenario. This implies that forming platoons when the market penetration rate of AMoD services is high leads to lower platoon delays. In contrast, travelers face long unexpected platoon delays when there are fewer AMoD service users. In the former case, the network travel times can offset the platoon delays travelers encounter in the platoon vehicles.
Consequently, travelers in platoon vehicles have shorter platoon travel times (the total travel times of travelers in the platoon vehicles). In the latter case, no congestion occurs in the transportation network when few travelers request services (as may happen during off-peak hours), and coordinating vehicles in platoons only causes unexpected delays for travelers in the platoon vehicles. Forming platoons when demand is low (e.g., below 60% demand) only delays travelers in the platoon vehicles, suggesting a lower level of service; as a result, travelers may not be willing to use the AMoD service. Therefore, a high penetration rate of AMoD services is needed for the coordination of vehicles into platoons to benefit the service users in those vehicles in future AMoD systems.

An important finding is that the improvement in traffic efficiency leads to system-wide energy savings. Forming platoons in AMoD systems can save about 9.56% of the system-wide energy consumption for the car model with the largest savings among those studied in urban areas. However, the energy savings strongly depend on the vehicle energy consumption characteristics and the platoon formation policies used. Demand for AMoD services and operating policies for forming platoons are important variables of interest for obtaining travel and energy benefits from platoon driving. Effective platoon formation strategies need to be developed for different car models to obtain a favorable effect on system-wide energy consumption.

At the city scale, the formation of platoons enabled by vehicle automation could reduce travel times and unreliability in the modeled urban road network. The improvement in travel times and reliability may, in turn, influence urban commuters' choices of residence. It can be inferred that automated mobility systems may thus contribute to urban sprawl, leading to rapid urban expansion. Moreover, platoon operations effectively reduce energy consumption in urban mobility systems, and with reduced energy consumption, emissions reductions could also be achieved. Thus, platoon operations could bring benefits to operators with regard to energy savings and to society in terms of emissions reductions.

The findings of this study contribute to the growing body of literature on shared AV fleets by quantifying the impact of innovative platoon formation operations on AV energy consumption as well as on people's travel. We shed light on the energy aspect of platoons in urban AMoD systems to complement the existing studies on the fuel consumption of platoons on highways.

### 6.2. Recommendations for Policy and Future Research

The findings of this paper raise challenges for policy and for research. They suggest that the formation of platoons in AMoD systems can reduce system-wide energy consumption. Platoon operations can therefore be considered an effective energy-saving and decarbonization strategy for achieving governments' energy and environmental goals. Moreover, it is recommended that policymakers and transport operators consider the vehicle energy consumption characteristics in conjunction with platoon formation policies to develop effective energy-saving platoon strategies in future AMoD systems. Developing platoon formation strategies over urban road networks is recommended with the aim of improving traffic efficiency and thereby reducing travel times.
However, we find that the magnitude of demand for AMoD services could influence the users' travel times and quality of time. Therefore, the magnitude of demand needs to be considered when deciding whether to coordinate vehicles in platoons. For example, forming platoons below 60% demand over the urban road network only causes unexpected delays, and travelers would be reluctant to use the AMoD service because of the long unexpected platoon delays. In this regard, we recommend not forming platoons in an uncongested network with few road users (e.g., below 60% demand in the study area, which is the case during off-peak hours). At the same time, vehicles can be coordinated in platoons when congestion occurs in order to reap the benefits of improved travel times and energy efficiency.

Furthermore, travelers, especially those who travel with the platoon leaders, may not be willing to use the AMoD service because of the long unexpected delays and long travel times. For policymakers and transport operators, careful consideration is required to reward the travelers who suffer long unexpected delays in the formation of platoons, for example by distributing part of the system's energy-savings benefit to them. Further research efforts are required to develop mechanisms for distributing the energy benefits, in order to incentivize engagement and make the system more sustainable, efficient, and equitable.

The modeling framework presented here still has some limitations that could be addressed in future research. Relocation capability is not developed and implemented in the model. Relocation operations in anticipation of future demand can mitigate the imbalance between vehicle supply and travel demand, and relocating platooned vehicles in urban driving conditions can be further investigated.

The traffic simulation model can estimate the traffic impact of forming platoons using mesoscopic operating characteristics. It meets the design requirements of determining time-dependent link flows and route travel times according to the relationship established between road capacity and the formed platoons. Hence, the traffic simulation model allows testing different platoon formation strategies at the network level. However, the mesoscopic model applied to single-lane urban scenarios cannot capture microscopic traffic behavior such as accelerating, overtaking, lane changing, and behavior at intersections. Moreover, the relationship established between formed platoons and road capacity only covers the capacity of a single lane for each direction according to the platoon characteristics. This is acceptable for urban driving conditions in most (European) cities with narrow streets (one lane for each direction). However, the traffic simulation component cannot model mixed traffic conditions in multiple-lane scenarios. Operational capacities in multilane scenarios depend on lane policies for distributing platoon vehicles. Modeling multiple-lane capacity with the formation of platoons remains an unsolved challenge in the literature.
--- *Source: 1005979-2022-07-21.xml*
# Assessing the Potential of the Strategic Formation of Urban Platoons for Shared Automated Vehicle Fleets

**Authors:** Senlei Wang; Gonçalo Homem de Almeida Correia; Hai Xiang Lin
**Journal:** Journal of Advanced Transportation (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1005979
--- ## Abstract This paper addresses the problem of studying the impacts of the strategic formation of platoons in automated mobility-on-demand (AMoD) systems in future cities. Forming platoons has the potential to improve traffic efficiency, resulting in reduced travel times and energy consumption. However, in the platoon formation phase, coordinating the vehicles at formation locations for forming a platoon may delay travelers. In order to assess these effects, an agent-based model has been developed to simulate an urban AMoD system in which vehicles travel between service points transporting passengers either forming or not forming platoons. A simulation study was performed on the road network of the city of The Hague, Netherlands, to assess the impact on traveling and energy usage by the strategic formation of platoons. Results show that forming platoons could save up to 9.6% of the system-wide energy consumption for the most efficient car model. However, this effect can vary significantly with the vehicle types and strategies used to form platoons. Findings suggest that, on average, forming platoons reduces the travel times for travelers even if they experience delays while waiting for a platoon to be formed. However, delays lead to longer travel times for the travelers with the platoon leaders, similar to what people experience while traveling in highly congested networks when platoon formation does not happen. Moreover, the platoon delay increases as the volume of AMoD requests decreases; in the case of an AMoD system serving only 20% of the commuter trips (by private cars in the case-study city), the average platoon delays experienced by these trips increase by 25%. We conclude that it is beneficial to form platoons to achieve energy and travel efficiency goals when the volume of AMoD requests is high. --- ## Body ## 1. Introduction Automated vehicles (AVs), also known as self-driving vehicles, bring a unique opportunity for reshaping urban mobility systems, thereby changing the way people travel. Combining electric and automated vehicles with ride-hailing services brings forth new automated mobility-on-demand (AMoD) services in future cities. In AMoD systems, the convergence of vehicle automation, electrification, and shared mobility has the potential to provide safe, economical, efficient, and sustainable urban mobility [1, 2]. However, there are considerable uncertainties about achieving these benefits.While the large-scale deployment of AMoD systems for urban mobility is still in its infancy, a broad spectrum of research focuses on investigating the potential of operating AMoD systems in different urban or regional application scenarios. One potential application is that AMoD could provide a solution for private car users in urban areas, leading to reduced parking demand, ownership cost, energy consumption, and emissions [3, 4]. Moreover, AMoD services could act as feeders to complement the high-capacity PT systems. Integrating an AMoD feeder service into the traditional PT system could increase the accessibility of PT services and improve the traffic externalities (e.g., congestion and emissions) due to increased demand for PT systems [5, 6].However, taxi-like services offered by AMoD systems in urban areas as competitive alternatives to public transportation (PT) may draw customers away from the traditional PT system. 
As a result, AMoD services could reduce public transit ridership and cause congestion due to increased vehicle movements (i.e., zero-occupancy movements and movements to serve more demand) [7–9].Like AMoD services in urban areas, a promising application of AMoD systems is to replace fixed-route and low-frequency buses in areas where demand is scattered and low (e.g., rural areas and industrial parks). Compared to existing bus services, AMoD systems could provide direct services to customers with improved availability and accessibility; the operating cost of AMoD services in such application areas is much lower than those of conventional bus services [10].A primary research priority is studying different operational aspects of urban passenger AMoD systems in future cities. Recent advances in vehicle automation have enabled vehicles to drive and connect without human intervention. With the help of connectivity and automation technology, AVs can exchange information for coordinated movements in platoons at closer following distances.Vehicle platooning has been a popular research theme in recent applications of intelligent transportation systems. The impact of platoon operations on urban traffic has been studied, assuming that AVs are already in platoons. However, intriguing questions arise when introducing vehicle platooning in passenger AMoD systems in which shared, automated, and electric vehicles (SAEV) provide on-demand services to travelers in urban areas:(1) What are the impacts of the formation and operation of such urban platoons on the service quality offered to travelers and traffic efficiency related to road network travel times?(2) How do changes in traffic conditions by platoon operations affect the travel-related energy consumption of traffic participants across the urban road network?To answer these questions, an agent-based model (ABM) has been developed to provide performance evaluations of forming platoons in urban passenger AMoD systems of the future.The paper is organized as follows. In Section2, we summarize the existing literature on platoon operations and the formation of platoons, identify the challenges of forming platoons in urban AMoD systems, and present the main contributions of this paper. Section 3 gives an overview of the modeling framework and discusses the model specifications. A detailed description of the model implementation and its application are provided in Section 4. Section 5 analyzes the simulation results. The main conclusions and policy implications are presented in the final section, and future work directions are recommended. ## 2. Background Platooning systems have attracted increasing attention with the rapid progress in automated and connected vehicle technologies. Much work has been done to investigate platoon communication technologies and platoon control strategies [11]. Recent literature has focused on platoon planning: at a low level (e.g., trajectory level), detailed platoon maneuvers (e.g., merging and splitting) are designed and simulated [12]; at a high level, planning and optimization of routes and schedules in the platoon formation are studied [13]. Moreover, vehicles with synchronized movement in platoons can have faster reaction times to dangerous situations and fewer human errors, reducing rear-end crashes. For a detailed analysis of platoon safety issues, the reader is referred to the literature review research by Axelsson [14] and Wang et al. [15]. 
In this study, we address the problems of forming platoons and assess the travel and energy impact on a future urban mobility system. We herein provide background information about the potential implications of platoon operations on energy consumption, and traffic efficiency. Besides, we review the literature on the strategic formation of platoons. ### 2.1. Energy Impact of Platoon Operations on Highways Platoons of vehicles provide significant potential for energy savings on highway driving. The close-following mechanism can considerably reduce the energy consumed by platoon vehicles to overcome the adverse aerodynamic effect [16]. Several field experiments in research projects, such as the COMPANION project, the PATH platoon research, the SARTRE project, and the Energy ITS project, have been conducted to investigate the potential of platoon operations in reducing energy consumption [17]. ### 2.2. Impact of Platoon Operations on Highway and Urban Traffic Platoon operations can improve highway throughput due to the shorter headways between platoon vehicles [18]. Using communication technologies (e.g., vehicle-to-vehicle or vehicle-to-infrastructure technologies), platoons of vehicles can also smooth out the vehicle-following dynamics on highways [19]. Besides, platoon operations can improve urban road capacity and reduce delays when crossing signalized intersections [20]. ### 2.3. The Strategic Platoon Formation on Highways In the above literature, the energy and traffic studies on platooning systems considered vehicles that are already in platoons and used platoon operations to increase road throughput and reduce energy consumption. Some studies investigated the problem of coordinating vehicles in platoons on highways. Hall and Chin [21] developed different platoon formation strategies to divide vehicles waiting at highway entrance ramps into different groups according to their destinations. Once formed at the highway entrance ramp, platoons remain intact to maximize the platoon driving distance. Saeednia and Menendez [22] studied slow-down and catch-up strategies for merging trucks into a platoon under free-flow traffic. Larsson et al. [23] defined the platoon formation problem as a vehicle routing problem to maximize the fuel savings of platoon vehicles. Studies by Liang et al. [24] and van de Hoef [25] investigated the problem of coordinating many trucks in platoons to maximize fuel savings. In the formation of platoons, trucks can adjust their speed without regard to traffic conditions. Larson et al. [26] developed a distributed control system in which trucks can adjust speed to form platoons to save fuels. Johansson et al. [27] developed two game-theoretic models to study the platoon coordination problem where vehicles can wait at network nodes to form platoons. In Table 1, we compare the newly developed functional components and the performance analysis of the AMoD system with the new components in our modeling framework with the referred studies in the literature. ### 2.4. Challenges for the Platoon Formation in Urban AMoD Systems The formation of platoons in urban AMoD systems poses challenges. First, the current state-of-the-art models consider the traffic demand for the platoon formation in an oversimplified way. Travel demand is generated according to trip lengths, destination distributions, and vehicle arrival patterns. Different distributions could be used to generate travel demand while capturing its uncertainty. 
However, in AMoD systems, the zero-occupancy vehicle trips of picking up the assigned travelers introduce uncertainty in the traffic demand on the road network. This uncertainty, therefore, requires explicit modeling of the interaction between SAEVs and travelers.Second, existing studies overlook the effect of forming platoons on travelers in the platoon vehicles. In the future, the AMoD system that we are studying, a fleet of SAEVs directly provides on-demand services to travelers between service points. The formation of platoons requires the synchronization of different vehicles in the same coordinates. In the formation of platoons, vehicles may wait for other vehicles to form platoons, causing delays for travelers. The impact of forming platoons on the travelers in the platoon vehicles must be captured.Third, existing studies investigate the effect of reduced aerodynamic drag via platooning on energy consumption in highway driving. However, due to higher traffic demand on the urban transport network, the potential for energy efficiency is primarily influenced by traffic conditions rather than by reducing air resistance. Coordinated movements of platoon vehicles could improve traffic throughput. As a result, the energy consumption of traffic participants (SAEVs) will be affected by platoon operations. Moreover, current studies aimed to investigate the traffic impact of platoon vehicles using predefined platoons. Therefore, the impact of forming platoons on travel conditions and energy consumption of SAEVs in urban driving needs to be assessed for future scenarios.Fourth, platoon sizes (the maximum number of vehicles in a platoon) and the maximum time spent forming platoons are not restricted. This relaxation can lead to overestimation of the platoon driving distances and energy savings by forming long platoons. In AMoD systems, forming a long platoon may cost travelers more time in the situation where vehicles wait for other vehicles. Setting limits on platoon sizes and time spent in the formation can prevent long platoons from disrupting the urban traffic and causing long delays for travelers. Therefore, the platoon size restriction and maximum time spent in the formation of platoons need to be taken into account when coordinating SAEVs in platoons. The impact of the time and platoon size restrictions on the formation of platoons, and the level of service offered to travelers and on energy consumed needs to be studied. ### 2.5. Urban AMoD System Characteristics in Future Cities The AMoD systems envisaged for the future will probably be available in the 2030s to 2040s, when SAEV fleets have become common and affordable [28, 29]. SAEVs, in this paper, considered to be purpose-built microvehicles, are intended to cover the whole trips of commuters. While providing on-demand services for morning commuters in lieu of private cars, SAEVs can be coordinated in platoons at service points. Although purpose-built SAEVs could occupy less space, SAEVs cannot form platoons anywhere because of urban driving conditions characterized by narrow streets and traffic congestion. One idea is to define what in this paper is designated as “service points”: platoon formation and dissolution (platoon is disassembled) locations across the service area. Examples of service points for the platoon formation in today’s urban transportation systems could include public parking garages, public charging service points, petrol service points, empty bus stops, and some parking spaces along the canals in cities. ### 2.6. 
Research Contributions This paper aims to develop an agent-based model (ABM) to study the impact of forming platoons in future urban AMoD systems on people's travel and energy usage. Agent-based modeling is suitable for our research questions. The ABM has the advantage of representing entities at a high resolution; the interaction of entities (e.g., vehicles and travelers) can be captured realistically; and it is flexible enough to model a system at different description levels (e.g., vehicles and platoons formed by vehicles), to evaluate different aspects of the system, and to make changes to assumptions (e.g., formation policies) for different scenarios. Taking into consideration the limitations of current studies identified above, we summarize the main contributions of this paper as follows. First, the ABM originally developed in this paper includes a high level of detail. The individual travelers are modeled, and their attributes are initialized according to the regional travel demand data and realistic departure time data. The interaction between SAEVs and travel requests is explicitly modeled by developing a vehicle-to-travelers assignment component, in which SAEV pickup trips and drop-off trips are represented. The modeled interaction between vehicles and travelers captures the uncertainty of traffic demand between areas of origin and destination. Second, the formation behavior of waiting at service points, defined as the hold-on strategy, is explicitly simulated for platoon leaders and their followers. The platoon formation policies that determine when a group of vehicles leaves a service point as a platoon are the maximum elapsed time of the platoon leader and the maximum platoon size. Either one of the two policies can trigger the release of a platoon. The ABM simulates the platoon formation operations of vehicles, which allows us to measure the impact of forming platoons on travelers. Moreover, the formed platoons are flexibly represented with specified information (e.g., the platoon route, the vehicle sequences, and the speed) at an aggregate level to model platoon driving and its impact on traffic conditions. Third, a mesoscopic traffic simulation model is used to represent the traffic dynamics throughout the road network. The mesoscopic traffic simulation model can simulate each vehicle's movement, while a macroscopic speed-density relationship is used to govern congestion effects. The traffic simulation model can incorporate the impact of all SAEV trips, including unoccupied pickup trips and occupied drop-off trips, on the traffic over the road network. Furthermore, the relationship established between road capacity and platoon characteristics is used to assess the impact of formed platoons on traffic conditions. Fourth, an energy consumption model is linked with the mesoscopic traffic model to efficiently calculate the energy consumed by individual SAEVs for travelers' trips. It can also produce the energy estimate of intended trips, thus ensuring that the assigned SAEVs have sufficient power to complete their journeys. The travel and energy potential of forming platoons under different formation policies and demand levels in AMoD systems is assessed on the urban road network of the case-study city, The Hague, Netherlands, through a set of defined key performance indicators (KPIs).
## 3. Model Specifications

For building the ABM, we introduce the following main assumptions regarding the platoon formation of SAEVs in AMoD systems:
(i) All travel demand is produced and attracted between what have been designated as service points, which are connected to the network nodes. Service points are thus locations where travelers can be picked up or dropped off by a vehicle.
This is reasonable for the situation where many service points are designated in a service area.
(ii) We assume that vehicles wait at service points to form platoons instead of using slow-down and catch-up strategies. The major drawbacks of slow-down and speed-up strategies are that urban traffic flow can be disrupted when driving slowly and that accelerating vehicles may violate urban road speed limits. Moreover, slow-down and speed-up strategies are very difficult in urban driving, which is characterized by one or two lanes for each direction and traffic congestion.
(iii) We assume that there are enough parking places for SAEVs to form a platoon at the service points. SAEVs are purposely designed to be space-saving microvehicles (with the Renault Twizy as the reference model). Moreover, there are restrictions on the platoon size.
(iv) We assume a future scenario where AMoD services are used to serve all private car trips in an urban area; the usage of conventional vehicles is not considered.
The framework presented in Figure 1 includes a fleet management center and a traffic management center. The fleet management center mainly matches vehicles with travelers and coordinates the formation of platoons. The traffic management center primarily represents the network traffic dynamics and finds the time-dependent shortest routes for vehicles based on the current network traffic conditions. The fleet management and traffic management components capture different aspects of the system components' interactions. The modeling framework can evaluate system performance with regard to defined KPIs based on the realistic travel demand data and the existing road network.
Figure 1 The conceptual simulation framework.
The model assumes that OD trip demand and aggregated departure times are given. The demand generator in the simulation model will generate individual travel requests with an origin location, destination location, and request time according to the given OD matrix and departure time distribution. According to real-time information about the travel requests, the vehicle assignment component matches the available vehicles with incoming travel requests. Once the assignment has been done, the information on travelers' locations is sent to the assigned vehicles, and travelers are notified about the vehicle details. The assigned vehicle is then dispatched to pick up the traveler, and the state of the assigned vehicle transitions from idle to in-service. The traffic management center provides the time-varying traffic conditions, forming a basis for subsequent route calculations. A mesoscopic traffic simulation model is used to represent traffic patterns over the road network, which can be captured by simulating the movement of SAEVs along their routes as they carry out the travelers' journeys. The traffic simulation model manages static and dynamic information to determine the current network traffic conditions. The static inputs to the traffic simulation model are the traffic network representation, including links and nodes, traffic capacity, free-flow speed, and road length, while the dynamic information concerns the road segments on which individual vehicles and/or platoon vehicles are traveling.
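As an illustration of how the static link attributes and the dynamic vehicle information described above could be organized, here is a minimal sketch; the field names are assumptions made for illustration and are not taken from the model's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class RoadLink:
    """Static network attributes plus the dynamic state tracked by the traffic simulation."""
    link_id: int
    length_m: float          # road length
    capacity_veh_h: float    # traffic capacity
    free_flow_speed: float   # free-flow speed
    vehicles: list[int] = field(default_factory=list)  # IDs of vehicles/platoons currently on the link

    def density(self) -> float:
        """Vehicles per unit length, the input to the speed-density relationship."""
        return len(self.vehicles) / max(self.length_m, 1e-9)
```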
Based on the current network traffic conditions provided by the traffic simulation component, the time-dependent shortest routes between points are computed, which is a string of ordered road segments to be traversed.The energy consumption model estimates the energy consumption of individual vehicles over the road network. The energy consumption of individual vehicles is computed as a function of the link travel speed. The charging component is responsible for finding charging points for low battery vehicles. Vehicles can be charged at every service point after completing the journey of a traveler. The time delay due to the charging operations is considered.The platoon formation component in the fleet management center coordinates in-service vehicles in an existing platoon at designated service points according to their destinations. Also, a new platoon can be initiated when one of the grouped (in-service) vehicles arrives at the formation location. Once the platoon agent type is created, the platoon agents manage the information about the platoon plan, including platoon routes, the number of platoon vehicles, platoon speed, and the assigned leader and its followers with the determined vehicle sequence. The traffic simulation model in the traffic management center can account for the impact of the operations of formed platoons on traffic dynamics. Figure2 illustrates the platoon formation and its potential. The detailed descriptions of the functionalities are explained in the following sections.Figure 2 An illustration of the platoon formation and potential impacts. ### 3.1. Energy Consumption of the SAEVs Existing studies estimate the energy consumption of electric vehicles on the network level as a function of travel distance, which means translating the kilometers driven into an estimate of energy consumed [30, 31]. However, the strong correlation between energy consumption and vehicle speed is not considered. We attempt to estimate the energy consumption of SAEVs and account for traffic congestion by making it a function of experienced travel speed. It is linked to a mesoscopic traffic simulation model in which the effect of forming platoons on traffic conditions is considered. The energy consumption model is thus capable of accounting for the effect of platoon driving. The energy consumption model contains a set of regression models for different vehicle types. These regression models can be used to calculate the energy consumption associated with one vehicle traversing each road segment based on the speed of the vehicle and the length of the road segment. The calculation method is explained as follows.First, the average speed for individual SAEVs traversing the corresponding road segment is calculated. Second, the energy consumed by the SAEVs per unit distance is estimated using the regression model in equation (1), which describes the relationship between energy consumption and travel speed. Third, the total energy consumption on the route between the origin and destination is calculated as the sum of energy consumed by the individual SAEV in each road segment. 
The formula for calculating the total energy consumption is shown in equation (2).

(1) $E = \alpha + \beta S_i + \gamma S_i^2$,

where $\alpha$, $\beta$, and $\gamma$ are coefficients; $S_i$ is the travel speed of an individual SAEV traversing road segment $i$; and $E$ is the energy consumption per unit distance. The total energy consumption of each SAEV to complete the pickup trip or drop-off trip can thus be calculated as

(2) $E_t = \sum_{i=1}^{n} E_i \cdot L_i$,

where $n$ is the total number of road segments between the locations (e.g., the locations of the assigned vehicle and the origin of the traveler, or the locations between the origin of the traveler and his/her destination); $L_i$ is the length of each road segment $i$; and $E_t$ is the total energy consumption of an SAEV to complete the pickup trip or drop-off trip. We estimate the energy consumption of different types of vehicles. Each vehicle type corresponds to a regression model derived from laboratory dynamometer tests [32]. The coefficients for the different vehicle types are given in Table 2 in Section 4, where the application of the model is presented.

Table 1 Comparison of the strategic platoon formation studies at the route level.
Columns — Modeling components: many vehicles; road network level; demand and supply interaction; mixed traffic; platoon policies (platoon sizes, formation time constraints); coordination strategies (speed adjustment (slow down or catch up), hold-on strategy). Impact analysis: platoon vehicles (traffic, aerodynamics); traffic throughput; energy consumption; service level (waiting and travel times).
Hall and Chin [21] ✓✓✓✓✓✓
Larson et al. [26] ✓✓✓✓✓✓
Saeednia and Menendez [22] ✓✓✓
Larsson et al. [23] ✓✓✓✓✓
Liang et al. [5] ✓✓✓✓
van de Hoef [25] ✓✓✓✓✓✓
Johansson et al. [27] ✓✓✓✓✓
Our approach ✓✓✓✓✓✓✓✓✓✓
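To make equations (1) and (2) concrete, the short sketch below evaluates the speed-dependent regression for each traversed road segment and sums the segment energies along a route. The coefficients are those listed for the Nissan SV row of Table 2, but the speed and length units and the example route are illustrative assumptions only.

```python
from dataclasses import dataclass

@dataclass
class RegressionModel:
    """Coefficients of equation (1): E = alpha + beta*S + gamma*S^2 (energy per unit distance)."""
    alpha: float
    beta: float
    gamma: float

    def energy_per_unit_distance(self, speed: float) -> float:
        # Equation (1), evaluated at the segment travel speed S_i.
        return self.alpha + self.beta * speed + self.gamma * speed ** 2

def trip_energy(model: RegressionModel, segments: list[tuple[float, float]]) -> float:
    """Equation (2): total trip energy as the sum of E_i * L_i over the traversed segments.
    `segments` holds (travel speed S_i, segment length L_i) pairs."""
    return sum(model.energy_per_unit_distance(s) * length for s, length in segments)

# Nissan SV coefficients from Table 2; the route, speeds, and units are illustrative only.
nissan_sv = RegressionModel(alpha=479.1, beta=-18.93, gamma=0.7876)
route = [(30.0, 1.2), (45.0, 0.8), (25.0, 0.5)]
print(round(trip_energy(nissan_sv, route), 1))
```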
### 3.2. Real-Time Vehicle Assignment

The vehicle assignment component assigns available vehicles to serve travelers as travel requests come in, which are generated according to the aggregate travel demand (explained in Section 4.2). The vehicle assignment component will assign the nearest available SAEV with enough battery power to serve a traveler to his/her destination. For that to happen, there must be a real-time estimation of how much energy is needed if that traveler is satisfied, and this is estimated for each candidate vehicle based on its particular vehicle type. The process of finding available vehicles for travel requests goes as follows. First, the energy consumption of an individual vehicle to complete the intended trip is estimated based on the energy function. The estimate of the energy spent on transporting the intended traveler can be calculated using equation (3). Second, based on the estimated energy consumption of the intended traveler, available vehicles with sufficient remaining battery capacity to undertake the traveler's journey are filtered from the group of idle vehicles. Finally, a vehicle located at the shortest Euclidean distance within the search radius is chosen from the filtered pool of available vehicles:

(3) $E_e = \eta \cdot E_t$,

where $\eta$ is a safety coefficient used to ensure that the estimated energy for a traveler's intended trip is not less than the actual energy consumed by the individual vehicle to complete the trip, which might happen if traffic changes, and $E_e$ is the estimated energy required by an individual vehicle to complete the trip of a traveler. The function in equation (3) estimates the energy needed to complete travelers' trips based on the link travel speeds at the moment when a traveler calls the service, while the actual energy consumed uses the experienced speeds of vehicles in equation (2) to calculate the energy spent after completing the traveler's trip. The proper estimate of the energy spent to complete the trip of an intended traveler ensures that the assigned vehicle has sufficient battery capacity to reach the traveler's destination. Once an available vehicle with sufficient remaining energy is assigned to a traveler, the time-dependent shortest path (lowest duration) from the current vehicle location to the traveler's location is computed. After the vehicle arrives at the pickup location, the time-dependent shortest path from the traveler's location to its destination will be determined. The computation of time-dependent shortest routes is based on the Dijkstra algorithm.
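The assignment step described above can be sketched as a filter-then-nearest search. The helper below is a simplified illustration under assumed data structures (an idle flag, a remaining-battery field, planar coordinates) and placeholder values for the safety coefficient η and the search radius; it is not the implementation used in the model.

```python
import math
from dataclasses import dataclass

@dataclass
class SAEV:
    vid: int
    x: float
    y: float
    remaining_energy: float   # usable battery energy left
    idle: bool

def assign_nearest_feasible(fleet: list[SAEV], origin: tuple[float, float],
                            trip_energy_estimate: float, eta: float = 1.2,
                            search_radius: float = 3000.0) -> SAEV | None:
    """Filter idle vehicles whose battery covers E_e = eta * E_t (equation (3)),
    then return the nearest one by Euclidean distance within the search radius."""
    required = eta * trip_energy_estimate
    candidates = []
    for v in fleet:
        if not v.idle or v.remaining_energy < required:
            continue
        dist = math.hypot(v.x - origin[0], v.y - origin[1])
        if dist <= search_radius:
            candidates.append((dist, v))
    return min(candidates, key=lambda c: c[0])[1] if candidates else None
```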
### 3.3. Mesoscopic Traffic Simulation

The modeling framework for the proposed system needs to simulate the operations of many vehicles to transport all the city's private car commuters over a realistic urban road network. A mesoscopic traffic simulation model that includes link movement and node transfer is incorporated into the agent-based modeling framework [33, 34]. The mesoscopic traffic simulation model combines a microscopic-level representation of individual vehicles with a macroscopic description of the traffic patterns. In the link movement, vehicular movements are simulated. Vehicle speed on the road segments is updated according to the established macroscopic speed-density relationship. A modified Smulders speed-density relationship (equation (4)) is used to update the vehicle speed based on the link density:

(4) $v(k) = \begin{cases} v_0\left(1 - \dfrac{k}{k_j}\right), & k < k_c, \\[4pt] \gamma\left(\dfrac{1}{k} - \dfrac{1}{k_j}\right), & k \ge k_c, \end{cases}$

where $k$ is the link traffic density; $v(k)$ is the speed determined by the traffic density $k$; $v_0$ is the free-flow speed; $k_c$ is the link critical density; $k_j$ is the link jam density; and $\gamma$ is a parameter whose value can be derived as $\gamma = v_0 k_c$. Node transfer means that vehicles transfer between adjacent road segments. A vehicle moving from an upstream link (road segment) to a downstream link will follow the defined rules:
(1) The vehicle is at the head of the upstream link queue. In other words, there are no preceding vehicles stacking in the waiting queue.
(2) The number of outflow vehicles has been checked to determine whether a vehicle can leave the road segment it is traversing.
(3) The number of storage vehicles has been checked to determine whether the downstream link has enough storage units to accommodate the upcoming vehicle.
The mesoscopic traffic simulation model, including link movement and node transfer, can provide the required level of detail in estimating the speeds and travel times of individual vehicles on the network while balancing the trade-off between computational cost and traffic model realism. A platoon that includes multiple platoon vehicles is considered a single platoon entity. The rules for the movement of individual vehicles are applied to individual platoons, in which the properties (e.g., the number of platoon vehicles) are considered in the node transfer.
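The speed update of equation (4) can be written as a small function; the continuity condition at the critical density gives γ = v₀·k_c, and the numeric link parameters in the example are placeholders rather than calibrated values from the case study.

```python
def link_speed(k: float, v0: float, kc: float, kj: float) -> float:
    """Modified Smulders speed-density relation of equation (4).
    k: current link density, v0: free-flow speed, kc: critical density, kj: jam density."""
    if k <= 0.0:
        return v0
    gamma = v0 * kc                                   # continuity of v(k) at k = kc
    if k < kc:
        return v0 * (1.0 - k / kj)                    # uncongested branch
    return max(0.0, gamma * (1.0 / k - 1.0 / kj))     # congested branch, floored at zero

# Example with placeholder parameters: 50 km/h free-flow, kc = 25 veh/km, kj = 125 veh/km
for density in (10.0, 25.0, 60.0, 125.0):
    print(density, round(link_speed(density, v0=50.0, kc=25.0, kj=125.0), 1))
```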
### 3.4. Traffic Simulation for Platoon Vehicles

In the literature, the strategic platoon formation was studied while ignoring the traffic (as shown in Table 1: comparison of the platoon formation studies at the route level). We fill this gap by developing a simulation component for mixed operations of platoon AVs and nonplatoon AVs on top of a mesoscopic traffic simulation. The functional component for the mixed operation of platoon AVs and nonplatoon AVs can capture the traffic impact of forming platoons across the road network. The relationship between road capacity and different penetration rates of platoon AVs is established to assess the impact of the platoon formation on traffic conditions. Chen et al. [35] proposed a formulation to describe the correlation between platoon characteristics, including the proportion of platoon vehicles and intervehicle spacing levels, and the macroscopic capacity. The formulation reveals how the single-lane capacity changes for different penetration rates of platoon AVs. The derived macroscopic capacity formulation for mixed traffic solves the problem of determining the macroscopic traffic variables (used in the mesoscopic traffic simulation) based on platoon characteristics. Therefore, we can combine the macroscopic capacity formulation with the mesoscopic traffic simulation, which applies the macroscopic speed-density function to govern the movement of the vehicles in the simulation methodology. The single-lane capacity is expressed as

(5) $C_c = \dfrac{C_a}{1 - \dfrac{N}{M+N}(1-\alpha)\left(1-\dfrac{L}{N}\right)}$,

where $C_a$ denotes the lane capacity when all vehicles travel regularly; $L$ is the number of leaders; $N$ is the total number of platoon vehicles; $M$ is the total number of regular driving vehicles (i.e., AVs that are not in platoons); and $\alpha$ is the ratio of platoon spacing to regular spacing.

Table 2 Coefficients in the regression model for different vehicle types.

| Coefficient | α | β | γ |
| --- | --- | --- | --- |
| Nissan SV | 479.1 | −18.93 | 0.7876 |
| Kia | 468.6 | −14.63 | 0.6834 |
| Mitsubishi | 840.4 | −55.312 | 1.670 |
| BMW | 618.4 | −31.09 | 0.9916 |
| Ford | 1110 | −96.61 | 2.745 |
| Chevrolet | 701.2 | −35.55 | 1.007 |
| Smart | 890.8 | −43.12 | 1.273 |
| Nissan 2012 | 715.2 | −38.10 | 1.271 |

As shown in equation (5), the capacity $C_c$ depends on the penetration rate of platoon vehicles $\varphi = N/(M+N)$ and the number of leaders $L$. A smaller spacing between platoon vehicles allows an increase in the lane capacity. The lane capacity increases as the penetration rate of platoon vehicles, $\varphi$, increases. Moreover, for the same number of platoon vehicles $N$, the more leaders $L$ are created, the less the capacity increases. We use the following definitions of different critical spacing types according to the operational characteristics of vehicle platooning. The critical spacing when vehicles travel regularly (e.g., AVs that are not in platoons) is defined as $d_a$. We define $d_p = \alpha d_a$, where $0 < \alpha < 1$. We assume that the critical spacing between a platoon vehicle and a regular driving vehicle that is not in a platoon is also $d_a$. Notice that regular driving AVs that are not in platoons follow the regular driving distances of conventional vehicles, while platoon vehicles move at a reduced spacing. The formulation of the capacity of one lane (for one direction) shows how it is affected by the penetration rate of platoon vehicles and the number of leaders or platoons (each platoon has one leader). The detailed derivations of equation (5) can be found in the Appendix.
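A short sketch of equation (5), useful for checking how the single-lane capacity responds to the platoon penetration rate and to the number of leaders; the base capacity, spacing ratio, and vehicle counts below are illustrative assumptions, not values from the case study.

```python
def mixed_capacity(c_a: float, n_platooned: int, m_regular: int,
                   n_leaders: int, alpha: float) -> float:
    """Equation (5): single-lane capacity for a mix of platooned and regular AVs.
    c_a: capacity with all vehicles driving regularly; alpha: platoon/regular spacing ratio."""
    if n_platooned == 0:
        return c_a
    phi = n_platooned / (m_regular + n_platooned)               # platoon penetration rate
    reduction = phi * (1.0 - alpha) * (1.0 - n_leaders / n_platooned)
    return c_a / (1.0 - reduction)

# Illustrative numbers: 1800 veh/h base capacity, spacing ratio alpha = 0.5
print(mixed_capacity(1800, n_platooned=60, m_regular=40, n_leaders=10, alpha=0.5))   # ~2400 veh/h
print(mixed_capacity(1800, n_platooned=60, m_regular=40, n_leaders=30, alpha=0.5))   # more leaders -> smaller gain
```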
That is, the release of a platoon can be triggered either by reaching the maximum platoon size or by reaching the maximum hold-on (waiting) time of the platoon leader, as explained before. We denote the time threshold of platoon leaders as T and the maximum number of platoon vehicles as V. The physical constraints of road segments directly set a threshold for the number of vehicles in a platoon. Algorithm 1 explains the platoon formation mechanism.

The formation approach uses global knowledge about all arriving vehicles at each service point to assign them to an existing or newly created platoon. The vehicle sequence in a platoon is determined by the arrival time of each vehicle at the platoon formation location. The platoon leader makes decisions on behalf of the followers to trigger the platoon release. Once a SAEV is assigned to serve a traveler, it knows the origin and destination of that traveler, and its shortest route is calculated using the Dijkstra algorithm. In a platoon, the followers adjust their routes from their original shortest routes to the shortest route of the platoon leader. A plan is created for each formed platoon, including the platoon ID, the leader and its followers, the platoon route (origin, destination, and road segments), and the vehicle sequence in the platoon (see Algorithm 2). Once a platoon arrives at the destination service point of its travelers, all platoon vehicles are detached from the platoon (arriving vehicles are grouped according to their destinations, so the vehicles in a platoon share the same destination service point) and drop off their passengers at the destination service point; the state of each vehicle then transitions from in-service to idle. Notice that the volume of AMoD services can be high during the morning hours, which leads to many arriving (in-service) vehicles at a service point, and that platoon sizes are restricted in urban driving conditions. Many platoons can therefore be formed by grouping vehicles with the same destination, whereas adding vehicles with different destinations to a platoon may cause detours and penalize the vehicles that share a destination. Considering the urban formation locations, the driving conditions, and the high demand during peak hours, we did not model the scenario where a vehicle with a different destination detaches from a platoon to drop off a passenger while the other platooned vehicles continue to the next service points.

Algorithm 2: Pseudocode for determining platoon plans.

    INPUT: Groups of vehicles
    FOR the grouped vehicles in each destination group D
        Determine the leader for the grouped vehicles dk ∈ D;
        Initiate a platoon pk according to the platoon leader's information (location and shortest route);
        Assign the other vehicles in the group to the new platoon as followers;
        Determine the vehicle sequence according to the arrival time;
        Adjust the shortest routes of the followers in pk to the shortest route of the platoon leader r_pk;
    ENDFOR
    OUTPUT: Platoon plans, including the platoon ID, a leader and its followers, a platoon route, and the vehicle sequence
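The sketch below condenses the grouping and release logic of Algorithms 1 and 2 into a few methods. It is a simplified illustration under our own assumptions: the record types, the batch treatment of one set of arrivals, and the sample vehicles are invented, while the thresholds V = 4 and T = 4 minutes are simply one of the (T, V) policies tested later in Section 5.

```java
import java.util.*;

// Simplified sketch (not the paper's implementation) of destination grouping,
// leader designation by arrival time, and the size/hold-on release rule.
public class FormationSketch {

    record Vehicle(String id, String destination, double arrivalTimeMin) {}

    record Platoon(Vehicle leader, List<Vehicle> followers, String destination) {}

    static final int MAX_SIZE_V = 4;        // maximum platoon size V (one of the tested values)
    static final double MAX_HOLD_T = 4.0;   // leader hold-on threshold T in minutes (one of the tested values)

    /** Groups arrivals at one service point by destination; the earliest arrival in each group leads. */
    static Map<String, List<Vehicle>> groupByDestination(List<Vehicle> arrivals) {
        Map<String, List<Vehicle>> groups = new HashMap<>();
        arrivals.stream()
                .sorted(Comparator.comparingDouble(Vehicle::arrivalTimeMin))
                .forEach(v -> groups.computeIfAbsent(v.destination(), d -> new ArrayList<>()).add(v));
        return groups;
    }

    /** Release rule: a platoon departs when it reaches V vehicles or its leader has waited T minutes. */
    static boolean shouldRelease(Platoon p, double nowMin) {
        boolean sizeReached = 1 + p.followers().size() >= MAX_SIZE_V;
        boolean holdOnExpired = nowMin - p.leader().arrivalTimeMin() >= MAX_HOLD_T;
        return sizeReached || holdOnExpired;
    }

    public static void main(String[] args) {
        List<Vehicle> arrivals = List.of(
                new Vehicle("a0", "zone12", 0.0),
                new Vehicle("a1", "zone12", 1.5),
                new Vehicle("a2", "zone07", 2.0),
                new Vehicle("a3", "zone12", 2.5));

        for (List<Vehicle> group : groupByDestination(arrivals).values()) {
            Vehicle leader = group.get(0);                       // first to arrive leads
            Platoon p = new Platoon(leader, group.subList(1, group.size()), leader.destination());
            System.out.printf("Platoon to %s: leader %s, %d follower(s), release now? %b%n",
                    p.destination(), leader.id(), p.followers().size(), shouldRelease(p, 5.0));
        }
    }
}
```

In the actual agent-based model, arrivals are of course handled dynamically rather than as one batch, but the decision points are the same: group by destination, designate the first arrival as leader, and release on size V or hold-on time T.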
## 3.1. Energy Consumption of the SAEVs

Existing studies estimate the energy consumption of electric vehicles on the network level as a function of travel distance, which means translating the kilometers driven into an estimate of energy consumed [30, 31]. However, the strong correlation between energy consumption and vehicle speed is not considered. We attempt to estimate the energy consumption of SAEVs and account for traffic congestion by making it a function of the experienced travel speed. The energy model is linked to a mesoscopic traffic simulation model in which the effect of forming platoons on traffic conditions is considered, so it is capable of accounting for the effect of platoon driving. The energy consumption model contains a set of regression models for different vehicle types. These regression models are used to calculate the energy consumption associated with one vehicle traversing each road segment based on the speed of the vehicle and the length of the road segment. The calculation method is as follows. First, the average speed of each individual SAEV traversing the corresponding road segment is calculated. Second, the energy consumed by the SAEV per unit distance is estimated using the regression model in equation (1), which describes the relationship between energy consumption and travel speed. Third, the total energy consumption on the route between the origin and the destination is calculated as the sum of the energy consumed by the individual SAEV in each road segment, as shown in equation (2):

$$E_i = \alpha + \beta S_i + \gamma S_i^2, \tag{1}$$

where $\alpha$, $\beta$, and $\gamma$ are coefficients, $S_i$ is the travel speed of an individual SAEV traversing road segment $i$, and $E_i$ is the energy consumption per unit distance on segment $i$. The total energy consumption of each SAEV to complete the pickup trip or drop-off trip can thus be calculated as

$$E_t = \sum_{i=1}^{n} E_i \, L_i, \tag{2}$$

where $n$ is the total number of road segments between the locations (e.g., between the location of the assigned vehicle and the origin of the traveler, or between the origin of the traveler and his/her destination), $L_i$ is the length of road segment $i$, and $E_t$ is the total energy consumption of an SAEV to complete the pickup or drop-off trip.

We estimate the energy consumption of different types of vehicles. Each vehicle type corresponds to a regression model derived from laboratory dynamometer tests [32]. The coefficients for the different vehicle types are given in Table 2 in Section 4, where the application of the model is presented.

Table 1: Comparison of the strategic platoon formation studies at the route level.

Studies | Modeling components | Impact analysis | Many vehicles | Road network level | Demand and supply interaction | Mixed traffic | Platoon policies | Coordination strategies | Platoon vehicles | Traffic throughput | Energy consumption | Service level (waiting and travel times) | Platoon sizes | Formation time constraints | Speed adjustment (slow down or catch up) | Hold-on strategy | Traffic | Aerodynamics

Hall and Chin [21] ✓✓✓✓✓✓
Larson et al. [26] ✓✓✓✓✓✓
Saeednia and Menendez [22] ✓✓✓
Larsson et al. [23] ✓✓✓✓✓
Liang et al. [5] ✓✓✓✓
van de Hoef [25] ✓✓✓✓✓✓
Johansson et al. [27] ✓✓✓✓✓
Our approach ✓✓✓✓✓✓✓✓✓✓
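The short sketch below illustrates equations (1) and (2): a quadratic per-unit-distance regression in the segment speed, summed over the segments of a route. The $\alpha$, $\beta$, $\gamma$ values are the Nissan SV row of Table 2; the route segments, speeds, and the generic "energy units" are our own illustrative assumptions.

```java
import java.util.List;

// Sketch of the per-segment regression energy model in equations (1) and (2).
// Coefficients are taken from the Nissan SV row of Table 2; the segment data and
// the unspecified "energy units" are illustrative assumptions.
public class EnergySketch {

    record Coefficients(double alpha, double beta, double gamma) {}

    record Segment(double lengthKm, double speedKmh) {}

    /** Equation (1): energy per unit distance as a quadratic function of the travel speed. */
    static double energyPerUnitDistance(Coefficients c, double speed) {
        return c.alpha() + c.beta() * speed + c.gamma() * speed * speed;
    }

    /** Equation (2): total trip energy as the sum over the traversed road segments. */
    static double tripEnergy(Coefficients c, List<Segment> route) {
        return route.stream()
                .mapToDouble(s -> energyPerUnitDistance(c, s.speedKmh()) * s.lengthKm())
                .sum();
    }

    public static void main(String[] args) {
        Coefficients nissanSv = new Coefficients(479.1, -18.93, 0.7876); // Table 2, Nissan SV
        List<Segment> route = List.of(
                new Segment(0.8, 22.0),   // congested urban segment (assumed)
                new Segment(1.5, 38.0),   // free-flowing segment (assumed)
                new Segment(0.6, 15.0));  // segment near the destination (assumed)
        System.out.printf("Estimated trip energy: %.1f energy units%n", tripEnergy(nissanSv, route));
    }
}
```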
## 3.2. Real-Time Vehicle Assignment

The vehicle assignment component assigns available vehicles to serve travelers as travel requests come in; the requests are generated according to the aggregate travel demand (explained in Section 4.2). The component assigns the nearest available SAEV with enough battery power to serve a traveler to his/her destination. For that to happen, there must be a real-time estimate of how much energy is needed to satisfy that traveler, and this is estimated for each candidate vehicle based on its particular vehicle type.

The process of finding available vehicles for travel requests goes as follows. First, the energy consumption of an individual vehicle to complete the intended trip is estimated based on the energy function; the estimate of the energy spent on transporting the intended traveler is calculated using equation (3). Second, based on this estimate, the available vehicles with sufficient remaining battery capacity to undertake the traveler's journey are filtered from the group of idle vehicles. Finally, the vehicle located at the shortest Euclidean distance within the search radius is chosen from the filtered pool of available vehicles:

$$E_e = \eta \, E_t, \tag{3}$$

where $\eta$ is a safety coefficient used to ensure that the estimated energy for a traveler's intended trip is not less than the energy actually consumed by the vehicle to complete the trip, which may differ if traffic conditions change, and $E_e$ is the estimated energy required by an individual vehicle to complete the trip of a traveler.

The function in equation (3) estimates the energy needed to complete a traveler's trip based on the link travel speeds at the moment when the traveler calls the service, while the actual energy consumed uses the experienced speeds of the vehicle in equation (2) to calculate the energy spent after completing the trip. A proper estimate of the energy needed for the intended trip ensures that the assigned vehicle has sufficient battery capacity to reach the traveler's destination.

Once an available vehicle with sufficient remaining energy is assigned to a traveler, the time-dependent shortest path (lowest duration) from the current vehicle location to the traveler's location is computed. After the vehicle arrives at the pickup location, the time-dependent shortest path from the traveler's location to the destination is determined. The computation of time-dependent shortest routes is based on the Dijkstra algorithm.
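As an illustration of the assignment filter just described, the sketch below applies equation (3) to screen candidate vehicles by remaining battery capacity and then picks the nearest one by Euclidean distance. The value $\eta = 3.05$ comes from Table 4; the candidate records, flat x/y coordinates, and kWh figures are assumptions, and the search-radius check is omitted for brevity.

```java
import java.util.*;

// Sketch of the Section 3.2 assignment filter: estimate the energy a candidate vehicle
// would need for the intended trip (equation (3)), keep only vehicles with enough
// remaining battery, and pick the nearest feasible one. All data values are illustrative.
public class AssignmentSketch {

    record Candidate(String id, double x, double y, double remainingKwh, double estTripKwh) {}

    static final double ETA = 3.05; // safety coefficient eta (Table 4)

    /** Equation (3): conservative estimate of the energy needed for the intended trip. */
    static double estimatedEnergy(double estTripKwh) {
        return ETA * estTripKwh;
    }

    /** Returns the nearest feasible candidate, if any, for a traveler located at (tx, ty). */
    static Optional<Candidate> assign(List<Candidate> idleVehicles, double tx, double ty) {
        return idleVehicles.stream()
                .filter(c -> c.remainingKwh() >= estimatedEnergy(c.estTripKwh()))
                .min(Comparator.comparingDouble(c -> Math.hypot(c.x() - tx, c.y() - ty)));
    }

    public static void main(String[] args) {
        List<Candidate> idle = List.of(
                new Candidate("v1", 0.5, 0.2, 20.0, 3.0),  // feasible and fairly close
                new Candidate("v2", 0.1, 0.1, 5.0, 3.0),   // closest, but 5 kWh < 3.05 * 3.0 kWh
                new Candidate("v3", 2.0, 2.0, 24.0, 3.0)); // feasible, farther away
        assign(idle, 0.0, 0.0).ifPresentOrElse(
                c -> System.out.println("Assigned vehicle: " + c.id()),
                () -> System.out.println("No feasible vehicle within reach"));
    }
}
```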
## 3.3. Mesoscopic Traffic Simulation

The modeling framework for the proposed system needs to simulate the operations of many vehicles transporting all of the city's private car commuters over a realistic urban road network. A mesoscopic traffic simulation model that includes link movement and node transfer is incorporated into the agent-based modeling framework [33, 34]. The mesoscopic model combines a microscopic-level representation of individual vehicles with a macroscopic description of the traffic patterns. In the link movement, vehicular movements are simulated: the vehicle speed on a road segment is updated according to an established macroscopic speed-density relationship. A modified Smulders speed-density relationship (equation (4)) is used to update the vehicle speed based on the link density:

$$v(k) = \begin{cases} v_0\left(1 - \dfrac{k}{k_j}\right), & k < k_c,\\[6pt] \gamma\left(\dfrac{1}{k} - \dfrac{1}{k_j}\right), & k \ge k_c, \end{cases} \tag{4}$$

where $k$ is the link traffic density, $v(k)$ is the speed determined by the traffic density $k$, $v_0$ is the free-flow speed, $k_c$ is the link critical density, and $k_j$ is the link jam density. $\gamma$ is a parameter whose value can be derived as $\gamma = v_0 k_c$, which keeps the speed-density function continuous at the critical density.

Node transfer means that vehicles transfer between adjacent road segments. A vehicle moving from an upstream link (road segment) to a downstream link follows the defined rules:

(1) The vehicle is at the head of the upstream link queue; in other words, there are no preceding vehicles stacking in the waiting queue.
(2) The number of outflow vehicles has been checked to determine whether a vehicle can leave the road segment it is traversing.
(3) The number of storage vehicles has been checked to determine whether the downstream link has enough storage units to accommodate the upcoming vehicle.

The mesoscopic traffic simulation model, including link movement and node transfer, provides the required level of detail in estimating the speeds and travel times of individual vehicles on the network while balancing the trade-off between computational cost and traffic model realism. A platoon that includes multiple platoon vehicles is treated as a single platoon entity: the rules for the movement of individual vehicles are applied to individual platoons, and the platoon's properties (e.g., the number of platoon vehicles) are taken into account in the node transfer.
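The following sketch evaluates the speed-density update of equation (4) for a single link. The free-flow speed and jam density follow the urban road 1 row of Table 3; the critical density is an assumption derived by matching the 35 km/h "speed at capacity" from that table, so it should be read as illustrative rather than as the paper's calibrated value.

```java
// Sketch of the modified Smulders speed-density relationship (equation (4)) used to
// update link speeds in the mesoscopic model. v0 and kj follow Table 3 (urban road 1);
// kc is assumed so that v(kc) equals the 35 km/h speed at capacity from Table 3.
public class SpeedDensitySketch {

    static final double V0 = 50.0;   // free-flow speed (km/h), Table 3
    static final double KJ = 120.0;  // jam density (veh/km), Table 3
    static final double KC = 36.0;   // critical density (veh/km), assumed: v0 * (1 - kc/kj) = 35 km/h
    static final double GAMMA = V0 * KC; // continuity at k = kc gives gamma = v0 * kc

    /** Equation (4): link speed as a function of the link density k. */
    static double speed(double k) {
        if (k < KC) {
            return V0 * (1.0 - k / KJ);
        }
        return Math.max(0.0, GAMMA * (1.0 / k - 1.0 / KJ));
    }

    public static void main(String[] args) {
        for (double k : new double[] {10.0, 36.0, 60.0, 110.0}) {
            System.out.printf("k = %5.1f veh/km  ->  v = %5.1f km/h%n", k, speed(k));
        }
        // At k = 36 both branches give 50 * (1 - 36/120) = 35 km/h, so the function is
        // continuous at the critical density, which is why gamma = v0 * kc.
    }
}
```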
## 4. Model Application

The detailed conceptual framework is implemented in the AnyLogic multimethod simulation modeling platform and coded in the Java programming language. The data used in the simulation experiment are explained below.

### 4.1. The Topology of the Road Network in The Hague

Figure 3 displays the road network of the Zuidvleugel region (around Rotterdam and The Hague). The blue color indicates the part of the road network that is used for the simulation study, which includes eight districts of The Hague and the towns of Voorburg, Rijswijk, and Wateringen. The dots are the centroids of the traffic analysis zones (TAZs), which are the origins and destinations of all travel requests. The data containing the aggregated OD matrix, the departure time distribution, and the information about the study area centroids and the road network are exported from the OmniTRANS transport planning software.

Figure 3: Road network of The Hague in the Zuidvleugel road network.

### 4.2. Detailed Travel Demand

The OD trip table containing a total of 27,452 trips made by cars is used as the input to generate time-dependent travel requests. The OD trip table specifies the travel demand between TAZs in the AM peak hours over the study area. The departure time fractions shown in Figure 4 are used to calculate the number of trips between OD pairs per 15-minute time interval from 5:30 am to 10:00 am. A demand generator (see Appendix) generates time-dependent travel requests based on the aggregate travel demand. Individual travel requests are characterized by the origin zone, the destination zone, and the time of the request.

Figure 4: Departure time fractions for 18 time intervals from 5:30 am to 10:00 am.

### 4.3. Simulation Parameters

The traffic parameters provide information about the traffic flow characteristics of the regular driving vehicles (that are not in platoons). In platoon driving, the intervehicle distance (dp) is determined based on field experiments [37, 38]. We test different platoon formation strategies and compare their performance while treating the parameter dp as fixed.
Table 3: Summary of traffic-related parameter values for different road types.

| Road type | Capacity (vehicles per hour per lane) | Free-flow speed (km/h) | Saturation flow (vehicles per hour per lane) | Speed at capacity (km/h) | Jam density (vehicles per km) |
| --- | --- | --- | --- | --- | --- |
| Urban road 1 | 1200 | 50 | 1200 | 35 | 120 |
| Urban road 2 | 1200 | 50 | 1200 | 35 | 120 |
| Urban road 3 | 1575 | 50 | 1575 | 35 | 120 |
| Urban road 4 | 1600 | 50 | 1600 | 35 | 120 |
| Urban road 5 | 1633 | 50 | 1633 | 35 | 120 |
| Rural road | 1350 | 50 | 1350 | 35 | 120 |
| Local road | 900 | 50 | 900 | 35 | 120 |
| Local road | 900 | 30 | 900 | 25 | 120 |

The vehicle models used for the energy estimation are these commonly sold electric vehicles: Nissan Leaf SV 2013, Kia Soul Electric 2015, Nissan Leaf 2012, BMW i3 BEV 2014, Ford Focus Electric 2013, Mitsubishi i-MiEV 2012, Chevrolet Spark EV 2015, and Smart EV 2014. The coefficients used in equation (1) are adopted from the work in reference [32] (see Table 2).

We assume that SAEVs can be charged rapidly to 80% of the battery capacity in 30 minutes at every service point. All types of SAEVs initially have a battery level of 24 kWh. The value of η used in estimating the energy consumption in equation (3) is determined based on a trial-and-error approach: it must be guaranteed that no travelers are stranded due to insufficient battery power of the assigned vehicles. We repeatedly ran the simulation model, increasing the value of η until the estimated energy $E_e$ was sufficient for each assigned vehicle to complete its intended trip. SAEVs are deployed over the designated service points in proportion to the amount of travel demand at the corresponding service point. The 49 TAZs are connected to the road network using zone centroids, and the 49 locations of these centroids in the road network are designated as service points in the urban AMoD system. Table 4 gives a summary of the main model parameters.

Table 4: Summary of the main model parameters.

| Category | Value |
| --- | --- |
| Perimeter of the study area | 46 km |
| Size of the study area | 139 km² |
| Time step for speed update | 6 seconds |
| Intervehicle distance (dp) in platoons | 6 meters |
| Avg. fleet size per service point (vehicles) for 100% demand | 170 |
| Service points (centroids of the zones) | 49 |
| Road segments | 836 |
| Road nodes | 510 |
| Total travel demand | 27,452 trips |
| Maximum number of platoon vehicles | {2, 4, 6, 8} vehicles |
| Time threshold for platoon leaders | {2, 4, 6, 8} minutes |
| Charging time | 30 minutes |
| Coefficient η | 3.05 |
| Battery initial capacity | 24 kWh |
| Average travel time under light traffic | 18 minutes |
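As one possible reading of the demand generator described in Section 4.2, the sketch below splits an OD pair's trips over 15-minute intervals using departure-time fractions and stamps each request with a random time within its interval. The fractions, the zone labels, and the per-pair trip count are placeholders; the real fractions are those of Figure 4, and the real OD matrix comes from the OmniTRANS export, so this is only an assumed illustration of the mechanism.

```java
import java.util.*;

// Illustrative sketch of a time-dependent request generator in the spirit of Section 4.2.
// Fractions, zones, and the trip count are placeholders, not the study's data.
public class DemandSketch {

    record Request(String originZone, String destinationZone, double requestTimeMin) {}

    public static void main(String[] args) {
        int odTrips = 40;                              // trips for one illustrative OD pair
        double[] fractions = {0.10, 0.25, 0.40, 0.25}; // placeholder departure-time fractions
        double intervalMin = 15.0;                     // 15-minute intervals starting at 5:30 am
        Random rng = new Random(7);

        List<Request> requests = new ArrayList<>();
        for (int i = 0; i < fractions.length; i++) {
            long tripsInInterval = Math.round(odTrips * fractions[i]);
            for (long t = 0; t < tripsInInterval; t++) {
                double intervalStart = i * intervalMin;
                requests.add(new Request("zone03", "zone17", intervalStart + rng.nextDouble() * intervalMin));
            }
        }
        requests.sort(Comparator.comparingDouble(Request::requestTimeMin));
        System.out.println("Generated " + requests.size() + " time-stamped requests");
        requests.stream().limit(5).forEach(r ->
                System.out.printf("  request at t = %.1f min after 5:30 am%n", r.requestTimeMin()));
    }
}
```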
## 5. Simulation Results and Discussion

Twenty-five effective simulation scenarios are considered for the following purposes. First, scenarios for the platoon formation policies are simulated to investigate how the formation of platoons affects the level of service provided to travelers. Second, the demand for AMoD services, with or without forming platoons, may influence the service levels provided; therefore, we design simulation scenarios with different demand levels.
Third, simulation experiments are conducted to evaluate the impact of forming platoons on energy consumption for different car models under different formation policies. Table 5 gives detailed explanations of the main KPIs.

Table 5: Description of the main KPIs.

| Key performance indicator | Description |
| --- | --- |
| Delay of travelers in platoon vehicles | The time delay of platoon vehicles is the average dwell time that platoon vehicles (platoon leaders and platoon followers) spend at formation points without moving. |
| Delay of travelers with platoon leaders (platoon delay for leaders) | The time delay of platoon leaders is the average dwell time that platoon leaders spend at formation locations without moving. |
| Network travel time | The network travel time is the in-vehicle time spent on average by all served travelers when vehicles are traveling from origin to destination. Platoon delays are not included in the network travel time for travelers in platoon vehicles. |
| Platoon travel time | The platoon travel time is calculated as the platoon delay plus the network travel time of travelers in platoon vehicles. |
| Congestion level | The congestion level describes how much longer, on average, vehicular trips take during the AM peak hours compared to the average travel time in light traffic conditions. The average travel time in light traffic in the case-study city is estimated based on the travel speed suggested by Ligterink [39]. |
| 90% quantile travel time | The 90% quantile travel time indicates the travel time that is longer than 90% of the trips. |
| Percentage of energy savings | The percentage reduction in the energy consumption of all vehicular trips in the platoon scenarios compared to the nonplatoon baseline scenario. |

### 5.1. Analysis of Service Levels in Platoon Scenarios

#### 5.1.1. Platoon Delays of Travelers in Platoon Vehicles

We analyze the system's performance with the platoon formation in terms of the platoon delay of travelers in the platoon vehicles at different demand levels. As shown in Table 6, the demand for AMoD services (as input) is varied from 100% to 20% of the total private car trips in the study area. The fleet sizes at the different demand levels in Table 6 are scaled down by the same factor as the travel demand. For every demand level, platoon formation policies (T, V) are defined, where T stands for the time threshold and V for the platoon size threshold. We simulate the scenarios with the platoon formation policies (T2, V2), (T4, V4), (T6, V6), and (T8, V8), where T2 means that the maximum waiting time is 2 minutes and V2 means that the maximum platoon size is 2.

Table 6: Average delay of platoon vehicles for different demand levels.

| Demand level | 100% | 80% | 60% | 40% | 20% |
| --- | --- | --- | --- | --- | --- |
| Number of travel requests (trips) | 27,452 | 21,962 | 16,417 | 10,980 | 5,490 |
| Avg. fleet size per service point | 170 | 136 | 102 | 68 | 34 |
| Avg. delay of platoon vehicles (minutes): | | | | | |
| (T2, V2) | 0.66 | 0.66 | 0.66 | 0.66 | 0.66 |
| (T4, V4) | 2.30 | 2.30 | 2.38 | 2.51 | 2.75 |
| (T6, V6) | 3.23 | 3.22 | 3.29 | 3.50 | 3.67 |
| (T8, V8) | 3.67 | 3.67 | 3.87 | 4.01 | 4.62 |

The simulation results in Table 6 show that increasing the values of the two attributes of the platoon formation policy (from (T2, V2) to (T8, V8)) lengthens the platoon delays of travelers in platoon vehicles. Under the platoon formation policy (T8, V8), the platoon delay of travelers inside platoon vehicles is about 3.67 minutes, which is more than five times the platoon delay of travelers under the policy (T2, V2).
These results suggest that the formation of platoons can cause long unexpected delays for travelers in the platoon vehicles.

Moreover, the results suggest that the delay of travelers in platoon vehicles tends to increase as the demand level decreases. For example, the delay of travelers in platoon vehicles increases by about 25% when demand falls from 100% to 20% of the total private car trips under the formation policy (T8, V8). Few travelers requesting AMoD services cause longer delays for the travelers in platoon vehicles, while a relatively large number of AMoD users leads to smaller platoon delays. There tends to be an inverse relationship between the demand level and the platoon delays.

In order to examine the platoon delay encountered by travelers in more detail, the delay of travelers with platoon leaders is presented in Table 7. The results indicate that the delays experienced by the travelers with platoon leaders are approximately twice those of the other platoon vehicles under the formation policy (T8, V8). That is, travelers who are with platoon leaders have to wait longer than travelers in the other vehicles of the platoon, so the platoon formation has considerably more impact on the level of service provided to travelers with the platoon leaders. Since vehicles in a formed platoon are arranged in order of arrival, the platoon leader arrives early at the service point and waits the longest for the other vehicles to form a platoon; the platoon delays become smaller and smaller for the followers that arrive later.

Table 7: Platoon delays for platoon leaders and platoon vehicles under different operating policies.

| Platoon scenario | No platoons | (T2, V2) | (T4, V4) | (T6, V6) | (T8, V8) |
| --- | --- | --- | --- | --- | --- |
| Time threshold (minutes) | 0 | 2 | 4 | 6 | 8 |
| Platoon size threshold (vehicles) | 1 | 2 | 4 | 6 | 8 |
| Avg. delay of platoon leaders (minutes) | 0 | 0.69 | 3.49 | 5.67 | 7.02 |
| Avg. delay of platoon vehicles (minutes) | 0 | 0.66 | 2.30 | 3.23 | 3.67 |

#### 5.1.2. Congestion Levels and Network Travel Times

We investigate the impact of forming platoons on network traffic performance. The indicator of the network congestion level (explained in Table 5) is used to evaluate travel conditions under the different platoon formation scenarios, with the congestion levels in the nonplatoon scenarios as the baseline for comparison. For instance, the 53.28% congestion level of the nonplatoon scenario corresponds to a 27.59-minute average network travel time, about 1.53 times the 18-minute average travel time under light traffic (Table 4).

Moreover, we measure the network travel time of all travelers (in platoons and not in platoons) and the platoon travel times of travelers in the platoon vehicles. Note that the platoon delay is not included in the network travel time, while the platoon travel time is calculated as the platoon delay plus the network travel time.

The results in Table 8 show that the platoon formation can reduce the congestion levels and the network travel times for all travelers. Compared to the nonplatoon scenario, the formation policy (T2, V2) obtains a minimal reduction of 18 percentage points in the congestion level, corresponding to a reduction in the network travel time of about 3 minutes. The formation policy (T8, V8) reduces the congestion level by up to 41.61 percentage points, which is equivalent to a reduction in the network travel time of about 7 minutes. This is because more vehicles are coordinated in platoons as the values of the two attributes (T, V) of the platoon formation policy are increased: as shown in Table 8, the total number of vehicular trips in platoons rises from 5564 to 8056 trips, and Figure 5 shows that the number of platoon vehicles circulating in the transportation network increases from the policy (T2, V2) to the policy (T8, V8).
The more vehicles travel in platoons, the more the road capacity increases, and the increased road capacity leads to an improvement in the network travel time.

Table 8: Congestion levels, network travel times, and platoon travel times at the 100% demand level.

| Scenario | Congestion level (%) | Network travel time for all vehicles (minutes) | Total number of vehicular trips in platoons | 90% quantile (network) travel time (minutes) | Platoon travel time of travelers in platoon vehicles (minutes) | Platoon travel time of travelers with platoon leaders (minutes) |
| --- | --- | --- | --- | --- | --- | --- |
| Nonplatoon scenario | 53.28 | 27.59 | n/a | 70.05 | n/a | n/a |
| (T2, V2) | 35.28 | 24.35 | 5564 | 59.86 | 25.01 | 25.04 |
| (T4, V4) | 20.39 | 21.67 | 6899 | 51.12 | 23.97 | 25.16 |
| (T6, V6) | 13.56 | 20.44 | 7611 | 44.20 | 23.67 | 26.11 |
| (T8, V8) | 11.67 | 20.10 | 8056 | 43.70 | 23.77 | 27.12 |

Figure 5: The number of vehicles traveling in platoons on the network over time.

Furthermore, as shown in Figure 6, the number of vehicles circulating in the transportation network decreases as the number of vehicles traveling in platoons (see Figure 5) increases. The formation of platoons thus decreases the number of vehicles circulating in the transportation network at any moment; when this number decreases, travel conditions improve and vehicles can travel faster through the road network.

Figure 6: The number of all vehicles circulating in the network over time (in platoons and not in platoons).

As shown in Figure 6, the duration during which a high number of vehicles circulates in the transportation network is shorter in the platoon scenarios than in the scenario without forming platoons, and it becomes shorter as more vehicles travel in platoons over the network. This result suggests that the platoon formation could reduce the duration of urban road congestion.

We compare the 90% quantile travel time in the platoon scenarios to that in the nonplatoon scenario to take a closer look at how the formation of platoons affects network travel times; shorter 90% quantile travel times imply reductions in the network travel times. The results in Table 8 show that the formation of platoons can reduce the 90% quantile travel times. The 90% quantile travel times are about 44 minutes for the policies (T6, V6) and (T8, V8), roughly 26 minutes less than in the scenario without the formation of platoons. These results indicate that the network travel conditions are significantly improved by the formation of platoons.

Overall, the formation of platoons could reduce the road congestion level and shorten the congestion duration, so that, on average, travelers can travel faster across the urban road network. Moreover, the number of vehicles circulating in the transportation network affects the (network) reliability [40]; therefore, the platoon formation has the potential to improve travel time reliability.

#### 5.1.3. Platoon Travel Times

The formation of platoons can cause platoon delays for travelers in the platoon vehicles while reducing network travel times. We found that the platoon travel time, which includes the platoon delay of travelers in platoons plus the network travel time, is shorter than the network travel time in the nonplatoon scenario. The results of simulating a high-demand scenario, in which the AMoD system serves 100% of the commuter trips made by private car, show that the formation policies (T6, V6) and (T8, V8) yield platoon travel times that are more than 1 minute shorter than the in-vehicle travel time of travelers in the nonplatoon scenario (see Table 8).
The reason for this is that the reduction in the network travel times offsets the platoon delays, leading to a shorter platoon travel time.

Although the platoon formation can reduce network travel times, travelers riding with the platoon leaders face longer unexpected delays. This leads to a long platoon travel time (27 minutes) for travelers with the leaders, similar to nonplatoon scenarios in which high congestion is present.

Moreover, we found that the formation of platoons cannot improve the network travel time in the low-demand scenarios. For example, the 90% quantile (network) travel time is around 13 minutes and is not reduced by the formation of platoons when the demand level is below 60% (see Table 9). This suggests that platoon driving has no effect on traffic when demand is low and only delays the travelers in the platoon vehicles.

Table 9: The 90% quantile (network) travel time at different demand levels.

| Demand level | 100% | 80% | 60% | 40% |
| --- | --- | --- | --- | --- |
| Number of travel requests (trips) | 27,452 | 21,962 | 16,417 | 10,980 |
| Avg. fleet size per service point | 170 | 136 | 102 | 68 |
| 90% quantile (network) travel time (minutes): | | | | |
| Nonplatoon scenario | 70.05 | 31.52 | 15.13 | 13.49 |
| (T2, V2) | 59.86 | 28.34 | 14.10 | 13.50 |
| (T4, V4) | 51.12 | 26.37 | 13.95 | 13.17 |
| (T6, V6) | 44.20 | 20.71 | 13.81 | 13.38 |
| (T8, V8) | 43.70 | 19.67 | 13.89 | 13.41 |

### 5.2. Energy Consumption Analysis with the Platoon Formation

We evaluate the impact of forming platoons on the system-wide energy consumption for the different vehicle types. The results in Figure 7 indicate that the formation of platoons can reduce the total energy consumed by all vehicles in the AMoD system. The greatest reduction in total energy consumption ranges from 0.42% for the Kia Soul Electric 2015 to 9.56% for the Ford Focus Electric 2013, depending on the vehicle type. Moreover, more savings are achieved when the time threshold (T) and the platoon size threshold (V) for platoon release are increased. The reason is that more vehicles are coordinated in platoons, which results in more vehicles driving in platoons; less congestion occurs when more platoon vehicles circulate across the transportation network, indicating improvements in traffic efficiency. Therefore, more energy can be saved when platoons are formed.

Figure 7: Total energy savings of AMoD systems for different types of electric vehicles (T represents the time threshold of platoon leaders, and V is the maximum number of platoon vehicles).

The results in Figure 7 also show that the energy savings differ across vehicle types under the same formation policy. The maximum saving of up to 9.56% is achieved for the Ford Focus Electric 2013 under the (T8, V8) formation policy, while the Kia Soul Electric 2015 has the lowest energy saving of 0.42%. This is because differences in the vehicles' energy consumption characteristics lead to different energy savings. The energy consumption model contains a set of regression models corresponding to the different vehicle types; each regression model, derived from laboratory dynamometer tests, calculates the energy consumption as a function of the travel speed. In urban driving, vehicles consume more energy at lower speeds, and the energy consumption of an individual vehicle declines as the vehicle speed increases, so vehicles consume less energy per unit distance traveled as the travel speed rises. However, the modeled energy performance differs between car types: the vehicle type with the steepest gradient of the modeled energy consumption-speed function sees the largest reduction in energy consumption for the same increase in vehicle speed.
The Ford Focus Electric 2013 has the steepest decline in its energy consumption-speed function; therefore, when vehicle speeds increase, the Ford vehicle type shows the largest reduction in energy consumption. The energy saving of the Kia Soul Electric 2015, which has the least steep gradient of the energy consumption function, ranks at the bottom.

We find that the degree of energy savings strongly depends on the vehicle type as well as on the platoon formation policy. Coordinating more vehicles in platoons can significantly improve the energy efficiency for some vehicle types, but for other vehicle types the improvement is relatively small because of their energy consumption characteristics.
The platoon formation has considerably more impact on the level of service provided to travelers with the platoon leaders. Since vehicles in the formed platoon are arranged in order of arrival, the platoon leader arrived early at the service point and waited the longest for the other vehicles to form a platoon. The platoon delays are getting smaller and smaller for the followers that arrive later.Table 7 Platoon delays for platoon leaders and platoon vehicles under different operating policies. The time threshold (minutes)Nonplatoon (0)2468The platoon size threshold (vehicles)Nonplatoon (1)2468Platoon scenariosNo platoons(T2, V2)(T4, V4)(T6, V6)(T8, V8)Avg. delay of platoon leaders (minutes)00.693.495.677.02Avg. delay of platoon vehicles (minutes)00.662.303.233.67 ### 5.1.2. Congestion Levels and Network Travel Times We investigate the impact of forming platoons on network traffic performance. The indicator of the network congestion level (explained in Table5) is defined to evaluate travel conditions under different platoon formation scenarios. The congestion levels in nonplatoon scenarios are used as a baseline for comparison.Moreover, we measure the network travel time of all travelers (in platoons and not in platoons) and platoon travel times of travelers in the platoon vehicles. Note that the platoon delay is not included in the network travel time, while the platoon travel time is calculated by the platoon delay plus the network travel time.Results in Table8 show that the platoon formation can reduce the congestion levels and network travel times for all travelers. Compared to the nonplatoon scenario, the formation policy (T2, V2) obtains a minimal reduction of 18% in the congestion level, resulting in a reduction in the network travel time of about 3 minutes. The formation policy (T8, V8) reduces the congestion level by up to 41.61%, which is equivalent to a reduction in the network travel time of about 7 minutes. This is because more vehicles are coordinated in platoons as the values of the two attributes (T, V) in the platoon formation policy are increased. As shown in Table 8, the total number of vehicular trips in platoons rose from 5564 to 8056 trips. Figure 5 shows that the number of platoon vehicles circulating in the transportation network increases (from the policy (T2, V2) to the policy (T8, V8)). The more the vehicles travel in platoons, the more the road capacity gets increased. The increased road capacity leads to an improvement in the network travel time.Table 8 Congestion levels, network travel times, and platoon travel times at 100% demand level. IndicatorsCongestion levels (%)Network travel time for all vehicles (minutes)The total number of vehicular trips in platoons90% quantile (network) travel times (minutes)Platoon travel time of travelers in platoon vehicles (minutes)Platoon travel time of travelers in platoon leaders (minutes)Nonplatoon scenario53.2827.59No70.05NoNo(T2, V2)35.2824.35556459.8625.0125.04(T4, V4)20.3921.67689951.1223.9725.16(T6, V6)13.5620.44761144.2023.6726.11(T8, V8)11.6720.10805643.7023.7727.12Figure 5 The number of vehicles traveling in platoons on the network over time.Furthermore, as shown in Figure6, the number of vehicles circulating in the transportation network decreases as the number of vehicles traveling in platoons (see Figure 5) increases. The formation of platoons decreases the number of vehicles circulating in the transportation network. 
When the number of vehicles circulating in the transportation decreases, travel conditions are improved. As a result, vehicles can travel faster through the road network.Figure 6 The number of all vehicles circulating in the network over time (in platoons and not in platoons).As shown in Figure6, the duration during which a high number of vehicles circulates in the transportation network is reduced in platoon scenarios compared to the scenario without forming platoons. The duration is shorter and shorter as more and more vehicles travel in platoons over the transportation network. The result suggests that the platoon formation could reduce the duration of urban road congestion.We compare the 90% quantile travel time in the platoon scenarios to the nonplatoon scenario to take a closer look at how the formation of platoons affects network travel times. Shorter 90% quantile travel times imply reductions in the network travel times. Results in Table8 show that the formation of platoons can reduce the 90% quantile travel times. The 90% quantile travel times are about 44 minutes for the policies (T6, V6) and (T8, V8), which is 30 minutes less than that in the scenario without the formation of platoons. The results indicate that the network travel conditions are significantly improved by the formation of platoons.Overall, the formation of platoons could reduce the road congestion level and shorten the congestion duration. On average, travelers can travel faster across the urban road network. Moreover, the number of vehicles circulating in the transportation network affects the (network) reliability [40]. Therefore, the platoon formation has the potential to improve the travel time reliability. ### 5.1.3. Platoon Travel Times The formation of platoons could cause platoon delays of travelers in the platoon vehicles while reducing network travel times. We found that the platoon travel time, including the platoon delay of travelers in platoons and network travel time, is shorter than the network travel time in the nonplatoon scenario. Results of simulating a high-demand scenario where the AMoD system serves 100% of commuter trips made by private car show that formation policies (T6, V6) and (T8, V8) have more than 1 minute less in the platoon travel times than the in-vehicle travel time of travelers in the nonplatoon scenario (see Table8). The reason for this is that the reduction in the network travel times offset the platoon delays, leading to a shorter platoon travel time.Although the platoon formation can reduce network travel times, travelers in the platoon leaders face longer unexpected delays. This led to a long platoon travel time (27 minutes) of travelers in the leaders, similar to nonplatoon scenarios where high congestion is present.Moreover, we found that the formation of platoons cannot improve network travel time in the low-demand scenario. For example, the 90% quantile (network) travel time is found at around 13 minutes and is not reduced by the formation of platoons when the demand level is below 60% (see Table9). This suggests that platoon driving has no effect on traffic when demand is low but only delays travelers in the platoon vehicles.Table 9 The 90% quantile (network) travel time at different demand levels. Demand levels100%80%60%40%The number of travel requests (trips)27452219621641710980Avg. 
fleet size per service point17013610268IndicatorThe 90% quantile (network) travel times (minutes)Nonplatoon scenario70.0531.5215.1313.49(T2, V2)59.8628.3414.1013.50(T4, V4)51.1226.3713.9513.17(T6, V6)44.2020.7113.8113.38(T8, V8)43.7019.6713.8913.41 ## 5.1.1. Platoon Delays of Travelers in Platoon Vehicles We analyze the system’s performance with the platoon formation in terms of the platoon delay of travelers in the platoon vehicles at different demand levels. As shown in Table6, demand for AMoD services (as input) is varied from 100% to 20% of the total private car trips in the study area. Fleet sizes at different demand levels in Table 6 are calculated based on the same scale factor as the decrease in the travel demand. For every demand level, platoon formation policies (T, V) (T stands for the time threshold and V for the platoon size threshold)) are defined. We simulate the scenarios with platoon formation policies (T2, V2), (T4, V4), (T6, V6), and (T8, V8), where T2 means the maximum waiting time is 2 minutes and V2 represents the maximum platoon size equals 2.Table 6 Average delay of platoon vehicles for different demand levels. Demand levels100%80%60%40%20%The number of travel requests (trips)274522196216417109805490Avg. fleet size per service point1701361026834Platoon scenariosAvg. delay of platoon vehicles (minutes)(T2, V2)0.660.660.660.660.66(T4, V4)2.302.302.382.512.75(T6, V6)3.233.223.293.503.67(T8, V8)3.673.673.874.014.62Simulation results in Table6 show that the increased values of two attributes (from (T2, V2) to (T8, V8)) for the platoon formation lengthen the platoon delays of travelers in platoon vehicles. Under the platoon formation policies (T8, V8), the platoon delay of travelers inside platoon vehicles is about 3.67 minutes, which is more than five times the platoon delay of travelers under the policy (T2, V2). Results suggest that the formation of platoons can cause long unexpected delays for travelers in the platoon vehicles.Moreover, results suggest that the delay of travelers in platoon vehicles tends to increase as the demand level decreases. For example, the delay of travelers in platoon vehicles increases by 25% when demand falls from 100% to 20% of the total private car trips under the formation policy (T8, V8). Few travelers requesting AMoD services cause more delays for the travelers in platoon vehicles, while a relatively large number of AMoD users lead to smaller platoon delays. There tends to be an inverse relationship between the demand level and platoon delays.In order to look into the platoon delay encountered by travelers in more detail, the delay of travelers with platoon leaders are presented in Table7. Results indicate that the delays experienced by the travelers with platoon leaders are approximately twice that of other platoon vehicles with the formation policy (V8, T8). That is, travelers who are with platoon leaders have to wait longer than travelers in other vehicles of the platoon. The platoon formation has considerably more impact on the level of service provided to travelers with the platoon leaders. Since vehicles in the formed platoon are arranged in order of arrival, the platoon leader arrived early at the service point and waited the longest for the other vehicles to form a platoon. The platoon delays are getting smaller and smaller for the followers that arrive later.Table 7 Platoon delays for platoon leaders and platoon vehicles under different operating policies. 
## 5.2. Energy Consumption Analysis with the Platoon Formation

We evaluate the impact of forming platoons on the system-wide energy consumption for different vehicle types. Results in Figure 7 indicate that the formation of platoons can reduce the total energy consumed by all vehicles in the AMoD system. The greatest reduction of total energy consumption ranges from 0.42% for the Kia Soul Electric 2015 to 9.56% for the Ford Focus Electric 2013.
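The per-vehicle-type energy model described in this section (a regression of energy consumption against travel speed, derived from dynamometer tests) can be sketched as follows. The functional form and the coefficients are illustrative assumptions, not the fitted models used in the study; the point is only to show how scenario-level savings follow from network speed improvements.

```python
# Minimal sketch: speed-dependent energy consumption per vehicle type and
# system-wide savings between a nonplatoon and a platoon scenario.
# The 1/v regression form and the coefficients below are illustrative assumptions.

def energy_per_km(speed_kmh: float, coeffs: tuple) -> float:
    """Energy use (kWh/km) as a regression on travel speed; a steeper curve
    benefits more from the speed increase that platoon formation brings."""
    a, b, c = coeffs
    return a + b / speed_kmh + c / speed_kmh ** 2

VEHICLE_MODELS = {                 # hypothetical coefficients per car model
    "Ford Focus Electric 2013": (0.12, 1.8, 6.0),
    "Kia Soul Electric 2015": (0.14, 0.6, 1.0),
}

def scenario_energy(trips, coeffs):
    """Total energy for a list of (distance_km, avg_speed_kmh) trips."""
    return sum(d * energy_per_km(v, coeffs) for d, v in trips)

def saving_percent(trips_nonplatoon, trips_platoon, model):
    base = scenario_energy(trips_nonplatoon, VEHICLE_MODELS[model])
    plat = scenario_energy(trips_platoon, VEHICLE_MODELS[model])
    return 100.0 * (base - plat) / base

# A vehicle type with a steep energy-speed gradient (large b, c) shows a larger
# percentage saving for the same improvement in network speeds.
```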
Moreover, more savings are achieved when the time threshold (T) and the platoon size threshold (V) for platoon release are increased. The reason is that more vehicles are coordinated into platoons, which results in more vehicles driving in platoons. Less congestion occurs when more platoon vehicles circulate across the transportation network, indicating improvements in traffic efficiency. Therefore, more energy can be saved when platoons are formed.

Figure 7 Total energy savings of AMoD systems for different types of electric vehicles (T represents the time threshold of platoon leaders, and V is the maximum number of platoon vehicles).

Results in Figure 7 show that energy savings differ across vehicle types when applying the same formation policy. The maximum saving of up to 9.56% is achieved for the Ford Focus Electric 2013 under the (T8, V8) formation policy, while the Kia Soul Electric 2015 has the lowest energy saving of 0.42%. This is because differences in the vehicles' energy consumption characteristics lead to different energy savings. The energy consumption model contains a set of regression models corresponding to the different vehicle types. Each regression model, derived from laboratory dynamometer tests, calculates energy consumption as a function of travel speed. In urban driving, vehicles consume more energy at lower speeds, and the energy consumption of an individual vehicle declines as its speed increases. Thus, vehicles consume less energy per unit distance traveled as the travel speed increases. However, the modeled energy performance differs between car types. The vehicle type with the steepest gradient of the modeled energy consumption-speed function sees the biggest reduction in energy consumption for the same increase in vehicle speed. The Ford Focus Electric 2013 has the steepest energy consumption-speed function; therefore, when vehicle speeds increase, this vehicle type shows the largest reduction in energy consumption. The energy saving of the Kia Soul Electric 2015, which has the least steep gradient of the energy consumption function, ranks at the bottom.

We find that the degree of energy savings strongly depends on the vehicle type as well as the platoon formation policy. Coordinating more vehicles in platoons can significantly improve the energy efficiency for some vehicle types. However, the improvement in energy efficiency for certain vehicle types is relatively small because of their energy consumption characteristics.

## 6. Conclusions and Recommendations

### 6.1. Main Conclusions

The formation of platoons in the urban AMoD system is more complicated than on highways because of the urban road network characteristics (narrow streets and multiple road segments between locations), platoon formation locations and policies, and the interaction between AMoD service users and SAEVs. The goal of this study is not to develop a very sophisticated method but to show through agent-based simulations how the formation of platoons in AMoD systems affects people's travel and system-wide energy consumption.

Shared AVs could lead to more traffic and longer travel times due to the additional zero-occupancy movements.
In the scenario where SAEVs replace all morning urban commuter trips (100% demand) made by private cars in the case-study city, without the formation of platoons, a high network congestion level of up to 53.28% is observed.

However, the network travel times and congestion levels improve with the formation of platoons. For example, a congestion level of 11% can be achieved under the policy (T8, V8). That is, for 30 minutes of travel time, 3.3 minutes of additional time must be spent during the rush hours. The extra time spent is far smaller than the time spent either in the nonplatoon situation where SAEVs replace the private car trips or in the current situation where private cars are used. In the first situation, travelers spend an extra 15.98 minutes with a 53.28% congestion level. In the second situation, an additional 10 minutes is spent in the case-study city (https://www.tomtom.com/en_gb/traffic-index/). With the formation of platoons, travelers are more likely to reach their destination on time or early thanks to the improvement in the network travel times.

We also find that the 90% quantile travel times are significantly reduced by the formation of platoons. This suggests that the network travel times are improved without causing extremely long travel times when platoons are formed, even though additional (zero-occupancy) movements are generated in AMoD systems.

Simulation results demonstrate that the total number of vehicles circulating in the transportation network is reduced by the formation of platoons, which could lead to improved network travel time and reliability. Furthermore, the improved network travel time and reliability could improve the quality of time spent in the vehicles across the transportation network. In this respect, the platoon formation could improve the quality of services offered to all service users (in platoons and not in platoons) when they travel on the transportation network.

On average, the platoon travel time, including the platoon delay and the network travel time, is less than the network travel time in nonplatoon scenarios where all morning commuters use the AMoD service. This implies that travelers in the platoon vehicles could reach their destination faster even if they experience unexpected delays in the formation of platoons, suggesting improved service levels. In this respect, the benefits from network travel time savings may outweigh the cost associated with the platoon delays. Travelers may opt for the AMoD service in response to the service improvements brought by the formation of platoons.

To be specific, we find that travelers in the platoon leaders experience longer platoon travel times due to longer unexpected platoon delays. In this regard, AMoD service users (morning commuters who were previously driving private cars) in the platoon leaders are provided with a low level of service. Travelers in the platoon leaders may be reluctant to use AMoD services.

We find an inverse relationship between platoon delays and demand levels. The platoon delays encountered by travelers in platoon vehicles are small in a high-demand scenario. This implies that forming platoons when the market penetration rate of AMoD services is high leads to lower platoon delays. In contrast, travelers face long unexpected platoon delays when there are fewer AMoD service users. In the former case, the network travel time savings can offset the platoon delays travelers encounter in the platoon vehicles.
Consequently, travelers in platoon vehicles have shorter platoon travel times (total travel times of travelers in the platoon vehicles). In the latter case, no congestion occurs in the transportation network when few travelers request services (this may happen during off-peak hours); coordinating vehicles into platoons only causes unexpected delays for travelers in the platoon vehicles. Forming platoons when demand is low (e.g., below 60% demand) only causes delays for travelers in the platoon vehicles, suggesting a lower level of service. As a result, travelers may not be willing to use the AMoD service. Therefore, a high penetration rate of the AMoD service is expected before coordinating vehicles into platoons, so that the service users in such vehicles benefit in future AMoD systems.

An important finding is that the improvement in traffic efficiency leads to system-wide energy savings. Forming platoons in AMoD systems can save about 9.56% of the system-wide energy consumption for the car model with the greatest savings (the Ford Focus Electric 2013) in urban areas. However, energy savings strongly depend on the vehicles' energy consumption characteristics and the platoon formation policies used. Demand for AMoD services and operating policies for forming platoons are important variables of interest for obtaining travel and energy benefits from platoon driving. Effective platoon formation strategies need to be developed for different car models to obtain a favorable effect on system-wide energy consumption.

At the city scale, the formation of platoons enabled by vehicle automation could reduce travel times and unreliability in the modeled urban road network. The improvement in travel times and reliability may in turn influence urban commuters' choices of residence. It can be inferred that automated mobility systems may exacerbate urban sprawl, leading to rapid urban expansion. Moreover, platoon operations effectively reduce energy consumption in urban mobility systems. While energy consumption is reduced, emissions reductions could also be achieved through the formation of platoons. Thus, platoon operations could bring benefits to operators with regard to energy savings and to society in terms of emissions reductions.

The findings of this study contribute to the growing body of literature on shared AV fleets by quantifying the impact of innovative platoon formation operations on AV energy consumption as well as people's travel. We shed light on the energy aspect of platoons in urban AMoD systems to complement the existing studies on the fuel consumption of platoons on highways.

### 6.2. Recommendations for Policy and Future Research

The findings of this paper raise challenges for policy and for research. The findings suggest that the formation of platoons in AMoD systems can reduce system-wide energy consumption. Platoon operations can be considered an effective energy-saving and decarbonization strategy to achieve the government's energy and environmental goals. Moreover, it is recommended that policymakers and transport operators consider the vehicles' energy consumption characteristics in conjunction with platoon formation policies to develop effective energy-saving platoon strategies in future AMoD systems.

Developing platoon formation strategies over urban road networks aimed at improving traffic efficiency, and thereby reducing travel times, is recommended.
However, we find that the magnitude of demand for AMoD services could influence the users' travel times and quality of time. Therefore, the magnitude of demand needs to be considered when deciding whether to coordinate vehicles in platoons. For example, forming platoons below 60% demand over the urban road network only causes unexpected delays. Travelers are reluctant to use the AMoD service due to the long unexpected platoon delays. In this regard, we recommend not forming platoons in an uncongested network with fewer road users (e.g., below 60% demand in the study area, which is the case during off-peak hours). At the same time, vehicles can be coordinated in platoons when congestion occurs to reap the benefits of improved travel times and energy efficiency.

Furthermore, travelers, especially those who travel in the platoon leaders, may not be willing to use the AMoD service due to the long unexpected delay and long travel time. For policymakers and transport operators, careful consideration is required to reward the travelers who suffer long unexpected delays in the formation of platoons, through which the system's benefit from energy savings can be distributed. Further research efforts are required to develop mechanisms for distributing the energy benefits, in order to incentivize engagement and make the system more sustainable, efficient, and equitable.

The modeling framework presented here still has some limitations that could be improved in future research. Relocation capability is not developed and implemented in the model. Relocation operations in anticipation of future demand can mitigate the imbalance between vehicle supply and travel demand. Relocating platooned vehicles in urban driving conditions can be further investigated.

The traffic simulation model can estimate the traffic impact of forming platoons using mesoscopic operating characteristics. It meets the design requirements of determining time-dependent link flows and route travel times according to the relationship established between road capacity and the formed platoons. Hence, the traffic simulation model allows testing different strategies for forming platoons at the network level. However, the mesoscopic model applied to single-lane urban scenarios cannot capture microscopic traffic behavior such as accelerating, overtaking, lane-changing, and traffic behavior at intersections. Moreover, the relationship established between formed platoons and road capacity is only meant for the capacity of a single lane in each direction according to the platoon characteristics. This is acceptable for urban driving conditions in most (European) cities with narrow streets (one lane in each direction). However, the traffic simulation component cannot model mixed traffic conditions under multiple-lane scenarios. Operational capacities in multilane scenarios depend on lane policies to distribute platoon vehicles. Modeling multiple-lane capacity with the formation of platoons remains an unsolved challenge in the literature.
---

*Source: 1005979-2022-07-21.xml*
2022
# Simulation Analysis of the Effect of Pile Spacing on the Compressive Load-Bearing Performance of CEP Double Piles

**Authors:** Yongmei Qian; Xun Li; Lishuang Ai; Yaling Jiang; Yu Dong
**Journal:** Advances in Civil Engineering (2023)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2023/1005985

---

## Abstract

Under the action of vertical pressure, the variation of pile spacing affects the bearing performance of concrete expanded-plate (CEP) double piles and the damage state of the soil around the piles. In this study, finite element simulation analysis is performed with ANSYS software: six sets of semisectional double-pile models with different pile spacings and one set of semisectional monopile models with the same specifications are established, and vertical pressure is applied to the model piles. The mapping method is used for pile-soil model meshing, the model adopts rigid-flexible contact, and the simulation applies surface loads step-by-step. The displacement distributions, load-displacement curves, stress clouds, and shear stress curves are collated to obtain the displacement and stress change laws of the pile and soil under different pile spacings, and thus the effect of changes in pile spacing on the CEP double pile is determined. Meanwhile, the CEP double-pile model is compared with the monopile model to determine their similarities and differences. Finally, a reasonable range of pile spacing for CEP double piles is provided. This further improves the research theory of the compressive load-bearing capacity of CEP group piles and provides a theoretical basis for their design and application in practical engineering.

---

## Body

## 1. Introduction

Concrete expanded-plate (CEP) piles have been widely used in practical engineering because of their high economic efficiency, good load-bearing performance, and small and uniform settlement [1]. Compared with straight-hole piles, CEP piles add bearing discs along the pile shaft; the position and parameters of the bearing discs can be set flexibly, which changes the pile bearing condition and increases the contact area between the pile and the soil, significantly improving the pile bearing capacity and stability. Judging from the current state of domestic and foreign research, the study of CEP monopiles has become well developed [2–4], and many factors influencing the load-bearing performance of CEP monopiles, as well as the underlying mechanisms, have been studied in depth [5]. Research on CEP group piles, however, is still at a preliminary stage, even though most piles in practical engineering appear in the form of group piles [6, 7]. CEP piles are gradually being used in high-rise buildings, bridges, deep-sea platforms, and other projects, where many factors, such as the environment and geological conditions, influence the structures, and the requirements for their bearing performance are gradually increasing [8, 9]. Therefore, an in-depth study of the load-bearing performance [10, 11] of CEP group piles must be conducted to meet the needs of actual projects. Under the action of vertical pressure, neighboring piles interact with one another when the pile spacing is small, and the change in pile spacing affects the damage state and bearing performance of the soil around the CEP group pile.
This study takes the CEP double pile as the research object and the pile spacing as the single variable; ANSYS finite element software is used to establish six groups of semisectional double-pile models with different pile spacings and one group of semisectional single-pile models with the same specifications [10]. The displacement and stress change laws of the CEP double pile and the soil body under vertical pressure are obtained through collated analysis, and the trends in the damage state and bearing capacity [12–14] of the soil around the CEP double pile are derived. The similarities and differences with the monopile model are also determined by comparison [15]. Finally, the pile spacing design principle that governs the compressive bearing performance of CEP double piles is given, providing a theoretical basis for further improving the design of CEP group pile bearing capacity under vertical load.

## 2. ANSYS Finite Element Modeling

### 2.1. Constitutive Model and Material Parameters

Due to the complex force situation in the soil in the actual project, the Duncan–Chang model, a nonlinear elastic model that satisfies the DP yield criterion, is chosen for the soil. The elastic-plastic model can reflect the main force characteristics of concrete and is used as the constitutive model of the CEP double pile. The interaction between the pile and the soil strongly influences the structure's stress and displacement. The finite element method is used, which can consider the nonlinear stress-strain relationship on the interaction contact surface and improve the accuracy of the calculation results. The hyperbolic model is used for the contact surface.

In the ANSYS finite element simulation and analysis, according to the preliminary experimental study and simulation analysis of CEP monopiles, the CEP double pile is made of C30 concrete and the soil is powdered clay according to the site investigation report [16], to ensure that the simulation matches the actual conditions [17]. The specific material parameters are shown in Table 1.

Table 1 Material parameters.

| Material | Density (t/mm³) | Modulus of elasticity (MPa) | Poisson's ratio | Cohesion (MPa) | Friction angle (°) | Expansion angle (°) | Pile-soil friction coefficient |
|---|---|---|---|---|---|---|---|
| Flexible pile | 2.25 × 10⁻⁹ | 3.465 × 10⁴ | 0.2 | – | – | – | 0.3 |
| Powdered clay | 1.688 × 10⁻⁹ | 40 | 0.35 | 0.04355 | 10.7 | 10.7 | – |

### 2.2. Model Determination and Model Dimensions

For a more intuitive observation of displacement distributions, stress clouds, and other related data [18], the model piles used in this simulation are semisectional piles because they are symmetrical structures. To ensure the feasibility and convenience of applying the research results to actual projects, the dimensions of the established model pile are kept consistent with the actual project, and the modeling is completed at a 1 : 1 scale. The pile dimensions are set as follows: pile length L = 9200 mm, pile diameter d = 500 mm, disc height 932 mm, disc overhang diameter R = 828 mm, upper slope angle of the disc α = 35°, and lower slope angle of the disc β = 20°. The model pile specifications are shown in Figure 1. In this ANSYS simulation, seven groups of model piles were set up (six groups of CEP double-pile models and one group of single-pile models as a comparison), and all piles have the same specifications. The first six groups take the pile spacing as the single variable. The piles are numbered MS1–MS6, and the pile spacings are 2984, 3398, 3812, 4226, 5054, and 5882 mm in order, corresponding to disc-end clear spacings of 1, 1.5, 2, 2.5, 3.5, and 4.5 times the disc overhang diameter, respectively. The CEP monopile model of the same specification is set up for comparison and numbered MD.

Figure 1 Schematic of the double pile model.
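To make the relationship between the tabulated spacings and the disc geometry explicit, the short sketch below reproduces the six center-to-center pile spacings from the model dimensions given above (pile diameter 500 mm, disc overhang diameter 828 mm). The formula is inferred from the stated values, not quoted from the paper.

```python
# Reproduce the MS1-MS6 center-to-center pile spacings from the disc geometry
# in Section 2.2: spacing = full disc diameter + (clear spacing multiple) * R.

d = 500                              # pile diameter (mm)
R = 828                              # disc overhang diameter (mm)
disc_diameter = d + 2 * R            # 2156 mm across the bearing disc

multiples = [1, 1.5, 2, 2.5, 3.5, 4.5]            # disc-end clear spacing / R
spacings = [int(disc_diameter + k * R) for k in multiples]

print(spacings)                      # [2984, 3398, 3812, 4226, 5054, 5882]
```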
### 2.3. Double Pile Modeling

#### 2.3.1. Pile-Soil Model Establishment

The MS1 model pile is taken as an example. Key points are established on the basis of the aforementioned dimensional data, and the connected key points are rotated 180° to obtain the semisectional model pile. To prevent the simulation results from being affected by an overly small soil domain, the soil size is designed to be 12000 mm × 10000 mm × 8000 mm, and the model pile is merged with the soil by a Boolean operation after the pile-shaped volume is cut out of the soil. The pile-soil model is shown in Figure 2. For the element settings, the double-pile body uses SOLID65 elements and the soil body uses SOLID45 elements so that the double pile and the soil body conform to the actual situation.

Figure 2 Establishing the pile-soil model.

#### 2.3.2. Mesh Delineation and Contact Surface Delineation

This simulation uses the mapping method [19], which generates a more regular grid and significantly improves the calculation speed; the data obtained with this method are more consistent with actual engineering. Figure 3 shows the model after mesh division. The model adopts rigid-flexible contact, setting the CEP double pile as the rigid body and the soil around the pile as the flexible body. To match the actual situation, the pile and soil are set as surface-to-surface contact: the outer surface of the pile body is the rigid target surface, defined with TARGE170 elements, whereas the surface of the soil in contact with the pile is the contact surface, defined with CONTA173 elements.

Figure 3 Effect of mesh division.

#### 2.3.3. Setting Constraints and Applying Loads

To ensure that the finite element simulation is consistent with the actual project and to prevent the pile-soil model from moving under excessive vertical load, constraints are set on the degrees of freedom in each direction of the pile-soil model.

The simulation analysis applies a surface load step-by-step. To facilitate comparison with the actual project, the concentrated load is transformed equivalently into a surface load, and each step increases the load by 200 kN. According to the conclusions of the preliminary research on the bearing capacity of the CEP monopile and the specification requirements [20], loading is stopped when the ANSYS solution no longer converges, and the load at that point is regarded as the ultimate load. The constraint conditions and the load application position are shown in Figure 4.

Figure 4 Constraint situation and load application position.
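The stepwise loading and the non-convergence stopping rule described above can be summarized by the following minimal sketch. The `solve_load_step` callable is a hypothetical stand-in for running one ANSYS load step and reporting convergence; it is not an actual ANSYS API call.

```python
# Minimal sketch of the loading procedure in Section 2.3.3: increase the
# surface load in 200 kN steps and take the last converged step as the
# ultimate load. `solve_load_step` is a hypothetical placeholder.

LOAD_STEP_KN = 200

def find_ultimate_load(solve_load_step, max_load_kn=10000):
    load = 0
    ultimate = None
    while load < max_load_kn:
        load += LOAD_STEP_KN
        converged = solve_load_step(load)   # True if the solution converged
        if not converged:
            break                           # stop at non-convergence
        ultimate = load                     # last converged load level
    return ultimate

# Example with a dummy solver that stops converging above 2600 kN,
# mimicking the MD monopile result reported in Section 3.2.
if __name__ == "__main__":
    print(find_ultimate_load(lambda p: p <= 2600))   # -> 2600
```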
## 3. Displacement Results Analysis

### 3.1. Comparison Analysis of Displacement Distributions of Each Group of Model Piles

In the ANSYS simulation analysis, the displacement distribution diagram of each group of double piles is extracted when it reaches the ultimate compressive bearing capacity (Figure 5), and the displacement distribution diagram of the MD model pile is extracted when it reaches the ultimate compressive bearing capacity (Figure 6).

Figure 5 Displacement distributions of each group of model piles under ultimate load. (a) MS1. (b) MS2. (c) MS3. (d) MS4. (e) MS5. (f) MS6.

Figure 6 MD displacement distribution map.

Figure 5 shows that when the CEP double-pile model is subjected to the ultimate load, the overall trend of the displacement distributions of each group of model piles is basically similar; that is, the CEP double-pile model produces different degrees of slip misalignment of the soil below the bearing disc under vertical pressure, and the damage to the soil around the pile is mainly concentrated in the soil immediately below the bearing disc (green area in the figure). In Figures 5(a)–5(c), the pile spacing is small, the displacement influence ranges generated by the double-pile model overlap to a certain extent (green area), and the joint action of the double piles leads to a larger downward displacement of the soil between the piles. Figure 5(d) illustrates that when the pile spacing is 4226 mm, that is, the disc-end spacing is 2.5 times the disc overhang diameter, the areas with larger displacement below the bearing disc of each single pile no longer overlap, and the overall displacement produced by the soil between the piles becomes smaller. Thus, at this pile spacing, the double-pile effect is weakened, and the compressive bearing capacity of the CEP double pile is improved.
In Figures 5(e) and 5(f), when the pile spacing increases to a certain degree, the displacement that the two bearing discs impose on the soil between the piles is smaller and overlaps only at the periphery, whereas the soil below each bearing disc still produces a larger displacement, and the double-pile effect is weakened further. From Figure 6, we find that when the CEP monopile is subjected to the ultimate load, pile-soil separation occurs at the disc, the bearing disc plays the main role, and the larger displacement area occurs in the soil below the bearing disc. As the pile spacing of the CEP double pile increases, each pile under pressure behaves more and more like a CEP monopile: the double-pile effect gradually weakens, and the range over which the soil between the piles interacts continuously declines, thereby reducing the overall displacement generated by the soil between the piles and gradually improving the compressive bearing capacity.

### 3.2. Load-Displacement Curve Analysis

The vertical pressure is applied step-by-step, the pile top displacement data under each load level are extracted and collated, and the load-displacement curves are drawn, as shown in Figure 7(a).

(1) Figure 7(a) shows that the development trend of each curve is similar: the pile top displacement and the slope of the curve increase continuously with the load; that is, the increment of displacement gradually rises because, under the vertical pressure on the CEP double pile, the bearing disc squeezes the soil under the disc and causes the soil to slip. Furthermore, the compressive bearing capacity of the soil gradually decreases and is no longer enough to resist the vertical pressure, so the rate of displacement change accelerates.

(2) At the early stage of loading, when the vertical pressure is 200–1000 kN, the six curves almost overlap; that is, the displacement changes are the same. At this time, the soil around the pile is not damaged, and the pile spacing has little influence on the bearing performance of the CEP double pile. When the vertical pressure exceeds 1000 kN, the displacement variation gradually decreases with increasing pile spacing. For example, when the vertical pressure ranges from 1000 to 4000 kN, the displacement variations of the MS1–MS6 piles are 94.31, 87.42, 83.74, 81.57, 77.58, and 76.55 mm. The displacement variations of the MS5 and MS6 groups with the largest pile spacings almost overlap. This phenomenon occurs because, as the vertical pressure increases continuously, the overlapping range of pile-soil interaction decreases, the double-pile effect weakens, and the compressive bearing capacity of the double-pile foundation increases, which makes the displacement variation of the CEP double piles with larger pile spacing smaller.

(3) As can be seen from Figure 7, when the vertical pressure is greater than 1000 kN and the CEP double piles are subjected to the same vertical pressure, the pile top displacement gradually decreases with increasing pile spacing. For example, when the vertical pressure is 4000 kN, the pile top displacements of MS1–MS6 are 106.51, 97.96, 91.42, 88.87, 85.27, and 84.12 mm.
The differences among the MS1, MS2, MS3, and MS4 pile top displacements are large, whereas the difference between the MS5 and MS6 pile top displacements is small, indicating that as the pile spacing increases, the double-pile effect is weakened, the compressive bearing capacity of the CEP double pile is gradually improved, and the pile top displacement is gradually reduced. When the disc-end spacing of the CEP double pile is greater than 2.5 times the disc overhang diameter, the double-pile effect on the compressive bearing capacity of the CEP double pile is small. Therefore, in practice, to reduce the influence of the double-pile effect on the compressive bearing capacity of the CEP double pile, the spacing between the disc ends of the CEP double pile should be kept greater than 2.5 times the disc overhang diameter.

Figure 7 (a) Load-displacement curves. (b) Comparison of the CEP double pile and single pile load-displacement curves.

To further investigate the similarities and differences between CEP double piles and the CEP monopile, the test data of the MD monopile model were compared with the MS2, MS4, and MS6 double-pile models, and the load-displacement curves are drawn in Figure 7(b).

Figure 7(b) shows that when the vertical pressure ranges from 200 to 1000 kN, the change in the displacement of the monopile model is small, and the curves of the three sets of double-pile models are gentle and almost coincide, indicating that the CEP double piles do not influence each other at this stage and that the compressive bearing capacities of the monopile and the double piles do not differ. When the vertical pressure exceeds 1000 kN, the pile top displacement of the CEP monopile increases sharply with the load; this point is the obvious inflection point of the MD monopile load-displacement curve, and a corresponding inflection point appears in the double-pile curves when the vertical pressure reaches 2000 kN. Beyond the inflection point, the originally gentle growth of displacement turns into a sharp increase. Finally, each curve reaches its last data point; that is, the ultimate compressive bearing capacities of the MD monopile and the MS2, MS4, and MS6 double piles are 2600, 4200, 4600, and 4800 kN, respectively. The ultimate compressive bearing capacities of the three groups of double piles are 1.62, 1.77, and 1.85 times that of the monopile, from which we can learn that the ultimate compressive load-bearing capacity of CEP double piles increases with the pile spacing, and the larger the pile spacing of the double piles, the closer the ultimate compressive load-bearing capacity is to twice that of the single pile. However, the ultimate compressive load-bearing capacity of the MS6 CEP double pile did not reach twice that of the single pile because of the mutual influence of the double piles. Therefore, when studying the compressive load-bearing capacity of CEP double piles, the influence of pile spacing and the double-pile effect should be fully considered.
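The capacity ratios quoted above follow directly from the reported ultimate loads. A minimal sketch (the names and the "group efficiency" framing are ours, used only for illustration) that reproduces the double-pile-to-monopile ratios:

```python
# Reproduce the capacity ratios in Section 3.2 from the ultimate compressive
# bearing capacities (kN). "Group efficiency" here means the double-pile
# capacity divided by twice the monopile capacity.

ultimate_kn = {"MD": 2600, "MS2": 4200, "MS4": 4600, "MS6": 4800}

for name in ("MS2", "MS4", "MS6"):
    ratio = ultimate_kn[name] / ultimate_kn["MD"]              # times the monopile
    efficiency = ultimate_kn[name] / (2 * ultimate_kn["MD"])   # vs. two isolated piles
    print(f"{name}: {ratio:.2f} x monopile, group efficiency {efficiency:.2f}")

# Output: MS2: 1.62 x monopile, group efficiency 0.81
#         MS4: 1.77 x monopile, group efficiency 0.88
#         MS6: 1.85 x monopile, group efficiency 0.92
```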
## 4. Stress Cloud Analysis

To examine the effect of pile spacing on the stress in CEP double piles, the Y-directional stress clouds of the model piles and the soil were extracted for the six groups with different pile spacings (Figure 8) and compared with the Y-directional stress clouds of the CEP monopile (MD model) and its surrounding soil (Figure 9).

Figure 8 Y-directional stress clouds of each model pile and soil body (unit: MPa). (a) MS1. (b) MS2. (c) MS3. (d) MS4. (e) MS5. (f) MS6.

Figure 9 MD model pile and soil stress cloud.

Figure 8 shows that, under the same vertical pressure, the Y-directional stress distributions of the double piles with different pile spacings follow the same trend: the stress above the bearing disc is higher and the stress below the bearing disc is lower. Comparison with Figure 9 shows that the stress distribution of the double piles is similar to that of the single pile, which means that the stress distribution within the piles themselves is not affected by the pile spacing; in all cases the bearing disc plays an important role in carrying the vertical pressure. Under the vertical pressure on the double pile, the Y-directional stress distribution of the soil at the pile tip is almost the same for every model pile. The bearing disc separates from the soil above it, and the Y-directional stress of that soil tends to zero. The Y-directional stress distributions of the soil above the bearing disc and at the pile tip of the double pile are the same as those of the single pile, whereas the Y-directional stress zone of the soil under the disc (green area) is larger. In the MS1 group, the Y-stress zones of the soil clearly overlap because the small pile spacing produces strong soil interaction between the two piles and a large double-pile effect; from the MS4 group to the MS6 group, the Y-stress zones separate as the pile spacing increases, the soil interaction between the piles gradually decreases, and the Y-directional stress distribution of the soil under the bearing disc approaches that of the single pile. Thus, as the pile spacing increases, the stress influence of the bearing disc on the soil between the piles and the double-pile effect weaken, and the compressive bearing capacity gradually improves.

## 5. Shear Stress Curve Analysis

To analyze the change in the shear stress of the soil on the left and right sides of the pile, the MS4 model pile under a vertical pressure of 1500 kN is taken as an example, and the shear stress at each node was extracted.
The XY-directional shear stress curves of the soil on the left and right sides of the model pile were then plotted (Figure 10) and compared with those of the CEP single pile. The corresponding points were extracted for the MD single-pile model under the same conditions, and its XY-directional shear stress curves are shown in Figure 11.

Figure 10 XY-direction shear stress curves of the soil on the left and right sides of the MS4 pile.

Figure 11 XY-direction shear stress curves of the soil on the left and right sides of the MD pile.

Figure 10 shows that under a vertical pressure of 1500 kN, the XY-directional shear stress curves of the soil on the left and right sides of the MS4 model pile are approximately symmetric, and the shear stresses in the soil along the pile shaft away from the bearing disc differ little. At the bearing disc, however, the shear stresses change abruptly, and the shear stress in the soil under the disc is larger than in the soil above it. This occurs because, with the pile under vertical pressure, the bearing disc separates from the soil above it, so only pile-side friction acts there, while the soil under the disc is squeezed and slips.

Comparison of Figures 10 and 11 shows that the XY-directional shear stress curves of the soil beside the CEP double pile and the CEP single pile are almost the same, but for the single pile the left and right curves are symmetric with equal shear stress values. For the CEP double pile, the shear stress of the soil on the right side is slightly smaller than on the left side: the maximum XY shear stress of the left-side soil is 0.053 MPa, whereas that of the right-side soil is 0.048 MPa. The reason is that the soil on the right side of the left pile of the MS4 group is damaged by the interaction of the two piles, which reduces the shear stress on the right side of the bearing disc. Unlike the single pile, the double-pile effect therefore has a measurable influence on the XY-directional shear stress of the soil on the two sides of the CEP double pile.

## 6. Conclusions

The finite element simulation analysis of the CEP double-pile models with different pile spacings under vertical pressure leads to the following conclusions:

(1) As the pile spacing of the CEP double-pile model increases, the displacement and stress influence range of the soil between the piles decreases. Each pile gradually approaches the compressive condition of a CEP monopile, and the compressive bearing capacity increases continuously.

(2) As the pile spacing keeps increasing, the displacement increment of the CEP double-pile model decreases continuously, and the pile-top displacement under the same load also decreases. This indicates that when the pile spacing increases, the double-pile effect weakens and the compressive bearing capacity improves.
Furthermore, when the disc-end spacing of the CEP double pile is greater than 2.5 times the disc overhang diameter, the double-pile effect has little influence on the compressive bearing capacity. Therefore, to reduce the influence of the double-pile effect in practical projects, the disc-end spacing of CEP double piles should be kept greater than 2.5 times the disc overhang diameter.

(3) The XY-direction shear stress curves of the soil on the left and right sides of the pile show that, for the single pile, the two curves are symmetric with equal shear stress values, whereas for the double pile the shear stress of the soil on the right side is slightly smaller than on the left side. The pile spacing therefore has some influence on the XY-direction shear stress of the soil on both sides of the pile, and the influence on the right-side soil is greater.

(4) The ultimate compressive bearing capacity of the CEP double-pile model increases with pile spacing; the larger the pile spacing, the closer the ultimate compressive bearing capacity approaches twice that of a single pile. However, even at a disc-end spacing of 4.5 times the disc overhang diameter, the ultimate capacity does not reach twice that of a single pile because of the mutual influence of the two piles. Therefore, when studying the compressive bearing capacity of CEP double piles, the effects of pile spacing and the double-pile effect should be fully considered.

(5) This simulation mainly analyzes the influence of pile spacing on the bearing performance of CEP double piles. Compared with previous studies of single-pile bearing performance, it shows that, within a certain range, the bearing capacity of a CEP double pile is not simply twice that of a CEP monopile: the interaction between the two piles reduces the capacity. When the double pile is under vertical pressure, the load is transferred through the pile shaft to the bearing disc; if the pile spacing is small, the soil below the bearing disc is squeezed, the soil above the disc cracks, and the soil between the piles interacts, so the CEP double pile carries not only the external load but also the action of this soil on the pile shaft. As the pile spacing increases, the soil interaction between the piles weakens and the compressive bearing performance improves. This study therefore lays a foundation for further research on the bearing performance of CEP double piles.

## 7. Outlook

(1) In this study, the influence of pile spacing on the compressive bearing capacity of CEP double piles under vertical pressure has been studied in depth through ANSYS finite element simulation. However, in actual engineering many other factors affect the compressive bearing capacity and damage state of CEP piles, such as the number of bearing discs, the bearing disc angle, and the bearing disc position. The next step is to consider the influence of these factors on the bearing capacity of CEP group piles and to improve the CEP pile research theory.

(2) The soil in the ANSYS finite element model is powdered clay.
Since CEP double piles can be used in many kinds of soil layers, and the soil properties also influence the compressive bearing capacity, future work should study a wider range of geological conditions to provide a reliable theoretical basis.

---

*Source: 1005985-2023-03-21.xml*
--- ## Abstract Under the action of vertical pressure, the variation of pile spacing affects the bearing performance of concrete expanded-plate (CEP) double pile and the damaged state of soil around the pile. In this study, finite element simulation analysis is performed by ANSYS software, and six sets of semisectional double pile models with different pile spacings and one set of semisectional monopile models with the same specifications are established after applying the vertical pressure to the model pile. The mapping method is used for the pile-soil model meshing, the model adopts the contact type of rigid and flexible bodies, and the simulation analysis adopts the way of applying surface load and loading step-by-step. The displacement distribution, load-displacement curve, stress cloud, and shear stress curve are collated to obtain the displacement and stress change law of pile-soil under different pile spacings and then determine the effect of the changes in pile spacing on the CEP double pile. Meanwhile, the CEP double pile model is compared with the monopile model to determine their similarities and differences. Finally, a reasonable range of pile spacing for CEP double piles is provided. It further improves the research theory of CEP group pile compressive load-bearing capacity and provides the theoretical basis for its design and application in practical engineering. --- ## Body ## 1. Introduction Concrete expanded-plate (CEP) piles have been widely used in practical engineering because of their high economic efficiency, good load-bearing performance, and small and uniform settlement [1]. Compared with straight-hole piles, CEP piles add bearing discs at the pile position, which can flexibly set the position and parameters of the bearing discs, change the pile bearing condition, and increase the contact area between the pile and the soil, significantly improving the pile bearing capacity and stability. From the current situation of domestic and foreign research, the study on CEP monopiles performed by domestic and foreign scholars has been perfected [2–4]. Many influencing factors of CEP monopiles on the load-bearing performance and the intrinsic mechanism have been studied in depth [5], whereas research on the CEP group piles is still in the primary stage; however, most of the piles in practical engineering appear in the form of group piles [6, 7], and CEP piles are gradually used in high-rise buildings, bridges, deep-sea platforms, and other projects, and many factors, such as environment and geological conditions, influence these construction facilities. The requirements for their bearing performance are gradually improved [8, 9]. Therefore, an in-depth study on the load-bearing performance [10, 11] of CEP group piles must be conducted to meet the needs of actual projects. Under the action of vertical pressure, the neighboring piles will interact with one another when the pile spacing is small, and the change in pile spacing affects the damage state and bearing performance of the soil around the CEP group pile. This study takes CEP double pile as the research object, the pile spacing as a single variable, and ANSYS finite element software is used to establish six groups of semisectional double pile models with different pile spacings and one group of semisectional single pile models with the same specifications [10]. 
The displacement and stress change law of the CEP double pile and soil body under the action of vertical pressure is obtained through collated analysis, and the damage state and bearing capacity [12–14] change trend of the soil body around the pile of CEP double pile are derived. Their similarities and differences are also determined by comparing them with the monopile model [15]. Finally, the best pile spacing design principle that affects the compressive bearing performance of CEP double piles is given to provide the theoretical basis for further improving the design of CEP group pile bearing capacity under vertical load. ## 2. ANSYS Finite Element Modeling ### 2.1. Constitutive Model and Material Parameters Due to the complex force situation in the soil in the actual project, the Duncan–Chang model in the nonlinear elastic model is chosen for the soil, which meets the DP yield criterion. The elastic-plastic model can reflect the main force characteristics of concrete and is used as the principal structure model of the CEP double pile. The interaction between the pile and the soil specifically influences the structure’s stress and displacement. The finite unit method is used, which can consider the nonlinear stress-strain relationship on the interaction contact surface and improve the accuracy of the calculation results. The hyperbolic model is used for the contact surface.In ANSYS finite element simulation and analysis, according to the preliminary experimental study and simulation analysis of CEP monopiles, the CEP double pile is made of C30 concrete material and the soil is made of powdered clay according to the site investigation report [16] to ensure that the simulation matches the actual elements [17]. The specific material parameters are shown in Table 1.Table 1 Material parameters. MaterialDensity (t/mm3)Modulus of elasticity (MPa)Poisson’s ratio cohesionCohesion (MPa)Friction angle (°)Expansion angle (°)Pile-soil friction coefficientFlexible pile2.25 × 10−93.465 × 1040.2———0.3Powdered clay1.688 × 10−9400.350.0435510.710.7 ### 2.2. Model Determination and Model Dimensions For a more intuitive observation of displacement distributions, stress clouds, and other related data [18], the model piles used in this simulation are semisectional piles because they are symmetrical structures. To ensure the feasibility and convenience of applying the research results of this thesis to the actual project, the dimensions of the established model pile should be consistent with the actual project, and the modeling is completed by using a 1 : 1 scale. The pile dimensions are set as follows: the pile length L = 9200 mm, the pile diameter d = 500 mm, the disk height is 932 mm, the disk overhang diameter R = 828 mm, the upper slope angle of the disk α = 35°, and the lower slope angle of the disk β = 20°. The model pile specifications are shown in Figure 1. In this ANSYS simulation, seven groups of model piles were set up (including six groups of CEP double pile model and one group of single pile model as a comparison), and all the piles have the same specifications. The first six groups took the pile spacing as a single variable. The piles are numbered MS1–MS6, and the pile spacing is 2984, 3398, 3812, 4226, 5054, and 5882 mm in order, which are 1, 1.5, 2, 2.5, 3.5, and 4.5 times the circling pick diameter, respectively. The CEP monopile model of the same specification is set for comparison and numbered as MD.Figure 1 Schematic of the double pile model. ### 2.3. Double Pile Modeling #### 2.3.1. 
Pile-Soil Model Establishment The MS1 model pile is taken as an example. Key points are established on the basis of the aforementioned dimensional data. MS1 is rotated 180° after the key points are connected to obtain a semisectional model pile. To prevent the simulation results from being affected by the small soil size, the soil size is designed to be 12000 mm × 10000 mm × 8000 mm, and the model pile is merged with the soil by Boolean after gouging the soil part of the model pile in the soil. The pile-soil model is shown in Figure2. For material parameter setting, the double-pile pile body is set to Solid 65 cells, and the soil body is set to Solid 45 cells to enable the double pile and the soil body to conform to the actual situation.Figure 2 Establishing the pile-soil model. #### 2.3.2. Mesh Delineation and Contact Surface Delineation This simulation uses the mapping method [19], which can generate a more regular grid and improve the calculation speed significantly. The test data obtained by using this method are more consistent with the actual engineering. Figure 3 shows the model after the mesh division. The model adopts the contact type of rigid and flexible bodies and sets the CEP double pile as a rigid body and the soil body around the pile as a flexible body. To match with the actual situation, the pile and soil bodies are set as face-to-face contact, and the outer surface of the pile body is set as a rigid surface, which is set as the target surface and defined by using target170 cell, whereas the contact surface of the soil body and the pile body is set as the contact surface and defined by using contact173 unit.Figure 3 Effect of mesh division. #### 2.3.3. Setting Constraints and Applying Loads To ensure that the finite element simulation is consistent with the actual project and to prevent the pile-soil model from moving due to excessive vertical load, constraints are set for the degrees-of-freedom in each direction of the pile-soil model.The simulation analysis adopts the way of applying surface load and loading step-by-step. To facilitate comparison with the actual project, the concentrated load is transformed equivalently into the surface load, and each step is increased by 200 kN. According to the conclusion of the preliminary research on the bearing capacity of CEP monopile and the specification requirements, we load until the ANSYS simulation analysis curve does not converge and then stop loading. According to the conclusion of the preliminary research on the bearing capacity of CEP monopile and the specification requirements [20], when the loading until the ANSYS simulation analysis curve does not converge, we stop loading, which is regarded as loading to the ultimate load at this time. The constraint case level load application position is shown in Figure 4.Figure 4 Constraint situation and load application position. ## 2.1. Constitutive Model and Material Parameters Due to the complex force situation in the soil in the actual project, the Duncan–Chang model in the nonlinear elastic model is chosen for the soil, which meets the DP yield criterion. The elastic-plastic model can reflect the main force characteristics of concrete and is used as the principal structure model of the CEP double pile. The interaction between the pile and the soil specifically influences the structure’s stress and displacement. 
The finite unit method is used, which can consider the nonlinear stress-strain relationship on the interaction contact surface and improve the accuracy of the calculation results. The hyperbolic model is used for the contact surface.In ANSYS finite element simulation and analysis, according to the preliminary experimental study and simulation analysis of CEP monopiles, the CEP double pile is made of C30 concrete material and the soil is made of powdered clay according to the site investigation report [16] to ensure that the simulation matches the actual elements [17]. The specific material parameters are shown in Table 1.Table 1 Material parameters. MaterialDensity (t/mm3)Modulus of elasticity (MPa)Poisson’s ratio cohesionCohesion (MPa)Friction angle (°)Expansion angle (°)Pile-soil friction coefficientFlexible pile2.25 × 10−93.465 × 1040.2———0.3Powdered clay1.688 × 10−9400.350.0435510.710.7 ## 2.2. Model Determination and Model Dimensions For a more intuitive observation of displacement distributions, stress clouds, and other related data [18], the model piles used in this simulation are semisectional piles because they are symmetrical structures. To ensure the feasibility and convenience of applying the research results of this thesis to the actual project, the dimensions of the established model pile should be consistent with the actual project, and the modeling is completed by using a 1 : 1 scale. The pile dimensions are set as follows: the pile length L = 9200 mm, the pile diameter d = 500 mm, the disk height is 932 mm, the disk overhang diameter R = 828 mm, the upper slope angle of the disk α = 35°, and the lower slope angle of the disk β = 20°. The model pile specifications are shown in Figure 1. In this ANSYS simulation, seven groups of model piles were set up (including six groups of CEP double pile model and one group of single pile model as a comparison), and all the piles have the same specifications. The first six groups took the pile spacing as a single variable. The piles are numbered MS1–MS6, and the pile spacing is 2984, 3398, 3812, 4226, 5054, and 5882 mm in order, which are 1, 1.5, 2, 2.5, 3.5, and 4.5 times the circling pick diameter, respectively. The CEP monopile model of the same specification is set for comparison and numbered as MD.Figure 1 Schematic of the double pile model. ## 2.3. Double Pile Modeling ### 2.3.1. Pile-Soil Model Establishment The MS1 model pile is taken as an example. Key points are established on the basis of the aforementioned dimensional data. MS1 is rotated 180° after the key points are connected to obtain a semisectional model pile. To prevent the simulation results from being affected by the small soil size, the soil size is designed to be 12000 mm × 10000 mm × 8000 mm, and the model pile is merged with the soil by Boolean after gouging the soil part of the model pile in the soil. The pile-soil model is shown in Figure2. For material parameter setting, the double-pile pile body is set to Solid 65 cells, and the soil body is set to Solid 45 cells to enable the double pile and the soil body to conform to the actual situation.Figure 2 Establishing the pile-soil model. ### 2.3.2. Mesh Delineation and Contact Surface Delineation This simulation uses the mapping method [19], which can generate a more regular grid and improve the calculation speed significantly. The test data obtained by using this method are more consistent with the actual engineering. Figure 3 shows the model after the mesh division. 
The model adopts the contact type of rigid and flexible bodies and sets the CEP double pile as a rigid body and the soil body around the pile as a flexible body. To match with the actual situation, the pile and soil bodies are set as face-to-face contact, and the outer surface of the pile body is set as a rigid surface, which is set as the target surface and defined by using target170 cell, whereas the contact surface of the soil body and the pile body is set as the contact surface and defined by using contact173 unit.Figure 3 Effect of mesh division. ### 2.3.3. Setting Constraints and Applying Loads To ensure that the finite element simulation is consistent with the actual project and to prevent the pile-soil model from moving due to excessive vertical load, constraints are set for the degrees-of-freedom in each direction of the pile-soil model.The simulation analysis adopts the way of applying surface load and loading step-by-step. To facilitate comparison with the actual project, the concentrated load is transformed equivalently into the surface load, and each step is increased by 200 kN. According to the conclusion of the preliminary research on the bearing capacity of CEP monopile and the specification requirements, we load until the ANSYS simulation analysis curve does not converge and then stop loading. According to the conclusion of the preliminary research on the bearing capacity of CEP monopile and the specification requirements [20], when the loading until the ANSYS simulation analysis curve does not converge, we stop loading, which is regarded as loading to the ultimate load at this time. The constraint case level load application position is shown in Figure 4.Figure 4 Constraint situation and load application position. ## 2.3.1. Pile-Soil Model Establishment The MS1 model pile is taken as an example. Key points are established on the basis of the aforementioned dimensional data. MS1 is rotated 180° after the key points are connected to obtain a semisectional model pile. To prevent the simulation results from being affected by the small soil size, the soil size is designed to be 12000 mm × 10000 mm × 8000 mm, and the model pile is merged with the soil by Boolean after gouging the soil part of the model pile in the soil. The pile-soil model is shown in Figure2. For material parameter setting, the double-pile pile body is set to Solid 65 cells, and the soil body is set to Solid 45 cells to enable the double pile and the soil body to conform to the actual situation.Figure 2 Establishing the pile-soil model. ## 2.3.2. Mesh Delineation and Contact Surface Delineation This simulation uses the mapping method [19], which can generate a more regular grid and improve the calculation speed significantly. The test data obtained by using this method are more consistent with the actual engineering. Figure 3 shows the model after the mesh division. The model adopts the contact type of rigid and flexible bodies and sets the CEP double pile as a rigid body and the soil body around the pile as a flexible body. To match with the actual situation, the pile and soil bodies are set as face-to-face contact, and the outer surface of the pile body is set as a rigid surface, which is set as the target surface and defined by using target170 cell, whereas the contact surface of the soil body and the pile body is set as the contact surface and defined by using contact173 unit.Figure 3 Effect of mesh division. ## 2.3.3. 
Setting Constraints and Applying Loads To ensure that the finite element simulation is consistent with the actual project and to prevent the pile-soil model from moving due to excessive vertical load, constraints are set for the degrees-of-freedom in each direction of the pile-soil model.The simulation analysis adopts the way of applying surface load and loading step-by-step. To facilitate comparison with the actual project, the concentrated load is transformed equivalently into the surface load, and each step is increased by 200 kN. According to the conclusion of the preliminary research on the bearing capacity of CEP monopile and the specification requirements, we load until the ANSYS simulation analysis curve does not converge and then stop loading. According to the conclusion of the preliminary research on the bearing capacity of CEP monopile and the specification requirements [20], when the loading until the ANSYS simulation analysis curve does not converge, we stop loading, which is regarded as loading to the ultimate load at this time. The constraint case level load application position is shown in Figure 4.Figure 4 Constraint situation and load application position. ## 3. Displacement Results Analysis ### 3.1. Comparison Analysis of Displacement Distributions of Each Group of Model Piles In the ANSYS simulation analysis, the displacement distribution diagram of each group of the double-pile is extracted when it reaches the ultimate compressive bearing capacity (Figure5). The displacement distribution diagram of the MD model pile is extracted when it reaches the ultimate compressive bearing capacity (Figure 6).Figure 5 Displacement distributions of each group of model piles under extreme load. (a) MS1. (b) MS2. (c) MS3. (d) MS4. (e) MS5. (f) MS6. (a)(b)(c)(d)(e)(f)Figure 6 MD displacement distribution map.Figure5 shows that when the CEP double pile model is subjected to ultimate load, the overall trend of the displacement distributions of each group of model piles is basically similar; that is, the CEP double pile model produces different degrees of slip misalignment of the soil below the bearing disc under vertical pressure, and the extent of damage to the soil around the pile is mainly concentrated in the soil close to the soil below the bearing disc (green area in the figure). In Figures 5(a)–5(c), the pile spacing is small, the displacement influence range generated by the double-pile model has a certain overlapping part (green area), and the joint action of the double piles leads to a larger displacement of the soil between the piles downward. Figure 5(d) illustrates that when the pile spacing is 4226 mm, the disk end spacing is 2.5 times the disk overhang diameter, the area with larger displacement below the bearing disk of each single pile no longer overlaps, and the overall displacement produced by the soil between piles becomes smaller. Thus, at this pile spacing, the double-pile effect is weakened, and the CEP double pile compressive bearing capacity is improved. In Figures 5(e) and 5(f), when the pile spacing increases to a certain degree, the displacement produced by the two bearing discs that exert effects on the soil body between the piles is smaller and only overlaps at the periphery, whereas the soil body below the bearing disc produces larger displacement, and the double-pile effect is weakened further. 
From Figure 6, we found that when the CEP monopile is subjected to ultimate load, the pile-soil separation occurs on the disk, the bearing disk plays the main role, and the larger displacement area occurs in the soil below the bearing disk. The CEP double pile increases with pile spacing. Each single pile is closer to the CEP single pile under pressure, the double-pile effect gradually weakens, and the range of soil body between piles interacting with each other continuously declines, thereby reducing the overall displacement generated by the soil body between piles and improving the compressive bearing capacity gradually. ### 3.2. Load-Displacement Curve Analysis The vertical pressure is loaded step-by-step, the pile top displacement data under each level of load are extracted and collated, and the load-displacement curve is drawn, as shown in Figure7(a).(1) Figure7(a) shows that the development trend of each curve is similar, the displacement of the top pile, and the slope of the curve increase continuously with load; that is, the amount of change in displacement gradually rises due to the extrusion of the bearing disc and the soil under the disc under the action of the vertical pressure of the CEP double-pile, which causes the soil to slip. Furthermore, the compressive bearing capacity of the soil gradually decreases, which is not enough to resist the vertical pressure, resulting in the change of displacement rate being accelerated.(2) At the early stage of loading, when the vertical pressure is 200–1000 kN, the six curves are almost in the overlapping state; that is, the displacement changes are the same. At this time, the soil around the pile is not damaged, and the pile spacing has less influence on the bearing performance of the CEP double pile. When the vertical pressure exceeds 1000 kN, the displacement variation gradually decreases with the increase in pile spacing. For example, when the vertical pressure ranges from 1000 to 4000 kN, the displacement variations of MS1–MS6 pile are 94.31, 87.42, 83.74, 81.57, 77.58, and 76.55 mm. The displacement variation of MS5 and MS6 groups with the largest pile spacing almost overlap. This phenomenon is due to the fact that when the vertical pressure increases continuously, the overlapping range of pile-soil interaction decreases, the double-pile effect weakens, and the compressive bearing capacity of the double-pile foundation increases, which makes the displacement variation of the CEP double piles with larger pile spacing smaller.(3) As can be seen from Figure7, when the vertical pressure is greater than 1000 kN and the CEP double piles are subjected to the same vertical pressure, the displacement of the pile top gradually decreases with the increase in pile spacing. For example, when the vertical pressure is 4000 kN, the displacements of MS1–MS6 pile top are 106.51, 97.96, 91.42, 88.87, 85.27, and 84.12 mm. The difference among MS1, MS2, MS3, and MS4 pile top displacement is large, whereas the difference between MS5 and MS6 pile top displacement is small, indicating that when the pile spacing increases, the double pile effect is weakened, the compressive bearing capacity of CEP double pile is gradually improved, and the pile top displacement is gradually reduced. When the spacing of the CEP double pile disc end is greater than 2.5 times the disc overhang diameter, the double pile effect on the compressive bearing capacity of CEP double pile is small. 
Therefore, in the actual process, to reduce the influence of the double pile effect on the compressive bearing capacity of CEP double pile, we must try to ensure that the spacing between the disc ends of CEP double pile is greater than 2.5 times the disc overhang diameter.Figure 7 (a) Load-displacement curve. (b) Comparison of the CEP double pile and single pile load-displacement curves. (a)(b)To further investigate the similarities and differences between CEP double piles and CEP monopile, the test data of the MD monopile model were compared with MS2, MS4, and MS6 double pile models, and the load-displacement curves are drawn in Figure7(b).Figure7(b) depicts that when the vertical pressure ranges from 200 to 1000 kN, the change in the displacement of the monopile model is small, and the curves of the three sets of double pile models are gentle and almost coincide, indicating that the CEP double piles do not have mutual influence at this time, and the compressive bearing capacity of monopile and double piles is not different. When the vertical pressure exceeds 1000 kN, the displacement of the top of the CEP monopile increases sharply with the load. After this point, the curve trend, which is called the inflection point of the MD monopile load-displacement curve, occurs obviously, and the inflection point also appears when the vertical pressure for both double piles is 2000 kN. The gentle decline of the original curve displacement transformed into a sharp decline. Finally, each curve reached the last data point; that is, the MD monopile, MS2, MS4, and MS6 double piles’ ultimate compressive bearing capacity are 2600, 4200, 4600, and 4800 kN, respectively. The three groups of double-pile ultimate compressive bearing capacity are 1.62, 1.77, and 1.85 times of the monopile, from which we can learn that the ultimate compressive load bearing capacity of CEP double piles increases with the pile spacing, and the larger the pile spacing of the double piles is, the closer the ultimate compressive load bearing capacity is to 2 times of that of the single piles. However, the ultimate compressive load bearing capacity of MS6 CEP double piles did not reach twice that of single piles because of the influence of double piles on each other. Therefore, when studying the compressive load bearing capacity of CEP double piles, the influence of pile spacing and double-pile effect should be fully considered. ## 3.1. Comparison Analysis of Displacement Distributions of Each Group of Model Piles In the ANSYS simulation analysis, the displacement distribution diagram of each group of the double-pile is extracted when it reaches the ultimate compressive bearing capacity (Figure5). The displacement distribution diagram of the MD model pile is extracted when it reaches the ultimate compressive bearing capacity (Figure 6).Figure 5 Displacement distributions of each group of model piles under extreme load. (a) MS1. (b) MS2. (c) MS3. (d) MS4. (e) MS5. (f) MS6. (a)(b)(c)(d)(e)(f)Figure 6 MD displacement distribution map.Figure5 shows that when the CEP double pile model is subjected to ultimate load, the overall trend of the displacement distributions of each group of model piles is basically similar; that is, the CEP double pile model produces different degrees of slip misalignment of the soil below the bearing disc under vertical pressure, and the extent of damage to the soil around the pile is mainly concentrated in the soil close to the soil below the bearing disc (green area in the figure). 
In Figures 5(a)–5(c), the pile spacing is small, the displacement influence range generated by the double-pile model has a certain overlapping part (green area), and the joint action of the double piles leads to a larger displacement of the soil between the piles downward. Figure 5(d) illustrates that when the pile spacing is 4226 mm, the disk end spacing is 2.5 times the disk overhang diameter, the area with larger displacement below the bearing disk of each single pile no longer overlaps, and the overall displacement produced by the soil between piles becomes smaller. Thus, at this pile spacing, the double-pile effect is weakened, and the CEP double pile compressive bearing capacity is improved. In Figures 5(e) and 5(f), when the pile spacing increases to a certain degree, the displacement produced by the two bearing discs that exert effects on the soil body between the piles is smaller and only overlaps at the periphery, whereas the soil body below the bearing disc produces larger displacement, and the double-pile effect is weakened further. From Figure 6, we found that when the CEP monopile is subjected to ultimate load, the pile-soil separation occurs on the disk, the bearing disk plays the main role, and the larger displacement area occurs in the soil below the bearing disk. The CEP double pile increases with pile spacing. Each single pile is closer to the CEP single pile under pressure, the double-pile effect gradually weakens, and the range of soil body between piles interacting with each other continuously declines, thereby reducing the overall displacement generated by the soil body between piles and improving the compressive bearing capacity gradually. ## 3.2. Load-Displacement Curve Analysis The vertical pressure is loaded step-by-step, the pile top displacement data under each level of load are extracted and collated, and the load-displacement curve is drawn, as shown in Figure7(a).(1) Figure7(a) shows that the development trend of each curve is similar, the displacement of the top pile, and the slope of the curve increase continuously with load; that is, the amount of change in displacement gradually rises due to the extrusion of the bearing disc and the soil under the disc under the action of the vertical pressure of the CEP double-pile, which causes the soil to slip. Furthermore, the compressive bearing capacity of the soil gradually decreases, which is not enough to resist the vertical pressure, resulting in the change of displacement rate being accelerated.(2) At the early stage of loading, when the vertical pressure is 200–1000 kN, the six curves are almost in the overlapping state; that is, the displacement changes are the same. At this time, the soil around the pile is not damaged, and the pile spacing has less influence on the bearing performance of the CEP double pile. When the vertical pressure exceeds 1000 kN, the displacement variation gradually decreases with the increase in pile spacing. For example, when the vertical pressure ranges from 1000 to 4000 kN, the displacement variations of MS1–MS6 pile are 94.31, 87.42, 83.74, 81.57, 77.58, and 76.55 mm. The displacement variation of MS5 and MS6 groups with the largest pile spacing almost overlap. 
This phenomenon is due to the fact that when the vertical pressure increases continuously, the overlapping range of pile-soil interaction decreases, the double-pile effect weakens, and the compressive bearing capacity of the double-pile foundation increases, which makes the displacement variation of the CEP double piles with larger pile spacing smaller.(3) As can be seen from Figure7, when the vertical pressure is greater than 1000 kN and the CEP double piles are subjected to the same vertical pressure, the displacement of the pile top gradually decreases with the increase in pile spacing. For example, when the vertical pressure is 4000 kN, the displacements of MS1–MS6 pile top are 106.51, 97.96, 91.42, 88.87, 85.27, and 84.12 mm. The difference among MS1, MS2, MS3, and MS4 pile top displacement is large, whereas the difference between MS5 and MS6 pile top displacement is small, indicating that when the pile spacing increases, the double pile effect is weakened, the compressive bearing capacity of CEP double pile is gradually improved, and the pile top displacement is gradually reduced. When the spacing of the CEP double pile disc end is greater than 2.5 times the disc overhang diameter, the double pile effect on the compressive bearing capacity of CEP double pile is small. Therefore, in the actual process, to reduce the influence of the double pile effect on the compressive bearing capacity of CEP double pile, we must try to ensure that the spacing between the disc ends of CEP double pile is greater than 2.5 times the disc overhang diameter.Figure 7 (a) Load-displacement curve. (b) Comparison of the CEP double pile and single pile load-displacement curves. (a)(b)To further investigate the similarities and differences between CEP double piles and CEP monopile, the test data of the MD monopile model were compared with MS2, MS4, and MS6 double pile models, and the load-displacement curves are drawn in Figure7(b).Figure7(b) depicts that when the vertical pressure ranges from 200 to 1000 kN, the change in the displacement of the monopile model is small, and the curves of the three sets of double pile models are gentle and almost coincide, indicating that the CEP double piles do not have mutual influence at this time, and the compressive bearing capacity of monopile and double piles is not different. When the vertical pressure exceeds 1000 kN, the displacement of the top of the CEP monopile increases sharply with the load. After this point, the curve trend, which is called the inflection point of the MD monopile load-displacement curve, occurs obviously, and the inflection point also appears when the vertical pressure for both double piles is 2000 kN. The gentle decline of the original curve displacement transformed into a sharp decline. Finally, each curve reached the last data point; that is, the MD monopile, MS2, MS4, and MS6 double piles’ ultimate compressive bearing capacity are 2600, 4200, 4600, and 4800 kN, respectively. The three groups of double-pile ultimate compressive bearing capacity are 1.62, 1.77, and 1.85 times of the monopile, from which we can learn that the ultimate compressive load bearing capacity of CEP double piles increases with the pile spacing, and the larger the pile spacing of the double piles is, the closer the ultimate compressive load bearing capacity is to 2 times of that of the single piles. 
However, the ultimate compressive load bearing capacity of MS6 CEP double piles did not reach twice that of single piles because of the influence of double piles on each other. Therefore, when studying the compressive load bearing capacity of CEP double piles, the influence of pile spacing and double-pile effect should be fully considered. ## 4. Stress Cloud Analysis To understand the effect of pile spacing on the stress of CEP double piles, the model pile and soilY-directional stress clouds of six groups of different pile spacings at this time were extracted (Figure 8) and compared with the stress distribution of the CEP monopile, the MD model pile, and soil Y-directional stress clouds (Figure 9).Figure 8 Y-directional stress cloud of each model pile and soil body (unit: MPa). (a) MS1. (b) MS2. (c) MS3. (d) MS4. (e) MS5. (f) MS6. (a)(b)(c)(d)(e)(f)Figure 9 MD model pile and soil stress cloud.Figure8 shows that when the vertical pressure is the same, the Y-directional stress distribution trend of the dual piles with different pile spacings is the same, the stress above the bearing disc is higher, and the stress below the bearing disc is lower. The comparison with Figure 9 shows that the stress distribution of the dual piles is similar to that of the single piles, which means that the stress distribution of the dual piles is not affected by the pile spacing at this time. Furthermore, the bearing disc plays an important role in bearing the vertical pressure. First of all, under the action of the vertical pressure of the double-pile, the Y-directional stress distribution of the soil at the pile end of each model pile is almost the same. The separation of the bearing disc and the soil on the disc occurs, and the Y-directional stress of the soil on the disc tends to be 0. The Y-directional stress distribution of the soil above the bearing disc and the soil at the pile end of the double-pile is the same as that of the single pile, whereas the Y-directional stress area of the soil under the disc (green area) is larger. The overlapping phenomenon of the soil in the Y-stress area in the MS1 group, which is due to the small pile spacing that resulted in a larger soil interaction between the double piles and a larger double pile effect, is obvious; whereas, from the MS4 group to MS6 group, the soil in the Y-stress area is separated with the increase in pile spacing, and the soil interaction between the double piles gradually decreases. The Y-directional stress distribution of the soil under the bearing plate is similar to that of the single pile. Thus, with the increase in pile spacing, the stress influence of the bearing plate on the soil between the piles and the double-pile effect is weakened and the compressive bearing capacity is gradually improved. ## 5. Shear Stress Curve Analysis To analyze the change in the shear stress of the soil on the left and right sides of the pile, the MS4 model pile under 1500 kN vertical pressure is used as an example to start the study. Then, the shear stress of each node was extracted. Finally, theXY directional shear stress curves of the soil on the left and right sides of the model pile were plotted (Figure 10) to compare the difference between the XY directional shear stress of the soil on the left and right sides of the CEP double pile and the CEP single pile. 
The points were taken for the MD single pile model under the same circumstances, and the MD in the XY direction shear stress curve of the model pile is shown in Figure 11.Figure 10 XY direction shear stress curves of the soil on the left and right sides of MS4 pile.Figure 11 XY direction shear stress curves of the soil on the left and right sides of the MD pile.Figure10 shows that under the action of 1500 kN vertical pressure, the XY directional shear stress curves of the soil on the left and right sides of the MS4 model pile are approximately symmetrically distributed, and the XY directional shear stresses of the soil distributed in the pile body away from the bearing disk position are not very different. However, the XY directional shear stresses of the soil at the bearing disk position change abruptly, and the XY directional shear stresses of the soil under the bearing disk are larger than the XY directional shear stresses of the soil on the bearing disk. This phenomenon is due to the pile body under vertical pressure. The bearing disc and the soil on the disc are separated, and only the pile side frictional force comes into play while the soil under the bearing disc is extruded, and the slip phenomenon occurs.The comparison of Figures10 and 11 shows that the XY-directional shear stress curves of the soil on the left and right sides of the CEP double pile and CEP single pile are almost the same, but the XY-directional shear stress curves of the soil on the left and right sides of the CEP single pile show a symmetrical pattern with the same shear stress value. Meanwhile, the XY-directional shear stress of the soil on the right side of the CEP double pile is slightly smaller compared with that of the soil on the left side, the maximum shear stress value in the XY direction of the left soil is 0.053 MPa, and the maximum shear stress value in the XY direction of the right soil is 0.048 MPa. The reason is that the soil on the right side of the left pile of MS4 group is damaged by the interaction of the double piles, thereby causing the shear stress on the right side of the bearing plate to decline. Different from the single pile, the double-pile effect has a certain effect on the change in XY-directional shear stress of the soil on the left and right sides of the CEP double pile. ## 6. Conclusions The finite element simulation analysis of the CEP double piles model with different pile spacings under vertical pressure leads to the following conclusions:(1) Due to the increasing pile spacing of the CEP double pile model, the displacement and stress influence range of the soil between piles decrease. Each monopile gradually approaches the CEP monopile compressive condition and the compressive bearing capacity increases continuously.(2) When the pile spacing keeps increasing, the change in the displacement of the CEP double pile model decreases continuously and the pile top displacement also decreases with the increase in pile spacing under the same load. This phenomenon indicates that when the pile spacing increases, the double-pile effect weakens, and the compressive bearing capacity is improved. Furthermore, when the CEP double pile disc end spacing is greater than 2.5 times the disc overhang diameter, the double-pile effect has less influence on the CEP double pile compressive bearing capacity. 
Therefore, to reduce the influence of the double-pile effect on the compressive bearing capacity of CEP double piles in actual projects, the disc end spacing of the CEP double pile should be kept greater than 2.5 times the disc overhang diameter.

(3) The XY-directional shear stress curves of the soil on the left and right sides of the pile show that, for the single pile, the curves are symmetrical with equal shear stress values, whereas for the double pile the XY-directional shear stress of the soil on the right side is slightly smaller than that on the left side. The pile spacing has some influence on the XY-directional shear stress of the soil on both sides of the pile, and the influence on the soil on the right side is greater.

(4) The ultimate compressive bearing capacity of the CEP double-pile model increases with pile spacing; the larger the pile spacing, the closer the ultimate compressive bearing capacity is to twice that of a single pile. However, even at a disc end spacing of 4.5 times the disc overhang diameter, the ultimate compressive bearing capacity of the CEP double pile does not reach twice that of a single pile because of the mutual influence of the two piles. Therefore, when studying the compressive bearing capacity of CEP double piles, the effect of pile spacing and the double-pile effect should be fully considered.

(5) Compared with previous studies on pile bearing performance, this simulation shows that, within a certain range, the bearing capacity of a CEP double pile is not simply twice that of a CEP monopile; the interaction between the two piles reduces the capacity. When a CEP double pile is subjected to vertical pressure transmitted through the pile body to the bearing disc and the pile spacing is small, the soil below the bearing disc is extruded, the soil above the bearing disc cracks, and the soil between the piles interacts, so the CEP double pile bears not only the external load but also the action of the soil on the pile body. As the pile spacing increases, the soil interaction between the piles weakens and the compressive bearing performance improves. This study therefore lays a foundation for further research on the bearing performance of CEP double piles.

## 7. Outlook

(1) In this study, the influence of pile spacing on the compressive bearing capacity of CEP double piles under vertical pressure has been studied in depth through ANSYS finite element simulation. However, in actual engineering, many other factors affect the compressive bearing capacity and failure state of the CEP pile, such as the number of bearing discs, the bearing disc angle, and the bearing disc position. The next step is to consider the influence of these factors on the bearing capacity of CEP group piles and to improve the CEP pile design theory.

(2) The soil in the ANSYS finite element model is silty clay. Since CEP double piles can be applied in many kinds of soil layers, and the soil properties also influence the compressive bearing capacity of CEP double piles, further work should study a range of geological conditions to provide a reliable theoretical basis.

---
*Source: 1005985-2023-03-21.xml*
2023
# HCDRNN-NMPC: A New Approach to Design Nonlinear Model Predictive Control (NMPC) Based on the Hyper Chaotic Diagonal Recurrent Neural Network (HCDRNN) **Authors:** Samira Johari; Mahdi Yaghoobi; Hamid R. Kobravi **Journal:** Complexity (2022) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2022/1006197 --- ## Abstract In industrial applications, Stewart platform control is especially important. Because of the Stewart platform’s inherent delays and high nonlinear behavior, a novel nonlinear model predictive controller (NMPC) and new chaotic neural network model (CNNM) are proposed. Here, a novel NMPC based on hyper chaotic diagonal recurrent neural networks (HCDRNN-NMPC) is proposed, in which, the HCDRNN estimates the future system’s outputs. To improve the convergence of the parameters of the HCDRNN to better the system’s modeling, the extent of chaos is adjusted using a logistic map in the hidden layer. The proposed scheme uses an improved gradient method to solve the optimization problem in NMPC. The proposed control is used to control six degrees of freedom Stewart parallel robot with hard-nonlinearity, input constraints, and in the presence of uncertainties including external disturbance. High prediction performance, parameters convergence, and local minima avoidance of the neural network are guaranteed. Stability and high tracking performance are the most significant advantages of the proposed scheme. --- ## Body ## 1. Introduction Stewart platform is a six-degree-of-freedom parallel robot that was first introduced by Stewart in 1965 and has potential uses in industrial contexts due to its good dynamics performance, high precision, and high rigidity. The control of the Stewart platform is quite challenging due to the nonlinear characteristics of dynamic parameters and time-varying delays. Stewart platform has more physical constraints than the serial manipulators, therefore solving their kinematics and dynamics problem is more difficult, and developing an accurate model of the Stewart platform has always been a concern for researchers in this field [1].There has been a lot of research done on using the neural networks to the model nonlinear systems [2–4]. In the study of Chen et al. [5], to control nonlinear teleoperation manipulators, an RBF-neural network-based adaptive robust control is developed. As a result, the RBF neural network is used to estimate the nonlinearities and model uncertainty in system dynamics with external disturbances. To handle parameter variations, the adaptive law is developed by adapting the parameters of the RBF neural network online while the nonlinear robust term is developed to deal with estimation errors. Lu employed a NN approximator to estimate uncertain parametric and unknown functions in a robotic system in the study by Lu and Liu [6], which was subsequently used to construct an adaptive NN controller for uncertain n-joint robotic systems with time-varying state constraints. As outlined in [7], an adaptive global sliding mode controller with two hidden layers is developed. The system nonlinearities were estimated using a new RNN with two hidden layers. An adaptive sliding mode control scheme based on RBFNN-based estimation of environmental parameters on the slave side is proposed in the study by Chen et al. [8] for a multilateral telerobotic system with master-slave manipulators. 
The environment force is modeled generally.Changes in the structure of the neural network during the training, as well as the use of chaos theory in the neural network, have been considered to cover the behavioral diversity of nonlinear systems. In the study by Chen and Han and Qiao [9, 10], the number of neurons in the hidden layer is changed online. In the study by Han et al. [11], to optimize the NN structure, a penalty-reward method is used. Aihara presented a chaotic NN model in the study by Aihara et al. [12]. Hopfield NN is introduced in the study of Li et al. and Farjami et al. [13, 14] as a chaotic RNN with a chaotic dynamic established temporarily for searching. Reference [15] introduces a context layer that uses chaotic mappings to produce chaotic behavior in NN throughout the training phase in order to prevent local minima. Reference [16] discusses the designing of a chaotic NN by using chaotic neurons which show chaotic behavior in some of their activity areas. In this aspect, the behavior of the neurons and network will change according to the changes in the bifurcation parameters of the neurons which have mostly been inspired by biological studies. A logistic map is utilized as an activation function in the study by Taherkhani et al. [17], which iteratively generates chaotic behavior in the nodes.In the study of Dongsu and Hongbin [18], an adaptive sliding controller has been used to identify fixed unknown parameters, followed by external disturbances compensation. In the study of Ghorbani and Sani [19], a Fuzzy NMPC is introduced to handle uncertainties and external disturbances. In the study of Jin et al. [20], different parallel robots’ NN-based controlling approaches have been reviewed. The applicability of RNN, feedforward NNs, or both for controlling parallel robots has been discussed in detail, comparing them in terms of controlling efficiency and complexity of calculations.In this paper, due to the inherent delays of the Stewart platform and the design of the controller based on future changes, special attention is paid to the model predictive control. To predict the system behavior over a predefined prediction horizon, MPC approaches require a precise linear model of the under-control system. Stewart platform is inherently nonlinear and linear models are mostly inaccurate in dynamical nonlinear systems modeling. These all bring up the motivation for using nonlinear models in MPC, leading to NMPC.The most significant features of NMPCs include the following: (I) nonlinear model utilization, (II) state and input constraints consideration, (III) online minimization of appointed performance criteria, (IV) necessity of solving an online optimal control problem, (V) requirement of the system state measuring or estimation, for providing the prediction. Among universal nonlinear models, which are used for predicting the behavior of the system in future, the neural networks are significantly attractive [21, 22].The effectiveness of the NNs in nonlinear system identification has increased the popularity of NN-based predictive controllers. Nikdel [23], has presented a NN-based MPC to control a shape memory alloy-based manipulator. For nonlinear system modeling and predictive control, a multiinput multioutput radial basis function neural network (RBFNN) was employed in the study of Peng et al. [24]. 
The recurrent neural networks (RNN) perform well in terms of modeling dynamical systems even in noisy situations because they naturally incorporate dynamic aspects in the form of storing dynamic response of the system through tapping delay, the RNN is utilized in NMPC in the study of Pan and Wang [25], and the results show that the approach converges quickly. In the study of Seyab and Cao [26], a continuous-time RNN is utilized for the NMPC, demonstrating the method’s appropriate performance under various operational settings.In this paper, we will continue this research using the hierarchical structure of the chaotic RNNs, application to NMPC of a complex parallel robot. This paper’s contributions and significant innovations are as follows: (I) a new NMPC based on hierarchical HCDRNNs is suggested to model and regulate typical nonlinear systems with complex dynamics. (II) To overcome the modeling issues of complex nonlinear systems with hard nonlinearities, in the proposed controller, the future output of the under-control system is approximated using a proposed novel hierarchical HCDRNN. Note that the equations of motion of such systems are very difficult to solve by mathematical methods and bring forth flaws such as inaccuracy and computational expenses. (III) The weight updating laws are modified based on the proposed HCDRNN scheme, considering the rules introduced in the study of Wang et al. [15]. (IV) On the one hand, propounding the novel hierarchical structure, and on the other hand, the use of chaos in weights updating rules, significantly reduced the cumulative error. (V) The extent of chaos is regulated based on the modeling error in the proposed HCDRNN, in order to increase the accuracy of modeling and prediction. (VI) The control and prediction horizons are specified based on closed-loop control features. (VII) Weights convergence of the proposed HCDRNN is demonstrated and system stability is assured in terms of the Lyapunov second law, taking into account input/output limitations. Furthermore, the proposed controller’s performance in the presence of external disturbance is evaluated.The remainder of this work is structured as follows: Section2 describes the suggested control strategy in detail, Section 3 discusses the simulation results to validate the efficiency of the proposed method, and Section 4 discusses the final conclusions. ## 2. The Proposed Control Strategy The MPC is made up of three major components: the predictive model, the cost function, and the optimization method. The predictive model forecasts the system’s future behavior. Based on the optimization of the cost function and the projected behavior of the system, MPC applies an appropriate control input to the process. This paper uses a novel HCDRNN as the predictive model. Moreover, to optimize the cost function, it uses a type of improved gradient method which utilizes the data predicted by the proposed HCDRNN.Figure1 shows a block diagram of the designed control system in which Rt represents the desired trajectory for the coordinates origin of the moving plane. In which, yt and ut are the outputs and inputs of the Stewart platform, y^t+1 shows the output predicted by the NN model. Finally, the optimization block extracts the control signal, ut, by minimizing the cost function using the improved gradient descent method.Figure 1 Schematic of the proposed HCDRNN-NMPC. ### 2.1. Stewart Platform Figure2 shows the Stewart platform. 
All parameters and variables are the same as what Tsai used in [27].Figure 2 Schematic of Stewart platform [27].The dynamic model of Stewart platform is introduced in equation (1), which is obtained based on the virtual-work principle [27].(1)−Fz−Jp−TFp+JxTFx+JyTFy=τ¯,where Jp,Jx,andJy are the manipulator Jacobian matrices, Fp is the resultant of the applied and inertia wrenches exerted at the center of mass of the moving platform, τ¯,Fx,Fy,andFz are the vectors of input torque and forces, which are applied to the center of mass of the moving plate from the prismatic joints of the robot. For more details about the robot and its mathematical model, the interested reader can see the reference [27]. ### 2.2. The Proposed Hyper Chaotic Diagonal Recurrent Neural Network In general, the structure of the NNs may be categorized into feedforward or recurrent types. Possessing the features of having attractor dynamics and data storage capability, the RNNs are more appropriate for modeling dynamical systems than the feedforward ones [28]. Reference [15] introduces the essential concepts of the chaotic diagonal recurrent neural network (CDRNN). This study introduces an HCDRNN, the structure of which is depicted in Figure 3.Figure 3 Structure of the HCDRNN [29].The proposed HCDRNN is made up of four layers: input, context, hidden, and output. The hidden layer outputs withv-step delays are routed into the context layer through a chaotic logistic map. The following equations describe the dynamics of the HCDRNN.(2)y^t=Wotγt,γt=FSt,St=WItXt+W1Dt−a.Z1tΓ1t−1⋮WnDt−a.ZntΓnt−1,Zt+1ζZt1−Zt,where Xt∈Rm×1 and y^t∈R1×1 show the inputs and output of the HCDRNN, γt=γ1t,γit,…,γntT∈Rn×1 represents the hidden layer’s output. A set of Γit−1=γit−1…γit−vT∈Rv×1 is defined as vectors of previous steps’ values of γifori=1,2,…,n. F. shows a symmetric sigmoid activation function. Zt∈Rn×1 represents the chaotic logistic map, with Z0 as a positive random number with normal distribution. The input, context, and output weight matrices are represented as WIt∈Rn×m, WiDt∈R1×v∀i=1,2,…,n, and Wot∈R1×n, respectively. ζ∈R1×1 is the chaos gain coefficient. The degree of chaos within the HCDRNN can be adjusted by adjusting the parameter a, which ranges from 0 for a simple DRNN to close to 4 for an HCDRNN. This fact allows you to regulate the level of chaos within the NN by altering the parameter a in such a manner that the reduction in training error leads to a progressive decrease in the extent of chaos until it reaches stability. The value of the parameter a’s value could be altered as follows. As the change is exponential, the NN will rapidly converge.(3)βt=−e¯te¯t+1,a=μ0+μmax−μ0exp−βtTa,e¯t>ε,0,e¯t≤ε.where e¯t is the samples’ absolute training prediction error. The prediction error, ept, represents the difference between the system’s actual output, yt, and the output of HCDRNN, y^t.(4)ept=yt−y^t.μmax and μo represent the maximum and minimum threshold of the parameter a, respectively. Ta is the annealing parameter, and ε is the prediction error threshold. To minimize the error function, Ept, the weight update laws for the output, hidden, and context layers are based on the robust adaptive dead zone learning algorithm reported in [30].(5)Ept=12ep2t.Accordingly, weights updating laws are modified here for the proposed structure of the HCDRNN as follows [29]: #### 2.2.1. Output Layer Ifept<Δot then Wot+1 and Δot+1 do not change, otherwise:(6)Wot+1=Wot+2eptγtT1−γt2,(7)Δot+1=Δot+2e¯t1−γt2. #### 2.2.2. 
Hidden Layer Ifept<ΔIt then WIt+1 and ΔIt+1 do not change, otherwise:(8)WIt+1=WIt+2teptWotF′tXt1−WotF′tXt2,(9)ΔIt+1=ΔIt+2F'minte¯t1−WotF′tXt2,where F′t=1−γ2t is the first derivative of the activation function in the hidden layer and Fj′t=F′sjt,∀i=1,2,…,n, and F'mint=minF'jt≠0,∀j,t. #### 2.2.3. Context Layer Ifept<ΔDt then WDt+1 and ΔDt+1 do not change, otherwise ∀i=1,2,…,n:(10)WiDt+1=WiDt+2Fmin'teptWiot1−γi2tΓiTt−11−Wiot1−γi2tΓiTt−12,(11)ΔDt+1=ΔDt+2F'minte¯t1−Wiot1−γi2tΓiTt−12.In these equations, “Δot,ΔIt,ΔDt are the robust adaptive dead zones for output, hidden and context layers, respectively.”Remark 1. TheoremsA.1, A.2, and A.3 in the appendix prove the convergence of neural network weights.Remark 2. As illustrated below [11], it is expected that a multiinput single-output nonlinear autoregressive exogenous (NARX) model may represent the undercontrol nonlinear system utilizing the nu delayed system’s inputs and ny delayed system’s outputs.(12)yt=gyt−1,…,yt−ny,ut−1,…,ut−nu. In this equation,g. is an unknown function. Based on Remark1 and Remark 2, an array of Hp HCDRNNs is used to forecast the system’s behavior in a Hp-step-ahead prediction horizon after the training and weights updating operations. The structure of this HCDRNN array is depicted in Figure 4.Figure 4 Prediction ofHp-step-ahead outputs by the hierarchical HCDRNN [29].Remark 3. Each HCDRNN in the array, as shown in Figure4, is trained independently, and its weight matrices differ from those of the other HCDRNNs. As a result, the formulation in Sections 2.2.1 to 2.2.3 should be changed based on the input-output permutation for each element of the hierarchy. Remark 1 is, however, applied to all of the HCDRNNs in the array. ### 2.3. The Proposed HCDRNN-NMPC A finite-horizon NMPC cost function would be the same as indicated in reference [11].(13)V^t=ρ1Rt−Y^tTRt−Y^t+ρ2ΔUtTΔUt,where Rt=rt+1,rt+2,…,rt+HpT is the reference signal, Yt=yt+1,yt+2,…,yt+HpT is the system output, and Y^t=y^t+1,y^t+2,…,y^t+HpT is the predicted output through the prediction horizon. ΔUt=Δut,Δut+1,…,Δut+Hu−1T is the control signal variations during the upcoming control horizon. ρ1 and ρ2 are weighting parameters, determining the significance of the tracking error versus the control signal variation in the cost function, V^. Hp is prediction horizon and Hu is control horizon Hu<Hp. However, the equation faces the following constraints [11]:(14)Δut≤Δumax,umin≤ut≤umax,y^min≤y^t≤y^max,rt+Hp+i−y^t+Hp+i=0,∀i≥1.The control signal,Ut+1=ut+1,ut+2,…,ut+HuT, based on the improved gradient method is given below [11, 31]:(15)Ut+1=Ut+ΔUt=Ut−η∂V^t∂Ut,(16)ΔUt=ηρ11+ηρ2∂Y^t∂UtTRt−Y^t,where η>0 represents the learning rate of the control input sequence and ∂Y^t/∂Ut represents the Jacobian matrix, J, which is computed as a matrix with the dimension of Hp×Hu.(17)∂Y^t∂Ut=∂y^t+1∂ut00⋯0∂y^t+2∂ut∂y^t+2∂ut+10⋯0⋮⋮⋮⋱⋮∂y^t+Hu∂ut∂y^t+Hu∂ut+1∂y^t+Hu∂ut+2⋯∂y^t+Hp∂ut+Hu−1⋮⋮⋮⋮⋮∂y^t+Hp∂ut∂y^t+Hp∂ut+1∂y^t+Hp∂ut+2⋯∂y^t+Hp∂ut+Hu−1Hp×Hu.The Jacobian matrix,J=∂Y^t/∂Ut, can be computed based on the chain rule.(18)∂y^t+i∂ut+j=Wot+i∂γt+i∂ut+j,∀i=1,2,…,Hp,∀j=0,1,…,Hu−1,∂y^t+i∂ut+j=F′t+iWIt+i∂Xt+i∂ut+j+W1Dt+i−a.Z1t+i∂Γ1t+i−1∂ut+j⋮WnDt+i−a.Znt+i∂Γnt+i−1∂ut+j,in which, F′t+i=∂FSt+i/∂St+j=1−γ2t and ∂γt+i/∂ut+j can be computed recurrently knowing that if j=i then ∂γt+i/∂ut+j=0. It means that the computations should be completed from ∂γt+i/∂ut+i−1 to ∂γt+i/∂ut+j. 
Moreover, considering the structure of Xt+i, the ∂Xt+i/∂ut+j can be calculated recurrently based on equation (26). Algorithm 1 summarizes the proposed HCDRNN-NMPC scheme [29].

Algorithm 1: Details of the HCDRNN-NMPC scheme.

Step 1. Determine Hp and Hu, such that Hp>Hu.
Step 2. Get Rt, Ut, and Xt in each control step, such that: (i) Rt=rt+1,…,rt+HpT is the vector of desired values for the Hp next steps; (ii) Ut=ut,…,ut+Hu−1T is the last optimal sequence of the predicted control signal; (iii) Xt is the delayed input-output vector of the nonlinear system.
Step 3. Predict the outputs of the system for the Hp next steps by the proposed HCDRNN.
Step 4. Calculate J by equation (17).
Step 5. Compute ΔUt and Ut+1 by equations (15) and (16), respectively.
Step 6. Apply ut+1, the first element of the vector Ut+1, to the nonlinear system, and go back to Step 2 for the next sample time.

In Steps 4 and 5 of the proposed algorithm, as the number of system inputs increases, the dimensions of the Jacobian matrix increase, and the estimation error increases due to the discretizations performed in calculating the derivatives. Choosing an appropriate sampling time is used in this paper as the means to reduce this estimation error.

#### 2.3.1. Stability Analysis for HCDRNN-NMPC

The stability of NMPC-HCDRNN is demonstrated by considering the convergence of the model, which is proved in Remark 1 and the Appendix, and the fact that the neural network training is done offline.

Theorem 1. Consider the constrained finite-horizon optimal control presented by (17) and (18). Lyapunov's second law ensures the asymptotic stability of the proposed controller due to the limited input and output amplitudes and the negative semidefinite V^˙t, provided that the neural network weights' convergence is proven and the predictive control law is as given in equations (19) and (20).

Proof. The constrained finite-horizon optimal control given in equation (13) can be rewritten as in (19) by expanding the cost function along the control horizon:(19)V^t=ρ1∑i=1Hprt+i−y^t+i2+ρ2∑i=1HuΔu2t+i−1. Ut=ut,ut+1,…,ut+Hu−1T is the optimal control sequence obtained at time t using the optimization algorithm. If Ust+1 is the suboptimal control sequence extracted from Ut and considered as Ust+1=ut+1,…,ut+Hu−1T, the suboptimal cost function V^st+1 is defined as follows:(20)V^st+1=ρ1∑i=1Hp+1rt+i−y^t+i2+ρ2∑i=1HuΔu2t+i−1. Using the difference of V^t and V^st+1, and assuming that et+i=rt+i−y^t+i, equation (21) is written as follows:(21)V^st+1−V^t=ρ1∑i=1Hp+1e2t+i+ρ2∑i=1HuΔu2t+i−1−ρ1∑i=1Hpe2t+i+ρ2∑i=1HuΔu2t+i−1=ρ1e2t+Hp+1−e2t+1−ρ2Δu2t≤0. Therefore, if Ut+1 is the optimal solution of the optimization problem at time t+1 using the control law described in equation (16), it outperforms Ust+1, which is suboptimal, and its cost function is smaller according to equation (22):(22)V^t+1−V^t≤V^st+1−V^t≤0. Hence, the proof is complete.
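To make the receding-horizon procedure of Algorithm 1 concrete, the following is a minimal Python sketch of Steps 3-6, assuming generic predictor, jacobian, and plant callables as hypothetical stand-ins for the trained HCDRNN array and the Stewart platform; it illustrates the improved-gradient update of equation (16) and is not the authors' implementation. The default input bounds mirror the force limits reported later in equation (26).

```python
import numpy as np

# Minimal receding-horizon sketch of Steps 3-6 of Algorithm 1.
# `predictor`, `jacobian`, and `plant` are hypothetical callables standing in
# for the trained HCDRNN array and the Stewart platform.

def nmpc_step(predictor, jacobian, R, U, X, eta=0.01, rho1=0.8, rho2=0.2,
              u_min=0.0, u_max=16.0):
    """One control update: predict, evaluate the cost gradient, update U.

    predictor(X, U) -> Y_hat : predicted outputs over the horizon, shape (Hp,)
    jacobian(X, U)  -> J     : sensitivity dY_hat/dU, shape (Hp, Hu), cf. eq. (17)
    R : reference over the prediction horizon, shape (Hp,)
    U : current candidate control sequence, shape (Hu,)
    """
    Y_hat = predictor(X, U)                          # Step 3: Hp-step-ahead prediction
    J = jacobian(X, U)                               # Step 4: Jacobian of eq. (17)
    # Step 5: improved-gradient update of eqs. (15)-(16)
    dU = (eta * rho1 / (1.0 + eta * rho2)) * J.T @ (R - Y_hat)
    # Input bounds taken from the force limits of eq. (26) (assumed defaults here)
    return np.clip(U + dU, u_min, u_max)

def run_closed_loop(plant, predictor, jacobian, reference, x0, Hp=3, Hu=2, steps=500):
    """Step 6: apply the first element of the optimized sequence at every sample."""
    U = np.zeros(Hu)
    X = np.asarray(x0, dtype=float)
    outputs = []
    for t in range(steps):
        R = reference(t, Hp)                         # desired values for the next Hp steps
        U = nmpc_step(predictor, jacobian, R, U, X)
        y, X = plant(U[0], X)                        # apply u, read new output and regressor
        outputs.append(y)
    return np.array(outputs)
```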
## 3. Simulation

To control the Stewart platform, HCDRNN-NMPC is used such that the upper moving plane of the platform tracks the desired trajectory. The simulations have been carried out in MATLAB, 2015 version. To evaluate the efficiency of the control method against external disturbances, the effect of a disturbance applied to the force on one of the links of the Stewart platform has been investigated.

### 3.1. Neural Network-Based Model

To predict the behavior of the Stewart platform, input-output data of the system under different operating configurations are required. To generate the training data, the inverse dynamics of the Stewart platform are solved for several random desired trajectories, based on the algorithm presented by [27] and the parameters introduced by [32]. The applied sampling time is ts=0.01.

#### 3.1.1. Training

The general structure of the HCDRNN is designed in such a way that its input vector, Xt, includes the previous position of the moving plane, pxt, and the forces exerted on each link, Fit, as in equation (23), and its output vector, yt, includes the position of the moving plane as in equation (24).(23)Xt=F1t,…,F1t−nu,…,F6t,…,F6t−nu,pxt−1,…,pxt−ny,(24)yt=Pxt.The number of elements of Xt determines the number of input layer neurons. Accordingly, thirteen input nodes have been considered for six links.
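For illustration, the following minimal Python sketch assembles a delayed input-output regressor in the spirit of equation (23) from recorded link forces and platform positions; the function name build_regressor and the array layout are assumptions, and the resulting vector length depends on the chosen delay orders, so it need not coincide with the thirteen input nodes quoted above.

```python
import numpy as np

def build_regressor(forces, positions, t, nu=3, ny=2):
    """Assemble a delayed input-output vector in the spirit of eq. (23) at sample t.

    forces    : array of shape (T, 6), history of the six link forces F_1..F_6
    positions : array of shape (T,),  history of the controlled coordinate p_x
    Returns [F_1(t)..F_1(t-nu), ..., F_6(t)..F_6(t-nu), p_x(t-1)..p_x(t-ny)].
    """
    force_terms = [forces[t - k, j] for j in range(forces.shape[1]) for k in range(nu + 1)]
    position_terms = [positions[t - k] for k in range(1, ny + 1)]
    return np.array(force_terms + position_terms)

# Hypothetical recorded histories, e.g., from solving the inverse dynamics offline.
T = 100
forces = np.random.uniform(0.0, 16.0, size=(T, 6))
positions = np.random.uniform(-1.7, 1.2, size=T)
X_t = build_regressor(forces, positions, t=50)
print(X_t.shape)   # 6*(nu+1) + ny = 26 delayed terms with these settings
```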
As the network's output comprises three positions and three directions, separate networks should be considered for each output; therefore, there are six MISO networks in our case. For these networks, a supervised learning scheme is used to train the networks with regard to the inputs and outputs. The data were divided into two sets: 70% were chosen for training and 30% for testing. At the beginning of the training of the HCDRNNs, the weight matrices are valued randomly. The tangent sigmoid is selected as the activation function, and the input-output data are normalized. The values of the NN parameters are defined as ε=0.01, Ta=0.07, μmax=4, μ0=0, μo=μI=μD=0.01, ny=2, and nu=3, and WI,WD,Wo,ΔI,ΔD,Δo are randomly initialized. The neural network training is done offline, but during the training, the coefficient a is adjusted online in such a way that the behavior of the Stewart platform is covered by creating chaos in the neural network structure, and as the training error decreases, its value changes such that the neural network's chaos is decreased. Figure 5 shows the chaotic property of the HCDRNN.

Figure 5 Traces of the training: (a) traces of the training mean square error (MSE) and (b) traces of the coefficient a.

The impact of the number of hidden layer neurons on the approximation performance is studied. Table 1 reports the results of this study for 7 to 43 neurons, where their performances are compared in terms of training time and MSE.

Table 1 Comparison of prediction performance versus the number of hidden layer neurons.

| No. of hidden layer neurons | Training MSE | Training time (sec) |
|---|---|---|
| 7 | 9.7927e−5 | 658.28 |
| 27 | 9.6511e−8 | 978.12 |
| 43 | 9.4508e−5 | 1804.05 |

As shown in Table 1, the training time increases with the number of neurons in the hidden layer. Considering that the MSE value is already acceptable with 7 neurons in the hidden layer, it is more appropriate to use 7 hidden neurons for the prediction of the Stewart platform.

For a sinusoidal trajectory, the results of one-step-ahead, two-step-ahead, and three-step-ahead system behavior predictions are investigated. Table 2 reports the MSE of the prediction error.

Table 2 MSE of prediction error for a sinusoidal trajectory.

| | One-step-ahead | Two-step-ahead | Three-step-ahead |
|---|---|---|---|
| MSE of prediction error | 8.9321e−8 | 1.2463e−6 | 9.7927e−5 |

Table 2 indicates a reliable prediction by the HCDRNN without any accumulated error. As a significant conclusion, it is shown that the use of the chaotic context layer, together with the use of different weight matrices trained for each step in the proposed hierarchical structure, overcomes the error accumulation in n-step-ahead predictions.

### 3.2. The Results of HCDRNN-NMPC

In this paper, the values of the parameters are considered as follows: Hp=3, Hu=2, ρ1=0.8, ρ2=0.2, and η=0.01. The performance of the NMPC is compared with the MPC, both evaluated by the integral absolute error (IAE).(25)IAE=1T∑t=1T|Rt−Yt|,where T is the total number of samples. Some studies use other metrics such as the mean square error (MSE) and/or the integral square error (ISE). Each of these metrics has drawbacks that led us to use the IAE instead: metrics based on squared errors magnify errors greater than one and attenuate errors smaller than one, which is not precise for robotic motion errors. The input and output signals are bounded in the intervals given in equation (26).(26)0≤Fit≤16,−1.7≤Pxt≤1.2.
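As a concrete reading of equation (25), the short Python sketch below computes the IAE from sampled reference and output sequences; the function name iae and the toy signals are illustrative only.

```python
import numpy as np

def iae(reference, output):
    """IAE of eq. (25): mean absolute tracking error over the T recorded samples."""
    reference = np.asarray(reference, dtype=float)
    output = np.asarray(output, dtype=float)
    return np.mean(np.abs(reference - output))

# Toy usage: a (hypothetical) output that lags a sinusoidal reference by one sample.
t = np.arange(0.0, 2.0 * np.pi, 0.01)
r = 0.2 * np.sin(3.0 * t)
y = 0.2 * np.sin(3.0 * (t - 0.01))
print(f"IAE = {iae(r, y):.3e}")
```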
#### 3.2.1. Sample Trajectories

The performance of the controller in tracking three sample paths is demonstrated in items (1), (2), and (3) below.

(1) A sample trajectory has been designed that is to be tracked by the controlled Stewart platform. The control signal applied to link 1 is calculated, assuming that the forces exerted on the other links remain fixed.(27)Pt=PxtPytPzt=−1.5+0.2sinωt0.2sinωt1.0+0.2sinωt,where ω=3 and 0≤ωt≤2π. Comparing the desired trajectory of the robot's movement with the actual trajectory on the x, y, and z axes, the tracking error varies within the range of [−5.625e−3, 0.015e−3] for all three axes, which is negligible. Figure 6 shows the three-dimensional path of the top plane of the Stewart platform. As seen in Figure 6, the NMPC has extracted the control signal in such a way that the Stewart platform's output tracks the reference signal well along all three axes. Moreover, Figure 7 shows the force exerted on link 1. As shown in Figure 7, the control signal applied to link 1 respects the limit imposed on the force exerted on each link.

(2) The second sample trajectory is the following two-frequency trajectory.(28)ϕx=ϕy=ϕz=0,Pt=PxtPytPzt=0.2sinω1t+0.4sinω2t0.2sinω1t1.0+0.2sinω2tm,ω1=3,ω2=2,0≤ω1t,ω2t≤2.Comparing the desired trajectory of the robot's movement with the actual trajectory on the x, y, and z axes, the tracking error range on each axis is reported in Table 3. Taking Table 3 into consideration, the tracking error is on the order of 10−8 to 10−5 for the three axes, which is negligible. Figure 8 shows the three-dimensional path of the top plane of the Stewart platform, and Figure 9 shows the force exerted on link 1. As seen in Figure 9, the NMPC has extracted the control signal in such a way that the Stewart platform's output tracks the reference signal well along all three axes, and the control signal applied to link 1 respects the limit imposed on the force exerted on each link, assuming that the positions of the other links remain unchanged.

(3) The third sample path, similar to the path presented in reference [33], is the following two-level step path, examined to evaluate the performance of the proposed controller in tracking paths with rapid changes.(29)ϕx=ϕy=ϕz=0,Pt=PxtPytPzt=3.5ut−ut−53ut−ut−15.3ut−ut−7cm,0≤Pxt,Pyt,Pzt≤,0≤Fit≤6.

Figure 6 Tracking the three-dimensional path.

Figure 7 Force exerted on link 1 (control signal).

Table 3 Tracking error range on the x, y, z axes for the two-frequency trajectory.

| Axis | x | y | z |
|---|---|---|---|
| Tracking error range | [−1.01, 2.03]×10−8 | [−1.01, 1.88]×10−6 | [−0.79, 1.05]×10−5 |

Figure 8 Tracking the three-dimensional path.

Figure 9 Force exerted on link 1 (control signal).

Comparing the desired trajectory of the robot's movement with the actual trajectory on the x, y, and z axes, the tracking error range on each axis is reported in Table 4. The three-dimensional path traveled by the Stewart platform is shown in Figure 10. The force exerted on link 1 is shown in Figure 11.

Table 4 Tracking error range on the x, y, z axes for the two-level step trajectory.

| Axis | x | y | z |
|---|---|---|---|
| Tracking error range | [−3.2, 3.5] | [−1.8, 3] | [−5.1, 5.5] |

Figure 10 Tracking the three-dimensional path.

Figure 11 Force exerted on link 1 (control signal).

As shown in Figures 10 and 11, due to the intensive changes in the desired path, the controller makes a control effort to reach the desired path, and after the desired path is reached, the control signal no longer changes.
The transient phase of the response is well observed on this path, and, as reported in Table 4, the tracking error with respect to the reference signal on the x, y, and z axes stays within [−5.1, 5.5], which is satisfactory considering the severe changes of the reference, and the control signal applied to link 1 satisfies the force constraint.

### 3.3. External Disturbance Rejection

The effect of an external disturbance is assessed here in the form of a pulse signal with 0.4 N amplitude, applied to the force of another link from 1 to 1.4 seconds. The proposed control performance is shown in Figure 12.

Figure 12 Disturbance rejection.

Figure 13 shows the tracking error and control signal in the presence of the external disturbance.

Figure 13 (a) Tracking error and (b) extracted control signal in the presence of external disturbance.

When the disturbance is applied, the output changes and a control effort is made to restore tracking. Advantages of the proposed method are the low number of oscillations during disturbance rejection, an overshoot and undershoot smaller than the initial disturbance magnitude, resulting in a more uniform output, and, notably, a smaller and smoother control signal.

### 3.4. Simulation Results Analysis

The nonlinear model predictive controller requires a high computational effort to extract the control signal, but because it uses the system's nonlinear model, it can achieve the desired control performance with minimal error. Because the Stewart platform has unknown dynamics, the NN was used to model it. Chaos theory was used in the NN to reduce and speed up the control calculations, which accelerates the learning dynamics and thus mitigates the slowness of predictive control. Furthermore, trapping in local minima is avoided by employing chaos in the neural network, and increasing the order of chaos by employing more chaotic functions in the hidden layer results in hyper-chaos in the proposed neural network. Table 5 compares the prediction performance of the DRNN and the proposed HCDRNN.

Table 5 Prediction performance of the DRNN and proposed HCDRNN.

| Prediction error | DRNN (training) | DRNN (test) | HCDRNN (training) | HCDRNN (test) |
|---|---|---|---|---|
| Step 1 | 1.7545e−6 | 4.2148e−5 | 3.8214e−8 | 1.1284e−7 |
| Step 2 | 7.2561e−4 | 6.7316e−3 | 9.1852e−8 | 5.6371e−7 |
| Step 3 | 9.1027e−1 | 1.4111e−1 | 6.2379e−7 | 9.0141e−7 |

Table 6 compares the performance of the proposed control with the proportional-integral-derivative (PID) control [34], the sliding mode control [18], the fuzzy NMPC [19], and DRNN-NMPC. The comparison results are recorded in terms of IAE.

Table 6 IAE comparison.

| Control approach | PID | Sliding mode control | Fuzzy NMPC | DRNN-NMPC | The proposed method |
|---|---|---|---|---|---|
| IAE | 3.2e−4 | 2.8e−5 | 4.6e−6 | 1.9e−3 | 2.9e−8 |

The proposed method provides the smallest IAE among the compared methods. PID controllers need large proportional, integral, and derivative gains, which makes the control signal highly sensitive to external disturbance, so that the control signal rises to a large value at even the lowest level of disturbance. However, the control inputs are bounded for practical reasons; thus, the control signal computed by the PID controller would not be applicable in practice. As demonstrated in Figure 13(a), when the external disturbance is applied to the robot, the tracking error remains small, in the range of [−6e−4, −3e−3], which is proof of the proposed method's high performance.
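Because the chaotic context layer is the ingredient credited above with faster learning and local-minimum avoidance, the following minimal Python sketch illustrates the logistic-map recursion of equation (2) together with a simplified, monotone stand-in for the error-driven annealing of the chaos parameter a described around equation (3); the schedule, thresholds, and function names are illustrative assumptions rather than the authors' exact rule.

```python
import numpy as np

def logistic_step(z, zeta=4.0):
    """One iteration of the logistic map Z(t+1) = zeta * Z(t) * (1 - Z(t)) feeding the context layer."""
    return zeta * z * (1.0 - z)

def chaos_gain(e_bar, mu0=0.0, mu_max=4.0, Ta=0.07, eps=0.01):
    """Simplified, monotone stand-in for the error-driven annealing of the parameter a.

    Large training error -> a close to mu_max (strong chaos, wide exploration);
    small training error -> a close to mu0; below the threshold eps -> 0,
    so the network degenerates to a plain DRNN as training converges.
    """
    if e_bar <= eps:
        return 0.0
    return mu0 + (mu_max - mu0) * (1.0 - np.exp(-e_bar / Ta))

# Toy usage: the chaos gain shrinks as a (hypothetical) training error decays.
z = 0.37                                   # positive random initial state of the map
for e_bar in [0.5, 0.2, 0.08, 0.02, 0.005]:
    a = chaos_gain(e_bar)
    z = logistic_step(z)
    print(f"training error={e_bar:.3f}  chaos gain a={a:.3f}  context state Z={z:.3f}")
```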
Neural Network-Based Model To predict the behavior of the Stewart platform, input-output data of the system under different operating configurations are required. To generate the training data, the inverse dynamics of the Stewart platform are solved for several random desired trajectories, based on the algorithm presented by [27] and the parameters introduced by [32]. The applied sampling time is ts=0.01. ### 3.1.1. Training The general structure of the HCDRNN is designed in such a way that its inputs vector,Xt, includes the previous position of the moving plane, pxt, and the forces exerted on each link, Fit, as in equation (23), and its output vector, yt, includes the position of the moving plane as in equation (24).(23)Xt=F1t,…,F1t−nu,…,F6t,…,F6t−nu,pxt−1,…,pxt−ny,(24)yt=Pxt.The number of elements ofXt determines the number of input layer neurons. Accordingly, thirteen input nodes have been considered for six links. As the network’s output comprises three positions and three directions, separate networks should be considered for each output and, therefore, there would be six MISO networks in our case. For the aforementioned networks, a supervised learning scheme is considered to train the networks with regard to the inputs and outputs. Divided into two sets, 70% of the data were chosen for training and 30% of them for testing. At the beginning of the training of the HCDRNNs, the weight matrices are randomly valued. Tangent sigmoid is selected as the activation function, and the input-output data are normalized. The values for the NN parameters are defined as ε=0.01, Ta=0.07, μmax=4, μ0=0, μo=μI=μD=0.01, ny=2, and nu=3 as well as WI,WD,Wo,ΔI,ΔD,Δo are randomly initialized.The neural network training is done offline, but during the training, the coefficienta is adjusted online in such a way that the behavior of the Stewart platform is covered by creating chaos in the neural network structure, and as the training error decreases, its value changes such that the neural network’s chaos is decreased. Figure 5 shows the chaotic property of the HCDRNN.Figure 5 Traces of the training (a) traces of the training mean square error (MSE) and (b) traces of the coefficienta. (a)(b)The impact of the number of hidden layer neurons on the approximation performance is studied. Table1 reports the results of this study for 7 to 43 neurons, where their performances are compared in terms of training time and MSE.Table 1 Comparison of prediction performance versus the number of hidden layer neurons. No. of hidden layer neuronsTraining MSETraining time (sec)79.7927e−5658.28279.6511e−8978.12439.4508e−51804.05As shown in Table1, the training time increases with an increase in the number of neurons in the hidden layer. Considering the fact that the MSE value is proper when there are 7 neurons in the hidden layer, it would be more appropriate to use 7 neurons in the hidden layer for the prediction of the Stewart platform.For a sinusoidal trajectory, the results of one-step-ahead, two-step-ahead, and three-step-ahead system behavior predictions are investigated. Table2 reports the MSE of the prediction error.Table 2 MSE of prediction error for a sinusoidal trajectory. No. of step aheadOne-step-aheadTwo-step-aheadThree-step-aheadMSE of prediction error8.9321e−81.2463e−69.7927e−5Table2 indicates a reliable prediction by the HCDRNN without any accumulated error. 
As a significant conclusion, it is shown that the use of chaotic context layer, besides the use of different weight matrices that were trained for each step, in the proposed hierarchical structure, overcome the error accumulation in n-step-ahead predictions. ## 3.1.1. Training The general structure of the HCDRNN is designed in such a way that its inputs vector,Xt, includes the previous position of the moving plane, pxt, and the forces exerted on each link, Fit, as in equation (23), and its output vector, yt, includes the position of the moving plane as in equation (24).(23)Xt=F1t,…,F1t−nu,…,F6t,…,F6t−nu,pxt−1,…,pxt−ny,(24)yt=Pxt.The number of elements ofXt determines the number of input layer neurons. Accordingly, thirteen input nodes have been considered for six links. As the network’s output comprises three positions and three directions, separate networks should be considered for each output and, therefore, there would be six MISO networks in our case. For the aforementioned networks, a supervised learning scheme is considered to train the networks with regard to the inputs and outputs. Divided into two sets, 70% of the data were chosen for training and 30% of them for testing. At the beginning of the training of the HCDRNNs, the weight matrices are randomly valued. Tangent sigmoid is selected as the activation function, and the input-output data are normalized. The values for the NN parameters are defined as ε=0.01, Ta=0.07, μmax=4, μ0=0, μo=μI=μD=0.01, ny=2, and nu=3 as well as WI,WD,Wo,ΔI,ΔD,Δo are randomly initialized.The neural network training is done offline, but during the training, the coefficienta is adjusted online in such a way that the behavior of the Stewart platform is covered by creating chaos in the neural network structure, and as the training error decreases, its value changes such that the neural network’s chaos is decreased. Figure 5 shows the chaotic property of the HCDRNN.Figure 5 Traces of the training (a) traces of the training mean square error (MSE) and (b) traces of the coefficienta. (a)(b)The impact of the number of hidden layer neurons on the approximation performance is studied. Table1 reports the results of this study for 7 to 43 neurons, where their performances are compared in terms of training time and MSE.Table 1 Comparison of prediction performance versus the number of hidden layer neurons. No. of hidden layer neuronsTraining MSETraining time (sec)79.7927e−5658.28279.6511e−8978.12439.4508e−51804.05As shown in Table1, the training time increases with an increase in the number of neurons in the hidden layer. Considering the fact that the MSE value is proper when there are 7 neurons in the hidden layer, it would be more appropriate to use 7 neurons in the hidden layer for the prediction of the Stewart platform.For a sinusoidal trajectory, the results of one-step-ahead, two-step-ahead, and three-step-ahead system behavior predictions are investigated. Table2 reports the MSE of the prediction error.Table 2 MSE of prediction error for a sinusoidal trajectory. No. of step aheadOne-step-aheadTwo-step-aheadThree-step-aheadMSE of prediction error8.9321e−81.2463e−69.7927e−5Table2 indicates a reliable prediction by the HCDRNN without any accumulated error. As a significant conclusion, it is shown that the use of chaotic context layer, besides the use of different weight matrices that were trained for each step, in the proposed hierarchical structure, overcome the error accumulation in n-step-ahead predictions. ## 3.2. 
The Results of HCDRNN-NMPC In this paper, the values for the parameters are considered as follows:Hp=3, Hu=2, ρ1=0.8, ρ2=0.2, and η=0.01. The performance of the NMPC is compared with the MPC, which both are evaluated by the integral absolute error (IAE).(25)IAE=1T∑t=1TRt−Yt,where T shows the total number of samples. Some research studies are using other metrics like mean square error (MSE) and/or integral square error (ISE). Each of these metrics has drawbacks that led us to use of IAE instead. Metrics that use error squares magnify the errors greater than one and minify the errors less than one, which is not precise in robotics motion errors. Input and output signals are bounded in the intervals mentioned in equation (26).(26)0≤Fit≤16,−1.7≤Pxt≤1.2. ### 3.2.1. Sample Trajectories The performance of the controller to track the three paths demonstrated in sections (1), (2), and (3).(1) A sample trajectory has been designed, which is intended to be tracked with the undercontrol Stewart platform. Trying to calculate the control signal applied to link 1, assuming that the forces exerted on other links remain fixed.(27)Pt=PxtPytPzt=−1.5+0.2sinωt0.2sinωt1.0+0.2sinωt,whereω=3 and 0≤ωt≤2π. Considering the desired trajectory for the robot’s movement and the actual trajectory on x,y,z axis, tracking error varies within the range of −5.625e−3,0.015e−3 for all three axes which are neglectable.Figure6 shows the three-dimensional path of the top plane of the Stewart platform. As found in Figure 6, the NMPC has extracted the control signal in a way that the Stewart platform’s output tracks the reference signal along three axes well. Moreover, Figure 7 shows the force exerted to link 1.As it is shown in Figure7, the control signal applied to link 1 provides for the limit needed for the forces exerted in each link.(2) The second sample trajectory sets out to track the following two-frequency trajectory.(28)ϕx=ϕy=ϕz=0,Pt=PxtPytPzt=0.2sinω1t+0.4sinω2t0.2sinω1t1.0+0.2sinω2tm,ω1=3,ω2=2,0≤ω1t,ω2t≤2.Considering the desired trajectory for the robot’s movement and the actual trajectory onx,y,z axis, tracking error range on x,y,z axis, is reported in Table 3.Taking Table3 into consideration, it can be concluded that the tracking error varies within the range of −1.01_2.03∗e−8 for all three axes which are neglectable.Figure8 shows the three-dimensional path of the top plane of the Stewart platform, and Figure 9 shows the force exerted to link 1.As found in Figure9, the NMPC has extracted the control signal in a way that the Stewart platform’s output tracks the reference signal along three axes well. As found in Figure 9, the control signal applied to link 1 provides for the limit needed for the forces exerted in each link, assuming that the positions of other links remain unchanged.(3) In the third sample path, which is similar to the path presented in the reference [33], to evaluate the performance of the proposed controller in tracking the paths with rapid changes, the following two-level step path has been examined.(29)ϕx=ϕy=ϕz=0,Pt=PxtPytPzt=3.5ut−ut−53ut−ut−15.3ut−ut−7cm,0≤Pxt,Pyt,Pzt≤,0≤Fit≤6.Figure 6 Tracking the three- dimensional path.Figure 7 Force exerted to link 1 (control signal).Table 3 Tracking error range onx,y,z axis for two-frequency trajectory. 
Table 3: Tracking error range on the x, y, z axes for the two-frequency trajectory.

| Axis | x | y | z |
| --- | --- | --- | --- |
| Tracking error range | [−1.01e−8, 2.03e−8] | [−1.01e−6, 1.88e−6] | [−0.79e−5, 1.05e−5] |

Figure 8: Tracking the three-dimensional path.

Figure 9: Force exerted on link 1 (control signal).

Comparing the desired trajectory of the robot with the actual trajectory along the x, y, and z axes, the tracking error range on each axis is reported in Table 4. The three-dimensional path traveled by the Stewart platform is shown in Figure 10, and the force exerted on link 1 is shown in Figure 11.

Table 4: Tracking error range on the x, y, z axes for the two-level step trajectory.

| Axis | x | y | z |
| --- | --- | --- | --- |
| Tracking error range | [−3.2, 3.5] | [−1.8, 3] | [−5.1, 5.5] |

Figure 10: Tracking the three-dimensional path.

Figure 11: Force exerted on link 1 (control signal).

As shown in Figures 10 and 11, because of the intensive changes in the desired path, the controller made a control effort to extract the control signal needed to reach the desired path; after the desired path was reached, the control signal no longer changed. The transient phase of the response is well observed for this path and, as reported in Table 4, the tracking error for the x, y, and z axes stays within [−5.1, 5.5], which is satisfactory considering the severe changes of the reference; the control signal applied to link 1 also satisfies the applied-force constraint.
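Since the comparisons that follow are reported in terms of the IAE of equation (25), a small sketch of the metric is included here. It is an illustrative NumPy implementation with synthetic data; the absolute value inside the sum follows the definition of the integral absolute error.

```python
import numpy as np

def iae(reference, output):
    """Integral absolute error of equation (25): mean absolute tracking error over T samples."""
    reference = np.asarray(reference, dtype=float)
    output = np.asarray(output, dtype=float)
    return np.mean(np.abs(reference - output))

def mse(reference, output):
    """Mean square error, shown only for comparison with the IAE."""
    e = np.asarray(reference, dtype=float) - np.asarray(output, dtype=float)
    return np.mean(e ** 2)

# Illustrative example: a small tracking error (well below 1) almost vanishes under the MSE.
rng = np.random.default_rng(1)
r = np.sin(np.linspace(0, 2 * np.pi, 500))
y_out = r + rng.normal(scale=5e-3, size=r.size)   # hypothetical controller output

print(f"IAE = {iae(r, y_out):.2e}")   # roughly 5e-3, the same order as the actual error
print(f"MSE = {mse(r, y_out):.2e}")   # roughly 2.5e-5, visually understating the error
```

This is the distortion referred to above: squaring shrinks sub-unit errors, whereas the IAE keeps them on their original scale.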
## 3.3. External Disturbance Rejection

The effect of an external disturbance is assessed here in the form of a pulse signal with 0.4 N amplitude, applied from 1 to 1.4 seconds to the force of one of the links. The proposed control performance is shown in Figure 12.

Figure 12: Disturbance rejection.

Figure 13 shows the tracking error and the control signal in the presence of the external disturbance.

Figure 13: (a) Tracking error and (b) extracted control signal in the presence of external disturbance.

When the disturbance is applied, the output changes and a control effort is made to restore the tracking. The advantages of the proposed method are the low number of oscillations during disturbance rejection, an overshoot and undershoot smaller than the initial disturbance magnitude, which results in a more uniform output, and a noticeably smaller and smoother control signal.
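A test of this kind can be reproduced by injecting a rectangular 0.4 N pulse between 1 and 1.4 s on one actuator force. The sketch below is only a schematic closed loop with placeholder controller and plant functions (hypothetical names, not the HCDRNN-NMPC or the Stewart platform model); it shows where the disturbance enters and how the tracking error would be logged.

```python
import numpy as np

ts = 0.01
t = np.arange(0.0, 3.0, ts)

def disturbance(time, amplitude=0.4, start=1.0, stop=1.4):
    """Rectangular pulse disturbance (in newtons) added to one link's actuator force."""
    return amplitude if start <= time < stop else 0.0

# Placeholder closed loop: controller() and plant_step() are hypothetical stand-ins,
# not the HCDRNN-NMPC or the Stewart platform dynamics.
def controller(error):
    return 2.0 * error                          # stand-in control law

def plant_step(x, force):
    return x + ts * (-1.5 * x + 0.5 * force)    # stand-in first-order link model

reference = 0.2 * np.sin(3.0 * t)
x = 0.0
tracking_error = np.zeros_like(t)
for k, tk in enumerate(t):
    e = reference[k] - x
    tracking_error[k] = e
    f = controller(e) + disturbance(tk)         # the disturbance enters on the actuator force
    x = plant_step(x, f)

during_pulse = (t >= 1.0) & (t < 1.4)
print(f"max |tracking error| during the pulse: {np.max(np.abs(tracking_error[during_pulse])):.3f}")
```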
## 3.4. Simulation Results Analysis

The nonlinear model predictive controller requires a high computational effort to extract the control signal, but because it uses the nonlinear model of the system, it can achieve the desired control performance with minimum error. Because the Stewart platform has unknown dynamics, the NN was used to model it. Chaos theory was used in the NN to reduce and speed up the control calculations; it accelerates the learning dynamics and thus mitigates the slowness of predictive control. Furthermore, entrapment in local minima is avoided by employing chaos in the neural network and by increasing the order of chaos through additional chaotic functions in the hidden layer, resulting in hyper-chaos in the proposed neural network. Table 5 compares the prediction performance of the DRNN and the proposed HCDRNN.

Table 5: Prediction performance of the DRNN and the proposed HCDRNN.

| Prediction error | DRNN (training) | DRNN (test) | HCDRNN (training) | HCDRNN (test) |
| --- | --- | --- | --- | --- |
| Step 1 | 1.7545e−6 | 4.2148e−5 | 3.8214e−8 | 1.1284e−7 |
| Step 2 | 7.2561e−4 | 6.7316e−3 | 9.1852e−8 | 5.6371e−7 |
| Step 3 | 9.1027e−1 | 1.4111e−1 | 6.2379e−7 | 9.0141e−7 |

Table 6 compares the performance of the proposed control with proportional-integral-derivative (PID) control [34], sliding mode control [18], fuzzy NMPC [19], and DRNN-NMPC. The comparison results are recorded in terms of the IAE.

Table 6: IAE comparison.

| Control approach | PID | Sliding mode control | Fuzzy NMPC | DRNN-NMPC | Proposed method |
| --- | --- | --- | --- | --- | --- |
| IAE | 3.2e−4 | 2.8e−5 | 4.6e−6 | 1.9e−3 | 2.9e−8 |

The proposed method yields a smaller IAE than the other methods. PID controllers need large proportional, integral, and derivative gains, which makes the control signal highly sensitive to external disturbances, so the control signal rises to a large value even for a small disturbance. However, the control inputs are bounded for practical reasons; thus, the control signal computed by the PID controller would not be applicable in practice. As demonstrated in Figure 13(a), when the external disturbance is applied to the robot, the tracking error remains small, in the range of −6e−4 to −3e−3, which demonstrates the high performance of the proposed method.

## 4. Conclusion

This paper proposed a novel hierarchical HCDRNN-NMPC for the modeling and control of complex nonlinear dynamical systems. Numerical simulations on the control of a Stewart platform were carried out to demonstrate the performance of the proposed strategy in tracking and in external disturbance rejection. One of the most essential aspects of the suggested method is the ability of its hierarchical HCDRNN to accurately estimate the system's output over a forward-moving window. The hierarchical structure enables the proposed mechanism to precisely adjust each HCDRNN for predicting the outputs of the system for exactly one specified sample ahead. This enhances the ability of the predictive model to adapt to variations of complex dynamical systems. This paper also provides the adaptive weight update rules for the proposed v-step delayed HCDRNN. Moreover, an enhanced gradient optimization method is used to determine the sequence of the control signal. The results of the provided simulations and comparisons indicate the superior performance of the proposed control system in tracking and in removing the effect of external disturbances. In future research, the effect of merging HCDRNNs instead of using the hierarchical structure will be investigated, which would lead to a deep HCDRNN structure. The robustness of the suggested controller against various forms of disturbances will also be tested.

---

*Source: 1006197-2022-10-18.xml*
# HCDRNN-NMPC: A New Approach to Design Nonlinear Model Predictive Control (NMPC) Based on the Hyper Chaotic Diagonal Recurrent Neural Network (HCDRNN)

**Authors:** Samira Johari; Mahdi Yaghoobi; Hamid R. Kobravi

**Journal:** Complexity (2022)

**Category:** Mathematical Sciences

**Publisher:** Hindawi

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2022/1006197
--- ## Abstract In industrial applications, Stewart platform control is especially important. Because of the Stewart platform’s inherent delays and high nonlinear behavior, a novel nonlinear model predictive controller (NMPC) and new chaotic neural network model (CNNM) are proposed. Here, a novel NMPC based on hyper chaotic diagonal recurrent neural networks (HCDRNN-NMPC) is proposed, in which, the HCDRNN estimates the future system’s outputs. To improve the convergence of the parameters of the HCDRNN to better the system’s modeling, the extent of chaos is adjusted using a logistic map in the hidden layer. The proposed scheme uses an improved gradient method to solve the optimization problem in NMPC. The proposed control is used to control six degrees of freedom Stewart parallel robot with hard-nonlinearity, input constraints, and in the presence of uncertainties including external disturbance. High prediction performance, parameters convergence, and local minima avoidance of the neural network are guaranteed. Stability and high tracking performance are the most significant advantages of the proposed scheme. --- ## Body ## 1. Introduction Stewart platform is a six-degree-of-freedom parallel robot that was first introduced by Stewart in 1965 and has potential uses in industrial contexts due to its good dynamics performance, high precision, and high rigidity. The control of the Stewart platform is quite challenging due to the nonlinear characteristics of dynamic parameters and time-varying delays. Stewart platform has more physical constraints than the serial manipulators, therefore solving their kinematics and dynamics problem is more difficult, and developing an accurate model of the Stewart platform has always been a concern for researchers in this field [1].There has been a lot of research done on using the neural networks to the model nonlinear systems [2–4]. In the study of Chen et al. [5], to control nonlinear teleoperation manipulators, an RBF-neural network-based adaptive robust control is developed. As a result, the RBF neural network is used to estimate the nonlinearities and model uncertainty in system dynamics with external disturbances. To handle parameter variations, the adaptive law is developed by adapting the parameters of the RBF neural network online while the nonlinear robust term is developed to deal with estimation errors. Lu employed a NN approximator to estimate uncertain parametric and unknown functions in a robotic system in the study by Lu and Liu [6], which was subsequently used to construct an adaptive NN controller for uncertain n-joint robotic systems with time-varying state constraints. As outlined in [7], an adaptive global sliding mode controller with two hidden layers is developed. The system nonlinearities were estimated using a new RNN with two hidden layers. An adaptive sliding mode control scheme based on RBFNN-based estimation of environmental parameters on the slave side is proposed in the study by Chen et al. [8] for a multilateral telerobotic system with master-slave manipulators. The environment force is modeled generally.Changes in the structure of the neural network during the training, as well as the use of chaos theory in the neural network, have been considered to cover the behavioral diversity of nonlinear systems. In the study by Chen and Han and Qiao [9, 10], the number of neurons in the hidden layer is changed online. In the study by Han et al. [11], to optimize the NN structure, a penalty-reward method is used. 
Aihara presented a chaotic NN model in the study by Aihara et al. [12]. Hopfield NN is introduced in the study of Li et al. and Farjami et al. [13, 14] as a chaotic RNN with a chaotic dynamic established temporarily for searching. Reference [15] introduces a context layer that uses chaotic mappings to produce chaotic behavior in NN throughout the training phase in order to prevent local minima. Reference [16] discusses the designing of a chaotic NN by using chaotic neurons which show chaotic behavior in some of their activity areas. In this aspect, the behavior of the neurons and network will change according to the changes in the bifurcation parameters of the neurons which have mostly been inspired by biological studies. A logistic map is utilized as an activation function in the study by Taherkhani et al. [17], which iteratively generates chaotic behavior in the nodes.In the study of Dongsu and Hongbin [18], an adaptive sliding controller has been used to identify fixed unknown parameters, followed by external disturbances compensation. In the study of Ghorbani and Sani [19], a Fuzzy NMPC is introduced to handle uncertainties and external disturbances. In the study of Jin et al. [20], different parallel robots’ NN-based controlling approaches have been reviewed. The applicability of RNN, feedforward NNs, or both for controlling parallel robots has been discussed in detail, comparing them in terms of controlling efficiency and complexity of calculations.In this paper, due to the inherent delays of the Stewart platform and the design of the controller based on future changes, special attention is paid to the model predictive control. To predict the system behavior over a predefined prediction horizon, MPC approaches require a precise linear model of the under-control system. Stewart platform is inherently nonlinear and linear models are mostly inaccurate in dynamical nonlinear systems modeling. These all bring up the motivation for using nonlinear models in MPC, leading to NMPC.The most significant features of NMPCs include the following: (I) nonlinear model utilization, (II) state and input constraints consideration, (III) online minimization of appointed performance criteria, (IV) necessity of solving an online optimal control problem, (V) requirement of the system state measuring or estimation, for providing the prediction. Among universal nonlinear models, which are used for predicting the behavior of the system in future, the neural networks are significantly attractive [21, 22].The effectiveness of the NNs in nonlinear system identification has increased the popularity of NN-based predictive controllers. Nikdel [23], has presented a NN-based MPC to control a shape memory alloy-based manipulator. For nonlinear system modeling and predictive control, a multiinput multioutput radial basis function neural network (RBFNN) was employed in the study of Peng et al. [24]. The recurrent neural networks (RNN) perform well in terms of modeling dynamical systems even in noisy situations because they naturally incorporate dynamic aspects in the form of storing dynamic response of the system through tapping delay, the RNN is utilized in NMPC in the study of Pan and Wang [25], and the results show that the approach converges quickly. 
In the study of Seyab and Cao [26], a continuous-time RNN is utilized for the NMPC, demonstrating the method’s appropriate performance under various operational settings.In this paper, we will continue this research using the hierarchical structure of the chaotic RNNs, application to NMPC of a complex parallel robot. This paper’s contributions and significant innovations are as follows: (I) a new NMPC based on hierarchical HCDRNNs is suggested to model and regulate typical nonlinear systems with complex dynamics. (II) To overcome the modeling issues of complex nonlinear systems with hard nonlinearities, in the proposed controller, the future output of the under-control system is approximated using a proposed novel hierarchical HCDRNN. Note that the equations of motion of such systems are very difficult to solve by mathematical methods and bring forth flaws such as inaccuracy and computational expenses. (III) The weight updating laws are modified based on the proposed HCDRNN scheme, considering the rules introduced in the study of Wang et al. [15]. (IV) On the one hand, propounding the novel hierarchical structure, and on the other hand, the use of chaos in weights updating rules, significantly reduced the cumulative error. (V) The extent of chaos is regulated based on the modeling error in the proposed HCDRNN, in order to increase the accuracy of modeling and prediction. (VI) The control and prediction horizons are specified based on closed-loop control features. (VII) Weights convergence of the proposed HCDRNN is demonstrated and system stability is assured in terms of the Lyapunov second law, taking into account input/output limitations. Furthermore, the proposed controller’s performance in the presence of external disturbance is evaluated.The remainder of this work is structured as follows: Section2 describes the suggested control strategy in detail, Section 3 discusses the simulation results to validate the efficiency of the proposed method, and Section 4 discusses the final conclusions. ## 2. The Proposed Control Strategy The MPC is made up of three major components: the predictive model, the cost function, and the optimization method. The predictive model forecasts the system’s future behavior. Based on the optimization of the cost function and the projected behavior of the system, MPC applies an appropriate control input to the process. This paper uses a novel HCDRNN as the predictive model. Moreover, to optimize the cost function, it uses a type of improved gradient method which utilizes the data predicted by the proposed HCDRNN.Figure1 shows a block diagram of the designed control system in which Rt represents the desired trajectory for the coordinates origin of the moving plane. In which, yt and ut are the outputs and inputs of the Stewart platform, y^t+1 shows the output predicted by the NN model. Finally, the optimization block extracts the control signal, ut, by minimizing the cost function using the improved gradient descent method.Figure 1 Schematic of the proposed HCDRNN-NMPC. ### 2.1. Stewart Platform Figure2 shows the Stewart platform. 
All parameters and variables are the same as what Tsai used in [27].Figure 2 Schematic of Stewart platform [27].The dynamic model of Stewart platform is introduced in equation (1), which is obtained based on the virtual-work principle [27].(1)−Fz−Jp−TFp+JxTFx+JyTFy=τ¯,where Jp,Jx,andJy are the manipulator Jacobian matrices, Fp is the resultant of the applied and inertia wrenches exerted at the center of mass of the moving platform, τ¯,Fx,Fy,andFz are the vectors of input torque and forces, which are applied to the center of mass of the moving plate from the prismatic joints of the robot. For more details about the robot and its mathematical model, the interested reader can see the reference [27]. ### 2.2. The Proposed Hyper Chaotic Diagonal Recurrent Neural Network In general, the structure of the NNs may be categorized into feedforward or recurrent types. Possessing the features of having attractor dynamics and data storage capability, the RNNs are more appropriate for modeling dynamical systems than the feedforward ones [28]. Reference [15] introduces the essential concepts of the chaotic diagonal recurrent neural network (CDRNN). This study introduces an HCDRNN, the structure of which is depicted in Figure 3.Figure 3 Structure of the HCDRNN [29].The proposed HCDRNN is made up of four layers: input, context, hidden, and output. The hidden layer outputs withv-step delays are routed into the context layer through a chaotic logistic map. The following equations describe the dynamics of the HCDRNN.(2)y^t=Wotγt,γt=FSt,St=WItXt+W1Dt−a.Z1tΓ1t−1⋮WnDt−a.ZntΓnt−1,Zt+1ζZt1−Zt,where Xt∈Rm×1 and y^t∈R1×1 show the inputs and output of the HCDRNN, γt=γ1t,γit,…,γntT∈Rn×1 represents the hidden layer’s output. A set of Γit−1=γit−1…γit−vT∈Rv×1 is defined as vectors of previous steps’ values of γifori=1,2,…,n. F. shows a symmetric sigmoid activation function. Zt∈Rn×1 represents the chaotic logistic map, with Z0 as a positive random number with normal distribution. The input, context, and output weight matrices are represented as WIt∈Rn×m, WiDt∈R1×v∀i=1,2,…,n, and Wot∈R1×n, respectively. ζ∈R1×1 is the chaos gain coefficient. The degree of chaos within the HCDRNN can be adjusted by adjusting the parameter a, which ranges from 0 for a simple DRNN to close to 4 for an HCDRNN. This fact allows you to regulate the level of chaos within the NN by altering the parameter a in such a manner that the reduction in training error leads to a progressive decrease in the extent of chaos until it reaches stability. The value of the parameter a’s value could be altered as follows. As the change is exponential, the NN will rapidly converge.(3)βt=−e¯te¯t+1,a=μ0+μmax−μ0exp−βtTa,e¯t>ε,0,e¯t≤ε.where e¯t is the samples’ absolute training prediction error. The prediction error, ept, represents the difference between the system’s actual output, yt, and the output of HCDRNN, y^t.(4)ept=yt−y^t.μmax and μo represent the maximum and minimum threshold of the parameter a, respectively. Ta is the annealing parameter, and ε is the prediction error threshold. To minimize the error function, Ept, the weight update laws for the output, hidden, and context layers are based on the robust adaptive dead zone learning algorithm reported in [30].(5)Ept=12ep2t.Accordingly, weights updating laws are modified here for the proposed structure of the HCDRNN as follows [29]: #### 2.2.1. Output Layer Ifept<Δot then Wot+1 and Δot+1 do not change, otherwise:(6)Wot+1=Wot+2eptγtT1−γt2,(7)Δot+1=Δot+2e¯t1−γt2. #### 2.2.2. 
Hidden Layer Ifept<ΔIt then WIt+1 and ΔIt+1 do not change, otherwise:(8)WIt+1=WIt+2teptWotF′tXt1−WotF′tXt2,(9)ΔIt+1=ΔIt+2F'minte¯t1−WotF′tXt2,where F′t=1−γ2t is the first derivative of the activation function in the hidden layer and Fj′t=F′sjt,∀i=1,2,…,n, and F'mint=minF'jt≠0,∀j,t. #### 2.2.3. Context Layer Ifept<ΔDt then WDt+1 and ΔDt+1 do not change, otherwise ∀i=1,2,…,n:(10)WiDt+1=WiDt+2Fmin'teptWiot1−γi2tΓiTt−11−Wiot1−γi2tΓiTt−12,(11)ΔDt+1=ΔDt+2F'minte¯t1−Wiot1−γi2tΓiTt−12.In these equations, “Δot,ΔIt,ΔDt are the robust adaptive dead zones for output, hidden and context layers, respectively.”Remark 1. TheoremsA.1, A.2, and A.3 in the appendix prove the convergence of neural network weights.Remark 2. As illustrated below [11], it is expected that a multiinput single-output nonlinear autoregressive exogenous (NARX) model may represent the undercontrol nonlinear system utilizing the nu delayed system’s inputs and ny delayed system’s outputs.(12)yt=gyt−1,…,yt−ny,ut−1,…,ut−nu. In this equation,g. is an unknown function. Based on Remark1 and Remark 2, an array of Hp HCDRNNs is used to forecast the system’s behavior in a Hp-step-ahead prediction horizon after the training and weights updating operations. The structure of this HCDRNN array is depicted in Figure 4.Figure 4 Prediction ofHp-step-ahead outputs by the hierarchical HCDRNN [29].Remark 3. Each HCDRNN in the array, as shown in Figure4, is trained independently, and its weight matrices differ from those of the other HCDRNNs. As a result, the formulation in Sections 2.2.1 to 2.2.3 should be changed based on the input-output permutation for each element of the hierarchy. Remark 1 is, however, applied to all of the HCDRNNs in the array. ### 2.3. The Proposed HCDRNN-NMPC A finite-horizon NMPC cost function would be the same as indicated in reference [11].(13)V^t=ρ1Rt−Y^tTRt−Y^t+ρ2ΔUtTΔUt,where Rt=rt+1,rt+2,…,rt+HpT is the reference signal, Yt=yt+1,yt+2,…,yt+HpT is the system output, and Y^t=y^t+1,y^t+2,…,y^t+HpT is the predicted output through the prediction horizon. ΔUt=Δut,Δut+1,…,Δut+Hu−1T is the control signal variations during the upcoming control horizon. ρ1 and ρ2 are weighting parameters, determining the significance of the tracking error versus the control signal variation in the cost function, V^. Hp is prediction horizon and Hu is control horizon Hu<Hp. However, the equation faces the following constraints [11]:(14)Δut≤Δumax,umin≤ut≤umax,y^min≤y^t≤y^max,rt+Hp+i−y^t+Hp+i=0,∀i≥1.The control signal,Ut+1=ut+1,ut+2,…,ut+HuT, based on the improved gradient method is given below [11, 31]:(15)Ut+1=Ut+ΔUt=Ut−η∂V^t∂Ut,(16)ΔUt=ηρ11+ηρ2∂Y^t∂UtTRt−Y^t,where η>0 represents the learning rate of the control input sequence and ∂Y^t/∂Ut represents the Jacobian matrix, J, which is computed as a matrix with the dimension of Hp×Hu.(17)∂Y^t∂Ut=∂y^t+1∂ut00⋯0∂y^t+2∂ut∂y^t+2∂ut+10⋯0⋮⋮⋮⋱⋮∂y^t+Hu∂ut∂y^t+Hu∂ut+1∂y^t+Hu∂ut+2⋯∂y^t+Hp∂ut+Hu−1⋮⋮⋮⋮⋮∂y^t+Hp∂ut∂y^t+Hp∂ut+1∂y^t+Hp∂ut+2⋯∂y^t+Hp∂ut+Hu−1Hp×Hu.The Jacobian matrix,J=∂Y^t/∂Ut, can be computed based on the chain rule.(18)∂y^t+i∂ut+j=Wot+i∂γt+i∂ut+j,∀i=1,2,…,Hp,∀j=0,1,…,Hu−1,∂y^t+i∂ut+j=F′t+iWIt+i∂Xt+i∂ut+j+W1Dt+i−a.Z1t+i∂Γ1t+i−1∂ut+j⋮WnDt+i−a.Znt+i∂Γnt+i−1∂ut+j,in which, F′t+i=∂FSt+i/∂St+j=1−γ2t and ∂γt+i/∂ut+j can be computed recurrently knowing that if j=i then ∂γt+i/∂ut+j=0. It means that the computations should be completed from ∂γt+i/∂ut+i−1 to ∂γt+i/∂ut+j. 
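As a minimal sketch of how this cost is used, the following Python/NumPy fragment evaluates the cost of equation (13) for given predictions and control increments and applies one improved-gradient step of the form given later in equations (15)-(17). The numerical values are hypothetical, and in practice the predictions and the Jacobian would come from the hierarchical HCDRNN.

```python
import numpy as np

def nmpc_cost(r_future, y_pred, delta_u, rho1=0.8, rho2=0.2):
    """Finite-horizon cost of equation (13):
    rho1 * ||R - Y_hat||^2 over H_p steps + rho2 * ||Delta U||^2 over H_u steps."""
    track = np.asarray(r_future, dtype=float) - np.asarray(y_pred, dtype=float)
    delta_u = np.asarray(delta_u, dtype=float)
    return rho1 * track @ track + rho2 * delta_u @ delta_u

def gradient_step(U, r_future, y_pred, J, eta=0.01, rho1=0.8, rho2=0.2):
    """One improved-gradient update of the control sequence, cf. equations (15)-(16):
    Delta U = (eta * rho1 / (1 + eta * rho2)) * J^T (R - Y_hat)."""
    delta_u = (eta * rho1 / (1.0 + eta * rho2)) * (J.T @ (np.asarray(r_future) - np.asarray(y_pred)))
    return U + delta_u, delta_u

# Hypothetical numbers with H_p = 3 and H_u = 2 (the values used later in Section 3.2);
# y_pred and J stand in for the HCDRNN predictions and the H_p x H_u Jacobian of (17).
r_future = np.array([0.10, 0.12, 0.14])
y_pred = np.array([0.09, 0.11, 0.15])
delta_u0 = np.zeros(2)
J = np.array([[0.5, 0.0],
              [0.4, 0.5],
              [0.3, 0.4]])
U = np.zeros(2)

print(f"cost before update: {nmpc_cost(r_future, y_pred, delta_u0):.3e}")
U_new, dU = gradient_step(U, r_future, y_pred, J)
print("Delta U =", np.round(dU, 5), " U_new =", np.round(U_new, 5))
```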
Moreover, considering the structure of Xt+i, the ∂Xt+i/∂ut+j can be calculated recurrently based on equation (26).Algorithm 1 summarizes the proposed HCDRNN-NMPC scheme [29].Algorithm 1: Details of HCDRNN-NMPC scheme. Step 1. DetermineHp and Hu, such that Hp>Hu.Step 2. GetRt, Ut, and Xt in each control step, such that:(i) Rt=rt+1,…,rt+HpT is the desired values for Hp next steps.(ii) Ut=ut,…,ut+Hu−1T is the last optimal sequence of the predicted control signal.(iii) Xt is the delayed input-output vector of nonlinear system.Step 3. Predict the outputs of the system forHp next steps by the proposed HCDRNN.Step 4. CalculateJ by equation (17).Step 5. ComputeΔUt and Ut+1 by equations (15) and (16), respectively.Step 6. ApplyUt+1 as the first element of vector Ut+1 to the nonlinear system, and go back to Step 2 for the next sample time.In steps 4 and 5 of the proposed algorithm, as the number of system inputs increases, the dimensions of the Jacobian matrix increase, and the estimation error increases due to the discretizations performed in calculating the derivatives. Choosing the appropriate sampling, time is used as a solution to reduce the estimation error in this paper. #### 2.3.1. Stability Analysis for HCDRNN-NMPC The stability of NMPC-HCDRNN is demonstrated by considering the convergence of the model, which is proved in Remark1 and Appendix, and the fact that the neural network training is done offline.Theorem 1. Consider the constrained finite-horizon optimal control presented by (17) and (18). Lyapunov’s second law ensures the asymptotic stability of the proposed controller due to the limited input and output amplitudes and the semidefinite negative V^˙t if the neural network weights’ convergence is proven and the predictive control law is as given in equations (19) and (20).Proof. The constrained finite-horizon optimal control given in equation (13) can be rewritten as in (19) by rewriting the cost function along the control horizon:(19)V^t=ρ1∑i=1Hprt+i−y^t+i2+ρ2∑i=1HuΔu2t+i−1. Ut=ut,ut+1,…,ut+Hu−1T is the optimal control sequence obtained at time t using the optimization algorithm. If Ust+1 is the suboptimal control sequence extracted from Ut and considered as Ust+1=ut+1,…,ut+Hu−1T, the suboptimal cost function V^st+1 is defined as follows:(20)V^t+1=ρ1∑i=1Hp+1rt+i−y^t+i2+ρ2∑i=1HuΔu2t+i−1. Using the difference ofV^t and V^st+1, and assuming that et+i=rt+i−y^t+i, equation (21) is written as follows:(21)V^st+1−V^t=ρ1∑i=1Hp+1e2t+i+ρ2∑i=1HuΔu2t+i−1−ρ1∑i=1Hpe2t+i+ρ2∑i=1HuΔu2t+i−1,=ρ1e2t+Hp+1−e2t+1−ρ2Δu2t≤0. Therefore, ifUt+1 is the optimal solution of the optimization problem time t+1 using the control law described in equation (16), it outperforms Ust+1, which is suboptimal and its cost function is smaller according to equation (22).(22)V^t+1−Vt≤V^st+1−V^t≤0. Hence, the proof is complete. ## 2.1. Stewart Platform Figure2 shows the Stewart platform. All parameters and variables are the same as what Tsai used in [27].Figure 2 Schematic of Stewart platform [27].The dynamic model of Stewart platform is introduced in equation (1), which is obtained based on the virtual-work principle [27].(1)−Fz−Jp−TFp+JxTFx+JyTFy=τ¯,where Jp,Jx,andJy are the manipulator Jacobian matrices, Fp is the resultant of the applied and inertia wrenches exerted at the center of mass of the moving platform, τ¯,Fx,Fy,andFz are the vectors of input torque and forces, which are applied to the center of mass of the moving plate from the prismatic joints of the robot. 
For more details about the robot and its mathematical model, the interested reader can see the reference [27]. ## 2.2. The Proposed Hyper Chaotic Diagonal Recurrent Neural Network In general, the structure of the NNs may be categorized into feedforward or recurrent types. Possessing the features of having attractor dynamics and data storage capability, the RNNs are more appropriate for modeling dynamical systems than the feedforward ones [28]. Reference [15] introduces the essential concepts of the chaotic diagonal recurrent neural network (CDRNN). This study introduces an HCDRNN, the structure of which is depicted in Figure 3.Figure 3 Structure of the HCDRNN [29].The proposed HCDRNN is made up of four layers: input, context, hidden, and output. The hidden layer outputs withv-step delays are routed into the context layer through a chaotic logistic map. The following equations describe the dynamics of the HCDRNN.(2)y^t=Wotγt,γt=FSt,St=WItXt+W1Dt−a.Z1tΓ1t−1⋮WnDt−a.ZntΓnt−1,Zt+1ζZt1−Zt,where Xt∈Rm×1 and y^t∈R1×1 show the inputs and output of the HCDRNN, γt=γ1t,γit,…,γntT∈Rn×1 represents the hidden layer’s output. A set of Γit−1=γit−1…γit−vT∈Rv×1 is defined as vectors of previous steps’ values of γifori=1,2,…,n. F. shows a symmetric sigmoid activation function. Zt∈Rn×1 represents the chaotic logistic map, with Z0 as a positive random number with normal distribution. The input, context, and output weight matrices are represented as WIt∈Rn×m, WiDt∈R1×v∀i=1,2,…,n, and Wot∈R1×n, respectively. ζ∈R1×1 is the chaos gain coefficient. The degree of chaos within the HCDRNN can be adjusted by adjusting the parameter a, which ranges from 0 for a simple DRNN to close to 4 for an HCDRNN. This fact allows you to regulate the level of chaos within the NN by altering the parameter a in such a manner that the reduction in training error leads to a progressive decrease in the extent of chaos until it reaches stability. The value of the parameter a’s value could be altered as follows. As the change is exponential, the NN will rapidly converge.(3)βt=−e¯te¯t+1,a=μ0+μmax−μ0exp−βtTa,e¯t>ε,0,e¯t≤ε.where e¯t is the samples’ absolute training prediction error. The prediction error, ept, represents the difference between the system’s actual output, yt, and the output of HCDRNN, y^t.(4)ept=yt−y^t.μmax and μo represent the maximum and minimum threshold of the parameter a, respectively. Ta is the annealing parameter, and ε is the prediction error threshold. To minimize the error function, Ept, the weight update laws for the output, hidden, and context layers are based on the robust adaptive dead zone learning algorithm reported in [30].(5)Ept=12ep2t.Accordingly, weights updating laws are modified here for the proposed structure of the HCDRNN as follows [29]: ### 2.2.1. Output Layer Ifept<Δot then Wot+1 and Δot+1 do not change, otherwise:(6)Wot+1=Wot+2eptγtT1−γt2,(7)Δot+1=Δot+2e¯t1−γt2. ### 2.2.2. Hidden Layer Ifept<ΔIt then WIt+1 and ΔIt+1 do not change, otherwise:(8)WIt+1=WIt+2teptWotF′tXt1−WotF′tXt2,(9)ΔIt+1=ΔIt+2F'minte¯t1−WotF′tXt2,where F′t=1−γ2t is the first derivative of the activation function in the hidden layer and Fj′t=F′sjt,∀i=1,2,…,n, and F'mint=minF'jt≠0,∀j,t. ### 2.2.3. Context Layer Ifept<ΔDt then WDt+1 and ΔDt+1 do not change, otherwise ∀i=1,2,…,n:(10)WiDt+1=WiDt+2Fmin'teptWiot1−γi2tΓiTt−11−Wiot1−γi2tΓiTt−12,(11)ΔDt+1=ΔDt+2F'minte¯t1−Wiot1−γi2tΓiTt−12.In these equations, “Δot,ΔIt,ΔDt are the robust adaptive dead zones for output, hidden and context layers, respectively.”Remark 1. 
TheoremsA.1, A.2, and A.3 in the appendix prove the convergence of neural network weights.Remark 2. As illustrated below [11], it is expected that a multiinput single-output nonlinear autoregressive exogenous (NARX) model may represent the undercontrol nonlinear system utilizing the nu delayed system’s inputs and ny delayed system’s outputs.(12)yt=gyt−1,…,yt−ny,ut−1,…,ut−nu. In this equation,g. is an unknown function. Based on Remark1 and Remark 2, an array of Hp HCDRNNs is used to forecast the system’s behavior in a Hp-step-ahead prediction horizon after the training and weights updating operations. The structure of this HCDRNN array is depicted in Figure 4.Figure 4 Prediction ofHp-step-ahead outputs by the hierarchical HCDRNN [29].Remark 3. Each HCDRNN in the array, as shown in Figure4, is trained independently, and its weight matrices differ from those of the other HCDRNNs. As a result, the formulation in Sections 2.2.1 to 2.2.3 should be changed based on the input-output permutation for each element of the hierarchy. Remark 1 is, however, applied to all of the HCDRNNs in the array. ## 2.2.1. Output Layer Ifept<Δot then Wot+1 and Δot+1 do not change, otherwise:(6)Wot+1=Wot+2eptγtT1−γt2,(7)Δot+1=Δot+2e¯t1−γt2. ## 2.2.2. Hidden Layer Ifept<ΔIt then WIt+1 and ΔIt+1 do not change, otherwise:(8)WIt+1=WIt+2teptWotF′tXt1−WotF′tXt2,(9)ΔIt+1=ΔIt+2F'minte¯t1−WotF′tXt2,where F′t=1−γ2t is the first derivative of the activation function in the hidden layer and Fj′t=F′sjt,∀i=1,2,…,n, and F'mint=minF'jt≠0,∀j,t. ## 2.2.3. Context Layer Ifept<ΔDt then WDt+1 and ΔDt+1 do not change, otherwise ∀i=1,2,…,n:(10)WiDt+1=WiDt+2Fmin'teptWiot1−γi2tΓiTt−11−Wiot1−γi2tΓiTt−12,(11)ΔDt+1=ΔDt+2F'minte¯t1−Wiot1−γi2tΓiTt−12.In these equations, “Δot,ΔIt,ΔDt are the robust adaptive dead zones for output, hidden and context layers, respectively.”Remark 1. TheoremsA.1, A.2, and A.3 in the appendix prove the convergence of neural network weights.Remark 2. As illustrated below [11], it is expected that a multiinput single-output nonlinear autoregressive exogenous (NARX) model may represent the undercontrol nonlinear system utilizing the nu delayed system’s inputs and ny delayed system’s outputs.(12)yt=gyt−1,…,yt−ny,ut−1,…,ut−nu. In this equation,g. is an unknown function. Based on Remark1 and Remark 2, an array of Hp HCDRNNs is used to forecast the system’s behavior in a Hp-step-ahead prediction horizon after the training and weights updating operations. The structure of this HCDRNN array is depicted in Figure 4.Figure 4 Prediction ofHp-step-ahead outputs by the hierarchical HCDRNN [29].Remark 3. Each HCDRNN in the array, as shown in Figure4, is trained independently, and its weight matrices differ from those of the other HCDRNNs. As a result, the formulation in Sections 2.2.1 to 2.2.3 should be changed based on the input-output permutation for each element of the hierarchy. Remark 1 is, however, applied to all of the HCDRNNs in the array. ## 2.3. The Proposed HCDRNN-NMPC A finite-horizon NMPC cost function would be the same as indicated in reference [11].(13)V^t=ρ1Rt−Y^tTRt−Y^t+ρ2ΔUtTΔUt,where Rt=rt+1,rt+2,…,rt+HpT is the reference signal, Yt=yt+1,yt+2,…,yt+HpT is the system output, and Y^t=y^t+1,y^t+2,…,y^t+HpT is the predicted output through the prediction horizon. ΔUt=Δut,Δut+1,…,Δut+Hu−1T is the control signal variations during the upcoming control horizon. ρ1 and ρ2 are weighting parameters, determining the significance of the tracking error versus the control signal variation in the cost function, V^. 
Hp is prediction horizon and Hu is control horizon Hu<Hp. However, the equation faces the following constraints [11]:(14)Δut≤Δumax,umin≤ut≤umax,y^min≤y^t≤y^max,rt+Hp+i−y^t+Hp+i=0,∀i≥1.The control signal,Ut+1=ut+1,ut+2,…,ut+HuT, based on the improved gradient method is given below [11, 31]:(15)Ut+1=Ut+ΔUt=Ut−η∂V^t∂Ut,(16)ΔUt=ηρ11+ηρ2∂Y^t∂UtTRt−Y^t,where η>0 represents the learning rate of the control input sequence and ∂Y^t/∂Ut represents the Jacobian matrix, J, which is computed as a matrix with the dimension of Hp×Hu.(17)∂Y^t∂Ut=∂y^t+1∂ut00⋯0∂y^t+2∂ut∂y^t+2∂ut+10⋯0⋮⋮⋮⋱⋮∂y^t+Hu∂ut∂y^t+Hu∂ut+1∂y^t+Hu∂ut+2⋯∂y^t+Hp∂ut+Hu−1⋮⋮⋮⋮⋮∂y^t+Hp∂ut∂y^t+Hp∂ut+1∂y^t+Hp∂ut+2⋯∂y^t+Hp∂ut+Hu−1Hp×Hu.The Jacobian matrix,J=∂Y^t/∂Ut, can be computed based on the chain rule.(18)∂y^t+i∂ut+j=Wot+i∂γt+i∂ut+j,∀i=1,2,…,Hp,∀j=0,1,…,Hu−1,∂y^t+i∂ut+j=F′t+iWIt+i∂Xt+i∂ut+j+W1Dt+i−a.Z1t+i∂Γ1t+i−1∂ut+j⋮WnDt+i−a.Znt+i∂Γnt+i−1∂ut+j,in which, F′t+i=∂FSt+i/∂St+j=1−γ2t and ∂γt+i/∂ut+j can be computed recurrently knowing that if j=i then ∂γt+i/∂ut+j=0. It means that the computations should be completed from ∂γt+i/∂ut+i−1 to ∂γt+i/∂ut+j. Moreover, considering the structure of Xt+i, the ∂Xt+i/∂ut+j can be calculated recurrently based on equation (26).Algorithm 1 summarizes the proposed HCDRNN-NMPC scheme [29].Algorithm 1: Details of HCDRNN-NMPC scheme. Step 1. DetermineHp and Hu, such that Hp>Hu.Step 2. GetRt, Ut, and Xt in each control step, such that:(i) Rt=rt+1,…,rt+HpT is the desired values for Hp next steps.(ii) Ut=ut,…,ut+Hu−1T is the last optimal sequence of the predicted control signal.(iii) Xt is the delayed input-output vector of nonlinear system.Step 3. Predict the outputs of the system forHp next steps by the proposed HCDRNN.Step 4. CalculateJ by equation (17).Step 5. ComputeΔUt and Ut+1 by equations (15) and (16), respectively.Step 6. ApplyUt+1 as the first element of vector Ut+1 to the nonlinear system, and go back to Step 2 for the next sample time.In steps 4 and 5 of the proposed algorithm, as the number of system inputs increases, the dimensions of the Jacobian matrix increase, and the estimation error increases due to the discretizations performed in calculating the derivatives. Choosing the appropriate sampling, time is used as a solution to reduce the estimation error in this paper. ### 2.3.1. Stability Analysis for HCDRNN-NMPC The stability of NMPC-HCDRNN is demonstrated by considering the convergence of the model, which is proved in Remark1 and Appendix, and the fact that the neural network training is done offline.Theorem 1. Consider the constrained finite-horizon optimal control presented by (17) and (18). Lyapunov’s second law ensures the asymptotic stability of the proposed controller due to the limited input and output amplitudes and the semidefinite negative V^˙t if the neural network weights’ convergence is proven and the predictive control law is as given in equations (19) and (20).Proof. The constrained finite-horizon optimal control given in equation (13) can be rewritten as in (19) by rewriting the cost function along the control horizon:(19)V^t=ρ1∑i=1Hprt+i−y^t+i2+ρ2∑i=1HuΔu2t+i−1. Ut=ut,ut+1,…,ut+Hu−1T is the optimal control sequence obtained at time t using the optimization algorithm. If Ust+1 is the suboptimal control sequence extracted from Ut and considered as Ust+1=ut+1,…,ut+Hu−1T, the suboptimal cost function V^st+1 is defined as follows:(20)V^t+1=ρ1∑i=1Hp+1rt+i−y^t+i2+ρ2∑i=1HuΔu2t+i−1. 
Using the difference ofV^t and V^st+1, and assuming that et+i=rt+i−y^t+i, equation (21) is written as follows:(21)V^st+1−V^t=ρ1∑i=1Hp+1e2t+i+ρ2∑i=1HuΔu2t+i−1−ρ1∑i=1Hpe2t+i+ρ2∑i=1HuΔu2t+i−1,=ρ1e2t+Hp+1−e2t+1−ρ2Δu2t≤0. Therefore, ifUt+1 is the optimal solution of the optimization problem time t+1 using the control law described in equation (16), it outperforms Ust+1, which is suboptimal and its cost function is smaller according to equation (22).(22)V^t+1−Vt≤V^st+1−V^t≤0. Hence, the proof is complete. ## 2.3.1. Stability Analysis for HCDRNN-NMPC The stability of NMPC-HCDRNN is demonstrated by considering the convergence of the model, which is proved in Remark1 and Appendix, and the fact that the neural network training is done offline.Theorem 1. Consider the constrained finite-horizon optimal control presented by (17) and (18). Lyapunov’s second law ensures the asymptotic stability of the proposed controller due to the limited input and output amplitudes and the semidefinite negative V^˙t if the neural network weights’ convergence is proven and the predictive control law is as given in equations (19) and (20).Proof. The constrained finite-horizon optimal control given in equation (13) can be rewritten as in (19) by rewriting the cost function along the control horizon:(19)V^t=ρ1∑i=1Hprt+i−y^t+i2+ρ2∑i=1HuΔu2t+i−1. Ut=ut,ut+1,…,ut+Hu−1T is the optimal control sequence obtained at time t using the optimization algorithm. If Ust+1 is the suboptimal control sequence extracted from Ut and considered as Ust+1=ut+1,…,ut+Hu−1T, the suboptimal cost function V^st+1 is defined as follows:(20)V^t+1=ρ1∑i=1Hp+1rt+i−y^t+i2+ρ2∑i=1HuΔu2t+i−1. Using the difference ofV^t and V^st+1, and assuming that et+i=rt+i−y^t+i, equation (21) is written as follows:(21)V^st+1−V^t=ρ1∑i=1Hp+1e2t+i+ρ2∑i=1HuΔu2t+i−1−ρ1∑i=1Hpe2t+i+ρ2∑i=1HuΔu2t+i−1,=ρ1e2t+Hp+1−e2t+1−ρ2Δu2t≤0. Therefore, ifUt+1 is the optimal solution of the optimization problem time t+1 using the control law described in equation (16), it outperforms Ust+1, which is suboptimal and its cost function is smaller according to equation (22).(22)V^t+1−Vt≤V^st+1−V^t≤0. Hence, the proof is complete. ## 3. Simulation To control the Stewart platform, HCDRNN-NMPC is used such that the upper moving plane of the platform tracks the desired trajectory. The simulations have been carried out by MATLAB software, 2015 version. To evaluate the efficiency of the control method against external disturbances, the effects of the disturbance applied to the force on one of the links of Stewart platforms have been investigated. ### 3.1. Neural Network-Based Model To predict the behavior of the Stewart platform, input-output data of the system under different operating configurations are required. To generate the training data, the inverse dynamics of the Stewart platform are solved for several random desired trajectories, based on the algorithm presented by [27] and the parameters introduced by [32]. The applied sampling time is ts=0.01. #### 3.1.1. Training The general structure of the HCDRNN is designed in such a way that its inputs vector,Xt, includes the previous position of the moving plane, pxt, and the forces exerted on each link, Fit, as in equation (23), and its output vector, yt, includes the position of the moving plane as in equation (24).(23)Xt=F1t,…,F1t−nu,…,F6t,…,F6t−nu,pxt−1,…,pxt−ny,(24)yt=Pxt.The number of elements ofXt determines the number of input layer neurons. Accordingly, thirteen input nodes have been considered for six links. 
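As an illustration of equation (23), the sketch below assembles the NARX regressor for one MISO network from logged link forces and one position coordinate of the moving plane. The array names and logging format are assumptions, and the regressor length simply follows from the chosen n_u and n_y.

```python
import numpy as np

def build_regressor(F, px, t, n_u=3, n_y=2):
    """Assemble X(t) of equation (23) for one MISO network.

    F  : array of shape (T, 6), logged forces F_1 ... F_6 on the six links
    px : array of shape (T,), logged position coordinate of the moving plane
    t  : current sample index (must satisfy t >= max(n_u, n_y))
    """
    force_part = [F[t - d, i] for i in range(6) for d in range(n_u + 1)]   # F_i(t) ... F_i(t - n_u)
    output_part = [px[t - d] for d in range(1, n_y + 1)]                   # p_x(t-1) ... p_x(t-n_y)
    return np.array(force_part + output_part)

# Hypothetical logged data, only for illustration.
T = 100
rng = np.random.default_rng(2)
F = rng.uniform(0.0, 16.0, size=(T, 6))    # forces bounded as in equation (26)
px = rng.uniform(-1.7, 1.2, size=T)        # position bounded as in equation (26)

x_t = build_regressor(F, px, t=50)
print(x_t.shape)                            # regressor length = 6 * (n_u + 1) + n_y
```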
As the network’s output comprises three positions and three directions, separate networks should be considered for each output and, therefore, there would be six MISO networks in our case. For the aforementioned networks, a supervised learning scheme is considered to train the networks with regard to the inputs and outputs. Divided into two sets, 70% of the data were chosen for training and 30% of them for testing. At the beginning of the training of the HCDRNNs, the weight matrices are randomly valued. Tangent sigmoid is selected as the activation function, and the input-output data are normalized. The values for the NN parameters are defined as ε=0.01, Ta=0.07, μmax=4, μ0=0, μo=μI=μD=0.01, ny=2, and nu=3 as well as WI,WD,Wo,ΔI,ΔD,Δo are randomly initialized.The neural network training is done offline, but during the training, the coefficienta is adjusted online in such a way that the behavior of the Stewart platform is covered by creating chaos in the neural network structure, and as the training error decreases, its value changes such that the neural network’s chaos is decreased. Figure 5 shows the chaotic property of the HCDRNN.Figure 5 Traces of the training (a) traces of the training mean square error (MSE) and (b) traces of the coefficienta. (a)(b)The impact of the number of hidden layer neurons on the approximation performance is studied. Table1 reports the results of this study for 7 to 43 neurons, where their performances are compared in terms of training time and MSE.Table 1 Comparison of prediction performance versus the number of hidden layer neurons. No. of hidden layer neuronsTraining MSETraining time (sec)79.7927e−5658.28279.6511e−8978.12439.4508e−51804.05As shown in Table1, the training time increases with an increase in the number of neurons in the hidden layer. Considering the fact that the MSE value is proper when there are 7 neurons in the hidden layer, it would be more appropriate to use 7 neurons in the hidden layer for the prediction of the Stewart platform.For a sinusoidal trajectory, the results of one-step-ahead, two-step-ahead, and three-step-ahead system behavior predictions are investigated. Table2 reports the MSE of the prediction error.Table 2 MSE of prediction error for a sinusoidal trajectory. No. of step aheadOne-step-aheadTwo-step-aheadThree-step-aheadMSE of prediction error8.9321e−81.2463e−69.7927e−5Table2 indicates a reliable prediction by the HCDRNN without any accumulated error. As a significant conclusion, it is shown that the use of chaotic context layer, besides the use of different weight matrices that were trained for each step, in the proposed hierarchical structure, overcome the error accumulation in n-step-ahead predictions. ### 3.2. The Results of HCDRNN-NMPC In this paper, the values for the parameters are considered as follows:Hp=3, Hu=2, ρ1=0.8, ρ2=0.2, and η=0.01. The performance of the NMPC is compared with the MPC, which both are evaluated by the integral absolute error (IAE).(25)IAE=1T∑t=1TRt−Yt,where T shows the total number of samples. Some research studies are using other metrics like mean square error (MSE) and/or integral square error (ISE). Each of these metrics has drawbacks that led us to use of IAE instead. Metrics that use error squares magnify the errors greater than one and minify the errors less than one, which is not precise in robotics motion errors. Input and output signals are bounded in the intervals mentioned in equation (26).(26)0≤Fit≤16,−1.7≤Pxt≤1.2. #### 3.2.1. 
Sample Trajectories The performance of the controller to track the three paths demonstrated in sections (1), (2), and (3).(1) A sample trajectory has been designed, which is intended to be tracked with the undercontrol Stewart platform. Trying to calculate the control signal applied to link 1, assuming that the forces exerted on other links remain fixed.(27)Pt=PxtPytPzt=−1.5+0.2sinωt0.2sinωt1.0+0.2sinωt,whereω=3 and 0≤ωt≤2π. Considering the desired trajectory for the robot’s movement and the actual trajectory on x,y,z axis, tracking error varies within the range of −5.625e−3,0.015e−3 for all three axes which are neglectable.Figure6 shows the three-dimensional path of the top plane of the Stewart platform. As found in Figure 6, the NMPC has extracted the control signal in a way that the Stewart platform’s output tracks the reference signal along three axes well. Moreover, Figure 7 shows the force exerted to link 1.As it is shown in Figure7, the control signal applied to link 1 provides for the limit needed for the forces exerted in each link.(2) The second sample trajectory sets out to track the following two-frequency trajectory.(28)ϕx=ϕy=ϕz=0,Pt=PxtPytPzt=0.2sinω1t+0.4sinω2t0.2sinω1t1.0+0.2sinω2tm,ω1=3,ω2=2,0≤ω1t,ω2t≤2.Considering the desired trajectory for the robot’s movement and the actual trajectory onx,y,z axis, tracking error range on x,y,z axis, is reported in Table 3.Taking Table3 into consideration, it can be concluded that the tracking error varies within the range of −1.01_2.03∗e−8 for all three axes which are neglectable.Figure8 shows the three-dimensional path of the top plane of the Stewart platform, and Figure 9 shows the force exerted to link 1.As found in Figure9, the NMPC has extracted the control signal in a way that the Stewart platform’s output tracks the reference signal along three axes well. As found in Figure 9, the control signal applied to link 1 provides for the limit needed for the forces exerted in each link, assuming that the positions of other links remain unchanged.(3) In the third sample path, which is similar to the path presented in the reference [33], to evaluate the performance of the proposed controller in tracking the paths with rapid changes, the following two-level step path has been examined.(29)ϕx=ϕy=ϕz=0,Pt=PxtPytPzt=3.5ut−ut−53ut−ut−15.3ut−ut−7cm,0≤Pxt,Pyt,Pzt≤,0≤Fit≤6.Figure 6 Tracking the three- dimensional path.Figure 7 Force exerted to link 1 (control signal).Table 3 Tracking error range onx,y,z axis for two-frequency trajectory. AxisxyzTracking error range−1.01_2.03∗e−8−1.01_1.88∗e−6−0.79_1.05∗e−5Figure 8 Tracking the three-dimensional path.Figure 9 Force exerted to link 1 (control signal).Considering the desired trajectory for the robot’s movement and the actual trajectory onx,y,z axis, tracking error range on x,y,z axis is reported in Table 4. The three-dimensional path traveled by the Stewart platform is shown in Figure 10. The force exerted to link 1 is shown in Figure 11.Table 4 Tracking error range onx,y,z axis for two-level step trajectory. AxisxyzTracking error range−3.2_3.5−1.8_3−5.1_5.5Figure 10 Tracking the three-dimensional path.Figure 11 Force exerted to link 1 (control signal).As shown in Figures10 and 11, due to intensive changes in the desired path, the controller made a control effort to extract the control signal in order to reach the desired path, and after reaching the desired path, the control signal did not change. 
The transient phase of the response is well observed for this path and, as reported in Table 4, the tracking error for the x, y, and z axes stays within [−5.1, 5.5], which is satisfactory considering the severe changes of the reference; the control signal applied to link 1 also satisfies the applied-force constraint.

### 3.3. External Disturbance Rejection

The effect of an external disturbance is assessed here in the form of a pulse signal with 0.4 N amplitude, applied from 1 to 1.4 seconds to the force of one of the links. The proposed control performance is shown in Figure 12.

Figure 12: Disturbance rejection.

Figure 13 shows the tracking error and the control signal in the presence of the external disturbance.

Figure 13: (a) Tracking error and (b) extracted control signal in the presence of external disturbance.

When the disturbance is applied, the output changes and a control effort is made to restore the tracking. The advantages of the proposed method are the low number of oscillations during disturbance rejection, an overshoot and undershoot smaller than the initial disturbance magnitude, which results in a more uniform output, and a noticeably smaller and smoother control signal.

### 3.4. Simulation Results Analysis

The nonlinear model predictive controller requires a high computational effort to extract the control signal, but because it uses the nonlinear model of the system, it can achieve the desired control performance with minimum error. Because the Stewart platform has unknown dynamics, the NN was used to model it. Chaos theory was used in the NN to reduce and speed up the control calculations; it accelerates the learning dynamics and thus mitigates the slowness of predictive control.
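The chaotic ingredient can be illustrated with a short sketch: the logistic map that generates the context signal Z(t), and an annealing of the chaos parameter a that stays large while the training error is large and is switched off below the threshold, using the values listed in Section 3.1.1 (mu_0 = 0, mu_max = 4, Ta = 0.07, epsilon = 0.01). The annealing law below is only a schematic, monotone stand-in for equation (3), the choice zeta = 4 is the classic fully chaotic setting used purely for illustration, and the error trace is synthetic.

```python
import numpy as np

def logistic_map(z, zeta=4.0):
    """One iteration of the logistic map that generates the chaotic context signal Z(t)."""
    return zeta * z * (1.0 - z)

def chaos_gain(e_bar, mu0=0.0, mu_max=4.0, Ta=0.07, eps=0.01):
    """Schematic annealing of the chaos parameter a (cf. equation (3)):
    close to mu_max while the mean training error e_bar is large,
    decreasing as the error shrinks, and switched off once e_bar <= eps."""
    if e_bar <= eps:
        return 0.0
    return mu0 + (mu_max - mu0) * (1.0 - np.exp(-e_bar / Ta))

# Synthetic, decreasing training-error trace (illustrative only).
errors = np.logspace(0, -3, 8)
z = 0.37                          # positive random initial state of the map
for e_bar in errors:
    a = chaos_gain(e_bar)
    z = logistic_map(z)
    print(f"e_bar = {e_bar:8.4f}   a = {a:5.3f}   Z(t) = {z:.3f}")
```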
As a significant conclusion, it is shown that the use of the chaotic context layer, together with separate weight matrices trained for each step in the proposed hierarchical structure, overcomes error accumulation in n-step-ahead predictions.
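To make the data preparation described above concrete, the sketch below assembles the regressor $X(t)$ of equation (23) from force and pose histories and performs the 70/30 split and normalization mentioned in the text. It is only an illustration: the variable names and toy data are ours, the HCDRNN weight-update rules are not reproduced, and equation (23) as written yields $6(n_u+1)+n_y$ entries whereas the text quotes thirteen input nodes, so the exact layout used by the authors may differ.

```python
import numpy as np

def build_regressor(F, px, t, n_u=3, n_y=2):
    """Assemble X(t) from force and position histories as in equation (23).

    F  : array of shape (T, 6), applied forces F_i(t) for the six links
    px : array of shape (T,), one pose coordinate of the moving plane
    """
    force_part = [F[t - k, i] for i in range(6) for k in range(n_u + 1)]
    pose_part = [px[t - k] for k in range(1, n_y + 1)]
    return np.array(force_part + pose_part)

# Toy data standing in for the inverse-dynamics samples (t_s = 0.01 s).
T = 1000
rng = np.random.default_rng(0)
F = rng.uniform(0.0, 16.0, size=(T, 6))       # forces bounded as in equation (26)
px = 0.2 * np.sin(3 * 0.01 * np.arange(T))    # one pose coordinate of the moving plane

X = np.array([build_regressor(F, px, t) for t in range(3, T)])
y = px[3:]                                    # target y(t) = P_x(t), equation (24)

# 70/30 train/test split and per-feature normalization, as described in the text.
split = int(0.7 * len(X))
X_train, X_test = X[:split], X[split:]
mu, sigma = X_train.mean(axis=0), X_train.std(axis=0) + 1e-12
X_train, X_test = (X_train - mu) / sigma, (X_test - mu) / sigma
```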
### 3.2. The Results of HCDRNN-NMPC

In this paper, the values of the parameters are set as follows: $H_p = 3$, $H_u = 2$, $\rho_1 = 0.8$, $\rho_2 = 0.2$, and $\eta = 0.01$. The performance of the NMPC is compared with that of the MPC, both evaluated by the integral absolute error (IAE):

$$\mathrm{IAE} = \frac{1}{T}\sum_{t=1}^{T}\left|R(t) - Y(t)\right|, \tag{25}$$

where $T$ is the total number of samples. Some research studies use other metrics such as the mean square error (MSE) and/or the integral square error (ISE). Each of these metrics has drawbacks that led us to use the IAE instead: metrics based on squared errors magnify errors greater than one and shrink errors less than one, which is not appropriate for robotic motion errors. The input and output signals are bounded within the intervals given in equation (26):

$$0 \le F_i(t) \le 16, \qquad -1.7 \le P_x(t) \le 1.2. \tag{26}$$

### 3.2.1. Sample Trajectories

The performance of the controller in tracking three paths is demonstrated in items (1), (2), and (3) below.

(1) A sample trajectory has been designed, which is intended to be tracked by the controlled Stewart platform. The control signal applied to link 1 is computed, assuming that the forces exerted on the other links remain fixed:

$$P(t) = \begin{bmatrix} P_x(t) \\ P_y(t) \\ P_z(t) \end{bmatrix} = \begin{bmatrix} -1.5 + 0.2\sin\omega t \\ 0.2\sin\omega t \\ 1.0 + 0.2\sin\omega t \end{bmatrix}, \tag{27}$$

where $\omega = 3$ and $0 \le \omega t \le 2\pi$. Considering the desired trajectory of the robot's movement and the actual trajectory on the x, y, and z axes, the tracking error stays within the range $[-5.625\mathrm{e}{-3}, 0.015\mathrm{e}{-3}]$ for all three axes, which is negligible. Figure 6 shows the three-dimensional path of the top plane of the Stewart platform. As seen in Figure 6, the NMPC has extracted the control signal such that the Stewart platform's output tracks the reference signal along all three axes well. Moreover, Figure 7 shows the force exerted on link 1. As shown in Figure 7, the control signal applied to link 1 respects the limit required for the forces exerted on each link.

(2) The second sample trajectory sets out to track the following two-frequency trajectory:

$$\phi_x = \phi_y = \phi_z = 0, \quad P(t) = \begin{bmatrix} P_x(t) \\ P_y(t) \\ P_z(t) \end{bmatrix} = \begin{bmatrix} 0.2\sin\omega_1 t + 0.4\sin\omega_2 t \\ 0.2\sin\omega_1 t \\ 1.0 + 0.2\sin\omega_2 t \end{bmatrix}\ \mathrm{m}, \quad \omega_1 = 3,\ \omega_2 = 2,\ 0 \le \omega_1 t, \omega_2 t \le 2. \tag{28}$$

Considering the desired trajectory of the robot's movement and the actual trajectory on the x, y, and z axes, the tracking error range on each axis is reported in Table 3. Taking Table 3 into consideration, it can be concluded that the tracking error varies within the range of about $[-1.01, 2.03]\times 10^{-8}$ for all three axes, which is negligible. Figure 8 shows the three-dimensional path of the top plane of the Stewart platform, and Figure 9 shows the force exerted on link 1. As seen in Figure 9, the NMPC has extracted the control signal such that the Stewart platform's output tracks the reference signal along all three axes well, and the control signal applied to link 1 respects the limit required for the forces exerted on each link, assuming that the positions of the other links remain unchanged.

(3) In the third sample path, which is similar to the path presented in reference [33], the following two-level step path has been examined to evaluate the performance of the proposed controller in tracking paths with rapid changes:

$$\phi_x = \phi_y = \phi_z = 0, \quad P(t) = \begin{bmatrix} P_x(t) \\ P_y(t) \\ P_z(t) \end{bmatrix} = \begin{bmatrix} 3.5\,[u(t) - u(t-5)] \\ 3\,[u(t) - u(t-1)] \\ 5.3\,[u(t) - u(t-7)] \end{bmatrix}\ \mathrm{cm}, \quad 0 \le P_x(t), P_y(t), P_z(t), \quad 0 \le F_i(t) \le 6. \tag{29}$$

Figure 6: Tracking the three-dimensional path.

Figure 7: Force exerted on link 1 (control signal).
Table 3: Tracking error range on the x, y, and z axes for the two-frequency trajectory.

| Axis | x | y | z |
|---|---|---|---|
| Tracking error range | [−1.01, 2.03]e−8 | [−1.01, 1.88]e−6 | [−0.79, 1.05]e−5 |

Figure 8: Tracking the three-dimensional path.

Figure 9: Force exerted on link 1 (control signal).

Considering the desired trajectory of the robot's movement and the actual trajectory on the x, y, and z axes, the tracking error range on each axis is reported in Table 4. The three-dimensional path traveled by the Stewart platform is shown in Figure 10, and the force exerted on link 1 is shown in Figure 11.

Table 4: Tracking error range on the x, y, and z axes for the two-level step trajectory.

| Axis | x | y | z |
|---|---|---|---|
| Tracking error range | [−3.2, 3.5] | [−1.8, 3] | [−5.1, 5.5] |

Figure 10: Tracking the three-dimensional path.

Figure 11: Force exerted on link 1 (control signal).

As shown in Figures 10 and 11, due to the intensive changes in the desired path, the controller makes a control effort to extract a control signal that reaches the desired path, and after the desired path is reached, the control signal no longer changes. The transient phase of the response is well observed on this path and, as reported in Table 4, the error in tracking the reference signal on the x, y, and z axes stays within [−5.1, 5.5], which is acceptable given the severe changes of the reference, while the control signal applied to link 1 satisfies the applied-force constraint.
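For reference, the three sample trajectories of equations (27)–(29) can be reproduced directly from their definitions. The sketch below generates them with the sampling time of Section 3.1; the simulation horizon and variable names are our own illustrative choices.

```python
import numpy as np

ts = 0.01                      # sampling time from Section 3.1
t = np.arange(0.0, 10.0, ts)   # simulation horizon chosen only for illustration

def u(x):
    """Unit step u(x)."""
    return (x >= 0).astype(float)

# (1) Sinusoidal trajectory, equation (27), omega = 3.
w = 3.0
P1 = np.stack([-1.5 + 0.2 * np.sin(w * t),
               0.2 * np.sin(w * t),
               1.0 + 0.2 * np.sin(w * t)])

# (2) Two-frequency trajectory, equation (28), omega1 = 3, omega2 = 2 (in metres).
w1, w2 = 3.0, 2.0
P2 = np.stack([0.2 * np.sin(w1 * t) + 0.4 * np.sin(w2 * t),
               0.2 * np.sin(w1 * t),
               1.0 + 0.2 * np.sin(w2 * t)])

# (3) Two-level step trajectory, equation (29) (in centimetres).
P3 = np.stack([3.5 * (u(t) - u(t - 5)),
               3.0 * (u(t) - u(t - 1)),
               5.3 * (u(t) - u(t - 7))])
```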
### 3.3. External Disturbance Rejection

The effect of an external disturbance is assessed here in the form of a pulse signal with 0.4 N amplitude, applied to another link's force from 1 to 1.4 seconds. The resulting control performance is shown in Figure 12.

Figure 12: Disturbance rejection.

Figure 13 shows the tracking error and the control signal in the presence of the external disturbance.

Figure 13: (a) Tracking error and (b) extracted control signal in the presence of the external disturbance.

When the disturbance is applied, the output changes and a control effort is made to restore the tracking. Advantages of the proposed method are the low number of oscillations during disturbance rejection, an overshoot and undershoot smaller than the initial disturbance magnitude, resulting in a more uniform output, and, as a significant improvement, a smaller and smoother control signal.
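The tracking results above and the controller comparison in the next subsection are both quantified with the IAE of equation (25). A minimal sketch of that computation is given below; the variable names and the toy error signal are ours, and the MSE is included only to illustrate why squared-error metrics were avoided in Section 3.2.

```python
import numpy as np

def iae(r, y):
    """Integral absolute error of equation (25): mean of |R(t) - Y(t)|."""
    r, y = np.asarray(r, dtype=float), np.asarray(y, dtype=float)
    return np.mean(np.abs(r - y))

def mse(r, y):
    """Mean square error, shown only for contrast with the IAE."""
    r, y = np.asarray(r, dtype=float), np.asarray(y, dtype=float)
    return np.mean((r - y) ** 2)

# Squared-error metrics shrink sub-unit errors, which is why the text prefers the IAE.
r = np.zeros(5)
y = np.full(5, 1e-3)      # a constant 1e-3 tracking error
print(iae(r, y))          # 1e-3, the actual error magnitude
print(mse(r, y))          # 1e-6, much smaller than the actual error magnitude
```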
### 3.4. Simulation Results Analysis

The nonlinear model predictive controller requires a high computational effort to extract the control signal, but because it uses the system's nonlinear model, it can achieve the desired control performance with minimal error. Because the Stewart platform has unknown dynamics, the NN was used to model it. Chaos theory was used in the NN to reduce and speed up the control calculations, which accelerates the learning dynamics and thus addresses the slowness of predictive control. Furthermore, trapping in local minima is avoided by employing chaos in the neural network and by increasing the order of chaos through additional chaotic functions in the hidden layer, resulting in hyper-chaos in the proposed neural network. Table 5 compares the prediction performance of the DRNN and the proposed HCDRNN.

Table 5: Prediction performance of the DRNN and the proposed HCDRNN.

| Prediction error | DRNN (training) | DRNN (test) | HCDRNN (training) | HCDRNN (test) |
|---|---|---|---|---|
| Step 1 | 1.7545e−6 | 4.2148e−5 | 3.8214e−8 | 1.1284e−7 |
| Step 2 | 7.2561e−4 | 6.7316e−3 | 9.1852e−8 | 5.6371e−7 |
| Step 3 | 9.1027e−1 | 1.4111e−1 | 6.2379e−7 | 9.0141e−7 |

Table 6 compares the performance of the proposed control with the proportional-integral-derivative (PID) control [34], the sliding mode control [18], the fuzzy NMPC [19], and the DRNN-NMPC. The comparison results are recorded in terms of the IAE.

Table 6: IAE comparison.

| Control approach | PID | Sliding mode control | Fuzzy NMPC | DRNN-NMPC | The proposed method |
|---|---|---|---|---|---|
| IAE | 3.2e−4 | 2.8e−5 | 4.6e−6 | 1.9e−3 | 2.9e−8 |

The proposed method yields a smaller IAE than the other methods. PID controllers need large gains for the proportional, integral, and derivative coefficients, which makes the control signal highly sensitive to external disturbance, so that the control signal rises to a large value even at the lowest level of disturbance. However, the control inputs are bounded for practical reasons; thus, the control signal computed by the PID controller would not be applicable in practice. As demonstrated in Figure 13(a), when the external disturbance is applied to the robot, the tracking error remains small, in the range of −6e−4 to −3e−3, which is evidence of the proposed method's high performance.

## 4. Conclusion

This paper proposed a novel hierarchical HCDRNN-NMPC for the modeling and control of complex nonlinear dynamical systems. Numerical simulations on the control of a Stewart platform demonstrate the performance of the proposed strategy in tracking and external disturbance rejection. One of the most essential aspects of the suggested method is the ability of its hierarchical HCDRNN to accurately estimate the system's output via a forward-moving window. The hierarchical structure enables the proposed mechanism to precisely adjust each HCDRNN for predicting the outputs of the system for only one specified sample ahead, which enhances the ability of the predictive model to adapt to variations of complex dynamical systems. This paper provides the adaptive weight-update rules for the proposed v-step delayed HCDRNN. Moreover, an enhanced gradient optimization method is used to determine the sequence of the control signal. The results of the provided simulations and comparisons indicate the superior performance of the proposed control system in tracking and in removing the effect of external disturbances. In future research, merging HCDRNNs into a deep HCDRNN structure, instead of the hierarchical structure, will be investigated, and the suggested controller's robustness against various forms of disturbances will also be tested.

--- *Source: 1006197-2022-10-18.xml*
# Construction of a New 3D Cu(II) Compound with Photocatalytic Activity and Therapeutic Effect on Ventriculitis

**Authors:** Xingzhao Chen; Qing Mao
**Journal:** Journal of Chemistry (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1006203

---

## Abstract

A new Cu(II) compound, {[Cu(L)(ClO4)(H2O)(CH3CN)](ClO4)}n (1, where L is 4-(furan-2-yl)-2,6-bis(pyridin-2-yl)pyridine), was solvothermally synthesized and structurally characterized. It features a 0D isolated unit, and these 0D isolated units are further hydrogen-bonded into a 3D supramolecular framework. The framework of 1 is stable up to 284°C, and the optical band gap of 1 is 3.17 eV. Under ultraviolet irradiation, complex 1 as a photocatalyst exhibits high activity for methylene blue (MB) degradation in aqueous solution. The compound's inhibitory activity on the release of inflammatory cytokines was tested by ELISA, and real-time RT-PCR was conducted to assess the compound's suppression of bacterial survival gene expression.

---

## Body

## 1. Introduction

Medical-related ventriculitis and meningitis are common complications of neurosurgery, which seriously affect the prognosis and outcome of patients [1]. Although most patients have an acute onset during hospitalization, a small number of patients develop delayed infections even several years after they are discharged from the hospital [2]. So far, the therapeutic outcomes of the available treatments for medical-related ventriculitis and meningitis are not satisfactory. Thus, new candidates need to be developed for the treatment of ventriculitis.

In the last several decades, much interest has been focused on designing and synthesizing metal-organic frameworks (MOFs) owing to their potential applications in magnetism, luminescence, heterogeneous catalysis, drug delivery, gas adsorption/separation, and so forth [3–6]. With the acceleration of industrialization, more and more toxic dye contaminants, which are hard to biodegrade effectively, are discharged into water systems without any pretreatment, leading to water pollution that can cause many fatal diseases in human beings and other living things [7]. Recently, semiconductive MOFs, which not only feature a large surface area, high porosity, tunable pore structures, and coordinatively unsaturated metal sites but also can harvest solar energy effectively, have been successfully synthesized and further utilized as photocatalysts to degrade the abovementioned dyes into harmless small molecules under ultraviolet or visible light irradiation [8–12]. Therefore, the construction of more photoactive MOF-based photocatalysts is urgent and indispensable. It is well known that the properties of MOFs are closely related to the structures of the MOFs, the organic ligands, and the central metal ions [13, 14]. Obtaining photoactive MOFs therefore requires an appropriate organic ligand with large conjugated groups that favor the absorption of solar energy [15, 16]. With this in mind, in this work we selected 4-(furan-2-yl)-2,6-bis(pyridin-2-yl)pyridine (L) (Scheme 1), which has a large conjugated system and four potential coordination sites, as an organic building block for assembly with Cu(ClO4)2·6H2O under solvothermal conditions. In this way, we obtained a novel Cu(II) complex showing a 0D isolated unit.
These 0D isolated units are finally combined into a 3-dimensional supramolecular framework via intermolecular hydrogen bonds. More importantly, this compound is highly thermostable and exhibits a high photocatalytic effect in MB degradation in aqueous solution. A series of biological assays was performed to assess the biological activity of the novel complex synthesized in this research.

Scheme 1: The chemical structure of the L ligand.

## 2. Experimental

### 2.1. Materials and Instrumentation

All starting chemicals were commercially available in analytical grade and were used without further purification. A Perkin-Elmer 240C analyzer was employed for the C, H, and N elemental analyses. A Rigaku D/Max-2500PC with Cu Kα radiation (1.54056 Å) and a 0.05° step size was used to collect the PXRD patterns. The thermogravimetric experiment was carried out on a NETZSCH TG 209 with a 2°C/min heating rate under nitrogen flow. The solid-state UV-vis diffuse reflectance spectra were collected on a Puxi TU-1901 ultraviolet-visible spectrophotometer, with a BaSO4 plate used as the standard.

### 2.2. Synthesis of {[Cu(L)(ClO4)(H2O)(CH3CN)](ClO4)}n (1)

A mixture of 0.1 mmol Cu(ClO4)2·6H2O, 0.1 mmol L, 3 mL CH3CN, and 4.0 mL CH3OH was placed into a Teflon-lined stainless steel reactor (25 mL), which was heated at 135°C for 3 days. Blue massive crystals of 1 were collected after cooling the mixture to RT at a rate of 2°C per min, in 38% yield based on Cu(ClO4)2·6H2O. Elemental analysis calcd. for C21H18Cl2CuN4O10 (620.84 g/mol): N, 9.02; C, 40.59; H, 2.90%. Found: N, 8.98; C, 40.57; H, 2.92%.

### 2.3. X-Ray Structural Determination

The single-crystal structure of compound 1 was determined at RT on a Rigaku Mercury CCD diffractometer with Mo-Kα radiation (0.71073 Å). Direct methods and full-matrix least-squares refinement were applied to solve and refine the crystal structure with SHELXTL embedded in OLEX2 [17]. All nonhydrogen atoms were refined anisotropically, and the H atoms were fixed at calculated positions. The crystallographic and refinement parameters of complex 1 are listed in Table 1. The bond parameters around the central Cu(II) ion are summarized in Table S1.

Table 1: Crystal data of complex 1.

| Parameter | Value |
|---|---|
| Formula | C21H18Cl2CuN4O10 |
| Fw | 620.84 |
| Crystal system | Triclinic |
| Space group | P-1 |
| a (Å) | 8.1922 (4) |
| b (Å) | 10.7290 (7) |
| c (Å) | 14.2825 (10) |
| α (°) | 92.748 (6) |
| β (°) | 100.469 (5) |
| γ (°) | 96.407 (5) |
| Volume (Å3) | 1223.75 (13) |
| Z | 2 |
| Density (calculated) | 1.685 |
| Abs. coeff. (mm−1) | 1.175 |
| Total reflections | 8189 |
| Unique reflections | 4298 |
| Goodness of fit on F2 | 1.029 |
| Final R indices (I > 2sigma(I2)) | R = 0.0559, wR2 = 0.1494 |
| R (all data) | R = 0.0729, wR2 = 0.1639 |
| CCDC | 2164492 |

### 2.4. Inflammatory Cytokines Determination

ELISA detection was used in the current work to investigate the compound's inhibitory activity on the TNF-α and IL-1β content. The assays were performed strictly following the kit instructions, with minor changes. Briefly, the forty BALB/c mice used in the current paper were provided by the Model Animal Research Center of Shanghai Jiao Tong University. All mice were maintained at 20–25°C with free access to food and water. The animals were subsequently divided into five groups: a control group, a model group, and three compound-treatment groups. In the compound-treatment and model groups, bacteria were injected into the animals to create the ventriculitis model.
Next, the compound was administered at concentrations of 1, 2, and 5 mg/kg. After the specific treatment, the respiratory mucus was collected and the content of TNF-α and IL-1β in the brain was measured by ELISA.

### 2.5. Bacterial Survival Genes

Real-time RT-PCR was employed to assess the complex's influence on bacterial survival gene expression. This study was implemented according to the standard protocols with minor changes. Briefly, the bacteria were collected and inoculated into 96-well plates, and incubation was carried out after adding the compound at 10, 20, and 50 ng/mL. After the specific treatment, total RNA was extracted from the bacteria using the TRIZOL reagent. After its concentration was determined, the RNA was reverse transcribed into cDNA. Finally, real-time RT-PCR was performed and the relative expression of the bacterial survival gene was determined.
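Before moving to the results, the calcd. composition quoted in Section 2.2 can be cross-checked directly from the molecular formula. The short script below is our own illustrative helper using standard atomic masses; it reproduces the formula weight of 620.84 g/mol and mass percentages close to the quoted calcd. values.

```python
# Cross-check of the calcd. elemental composition for C21H18Cl2CuN4O10 (Section 2.2).
# Illustrative helper; standard atomic masses, values rounded.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999, "Cl": 35.453, "Cu": 63.546}
FORMULA = {"C": 21, "H": 18, "Cl": 2, "Cu": 1, "N": 4, "O": 10}

fw = sum(ATOMIC_MASS[el] * n for el, n in FORMULA.items())
print(f"Fw = {fw:.2f} g/mol")                 # ~620.85, matching the quoted 620.84

for el in ("C", "H", "N"):
    pct = 100 * ATOMIC_MASS[el] * FORMULA[el] / fw
    print(f"{el}: {pct:.2f}%")                # close to calcd. C 40.59, H 2.90, N 9.02%
```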
Shortly, forty BALB/c mice exploited in the current paper were provided by the Model Animal Research Center of Shanghai Jiao Tong University. All the mice were maintained between 20 and 25°C, with free food and free water. The animals were subsequently classified as five groups, namely, compound treatment group and model group as well as the control group. In compound treatment and model groups, the bacteria were injected into animal to create the model of ventriculitis. Next, the compound was applied for finishing the treatment at 1, 2, and 5 mg/kg concentration. After finishing specific treatment, the respiratory mucus was gathered and the content of TNF-α and IL-1β in brain was tested via ELISA detection. ## 2.5. Bacterial Survival Genes The real time RT-PCR was employed for detecting the complex’s influence on the bacterial survival gene expression for the assessment. This study was implemented fully according to the protocols after mini change. Briefly, the bacteria were gathered and inoculated into the plates (96 well), the incubation was finished after adding compound with 10, 20, and 50 ng/ml. After specific treatment, the overall RNA in bacteria was extracted through utilizing the TRIZOL reagent. After testing its concentration, it was next reverse transcript into the cDNA. Eventually, the real-time RT-PCR was accomplished and the bacterial survival gene relative expression was detected. ## 3. Results and Discussion ### 3.1. Crystal Structure of Compound 1 The crystallographic study of single crystal X-ray suggested that the complex1’s fundamental unit contains a Cu(II) ion, a L ligand, a molecule of coordinated CH3CN, a coordinated H2O molecule, and a coordinated ClO4− anion, as well as one free ClO4− anion, and the structure of 1 shows a 0D isolated framework. According to Figure 1(a), Cu1 is 6-coordinated through four N atoms provided by a L ligand and one coordinated CH3CN molecule, and two O atoms come from a coordinated molecule of water and a ClO4− anion. Notably, two axial Cu-O distances (2.296(4)–2.584(4)Å) are longer than the Cu-N distances (1.920(4)–2.028(4)Å), thus the coordination geometry of Cu1 can be described as an elongated octahedron. Interestingly, the intermolecular H bonds between the ClO4− anions and coordinated molecules of water extend these 0D separated units into a 1-dimensional chain along direction b, as is revealed in Figure 1(b). In addition to O1w-H-O H bonds, there also are C-H-O H bonds between the L ligands and ClO4− anions from adjacent 1D supramolecular chain, and such type of hydrogen bonds finally connected adjacent 1D supramolecular chains together, leading to a 3-dimensional supramolecular structure (Figure 1(c)). The related hydrogen parameters are summarized in Table S2.Figure 1 (a) The coordination surroundings of Cu(II) ion in compound1. (b) The 1D chain structure constructed through the linkage of O1W-H…O H bonds. (c) The 1’s 3-dimensional supramolecular structure via the linkage of the C-H…O H bonds (dotted lines: H bonds). (a)(b)(c) ### 3.2. Powder X-Ray Diffraction Pattern (PXRD) and Thermogravimetric Analysis (TGA) To prove whether the as-synthesized bulk solids are in single phase, the research of PXRD was finished under RT. As exhibited in Figure2(a), there exist an excellent accordance between the patterns of simulation and experiment, suggesting the high purity of the obtained products.Figure 2 (a) The compound’s patterns of PXRD and (b) its TGA curve. 
### 3.3. UV-Vis Diffuse Reflectance Spectrum and Optical Band Gap of Compound 1

At RT, we measured the ultraviolet-visible diffuse reflectance spectra of the free L ligand and compound 1. As shown in Figure 3(a), the free L ligand possesses two major absorption bands at 355 nm and 302 nm, which can be assigned to π⟶π∗ transitions, and solid 1 also exhibits two major absorption bands, at 367 nm and 262 nm. Notably, the UV-vis absorption spectrum of 1 is similar to that of free L; hence, the ultraviolet absorption bands of 1 are attributed to intraligand π⟶π∗ transitions [18]. In addition, its semiconductive behavior was estimated from the optical band gap. Through a calculation using the Kubelka–Munk function F = (1 − R)²/2R (R: the absolute reflectance of the solid powders), the optical band gap (Eg) is 3.17 eV (Figure 3(b)), indicating that compound 1 may be a potential photocatalyst for the photodegradation of organic dye contaminants under UV light irradiation.

Figure 3: (a) The ultraviolet-visible absorption spectra of the free L ligand and complex 1 at RT. (b) Diffuse reflectance spectrum of Kubelka–Munk functions versus energy (eV) for 1.
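A minimal sketch of how a band gap of this kind can be read off reflectance data is shown below: the Kubelka–Munk function quoted above is computed from R, and Eg is taken from the absorption onset of F(R) versus photon energy. The synthetic data, onset threshold, and helper names are illustrative assumptions, not the authors' processing script; a Tauc plot is the more common graphical alternative.

```python
import numpy as np

def kubelka_munk(R):
    """Kubelka–Munk function F(R) = (1 - R)^2 / (2R) from absolute reflectance R."""
    R = np.asarray(R, dtype=float)
    return (1.0 - R) ** 2 / (2.0 * R)

def band_gap_from_onset(wavelength_nm, R, threshold=0.05):
    """Estimate Eg (eV) from the absorption onset of F(R); the onset criterion
    (fraction of max F) is an illustrative choice."""
    energy_eV = 1239.84 / np.asarray(wavelength_nm, dtype=float)   # E = hc / lambda
    F = kubelka_munk(R)
    onset = F >= threshold * F.max()
    return energy_eV[onset].min()

# Synthetic reflectance curve with an absorption edge near 390 nm (~3.2 eV), for illustration.
wl = np.linspace(250, 800, 400)
R = 0.05 + 0.9 / (1.0 + np.exp(-(wl - 390) / 10.0))
print(f"estimated Eg = {band_gap_from_onset(wl, R):.2f} eV")
```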
### 3.4. Photocatalytic Property of Compound 1

The photocatalytic property of compound 1 was analyzed with methylene blue (MB) as the model dye under ultraviolet light irradiation. In a typical experiment, a powder sample of complex 1 (50 mg) was ultrasonically dispersed into an aqueous MB solution (100 mL, 10 mg/L) and kept in the dark for half an hour. After the adsorption-desorption equilibrium was established, the suspension was exposed to UV light under continuous stirring. At intervals of 30 min, 3 mL of the reaction mixture was taken out and separated by centrifugation. The clear solution was then examined with the ultraviolet-visible spectrometer, and the MB characteristic absorption peak at 664 nm was chosen to monitor the photocatalytic process. When 1 was used as the photocatalyst, the MB characteristic absorption peak decreased steadily with increasing irradiation time (Figure 4(a)), and about 84.6% of the MB was photodegraded after 120 min of UV irradiation (Figure 4(b)). In contrast, the comparison experiment without any photocatalyst shows that the degradation efficiency of MB is only 9.2% under identical conditions (Figure 4(b)). The degradation efficiency in the presence of 1 is much higher than that without complex 1, demonstrating the excellent photocatalytic activity of 1 under UV light irradiation. After photocatalysis, the framework of 1 was not destroyed, as proved by the PXRD experiment (Figure 2(a)). In view of its good structural stability, the reusability was also investigated under the same conditions. Three cycling experiments were conducted using the recovered photocatalyst, and the degradation efficiency decreased only slightly, from 84.6% to 80.5% (Figure 4(c)), indicating that compound 1 has great potential for reuse as a photocatalyst.

Figure 4: (a) The ultraviolet-visible absorption spectra of the MB solution in the course of photodegradation using compound 1 as a photocatalyst. (b) The plots of C/C0 versus irradiation time with and without 1 as photocatalyst. (c) Three cycling runs with the recovered photocatalyst in the degradation of MB.

### 3.5. The Compound Significantly Reduced the Release of Inflammatory Cytokines into the Brain

Ventriculitis is generally accompanied by an upregulated release of inflammatory cytokines. Hence, after treatment with the abovementioned compound, ELISA detection was implemented and the content of inflammatory cytokines released into the brain was measured. As shown in Figure 5, the inflammatory cytokine content in the model group was much higher than in the control group. After treatment with the compound, the levels of inflammatory cytokines released into the brain were significantly decreased, and the biological activity of the new compound was dose-dependent.

Figure 5: Markedly decreased release of inflammatory cytokines into the brain after treatment with the above compound. Drug-resistant Gram-negative bacteria were used to induce ventriculitis in the animals; the compound was then administered at the indicated concentrations. The inflammatory cytokines released into the brain were measured by ELISA. Control refers to the normal animals, and model refers to the animals with ventriculitis. 1, 2, and 5 refer to the animals treated with the indicated doses of the compound.

### 3.6. The Compound Markedly Inhibits Bacterial Survival Gene Expression

In the preceding experiments, we demonstrated that this complex could be an outstanding candidate for treating ventriculitis by suppressing the release of inflammatory cytokines. However, the influence of the novel compound on the expression of the bacterial survival gene still needed to be investigated. Hence, real-time RT-PCR was carried out and the relative expression of the bacterial survival gene was determined. From Figure 6, we can see that this complex evidently decreased the bacterial survival gene expression. Consistent with the above results, the suppression by this compound also exhibited a dose dependence.

Figure 6: Evidently suppressed expression of the bacterial survival gene after treatment with the compound. The bacteria were treated with the compound at the indicated concentrations, and the relative expression of the bacterial survival genes was measured by real-time RT-PCR.
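For context, relative expression from real-time RT-PCR data of this kind is commonly summarized with the 2^(−ΔΔCt) method. The paper does not state which quantification scheme was used, so the sketch below, with invented Ct values and a hypothetical reference gene, is only an illustration of that standard calculation.

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Standard 2^(-ddCt) relative quantification (an assumption; the paper does not
    specify its scheme). Ct values are cycle thresholds from real-time RT-PCR."""
    d_ct_treated = ct_target - ct_ref
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    return 2.0 ** -(d_ct_treated - d_ct_control)

# Invented Ct values for a survival gene against a hypothetical reference gene (e.g., 16S rRNA).
print(relative_expression(ct_target=26.0, ct_ref=18.0,
                          ct_target_ctrl=24.0, ct_ref_ctrl=18.0))   # 0.25: ~4-fold suppression
```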
## 4. Conclusion

To sum up, we have constructed a 3-dimensional supramolecular Cu(II) compound with high thermostability. The suitable optical band gap (3.17 eV) of 1 gives it a high photocatalytic effect for MB degradation under UV light irradiation. ELISA detection revealed that this compound markedly decreases the release of inflammatory cytokines in a dose-dependent manner. Moreover, the expression of the bacterial survival gene was also suppressed by this complex. Ultimately, this compound could be a promising candidate for treating ventriculitis by suppressing the inflammatory response and bacterial survival.

--- *Source: 1006203-2022-05-17.xml*
--- ## Abstract A new Cu(II) compound, that is {[Cu(L)(ClO4)(H2O)(CH3CN)](ClO4)}n (1, L is 4-(furan-2-yl)-2,6-bis(pyridin-2-yl)pyridine), was solvothermally synthesized and structurally characterized. It features a 0D isolated unit, and these 0D isolated units are further hydrogen-bonded into a 3D supramolecular framework. The framework of 1 can be stable up to 284°C, and the optical band gap of 1 is 3.71 eV. Under the irradiation of ultraviolet light, complex 1 as a photocatalyst exhibits high activity for methylene blue (MB) degradation in the water solution. The above compound’s inhibitory activity on the inflammatory cytokines releasing was tested through exploiting ELISA detection. The real time RT-PCR was conducted to assess the compound’s suppression on the expression of bacterial survival gene. --- ## Body ## 1. Introduction Medical-related ventriculitis and meningitis are common complications of neurosurgery, which seriously affect the prognosis and outcome of patients [1]. Although most patients have acute onset during hospitalization, there are still a small number of patients who develop delayed infections even several years after they are discharged from the hospital [2]. So far, the therapeutic effects of neurosurgery and neurosurgery on medical-related ventriculitis and meningitis are not satisfactory. Thus, new candidates need to be developed for the treatment of ventriculitis.In the last several decades, much interest has been focused on designing and synthesizing the metal-organic frameworks (MOFs) owing to their underlying application values in magnetism, luminescence, heterogeneous catalysis, drug delivery, gas adsorption/separation, and so forth [3–6]. With the acceleration of industrialization, more and more toxic dye contaminants, which are hard to be biodegraded effectively, are discharged into the water system without any pretreatment, leading to sewer pollution that will cause lots of fatal diseases to human beings and other living things [7]. Recently, semiconductive MOFs, which not only feature a large surface area, high porosity, tunable pore structure, and coordinated unsaturated metal site but also can harvest solar energy effectively, have been successfully synthesized, and further utilized as photocatalysts in order to effectively degrade the abovementioned dyes into harmless small molecules under the irradiation of ultraviolet or visible light [8–12]. Therefore, the construction of more photoactive MOFs-based photocatalysts is urgent and indispensable. As we all know, the properties of MOFs are closely related with the structures of MOFs, organic ligands, and central metal ions [13, 14]. In order to obtain the photoactive MOFs, this requires us to choose an appropriate organic ligand with large conjugated groups that are favorable for the absorption of the solar energy [15, 16]. Considering that, in this work, we selected 4-(furan-2-yl)-2,6-bis(pyridin-2-yl)pyridine (L) (Scheme 1), which has large conjugated system and four potential coordination sites, as an organic building block for assembling with the Cu(ClO4)2·6H2O under conditions of solvothermal. In success, we gathered a novel Cu(II) complex showing a 0D isolated unit. These 0D isolated units arefinally combined into a 3-dimensional supramolecular framework via intermolecular H bonds. More importantly, this compound is highly thermostable and exhibits high photocatalytic effect in MB degradation in the water solution. 
A variety of biological researches were finished to assess the novel complex’s biological activity synthesized in this research.Scheme 1 The chemical structure of L ligand. ## 2. Experimental ### 2.1. Materials and Instrumentation All starting chemicals were commercially available in analytical grade and were utilized with no modifications. A Perkin-Elmer 240°C was employed for conducting the N, H together with C elemental analysis. A Rigaku D/Max-2500PC with 1.54056 Å Cu/Kα radiation with 0.05° step size was applied for collecting the patterns of PXRD. Thermostable experiment was implemented with the NETZSCH TG 209 with 2°C/min heating rate under the nitrogen flow. The UV-vis diffuse reflectance spectra data in solid state were gathered through employing Puxi TU-1901 ultraviolet-visible spectrophotometer, where BaSO4 plate was used as a standard. ### 2.2. Synthesis of {[Cu(L)(ClO4)(H2O)(CH3CN)](ClO4)}n(1) The mixture of 0.1 mmol Cu(ClO4)2·6H2O, 0.1 mmol L, 3 mL CH3CN, and 4.0 mL CH3OH was placed into a stainless reactor (25 mL) lined by Teflon, which was heated under a temperature of 135°C for 3 days. The 1’s blue massive crystals were collected after cooling the above mixture to RT at 2°C per min decreasing rate, and with 38 percent yielding according to Cu(ClO4)2·6H2O. Elemental analysis calcd. for C21H18Cl2CuN4O10 (620.84 g/mol): N, 9.02, C, 40.59, and H, 2.90%. Found: N, 8.98, C, 40.57, and H, 2.92%. ### 2.3. X-Ray Structural Determination On a Rigaku Mercury CCD diffractometer that with 0.71073 Å Mo-Kα radiation, the single-crystal architecture of compound 1 was detected at RT. The direct mean together with the full-matrix least squares was, respectively, applied for the generation and modification of crystal structure with SHELXTL embedded in OLEX2 [17]. All the nonhydrogen atoms were anisotropically refined, and the H atoms were fixed in the calculated sites. The complex 1’s crystallographic parameters together with its refinement parameters are listed in Table 1. The parameters of bond about central Cu(II) ion are summarized in Table S1.Table 1 The complex1’s crystal data. FormulaC21H18Cl2CuN4O10Fw620.84Crystal systemTriclinicSpace groupP-1a (Å)8.1922 (4)b (Å)10.7290 (7)c (Å)14.2825 (10)α (°)92.748 (6)β (°)100.469 (5)γ (°)96.407 (5)Volume (Å3)1223.75 (13)Z2Density (calculated)1.685Abs. coeff. (mm−1)1.175Total reflections8189Unique reflections4298Goodness of fit onF21.029FinalR indices (I > 2sigma(I2))R = 0.0559, wR2 = 0.1494R (all data)R = 0.0729, wR2 = 0.1639CCDC2164492 ### 2.4. Inflammatory Cytokines Determination The ELISA detection was implemented in the current work to investigate the compound’s inhibitory activity on the TNF-α and IL-1β content. This research was accomplished strictly following the instructions along with some changes. Shortly, forty BALB/c mice exploited in the current paper were provided by the Model Animal Research Center of Shanghai Jiao Tong University. All the mice were maintained between 20 and 25°C, with free food and free water. The animals were subsequently classified as five groups, namely, compound treatment group and model group as well as the control group. In compound treatment and model groups, the bacteria were injected into animal to create the model of ventriculitis. Next, the compound was applied for finishing the treatment at 1, 2, and 5 mg/kg concentration. After finishing specific treatment, the respiratory mucus was gathered and the content of TNF-α and IL-1β in brain was tested via ELISA detection. ### 2.5. 
Bacterial Survival Genes The real time RT-PCR was employed for detecting the complex’s influence on the bacterial survival gene expression for the assessment. This study was implemented fully according to the protocols after mini change. Briefly, the bacteria were gathered and inoculated into the plates (96 well), the incubation was finished after adding compound with 10, 20, and 50 ng/ml. After specific treatment, the overall RNA in bacteria was extracted through utilizing the TRIZOL reagent. After testing its concentration, it was next reverse transcript into the cDNA. Eventually, the real-time RT-PCR was accomplished and the bacterial survival gene relative expression was detected. ## 2.1. Materials and Instrumentation All starting chemicals were commercially available in analytical grade and were utilized with no modifications. A Perkin-Elmer 240°C was employed for conducting the N, H together with C elemental analysis. A Rigaku D/Max-2500PC with 1.54056 Å Cu/Kα radiation with 0.05° step size was applied for collecting the patterns of PXRD. Thermostable experiment was implemented with the NETZSCH TG 209 with 2°C/min heating rate under the nitrogen flow. The UV-vis diffuse reflectance spectra data in solid state were gathered through employing Puxi TU-1901 ultraviolet-visible spectrophotometer, where BaSO4 plate was used as a standard. ## 2.2. Synthesis of {[Cu(L)(ClO4)(H2O)(CH3CN)](ClO4)}n(1) The mixture of 0.1 mmol Cu(ClO4)2·6H2O, 0.1 mmol L, 3 mL CH3CN, and 4.0 mL CH3OH was placed into a stainless reactor (25 mL) lined by Teflon, which was heated under a temperature of 135°C for 3 days. The 1’s blue massive crystals were collected after cooling the above mixture to RT at 2°C per min decreasing rate, and with 38 percent yielding according to Cu(ClO4)2·6H2O. Elemental analysis calcd. for C21H18Cl2CuN4O10 (620.84 g/mol): N, 9.02, C, 40.59, and H, 2.90%. Found: N, 8.98, C, 40.57, and H, 2.92%. ## 2.3. X-Ray Structural Determination On a Rigaku Mercury CCD diffractometer that with 0.71073 Å Mo-Kα radiation, the single-crystal architecture of compound 1 was detected at RT. The direct mean together with the full-matrix least squares was, respectively, applied for the generation and modification of crystal structure with SHELXTL embedded in OLEX2 [17]. All the nonhydrogen atoms were anisotropically refined, and the H atoms were fixed in the calculated sites. The complex 1’s crystallographic parameters together with its refinement parameters are listed in Table 1. The parameters of bond about central Cu(II) ion are summarized in Table S1.Table 1 The complex1’s crystal data. FormulaC21H18Cl2CuN4O10Fw620.84Crystal systemTriclinicSpace groupP-1a (Å)8.1922 (4)b (Å)10.7290 (7)c (Å)14.2825 (10)α (°)92.748 (6)β (°)100.469 (5)γ (°)96.407 (5)Volume (Å3)1223.75 (13)Z2Density (calculated)1.685Abs. coeff. (mm−1)1.175Total reflections8189Unique reflections4298Goodness of fit onF21.029FinalR indices (I > 2sigma(I2))R = 0.0559, wR2 = 0.1494R (all data)R = 0.0729, wR2 = 0.1639CCDC2164492 ## 2.4. Inflammatory Cytokines Determination The ELISA detection was implemented in the current work to investigate the compound’s inhibitory activity on the TNF-α and IL-1β content. This research was accomplished strictly following the instructions along with some changes. Shortly, forty BALB/c mice exploited in the current paper were provided by the Model Animal Research Center of Shanghai Jiao Tong University. All the mice were maintained between 20 and 25°C, with free food and free water. 
The animals were subsequently classified as five groups, namely, compound treatment group and model group as well as the control group. In compound treatment and model groups, the bacteria were injected into animal to create the model of ventriculitis. Next, the compound was applied for finishing the treatment at 1, 2, and 5 mg/kg concentration. After finishing specific treatment, the respiratory mucus was gathered and the content of TNF-α and IL-1β in brain was tested via ELISA detection. ## 2.5. Bacterial Survival Genes The real time RT-PCR was employed for detecting the complex’s influence on the bacterial survival gene expression for the assessment. This study was implemented fully according to the protocols after mini change. Briefly, the bacteria were gathered and inoculated into the plates (96 well), the incubation was finished after adding compound with 10, 20, and 50 ng/ml. After specific treatment, the overall RNA in bacteria was extracted through utilizing the TRIZOL reagent. After testing its concentration, it was next reverse transcript into the cDNA. Eventually, the real-time RT-PCR was accomplished and the bacterial survival gene relative expression was detected. ## 3. Results and Discussion ### 3.1. Crystal Structure of Compound 1 The crystallographic study of single crystal X-ray suggested that the complex1’s fundamental unit contains a Cu(II) ion, a L ligand, a molecule of coordinated CH3CN, a coordinated H2O molecule, and a coordinated ClO4− anion, as well as one free ClO4− anion, and the structure of 1 shows a 0D isolated framework. According to Figure 1(a), Cu1 is 6-coordinated through four N atoms provided by a L ligand and one coordinated CH3CN molecule, and two O atoms come from a coordinated molecule of water and a ClO4− anion. Notably, two axial Cu-O distances (2.296(4)–2.584(4)Å) are longer than the Cu-N distances (1.920(4)–2.028(4)Å), thus the coordination geometry of Cu1 can be described as an elongated octahedron. Interestingly, the intermolecular H bonds between the ClO4− anions and coordinated molecules of water extend these 0D separated units into a 1-dimensional chain along direction b, as is revealed in Figure 1(b). In addition to O1w-H-O H bonds, there also are C-H-O H bonds between the L ligands and ClO4− anions from adjacent 1D supramolecular chain, and such type of hydrogen bonds finally connected adjacent 1D supramolecular chains together, leading to a 3-dimensional supramolecular structure (Figure 1(c)). The related hydrogen parameters are summarized in Table S2.Figure 1 (a) The coordination surroundings of Cu(II) ion in compound1. (b) The 1D chain structure constructed through the linkage of O1W-H…O H bonds. (c) The 1’s 3-dimensional supramolecular structure via the linkage of the C-H…O H bonds (dotted lines: H bonds). (a)(b)(c) ### 3.2. Powder X-Ray Diffraction Pattern (PXRD) and Thermogravimetric Analysis (TGA) To prove whether the as-synthesized bulk solids are in single phase, the research of PXRD was finished under RT. As exhibited in Figure2(a), there exist an excellent accordance between the patterns of simulation and experiment, suggesting the high purity of the obtained products.Figure 2 (a) The compound’s patterns of PXRD and (b) its TGA curve. (a)(b)The structural stability of1 was also estimated by the TGA experiment under a nitrogen atmosphere, as is shown in Figure 2(b). The 1’s structure exhibited a two-step weightlessness between 30 and 800°C. 
### 3.2. Powder X-Ray Diffraction (PXRD) and Thermogravimetric Analysis (TGA)

To verify whether the as-synthesized bulk solids are single phase, PXRD was carried out at RT. As shown in Figure 2(a), the experimental pattern agrees very well with the simulated one, indicating the high phase purity of the obtained products.

Figure 2: (a) PXRD patterns of the compound and (b) its TGA curve.

The structural stability of 1 was also evaluated by TGA under a nitrogen atmosphere, as shown in Figure 2(b). Compound 1 exhibited two weight-loss steps between 30 and 800°C. The first, from 80 to 114°C, is attributed to the loss of the coordinated water and CH3CN molecules (obsd: 9.48%, calcd: 9.51%), and the second, in the range of 284–467°C, is attributed to the decomposition of the organic ligands. The final residue of 10.19% corresponds well to the formation of metallic copper (calcd: 10.23%).
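These calculated percentages follow directly from the formula weight of 1 (620.84 g/mol); the following back-of-the-envelope figures, using nominal molecular masses, are given only to show where they come from:

\[
\frac{M(\mathrm{H_2O}) + M(\mathrm{CH_3CN})}{M(\mathbf{1})} = \frac{18.02 + 41.05}{620.84} \approx 9.5\%,\qquad
\frac{M(\mathrm{Cu})}{M(\mathbf{1})} = \frac{63.55}{620.84} \approx 10.2\%,
\]

consistent with the quoted calcd values of 9.51% (coordinated H2O + CH3CN) and 10.23% (copper residue).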
### 3.3. UV-Vis Diffuse Reflectance Spectrum and Optical Band Gap of Compound 1

The UV-vis diffuse reflectance spectra of the free L ligand and compound 1 were measured at RT. As shown in Figure 3(a), the free L ligand possesses two major absorption bands at 355 nm and 302 nm, which can be assigned to π→π* transitions, and solid 1 also exhibits two major absorption bands, at 367 nm and 262 nm. Notably, the UV-vis absorption spectrum of 1 is similar to that of free L; hence, the ultraviolet absorption bands of 1 are attributed to intraligand π→π* transitions [18]. In addition, the semiconductive behavior of 1 was estimated from its optical band gap. Using the Kubelka–Munk function F = (1 − R)²/2R (R: the absolute reflectance of the solid powder), the optical band gap (Eg) was determined to be 3.17 eV (Figure 3(b)), indicating that compound 1 may be a potential photocatalyst for the degradation of organic dye contaminants under UV irradiation.

Figure 3: (a) UV-vis absorption spectra of the free L ligand and complex 1 at RT. (b) Kubelka–Munk function versus energy (eV) derived from the diffuse reflectance spectrum of 1.
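As an illustration of how a band gap of this kind is typically extracted from diffuse reflectance data, a minimal sketch of the Kubelka–Munk transform and a Tauc-type extrapolation is given below. The reflectance data, the fitting window, and the assumption of a direct-allowed transition are all illustrative placeholders, not values taken from the original measurement.

```python
import numpy as np

# Illustrative reflectance spectrum: wavelength (nm) and absolute reflectance R (0-1).
# Real data would normally be loaded from the spectrophotometer export instead.
wavelength_nm = np.linspace(250, 800, 551)
R = 0.05 + 0.9 / (1.0 + np.exp((391.0 - wavelength_nm) / 15.0))  # synthetic absorption edge

# Kubelka-Munk function F(R) = (1 - R)^2 / (2R), proportional to the absorption coefficient.
F = (1.0 - R) ** 2 / (2.0 * R)

# Photon energy in eV (E = hc/lambda with hc = 1239.84 eV*nm).
E = 1239.84 / wavelength_nm

# Tauc-type quantity; the exponent 2 assumes a direct-allowed transition (an assumption here).
tauc = (F * E) ** 2

# Fit the steep part of the edge and extrapolate to the energy axis; the window is chosen
# by eye for the synthetic data and would be adjusted for a real spectrum.
edge = (E > 3.2) & (E < 3.5)
slope, intercept = np.polyfit(E[edge], tauc[edge], 1)
Eg = -intercept / slope  # intersection with (F*E)^2 = 0 gives the optical band gap estimate
print(f"Estimated optical band gap: {Eg:.2f} eV")
```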
### 3.4. Photocatalytic Property of Compound 1

The photocatalytic property of compound 1 was evaluated using methyl blue (MB) as the model dye under UV irradiation. In a typical experiment, a powder sample of complex 1 (50 mg) was ultrasonically dispersed in 100 mL of an aqueous MB solution (10 mg/L) and kept in the dark for half an hour. After the adsorption-desorption equilibrium was established, the suspension was exposed to UV light under continuous stirring. At 30 min intervals, 3 mL of the reaction mixture was withdrawn and separated by centrifugation. The clear solution was then analyzed with the UV-vis spectrometer, and the characteristic MB absorption peak at 664 nm was used to monitor the photocatalytic process. When 1 was used as the photocatalyst, the characteristic MB absorption peak decreased progressively with increasing irradiation time (Figure 4(a)), and about 84.6% of the MB was photodegraded after 120 min of UV irradiation (Figure 4(b)). In contrast, the control experiment without any photocatalyst showed a MB degradation efficiency of only 9.2% under identical conditions (Figure 4(b)). The much higher degradation efficiency in the presence of 1 demonstrates its excellent photocatalytic activity under UV irradiation. The framework of 1 was not destroyed during photocatalysis, as confirmed by the PXRD experiment (Figure 2(a)). In view of its good structural stability, the reusability of 1 was also investigated under the same conditions. Three cycling experiments were conducted with the recovered photocatalyst, and the degradation efficiency decreased only slightly, from 84.6% to 80.5% (Figure 4(c)), indicating that compound 1 has considerable potential for reuse as a photocatalyst.

Figure 4: (a) UV-vis absorption spectra of the MB solution during photodegradation using compound 1 as the photocatalyst. (b) Plots of C/C0 versus irradiation time with and without 1 as the photocatalyst. (c) Three cycling runs of MB degradation with the recovered photocatalyst.
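A small sketch of the C/C0 bookkeeping behind these degradation figures, assuming the MB concentration is proportional to its absorbance at 664 nm (dilute Beer–Lambert regime); the absorbance values below are illustrative placeholders rather than the measured data:

```python
# Degradation efficiency from the 664 nm absorbance of the sampled aliquots.
# Assumes C/C0 = A/A0; the absorbance values are placeholders, not measured data.
times_min = [0, 30, 60, 90, 120]
absorbance_664 = [1.000, 0.720, 0.450, 0.260, 0.154]  # illustrative values only

a0 = absorbance_664[0]
for t, a in zip(times_min, absorbance_664):
    c_ratio = a / a0                      # C/C0 at time t
    efficiency = (1.0 - c_ratio) * 100.0  # percentage of MB degraded
    print(f"t = {t:3d} min   C/C0 = {c_ratio:.3f}   degraded = {efficiency:.1f}%")

# With the final placeholder value this evaluates to 84.6%, matching the efficiency
# reported for compound 1 after 120 min of UV irradiation.
```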
### 3.5. The Compound Significantly Reduced Inflammatory Cytokine Release into the Brain

Ventriculitis is generally accompanied by upregulated release of inflammatory cytokines. Hence, after treatment with the above compound, ELISA was performed to measure the content of inflammatory cytokines released into the brain. As shown in Figure 5, the inflammatory cytokine content in the model group was much higher than in the control group. After treatment with the compound, the levels of inflammatory cytokines released into the brain were significantly decreased, and this biological activity was dose dependent.

Figure 5: Markedly decreased release of inflammatory cytokines into the brain after treatment with the compound. Drug-resistant Gram-negative bacteria were used to induce ventriculitis in the animals, and the compound was then administered at the indicated concentrations. The inflammatory cytokines released into the brain were measured by ELISA. Control refers to normal animals, model refers to animals with ventriculitis, and 1, 2, and 5 refer to animals treated with the compound at the indicated doses (mg/kg).

### 3.6. The Compound Markedly Inhibits Bacterial Survival Gene Expression

In the previous section, we demonstrated that this complex could be an outstanding candidate for treating ventriculitis by suppressing inflammatory cytokine release. However, the influence of the new compound on the expression of bacterial survival genes still needed to be investigated. Hence, real-time RT-PCR was carried out and the relative expression of the bacterial survival genes was determined. As shown in Figure 6, the complex evidently decreased bacterial survival gene expression. Consistent with the results above, this suppression was also dose dependent.

Figure 6: Markedly suppressed expression of bacterial survival genes after treatment with the compound. The bacteria were treated with the compound at the indicated concentrations, and the relative expression of the bacterial survival genes was measured by real-time RT-PCR.
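The text does not state how the relative expression values were calculated; a common choice for this kind of real-time RT-PCR readout is the 2^(−ΔΔCt) method, sketched below with a hypothetical reference gene and placeholder Ct values:

```python
# Relative expression by the 2^(-ddCt) method (a common convention; the original text does
# not specify the calculation). Gene names and Ct values below are hypothetical placeholders.
def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """Fold change of a target gene versus the untreated control, normalized to a reference gene."""
    delta_ct_treated = ct_target - ct_reference
    delta_ct_control = ct_target_ctrl - ct_reference_ctrl
    return 2.0 ** (-(delta_ct_treated - delta_ct_control))

# Example: a bacterial survival gene after treatment with 50 ng/ml of the compound.
fold = relative_expression(ct_target=26.4, ct_reference=18.1,
                           ct_target_ctrl=24.0, ct_reference_ctrl=18.0)
print(f"Relative survival-gene expression vs. control: {fold:.2f}")  # < 1 indicates suppression
```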
## 4. Conclusion

In summary, we have prepared a three-dimensional supramolecular Cu(II) compound with high thermal stability. The suitable optical band gap of 1 (3.17 eV) endows it with high photocatalytic activity for MB degradation under UV irradiation. ELISA showed that this compound markedly decreased the release of inflammatory cytokines in a dose-dependent manner. Moreover, the expression of the bacterial survival genes was also suppressed by this complex. Overall, this compound could be a promising candidate for treating ventriculitis by suppressing the inflammatory response and bacterial survival.

--- *Source: 1006203-2022-05-17.xml*
# Drug Reaction with Eosinophilia and Systemic Symptoms Syndrome in a Child with Cystic Fibrosis **Authors:** Ahmed Abushahin; Haneen Toma; Sara G. Hamad; Mutasim Abu-Hasan **Journal:** Case Reports in Immunology (2023) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2023/1006376 --- ## Abstract Background. Drug reaction with eosinophilia and systemic symptoms (DRESSs) syndrome is an idiosyncratic drug-induced reaction that rarely occurs in children but can lead to serious complications. It manifests most commonly with fever, extensive skin eruptions, and eosinophilia. Symptoms typically develop two to six weeks after the initiation of the inciting drug. Visceral organ involvement especially the liver can also occur and if not recognized early and the inciting drug is not stopped immediately, it can lead to liver failure. Therefore, early diagnosis is important but can be very challenging because of disease rarity, lack of a diagnostic test, and its overlap with other common pediatric allergic and infectious conditions. Case Presentation. A 2.5-year-old boy with known diagnosis of cystic fibrosis, bilateral bronchiectasis, pancreatic insufficiency, and chronic airway colonization with Pseudomonas aeruginosa was admitted to our hospital with acute pulmonary exacerbation of CF lung disease. He was treated with intravenous piperacillin-tazobactam and intravenous amikacin in addition to airway clearance. On day 18 of treatment, the patient developed high grade fever followed by diffuse erythematous and pruritic maculopapular rash. Blood tests showed high eosinophilia, high C-reactive protein (CRP), and high liver enzymes levels. The clinical features and the laboratory findings were consistent with the DRESS syndrome. Therefore, all antibiotics were discontinued. Progressive resolution of the symptoms was observed within two days. Laboratory abnormalities were also normalized in the follow-up clinic visit 4 months later. Conclusion. Our case demonstrates the importance of early recognition of the DRESS syndrome in children who develop fever and skin rashes with eosinophilia while undergoing long-term antibiotic treatment. Prompt discontinuation of the offending drug is the cornerstone therapy and results in the resolution of symptoms and prevention of serious complications. --- ## Body ## 1. Background Drug reaction with eosinophilia and systemic symptoms (DRESSs) is a very rare but potentially severe drug-induced hypersensitivity reaction that can occur in children and adults [1]. The pathophysiology of DRESS is not completely characterized, but it is hypothesized to be multifactorial and results from a delayed T-cell-dependent allergic reaction to an inciting drug [2].Patients with the DRESS syndrome usually present with fever, skin eruptions, and eosinophilia within days to weeks of drug exposure. The liver, the kidney, and the lung injury can also occur [3]. DRESS may rarely affect the heart but is associated with high mortality [4]. The degree of symptoms and the extent of organ involvement in patients with the DRESS syndrome can range from mild to severe. Substantial mortality can result from a severe disease and estimated at approximately 5% of all affected children and 10% of all affected adults [1, 5]. Death in patients with severe DRESS syndrome occurs mainly due to liver failure. 
Therefore, early recognition of the condition and immediate discontinuation of the inciting drug are paramount. The diagnosis of DRESS syndrome can be easily overlooked, especially in children, because of its rarity and because of its overlap with other more common pediatric allergic, autoimmune, and infectious conditions [1, 6]. Therefore, clinicians should be aware of this condition in order to effectively treat the disease and prevent the development of serious complications. We present here a case of a child with cystic fibrosis who was hospitalized with CF-related pulmonary exacerbations and developed acute symptoms and laboratory abnormalities characteristic of DRESS syndrome 18 days after the initiation of intravenous piperacillin-tazobactam and amikacin.

## 2. Case Presentation

The patient is a 2.5-year-old boy with a confirmed diagnosis of cystic fibrosis and multiple CF-related comorbidities including pancreatic insufficiency, bilateral bronchiectasis, and chronic airway colonization with Pseudomonas aeruginosa. He was admitted to our hospital for acute pulmonary exacerbation of CF lung disease after presenting with fever and cough for 2 days. The patient was started on intravenous piperacillin-tazobactam (100 mg/kg/day divided three times daily) and amikacin (30 mg/kg/day once daily) based on the results of a previously obtained induced-sputum culture. There was no previous history of any drug reaction to treatment with the same antibiotics. The home therapy of airway clearance, oral pancreatic-enzyme replacement therapy, and nutritional supplements was continued. The patient's respiratory symptoms gradually improved with treatment. However, on day 18 of intravenous antibiotic treatment, the patient developed a high-grade fever (up to 39°C), followed by a diffuse erythematous maculopapular pruritic rash a day later. The rash initially involved his face and his trunk but rapidly spread to his entire body. Facial edema was also noted. The rest of the physical examination was unremarkable. No lymph node enlargement, mucous membrane involvement, or organomegaly was noted. Complete blood count (CBC) showed a low white cell count (4.1 × 10⁹/L), low absolute neutrophil count (0.9 × 10⁹/L), and low platelet count (92 × 10⁹/L). However, the WBC differential showed a significantly elevated eosinophil count, with an absolute count of 1.8 × 10⁹/L. C-reactive protein (CRP) levels were also high (117 mg/L). Liver enzyme levels were elevated, with aspartate aminotransferase (AST) and alanine aminotransferase (ALT) levels up to 178 IU/L and 686 IU/L, respectively (>12 times the upper normal limit). Renal function tests, including BUN and creatinine, as well as electrolyte levels, were all normal. Blood culture results were negative. Serum polymerase chain reaction (PCR) was positive for both Epstein–Barr virus (EBV) and cytomegalovirus (CMV) but negative for human herpes virus-6 (HHV-6), chlamydia, and Mycoplasma pneumoniae. The respiratory viral-PCR panel was negative for adenovirus, human metapneumovirus, influenza A-B, MERS-coronavirus, and parainfluenza-1–. The DRESS syndrome was suspected based on the patient's symptoms and laboratory findings in the context of prolonged administration of intravenous antibiotics. The diagnosis was also supported by the Registry of Severe Cutaneous Adverse Reactions (RegiSCAR) scoring system. Our patient had a total score of 5 (Table 1). Therefore, antibiotics were immediately discontinued.
We also elected symptomatic treatment with diphenhydramine and close monitoring only, since there was no evidence of pulmonary, cardiac, or renal involvement, and since the hepatic involvement was mild. The patient showed gradual resolution of the fever and the skin rash within two days of antibiotic removal. At the same time, the laboratory abnormalities also improved significantly, with a CRP of 35 mg/L, ALT of 94 IU/L, and AST of 90 IU/L.

Table 1: Patient's diagnosis of DRESS syndrome based on the RegiSCAR system.

| Clinical parameter | Present (yes/no) | Patient score | RegiSCAR scoring |
| --- | --- | --- | --- |
| Fever (≥38.5°C) | Yes | 0 | No/unknown = −1; yes = 0 |
| Lymphadenopathy (>1 cm, two sites) | No | 0 | Yes = 1; no/unknown = 0 |
| Eosinophilia | Yes | 2 | 0.7–1.49 × 10⁹/L = 1; ≥1.5 × 10⁹/L = 2 |
| Atypical lymphocytes | Unknown | 0 | Yes = 1; no/unknown = 0 |
| Skin rash: extent ≥50% of BSA | Yes | 1 | |
| Skin rash suggestive of DRESS (≥2 of: facial edema, purpura, infiltration, desquamation) | Yes | 1 | No = −1; unknown = 0; yes = 1 |
| Organ involvement | Yes | 1 | 1 point per involved organ, maximum 2 |
| Disease duration ≥15 days | No | −1 | No/unknown = −1; yes = 0 |
| Skin biopsy suggesting DRESS | Not applicable | 0 | No = −1; yes/unknown = 0 |
| Exclusion of other causes | Yes | 1 | 1 point if ≥3 of the following were negative: HAV, HBV, HCV, EBV, mycoplasma, chlamydia, ANA, blood culture |
| Total score | | 5 | |

Total score <2 = no case; 2–3 = possible case; 4–5 = probable case; >5 = definite case. BSA = body surface area; HAV = hepatitis A virus; HBV = hepatitis B virus; HCV = hepatitis C virus; EBV = Epstein–Barr virus; ANA = antinuclear antibodies.

The drastic clinical and laboratory improvements after stopping the antibiotics further supported the diagnosis of DRESS syndrome over other overlapping diseases of infectious, hematological, or autoimmune etiology in children. The skin patch test and the lymphocyte proliferation test were not performed. The patient was eventually discharged on day 24 of admission in good condition and without any complications. The patient remained asymptomatic during the follow-up visits two months after discharge. All laboratory tests were completely normal on the follow-up visit at 4 months. Trends in the relevant laboratory findings are summarized in Table 2, and the timeline displaying the patient's course is shown in Figure 1.

Table 2: The trend in relevant laboratory findings in our patient during hospitalization and after discharge.

| Laboratory test (normal values) | On admission | At diagnosis | Two days after antibiotic removal | Fourth-month follow-up visit |
| --- | --- | --- | --- | --- |
| WBC (4–14 × 10⁹/L) | 15.9 | 4.1 | 11 | 13.9 |
| Eosinophils (0.1–0.7 × 10⁹/L) | 0.4 | 1.8 | 1.2 | 0.5 |
| CRP (<5 mg/L) | 1.4 | 117 | 36 | 4.6 |
| ALT (10–25 IU/L) | 16 | 178 | 92 | 21 |
| AST (23–46 IU/L) | 35 | 686 | 90 | 40 |

WBC, white blood cell; CRP, C-reactive protein; ALT, alanine transaminase; AST, aspartate aminotransferase.

Figure 1: Patient timeline. The timeline displays the patient's course beginning from the initiation of the intravenous medications, through the subsequent symptoms, to the dramatic clinical resolution of symptoms after removal of the offending drugs.

## 3. Discussion and Conclusions

We presented a rare case of a child with DRESS syndrome, which is a potentially life-threatening drug-induced hypersensitivity reaction with multisystem involvement. The diagnosis was made very early, and the symptoms and laboratory abnormalities resolved soon after the inciting drugs were stopped [1]. DRESS syndrome is very rare, with a reported prevalence of 2.8 per 100,000 in adults [7].
In the pediatric population, only isolated cases and few retrospective studies have been reported [8–10]. Therefore, its actual incidence in children has not yet been established. In general, DRESS syndrome occurs less frequently in children than adults and has a better prognosis, with a significantly lower mortality in children (5%) than in adults (10%) [1, 6].DRESS syndrome is believed to result from drug‐specific T cell activation of eosinophils that appears within days to weeks of exposure to the culprit drug and leads to multisystemic manifestation [1, 2].Approximately, 50 culprit drugs have been implicated in the development of DRESS syndrome, with anticonvulsants and antibiotics being the most common inciting agents [1, 3]. Kim et al., in a recent systematic review, identified the culprit medications in most pediatric cases as that in adult cases. Antiepileptic medications were the most implicated drugs (in 52.6% of cases), including carbamazepine, phenytoin, and lamotrigine, followed by antibiotics (33%) such as amoxicillin/clavulanate, vancomycin, and dapsone [6]. In our case, piperacillin-tazobactam and amikacin were the potential inciting drugs for DRESS syndrome. Moris et al. reported piperacillin-tazobactam in only two of the 103 cases of DRESS syndrome (2%) but was rarely reported by others [1, 5, 11]. To the best of our knowledge, amikacin has not been reported as a triggering drug for DRESS syndrome. Accordingly, DRESS syndrome in our case was very likely triggered by piperacillin-tazobactam. Causality can be supported by demonstrating a positive reaction to piperacillin-tazobactam using skin patch tests and/or in vitro tests (e.g., the lymphocyte proliferation test), both of which were not performed in our case.The exact pathogenesis of DRESS syndrome is not well-characterized. A complex interaction between genetic susceptibility, aberrant drug-detoxification pathways, and the immune system, leading to a vigorous T cell-mediated hypersensitivity response to a specific drug, has been hypothesized [1, 2]. Studies on delayed drug reactions suggest that drug-specific CD4+ and CD8+ T-cell activation can preferentially stimulate and recruit eosinophils by releasing certain cytokines, such as IL5 [12]. Furthermore, the reactivation of several herpes viruses (HHV-6, HHV-7), EBV, and CMV, which coincide with the clinical symptoms of DRESS syndrome, has also been linked to its pathogenesis [12–16]. EBV- and CMV-PCR were positive in our case, indicating possible viral activation.DRESS syndrome has a broad spectrum of clinical features that usually appear two to six weeks after exposure to an offending drug [5, 17]. Kim et al. demonstrated that the average time from drug exposure to the onset of symptoms in children with DRESS syndrome was 23.2 days (range: 0.42–112 days) [6]. The average age of diagnosis was 8.7 years old [6, 17].DRESS syndrome displays a distinct phenotype characterized by a fever ≥38.5°C in 96%–100% of the cases, usually preceded by a skin eruption [6, 17]. Cutaneous eruptions present in 85%–100% of cases as diffuse, pruritic, or nonpruritic maculopapular/morbilliform. Other eruptions may be described as targetoid, urticarial, pustular, blistering, lichenoid, exfoliative, or eczematous lesions [1, 5, 17].Lymphadenopathies are frequently described in 80% of the cases with DRESS syndrome [1]. Among the visceral organs, the liver is the most commonly involved (50%–84% of cases). 
The liver involvement ranges from transient elevation of enzymes to fulminant hepatic failure, which is the primary cause of death in DRESS syndrome [5, 6]. Kidney injury (11%–57% of cases) can also range from proteinuria to renal failure [6, 17]. The lung involvement (2.6%–5% of cases) ranges from interstitial pneumonitis to acute respiratory distress syndrome [6, 18]. Cardiac involvement (i.e., myocarditis) has been rarely reported in cases of DRESS, and it was associated with a 45.2% mortality rate [4]. Radovanovic et al. reported that abnormal electrocardiography was found in 71.4% of the patients, and a depressed left ventricular ejection fraction was found in 45% of the patients with cardiac involvement [4]. Other systems, including the gastrointestinal tract (i.e., colitis or pancreatitis) and the central nervous system (i.e., encephalitis), have been less frequently involved [5, 6, 17]. Hematological abnormalities are commonly associated with this condition, including leukocytosis with peripheral eosinophilia, atypical lymphocytosis, and thrombocytopenia. Eosinophilia is typically the predominant abnormality and is found in 82%–95% of the cases [1, 5, 6]. These different clinical manifestations have been attributed to the specific chemical properties of each drug or its reactive metabolites. Owing to the heterogeneity of the clinical presentation, other disease mimickers should be excluded. These mimickers include, but are not limited to, toxic shock syndrome, Stevens–Johnson syndrome, viral infections, infectious mononucleosis (particularly after amoxicillin exposure), and pseudolymphoma [1, 19]. The diagnosis of DRESS syndrome may be challenging. There is no diagnostic test for the disease, and the diagnosis is mainly clinical. The Registry of Severe Cutaneous Adverse Reactions (RegiSCAR) scoring system [2] is used to support the diagnosis of DRESS syndrome and is based on the clinical manifestations, the organs affected, and the clinical course [20]. Our patient exhibited most of the clinical features of DRESS syndrome, including a fever ≥38.5°C, a skin rash extending over more than 50% of the body surface, peripheral eosinophilia, and liver injury 18 days following the initiation of an antibiotic treatment regimen. After excluding other potential causes, the RegiSCAR scoring system was used, and our patient had a total score of 5, fulfilling the criteria for a probable diagnosis of DRESS, as shown in Table 1. Due to the rarity and clinical heterogeneity of DRESS syndrome, evidence-based management guidelines are lacking, and management is primarily based on expert opinion [1, 12]. The prompt identification and withdrawal of the offending medication are the mainstay of therapy for all patients with DRESS syndrome. This action alone may be sufficient to resolve the clinical and laboratory abnormalities, as it was in our case. Further management is generally based on the severity of the skin eruptions and of other organ involvement. In mild disease (no organ involvement or only mild liver involvement), treatment is symptomatic. Systemic corticosteroids, immunosuppressive medications, and intravenous immunoglobulins are reserved for severe cases with significant visceral organ injury, particularly renal and/or pulmonary involvement [1, 11].
In our patient, progressive resolution of clinical features and laboratory abnormalities (inflammatory markers, eosinophils, and decrease in hepatic enzymes) was observed within two days of ceasing piperacillin-tazobactam and amikacin treatment, highlighting the importance of early recognition and prompt removal of inciting drugs to avoid harmful outcomes associated with DRESS syndrome.In conclusion, the DRESS syndrome is a rare drug-induced hypersensitivity reaction that affects multiple organs and can be fatal if not recognized early. Due to its rarity in children and its symptoms overlapping with other commonly encountered pediatric conditions, pediatricians should have a high index of suspicion for children presenting with cutaneous and internal organ involvement and hematologic abnormalities after the initiation of an offending drug. Early recognition and prompt removal of the offending medication are critical for achieving the best outcomes. --- *Source: 1006376-2023-02-02.xml*
# Mitochondrial Transplantation Attenuates Cerebral Ischemia-Reperfusion Injury: Possible Involvement of Mitochondrial Component Separation **Authors:** Qiang Xie; Jun Zeng; Yongtao Zheng; Tianwen Li; Junwei Ren; Kezhu Chen; Quan Zhang; Rong Xie; Feng Xu; Jianhong Zhu **Journal:** Oxidative Medicine and Cellular Longevity (2021) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2021/1006636 --- ## Abstract Background. Mitochondrial dysfunctions play a pivotal role in cerebral ischemia-reperfusion (I/R) injury. Although mitochondrial transplantation has been recently explored for the treatment of cerebral I/R injury, the underlying mechanisms and fate of transplanted mitochondria are still poorly understood. Methods. Mitochondrial morphology and function were assessed by fluorescent staining, electron microscopy, JC-1, PCR, mitochondrial stress testing, and metabolomics. Therapeutic effects of mitochondria were evaluated by cell viability, reactive oxygen species (ROS), and apoptosis levels in a cellular hypoxia-reoxygenation model. Rat middle cerebral artery occlusion model was applied to assess the mitochondrial therapy in vivo. Transcriptomics was performed to explore the underlying mechanisms. Mitochondrial fate tracking was implemented by a variety of fluorescent labeling methods. Results. Neuro-2a (N2a) cell-derived mitochondria had higher mitochondrial membrane potential, more active oxidative respiration capacity, and less mitochondrial DNA copy number. Exogenous mitochondrial transplantation increased cellular viability in an oxygen-dependent manner, decreased ROS and apoptosis levels, improved neurobehavioral deficits, and reduced infarct size. Transcriptomic data showed that the differential gene enrichment pathways are associated with metabolism, especially lipid metabolism. Mitochondrial tracking indicated specific parts of the exogenous mitochondria fused with the mitochondria of the host cell, and others were incorporated into lysosomes. This process occurred at the beginning of internalization and its efficiency is related to intercellular connection. Conclusions. Mitochondrial transplantation may attenuate cerebral I/R injury. The mechanism may be related to mitochondrial component separation, altering cellular metabolism, reducing ROS, and apoptosis in an oxygen-dependent manner. The way of isolated mitochondrial transfer into the cell may be related to intercellular connection. --- ## Body ## 1. Introduction Stroke is an acute cerebrovascular disease, including ischemic and hemorrhagic stroke, and is considered to be one of the leading causes of human death and disability worldwide [1–4]. Ischemic stroke accounts for over 80% of all strokes and is usually triggered by brain arterial embolism [3, 5]. When blood flow is blocked, brain tissue in the area of blood supply becomes ischemic and hypoxic, which then leads to neurological dysfunction. Moreover, following blood reperfusion, the damaged brain tissue can be further harmed by restoration of oxygen-rich blood, causing a so-called ischemia-reperfusion (I/R) injury [3, 6–9]. Major contributors to the pathological process include overproduction of ROS, dramatically increased extracellular glutamate levels, and activation of neuroinflammation responses [6, 7, 9]. Among these, the dysfunction of mitochondria in neurons plays a pivotal role [3, 9]. Currently, the main strategy to mitigate ischemic stroke injury is revascularization [4, 8, 10], which may lead to I/R injury. 
Despite remarkable progress that has been achieved for ischemic stroke, there seem to be no better options for I/R injury.Studies have shown that mitochondria are not only the energy factories of cells but are also closely related to other biological processes, including calcium homeostasis, ROS production, hormone biosynthesis, and cellular differentiation [3, 9, 11]. Mitochondria play an important role in many diseases. Recently, a growing number of studies have begun to apply isolated mitochondria as a therapeutic agent to treat diseases, including kinds of I/R injury [12–20], liver disorders [21, 22], breast cancer [23–25], lung diseases [26, 27], and central nervous system disorders [28–37]. Furthermore, Emani et al. conducted an autologous mitochondrial transplantation clinical study, which showed a promising clinical application [38, 39]. Also, there are studies registered at ClinicalTrials.gov (NCT03639506, NCT02851758, and NCT04998357). Therefore, mitochondrial transplantation holds great therapeutic potential for cerebral I/R injury. One big concern of mitochondrial transplantation is an immune and inflammatory response based on data of mtDNA [40] and damage-associated molecular patterns (DAMPs) [41]. Ramirez-Barbieri et al. demonstrated that there is no direct or indirect, acute or chronic alloreactivity, allorecognition, or DAMP reaction to single or serial injections of allogeneic mitochondria [42].Recently, several studies have applied isolated mitochondria from various sources as an intervention in many diseases. Four studies focused on cerebral I/R injury have shown benefits of mitochondrial transplantation based on various phenotypes, such as behavioral assessment, infarct size, ROS, and apoptosis. However, the appropriate source of mitochondria, the mechanism of its therapeutic effect, and the fate of isolated mitochondria remain unclear. The clinical application of isolated mitochondria has just begun, and more safety and effectiveness assessments are needed.In order to answer the above questions, we performed this study. Firstly, we evaluated the source of mitochondria and then assessed the therapeutic effects of mitochondrial transplantation in cellular and animal models. Finally, we mainly focused on the therapeutic mechanisms of mitochondrial transplantation and the fate of transplanted mitochondria. ## 2. Methods ### 2.1. Cells The mouse neural stem cell (mNSC) was obtained and cultured as previously described [43]. Adherent culture of mNSC was performed with Matrigel (Corning, NY, USA, Cat#354277). N2a (Cat#SCSP-5035) and induced pluripotent stem cell (iPSC) (Cat#DYR0100) were purchased from the National Collection of Authenticated Cell Cultures, Shanghai. 293T was provided by Dr. Gao Liu from Zhongshan Hospital, Shanghai Medical College, Fudan University. N2a and 293T cells were cultured in DMEM supplemented with 10% fetal bovine serum. The iPSC was cultured according to the manufacturer’s protocol. ### 2.2. Animals Sprague-Dawley rats (7-8 weeks old, 250-300 g) were obtained from Shanghai Super-B&K Laboratory Animal Corp. Ltd. (Shanghai, China). All experimental procedures and animal care were approved by the Animal Welfare and Ethics Group, Laboratory Animal Science Department, Fudan University (ethical approval number 202006013Z) and were carried out according to the Guidelines for the Care and Use of Laboratory Animals by the National Institutes of Health. 
The rats were divided into three groups: sham, I/R, and I/R+Mito group (sham = sham-operated; I/R = MCAO+reperfusion with saline injection; I/R+Mito = MCAO+reperfusion with mitochondria injection). ### 2.3. Mitochondrial Isolation Mitochondria were isolated from N2a and mNSC using the mitochondria isolation kit (ThermoFisher Scientific, USA, Cat#89874) as previously described [32, 44]. Briefly, after cultured cells were orderly digested (trypsin) and centrifuged (300 g, 5 min) and the supernatant was removed, collected cells were resuspended by mitochondrial isolation reagent A (800 μl) in a 2.0 ml microcentrifuge tube and vortexed for 5 s and then incubated for 2 min on ice. Then, the reagent B (10 μl) was further added into the tube and continuously placed in situ for 5 min. Following vortexed at maximal speed for 5 times (each time for 1 min), the reagent C (800 μl) was added into the tube and mixed. Subsequently, the mixed solution was centrifuged (700 g, 10 min, 4°C) and then the supernatant was obtained for further centrifugation (12000 g, 15 min, 4°C). Finally, fresh mitochondria were obtained and used for further experiments. For animal experiments, each rat received mitochondria isolated from 1×107 cells, and the protein content was about 180 μg-200 μg. ### 2.4. Transmission Electron Microscopy (TEM) Cells were fixed with 2.5% glutaraldehyde for 2 h at room temperature and then centrifuged (300×g, 5 min). Subsequently, cells were postfixed with precooled 1% osmic acid (2 h, 4°C) and then centrifuged again (300×g, 5 min). After gradient alcohol dehydration and penetration with a solution of acetone and epoxy resin at different proportions, the cell samples were further embedded into epoxy resin and solidified for 48 h. Subsequently, the embedded samples were sectioned (thickness: 60-100 nm) and then double-stained with 3% uranyl acetate and lead citrate. Finally, the stained sections were observed and imaged by TEM (Tecnai G2 20 TWIN, FEI Company, Oregon, USA). ### 2.5. Mitochondrial Membrane Potential Analysis The mitochondrial membrane potential (MMP/ΔΨm) was assessed by JC-1 dye (Beyotime Biotechnology, Shanghai, China, Cat#C2006) and detected by flow cytometry and confocal microscopy, according to previous methods [45, 46]. For flow cytometry, single-cell suspensions of mNSC and N2a were prepared and then coincubated with JC-1 work solution for 20 min at 37°C. Next, sample cells were centrifuged (600 g, 4°C, 5 min) and washed with JC-1 buffer solution 2 times. Subsequently, resuspended cells were subjected to flow cytometry tests. For image, cells were seeded in glass-bottom Petri dishes. 24 hours later, cells were coincubated with JC-1 work solution for 20 min at 37°C, washed with JC-1 buffer, and then examined by confocal microscopy. ### 2.6. Polymerase Chain Reaction (PCR) Absolute quantitative PCR was performed as previously described [47–49]. The ratio of mtDNA and nuclear DNA was used to assess relative mtDNA copy number. In this experiment, mt-ND1/β-globin and mt-RNR1/β-actin were used to represent abundance. The sequences of the primers are described in Table S1. ### 2.7. Mitochondrial Stress Test A mitochondrial stress test was performed using the Seahorse XF Cell Mito Stress Test Kit according to the manufacturer’s instruction [33, 49, 50]. Oxygen consumption rate (OCR), basic OCR, and maximal OCR were used as the main evaluation indicators. Different levels of cells were tested, including 1×105 and 2×105. ### 2.8. 
Hypoxia-Reoxygenation (H/R) Cell Model and Mitochondrial Transplantation The H/R cell model was induced by 48 h of hypoxia (1% O2) in a tri-gas CO2 incubator and 24 h of routine culture, according to previously described methods [51]. The cultured cells were divided into 3 groups: control group (routine culture (48 h)+replacing medium+routine culture (24 h)), H/R group (hypoxic culture (48 h)+replacing medium+continuing routine culture (24 h)), and H/R+mitochondrial treatment group (hypoxic culture (48 h)+ replacing medium (containing exogenous mitochondria)+continuing routine culture (24 h)). The ratio of mitochondrial donor cell number to the receiver is 5 (e.g., 2×105 cells need mitochondria isolated from 1×106 cells). ### 2.9. Cell Viability Assay The viability was assessed by Cell Counting Kit-8 (CCK-8) (Dojindo Laboratories, Kumamoto, Japan, Cat#CK04) according to the manufacturer’s instruction. Briefly, N2a were coincubated with the CCK-8 working solution at 37°C for 3 h in the light-avoided environment. Then, cells were detected at 450 nm by a microplate reader (Molecular Devices, Sunnyvale, CA, USA). ### 2.10. ROS Measurement by Flow Cytometry DCFH-DA probes (Beyotime Biotechnology, Shanghai, China, Cat#S0033S) were used to measure the ROS levels in cells according to the manufacturer’s instruction. Fluorescence intensity was detected by flow cytometry and fluorescence plate reader. Briefly, after coincubated with DCFH-DA probes (10μmol/l, excitation wavelength: 488 nm and emission wavelength: 525 nm) at 37°C for 30 min, cells were detected by a microplate reader (Molecular Devices, Sunnyvale, CA, USA) or collected by centrifugation (300 g, 5 min); after resuspended with PBS, DCFH-DA-labeled cells were further detected by flow cytometry. ### 2.11. Western Blot Western blot was performed as previously described [52, 53]. The following primary antibodies were used for WB detection: anti-MFN1 (1 : 500), anti-OPA1 (1 : 1000), and anti-DRP1 antibodies (1 : 1000) were all purchased from Proteintech (Chicago, IL, USA); and anti-Bax (1 : 2000), anti-Bcl-2 (1 : 2000), anti-caspase-3 (1 : 2000), and anti-GAPDH antibodies (1 : 10000) were all purchased from Abcam (Cambridge, Cambs, UK). GAPDH served as internal reference. WB bands were detected with Gel-Pro Analyzer (Media Cybernetics, MD, USA). ### 2.12. Cell Apoptosis Cell apoptosis was evaluated using an Annexin V-FITC/PI Apoptosis Detection Kit (BD Biosciences, NJ, USA, Cat#40302) according to the manufacturer’s instruction. Briefly, cells were coincubated with Annexin V-FITC and then propidium iodide for 15 min at RT in a light-avoided environment and then detected by flow cytometry. ### 2.13. Middle Cerebral Artery Occlusion (MCAO) Intraluminal filament occlusion was used to induce focal cerebral ischemia injury [54, 55]. Anesthetized by 2% pentobarbital sodium (45 mg/kg), the rats were placed in a prone position. Then, the left common carotid artery, external carotid artery (ECA), and internal carotid artery (ICA) were exposed. Next, a silicon-coated monofilament suture was gradually inserted through the left ECA and was moved up into the left ICA to successfully occlude the left middle cerebral artery (MCA) and remained in situ for 120 min. Subsequently, the suture was carefully removed, the ECA was permanently ligated, and the incision was sutured. Sham group rats were subjected to the same procedure except for the 120 min occlusion of MCA. 
Experimental animals were then placed into individual cages and provided a standard diet and water. After 120 min occlusion, right before ICA reperfusion, the isolated mitochondria (from 1×107 cells, the protein content was about 180 μg-200 μg) or saline (10 μl) was injected into the ICA and all incisions were closed. ### 2.14. Neurobehavioral Evaluation Neurobehavioral deficits were evaluated 24 h after mitochondrial transplantation using multiple scales, including the Clark general functional deficit score [56, 57], the Clark focal functional deficit score [56, 57], the modified neurological severity score (mNSS) [55, 58], and the rotarod test [55, 59]. Behavioral assessments were conducted by two skillful investigators who were both blinded to the animal groups. ### 2.15. Cerebral Infarct Area Detection Triphenyl tetrazolium chloride (TTC) staining was used to display the area of cerebral infarction [60, 61]. Briefly, 24 h after MCAO, the rats were deeply anesthetized and perfused with PBS transcardially, after which the rat brains were obtained and cut into 2 mm thick coronal sections. Subsequently, the brain sections were incubated with a 2% TTC solution at 37°C for 30 min in darkness. Then, stained slices were placed from the frontal to occipital order, and macroscopic images were obtained with a digital camera. Infarct areas were measured by Adobe Photoshop 21.0.0 (Adobe Systems Inc., San Jose, CA, USA). ### 2.16. Transcriptomic Analysis RNA sequencing was performed as previously described [62, 63]. Downstream analysis was performed by R (R Foundation for Statistical Computing, Vienna, Austria). ### 2.17. Mitochondria and Lysosome Labeling The mitochondrial fluorescent dyes MitoTracker™ Red CMXRos (ThermoFisher Scientific, Waltham, MA, USA), MitoTracker™ Green FM (ThermoFisher Scientific, Waltham, MA, USA), and MitoBright Deep Red (Dojindo Laboratories, Kumamoto, Japan) were used to label mitochondria according to the manufacturer’s instruction. In addition, 293T cells expressing COX8A gene N-terminal signal peptide-mCherry fusion protein were constructed by lentivirus (Inovogen Tech, Chongqing, China, Cat#3512) and the mitochondria were well labeled. The Lyso Dye (Dojindo Laboratories, Kumamoto, Japan, Cat#MD01) was used to label lysosomes according to the manufacturer’s instruction. ### 2.18. Statistical Analysis Data that conform to a normal distribution with homogeneous variance are expressed as mean ± standard deviation (SD), and Student’s t-test or one-way analysis of variance (ANOVA) was used to compare the differences between two groups or among multiple groups, respectively. Data with a nonnormal distribution are presented as median (25%, 75% quantiles), and the Mann-Whitney U-test was used instead. Statistical analysis and diagram generation were performed using GraphPad Prism 8.0.1 (GraphPad Software, Inc., San Diego, CA, USA). ∗p<0.05 and ∗∗p<0.01 were considered statistically significant.
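For readers who prefer a scripted workflow, the decision logic of the statistical analysis described above can be sketched as follows. This is only an illustration in Python/SciPy under stated assumptions (normality checked by Shapiro-Wilk, variance homogeneity by Levene's test); the authors performed their analysis in GraphPad Prism 8.0.1, and the group values shown are hypothetical.

```python
# Minimal sketch of the Section 2.18 decision logic in Python/SciPy.
# The original analysis was done in GraphPad Prism; the data here are made up.
import numpy as np
from scipy import stats

def compare_groups(*groups, alpha=0.05):
    """Use a parametric test when all groups look normal with equal variances
    (Student's t-test for two groups, one-way ANOVA for more); otherwise fall
    back to the Mann-Whitney U-test for a two-group comparison."""
    normal = all(stats.shapiro(g).pvalue > alpha for g in groups)
    equal_var = stats.levene(*groups).pvalue > alpha
    if normal and equal_var:
        test = stats.ttest_ind(*groups) if len(groups) == 2 else stats.f_oneway(*groups)
    else:
        test = stats.mannwhitneyu(*groups)  # two-group nonparametric comparison
    return test.pvalue

# Hypothetical infarct-size percentages for an I/R and an I/R+Mito group.
ir      = np.array([26.5, 29.1, 24.8, 22.3, 27.4])
ir_mito = np.array([13.9, 10.2, 15.8, 12.5, 14.4])
print(f"p = {compare_groups(ir, ir_mito):.4f}")
```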
## 3. Results ### 3.1. Characteristics of Mitochondrial Donors The ideal source of mitochondria is one that is readily available and can be amplified in large numbers.
Both stem cells and tumor cells meet this requirement. Therefore, we chose N2a and mNSC as mitochondrial source cells to assess a range of mitochondrial characteristics. To evaluate the ΔΨm, JC-1 dye was used. Representative images showed that N2a had a stronger red (JC-1 aggregate) signal than mNSC (Figures 1(a) and 1(b)). Flow cytometry analysis confirmed that N2a had a higher ΔΨm than mNSC (N2a vs. mNSC: 10.55±0.85 vs. 2.56±0.36, p<0.01) (Figure 1(c)). In addition, we observed that the mtDNA abundance of mNSC was higher than that of N2a based on the mitochondrial-to-nuclear DNA ratios mt-ND1/β-globin (N2a vs. mNSC: 374.0±11.5 vs. 731.1±110.4, p<0.01) (Figure 1(d)) and mt-RNR1/β-actin (N2a vs. mNSC: 149.1±13.07 vs. 593.4±108.3, p<0.01) (Figure 1(e)). We subsequently analyzed the oxidative respiration capacity of mitochondria from N2a and mNSC on the Seahorse XF analysis platform. The OCR-time diagrams are shown in Figure 1(f) (N2a 1×105 vs. mNSC 1×105) and Figure 1(i) (mNSC 1×105 vs. mNSC 2×105), which implied a large difference in oxidative respiratory activity between N2a and mNSC. Basal OCR of N2a (1×105 cells) was significantly higher than that of mNSC (1×105 cells) (N2a vs. mNSC: 248.70±56.33 pmol/min vs. 22.14±5.09 pmol/min, p<0.01) (Figure 1(g)). Similarly, N2a (1×105 cells) exhibited higher maximal OCR values than mNSC (1×105 cells) (N2a vs. mNSC: 363.90±123.70 pmol/min vs. 28.14±7.50 pmol/min, p<0.01) (Figure 1(h)). These results suggested that, compared to mNSC, mitochondria from N2a exhibited a relatively stronger oxidative respiration capacity. In addition, mitochondrial morphology is presented in Figure S1, and mNSC culture and identification data are presented in Figure S2. Metabolic profiles of N2a and mNSC were quite different (Figure S3). Tumorigenicity evaluation of N2a and mNSC is presented in Figure S4. These results suggested that N2a-derived mitochondria have higher oxidative respiratory activity and lower mtDNA copy number, that the mitochondria from mNSC and N2a have similar morphology, that they have different metabolomic profiles, and that neither cell type is tumorigenic. Therefore, we chose N2a as the major source of mitochondria for subsequent experiments. Figure 1 Characteristics of mitochondrial donors. (a–c) Mitochondria of N2a (a) and mNSC (b) labeled with JC-1; JC-1 aggregates show red and monomers show green; the fluorescence intensity ratio of JC-1 of N2a and mNSC detected by flow cytometry, N2a showed higher ΔΨm ((c), n=3). (d, e) Relative mtDNA copy number of N2a and mNSC indicated by mt-ND1/β-globin (d) and RNR1/β-actin (e); N2a showed lower mtDNA copy number (n=3). (f–i) Seahorse XF cell mitochondrial stress test; the performance of N2a and mNSC at the 1×105 level (f), N2a exhibited higher oxidative respiratory activity (n=6); quantitative analysis of basal (g) and maximal (h) OCR of the two cell types showed that N2a exhibits higher oxidative respiratory activity (n=9). The performance of the mitochondrial stress test for mNSC at the 1×105 (same data as in (f)) and 2×105 levels showed that mNSC responded well and were not dying in the comparison with N2a (i); we did the test with N2a 1×105, mNSC 1×105, and mNSC 2×105 at one time but presented them in two graphs due to the order of magnitude. ∗∗p<0.01. (a)(b)(c)(d)(e)(f)(g)(h)(i)
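As a side note on how the copy-number comparison in Figures 1(d) and 1(e) can be reproduced in principle: the mitochondrial-to-nuclear ratio is derived from paired qPCR measurements of a mitochondrial and a nuclear target. The sketch below uses the common 2^-ΔCt approximation and assumes roughly equal amplification efficiencies for both amplicons; the paper itself used absolute quantification (Section 2.6), and the Ct values shown are hypothetical.

```python
# Hedged illustration: relative mtDNA abundance as a mitochondrial-to-nuclear
# ratio computed with the 2^-dCt approximation (equal-efficiency assumption).
# The study used absolute quantification, so these Ct values are hypothetical.
import numpy as np

def mt_to_nuclear_ratio(ct_mito, ct_nuclear):
    """Relative mtDNA copy number per nuclear reference: 2^(Ct_nuclear - Ct_mito)."""
    return 2.0 ** (np.asarray(ct_nuclear, dtype=float) - np.asarray(ct_mito, dtype=float))

# Hypothetical triplicate Ct values for mt-ND1 (mitochondrial target) and
# beta-globin (nuclear reference) from one N2a sample.
ct_nd1    = [15.1, 15.0, 15.2]
ct_globin = [23.6, 23.5, 23.7]
ratios = mt_to_nuclear_ratio(ct_nd1, ct_globin)
print(f"mt-ND1/beta-globin: {ratios.mean():.0f} +/- {ratios.std(ddof=1):.0f}")
```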
### 3.2. Mitochondrial Transplantation Increased Cell Viability and Attenuated ROS and Apoptosis Levels under H/R Conditions We verified the effect of exogenous mitochondria on cell viability in a cellular model. After 48 h of hypoxia (1% O2), exogenous mitochondria were added and N2a continued being cultured for the next 24 h, and then the CCK-8 assay was performed to detect cell viability. The result indicated that, after mitochondrial intervention, cell viability depended on the presence of oxygen. When mitochondria-treated N2a continued being cultured under hypoxic conditions, cell viability decreased dramatically (hypoxia vs. hypoxia+Mito: 1.00±0.03 vs. 0.07±0.01, p<0.01) (Figure 2(a)). When cultured under reoxygenation conditions, the exogenous mitochondrial intervention significantly improved cell viability (reoxygenation vs. reoxygenation+Mito: 1.00±0.12 vs. 1.24±0.14, p<0.01) (Figure 2(b)). To evaluate ROS levels, DCFH-DA probes were applied and the fluorescence intensity was measured by flow cytometry and a fluorescence plate reader. The flow cytometry results showed that the H/R intervention significantly increased ROS levels and that exogenous mitochondrial transplantation attenuated this increase (control, H/R, H/R+Mito: 42.6±0.17, 115.0±1.00, and 101.7±2.41, p<0.05) (Figures 2(c) and 2(d)). Similarly, the fluorescence plate reader confirmed these results (control, H/R, H/R+Mito: 173.2±13.74, 606.1±23.45, and 416.9±31.59, p<0.01) (Figure 2(e)). To measure the apoptosis level, we performed flow cytometry analysis and Western blot. After H/R injury, the apoptosis ratio of N2a dramatically increased (H/R vs. control: 37.90±0.46% vs. 4.44±0.07%, p<0.01) (Figures 2(f) and 2(g)), which was significantly reduced by mitochondrial transplantation (H/R vs. H/R+Mito: 37.90±0.46% vs. 24.35±0.54%, p<0.01) (Figures 2(f) and 2(g)). Similar results were obtained for the expression levels of apoptosis-related proteins, which also suggested that H/R dramatically promoted the upregulation of the Bax/Bcl-2 ratio (H/R vs. control: 16.28±3.82 vs. 1.00±0.11, p<0.01) (Figures 2(h) and 2(i)) and caspase-3 protein (H/R vs. control: 2.21±0.12 vs. 1.00±0.16, p<0.01) (Figures 2(h) and 2(j)). Exogenous mitochondrial transplantation significantly downregulated the Bax/Bcl-2 ratio (H/R vs. H/R+Mito: 16.28±3.82 vs. 4.25±0.34, p<0.01) (Figures 2(h) and 2(i)) and the protein level of caspase-3 (H/R vs. H/R+Mito: 2.21±0.12 vs. 1.65±0.03, p<0.01) (Figures 2(h) and 2(j)) in cultured cells. Figure 2 Mitochondrial transplantation increased cell viability and attenuated ROS and apoptosis levels under H/R conditions. (a, b) Cell viability measured by CCK-8; after 48 h of hypoxia, exogenous mitochondria were added and N2a continued to be cultured in a hypoxic (a) or normoxic (b) environment (n=6). (c–e) ROS levels labeled with DCFH-DA probes and measured by flow cytometry, presented as a typical histogram of fluorescence intensity distribution (c) and a fluorescence intensity bar graph ((d), n=3), and by fluorescence plate reader ((e), n=3). (f–j) Apoptosis levels were detected by flow cytometry for Annexin V and PI positivity ((f, g), n=3) and by related protein expression; representative WB bands were obtained (h), and correspondingly, quantitative analysis of Bax/Bcl-2 (i) and caspase-3 (j) is shown. Values were reported as means±SD. ∗∗p<0.01. (a)(b)(c)(d)(e)(f)(g)(h)(i)(j) ### 3.3. Mitochondrial Transplantation Improved Neurobehavioral Deficits and Reduced Infarct Size of MCAO Rats To evaluate the effect of exogenous mitochondria on behavior and infarct size, we used the MCAO model.
The Clark general/focal scale, the mNSS, and the rotarod test were used to assess neurological behavior deficits, and TTC staining was performed to measure infarct size. The results suggested that mitochondrial transplantation significantly improved neurological behavior deficits. For sham, I/R, and I/R+Mito groups, the sample size was 8, 9, and 7, and the Clark general scale was 0 (0,0), 3 (1.5,6), and 1 (0,1) (nonnormally distributed data are expressed as median, 25%, 75% quantile), which indicated that mitochondria improved neurological outcome (p<0.05). The Clark focal scale and mNSS confirmed the following: Clark focal: 0±0, 8.33±5.57, and 2.57±1.81, p<0.05; mNSS: 0±0, 8.56±3.01, and 4.71±2.63, p<0.05) (Figures 3(a)–3(c)). The rotarod test also validated the conclusion. The latency time to fall of sham, I/R, and I/R+Mito groups before the surgery was 59.63±11.77 s, 56.33±6.76 s, and 53.43±11.66 s (p>0.05) and 55.63±15.01 s, 21.78±6.78 s, and 30.71±8.98 s (p<0.05) after the surgery (Figure 3(d)). The TTC results suggested exogenous mitochondrial transplantation could reduce the infarct size of the MCAO model (I/R vs. I/R+Mito 26.02±3.24% vs. 13.36±4.00%, p<0.05) (Figures 3(e) and 3(f)).Figure 3 Mitochondrial transplantation improved neurobehavioral deficits and reduced infarct size of MCAO rats. (a–d) Neurological behavior assessment; 24 h after mitochondrial transplantation, multiple score scales were used to evaluate the neurological deficits induced by MCAO, including Clark general functional deficit score (a), Clark focal functional deficit score (b), mNSS score (c), and rotarod test (d). (e, f) Infarction size evaluation; brain infarction areas were stained by TTC (e), and relatively quantitative analysis of infarction size was assessed (f). Values were reported asmeans±SD. ∗p<0.05. Sham = sham-operated; I/R = MCAO+reperfusion with saline injection; I/R+Mito = MCAO+reperfusion with mitochondrial injection. (a)(b)(c)(d)(e)(f) ### 3.4. Effects of Mitochondrial Transplantation on Transcriptomic Profile Imply Metabolic Alteration In order to clarify the effects of mitochondrial transplantation on transcriptomic profile and its possible mechanism, we performed RNA sequence analysis. The results identified 14 upregulated genes and 12 downregulated genes between control and H/R groups, 27 upregulated genes and 98 downregulated genes between control and H/R+Mito groups, and 17 upregulated genes and 1 downregulated gene between H/R and H/R+Mito groups. The overlapping genes are presented in Figure4(c). The following KEGG pathway enrichment analysis suggested exogenous mitochondria may affect metabolism-related pathways, especially lipid metabolism-related molecules and pathways such as the PPAR signal pathway, insulin signal pathway, fat intake and digestion-related pathway, cholesterol metabolism, glycolysis, and gluconeogenesis (Figure 4(d)). These results indicated that exogenous mitochondria may be capable of altering the metabolic characteristics of host cells, possibly resulting in metabolic reprogramming.Figure 4 Effects of mitochondrial transplantation on transcriptomic profile imply metabolic alteration. (a) PCA plot, axis 1 18.6%, axis 2 16.5%. (b) Heatmap, horizontal axis-different cell samples; longitudinal axis-different genes; color depth-expression levels of genes. (c) Venn diagram. (d) KEGG bubble chart of the differentially expressed genes between H/R and H/R+Mito groups. (a)(b)(c)(d) ### 3.5. 
Pattern of Exogenous Mitochondrial Transfer Implies Mitochondrial Component Separation To better understand the mechanism of exogenous mitochondria, we first traced their route into host cells. MitoTracker™ Red CMXRos and MitoTracker™ Green FM were used to label mitochondria. When the mitochondria of N2a labeled with MitoTracker Green were isolated and added to the medium of another N2a culture labeled with MitoTracker Red, a few hours later we found that the red and green signals were completely fused (Figure 5(a)). This phenomenon implied that exogenous mitochondria can fuse with the endogenous mitochondria of the host cell. Next, we isolated red dye-labeled mitochondria and added them to the medium of green dye-labeled cells. The result validated the previous observation (Figure 5(b)). To determine whether this fusion property of exogenous mitochondria is species-limited, we isolated red dye-labeled mitochondria from a human-derived cell line (U87) and added them to mNSC medium. The mitochondria fused again (Figure 5(c)). This suggested that the ability of exogenous mitochondria to fuse with host cell mitochondria is cross-species. Figure 5 Exogenous mitochondria colocalized with endogenous mitochondria regardless of species after coincubation. (a) From left to right, mitochondria of host N2a labeled with MitoTracker Red; isolated green mitochondria entered host N2a; nucleus with DAPI; the merged image. (b) Isolated red mitochondria entered host N2a; mitochondria of host N2a labeled with MitoTracker Green; DAPI; the merged image. (c) Isolated red mitochondria derived from U87 entered host mNSC; mitochondria of host mNSC labeled with MitoTracker Green; DAPI; the merged image. Scale bar: 10 μm. To further verify the fusion property of exogenous mitochondria, we constructed a 293T cell line overexpressing a COX8A N-terminal signal peptide-mCherry fusion protein to label mitochondria. This tag had little effect on cell and mitochondrial function (Figure S5). When the red-mitochondria 293T cells were additionally labeled with MitoTracker Green, we found that all the mitochondria carried both red and green markers (Figure 6(a)). The mitochondria isolated from the double-labeled cells also showed colocalization of the two colors (Figure 6(b)). When we added the double-labeled isolated mitochondria to the medium of a 293T cell culture labeled with MitoBright Deep Red (set to pink), we came to an interesting result. The pink mitochondria (endogenous mitochondria of the host cell) completely colocalized with exogenous green mitochondria, while only a portion of pink mitochondria overlapped with exogenous red mitochondria. Moreover, the two-color marker of the same exogenous mitochondria was partially separated (Figure 6(c)). Together, these observations suggested that exogenous mitochondrial components segregate when cocultured with host cells: a specific part can fuse with the endogenous mitochondria of host cells, while the rest has another fate. During the mitochondrial transfer process, we found that the green component was internalized by the host cell within 1 h, whereas the red component was internalized much more slowly (Figures 7(a)–7(e)). This again confirmed that different mitochondrial components have different fates. We further investigated the pattern of exogenous mitochondrial transfer using double-labeled 293T as the mitochondrial donor and pink-labeled induced pluripotent stem cells (iPSC) as the host.
The results showed nearly all the green components colocalized with the pink endogenous mitochondria and the red component was concentrated at the edge of the cell clones or in the scattered cells (Figures 8(a)–8(c)). These suggested the internalization efficiency of red component may be related to intercellular connections.Figure 6 Double-labeled exogenous mitochondria exhibited component separation after being internalized by host cells. (a) Double-labeled mitochondria of 293T; from left to right, mitochondria labeled with a fluorescent protein (Mito-mCherry); mitochondria labeled with MitoTracker Green; DAPI; the merged image showed completely colocalization. (b) Isolated mitochondria labeled with red and green markers; from left to right, isolated mitochondria labeled with Mito-mCherry; isolated mitochondria labeled with MitoTracker Green; the merge image showed well co-localization. (c) From left to right, top to bottom; the red components of exogenous mitochondria entered the host cell, and the distribution was shown; the green components of exogenous mitochondria entered the host cell and its distribution; endogenous mitochondria of the host cell labeled with MitoBright Deep Red (pink); the merged image of three colors showed fusion and separation; the merged image of green and pink indicated that parts of the green overlap with the pink; the merged image of red and pink showed that seldom red components overlap with the pink; the merged image of green and red suggested that a portion of red components overlaps with the green components. Scale bar: 10μm, 100 μm, and 10 μm. (a)(b)(c)Figure 7 Time pattern of different mitochondrial components during transfer. (a) Mitochondria of host 293T cell labeled with MitoBright Deep Red; isolated double-labeled mitochondria were added to the medium of host cell instantly. (b) 1 h after coincubation. (c) 2 h after coincubation. (d) 3 h after coincubation. (e) 24 h after coincubation. Scale bar: 10μm.Figure 8 Intercellular connections affect the transfer of different components of mitochondria. (a) Small iPSC clone; mitochondria of host iPSC labeled with MitoBright Deep Red (Endo-Mito); isolated double-labeled mitochondria (Exo-Mito) were added to the medium of host cell; the picture showed the green components of exogenous mitochondria colocalized with endogenous mitochondria of host iPSC, while the red components mainly concentrated at the edges of cell clones and scattered cells. (b) iPSC clone with tight intercellular connections; the red component entered the cell in a random pattern. (c) iPSC clone with tight intercellular connections and edges; the red component mainly concentrated at the edges of cell clones and scattered cells. Scale bar: 200μm. ### 3.6. The Fate of Exogenous Mitochondria Is Fusion and Lysosomal Degradation To prove the theory that a portion of exogenous mitochondria can fuse with endogenous mitochondria, we assessed the mitochondrial dynamics by WB analysis. Our results suggested that after H/R treatment, the expression levels of the mitochondrial fusion-related proteins MFN1 (control vs. H/R,p<0.01) and OPA1 (control vs. H/R, p<0.01) were dramatically reduced and mitochondrial fission-related protein DRP1 was significantly increased (control vs. H/R, p<0.01). Exogenous mitochondrial intervention alleviated this process and increased the expression of MFN1 (H/R vs. H/R+Mito, p<0.01) and OPA1 (H/R vs. H/R+Mito, p<0.01) but did not significantly reverse DRP1 expression (Figures 9(b)–9(e)). 
The above data further confirm the fusion property of exogenous mitochondria. Figure 9 The fate of exogenous mitochondria is fusion and lysosomal degradation. (a) Endogenous lysosomes were marked by Lyso Dye (green), and exogenous mitochondria were labeled with COX8A N-terminal signal peptide-mCherry (red); the red mitochondria colocalized with green lysosomes. (b–e) WB analysis of mitochondrial dynamic proteins; typical WB bands of MFN1, OPA1, and DRP1 proteins were obtained (b), and relatively quantitative analysis of MFN1 (c), OPA1 (d), and DRP1 (e) was carried out. Scale bar: 10 μm. Values were reported as means±SD. ∗∗p<0.01; ns: not statistically significant, p>0.05. (a)(b)(c)(d)(e) To figure out the fate of the unfused, COX8A N-terminal signal peptide-mCherry fusion protein-labeled mitochondria, we did lysosomal staining with Lyso Dye. Interestingly, the red component of mitochondria was totally colocalized with lysosomes of the host cell (Figure 9(a)), suggesting that the fate of the unfused components of exogenous mitochondria is lysosomal degradation.
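The lysosomal colocalization above is reported qualitatively; for completeness, one way such overlap could be quantified from a two-channel image is a Manders-style coefficient, i.e., the fraction of above-background mCherry (red) signal that falls within lysosome-positive pixels. This is a hedged sketch, not part of the original analysis, and the image and thresholds below are placeholders.

```python
# Hypothetical sketch: Manders-style M1 coefficient for the overlap of the
# mCherry-labeled mitochondrial channel with the lysosome channel. Not part of
# the paper's analysis; image data and thresholds are placeholders.
import numpy as np

def manders_m1(red_ch, lyso_ch, red_thresh, lyso_thresh):
    """Fraction of above-threshold red intensity located in lysosome-positive pixels."""
    red = np.where(red_ch > red_thresh, red_ch, 0.0)
    lyso_mask = lyso_ch > lyso_thresh
    return float(red[lyso_mask].sum() / red.sum())

# Toy two-channel image (arbitrary intensities, not real data).
rng = np.random.default_rng(0)
red  = rng.integers(0, 256, size=(64, 64)).astype(float)
lyso = 0.8 * red + rng.integers(0, 40, size=(64, 64))  # largely overlapping channels
print(f"M1 = {manders_m1(red, lyso, red_thresh=50.0, lyso_thresh=50.0):.2f}")
```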
## 4. Discussion There are four studies [30, 31, 64, 65] focused on the treatment of cerebral I/R injury with isolated mitochondria, according to the latest review [66]. All of them showed a favorable outcome in behavioral assessment or cerebral infarct size with different mitochondrial donors by intravenous or intracerebroventricular injection. However, there is no more detailed information about the source of mitochondria, the mechanism, and the fate of isolated mitochondria. And this is the information we want to provide. In order to apply mitochondrial transplantation therapy to the clinic, the first priority is the source and quality control of mitochondria. The ideal source of mitochondria is one that is readily available and can be amplified in large numbers. Both stem cells and tumor cells meet this requirement. Therefore, we chose N2a and mNSC as mitochondrial source cells to assess a range of mitochondrial characteristics.
Previous studies isolated mitochondria from the placenta [65], or human umbilical cord-derived mesenchymal stem cells [64], or pectoralis major muscle [30], or baby hamster kidney fibroblast [31], and evaluated them mainly by MMP or respiratory activity. The outcome is closely related to the isolation and preservation process, and the consistency may not be good among different batches. The four studies did not give the reason why they choose these cells as mitochondrial donors. However, in our study, we evaluated the mitochondria before isolation through multiple dimensions, including morphology, MMP, mtDNA copy number, respiratory activity, metabolomic profile, and tumorigenicity. We hope to provide reference data when choosing a mitochondrial donor in future research.For the first time, we have identified the oxygen dependence of therapeutic effects of isolated mitochondria. This reminds us of the application scenario of isolated mitochondria, where incorrect application may lead to serious consequences. It is generally believed that exogenous mitochondria have a relatively intact function and can replace the damaged mitochondria in the host cell [67]. Considering the oxygen dependence, we presume that the exogenous mitochondria are a load for the cell. In the presence of oxygen, the cell is able to handle this load and make it functional using the large amount of ATP produced by oxidative phosphorylation. However, in hypoxic conditions, host cells require additional energy to handle this load, which accelerates cell death. Also, according to our transcriptome data, the exogenous mitochondrial function is closely related to lipid metabolism, which may increase its oxygen dependence. In fact, there is little known about oxygen dependence, and this will be one of our future research directions.The therapeutic effects of isolated mitochondria in our cell and animal models are similar to other studies [30, 31, 64, 65], which showed that mitochondrial intervention attenuated I/R injury, improved neurological outcomes, and reduced cerebral infarct size. Our data again confirmed the potential clinical application of mitochondrial transplantation. The transcriptomic data suggest that the therapeutic effect of mitochondria may be related to altered metabolism, especially lipid metabolism, providing clues for future mechanistic studies. Few studies focused on the behavior of isolated mitochondria in vivo, especially whether it can cross the brain-blood barrier. Nakamura et al. [65] injected mitochondria intravenously and found exogenous mitochondria distributed in the brain under ischemic-reperfusion condition. Shi et al. [36] injected isolated mitochondria intravenously in mice and found that the exogenous mitochondria distributed in various tissues including the brain, liver, kidney, muscle, and heart. However, we did not find that mitochondria can pass the intact blood-brain barrier in our projects. More research is needed on the permeability of the blood-brain barrier to mitochondria.The discovery of mitochondrial component separation phenomenon was based on different mitochondrial labeling techniques, and this gives us a new perspective to study the behavior of mitochondria. To our knowledge, most studies [25, 30, 31, 33, 36, 64, 65] labeled mitochondria with a single MitoTracker dye or a fluorescent fusion protein and got a conclusion based on that. However, we used both techniques to label the same mitochondria. Surprisingly, we found the different markers are separated. 
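The degree of this marker separation can, in principle, be quantified. Below is a minimal sketch (our illustration, not the analysis performed in this study) of Manders-style overlap coefficients computed between two mitochondrial labels, e.g. the Mito-mCherry and MitoTracker Green channels of the same field of view, assuming the two aligned single-channel images are available as NumPy arrays:

```python
# Minimal sketch (illustration only, not the authors' analysis): Manders-style
# overlap coefficients between two mitochondrial labels of the same field.
import numpy as np

def manders_overlap(ch1, ch2, thr1=0.0, thr2=0.0):
    """Return (M1, M2): the fraction of each channel's signal lying inside the other's mask."""
    mask1, mask2 = ch1 > thr1, ch2 > thr2
    both = mask1 & mask2
    m1 = ch1[both].sum() / max(ch1[mask1].sum(), 1e-9)  # share of ch1 signal overlapping ch2
    m2 = ch2[both].sum() / max(ch2[mask2].sum(), 1e-9)  # share of ch2 signal overlapping ch1
    return float(m1), float(m2)

# Toy data; in practice the channels would come from the confocal images.
rng = np.random.default_rng(0)
red, green = rng.random((256, 256)), rng.random((256, 256))
print(manders_overlap(red, green, thr1=0.5, thr2=0.5))
```

A pair of coefficients close to 1 would indicate near-complete colocalization of the two labels, whereas a low value for one coefficient would capture the kind of partial separation described above.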
Whether this separation is a technical artifact or a biological event, it shows that different single-label methods can lead to different conclusions. Therefore, previous works based on a single label may need to be revisited, and this is one of the important pieces of information provided in this article. MitoTracker dyes are roughly divided into voltage-dependent and non-voltage-dependent types. We chose the non-voltage-dependent dye MitoTracker Green to label mitochondria; it binds covalently to the free sulfhydryl groups of cysteine residues in mitochondrial proteins [68, 69]. Therefore, the possibility of dye transfer between mitochondria is extremely low, and no studies have reported such a dye-transfer phenomenon. Thus, MitoTracker Green represents the mitochondrial components to which it binds. Fluorescent fusion proteins (most fused with the mitochondrial targeting sequence of cytochrome c oxidase subunit VIII) are another commonly used method of labeling mitochondria [33, 36, 37]. Because both labels are widely distributed within mitochondria, both methods display mitochondria and overlap well (Figure 6(a)). When the extracted double-labeled mitochondria enter the host cell, the two markers separate, which indicates that different components (not subpopulations) of mitochondria have different fates. Considering the lysosomal labeling and the mitochondrial dynamics protein Western blot experiments, the isolated mitochondria may function by fusing their useful parts to the host mitochondria rather than replacing them entirely, while the unfused part enters the lysosome for degradation. Furthermore, we used iPSC to study the effect of cell connections on the entry of mitochondria into host cells. The result implied that tight intercellular connections greatly reduce entry of the red component of isolated mitochondria into the host cell. Few studies have covered the fate of isolated mitochondria after being internalized by host cells. Cowan et al. [70] reported that after being incorporated into host cells, isolated mitochondria are transported to endosomes and lysosomes, and then most of these mitochondria escape from these compartments and fuse with the endogenous mitochondrial network. Their work described the fate of exogenous mitochondria as a whole, whereas our work found that different parts of mitochondria have different fates, which is consistent with the exchange of components among intracellular membrane systems. This result reminds us to understand the behavior of mitochondria from a more microscopic perspective and to pay more attention to their communication with other organelles. Based on our data, we hypothesized that when exogenous mitochondria enter host cells in the presence of oxygen, mitochondrial component separation occurs, reducing ROS levels and apoptosis and altering cellular metabolism, thus improving cell survival (Figure 10). This is a reasonable assumption; whether or not it proves correct, the phenomenon itself will allow us to reexamine previous studies and to consider this factor in future research design. Figure 10 Hypothetical model diagram. When isolated exogenous mitochondria enter host cells, specific components undergo separation: some fuse with the host mitochondria, while the rest enter lysosomes and undergo lysosomal degradation. This process reduces ROS and apoptosis and alters the metabolic profile, which in turn attenuates ischemia-reperfusion injury. Still, there are limitations.
The correctness of our conclusions is closely tied to the mitochondrial labeling methods. When previous studies labeled mitochondria by a single method, they obtained incomplete information. Current labeling methods were mainly developed to display mitochondria inside cells, and unpredictable events may occur once mitochondria are isolated; therefore, a dedicated fate-tracking tool should be developed. Another limitation is the lack of proper controls in the fate experiments. Because knowledge of this event is still superficial, it was unclear where to intervene. Although we used a mitochondrial fission inhibitor, Mdivi-1, as a control (Figure S6), it did not appear to make any difference, and a similar study likewise did not include such controls [70]. Therefore, our future direction is to develop new mitochondrial labeling tools and to clarify the fate and transport of isolated mitochondria with proper controls. In addition, the mechanistic conclusions need to be verified in vivo. In general, studies of mitochondrial transplantation therapy are in their infancy, but existing data indicate promising clinical applications. Given that there is currently no effective treatment for ischemia-reperfusion injury, mitochondrial transplantation therapy provides a new approach. However, more data are needed to confirm its safety and efficacy, and more mechanistic studies are needed to establish how it works. We hope our study provides useful information in this area and offers guidance for future studies. ## 5. Conclusions Mitochondrial transplantation may attenuate cerebral I/R injury. The mechanism may involve selective mitochondrial component separation, altered cellular metabolism, and reduced ROS and apoptosis in an oxygen-dependent manner. The transfer of isolated mitochondria into cells may depend on intercellular connections. --- *Source: 1006636-2021-11-20.xml*
# Mitochondrial Transplantation Attenuates Cerebral Ischemia-Reperfusion Injury: Possible Involvement of Mitochondrial Component Separation

**Authors:** Qiang Xie; Jun Zeng; Yongtao Zheng; Tianwen Li; Junwei Ren; Kezhu Chen; Quan Zhang; Rong Xie; Feng Xu; Jianhong Zhu
**Journal:** Oxidative Medicine and Cellular Longevity (2021)
**Publisher:** Hindawi
**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)
**DOI:** 10.1155/2021/1006636
--- ## Abstract Background. Mitochondrial dysfunctions play a pivotal role in cerebral ischemia-reperfusion (I/R) injury. Although mitochondrial transplantation has been recently explored for the treatment of cerebral I/R injury, the underlying mechanisms and fate of transplanted mitochondria are still poorly understood. Methods. Mitochondrial morphology and function were assessed by fluorescent staining, electron microscopy, JC-1, PCR, mitochondrial stress testing, and metabolomics. Therapeutic effects of mitochondria were evaluated by cell viability, reactive oxygen species (ROS), and apoptosis levels in a cellular hypoxia-reoxygenation model. Rat middle cerebral artery occlusion model was applied to assess the mitochondrial therapy in vivo. Transcriptomics was performed to explore the underlying mechanisms. Mitochondrial fate tracking was implemented by a variety of fluorescent labeling methods. Results. Neuro-2a (N2a) cell-derived mitochondria had higher mitochondrial membrane potential, more active oxidative respiration capacity, and less mitochondrial DNA copy number. Exogenous mitochondrial transplantation increased cellular viability in an oxygen-dependent manner, decreased ROS and apoptosis levels, improved neurobehavioral deficits, and reduced infarct size. Transcriptomic data showed that the differential gene enrichment pathways are associated with metabolism, especially lipid metabolism. Mitochondrial tracking indicated specific parts of the exogenous mitochondria fused with the mitochondria of the host cell, and others were incorporated into lysosomes. This process occurred at the beginning of internalization and its efficiency is related to intercellular connection. Conclusions. Mitochondrial transplantation may attenuate cerebral I/R injury. The mechanism may be related to mitochondrial component separation, altering cellular metabolism, reducing ROS, and apoptosis in an oxygen-dependent manner. The way of isolated mitochondrial transfer into the cell may be related to intercellular connection. --- ## Body ## 1. Introduction Stroke is an acute cerebrovascular disease, including ischemic and hemorrhagic stroke, and is considered to be one of the leading causes of human death and disability worldwide [1–4]. Ischemic stroke accounts for over 80% of all strokes and is usually triggered by brain arterial embolism [3, 5]. When blood flow is blocked, brain tissue in the area of blood supply becomes ischemic and hypoxic, which then leads to neurological dysfunction. Moreover, following blood reperfusion, the damaged brain tissue can be further harmed by restoration of oxygen-rich blood, causing a so-called ischemia-reperfusion (I/R) injury [3, 6–9]. Major contributors to the pathological process include overproduction of ROS, dramatically increased extracellular glutamate levels, and activation of neuroinflammation responses [6, 7, 9]. Among these, the dysfunction of mitochondria in neurons plays a pivotal role [3, 9]. Currently, the main strategy to mitigate ischemic stroke injury is revascularization [4, 8, 10], which may lead to I/R injury. Despite remarkable progress that has been achieved for ischemic stroke, there seem to be no better options for I/R injury.Studies have shown that mitochondria are not only the energy factories of cells but are also closely related to other biological processes, including calcium homeostasis, ROS production, hormone biosynthesis, and cellular differentiation [3, 9, 11]. Mitochondria play an important role in many diseases. 
Recently, a growing number of studies have begun to apply isolated mitochondria as a therapeutic agent to treat diseases, including kinds of I/R injury [12–20], liver disorders [21, 22], breast cancer [23–25], lung diseases [26, 27], and central nervous system disorders [28–37]. Furthermore, Emani et al. conducted an autologous mitochondrial transplantation clinical study, which showed a promising clinical application [38, 39]. Also, there are studies registered at ClinicalTrials.gov (NCT03639506, NCT02851758, and NCT04998357). Therefore, mitochondrial transplantation holds great therapeutic potential for cerebral I/R injury. One big concern of mitochondrial transplantation is an immune and inflammatory response based on data of mtDNA [40] and damage-associated molecular patterns (DAMPs) [41]. Ramirez-Barbieri et al. demonstrated that there is no direct or indirect, acute or chronic alloreactivity, allorecognition, or DAMP reaction to single or serial injections of allogeneic mitochondria [42].Recently, several studies have applied isolated mitochondria from various sources as an intervention in many diseases. Four studies focused on cerebral I/R injury have shown benefits of mitochondrial transplantation based on various phenotypes, such as behavioral assessment, infarct size, ROS, and apoptosis. However, the appropriate source of mitochondria, the mechanism of its therapeutic effect, and the fate of isolated mitochondria remain unclear. The clinical application of isolated mitochondria has just begun, and more safety and effectiveness assessments are needed.In order to answer the above questions, we performed this study. Firstly, we evaluated the source of mitochondria and then assessed the therapeutic effects of mitochondrial transplantation in cellular and animal models. Finally, we mainly focused on the therapeutic mechanisms of mitochondrial transplantation and the fate of transplanted mitochondria. ## 2. Methods ### 2.1. Cells The mouse neural stem cell (mNSC) was obtained and cultured as previously described [43]. Adherent culture of mNSC was performed with Matrigel (Corning, NY, USA, Cat#354277). N2a (Cat#SCSP-5035) and induced pluripotent stem cell (iPSC) (Cat#DYR0100) were purchased from the National Collection of Authenticated Cell Cultures, Shanghai. 293T was provided by Dr. Gao Liu from Zhongshan Hospital, Shanghai Medical College, Fudan University. N2a and 293T cells were cultured in DMEM supplemented with 10% fetal bovine serum. The iPSC was cultured according to the manufacturer’s protocol. ### 2.2. Animals Sprague-Dawley rats (7-8 weeks old, 250-300 g) were obtained from Shanghai Super-B&K Laboratory Animal Corp. Ltd. (Shanghai, China). All experimental procedures and animal care were approved by the Animal Welfare and Ethics Group, Laboratory Animal Science Department, Fudan University (ethical approval number 202006013Z) and were carried out according to the Guidelines for the Care and Use of Laboratory Animals by the National Institutes of Health. The rats were divided into three groups: sham, I/R, and I/R+Mito group (sham = sham-operated; I/R = MCAO+reperfusion with saline injection; I/R+Mito = MCAO+reperfusion with mitochondria injection). ### 2.3. Mitochondrial Isolation Mitochondria were isolated from N2a and mNSC using the mitochondria isolation kit (ThermoFisher Scientific, USA, Cat#89874) as previously described [32, 44]. 
Briefly, after cultured cells were orderly digested (trypsin) and centrifuged (300 g, 5 min) and the supernatant was removed, collected cells were resuspended by mitochondrial isolation reagent A (800 μl) in a 2.0 ml microcentrifuge tube and vortexed for 5 s and then incubated for 2 min on ice. Then, the reagent B (10 μl) was further added into the tube and continuously placed in situ for 5 min. Following vortexed at maximal speed for 5 times (each time for 1 min), the reagent C (800 μl) was added into the tube and mixed. Subsequently, the mixed solution was centrifuged (700 g, 10 min, 4°C) and then the supernatant was obtained for further centrifugation (12000 g, 15 min, 4°C). Finally, fresh mitochondria were obtained and used for further experiments. For animal experiments, each rat received mitochondria isolated from 1×107 cells, and the protein content was about 180 μg-200 μg. ### 2.4. Transmission Electron Microscopy (TEM) Cells were fixed with 2.5% glutaraldehyde for 2 h at room temperature and then centrifuged (300×g, 5 min). Subsequently, cells were postfixed with precooled 1% osmic acid (2 h, 4°C) and then centrifuged again (300×g, 5 min). After gradient alcohol dehydration and penetration with a solution of acetone and epoxy resin at different proportions, the cell samples were further embedded into epoxy resin and solidified for 48 h. Subsequently, the embedded samples were sectioned (thickness: 60-100 nm) and then double-stained with 3% uranyl acetate and lead citrate. Finally, the stained sections were observed and imaged by TEM (Tecnai G2 20 TWIN, FEI Company, Oregon, USA). ### 2.5. Mitochondrial Membrane Potential Analysis The mitochondrial membrane potential (MMP/ΔΨm) was assessed by JC-1 dye (Beyotime Biotechnology, Shanghai, China, Cat#C2006) and detected by flow cytometry and confocal microscopy, according to previous methods [45, 46]. For flow cytometry, single-cell suspensions of mNSC and N2a were prepared and then coincubated with JC-1 work solution for 20 min at 37°C. Next, sample cells were centrifuged (600 g, 4°C, 5 min) and washed with JC-1 buffer solution 2 times. Subsequently, resuspended cells were subjected to flow cytometry tests. For image, cells were seeded in glass-bottom Petri dishes. 24 hours later, cells were coincubated with JC-1 work solution for 20 min at 37°C, washed with JC-1 buffer, and then examined by confocal microscopy. ### 2.6. Polymerase Chain Reaction (PCR) Absolute quantitative PCR was performed as previously described [47–49]. The ratio of mtDNA and nuclear DNA was used to assess relative mtDNA copy number. In this experiment, mt-ND1/β-globin and mt-RNR1/β-actin were used to represent abundance. The sequences of the primers are described in Table S1. ### 2.7. Mitochondrial Stress Test A mitochondrial stress test was performed using the Seahorse XF Cell Mito Stress Test Kit according to the manufacturer’s instruction [33, 49, 50]. Oxygen consumption rate (OCR), basic OCR, and maximal OCR were used as the main evaluation indicators. Different levels of cells were tested, including 1×105 and 2×105. ### 2.8. Hypoxia-Reoxygenation (H/R) Cell Model and Mitochondrial Transplantation The H/R cell model was induced by 48 h of hypoxia (1% O2) in a tri-gas CO2 incubator and 24 h of routine culture, according to previously described methods [51]. 
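For reference, the relative mtDNA copy number defined in the PCR subsection above is simply the ratio of mitochondrial to nuclear gene copies. Under the simplifying assumption of equal, near-100% amplification efficiencies it can be approximated from threshold cycles as shown below; the authors report absolute quantification with standard curves, so this is an approximation rather than their exact calculation:

$$\text{relative mtDNA copy number}=\frac{N_{\text{mt-ND1}}}{N_{\beta\text{-globin}}}\approx 2^{\,C_t(\beta\text{-globin})-C_t(\text{mt-ND1})}$$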
The cultured cells were divided into 3 groups: control group (routine culture (48 h)+replacing medium+routine culture (24 h)), H/R group (hypoxic culture (48 h)+replacing medium+continuing routine culture (24 h)), and H/R+mitochondrial treatment group (hypoxic culture (48 h)+ replacing medium (containing exogenous mitochondria)+continuing routine culture (24 h)). The ratio of mitochondrial donor cell number to the receiver is 5 (e.g., 2×105 cells need mitochondria isolated from 1×106 cells). ### 2.9. Cell Viability Assay The viability was assessed by Cell Counting Kit-8 (CCK-8) (Dojindo Laboratories, Kumamoto, Japan, Cat#CK04) according to the manufacturer’s instruction. Briefly, N2a were coincubated with the CCK-8 working solution at 37°C for 3 h in the light-avoided environment. Then, cells were detected at 450 nm by a microplate reader (Molecular Devices, Sunnyvale, CA, USA). ### 2.10. ROS Measurement by Flow Cytometry DCFH-DA probes (Beyotime Biotechnology, Shanghai, China, Cat#S0033S) were used to measure the ROS levels in cells according to the manufacturer’s instruction. Fluorescence intensity was detected by flow cytometry and fluorescence plate reader. Briefly, after coincubated with DCFH-DA probes (10μmol/l, excitation wavelength: 488 nm and emission wavelength: 525 nm) at 37°C for 30 min, cells were detected by a microplate reader (Molecular Devices, Sunnyvale, CA, USA) or collected by centrifugation (300 g, 5 min); after resuspended with PBS, DCFH-DA-labeled cells were further detected by flow cytometry. ### 2.11. Western Blot Western blot was performed as previously described [52, 53]. The following primary antibodies were used for WB detection: anti-MFN1 (1 : 500), anti-OPA1 (1 : 1000), and anti-DRP1 antibodies (1 : 1000) were all purchased from Proteintech (Chicago, IL, USA); and anti-Bax (1 : 2000), anti-Bcl-2 (1 : 2000), anti-caspase-3 (1 : 2000), and anti-GAPDH antibodies (1 : 10000) were all purchased from Abcam (Cambridge, Cambs, UK). GAPDH served as internal reference. WB bands were detected with Gel-Pro Analyzer (Media Cybernetics, MD, USA). ### 2.12. Cell Apoptosis Cell apoptosis was evaluated using an Annexin V-FITC/PI Apoptosis Detection Kit (BD Biosciences, NJ, USA, Cat#40302) according to the manufacturer’s instruction. Briefly, cells were coincubated with Annexin V-FITC and then propidium iodide for 15 min at RT in a light-avoided environment and then detected by flow cytometry. ### 2.13. Middle Cerebral Artery Occlusion (MCAO) Intraluminal filament occlusion was used to induce focal cerebral ischemia injury [54, 55]. Anesthetized by 2% pentobarbital sodium (45 mg/kg), the rats were placed in a prone position. Then, the left common carotid artery, external carotid artery (ECA), and internal carotid artery (ICA) were exposed. Next, a silicon-coated monofilament suture was gradually inserted through the left ECA and was moved up into the left ICA to successfully occlude the left middle cerebral artery (MCA) and remained in situ for 120 min. Subsequently, the suture was carefully removed, the ECA was permanently ligated, and the incision was sutured. Sham group rats were subjected to the same procedure except for the 120 min occlusion of MCA. Experimental animals were then placed into individual cages and provided a standard diet and water. After 120 min occlusion, right before ICA reperfusion, the isolated mitochondria (from 1×107 cells, the protein content was about 180 μg-200 μg) or saline (10 μl) was injected into the ICA and all incisions were closed. 
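To make the dosing described above concrete (the 5:1 donor-to-recipient cell ratio used in the H/R experiments and the per-rat dose prepared from 1×107 donor cells, about 180-200 μg of mitochondrial protein), here is a minimal sketch; the helper names and the linear protein scaling are our illustrative assumptions, not part of the published protocol:

```python
# Minimal sketch of the dosing arithmetic in Sections 2.8 and 2.13 above.
# Helper names and the linear protein scaling are illustrative assumptions.

DONOR_TO_RECIPIENT_RATIO = 5                  # mitochondria from 5 donor cells per recipient cell
PROTEIN_UG_PER_1E7_DONOR_CELLS = (180, 200)   # reported yield for one rat dose (1x10^7 cells)

def donor_cells_needed(recipient_cells: float) -> float:
    """Number of donor cells whose mitochondria are added to the recipient culture."""
    return recipient_cells * DONOR_TO_RECIPIENT_RATIO

def estimated_protein_ug(donor_cells: float) -> tuple[float, float]:
    """Rough mitochondrial protein range, assuming yield scales linearly with donor cell number."""
    low, high = PROTEIN_UG_PER_1E7_DONOR_CELLS
    return (donor_cells / 1e7 * low, donor_cells / 1e7 * high)

# Example: 2x10^5 recipient cells (as in the H/R model) and one per-rat dose.
print(donor_cells_needed(2e5))      # -> 1e6 donor cells, matching the example in Section 2.8
print(estimated_protein_ug(1e7))    # -> (180.0, 200.0) micrograms, matching Section 2.13
```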
### 2.14. Neurobehavioral Evaluation Neurobehavioral deficits were evaluated 24 h after mitochondrial transplantation using multiple scales, including the Clark general functional deficit score [56, 57], the Clark focal functional deficit score [56, 57], the modified neurological severity score (mNSS) [55, 58], and the rotarod test [55, 59]. Behavioral assessments were conducted by two skillful investigators who were both blinded to the animal groups. ### 2.15. Cerebral Infarct Area Detection Triphenyl tetrazolium chloride (TTC) staining was used to display the area of cerebral infarction [60, 61]. Briefly, 24 h after MCAO, the rats were deeply anesthetized and perfused with PBS transcardially, after which the rat brains were obtained and cut into 2 mm thick coronal sections. Subsequently, the brain sections were incubated with a 2% TTC solution at 37°C for 30 min in darkness. Then, stained slices were placed from the frontal to occipital order, and macroscopic images were obtained with a digital camera. Infarct areas were measured by Adobe Photoshop 21.0.0 (Adobe Systems Inc., San Jose, CA, USA). ### 2.16. Transcriptomic Analysis RNA sequencing was performed as previously described [62, 63]. Downstream analysis was performed by R (R Foundation for Statistical Computing, Vienna, Austria). ### 2.17. Mitochondria and Lysosome Labeling The mitochondrial fluorescent dyes MitoTracker™ Red CMXRos (ThermoFisher Scientific, Waltham, MA, USA), MitoTracker™ Green FM (ThermoFisher Scientific, Waltham, MA, USA), and MitoBright Deep Red (Dojindo Laboratories, Kumamoto, Japan) were used to label mitochondria according to the manufacturer’s instruction. In addition, 293T cells expressing COX8A gene N-terminal signal peptide-mCherry fusion protein were constructed by lentivirus (Inovogen Tech, Chongqi, China, Cat#3512) and the mitochondria were well labeled. The Lyso Dye (Dojindo Laboratories, Kumamoto, Japan, Cat#MD01) was used to label lysosomes according to the manufacturer’s instruction. ### 2.18. Statistical Analysis Data that conform to a normal distribution with homogeneous variance are expressed as mean±standard deviation (SD), and Student’s t-test or one-way analysis of variance (ANOVA) was used to compare the differences between two groups or among multiple groups, respectively. Data with a nonnormal distribution are presented as median (25%, 75% quantiles), and Mann-Whitney U-test was taken into consideration. Statistical analysis and diagram generation were performed using GraphPad Prism 8.0.1 (GraphPad Software, Inc., San Diego, CA, USA). ∗p<0.05 and ∗∗p<0.01 were considered to be statistically different.
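To make the test-selection rule in the statistical analysis subsection explicit, here is a minimal sketch of the same logic (normality check first, then a parametric or nonparametric comparison) using SciPy; the authors performed their analysis in GraphPad Prism, so this is an illustration rather than their actual pipeline:

```python
# Minimal sketch of the test-selection logic in Section 2.18 (illustration only;
# the published analysis was done in GraphPad Prism 8).
from scipy import stats

def compare_two_groups(a, b, alpha=0.05):
    """Shapiro-Wilk normality check, then Student's t-test or Mann-Whitney U accordingly."""
    normal = stats.shapiro(a).pvalue > alpha and stats.shapiro(b).pvalue > alpha
    if normal:
        # Student's t-test with equal variances assumed, as in the description above.
        return "t-test", stats.ttest_ind(a, b).pvalue
    return "Mann-Whitney U", stats.mannwhitneyu(a, b, alternative="two-sided").pvalue

# For more than two normally distributed groups, one-way ANOVA would be used instead,
# e.g. stats.f_oneway(group1, group2, group3).
print(compare_two_groups([59.6, 56.3, 53.4], [21.8, 30.7, 25.0]))  # made-up example values
```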
## 3. Results ### 3.1. Characteristics of Mitochondrial Donors The ideal source of mitochondria is one that is readily available and can be amplified in large numbers.
Both stem cells and tumor cells meet this requirement. Therefore, we chose N2a and mNSC as mitochondrial source cells to assess a range of mitochondrial characteristics. To evaluate theΔΨm, JC-1 dye was used. Representative images showed N2a has more red components than mNSC (Figures 1(a) and 1(b)). Flow cytometry analysis confirmed that N2a had a higher ΔΨm than mNSC (N2a vs. mNSC: 10.55±0.85 vs. 2.56±0.36, p<0.01) (Figure 1(c)). In addition, we observed that the mtDNA abundance of mNSC was higher than N2a based on the mitochondrial-nuclear DNA ratio, mt-ND1/β-globin (N2a vs. mNSC: 374.0±11.5 vs. 731.1±110.4, p<0.01) (Figure 1(d)) and mt-RNR1/β-actin (N2a vs. mNSC: 149.1±13.07 vs. 593.4±108.3, p<0.01) (Figure 1(e)). We subsequently analyzed the oxidative respiration capacity of mitochondria from N2a and mNSC based on the Seahorse XF analysis platform. The OCR-time diagram is shown in Figure 1(f) (N2a 1∗105 vs. mNSC 1∗105) and Figure 1(i) (mNSC 1∗105 vs. mNSC 2∗105), which implied a huge difference in oxidative respiratory activity between N2a and mNSC. Basal OCR of N2a (1×105 cells) was significantly higher than those of mNSC (1×105 cells) (N2a vs. mNSC: 248.70±56.33 pmol/min vs. 22.14±5.09 pmol/min, p<0.01) (Figure 1(g)). Similarly, N2a (1×105 cells) exhibited higher maximal OCR values than those of mNSC (1×105 cells) (N2a vs. mNSC: 363.90±123.70 pmol/min vs. 28.14±7.50 pmol/min, p<0.01) (Figure 1(h)). These results suggested that compared to mNSC, mitochondria from N2a exhibited a relatively stronger oxidative respiration capacity. In addition, mitochondrial morphology is presented in Figure S1 and mNSC culture and identification data are presented in Figure S2. Metabolic profiles of N2a and mNSC were quite different (Figure S3). Tumorigenicity evaluation of N2a and mNSC is presented in Figure S4. These results suggested that N2a-derived mitochondria have higher oxidative respiratory activity and lower mtDNA copy number, that the mitochondria from mNSC and N2a have similar morphology, and that they have different metabolomic profiles, and neither is tumorigenic. Therefore, we chose the N2a as a major source of mitochondria for subsequent experiments.Figure 1 Characteristics of mitochondrial donors. (a–c) Mitochondria of N2a (a) and mNSC (b) labeled with JC-1, JC-1 aggregates show red and monomers show green; the fluorescence intensity ratio of JC-1 of N2a and mNSC detected by flow cytometry, N2a showed higherΔΨm ((c), n=3). (d, e) Relative mtDNA copy number of N2a and mNSC indicated by mt-ND1/β-globin (d) and RNR1/β-actin (e), N2a showed lower mtDNA copy number (n=3). (f–i) Seahorse XF cell mitochondrial stress test; the performance of N2a and mNSC at 1×105 level (f), N2a exhibited higher oxidative respiratory activity (n=6); quantitative analysis of basal (g) and maximal (h) OCR of two cells showed that N2a exhibits higher oxidative respiratory activity (n=9). The performance of mitochondrial stress test form NSC at 1×105 (same data as in (f)) and 2×105 level showed mNSC reacted well and did not die in the comparison with N2a (i); we did the test with N2a 1×105, mNSC 1×105, and mNSC 2×105 at one time but presented them in two graphs due to the order of magnitude. ∗∗p<0.01. (a)(b)(c)(d)(e)(f)(g)(h)(i) ### 3.2. Mitochondrial Transplantation Increased Cell Viability and Attenuated ROS and Apoptosis Level under H/R Condition We verified the effect of exogenous mitochondria on cell viability in a cellular model. 
After 48 h of hypoxia (1% O2), exogenous mitochondria were added and N2a continued being cultured for the next 24 h, and then, CCK-8 was performed to detect cell viability. The result indicated that the cell viability was correlated with the presence of oxygen after mitochondrial intervention. When mitochondrial-treated N2a continued being cultured under a hypoxic condition, the cell viability decreased dramatically (hypoxia vs. hypoxia+Mito: 1.00±0.03 vs. 0.07±0.01, p<0.01) (Figure 2(a)). When continued being cultured under a reoxygenation condition, the exogenous mitochondrial intervention significantly improved the cell viability (reoxygenation vs. reoxygenation+Mito: 1.00±0.12 vs. 1.24±0.14, p<0.01) (Figure 2(b)). To evaluate the ROS levels, DCFH-DA probes were applied and the fluorescence intensity was measured by flow cytometry and fluorescence plate reader. The flow cytometry results showed the H/R intervention significantly increases ROS levels and exogenous mitochondrial transplantation attenuates that process (control, H/R, H/R+Mito: 42.6±0.17, 115.0±1.00, and 101.7±2.41, p<0.05) (Figures 2(c) and 2(d)). Similarly, the fluorescent plate reader confirmed the results (control, H/R, H/R+Mito: 173.2±13.74, 606.1±23.45, and 416.9±31.59, p<0.01) (Figure 2(e)). To measure the apoptosis level, we performed flow cytometry analysis and Western blot. After H/R injury, the apoptosis ratio of N2a dramatically increased (H/R vs. control: 37.90±0.46% vs. 4.44±0.07%, p<0.01) (Figures 2(f) and 2(g)), which was significantly reduced by mitochondrial transplantation (H/R vs. Mito+H/R: vs. 24.35±0.54%, p<0.01) (Figures 2(f) and 2(g)). Similar results were obtained for the expression levels of apoptosis-related proteins, which also suggested that H/R dramatically promoted the upregulation of the Bax/Bcl-2 ratio (H/R vs. control: 16.28±3.82 vs. 1.00±0.11, p<0.01) (Figures 2(h) and 2(i)) and caspase-3 protein (H/R vs. control: 2.21±0.12 vs. 1.00±0.16, p<0.01) (Figures 2(h) and 2(j)). Exogenous mitochondrial transplantation significantly downregulated the ratio of Bax/Bcl-2 (H/R vs. H/R+Mito: vs. 4.25±0.34, p<0.01) (Figures 2(h) and 2(i)) and protein levels of caspase-3 (H/R vs. H/R+Mito: vs. 1.65±0.03, p<0.01) (Figures 2(h) and 2(j)) in cultured cells.Figure 2 Mitochondrial transplantation increased cell viability and attenuated ROS and apoptosis level under H/R condition. (a, b) Cell viability measured by CCK-8; after 48 h of hypoxia, exogenous mitochondria were added and N2a continued to be cultured in a hypoxic (a) or normoxic (b) environment (n=6). (c–e) ROS levels labeled with DCFH-DA probes and measured by flow cytometry, presented as a typical histogram of fluorescence intensity distribution (c) and a fluorescence intensity bar graph ((d), n=3), and fluorescent plate reader ((e), n=3). (f–j) Apoptosis levels were detected by flow cytometry for Annexin V and PI positivity ((f, g), n=3) and by related protein expression; representative WB bands were obtained (h), and correspondingly, quantitative analysis of Bax/Bcl-2 (i) and caspase-3 (j) was shown. Values were reported as means±SD. ∗∗p<0.01. (a)(b)(c)(d)(e)(f)(g)(h)(i)(j) ### 3.3. Mitochondrial Transplantation Improved Neurobehavioral Deficits and Reduced Infract Size of MCAO Rats To evaluate the effect of exogenous mitochondria on behavior and infarct size, we used the MCAO model. 
The Clark general/focal scale, the mNSS, and the rotarod test were used to assess neurological behavior deficits, and TTC staining was performed to measure infarct size. The results suggested that mitochondrial transplantation significantly improved neurological behavior deficits. For sham, I/R, and I/R+Mito groups, the sample size was 8, 9, and 7, and the Clark general scale was 0 (0,0), 3 (1.5,6), and 1 (0,1) (nonnormally distributed data are expressed as median, 25%, 75% quantile), which indicated that mitochondria improved neurological outcome (p<0.05). The Clark focal scale and mNSS confirmed the following: Clark focal: 0±0, 8.33±5.57, and 2.57±1.81, p<0.05; mNSS: 0±0, 8.56±3.01, and 4.71±2.63, p<0.05) (Figures 3(a)–3(c)). The rotarod test also validated the conclusion. The latency time to fall of sham, I/R, and I/R+Mito groups before the surgery was 59.63±11.77 s, 56.33±6.76 s, and 53.43±11.66 s (p>0.05) and 55.63±15.01 s, 21.78±6.78 s, and 30.71±8.98 s (p<0.05) after the surgery (Figure 3(d)). The TTC results suggested exogenous mitochondrial transplantation could reduce the infarct size of the MCAO model (I/R vs. I/R+Mito 26.02±3.24% vs. 13.36±4.00%, p<0.05) (Figures 3(e) and 3(f)).Figure 3 Mitochondrial transplantation improved neurobehavioral deficits and reduced infarct size of MCAO rats. (a–d) Neurological behavior assessment; 24 h after mitochondrial transplantation, multiple score scales were used to evaluate the neurological deficits induced by MCAO, including Clark general functional deficit score (a), Clark focal functional deficit score (b), mNSS score (c), and rotarod test (d). (e, f) Infarction size evaluation; brain infarction areas were stained by TTC (e), and relatively quantitative analysis of infarction size was assessed (f). Values were reported asmeans±SD. ∗p<0.05. Sham = sham-operated; I/R = MCAO+reperfusion with saline injection; I/R+Mito = MCAO+reperfusion with mitochondrial injection. (a)(b)(c)(d)(e)(f) ### 3.4. Effects of Mitochondrial Transplantation on Transcriptomic Profile Imply Metabolic Alteration In order to clarify the effects of mitochondrial transplantation on transcriptomic profile and its possible mechanism, we performed RNA sequence analysis. The results identified 14 upregulated genes and 12 downregulated genes between control and H/R groups, 27 upregulated genes and 98 downregulated genes between control and H/R+Mito groups, and 17 upregulated genes and 1 downregulated gene between H/R and H/R+Mito groups. The overlapping genes are presented in Figure4(c). The following KEGG pathway enrichment analysis suggested exogenous mitochondria may affect metabolism-related pathways, especially lipid metabolism-related molecules and pathways such as the PPAR signal pathway, insulin signal pathway, fat intake and digestion-related pathway, cholesterol metabolism, glycolysis, and gluconeogenesis (Figure 4(d)). These results indicated that exogenous mitochondria may be capable of altering the metabolic characteristics of host cells, possibly resulting in metabolic reprogramming.Figure 4 Effects of mitochondrial transplantation on transcriptomic profile imply metabolic alteration. (a) PCA plot, axis 1 18.6%, axis 2 16.5%. (b) Heatmap, horizontal axis-different cell samples; longitudinal axis-different genes; color depth-expression levels of genes. (c) Venn diagram. (d) KEGG bubble chart of the differentially expressed genes between H/R and H/R+Mito groups. (a)(b)(c)(d) ### 3.5. 
Pattern of Exogenous Mitochondrial Transfer Implies Mitochondrial Component Separation To better understand the mechanism of exogenous mitochondria, we first traced its pathway. MitoTracker™ Red CMXRos and MitoTracker™ Green FM were used to label mitochondria. When the mitochondria of N2a labeled with MitoTracker Green were isolated and added to the medium of another N2a which was labeled with MitoTracker Red, a few hours later, we found that the red and green were completely fused (Figure5(a)). This phenomenon implied that exogenous mitochondria can be fused with endogenous mitochondria of the host cell. Next, we isolated red dye-labeled mitochondria and added them to the medium of green dye-labeled cell. The result validated the previous (Figure 5(b)). To figure out if this fusion property of exogenous mitochondria is species-limited, we isolated red dye-labeled mitochondria from a human-derived cell line-U87 and added it to mNSC medium. Mitochondria fused again (Figure 5(c)). This suggested that the ability of exogenous mitochondria to fuse with host cell mitochondria is cross-species.Figure 5 Exogenous mitochondria colocalized with endogenous mitochondria despite species by coincubation. (a) From left to right, mitochondria of host N2a labeled with MitoTracker Red; isolated green mitochondria entered host N2a; nucleus with DAPI; the merged image. (b) Isolated red mitochondria entered host N2A; mitochondria of host N2a labeled with MitoTracker Green; DAPI; the merged image. (c) Isolated red mitochondria derived from U87 entered host mNSC; mitochondria of host mNSC labeled with MitoTracker Green; DAPI: the merged image. Scale bar: 10μm.To further verify the fusion property of exogenous mitochondria, we constructed a 293T cell overexpressing COX8A N-terminal signal peptide-mCherry to labeled mitochondria. And this tag had little effect on cell and mitochondrial function (FigureS5). When the red-mitochondria 293T cell was labeled with MitoTracker Green again, we found all the mitochondria have both red and green markers (Figure 6(a)). The mitochondria isolated from the double-labeled cell also showed the colocation of the two colors (Figure 6(b)). When we added the double-labeled isolated mitochondria to the medium of a 293T cell labeled with MitoBright Deep Red (set to pink), we came to an interesting result. The pink mitochondria (endogenous mitochondria of host cell) completely colocalized with exogenous green mitochondria, while only a portion of pink mitochondria overlapped with exogenous red mitochondria. Moreover, the two-color marker of the same exogenous mitochondria was partially separated (Figure 6(c)). All of these suggested that the exogenous mitochondrial components segregate when cocultured with host cells and a specific part can fuse with the endogenous mitochondria of host cells and the rest part has another fate. During the mitochondrial transfer process, we found that the green component can be internalized immediately by the host cell within 1 h, while the red component was internalized in a much slower way (Figures 7(a)–7(e)). This again confirmed the different fate of different mitochondrial components. We further investigated the pattern of exogenous mitochondrial transfer using double-labeled 293T as a mitochondrial donor and pink-labeled induced pluripotent stem cell (iPSC) as host. 
The results showed nearly all the green components colocalized with the pink endogenous mitochondria and the red component was concentrated at the edge of the cell clones or in the scattered cells (Figures 8(a)–8(c)). These suggested the internalization efficiency of red component may be related to intercellular connections.Figure 6 Double-labeled exogenous mitochondria exhibited component separation after being internalized by host cells. (a) Double-labeled mitochondria of 293T; from left to right, mitochondria labeled with a fluorescent protein (Mito-mCherry); mitochondria labeled with MitoTracker Green; DAPI; the merged image showed completely colocalization. (b) Isolated mitochondria labeled with red and green markers; from left to right, isolated mitochondria labeled with Mito-mCherry; isolated mitochondria labeled with MitoTracker Green; the merge image showed well co-localization. (c) From left to right, top to bottom; the red components of exogenous mitochondria entered the host cell, and the distribution was shown; the green components of exogenous mitochondria entered the host cell and its distribution; endogenous mitochondria of the host cell labeled with MitoBright Deep Red (pink); the merged image of three colors showed fusion and separation; the merged image of green and pink indicated that parts of the green overlap with the pink; the merged image of red and pink showed that seldom red components overlap with the pink; the merged image of green and red suggested that a portion of red components overlaps with the green components. Scale bar: 10μm, 100 μm, and 10 μm. (a)(b)(c)Figure 7 Time pattern of different mitochondrial components during transfer. (a) Mitochondria of host 293T cell labeled with MitoBright Deep Red; isolated double-labeled mitochondria were added to the medium of host cell instantly. (b) 1 h after coincubation. (c) 2 h after coincubation. (d) 3 h after coincubation. (e) 24 h after coincubation. Scale bar: 10μm.Figure 8 Intercellular connections affect the transfer of different components of mitochondria. (a) Small iPSC clone; mitochondria of host iPSC labeled with MitoBright Deep Red (Endo-Mito); isolated double-labeled mitochondria (Exo-Mito) were added to the medium of host cell; the picture showed the green components of exogenous mitochondria colocalized with endogenous mitochondria of host iPSC, while the red components mainly concentrated at the edges of cell clones and scattered cells. (b) iPSC clone with tight intercellular connections; the red component entered the cell in a random pattern. (c) iPSC clone with tight intercellular connections and edges; the red component mainly concentrated at the edges of cell clones and scattered cells. Scale bar: 200μm. ### 3.6. The Fate of Exogenous Mitochondria Is Fusion and Lysosomal Degradation To prove the theory that a portion of exogenous mitochondria can fuse with endogenous mitochondria, we assessed the mitochondrial dynamics by WB analysis. Our results suggested that after H/R treatment, the expression levels of the mitochondrial fusion-related proteins MFN1 (control vs. H/R,p<0.01) and OPA1 (control vs. H/R, p<0.01) were dramatically reduced and mitochondrial fission-related protein DRP1 was significantly increased (control vs. H/R, p<0.01). Exogenous mitochondrial intervention alleviated this process and increased the expression of MFN1 (H/R vs. H/R+Mito, p<0.01) and OPA1 (H/R vs. H/R+Mito, p<0.01) but did not significantly reverse DRP1 expression (Figures 9(b)–9(e)). 
The above data further confirm the fusion property of exogenous mitochondria.Figure 9 The fate of exogenous mitochondria is fusion and lysosomal degradation. (a) Endogenous lysosomes were marked by Lyso Dye (green), and exogenous mitochondria were labeled with COX8A N-terminal signal peptide-mCherry (red); the red mitochondria colocalized with green lysosomes. (b–e) WB analysis of mitochondrial dynamic proteins; typical WB bands of MFN1, OPA1, and DRP1 proteins were obtained (b), and relatively quantitative analysis of MFN1 (c), OPA1 (d), and DRP1 (e) was carried out. Scale bar: 10μm. Values were reported as means±SD. ∗∗p<0.01; ns: not statistical significant, p>0.05. (a)(b)(c)(d)(e)To figure out the fate of the unfused, COX8A N-terminal signal peptide-mCherry fusion protein-labeled mitochondria, we did lysosomal staining with Lyso Dye. Interestingly, the red component of mitochondria is totally colocalized with lysosomes of the host cell (Figure9(a)), suggesting the fate of unfused components of exogenous mitochondria is lysosomal degradation. ## 3.1. Characteristics of Mitochondrial Donors The ideal source of mitochondria is one that is readily available and can be amplified in large numbers. Both stem cells and tumor cells meet this requirement. Therefore, we chose N2a and mNSC as mitochondrial source cells to assess a range of mitochondrial characteristics. To evaluate theΔΨm, JC-1 dye was used. Representative images showed N2a has more red components than mNSC (Figures 1(a) and 1(b)). Flow cytometry analysis confirmed that N2a had a higher ΔΨm than mNSC (N2a vs. mNSC: 10.55±0.85 vs. 2.56±0.36, p<0.01) (Figure 1(c)). In addition, we observed that the mtDNA abundance of mNSC was higher than N2a based on the mitochondrial-nuclear DNA ratio, mt-ND1/β-globin (N2a vs. mNSC: 374.0±11.5 vs. 731.1±110.4, p<0.01) (Figure 1(d)) and mt-RNR1/β-actin (N2a vs. mNSC: 149.1±13.07 vs. 593.4±108.3, p<0.01) (Figure 1(e)). We subsequently analyzed the oxidative respiration capacity of mitochondria from N2a and mNSC based on the Seahorse XF analysis platform. The OCR-time diagram is shown in Figure 1(f) (N2a 1∗105 vs. mNSC 1∗105) and Figure 1(i) (mNSC 1∗105 vs. mNSC 2∗105), which implied a huge difference in oxidative respiratory activity between N2a and mNSC. Basal OCR of N2a (1×105 cells) was significantly higher than those of mNSC (1×105 cells) (N2a vs. mNSC: 248.70±56.33 pmol/min vs. 22.14±5.09 pmol/min, p<0.01) (Figure 1(g)). Similarly, N2a (1×105 cells) exhibited higher maximal OCR values than those of mNSC (1×105 cells) (N2a vs. mNSC: 363.90±123.70 pmol/min vs. 28.14±7.50 pmol/min, p<0.01) (Figure 1(h)). These results suggested that compared to mNSC, mitochondria from N2a exhibited a relatively stronger oxidative respiration capacity. In addition, mitochondrial morphology is presented in Figure S1 and mNSC culture and identification data are presented in Figure S2. Metabolic profiles of N2a and mNSC were quite different (Figure S3). Tumorigenicity evaluation of N2a and mNSC is presented in Figure S4. These results suggested that N2a-derived mitochondria have higher oxidative respiratory activity and lower mtDNA copy number, that the mitochondria from mNSC and N2a have similar morphology, and that they have different metabolomic profiles, and neither is tumorigenic. Therefore, we chose the N2a as a major source of mitochondria for subsequent experiments.Figure 1 Characteristics of mitochondrial donors. 
(a–c) Mitochondria of N2a (a) and mNSC (b) labeled with JC-1, JC-1 aggregates show red and monomers show green; the fluorescence intensity ratio of JC-1 of N2a and mNSC detected by flow cytometry, N2a showed higherΔΨm ((c), n=3). (d, e) Relative mtDNA copy number of N2a and mNSC indicated by mt-ND1/β-globin (d) and RNR1/β-actin (e), N2a showed lower mtDNA copy number (n=3). (f–i) Seahorse XF cell mitochondrial stress test; the performance of N2a and mNSC at 1×105 level (f), N2a exhibited higher oxidative respiratory activity (n=6); quantitative analysis of basal (g) and maximal (h) OCR of two cells showed that N2a exhibits higher oxidative respiratory activity (n=9). The performance of mitochondrial stress test form NSC at 1×105 (same data as in (f)) and 2×105 level showed mNSC reacted well and did not die in the comparison with N2a (i); we did the test with N2a 1×105, mNSC 1×105, and mNSC 2×105 at one time but presented them in two graphs due to the order of magnitude. ∗∗p<0.01. (a)(b)(c)(d)(e)(f)(g)(h)(i) ## 3.2. Mitochondrial Transplantation Increased Cell Viability and Attenuated ROS and Apoptosis Level under H/R Condition We verified the effect of exogenous mitochondria on cell viability in a cellular model. After 48 h of hypoxia (1% O2), exogenous mitochondria were added and N2a continued being cultured for the next 24 h, and then, CCK-8 was performed to detect cell viability. The result indicated that the cell viability was correlated with the presence of oxygen after mitochondrial intervention. When mitochondrial-treated N2a continued being cultured under a hypoxic condition, the cell viability decreased dramatically (hypoxia vs. hypoxia+Mito: 1.00±0.03 vs. 0.07±0.01, p<0.01) (Figure 2(a)). When continued being cultured under a reoxygenation condition, the exogenous mitochondrial intervention significantly improved the cell viability (reoxygenation vs. reoxygenation+Mito: 1.00±0.12 vs. 1.24±0.14, p<0.01) (Figure 2(b)). To evaluate the ROS levels, DCFH-DA probes were applied and the fluorescence intensity was measured by flow cytometry and fluorescence plate reader. The flow cytometry results showed the H/R intervention significantly increases ROS levels and exogenous mitochondrial transplantation attenuates that process (control, H/R, H/R+Mito: 42.6±0.17, 115.0±1.00, and 101.7±2.41, p<0.05) (Figures 2(c) and 2(d)). Similarly, the fluorescent plate reader confirmed the results (control, H/R, H/R+Mito: 173.2±13.74, 606.1±23.45, and 416.9±31.59, p<0.01) (Figure 2(e)). To measure the apoptosis level, we performed flow cytometry analysis and Western blot. After H/R injury, the apoptosis ratio of N2a dramatically increased (H/R vs. control: 37.90±0.46% vs. 4.44±0.07%, p<0.01) (Figures 2(f) and 2(g)), which was significantly reduced by mitochondrial transplantation (H/R vs. Mito+H/R: vs. 24.35±0.54%, p<0.01) (Figures 2(f) and 2(g)). Similar results were obtained for the expression levels of apoptosis-related proteins, which also suggested that H/R dramatically promoted the upregulation of the Bax/Bcl-2 ratio (H/R vs. control: 16.28±3.82 vs. 1.00±0.11, p<0.01) (Figures 2(h) and 2(i)) and caspase-3 protein (H/R vs. control: 2.21±0.12 vs. 1.00±0.16, p<0.01) (Figures 2(h) and 2(j)). Exogenous mitochondrial transplantation significantly downregulated the ratio of Bax/Bcl-2 (H/R vs. H/R+Mito: vs. 4.25±0.34, p<0.01) (Figures 2(h) and 2(i)) and protein levels of caspase-3 (H/R vs. H/R+Mito: vs. 
1.65±0.03, p<0.01) (Figures 2(h) and 2(j)) in cultured cells.Figure 2 Mitochondrial transplantation increased cell viability and attenuated ROS and apoptosis level under H/R condition. (a, b) Cell viability measured by CCK-8; after 48 h of hypoxia, exogenous mitochondria were added and N2a continued to be cultured in a hypoxic (a) or normoxic (b) environment (n=6). (c–e) ROS levels labeled with DCFH-DA probes and measured by flow cytometry, presented as a typical histogram of fluorescence intensity distribution (c) and a fluorescence intensity bar graph ((d), n=3), and fluorescent plate reader ((e), n=3). (f–j) Apoptosis levels were detected by flow cytometry for Annexin V and PI positivity ((f, g), n=3) and by related protein expression; representative WB bands were obtained (h), and correspondingly, quantitative analysis of Bax/Bcl-2 (i) and caspase-3 (j) was shown. Values were reported as means±SD. ∗∗p<0.01. (a)(b)(c)(d)(e)(f)(g)(h)(i)(j) ## 3.3. Mitochondrial Transplantation Improved Neurobehavioral Deficits and Reduced Infract Size of MCAO Rats To evaluate the effect of exogenous mitochondria on behavior and infarct size, we used the MCAO model. The Clark general/focal scale, the mNSS, and the rotarod test were used to assess neurological behavior deficits, and TTC staining was performed to measure infarct size. The results suggested that mitochondrial transplantation significantly improved neurological behavior deficits. For sham, I/R, and I/R+Mito groups, the sample size was 8, 9, and 7, and the Clark general scale was 0 (0,0), 3 (1.5,6), and 1 (0,1) (nonnormally distributed data are expressed as median, 25%, 75% quantile), which indicated that mitochondria improved neurological outcome (p<0.05). The Clark focal scale and mNSS confirmed the following: Clark focal: 0±0, 8.33±5.57, and 2.57±1.81, p<0.05; mNSS: 0±0, 8.56±3.01, and 4.71±2.63, p<0.05) (Figures 3(a)–3(c)). The rotarod test also validated the conclusion. The latency time to fall of sham, I/R, and I/R+Mito groups before the surgery was 59.63±11.77 s, 56.33±6.76 s, and 53.43±11.66 s (p>0.05) and 55.63±15.01 s, 21.78±6.78 s, and 30.71±8.98 s (p<0.05) after the surgery (Figure 3(d)). The TTC results suggested exogenous mitochondrial transplantation could reduce the infarct size of the MCAO model (I/R vs. I/R+Mito 26.02±3.24% vs. 13.36±4.00%, p<0.05) (Figures 3(e) and 3(f)).Figure 3 Mitochondrial transplantation improved neurobehavioral deficits and reduced infarct size of MCAO rats. (a–d) Neurological behavior assessment; 24 h after mitochondrial transplantation, multiple score scales were used to evaluate the neurological deficits induced by MCAO, including Clark general functional deficit score (a), Clark focal functional deficit score (b), mNSS score (c), and rotarod test (d). (e, f) Infarction size evaluation; brain infarction areas were stained by TTC (e), and relatively quantitative analysis of infarction size was assessed (f). Values were reported asmeans±SD. ∗p<0.05. Sham = sham-operated; I/R = MCAO+reperfusion with saline injection; I/R+Mito = MCAO+reperfusion with mitochondrial injection. (a)(b)(c)(d)(e)(f) ## 3.4. Effects of Mitochondrial Transplantation on Transcriptomic Profile Imply Metabolic Alteration In order to clarify the effects of mitochondrial transplantation on transcriptomic profile and its possible mechanism, we performed RNA sequence analysis. 
The results identified 14 upregulated genes and 12 downregulated genes between control and H/R groups, 27 upregulated genes and 98 downregulated genes between control and H/R+Mito groups, and 17 upregulated genes and 1 downregulated gene between H/R and H/R+Mito groups. The overlapping genes are presented in Figure4(c). The following KEGG pathway enrichment analysis suggested exogenous mitochondria may affect metabolism-related pathways, especially lipid metabolism-related molecules and pathways such as the PPAR signal pathway, insulin signal pathway, fat intake and digestion-related pathway, cholesterol metabolism, glycolysis, and gluconeogenesis (Figure 4(d)). These results indicated that exogenous mitochondria may be capable of altering the metabolic characteristics of host cells, possibly resulting in metabolic reprogramming.Figure 4 Effects of mitochondrial transplantation on transcriptomic profile imply metabolic alteration. (a) PCA plot, axis 1 18.6%, axis 2 16.5%. (b) Heatmap, horizontal axis-different cell samples; longitudinal axis-different genes; color depth-expression levels of genes. (c) Venn diagram. (d) KEGG bubble chart of the differentially expressed genes between H/R and H/R+Mito groups. (a)(b)(c)(d) ## 3.5. Pattern of Exogenous Mitochondrial Transfer Implies Mitochondrial Component Separation To better understand the mechanism of exogenous mitochondria, we first traced its pathway. MitoTracker™ Red CMXRos and MitoTracker™ Green FM were used to label mitochondria. When the mitochondria of N2a labeled with MitoTracker Green were isolated and added to the medium of another N2a which was labeled with MitoTracker Red, a few hours later, we found that the red and green were completely fused (Figure5(a)). This phenomenon implied that exogenous mitochondria can be fused with endogenous mitochondria of the host cell. Next, we isolated red dye-labeled mitochondria and added them to the medium of green dye-labeled cell. The result validated the previous (Figure 5(b)). To figure out if this fusion property of exogenous mitochondria is species-limited, we isolated red dye-labeled mitochondria from a human-derived cell line-U87 and added it to mNSC medium. Mitochondria fused again (Figure 5(c)). This suggested that the ability of exogenous mitochondria to fuse with host cell mitochondria is cross-species.Figure 5 Exogenous mitochondria colocalized with endogenous mitochondria despite species by coincubation. (a) From left to right, mitochondria of host N2a labeled with MitoTracker Red; isolated green mitochondria entered host N2a; nucleus with DAPI; the merged image. (b) Isolated red mitochondria entered host N2A; mitochondria of host N2a labeled with MitoTracker Green; DAPI; the merged image. (c) Isolated red mitochondria derived from U87 entered host mNSC; mitochondria of host mNSC labeled with MitoTracker Green; DAPI: the merged image. Scale bar: 10μm.To further verify the fusion property of exogenous mitochondria, we constructed a 293T cell overexpressing COX8A N-terminal signal peptide-mCherry to labeled mitochondria. And this tag had little effect on cell and mitochondrial function (FigureS5). When the red-mitochondria 293T cell was labeled with MitoTracker Green again, we found all the mitochondria have both red and green markers (Figure 6(a)). The mitochondria isolated from the double-labeled cell also showed the colocation of the two colors (Figure 6(b)). 
When we added the double-labeled isolated mitochondria to the medium of a 293T cell labeled with MitoBright Deep Red (set to pink), we came to an interesting result. The pink mitochondria (endogenous mitochondria of host cell) completely colocalized with exogenous green mitochondria, while only a portion of pink mitochondria overlapped with exogenous red mitochondria. Moreover, the two-color marker of the same exogenous mitochondria was partially separated (Figure 6(c)). All of these suggested that the exogenous mitochondrial components segregate when cocultured with host cells and a specific part can fuse with the endogenous mitochondria of host cells and the rest part has another fate. During the mitochondrial transfer process, we found that the green component can be internalized immediately by the host cell within 1 h, while the red component was internalized in a much slower way (Figures 7(a)–7(e)). This again confirmed the different fate of different mitochondrial components. We further investigated the pattern of exogenous mitochondrial transfer using double-labeled 293T as a mitochondrial donor and pink-labeled induced pluripotent stem cell (iPSC) as host. The results showed nearly all the green components colocalized with the pink endogenous mitochondria and the red component was concentrated at the edge of the cell clones or in the scattered cells (Figures 8(a)–8(c)). These suggested the internalization efficiency of red component may be related to intercellular connections.Figure 6 Double-labeled exogenous mitochondria exhibited component separation after being internalized by host cells. (a) Double-labeled mitochondria of 293T; from left to right, mitochondria labeled with a fluorescent protein (Mito-mCherry); mitochondria labeled with MitoTracker Green; DAPI; the merged image showed completely colocalization. (b) Isolated mitochondria labeled with red and green markers; from left to right, isolated mitochondria labeled with Mito-mCherry; isolated mitochondria labeled with MitoTracker Green; the merge image showed well co-localization. (c) From left to right, top to bottom; the red components of exogenous mitochondria entered the host cell, and the distribution was shown; the green components of exogenous mitochondria entered the host cell and its distribution; endogenous mitochondria of the host cell labeled with MitoBright Deep Red (pink); the merged image of three colors showed fusion and separation; the merged image of green and pink indicated that parts of the green overlap with the pink; the merged image of red and pink showed that seldom red components overlap with the pink; the merged image of green and red suggested that a portion of red components overlaps with the green components. Scale bar: 10μm, 100 μm, and 10 μm. (a)(b)(c)Figure 7 Time pattern of different mitochondrial components during transfer. (a) Mitochondria of host 293T cell labeled with MitoBright Deep Red; isolated double-labeled mitochondria were added to the medium of host cell instantly. (b) 1 h after coincubation. (c) 2 h after coincubation. (d) 3 h after coincubation. (e) 24 h after coincubation. Scale bar: 10μm.Figure 8 Intercellular connections affect the transfer of different components of mitochondria. 
(a) Small iPSC clone; mitochondria of host iPSC labeled with MitoBright Deep Red (Endo-Mito); isolated double-labeled mitochondria (Exo-Mito) were added to the medium of host cell; the picture showed the green components of exogenous mitochondria colocalized with endogenous mitochondria of host iPSC, while the red components mainly concentrated at the edges of cell clones and scattered cells. (b) iPSC clone with tight intercellular connections; the red component entered the cell in a random pattern. (c) iPSC clone with tight intercellular connections and edges; the red component mainly concentrated at the edges of cell clones and scattered cells. Scale bar: 200μm. ## 3.6. The Fate of Exogenous Mitochondria Is Fusion and Lysosomal Degradation To prove the theory that a portion of exogenous mitochondria can fuse with endogenous mitochondria, we assessed the mitochondrial dynamics by WB analysis. Our results suggested that after H/R treatment, the expression levels of the mitochondrial fusion-related proteins MFN1 (control vs. H/R,p<0.01) and OPA1 (control vs. H/R, p<0.01) were dramatically reduced and mitochondrial fission-related protein DRP1 was significantly increased (control vs. H/R, p<0.01). Exogenous mitochondrial intervention alleviated this process and increased the expression of MFN1 (H/R vs. H/R+Mito, p<0.01) and OPA1 (H/R vs. H/R+Mito, p<0.01) but did not significantly reverse DRP1 expression (Figures 9(b)–9(e)). The above data further confirm the fusion property of exogenous mitochondria.Figure 9 The fate of exogenous mitochondria is fusion and lysosomal degradation. (a) Endogenous lysosomes were marked by Lyso Dye (green), and exogenous mitochondria were labeled with COX8A N-terminal signal peptide-mCherry (red); the red mitochondria colocalized with green lysosomes. (b–e) WB analysis of mitochondrial dynamic proteins; typical WB bands of MFN1, OPA1, and DRP1 proteins were obtained (b), and relatively quantitative analysis of MFN1 (c), OPA1 (d), and DRP1 (e) was carried out. Scale bar: 10μm. Values were reported as means±SD. ∗∗p<0.01; ns: not statistical significant, p>0.05. (a)(b)(c)(d)(e)To figure out the fate of the unfused, COX8A N-terminal signal peptide-mCherry fusion protein-labeled mitochondria, we did lysosomal staining with Lyso Dye. Interestingly, the red component of mitochondria is totally colocalized with lysosomes of the host cell (Figure9(a)), suggesting the fate of unfused components of exogenous mitochondria is lysosomal degradation. ## 4. Discussion There are four studies [30, 31, 64, 65] focused on the treatment of cerebral I/R injury with isolated mitochondria, according to the latest review [66]. All of them showed a favorable outcome in behavioral assessment or cerebral infarct size with different mitochondrial donors by intravenous or intracerebroventricular injection. However, there is no more detailed information about the source of mitochondria, the mechanism, and the fate of isolated mitochondria. And this is the information we want to provide.In order to apply mitochondrial transplantation therapy to the clinic, the first priority is the source and quality control of mitochondria. The ideal source of mitochondria is one that is readily available and can be amplified in large numbers. Both stem cells and tumor cells meet this requirement. Therefore, we chose N2a and mNSC as mitochondrial source cells to assess a range of mitochondrial characteristics. 
Previous studies isolated mitochondria from the placenta [65], or human umbilical cord-derived mesenchymal stem cells [64], or pectoralis major muscle [30], or baby hamster kidney fibroblast [31], and evaluated them mainly by MMP or respiratory activity. The outcome is closely related to the isolation and preservation process, and the consistency may not be good among different batches. The four studies did not give the reason why they choose these cells as mitochondrial donors. However, in our study, we evaluated the mitochondria before isolation through multiple dimensions, including morphology, MMP, mtDNA copy number, respiratory activity, metabolomic profile, and tumorigenicity. We hope to provide reference data when choosing a mitochondrial donor in future research.For the first time, we have identified the oxygen dependence of therapeutic effects of isolated mitochondria. This reminds us of the application scenario of isolated mitochondria, where incorrect application may lead to serious consequences. It is generally believed that exogenous mitochondria have a relatively intact function and can replace the damaged mitochondria in the host cell [67]. Considering the oxygen dependence, we presume that the exogenous mitochondria are a load for the cell. In the presence of oxygen, the cell is able to handle this load and make it functional using the large amount of ATP produced by oxidative phosphorylation. However, in hypoxic conditions, host cells require additional energy to handle this load, which accelerates cell death. Also, according to our transcriptome data, the exogenous mitochondrial function is closely related to lipid metabolism, which may increase its oxygen dependence. In fact, there is little known about oxygen dependence, and this will be one of our future research directions.The therapeutic effects of isolated mitochondria in our cell and animal models are similar to other studies [30, 31, 64, 65], which showed that mitochondrial intervention attenuated I/R injury, improved neurological outcomes, and reduced cerebral infarct size. Our data again confirmed the potential clinical application of mitochondrial transplantation. The transcriptomic data suggest that the therapeutic effect of mitochondria may be related to altered metabolism, especially lipid metabolism, providing clues for future mechanistic studies. Few studies focused on the behavior of isolated mitochondria in vivo, especially whether it can cross the brain-blood barrier. Nakamura et al. [65] injected mitochondria intravenously and found exogenous mitochondria distributed in the brain under ischemic-reperfusion condition. Shi et al. [36] injected isolated mitochondria intravenously in mice and found that the exogenous mitochondria distributed in various tissues including the brain, liver, kidney, muscle, and heart. However, we did not find that mitochondria can pass the intact blood-brain barrier in our projects. More research is needed on the permeability of the blood-brain barrier to mitochondria.The discovery of mitochondrial component separation phenomenon was based on different mitochondrial labeling techniques, and this gives us a new perspective to study the behavior of mitochondria. To our knowledge, most studies [25, 30, 31, 33, 36, 64, 65] labeled mitochondria with a single MitoTracker dye or a fluorescent fusion protein and got a conclusion based on that. However, we used both techniques to label the same mitochondria. Surprisingly, we found the different markers are separated. 
Regardless of whether it was technical or not, at least, it proved that a different conclusion may be made based on a different single-label method. Therefore, previous works need to be revisited. This is one of the important information provided in this article. MitoTracker dyes are roughly divided into voltage-dependent and non-voltage-dependent. We chose the non-voltage-dependent dye MitoTracker Green to label mitochondria, which are covalently bound to the free sulfhydryl group of cysteine in mitochondrial protein [68, 69]. Therefore, the possibility of dye transfer between mitochondria is extremely low, and no studies have reported this dye-transfer phenomenon. Thus, MitoTracker Green represents the mitochondrial component that binds to it. Fluorescent fusion proteins (most fused with mitochondrial targeting sequence of cytochrome c oxidase subunit VIII) are another commonly used method of labeling mitochondria [33, 36, 37]. Due to the wide distribution in the mitochondria, both labeling methods can display mitochondria and are well overlapped (Figure 6(a)). When the extracted double-labeled mitochondria enter the host cell, the two markers are separated, which represents that different component (not subgroups) of mitochondria has different fates. Considering the lysosomal labeling and mitochondria dynamic protein Western blot experiments, it is indicated that the isolated mitochondria may function by fusing its useful part to the host mitochondria rather than replacing it entirely. The unfused part will enter the lysosome for degradation. Furthermore, we used iPSC to study the effect of cell connections on the entry of mitochondria into host cells. The result implied that tight intercellular connections will greatly reduce the red component of isolated mitochondria from entering the host cell.Few studies have covered the fate of isolated mitochondria after being internalized by host cells. Cowan et al. [70] reported that after being incorporated into host cells, isolated mitochondria are transported to endosomes and lysosomes, and then, most of these mitochondria escape from the compartments and fuse with the endogenous mitochondrial network. Their work described the fate of exogenous mitochondria, treating mitochondria as a whole, while our work found that the different part of mitochondria has different fate, which is consistent with the interaction of components between intracellular membrane systems. This result reminds us to understand the behavior of mitochondria from a more microscopic perspective and to pay more attention to its communication with other organelles.Based on our data, we hypothesized that when the exogenous mitochondria entered into the host cells, in the presence of oxygen, mitochondrial component separation occurred, reducing ROS levels, apoptosis, and altering cellular metabolism, thus improving cell survival (Figure10). We made a reasonable assumption. Whether correct or not, it is of great significance, because this phenomenon will allow us to reexamine previous studies and consider this factor in future research design.Figure 10 Hypothetical model diagram. When isolated exogenous enters host cells, the specific components of it undergo separation. Some fuses with the host mitochondria, while the other enters lysosome and undergoes lysosomal degradation. This process reduces ROS and apoptosis and alters metabolic profile, which in turn attenuates ischemia-reperfusion injury.Still, there are limitations. 
The correctness of the conclusion is closely tied to the mitochondrial labeling methods. When previous studies labeled mitochondria by a single method, they obtained incomplete information. Current labeling methods were mainly developed to display mitochondria inside cells, and unpredictable events may occur once mitochondria are isolated; therefore, a new fate-tracking tool should be developed. Another limitation is the lack of proper controls in the fate experiments. Because so little is known about this process, it was unclear where to intervene. Although we used a mitochondrial fission inhibitor, Mdivi-1, as a control (Figure S6), it did not appear to make any difference, and a similar study did not set up controls either [70]. Therefore, our future direction is to develop new mitochondrial labeling tools and to clarify the fate and transport of isolated mitochondria with proper controls. In addition, the mechanistic conclusions need to be verified in vivo.

In general, studies of mitochondrial transplantation therapy are in their infancy, but existing data indicate a promising clinical application. Given that there is currently no effective treatment for ischemia-reperfusion injury, mitochondrial transplantation therapy offers a new approach. However, more data are needed to confirm its safety and efficacy, and more mechanistic studies are needed to clarify how it works. We hope our study provides useful information in this area and offers guidance for future studies.

## 5. Conclusions

Mitochondrial transplantation may attenuate cerebral I/R injury. The mechanism may involve selective separation of mitochondrial components, altered cellular metabolism, and reduced ROS and apoptosis in an oxygen-dependent manner. The transfer of isolated mitochondria into cells may be related to intercellular connections. --- *Source: 1006636-2021-11-20.xml*
# A Distributed Online Newton Step Algorithm for Multi-Agent Systems

**Authors:** Xiaofei Chu
**Journal:** Mathematical Problems in Engineering (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1007032

---

## Abstract

Most current algorithms for solving distributed online optimization problems are based on first-order methods, which are simple to compute but slow to converge. Newton's method converges quickly but requires the Hessian matrix and its inverse, making it computationally expensive. This paper proposes a distributed online optimization algorithm based on the Newton step, which uses first-order information of the objective function to construct a positive definite matrix that replaces the inverse of the Hessian matrix in Newton's method. The convergence of the algorithm is proved theoretically and its regret bound is obtained. Finally, numerical experiments are used to verify the feasibility and efficiency of the proposed algorithm. The experimental results show that the proposed algorithm performs efficiently on practical problems compared with several existing gradient descent algorithms.

---

## Body

## 1. Introduction

In recent years, with the development of computer network technology, distributed optimization algorithms [1–3] have been successfully applied to large-scale optimization problems and are considered an effective approach. Distributed optimization decomposes a large-scale optimization problem into multiple subproblems assigned to different agents in a multi-agent network. The agents solve their associated subproblems simultaneously and exchange information with their immediate neighbors; through this exchange, along with their respective optimization iterations, all agents eventually reach a common optimal solution that minimizes the sum of their objective functions. Many problems in science and engineering can be modeled in this way, such as machine learning [4], signal tracking and location [5], sensor networks [6, 7], and smart grids [8].

Distributed optimization assumes that the local objective function is known and invariant. However, many practical problems in diverse fields are affected by their environment, and the corresponding objective functions change over time, which requires these problems to be optimized in an online setting. In a distributed online optimization problem, each agent has only limited information: an agent learns the relevant information about its objective function only after it has made a decision based on the information currently available. The decision it makes is therefore generally not optimal, and the resulting difference is called the regret. Regret is one of the most straightforward measures of the performance of an online algorithm; obviously, the smaller the regret, the better the algorithm. Since the algorithm runs over multiple iterations, we theoretically require that the regret, averaged over the iterations, gradually approach zero as the number of iterations increases.
That is, if a cumulative regret bound for the algorithm can be obtained, it should be sublinear in the number of iterations.

Distributed online optimization and its applications in multi-agent systems have become a hot research area in the systems and control community. Inspired by the distributed dual averaging algorithm in [3], the authors of [9] proposed a distributed online weighted dual averaging algorithm for distributed optimization on dynamic networks and obtained a square-root regret bound. Yan et al. [10] introduced a distributed online projected subgradient descent algorithm and achieved square-root regret for convex local cost functions and logarithmic regret for strongly convex local cost functions. In [11], a distributed stochastic subgradient online optimization algorithm was proposed; its convergence was proved for both convex and strongly convex local objective functions, and the corresponding regret bounds were obtained. For more references on distributed online algorithms, see [12–21].

Distributed online optimization algorithms based on first-order information are simple to compute but converge slowly in most cases. Among traditional optimization methods, Newton's method converges faster than first-order methods because it uses second-order information of the objective function. Some scholars have applied it to distributed optimization problems [22–24] to improve the convergence of distributed online optimization algorithms. However, these algorithms need to compute and store the Hessian matrix of the objective, along with its inverse, at each iteration, which inevitably affects efficiency. To overcome this inconvenience, and inspired by the algorithm in reference [25], we propose a distributed online Newton step algorithm that achieves the effect of Newton's method using only first-order information of the objective function.

The major contributions of this article are as follows: (i) For the distributed online optimization problem, we propose a distributed online Newton step algorithm that avoids the computation and storage burden of implementing Newton's method while retaining its effectiveness. In the algorithm, the first-order gradient of the objective function is used to construct a positive definite matrix that plays the role of the inverse Hessian in Newton's method. Moreover, the convergence of the proposed algorithm is proved theoretically, and its regret bound is obtained. (ii) We apply the proposed algorithm to a practical problem. Its effectiveness and practicality are verified by numerical experiments. We also compare the proposed algorithm with several existing gradient descent algorithms, and the results show that its convergence rate is clearly faster.

The rest of this paper is organized as follows: in Section 2, we discuss closely related work on distributed Newton methods. The necessary mathematical preliminaries and assumptions used in this paper are introduced in Section 3. Our algorithm is stated in Section 4, and the proof of its convergence is presented in Section 5.
The performance of the proposed algorithm is verified in comparison with several gradient descent algorithms on a practical problem in Section 6, and the conclusion of this paper is given in Section 7.

## 2. Related Work

Newton and quasi-Newton methods are recognized as a class of effective algorithms for solving optimization problems. The iterative formula of Newton's method is

$$x_{k+1} = x_k - \alpha_k H_k^{-1} g_k, \tag{1}$$

where $g_k = \nabla f(x_k)$ and $H_k^{-1}$ is the inverse of the Hessian matrix $\nabla^2 f(x_k)$. Newton's method needs the second derivatives of the objective function, and the resulting Hessian matrix may not be positive definite. To overcome these shortcomings, quasi-Newton methods were proposed. Their basic idea is to construct, without second derivatives of the objective function, an approximate matrix that takes the place of the inverse Hessian in Newton's method; different ways of constructing this approximation give different quasi-Newton methods.

Although quasi-Newton methods are recognized as more efficient, they are seldom used in a distributed environment because a distributed approximation of the Newton step is difficult to design. Mark Eisen et al. [26] proposed a decentralized quasi-Newton method in which the Newton direction is determined by using the inverse Hessian to approximate the curvature of the cost functions of an agent and its neighbors. This method has good convergence but suffers storage and computational deficiencies for large data sets (the approach involves powers of matrices of size $np \times np$, with $n$ the total number of nodes and $p$ the number of features). Aryan Mokhtari, Qing Ling, and Alejandro Ribeiro [27] proposed a Network Newton method, in which a matrix that can be computed in a distributed fashion is constructed through an equivalent transformation of the original problem and replaces the original Hessian, so that the problem can be solved distributedly. The authors proved that the algorithm converges to an approximation of the optimal parameter at least at a linear rate; they further characterized the convergence rate and analyzed several practical implementation issues in [28]. Rasul Tutunov et al. [29] proposed a consensus-based distributed Newton optimization algorithm. Exploiting the sparsity of the dual Hessian, they reformulated the computation of the Newton step as the solution of diagonally dominant linear equations, realizing a distributed computation of Newton's method, and they proved theoretically that the algorithm has superlinear convergence, similar to the centralized Newton algorithm, in a neighborhood of the optimal solution. Although these algorithms realize distributed computation of the Newton method, they still need to compute the inverse of the Hessian, which is expensive.

Motivated by these observations, for the online setting we propose a distributed Newton-step algorithm that achieves a convergence rate close to that of Newton's method on the basis of distributed computing, while the inverse of the approximate Hessian can be computed easily. Numerical experiments show that our algorithm runs significantly faster than the algorithms in [9, 10, 19], with a lower computational cost per iteration.

## 3. Preliminaries

In this section, some notational conventions and basic notions are introduced first.
Then, we provide a brief description distributed online optimization problem. At the same time, some concepts will be used and relevant assumptions are represented in this paper. ### 3.1. Some Conceptions and Assumptions Then-dimension Euclidean space is denoted by ℝn, X is a subset of ℝn, and ⋅ represents the Euclidean norm. Strongly convex functions are defined as follows:Definition 1. [30] Let fx be a differentiable function on ℝn, ∇fx be the gradient of function fx at x, and X∈ℝn be a convex subset. Then fx is strictly convex on X if and only if(2)fx>fx0+∇fx0,x−x0,for all x0,x∈X×X.Lemma 1 (see [25]). Functionfx:X⟶ℝ is differentiable on the set X with a diameter D, and α>0 is a constant. For ∀x∈X,∇fx≤L and exp−αfx is concave, then when β≤min1/4LD,α, for any x,y∈X, the following inequation holds:(3)fx≥fy+∇fyTx−y+β2x−yT∇fy∇fyTx−y.Some notations about matrices are given to be used in our proof of the convergence of the algorithm. Denote the space of alln×n matrices by ℝn×n. For a matrix A=aijn×n∈ℝn×n, aij represents the entry of A at the ith row and the jth column. AT=ajin×n is the transpose of A. A denotes the determinant of A, and λi is the ith eigenvalue of the matrix A. Then, the next equations are set up: A=∏i=1nλi, trA=∑i=1naii=∑i=1nλi. In addition, for any vector x,y,z∈ℝn, equation xTAz+yTAz=x+yTAz is set up.Definition 2. [31] Matrix A∈ℝn×n is positive definite, if and only if for any x∈ℝn and x≠0 (0 denotes an n-dimensional vector where all the entries are 0), xTAx>0. ### 3.2. Distributed Online Optimization Problem We consider a multiagent network system with multiple agents, each agenti is associated with a strictly convex function (with bounded first and second derivatives) ft,ix:ℝn⟶ℝ, and the function ft,ix is varying over time. All the agents cooperate to solve the following general convex consensus problem:(4)min∑i=1nft,ix,subject tox∈X.At each roundt=1,…,T, the ith agent is required to generate a decision point xit∈X according to its current local information as well as the information received from its immediate neighbors. Then, the adversary responds to each agent′s decision with a loss function ft,ix:X⟶ℝ and each agent gets the loss ft,ixit. The communication between agents is specified by an undirected graph G=V,E, where V=1,…,n is a vertex set, and E⊂V×V denotes an edge set. Undirected means if i,j∈E then j,i∈E. Each agent i can only communicate directly with its immediate neighbors Ni=j∈V|i,j∈E. The goal of the agents is to seek a sequence of decision points xit∈X,i∈V, so that the regret with respect to each agent i regarding any fixed decision x∗∈X in hindsight(5)RTxit,x=∑t=1T∑i=1nft,ixit−ft,ix∗,is sublinear in T.Throughout this paper, we make the following assumptions:(i) each cost functionft,ix is strictly convex and twice continuous differentiable and L-Lipschitz on the convex set X(ii) X is compact and the Euclidean diameter of X is bounded by D(iii) exp−αftix is concave in the set X for all t and iBy assumption (i), the functionftix is convex in the set X, and with some reasonable assumptions over the domains of the value of α and x, exp−αftix is concave in the set X. In addition, the Lipschitz condition (i) implies that for any x∈X and any gradient gi, we have the following equation:(6)gi≤L. ## 3.1. Some Conceptions and Assumptions Then-dimension Euclidean space is denoted by ℝn, X is a subset of ℝn, and ⋅ represents the Euclidean norm. Strongly convex functions are defined as follows:Definition 1. 
[30] Let fx be a differentiable function on ℝn, ∇fx be the gradient of function fx at x, and X∈ℝn be a convex subset. Then fx is strictly convex on X if and only if(2)fx>fx0+∇fx0,x−x0,for all x0,x∈X×X.Lemma 1 (see [25]). Functionfx:X⟶ℝ is differentiable on the set X with a diameter D, and α>0 is a constant. For ∀x∈X,∇fx≤L and exp−αfx is concave, then when β≤min1/4LD,α, for any x,y∈X, the following inequation holds:(3)fx≥fy+∇fyTx−y+β2x−yT∇fy∇fyTx−y.Some notations about matrices are given to be used in our proof of the convergence of the algorithm. Denote the space of alln×n matrices by ℝn×n. For a matrix A=aijn×n∈ℝn×n, aij represents the entry of A at the ith row and the jth column. AT=ajin×n is the transpose of A. A denotes the determinant of A, and λi is the ith eigenvalue of the matrix A. Then, the next equations are set up: A=∏i=1nλi, trA=∑i=1naii=∑i=1nλi. In addition, for any vector x,y,z∈ℝn, equation xTAz+yTAz=x+yTAz is set up.Definition 2. [31] Matrix A∈ℝn×n is positive definite, if and only if for any x∈ℝn and x≠0 (0 denotes an n-dimensional vector where all the entries are 0), xTAx>0. ## 3.2. Distributed Online Optimization Problem We consider a multiagent network system with multiple agents, each agenti is associated with a strictly convex function (with bounded first and second derivatives) ft,ix:ℝn⟶ℝ, and the function ft,ix is varying over time. All the agents cooperate to solve the following general convex consensus problem:(4)min∑i=1nft,ix,subject tox∈X.At each roundt=1,…,T, the ith agent is required to generate a decision point xit∈X according to its current local information as well as the information received from its immediate neighbors. Then, the adversary responds to each agent′s decision with a loss function ft,ix:X⟶ℝ and each agent gets the loss ft,ixit. The communication between agents is specified by an undirected graph G=V,E, where V=1,…,n is a vertex set, and E⊂V×V denotes an edge set. Undirected means if i,j∈E then j,i∈E. Each agent i can only communicate directly with its immediate neighbors Ni=j∈V|i,j∈E. The goal of the agents is to seek a sequence of decision points xit∈X,i∈V, so that the regret with respect to each agent i regarding any fixed decision x∗∈X in hindsight(5)RTxit,x=∑t=1T∑i=1nft,ixit−ft,ix∗,is sublinear in T.Throughout this paper, we make the following assumptions:(i) each cost functionft,ix is strictly convex and twice continuous differentiable and L-Lipschitz on the convex set X(ii) X is compact and the Euclidean diameter of X is bounded by D(iii) exp−αftix is concave in the set X for all t and iBy assumption (i), the functionftix is convex in the set X, and with some reasonable assumptions over the domains of the value of α and x, exp−αftix is concave in the set X. In addition, the Lipschitz condition (i) implies that for any x∈X and any gradient gi, we have the following equation:(6)gi≤L. ## 4. Distributed Online Newton Step Algorithm For problem (4), we assume that information can be exchanged among each agent in a timely manner, that is, the network topology graph between n agents is a complete graph. The communication between agents in our algorithm is modeled by a doubly stochastic symmetric P, so that 1>pij>0 only if i,j∈E, else pij=0, and ∑j=1npij=∑j∈Nipij=1 for all i∈V, ∑i=1npij=∑i∈Njpij=1 for all j∈V. ### 4.1. Algorithm The distributed online Newton step algorithm is presented in Algorithm1.Algorithm 1: The Distributed Online Newton Step Algorithm (D-ONS). 
(1) Input: convex setχ, maximum round number T.(2) β=1/2min1/4LD,α.(3) Initialize:xi1∈χ, ∀i∈V.(4) fort=1,⋯⋯,Tdo(5) The adversary revealsft,i,∀i∈V.(6) Compute gradientsgit∈∂ft,ixit, ∀i∈V.(7) ComputeHit=∑r=1tgirgirT+ϵIn, ∀i∈V.(8) for each i∈Vdo(9) zit+1=∑j=1nPijxjt−1/βHit−1git(10) xit+1=∏χHitzit+1(11) end for(12) end forThe projection function used in this algorithm is defined as follows:(7)∏XAy=argminx∈Xy−xTAy−x,where A is a positive definite matrix. ### 4.2. Algorithm Analysis In this algorithm, when a decisionxit is made by the agent i with the current information, the corresponding cost function ft,ix can be obtained. So we can get the gradient git=∇ft,ixit. Construct a symmetric positive definite matrix Hit=∑r=1tgirgirT+ϵIn, then the direction of iteration is constructed by utilizing Hit−1 which always exists to replace the inverse of the Hessian matrix in Newton’s method. Take the linear combination of the current iteration point of agent i and the current iteration point of its neighbor agent as the starting point of the new iteration along with the size 1/β, and the projection operation is used to get the next iteration point xit+1.The calculation ofHit and its inverse Hit−1 in the algorithm can be seen from Step 7, Hit=∑r=1tgirgirT+ϵIn=∑r=1t−1girgirT+ϵIn+gitgitT=Hit−1+gitgitT, which shows that Hit can be computed via using the previous approximation matrix Hit−1 as well as the gradient git at step t. Therefore, we do not have to store all the gradients from the previous t-step iteration, at the same time, as shown by the following equation (32): (8)A+uvT−1=A−1−A−1uvTA−11+vTA−1u.LetHit−1=A,u=v=git, and Hi0=ϵIn, then Hi0−1=1/ϵIn, the inverse of Hit can be got simply. It is the same thing as solving for Hit, we just use the information from the current and the previous step. ## 4.1. Algorithm The distributed online Newton step algorithm is presented in Algorithm1.Algorithm 1: The Distributed Online Newton Step Algorithm (D-ONS). (1) Input: convex setχ, maximum round number T.(2) β=1/2min1/4LD,α.(3) Initialize:xi1∈χ, ∀i∈V.(4) fort=1,⋯⋯,Tdo(5) The adversary revealsft,i,∀i∈V.(6) Compute gradientsgit∈∂ft,ixit, ∀i∈V.(7) ComputeHit=∑r=1tgirgirT+ϵIn, ∀i∈V.(8) for each i∈Vdo(9) zit+1=∑j=1nPijxjt−1/βHit−1git(10) xit+1=∏χHitzit+1(11) end for(12) end forThe projection function used in this algorithm is defined as follows:(7)∏XAy=argminx∈Xy−xTAy−x,where A is a positive definite matrix. ## 4.2. Algorithm Analysis In this algorithm, when a decisionxit is made by the agent i with the current information, the corresponding cost function ft,ix can be obtained. So we can get the gradient git=∇ft,ixit. Construct a symmetric positive definite matrix Hit=∑r=1tgirgirT+ϵIn, then the direction of iteration is constructed by utilizing Hit−1 which always exists to replace the inverse of the Hessian matrix in Newton’s method. Take the linear combination of the current iteration point of agent i and the current iteration point of its neighbor agent as the starting point of the new iteration along with the size 1/β, and the projection operation is used to get the next iteration point xit+1.The calculation ofHit and its inverse Hit−1 in the algorithm can be seen from Step 7, Hit=∑r=1tgirgirT+ϵIn=∑r=1t−1girgirT+ϵIn+gitgitT=Hit−1+gitgitT, which shows that Hit can be computed via using the previous approximation matrix Hit−1 as well as the gradient git at step t. 
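To make this recursion concrete, the following is a minimal NumPy sketch of one D-ONS round; it is not the author's code. It assumes a Euclidean-ball feasible set, uses a plain Euclidean projection in place of the $H_i^t$-weighted projection of Step 10, and the function names and dimensions are illustrative.

```python
import numpy as np

def sherman_morrison_update(H_inv, g):
    """Rank-one inverse update: (H + g g^T)^{-1} computed from H^{-1}, as in equation (8).

    Assumes H_inv is symmetric (it starts as (1/eps) * I and stays symmetric here).
    """
    Hg = H_inv @ g
    return H_inv - np.outer(Hg, Hg) / (1.0 + g @ Hg)

def dons_round(X, grads, P, H_invs, beta, radius):
    """One round of the D-ONS update (Steps 7-10 of Algorithm 1) for every agent.

    X      : (n, d) array of current decisions x_i^t
    grads  : (n, d) array of revealed local gradients g_i^t
    P      : (n, n) doubly stochastic weight matrix
    H_invs : list of (d, d) matrices holding (H_i^{t-1})^{-1} on entry; updated to (H_i^t)^{-1}
    beta   : step-size parameter beta from the algorithm
    radius : radius of the Euclidean ball used here as the feasible set X (an assumption)
    """
    n, d = X.shape
    X_next = np.empty_like(X)
    for i in range(n):
        # Step 7: H_i^t = H_i^{t-1} + g_i^t (g_i^t)^T, maintained via its inverse.
        H_invs[i] = sherman_morrison_update(H_invs[i], grads[i])
        # Step 9: consensus combination of neighbours minus a Newton-like step.
        z = P[i] @ X - (1.0 / beta) * (H_invs[i] @ grads[i])
        # Step 10: projection onto X (Euclidean here; the paper projects in the H_i^t-norm).
        nrm = np.linalg.norm(z)
        X_next[i] = z if nrm <= radius else z * (radius / nrm)
    return X_next

# Initialization consistent with the text: H_i^0 = eps * I, so (H_i^0)^{-1} = (1/eps) * I.
n_agents, d, eps = 4, 3, 0.1
H_invs = [np.eye(d) / eps for _ in range(n_agents)]
```

Because each round touches only the previous inverse and the newest gradient, the per-agent update costs on the order of $d^2$ operations instead of the $d^3$ needed to re-invert $H_i^t$ from scratch, which matches the observation that the inverse of $H_i^t$ can be obtained simply.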
Therefore, we do not have to store all the gradients from the previous t-step iteration, at the same time, as shown by the following equation (32): (8)A+uvT−1=A−1−A−1uvTA−11+vTA−1u.LetHit−1=A,u=v=git, and Hi0=ϵIn, then Hi0−1=1/ϵIn, the inverse of Hit can be got simply. It is the same thing as solving for Hit, we just use the information from the current and the previous step. ## 5. Convergence Analysis Now, the main result of this paper is stated in the following.Theorem 1. Give the sequence ofxit and zit generated by Algorithm 1 for all i∈V, x∗=argminx∈X∑i=1nft,ix, and the regret with respect to agent i′s action is(9)RTx∗,xit=∑t=1Tftixit−ftix∗,≤ϵD2β2+n2βlnTL2ϵ+1+C+C3lnTL^2+ϵL^2+ϵ,where C=C1+C2, C1=βη2n2D21−2lnη​L2/lnη2−1/lnη, C2=n3DLM+m2η/Mmϵ1−η2, C3=n4L2M+m4/8βM2m21−η2L^2, η=max1≤i,j≤npij,andM,m,L^are constants, and L^≤1/t∑r=1tgir2.From Theorem 1, the regret bound of Algorithm1 is sublinear convergence with respect to iterative number T, that is, limT⟶∞RTx∗,xit/T=0. Note that, the regret bound is related to the scale of the network. Specifically, as the network grows in size, the regret bound value also increases. In addition, the value of the regret bound is also influenced by the values of parameter ϵ and the diameter of the convex set X. The value of η indirectly reflects the connectivity of the network implying that the smaller the value of η , the smaller the regret bound of the algorithm.To prove the conclusion of Theorem 1, we first give some lemmas and their proofs.Lemma 2. For any fixedi∈V, let ftix=ftx, then ∇ftixit=∇ftxit=git, and the following bound holds for any j∈V and x∈X(10)∑t=1Tftxit−ftx∗≤∑t=1TgitTxit−x∗−β2∑t=1Txit−x∗TgitgitTxit−x∗,where β=1/2min1/4LD,α.Proof. According to the assumption that the functionftx is strictly convex and continuous differentiable in convex set X, and xit∈X,x∗∈X, by Lemma 1 we can obtain(11)ftxit−ftx∗≤gitTxit−x∗−β2xit−x∗TgitgitTxit−x∗. Summing up overt=1,2,…,T can get the conclusion of Lemma 2. ■ From Lemma2, if the upper bound of the right side of the inequality can be obtained, the upper bound of the left side can be obtained, too. Therefore, we are committed to solving the upper bound of the right side of the above equation.Lemma 3. Letyit=∑j=1npijxjt−xit, and the following bound holds for any j∈V and any x∈X,(12)∑t=1TgitTxit−x∗−β2∑t=1Txit−x∗TgitgitTxit−x∗≤β2ϵD2−∑t=1TgitTyit+β∑t=1TyitTHitxit−x∗+β2∑t=1TyitTHityit+12β∑t=1TgitTHit−1git.Proof. according to Algorithm1, we have the following equation:(13)zit+1=∑j=1npijxjt−1βHit−1git,so(14)zit+1−x∗=∑j=1npijxjt−1βHit−1git−x∗,=xit−x∗+∑j=1npijxjt−xit−1βHit−1git,=xit−x∗+yit−1βHit−1git,then we can obtain the following next equation:(15)zit+1−x∗THitzit+1−x∗,=xit−x∗THitxit−x∗+yit−1βHit−1gitTHitxit−x∗,+xit−x∗THityit−1βHit−1git,+yit−1βHit−1gitTHityit−1βHit−1git,=xit−x∗THitxit−x∗+2yitTHitxit−x∗,−2βgitTxit−x∗−2βgitTyit+yitTHityit+1β2gitTHit−1git. Sincexit+1 is the projection of zit+1 in the norm induced by Hit, it is a well known fact that (see [25] section 3.5 Lemma 3.9)(16)xit+1−x∗THitxit+1−x∗≤zit+1−x∗THitzit+1−x∗. This fact together with (15) gives(17)xit+1−x∗THitxit+1−x∗≤xit−x∗THitxit−x∗+2yitTHitxit−x∗−2βgitTxit−x∗−2βgitTyit+yitTHityit+1β2gitTHit−1git. 
Summing both sides of (17) from t=1 to T, we obtain the following equation:(18)∑t=1Txit+1−x∗THitxit+1−x∗≤∑t=1Txit−x∗THitxit−x∗−2β∑t=1TgitTyit−2β∑t=1TgitTxit−x∗+2∑t=1TyitTHitxit−x∗+∑t=1TyitTHityit+1β2∑t=1TgitTHit−1git,that is(19)−∑t=1Txit−x∗THit+1−Hitxit−x∗≤xi1−x∗THi1−gi1gi1Txi1−x∗−2β∑t=1TgitTyit−2β∑t=1TgitTxit−x∗−xiT+1−x∗THiTxiT+1−x∗+∑t=1TyitTHityit+2∑t=1TyitTHitxit−x∗+1β2∑t=1TgitTHit−1git. According to Algorithm1, Hit+1−Hit=gitgitT, then(20)∑t=1Txit−x∗THit+1−Hitxit−x∗=∑t=1Txit−x∗TgitgitTxit−x∗,thus we obtain(21)∑t=1TgitTxit−x∗−β2∑t=1Txit−x∗TgitgitTxit−x∗≤β2xi1−x∗THi1−gi1gi1Txi1−x∗−∑t=1TgitTyit+β∑t=1TyitTHitxit−x∗−β2xiT+1−x∗THiTxiT+1−x∗+β2∑t=1TyitTHityit+12β∑t=1TgitTHit−1git. Due toHi1−gi1gi1T=ϵIn, then(22)xi1−x∗THi1−gi1gi1Txi1−x∗,=ϵxi1−x∗Txi1−x∗=ϵxi1−x∗2≤ϵD2. And, sinceHiT is positive definite, and β>0, so −β/2xiT+1−x∗THiTxiT+1−x∗≤0. Combining (21) and (22), we can state(23)∑t=1TgitTxit−x∗−β2∑t=1Txit−x∗TgitgitTxit−x∗≤β2ϵD2−∑t=1TyitTgit+β∑t=1TyitTHitxit−x∗+β2∑t=1TyitTHityit+12β∑t=1TgitTHit−1git. Thus, the proof of Lemma3 is completed. ■ Next, we consider the last term of (23).Lemma 4. For anyi∈V, we can get the following bound holding:(24)12β∑t=1TgitTHit−1git≤n2βlogTL2ϵ+1.Proof. Note that,(25)gitTHit−1git=Hit−1•gitgitT=Hit−1•Hit−Hit−1,where for matrices A,B∈ℝn×n, we denote by A•B=∑i=1n∑j=1naijbij the inner product of these matrices as vectors in ℝn2. For real numbers a,b∈ℝ+ and the logarithm function y=lnx, the Taylor expansion of y in a is y=logx=loga+1/ax−a+Rnx. So logb≤loga+1/ab−a, implying a−1a−b≤loga/b. An analogous fact holds for the positive definite matrices, i.e., A−1•A−B≤logA/B, where A,B denote the determinant of the matrix A,B (see the detailed proof in [25]). This fact gives us (for convenience we denote Hi0=ϵIn)(26)∑t=1TgitTHit−1git=∑t=1THit−1•Hit−Hit−1,≤∑t=1TlogHitHit−1=logHiTHi0. SinceHiT=∑t=1TgitgitT+ϵIn and git≤L, from the properties of matrices and determinants, we know that the largest eigenvalue of HiT is TL2+ϵ at most. Hence HiT≤TL2+ϵn and Hi0=ϵn, then(27)∑t=1TgitTHit−1git≤logHiTHi0≤nlogTL2ϵ+1. Combining the above factors, we obtain the following equation:(28)12β∑t=1TgitTHit−1git≤n2βlogTL2ϵ+1. The proof of Lemma4 is completed. ■ According to Algorithm1, zit+1=∑j=1npijxjt−1/βHit−1git, where −Hit−1git is the direction of iteration. Using the knowledge of matrix analysis, we have the following conclusions.Lemma 5. For anyi∈V,1≤t≤T,(29)Hit−1git≤M+m2n2L4Mm∑r=1tgir2+ϵ,where m=minλ1,λ2,…,λn, M=maxλ1,λ2,…,λn, 0<m≤M, and λii=1,…,n is the i th eigenvalue of Hit.This conclusion gives us that when the number of iterations increases,Hit−1git converges to zero, which ensures the consistency of the algorithm. The detailed proof can be seen in Appendix A.Now, we consider the norm of vectoryit, zit+1−x∗ and get the following inequation.Lemma 6. For any1≤i≤n,1≤t≤T, let η=max1≤i,j≤npij, 0<η<1, then(30)zit+1−x∗≤nDηt+M+m2n2L4βMm∑r=1tgir2+ϵ1−η,and(31)yit≤2nDηt+M+m2n2L4βMm∑r=1tgir2+ϵ1−η,where n is the size of the network, T is the total number of iterations. The specific proof is represented in Appendix B.Next, we turn our attention to the bound of the following termβ∑t=1TyitTHitxit−x∗−∑t=1TyitTgit+β/2∑t=1TyitTHityit. By combining the knowledge of vectors and matrices, we get Lemma 7.Lemma 7. For anyi∈V, the following inequality holds(32)β∑t=1TyitTHitxit−x∗−∑t=1TyitTgitβ2∑t=1TyitTHityit≤2β∑t=1T∑r=1tgir2+ϵnD+M+m2n2L4βMm∑r=1tgir2+ϵ1−η2,≤C+C3lnTL^2+ϵL^2+ϵ,where C1=βη2n2D21−2lnηL2/lnη2−1/lnη, C2=n3DLM+m2η/Mmϵ1−η2, C3=n4L2M+m4/8βM2m21−η2L^2,andC=C1+C2.Proof. 
According to Algorithm1, we can state(33)zit+1=∑j=1npijxjt−1βHit−1git,where β is a positive constant, and Hit−1 is the inverse of matrix Hit. We obtain the following equation:(34)git=βHit∑j=1npijxjt−zit+1. Now, by multiplying both sides of this equation by the vectoryitT, we can obtain the following equation:(35)yitTgit=βyitTHit∑j=1npijxjt−zit+1,then the left of (32) can be written as follows:(36)∑t=1TyitTHitxit−x∗+β2∑t=1TyitTHityit−∑t=1TyitTgit=β∑t=1TyitTHitxit−x∗+β2∑t=1TyitTHityit−β∑t=1TyitTHit∑j=1npijxjt−zit+1. The matrixHit is symmetric and positive definite, which means that yitTHityit≥0, therefore we can obtain the following equation:(37)β∑t=1TyitTHitxit−x∗+β2∑t=1TyitTHityit−β∑t=1T∑j=1npijxjt−zit+1THityit≤β∑t=1TyitTHitxit−x∗+β∑t=1TyitTHityit−β∑t=1TyitTHit∑j=1npijxjt−zit+1≤β∑t=1TyitTHitzit+1−x∗≤β∑t=1TyitTHitzit+1−x∗≤β∑t=1TyitHit2zit+1−x∗Hit2. Here, we apply the Cauchy–Schwarz inequation:xTAy2≤xA2yA2, where A is a n×n Hermite matrix and A is positive semidefinite. Next, we consider the super bound ofyitHit2 and zit+1−x∗Hit2. According to the Step 7 of Algorithm 1, Hit=∑r=1tgirgirT+ϵIn=Hit−1+gitgitT, so(38)yitHit2=yitTHityit=yitTHit−1+gitgitTyit,=yitHit−1+yitTgit2≤yitHit−1+yit2git2,≤yitHi0+yit2gi12+⋯+yit2git2,≤ϵyit2+∑r=1tyit2gir2≤ϵ+∑r=1tgir2yit2. Similarly, we have the following equation:(39)zit+1−x∗Hit2≤ϵ+∑r=1tgir2zit+1−x∗2. Combining the results of Lemmas5 and 6, we have the following equation:(40)yitHit2zit+1−x∗Hit2≤4ϵ+∑r=1tgir22nDηt+M+m2n2L4βMm∑r=1tgir2+ϵ1−η4,then(41)β∑t=1TyitTHitxit−x∗+β2∑t=1TyitTHityit−β∑t=1T∑j=1npijxjt−zit+1THityit≤β∑t=1TyitTHitzit+1−x∗≤β∑t=1TyitHit2zit+1−x∗Hit2,≤2β∑t=1Tϵ+∑r=1tgir2nDηt+M+m2n2L4βMm∑r=1tgir2+ϵ1−η2. Thus, we complete the proof of Lemma7. ■ Putting all these together, Theorem1 can be proved as follows.Proof of Theorem 1. According to the assumptions,fx is strictly convex, and the function exp−αfx is concave in X when the value of α is sufficiently small. Setting β=min1/4LD,α, combined with axioms 2–7, we can obtain the regret bound(42)RTx∗,xit=∑t=1Tftixitftix∗≤ϵD2β2+n2βlogTL2ϵ+1+C+C3logTL^2+ϵL^2+ϵ. The values of the parameters in equation (15) are the same as Theorem 1. ■ ## 6. Numerical Experiment In order to verify the performance of our proposed algorithm, we conducted a numerical experiment on an online estimation over a distributed sensor network which is mentioned in reference [9]. In a distributed sensor network, there are n sensors (See Figure 1 in [9]). Each sensor is connected to one or more sensors. It is assumed that each sensor is connected to a processing unit. Finally, the processing units are integrated to obtain the best evaluation of the environment. The specific model is as follows: given a closed convex set X=x∈ℝn|x2≤xmax, the observation vector yt,i:ℝn⟶ℝd represents the i th sensor measurement at time t which is uncertain and time-varying due to the sensor’s susceptibility to unknown environmental factors such as jamming. The sensor is assumed (not necessarily accurately) to have a linear model of the form hix=Hix, where Hi∈ℝn×d is the observation matrix of sensor i and Hi1≤hmax for all i. The objective is to find the argument x^∈X that minimizes the cost function fx^=1/n∑i=1nft,ix^, namely,(43)min1n∑i=1nft,ix^,subject tox^∈X,where the cost function associated with sensor i is ft,ix^=1/2yt,ix−Hix^22. Since the observed value yt,i changes with time t, only when we calculate the value of x^it can we get the local error of the i th sensor at time t. 
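To make the sensor model concrete, the snippet below is a minimal NumPy sketch of the local losses in problem (43). The dimensions, the Gaussian observation matrices, and the helper names (observe, local_loss, local_grad, project) are illustrative assumptions rather than the paper's experimental settings; the resulting gradient is what a D-ONS round from Section 4 would consume.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes only: n_sensors agents, state x in R^n, observations in R^d.
n_sensors, n, d, x_max = 10, 5, 3, 10.0

x_env = rng.normal(size=n)                                # unknown environmental state
H = [rng.normal(size=(d, n)) for _ in range(n_sensors)]   # local observation matrices H_i

def observe(i):
    """Noisy, time-varying observation y_{t,i} = H_i x + v_{t,i}, with v uniform on [-1/4, 1/4]."""
    return H[i] @ x_env + rng.uniform(-0.25, 0.25, size=d)

def local_loss(i, y, x_hat):
    """f_{t,i}(x_hat) = 0.5 * || y_{t,i} - H_i x_hat ||^2."""
    r = y - H[i] @ x_hat
    return 0.5 * float(r @ r)

def local_grad(i, y, x_hat):
    """Gradient of the local loss with respect to x_hat."""
    return H[i].T @ (H[i] @ x_hat - y)

def project(x_hat):
    """Projection onto the feasible set X = {x : ||x||_2 <= x_max}."""
    nrm = np.linalg.norm(x_hat)
    return x_hat if nrm <= x_max else x_hat * (x_max / nrm)

# One online step for sensor 0: commit to x_hat first, then the loss and gradient are revealed.
x_hat = project(np.zeros(n))
y = observe(0)
loss_t = local_loss(0, y, x_hat)
g_t = local_grad(0, y, x_hat)   # fed into the D-ONS round sketched in Section 4
```

Because $y_{t,i}$ is redrawn with fresh noise at every round, the local loss changes from step to step, and summing the per-round losses of all sensors against the best fixed point in hindsight gives the regret tracked in the experiments.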
In other words, due to modeling errors and uncertainties in the environment, the local error functions are allowed to change over time.Figure 1 Comparisonn = 1, n = 10, n = 100.In an offline setting, each sensori has a noisy observation yt,i=Hix+vt,i=Hix+v1,i for all t=1,2,…,T, where vt,i is generally assumed to be (independent) white noise at time t. In this case, the centralized optimal estimate for problem (37) is(44)x^∗=∑i=1nHiTHi−1∑i=1nHiTy1,ix.While in an online setting, the white noisevt,i varies with time t (see [9]). We assume vt,i∼U−1/4,1/4 (The white noise vt,i is uniformly distributed on the interval −1/4,1/4). In the proposed distributed online algorithm, each sensor i calculate x^it∈X based on the local information available to it and then an “oracle” reveals the cost ft,ix^ at time step t.The performance of the proposed algorithm is discussed based on the following aspects: ### 6.1. The Analysis of the Algorithm Performance The numerical experiments consist of two parts: the impact of network size on the performance of the D-ONS and the effect of network connectivity on the effectiveness of the algorithm iterations.We carried out numerical experiments atn = 1, n = 2 and n = 100, respectively. Figure 1 depicts the convergence curves of the algorithm for different network sizes. According to Figure 1, it is obvious that the average regret decreases fast and the algorithm can converge on different scaled networks as the number of the agent in the network increase. Especially, when n=1, the problem is equivalent to a centralized optimization problem, and our distributed optimization algorithm can reach the same effect as the centralized algorithm.According to Theorem1, the effectiveness of the algorithm is directly affected by the connectivity of the network, so we verify the algorithm under different network topology. (i) Complete graph. All the agents are connected to each other. (ii) Cycle graph. Each agent is only connected to its two immediate neighbors. (iii) Watts–Strogatz. The connectivity of random graphs is related to the average degree and connection probability. Here, let the average degree of the graph is 3 and the probability of connection is 0.6. As shown in Figure 2, D-ONS can lead to a significantly faster convergence on a complete graph than a cycle graph and has the similar convergence on Watts–Strogatz. The experimental result is consistent with the theoretical analysis results in this paper.Figure 2 Comparison under different topology. ### 6.2. Performance Comparison of Algorithms To verify the performance of the proposed algorithm, we compared the proposed algorithm with the class algorithms D-OGD in [10], D-ODA in [9] and the algorithm in [19]. The parameters in these algorithms are based on their theoretical proofs. The network topology relationship among agents is complete, and the size of the network is the same n=10. As shown in Figure 3, the presented algorithm D-ONS displays better performance with faster convergence and higher accuracy than D-ODA, D-OGD, and the algorithm in [19].Figure 3 The convergence curves of compared algorithms. ## 6.1. The Analysis of the Algorithm Performance The numerical experiments consist of two parts: the impact of network size on the performance of the D-ONS and the effect of network connectivity on the effectiveness of the algorithm iterations.We carried out numerical experiments atn = 1, n = 2 and n = 100, respectively. Figure 1 depicts the convergence curves of the algorithm for different network sizes. 
## 7. Conclusion and Discussion

A distributed online optimization algorithm based on the Newton step is proposed for a multiagent distributed online optimization problem, where the local objective function is strictly convex and twice continuously differentiable. In each iteration, the gradient at the current iteration point is used to construct a positive definite matrix, and the direction of the next iteration is then obtained by using the inverse of this positive definite matrix in place of the inverse of the Hessian matrix in Newton’s method. Through theoretical analysis, the regret bound of the algorithm is obtained, and the regret bound is sublinear with respect to the number of iterations. Numerical examples also demonstrate the feasibility and effectiveness of the proposed algorithm. Simulation results indicate a significant convergence rate improvement of our algorithm relative to existing distributed online algorithms based on first-order methods.

---

*Source: 1007032-2022-10-28.xml*
1007032-2022-10-28_1007032-2022-10-28.md
39,079
A Distributed Online Newton Step Algorithm for Multi-Agent Systems
Xiaofei Chu
Mathematical Problems in Engineering (2022)
Engineering & Technology
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2022/1007032
1007032-2022-10-28.xml
2022
# The Emerging Roles of Tripartite Motif Proteins (TRIMs) in Acute Lung Injury **Authors:** Yingjie Huang; Yue Xiao; Xuekang Zhang; Xuan Huang; Yong Li **Journal:** Journal of Immunology Research (2021) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2021/1007126 --- ## Abstract Acute lung injury (ALI) is an inflammatory disorder of the lung that causes high mortality and lacks any pharmacological intervention. Ubiquitination plays a critical role in the pathogenesis of ALI as it regulates the alveolocapillary barrier and the inflammatory response. Tripartite motif (TRIM) proteins are one of the subfamilies of the RING-type E3 ubiquitin ligases, which contains more than 80 distinct members in humans involved in a broad range of biological processes including antivirus innate immunity, development, and tumorigenesis. Recently, some studies have shown that several members of TRIM family proteins play important regulatory roles in inflammation and ALI. Herein, we integrate emerging evidence regarding the roles of TRIMs in ALI. Articles were selected from the searches of PubMed database that had the terms “acute lung injury,” “ubiquitin ligases,” “tripartite motif protein,” “inflammation,” and “ubiquitination” using both MeSH terms and keywords. Better understanding of these mechanisms may ultimately lead to novel therapeutic approaches by targeting TRIMs for ALI treatment. --- ## Body ## 1. Introduction Acute lung injury (ALI) is an acute hypoxic respiratory insufficiency caused by various direct (pulmonary) or indirect (extrapulmonary) injuries including sepsis syndrome, ischemia-reperfusion, pneumonia, and mechanical ventilation, which leads to the destruction of the barrier of alveolar epithelial cells and capillary endothelial cells, resulting in overinfiltration of inflammatory cells and diffuse pulmonary interstitial and alveolar edema [1]. In 1994, the diagnostic criteria of ALI were put forward by the American–European Consensus Conference: an acute onset; oxygenation index PaO2/FiO2>200mmHgand<300mmHg (1mmHg=0.133kPa); patchy shadows in both lungs on the chest X-ray; pulmonaryarterywedgepressure≤18mmHg or no clinical evidence of left atrial hypertension; etc. [2]. Due to the lack of drug intervention, ALI remains a significant cause of morbidity and mortality in the critically ill patient population [2]. More severe situations with PaO2/FiO2≤200mmHg, ALI turns to the worse stage acute respiratory distress syndrome (ARDS) [3].Pathological hallmarks of ALI are injury to the vascular endothelium/alveolar epithelium, activation of innate immune response, and enhanced coagulation [4]. Exposure to several risk factors (i.e., pneumonia, sepsis, and shock) firstly leads to endothelial and/or epithelial monolayers damage, increasing permeability and impairing their barrier function [4]. A large amount of protein-rich fluid and inflammatory cells leaks into the alveoli and lung interstitium, resulting in pulmonary edema, neutrophil infiltration, cytokine and reactive oxygen species-mediated inflammation coagulation disorders, and pulmonary fibrosis [5]. Notedly, mast cells (MCs) and polymorphonuclear neutrophils (PMNs) are the main inflammatory cells, which play a critical role in the pathogenesis of ALI [6, 7]. 
MC activation induces tryptase release to trigger ALI [8], which is supported by the finding that MCs “stabilizers” can reduce ALI severity [9].At present, there is still no effective pharmaceutical intervention for ALI; mechanical ventilation is the main approach to prevent respiratory failure and, combined with intensive care support, could improve health condition [10]. However, mechanical ventilation can exacerbate preexisting lung injury or even induce de novo injury in healthy lungs, which is called ventilator-induced lung injury (VILI) [11, 12]. In recent years, researchers have paid close attention to the identification of new routes at cellular level which could provide a better understanding of the physiopathology of ALI, but the precise cellular and molecular underlying mechanisms are still to be fully elucidated. Chen et al. recently summarized and introduced the role of lncRNAs in ALI in detail [13]. However, emerging evidence points out to the ubiquitination which functions as an important regulator in the pathobiology of ALI since it regulates the proteins evolved in the modulation of the alveolocapillary barrier and inflammatory response, opening a highly promising research field for the treatment of lung diseases [14]. ## 2. Ubiquitination in ALI Ubiquitination is the major protein posttranslational modification in cells by which ubiquitin (Ub) covalently attached the target protein for degradation through the 26S proteasome or lysosome or nonproteolytic modifications [15]. It plays crucial roles in diverse biological processes such as DNA repair, cell proliferation, signal transduction, apoptosis, and inflammation, whose dysregulation leads to many diseases [16]. However, bacterial infection or inflammatory stimulation often disrupts the process of protein ubiquitination. We and some other investigators have shown that expressions of some E3 Ub ligases were altered by infection or inflammation thus affecting the levels and functions of their target proteins. Thus, uncovering new E3 Ub ligase-related molecular mechanisms and signaling pathways will provide a unique opportunity for the potential design of new strategies to alleviate ALI. ### 2.1. The Ubiquitin System As a multicomponent regulatory system, the Ub system is composed of three types of Ub enzymes, which is a highly controlled mechanism of protein degradation and turnover in cells, starting with approximately 8 kDa monomeric Ub [17]. Ub is activated by a Ub-activating enzyme (E1) in an adenosine-triphosphate- (ATP-) dependent manner and then conjugated by a ubiquitin conjugating enzyme (E2), finally resulting in transfer of Ub to an internal lysine of the substrate protein by an E3 ligase [18]. To date, there are only two E1 enzymes (UBA1 and UBA6) [19], around 40 E2 enzymes [20], but more than 600 E3 ligases [21] existed in the human genome. Although the addition of Ub moieties to specific residues on a substrate protein is partly because of E2/E3 enzymes pairings, E3 ligases were considered the predominant source of substrate specificity [22]. ### 2.2. E3 Ligases As a direct mediator of substrate tagging and ubiquitin chain elongation, E3 ligase is considered an essential component in the ubiquitin system that determines substrate specificity. In human genome, more than 600 putative E3 ligases have been identified [21]. 
There are three major kinds of E3 ligases divided by the molecular structure and functional mechanism, including HECT (homologous to the E6-associated protein carboxyl-terminus) domain family, RING (really interesting new gene) finger family, and the RBR (RING in-between-RING) E3 ubiquitin ligases [23]. HECT E3 ligase contains an N-terminal lobe which is responsible for E2 binding and substrate recognition, and a C-terminal HECT domain containing a catalytic cysteine that receives and passes an ubiquitin molecule from the E2 enzyme before conjugating ubiquitin to a substrate protein [24]. RING finger E3 ligases constitute the largest family of E3 ligases which are characterized based on the presence of a RING domain [25]. Interestingly, the canonical RING domain is a type of zinc finger with a RING fold structure while another type is the U-box domain which possesses the same RING fold but without zinc [26]. Unlike HECT E3 ligase, RING finger E3 ligase mediates the direct transfer of ubiquitin to a substrate protein by binding to a ubiquitin-charging E2 enzyme as a scaffold [27]. Notably, RING E3 ligases function either as monomers (e.g., c-CBL, E4B), homodimers (e.g., cIAP, CHIP), heterodimers (e.g., Mdm2-MdmX), or large multisubunit complexes, such as the Cullin-RING ligases (CRLs), which make up a distinct subtype characterized by their common Cullin scaffold protein [23]. RBR E3 ligases contain two RING domains (RING1 and RING2) with an in-between-RING (IBR) domain and share the common features of both HECT and RING finger E3 ligases which function as a hybrid of these two types of E3 ligases [28]. Specifically, the RING1 domain binds to ubiquitin-loaded E2 and transfers ubiquitin onto the RING2 domain at a catalytic cysteine residue before conjugation to the substrate protein [29]. ### 2.3. Role of E3 Ligases in ALI Although ubiquitination has been reported to play a pivotal role in multiple biological functions, its function in ALI remains poorly understood. Recently, the key regulative role of ubiquitination in ALI has been mentioned increasingly [14]. Of them, most studies have been focused on the E3 ubiquitin ligases and the conventional K48-ubiquitination which leads to the substrate proteins degradation via the 26S proteasome [30]. Accumulating evidence has demonstrated that E3 ligase plays a critical role in the pathobiology of ALI since it modulates critical proteins involved in the alveolocapillary barrier and the inflammatory response [14].Tight junctions form a highly selective diffusion barrier between endothelial cells and epithelial cells by preventing most dissolved molecules and ions from passing freely through the paracellular pathway [31]. The function impairment of tight junction is a sign of ALI [32]. E3 ligase Itch, a member of the HECT Ub ligases, could directly interact with and degrade the tight junction-specific protein occludin [33] via ubiquitination. E-cadherin, a well-studied member of the classical cadherin family, is a central component in the cell-cell adhesion junction and plays a critical role in maintaining cell polarity and the integrity of epithelial cells [34]. Dysfunction of E-cadherin contributes to the pathogenesis of ALI [35]. A RING finger E3 ligase Hakai induces E2-dependent ubiquitination and endocytosis of E-cadherin complex in epithelial cells [36]. Recently, Dong et al. 
found that the HECT E3 ligase Smurf2 induced μ-opioid receptor 1 (MOR1) degradation in the ubiquitin-proteasome system in lung epithelial cells, and MOR1 has a potential effect in lung repair and remodeling after ALI [37]. E3 ligase Cblb inhibits the MyD88-dependent Toll-like receptor 4 (TLR4) signaling and attenuates acute lung inflammation induced by polymicrobial sepsis [38]. The ST2L receptor for interleukin 33 (IL-33) mediates pulmonary inflammation during ALI and is bound and ubiquitinated by FBXL19, a member of the Skp1-Cullin-F-box family of E3 ubiquitin ligases [39]. In addition, E3 ligase FBXO3 targets the TRAF inhibitor FBXL2 for destabilization and potently stimulates cytokine release, leading to changes in lung permeability, alveolar edema, and ALI [40]. FBXO17 has been described as an E3 ligase that recognizes and mediates the ubiquitination and degradation of GSK3β to reduce inflammatory responses in lung epithelial cells after LPS injury [41]. Most recently, Lear et al. reported that E3 ligase KIAA0317 targets SOCS2 for ubiquitination and degradation by the proteasome and exacerbates pulmonary inflammation [42]. These studies have proved that the ubiquitin-proteasome system, especially E3 ligases, is closely related to the pathogenesis of lung injury.
## 3. TRIMs in ALI

TRIM proteins are regarded as a subfamily of the RING finger E3 ligases, with more than 80 distinct members in humans [43]. TRIMs are composed of three conserved zinc-binding domains: an N-terminal RING domain, one or two B-boxes, and a central coiled-coil domain (CCD) [44]. We recently found that TRIM65 selectively targeted vascular cell adhesion molecule 1 (VCAM-1) and promoted its ubiquitination and degradation, by which it critically controlled the duration and magnitude of pulmonary inflammation in ALI [45]. Particularly, Whitson and his colleagues reported that TRIM72 (also known as MG53) could function as a novel therapeutic protein to treat ALI [46]. Here, we discuss our current understanding of TRIMs as E3 ligases that execute their effector functions in ALI (Table 1).

Table 1: Role of TRIMs in acute lung inflammation.

| TRIMs | Models | Cell types | Mechanisms | Ref. No. |
|---|---|---|---|---|
| TRIM8 | LPS-induced AHI | Human liver cells | LINC00472/miR-373-3p/TRIM8 axis | [51] |
| TRIM8 | LPS-induced ALI | Lung epithelial cells | p-AMPKα/NF-κB/Nrf2/ROS/HO-1 axis | [52] |
| TRIM14 | LPS-induced ALI | Human vascular endothelial cells | NEMO/TAK1/NF-κB/TRIM14 pathway | [63] |
| TRIM21 | LPS-induced ALI | Lung microvascular endothelial cells | NF-κB signaling | [79] |
| TRIM65 | ALI | Human vascular endothelial cells | VCAM-1 ubiquitination and degradation | [45] |
| TRIM72 | Ischaemia-reperfusion and overventilation-induced ALI | Lung epithelial cells | Cell membrane repair | [97] |
| TRIM72 | Influenza virus-induced ALI | Macrophages; lung tissue | NF-κB signaling; inhibition of pyroptosis | [98, 106] |
| TRIM72 | Hemorrhagic shock/contusive ALI | Human bronchial epithelial cells | Cell membrane repair | [46] |

### 3.1. TRIM8

TRIM8, as a member of the TRIM family, has the common structural feature of a typical RBCC motif as well as a monopartite nuclear localization signal (NLS), which allows it to shuttle into and function in the nucleus [47]. It has been reported that TRIM8 can regulate NF-κB signaling both in the nucleus and in the cytoplasm: TRIM8 inhibits PIAS3-mediated negative regulation of p65 to enhance NF-κB activity in the nucleus [48] and can also positively regulate the NF-κB pathway through K63-linked polyubiquitination of the cytoplasmic protein TAK1 [49]. TRIM8 is ubiquitously expressed in human and mouse tissues, with higher expression in the central nervous system, kidney, and lens and lower expression in the digestive tract [44]. TRIM8 plays a key role in the immune response and participates in various fundamental biological processes such as cell survival, apoptosis, autophagy, differentiation, inflammation, and carcinogenesis [50]. Recently, studies have revealed that TRIM8 is involved in the regulation of sepsis and ALI. TRIM8, a direct target of miR-373-3p, was significantly upregulated in lipopolysaccharide (LPS) sepsis-induced acute hepatic injury (AHI) [51]. Moreover, inhibition of TRIM8 by downregulation of the long noncoding RNA (lncRNA) LINC00472, which serves as a sponge for miR-373-3p and negatively regulates its expression, could reduce the sepsis-induced expression of major proinflammatory cytokines such as IL-6, IL-10, and TNF-α [51]. Xiaoli et al. found that TRIM8 was increased in a time-dependent manner during LPS-induced ALI, promoting the inflammatory response and ROS generation via inactivation of p-AMPKα.
In addition, suppression of TRIM8 markedly downregulated mRNA levels of interleukin-1β (IL-1β), IL-6, and tumor necrosis factor-α (TNF-α) in lung epithelial cells, mainly through blocking the NF-κB signaling pathway, and alleviated oxidative stress by regulating Nrf2 signaling and heme oxygenase-1 (HO-1) expression [52]. Although TRIM8 has been shown to play an important role in acute lung injury, its precise regulatory mechanisms, such as whether they depend on its E3 ubiquitin ligase activity and which proteins it specifically targets, remain to be clarified.

### 3.2. TRIM14

TRIM14 was originally known as KIAA0129 [53], and its overexpression was first observed in non-Hodgkin's lymphomas from humans infected with human immunodeficiency virus (HIV) and from simians infected with simian immunodeficiency virus (SIV) [54, 55]. TRIM14 is a noncanonical member of the TRIM family, since it lacks the N-terminal RING domain of the typical RBCC motif that confers E3 ubiquitin ligase activity [56]. Studies have shown that TRIM14 performs various functions through partners that directly interact with its PRYSPRY domain [57]. Interestingly, TRIM14 was reported to be an important mediator of antiviral immunity in both DNA virus and double-stranded RNA virus infections [58, 59]. Furthermore, several groups found that TRIM14 may be involved in tumorigenesis [60–64]. Recently, we found that TRIM14 is overexpressed in human vascular endothelial cells (ECs) and markedly induced by inflammatory stimuli such as LPS [65]. TRIM14 is a new positive regulator of endothelial activation that acts via the NF-κB signaling pathway, and NF-κB in turn can directly bind to the promoter of the TRIM14 gene and control its transcription [65]. Zhou et al. revealed that TRIM14 undergoes Lys-63-linked autopolyubiquitination at Lys-365 and serves as a platform that recruits NEMO to the mitochondrial antiviral signaling (MAVS) complex, leading to the activation of interferon regulatory factor 3 (IRF3) and NF-κB signaling in human lung epithelial cells, which boosts the antiviral innate immune response [66]. TRIM14 can also recruit USP14 to cleave the K63-linked ubiquitin chains at lysines 332/338/341 of p100/p52, hinder recognition by the autophagy receptor p62, and inhibit the autophagic degradation of p100/p52, thus promoting noncanonical activation of NF-κB in vivo and in vitro [67]. Considering that endothelial inflammation and dysfunction play a prominent role in the development of ALI and that NF-κB is a central transcription factor in ALI, TRIM14 may be involved in the pathological process of ALI, which warrants further study.

### 3.3. TRIM21

TRIM21, also known as Ro52, has a typical RBCC motif and E3 ligase activity [68]. It is broadly expressed in most human tissues and cells and predominantly expressed in hematopoietic cells and in endothelial and epithelial cells [69]. TRIM21 was identified as a major autoantigen in autoimmune diseases including Sjögren's syndrome, systemic lupus erythematosus (SLE), and rheumatoid arthritis [70–72]. Later studies revealed that TRIM21 is a highly conserved cytosolic IgG receptor with high affinity and specificity [73, 74], which can be induced by interferon to exert antiviral effects [75]. TRIM21 serves as a multifaceted regulator in viral immunity: it can not only promote the production of type I interferon [76] and trigger an innate immune response via RIG-I and cGAS sensing [77] but also negatively regulate innate immunity by targeting and degrading the viral DNA sensor DEAD (Asp-Glu-Ala-Asp) box polypeptide 41 (DDX41) [78].
The biological functions and applications of TRIM21 in antiviral immunity are described in detail in other reviews [79]. Using TRIM21-deficient mice, Yoshimi and colleagues found that TRIM21 is a negative regulator of NF-κB-dependent proinflammatory cytokine production in fibroblasts after stimulation with TLR ligands (poly(I:C), CpG, and LPS) [80]. In addition, TRIM21 deletion can lead to enhanced production of proinflammatory cytokines and systemic autoimmunity through the IL-23-Th17 pathway [81]. Recently, Li et al. reported that TRIM21 exhibits an anti-inflammatory property against LPS-induced lung endothelial dysfunction and monocyte adhesion to endothelial cells [82]. TRIM21 can be monoubiquitylated and degraded in lysosomes in response to LPS, which may contribute to the pathogenesis of ALI [52]. TRIM21 may therefore serve as a therapeutic target for sepsis-induced endothelial dysfunction, such as in acute lung injury [83]. However, whether ubiquitination of TRIM21 depends on its phosphorylation, and the specific phosphorylation and ubiquitination sites involved, remain to be clarified.

### 3.4. TRIM65

Human TRIM65 is a 517-amino acid protein containing an N-terminal RING domain, a B-box, a coiled-coil domain, and a SPRY domain; it was first known as a gene associated with white matter lesions [84, 85]. Using a systematic discovery-type proteomic analysis, Li et al. found that TRIM65 can negatively regulate miRNA-mediated inhibition of mRNA translation through ubiquitination and subsequent degradation of trinucleotide repeat-containing 6 (TNRC6) proteins [86, 87]. Like other TRIMs, TRIM65 also participates in the antiviral innate immune response by ubiquitinating MDA5 [88, 89]. Over the years, several reports have suggested that TRIM65 acts as a ubiquitin E3 ligase, targeting p53, ANXA2, Axin1, and ARHGAP35 to regulate carcinogenesis [90–93]. Most recently, Liu et al. published a review of TRIM65 in white matter lesions, innate immunity, and tumors [94]. We recently found that TRIM65 may control the magnitude and duration of LPS-induced lung inflammation and injury [45]. TRIM65-deficient (TRIM65−/−) mice are more sensitive to LPS-induced death because of sustained and severe pulmonary inflammation. Further studies showed that monocyte/macrophage numbers were higher in the bronchoalveolar lavage (BAL) fluid from TRIM65−/− mice; mechanistically, TRIM65 selectively targets vascular cell adhesion molecule 1 (VCAM-1) and directly induces its ubiquitination and degradation in endothelial cells. It is worth noting that TRIM65 does not affect the MAPK and NF-κB signaling pathways in ALI, although some studies have revealed that TRIM65 can activate the Erk1/2 pathway [95, 96], which suggests that TRIM65 has diverse functions in different cells and under distinct pathological conditions. Furthermore, TRIM65 is enriched in endothelial cells and declines at the early stage of endothelial activation; however, the mechanisms that precisely regulate TRIM65 levels in endothelial inflammation remain unknown. Further studies are necessary to understand the regulatory mechanisms that control TRIM65 expression.

### 3.5. TRIM72

TRIM72 (also known as MG53) is composed of the typical TRIM family RBCC structure and a PRY-SPRY subdomain. It is mainly expressed in cardiac and skeletal muscle, with detectable levels in renal and alveolar epithelial cells, monocytes, and macrophages [97–100]. Cai et al. first revealed that TRIM72 acts as a key component of the sarcolemmal (plasma) membrane repair machinery [101].
Upon membrane injury, TRIM72 oligomerizes through oxidation of the thiol group of the cysteine at position 242 and through a leucine zipper motif, inducing TRIM72-coated intracellular vesicles to nucleate at the injured site and reseal the damaged membrane [102, 103]. At the membrane, TRIM72 binds to phosphatidylserine to mediate the recruitment of vesicles to the injured site [104]. Interestingly, TRIM72 can be secreted and circulate throughout the body to reach all tissues and organs, which allows recombinant TRIM72 protein to have therapeutic benefit in the treatment of injuries to multiple tissues, such as the heart, kidney, lung, brain, liver, skin, skeletal muscle, and cornea [105]. Ablation of the TRIM72 gene leads to increased susceptibility to ischemia-reperfusion- and overventilation-induced ALI in mice [97]. Recently, Sermersheim and colleagues found that knockdown of TRIM72 in macrophages results in activation of NF-κB signaling and increased levels of the inflammatory factor interleukin-1β upon influenza virus infection, and that knockout of TRIM72 promotes CD45+ cell infiltration and IFNβ elevation in the lung [98]. Kenney et al. found that exogenous injection of recombinant human TRIM72 protein could protect against ALI caused by lethal influenza virus infection [106]. Recombinant TRIM72 protein significantly decreased the levels of the inflammatory cytokines IFNβ, IL-6, and IL-1β and the number of infiltrating CD11b+ lymphocytes in lung tissues [106]. It has been reported that intravenous (IV) delivery or inhalation of recombinant human TRIM72 protein reduces symptoms in rodent models of ALI and emphysema [97]. The extracellular recombinant protein also protects cultured lung epithelial cells against anoxia/reoxygenation-induced injury [97]. Most recently, Whitson et al. evaluated the therapeutic benefits of recombinant human TRIM72 protein in porcine models of ALI and found that it can mitigate lung injury in a porcine model of combined hemorrhagic shock/contusive lung injury and reduce warm ischemia-induced injury to the isolated porcine lung when administered during ex vivo lung perfusion [46]. These findings reveal that TRIM72 plays a critical role in ALI and that exogenous recombinant TRIM72 protein may be a shelf-stable therapeutic agent with the potential to restore lung function and lessen the impact of ALI.
## 4. Conclusions

TRIMs are a large and well-conserved family of proteins defined as a subfamily of the RING-type E3 Ub ligases, which have been implicated in a broad range of biological processes including antiviral immunity, cell differentiation, development, and carcinogenesis. Accumulating evidence has shown that several TRIM members have unique and vital roles in ALI through distinct mechanisms (Table 1).
In particular, targeting cell membrane repair in ALI has been a focus of intense research in the last few years. Interestingly, systemically administered recombinant human TRIM72 protein can recognize injury to both the epithelial and endothelial layers of the lung and effectively preserve lung structure and function in ALI. TRIM72 is therefore one of the most promising therapeutic agents with the potential to restore lung function and lessen the impact of ALI. Further work is needed to understand the full contribution of TRIMs, including members yet to be characterized, to ALI. Identification of TRIM proteins with the potential to serve as therapeutic targets may aid the development of novel drugs for ALI treatment. --- *Source: 1007126-2021-10-19.xml*
In particular, the regulation of ALI by targeting cell membrane repair has been a focus of intense research in recent years. Interestingly, systemically administered recombinant human TRIM72 protein can recognize injury to both the epithelial and endothelial layers of the lung and effectively preserve lung structure and function in ALI. TRIM72 is therefore one of the most promising therapeutic agents with the potential to restore lung function and lessen the impact of ALI. Further work is needed to understand the full contribution of TRIM proteins, both known and yet to be discovered, to ALI. Identifying TRIM proteins with the potential to serve as therapeutic targets of ALI may aid the development of novel drugs for ALI treatment. --- *Source: 1007126-2021-10-19.xml*
# Spherical Fuzzy Soft Topology and Its Application in Group Decision-Making Problems **Authors:** Harish Garg; Fathima Perveen P A; Sunil Jacob John; Luis Perez-Dominguez **Journal:** Mathematical Problems in Engineering (2022) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2022/1007133 --- ## Abstract The spherical fuzzy soft set is a generalized soft set model, which is more realistic, practical, and accurate. It is an extended version of existing fuzzy soft set models that can be used to describe imprecise data in real-world scenarios. The paper seeks to introduce the new concept of spherical fuzzy soft topology defined on spherical fuzzy soft sets. In this work, we define some basic concepts including spherical fuzzy soft basis, spherical fuzzy soft subspace, spherical fuzzy soft interior, spherical fuzzy soft closure, and spherical fuzzy soft boundary. The properties of these defined set are also discussed and explained with an appropriate examples. Also, we establish certain important results and introduce spherical fuzzy soft separation axioms, spherical fuzzy soft regular space, and spherical fuzzy soft normal space. Furthermore, as an application, a group decision-making algorithm is presented based on the TOPSIS (Technique of Order Preference by Similarity to an Ideal Solution) method for solving the decision-making problems. The applicability of the proposed method is demonstrated through a numerical example. The comprehensive advantages of the proposed work have been stated over the existing methods. --- ## Body ## 1. Introduction The human life with all of its complexities, is currently in flux due to the exponential growth of innovation and changing technologies that constantly redefine, reshape, and redesign the way the world is perceived and experienced, and the tools once used to solve problems become obsolete and inappropriate. This is no exception to any discipline of knowledge. Thus, The strategies commonly adopted in classical mathematics are not effective all the time due to the uncertainty and ambiguity it entails. Techniques such as fuzzy set theory [1], vague set theory [2], and interval mathematics [3] are viewed as mathematical models for coping with uncertainty and variability. However, these theories suffer from their own shortcomings and inadequacies to deal with the task at hand more objectively. Zadeh’s fuzzy set theory was extensively used in the beginning for many applications. Fuzzy sets are thought to be an extended version of classical sets, where each element has a membership grade. The definition of intuitionistic fuzzy sets was developed by Atanassov [4] to circumvent some limitations of fuzzy sets. Many other fuzzy set extensions have been proposed, including interval-valued intuitionistic fuzzy sets [5], Pythagorean fuzzy sets [6], picture fuzzy sets [7], and so on. These sets were effectively applied in several areas of science and engineering, economics, medical science, and environmental science. Recently, as a generalization of fuzzy set, intuitionistic fuzzy set, and picture fuzzy set, certain authors have developed the concept of spherical fuzzy sets [8] and T-spherical fuzzy sets [9] to enlarge the picture fuzzy sets as it has their restrictions. To address decision-making problems, Ashraf et al. [10] proposed the spherical fuzzy aggregation operators. Akram et al. [11] introduced the complex spherical fuzzy model that excels at expressing ambiguous information in two dimensions. 
The applications of these sets to solve decision-making problems are prevalent in a variety of fields [12–17].In 1999, Molodtsov [18] proposed a new type of set, called soft set, to deal with uncertainty and vagueness. The challenge of determining the membership function in fuzzy set theory does not occur in soft set theory, making the theory applicable to multiple fields of game theory, operations research, Riemann integration, etc. Later, Maji et al. [19] studied more on soft sets and used Pawlak’s rough mathematics [20] to propose a decision-making problem as an application of soft sets. Also, Maji et al. [21] developed a hybrid structure of soft sets and fuzzy sets, known as fuzzy soft sets, which is a more powerful mathematical model for handling different kinds of real-life situations. Many researchers were interested in this concept and various fuzzy set generalizations such as generalized fuzzy soft sets [22], group generalized fuzzy soft sets [23], intuitionistic fuzzy soft sets [24], Pythagorean fuzzy soft sets [25], interval-valued picture fuzzy soft sets [26] were put forward. In the recent times, Perveen et al. [27] created a spherical fuzzy soft set (SFSS), which is a more advanced form of fuzzy soft set. This newly evolved set is arguably the more realistic, practical and accurate. SFSSs are a new variation of the picture fuzzy soft set that was developed by merging soft sets and spherical fuzzy sets, where the membership degrees satisfy the condition 0≤μℵϖ2ς+ηℵϖ2ς+ϑℵϖ2ς≤1 rather than 0≤μℵϖς+ηℵϖς+ϑℵϖς≤1 as in picture fuzzy soft sets. SFSS has more capability in modeling vagueness and uncertainty while dealing with decision-making problems that occur in real-life circumstances. The authors [28] also developed similarity measures of SFSS and applied the proposed spherical fuzzy soft similarity measure in the field of medical science.These theories have applications in topology and many other fields of mathematics. Chang [29] suggested the concept of a fuzzy topological space in 1968. He extended many basic concepts like continuity, compactness, open set, and closed set in general topology to the fuzzy topological spaces. Again, Lowen [30] conducted an elaborated study of the structure of fuzzy topological spaces. Çoker [31] invented the idea of an intuitionistic fuzzy topological space in 1995. Many other results including continuity, compactness, and connectedness of intuitionistic fuzzy topological spaces were proposed by Coker et al. [32, 33]. The notion of Pythagorean fuzzy topological space was presented by Olgun et al. [34]. Kiruthika and Thangavelu [35] discussed the link between topology and soft topology. Recently, by using elementary operations over a universal set with a set of parameters, Taskopru and Altintas [36] established the elementary soft topology. Tanay and Kandemir [37] defined the idea of fuzzy soft topology. They also introduced fuzzy soft neighbourhood, fuzzy soft basis, fuzzy soft interior, and fuzzy soft subspace topology. Several related works on fuzzy soft topology can be seen in [38–40]. Osmanoglu and Tokat [41] proposed the subspace, compactness, connectedness, and separation axioms of intuitionistic fuzzy soft topological spaces. Also, intuitionistic fuzzy soft topological spaces were examined by Bayramov and Gunduz [42]. They studied intuitionistic fuzzy soft continuous mapping and related properties. Riaz et al. 
[43] proposed the concept of Pythagorean fuzzy soft topology defined on Pythagorean fuzzy soft sets, and provided an application of Pythagorean fuzzy soft topology in medical diagnosis by making use of TOPSIS method.Hwang and Yoon [44] developed Technique for order of Preference by Similarity to ideal solution (TOPSIS) as a multi-criteria decision analysis and further studied by Chen et al. [45, 46]. Boran et al. [47] invented the TOPSIS approach based on intuitionistic fuzzy sets for multi-criteria decision-making problems. Chen et al. [48] developed a proportional interval T2 hesitant fuzzy TOPSIS approach based on the Hamacher aggregation operators and the andness optimization models. Further, the fuzzy soft TOPSIS method presented briefly as a multi-criteria decision-making technique by Selim and Karaaslan [49]. They proposed a group decision-making process in a fuzzy soft environment based on the TOPSIS method. Also, many researchers in [50–54] have looked at the TOPSIS approach for solving decision-making problems under the different fuzzy environment.Topological structures on fuzzy soft sets have application in several areas including medical diagnosis, decision-making, pattern recognition, and image processing. Since SFSS is one of the most generalized versions of the fuzzy soft set, introducing topology on SFSS is highly essential in both theoretical and practical scenarios. There are some basic operations of SFSSs in the literature, more functional operations of SFSSs are derived day by day. The development of topology on SFSSs can be considered as an important contribution to fill the gap in the literature on the theory of SFSS. The aim of this paper is to introduce the notion of spherical fuzzy soft topology (SFS-topology) on SFSS, and to discuss some basic concepts such as SFS-subspace, SFS-point, SFS-nbd, SFS-basis, SFS-interior, SFS-closure, SFS-boundary, SFS-exterior and SFS-separation axioms. Also, through this paper, we use the SFS-topology in group decision-making method based on TOPSIS under spherical fuzzy soft environment.The rest of the paper is ordered as follows. In Section2, some fundamental concepts of fuzzy sets, spherical fuzzy sets, soft sets, fuzzy soft sets, and spherical fuzzy soft sets are recalled, and definitions of spherical fuzzy subset, spherical fuzzy union and spherical fuzzy intersection are modified. In Section 3, the concept of SFS-topology is defined on SFSS including some basic definitions. In Section 4, by using the ideas of SFS-points, SFS-open set, and SFS-closed set, SFS-separation axioms are proposed. In Section 5, an algorithm is presented besed on group decision-making method and extension of TOPSIS approach accompanied by a numerical example. This theory will have implications in the discipline of Human resource management, organizational behavior and assessing the rationale of consumer choice. In Section 6, a comparative study is conducted with an already existing algorithm to show the effectiveness of the proposed algorithm. Finally, Section 7 ends with a conclusion and recommendations for future work. ## 2. Preliminaries In this section, we recall certain fundamental ideas associated with various kinds of sets including fuzzy sets, spherical fuzzy sets, soft sets, fuzzy soft sets, and spherical fuzzy soft sets. We redefine the definitions of spherical fuzzy subset, spherical fuzzy union, and spherical fuzzy intersection, also propose the notions of null SFSS and absolute SFSS. 
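As a quick, informal illustration before the formal definitions below, the following Python sketch encodes a single spherical fuzzy value, checks the defining constraint 0 ≤ μ² + η² + ϑ² ≤ 1 recalled in Definition 2, and applies the standard spherical fuzzy union and intersection of Definition 3. This is illustrative code written for this exposition, not code from the paper, and the class and function names are our own; the sample values are those of Example 1 at ς1, whose union is (0.4, 0.4, 0.2).

```python
# Illustrative sketch only (not from the paper): a spherical fuzzy value and the
# standard union/intersection of Definition 3.
from dataclasses import dataclass

@dataclass(frozen=True)
class SphericalFuzzyValue:
    mu: float        # positive membership degree
    eta: float       # neutral membership degree
    vartheta: float  # negative membership degree

    def __post_init__(self):
        # Defining constraint of a spherical fuzzy set: 0 <= mu^2 + eta^2 + vartheta^2 <= 1.
        if not (0.0 <= self.mu ** 2 + self.eta ** 2 + self.vartheta ** 2 <= 1.0):
            raise ValueError("mu^2 + eta^2 + vartheta^2 must lie in [0, 1]")

def union(a: SphericalFuzzyValue, b: SphericalFuzzyValue) -> SphericalFuzzyValue:
    # Definition 3(3): componentwise (max, min, min).
    return SphericalFuzzyValue(max(a.mu, b.mu), min(a.eta, b.eta), min(a.vartheta, b.vartheta))

def intersection(a: SphericalFuzzyValue, b: SphericalFuzzyValue) -> SphericalFuzzyValue:
    # Definition 3(4): componentwise (min, min, max).
    return SphericalFuzzyValue(min(a.mu, b.mu), min(a.eta, b.eta), max(a.vartheta, b.vartheta))

# Values of Example 1 at the point ς1: ℵ(ς1) = (0.3, 0.4, 0.5), Ω(ς1) = (0.4, 0.5, 0.2).
x = SphericalFuzzyValue(0.3, 0.4, 0.5)
y = SphericalFuzzyValue(0.4, 0.5, 0.2)
print(union(x, y))         # (0.4, 0.4, 0.2), as in Example 1
print(intersection(x, y))  # (0.3, 0.4, 0.5)
```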
LetΣ be the initial universal set of discourse and K be the attribute (or parameter) set in connection with the objects in Σ, and ℒ⊆K.Definition 1 (see [1]). A fuzzy setℵ on a universe Σ is an object of the form(1)ℵ=ς,μℵς|ς∈Σ,where μℵ:Σ⟶0,1 is the membership function of ℵ, the value μℵς is the grade of membership of ς in ℵ.Definition 2 (see [9]). A spherical fuzzy set (SFS)S over the universal set Σ can be written as(2)S=ς,μSς,ηSς,ϑSς|ς∈Σ,where μSς,ηSς and ϑSς are the membership functions defined from Σ to [0, 1], indicate the positive, neutral, and negative membership degrees of ς∈Σ respectively, with the condition, 0≤μS2ς+ηS2ς+ϑS2ς≤1, ∀ς∈Σ.Definition 3 (see [9]). Letℵ=ς,μℵς,ηℵς,ϑℵς|ς∈Σ and Ω=ς,μΩς,ηΩς,ϑΩς|ς∈Σ be two SFSs over Σ. Then(1) ℵ⊆Ω if μℵς≤μΩς, ηℵς≤ηΩς, and ϑℵς≥ϑΩς(2) ℵ=Ω if and only if ℵ⊆Ω and ℵ⊇Ω(3) ℵ∪Ω=ς,μℵς∨μΩς,ηℵς∧ηΩς,ϑℵς∧ϑΩς|ς∈Σ(4) ℵ∩Ω=ς,μℵς∧μΩς,ηℵς∧ηΩς,ϑℵς∨ϑΩς|ς∈ΣWhere the symbols “∨” and “∧” represent the maximum and minimum operations respectively.Definition 4 (see [10]). LetΣ be the initial universal set.(1) An SFS is said to be an absolute SFS over the universeΣ, denoted by 1Σ, if ∀ς∈Σ,(3)μ1Σς=1,η1Σς=0,andϑ1Σς=0.(2) An SFS is said to be a null SFS over the universeΣ, denoted by 1Σ, if ∀ς∈Σ,(4)μ1Σς=0,η1Σς=0,andϑ1Σς=1.Example 1. LetΣ=ς1,ς2 be the universal set. Let ℵ and Ω be two SFSs over Σ given by,(5)ℵ=ς1,0.3,0.4,0.5,ς2,0.5,0.2,0.4,(6)Ω=ς1,0.4,0.5,0.2,ς2,0.6,0.3,0.3. Then it is clear thatℵ⊆Ω, and ℵ∪Ω=ς1,0.4,0.4,0.2,ς2,0.6,0.2,0.3. Further,1Σ=ς1,1.0,0.0,0.0,ς2,1.0,0.0,0.1 and 1Σ=ς1,0.0,0.0,0.0,ς2,0.0,0.0,0.1. Then ℵ∪1Σ=ς1,0.3,0.0,0.5,ς2,0.5,0.0,0.4 and ℵ∩1Σ=ς1,0.3,0.0,0.5,ς2,0.5,0.0,0.4. From the above example, It can be showed that the following results are not true generally in spherical fuzzy set theory.(1) ℵ⊆1Σ(2) ℵ∪1Σ=ℵ(3) ℵ∩1Σ=ℵ(4) Ifℵ⊆Ω, then ℵ∪Ω=Ω To overcome this difficulty, we modified the definitions of spherical fuzzy subset, spherical fuzzy union, and spherical fuzzy intersection as follows.Definition 5. Letℵ and Ω be two spherical fuzzy sets over the universe Σ, where ℵ=ς,μℵς,ηℵς,ϑℵς|ς∈Σ and Ω=ς,μΩς,ηΩς,ϑΩς|ς∈Σ. Then ℵ is said to be a spherical fuzzy subset (modified) of Ω, denoted by ℵ⊆^Ω, if ∀ς∈Σ(7)μℵς≤μΩς,ηℵς≤ηΩς,ϑℵς≥ϑΩς;ifμΩς≠1μℵς≤μΩς,ηℵς≥ηΩς,ϑℵς≥ϑΩς;otherwise.Definition 6. Letℵ=ς,μℵς,ηℵς,ϑℵς|ς∈Σ and Ω=ς,μΩς,ηΩς,ϑΩς|ς∈Σ be two spherical fuzzy sets over Σ. Then the spherical fuzzy union (modified), denoted by ℵ∪^Ω, and the spherical fuzzy intersection (modified), denoted by ℵ∩^Ω, are defined as follows:(1) Λ=ℵ∪^Ω=ς,μΛς,ηΛς,ϑΛς|ς∈Σ, whereμΛς=μℵς∨μΩςηΛς=ηℵς∨ηΩς;ifμℵς∨μΩς2+ηℵς∨ηΩς2+ϑℵς∧ϑΩς2≤1ηℵς∧ηΩς;otherwiseϑΛς=ϑℵς∧ϑΩς(2) Π=ℵ∩^Ω=ς,μΠς,ηΠς,ϑΠς|ς∈Σ, whereμΠς=μℵς∧μΩςηΠς=ηℵς∨ηΩς;ifμℵςorμΩς=1ηℵς∧ηΩς;otherwiseϑΠς=ϑℵς∨ϑΩςDefinition 7 (see [18]). LetPΣ denote the power set of the universal set Σ and K be the set of attributes. A soft set over Σ is a pair ℵ,ℒ, where ℵ is a function from ℒ to PΣ, and ℒ⊆K.Definition 8 (see [21]). LetFSΣ denote the collection of all fuzzy subsets over the universal set Σ. A fuzzy soft set (FSS) is a pair ℵ,ℒ, where ℵ is a mapping given by ℵ:ℒ⟶FSΣ and ℒ⊆K.Definition 9 (see [27]). LetSFSΣ be the set of all spherical fuzzy sets over Σ. A spherical fuzzy soft set (SFSS) is a pair ℵ,ℒ, where ℵ is a mapping from ℒ to SFSΣ and ℒ⊆K. For eachϖ∈ℒ,ℵϖ is a spherical fuzzy set such that ℵϖ=ς,μℵϖς,ηℵϖς,ϑℵϖςς∈Σ, where μℵϖς,ηℵϖς, ϑℵϖς∈0,1 are the membership degrees which are explained in Definition 2, with the same condition.Definition 10 (see [27]). Letℵ,ℒ and Ω,ℳ be two SFSSs over Σ, and ℒ,ℳ⊆K. 
Then ℵ,ℒ is said to be a SFS-subset of Ω,ℳ, if(1) ℒ⊆ℳ(2) ∀ϖ∈ℒ,ℵϖ⊆^ΩϖDefinition 11 (see [27]). Letℵ,ℒ be a SFSS over the universal set Σ. Then the SFS-complement of ℵ,ℒ, denoted by ℵ,ℒc, is defined by ℵ,ℒc=ℵc,ℒ, where ℵc:ℒ⟶SFSΣ,K is a mapping given by ℵcϖ=ς,ϑℵϖς,ηℵϖς,μℵϖςς∈Σ for every ϖ∈ℒ.Definition 12 (see [27]). Letℵ,ℒ and Ω,ℳ be two SFSSs over Σ, and ℒ,ℳ⊆K. then the SFS-union of ℵ,ℒ and Ω,ℳ, denoted by ℵ,ℒ∪^Ω,ℳ, is a SFSS Γ,N, where N=ℒ∪ℳ and ∀ϖ∈N(8)Γe=ℵϖ,ifϖ∈ℒ−ℳΩϖ,ifϖ∈ℳ−ℒℵϖ∪^Ωϖ,ifϖ∈ℒ∩ℳ, Now, we propose the definitions of spherical fuzzy soft restricted intersection, null spherical fuzzy soft, and absolute spherical fuzzy soft, which are essential for further discussions.Definition 13. Letℵ,ℒ and Ω,ℳ be two SFSSs over Σ, ℒ,ℳ⊆K. then the SFS-restricted intersection of ℵ,ℒ and Ω,ℳ, denoted by ℵ,ℒ∩^Ω,ℳ, is a SFSS Γ,N, where N=ℒ∩ℳ and ∀ϖ∈N, Γϖ=ℵϖ∩^ΩϖDefinition 14. Letℵ,K be a SFSS defined over Σ. ℵ,K is said to be a null spherical fuzzy soft set, if for every ϖ∈K, ℵϖ=ς,0,0,1|ς∈Σ. That is, ∀ς∈Σ and ϖ∈K, μℵϖς=0, ηℵϖς=0 and ϑℵϖς=1. It is denoted by ∅K.Definition 15. A SFSSℵ,K over Σ is said to be an absolute spherical fuzzy soft set, if for every ϖ∈K, ℵϖ=ς,1,0,0|ς∈Σ. That is, ∀ς∈Σ and ϖ∈K, μℵϖς=1, ηℵϖς=0 and ϑℵϖς=0. It is denoted by ΣK. ## 3. Spherical Fuzzy Soft Topology In this section, we define the notion of spherical fuzzy soft topological space (SFS-topological space) so as to differentiate the concept from the existing fuzzy models and to mark the boundaries and deliberate the basic properties thereof. Further, we define SFS-subspace, SFS-point, SFS-nbd, SFS-basis, SFS-interior, SFS-closure, SFS-boundary and SFS-exterior with the support of befitting numerical illustrations.Definition 16. LetSFSSΣ,K be the collection of all spherical fuzzy soft sets over the universal set Σ and the parameter set K. Let ℒ,ℳ⊆^K. Then a sub-collection T of SFSSΣ,K is said to be a spherical fuzzy soft topology (SFS-topology) on Σ, if(1) ∅K, ΣK∈T(2) Ifℵ1,ℒ, ℵ2,ℳ∈T, then ℵ1,ℒ∩^ℵ2,ℳ∈T(3) Ifℵi,ℒi∈T∀i∈I, an index set, then ∪^i∈Iℵi,ℒi∈T The binaryΣK,T is known as a spherical fuzzy soft topological space over Σ. Each member of T is considered as spherical fuzzy soft open sets and their complements are considered as spherical fuzzy soft closed sets.Example 2. LetΣ=ς1,ς2,ς3 be the universal set with the attribute set K=ϖ1,ϖ2,ϖ3,ϖ4. Let ℒ,ℳ⊆K, where ℒ=ϖ1,ϖ2 and ℳ=ϖ1,ϖ2,ϖ3. Consider the following SFSSs(9) ThenT=σK,∅K,ℵ1,ℒ,ℵ2,ℳ is a SFS-topology on Σ.Definition 17. LetΣK,T be a SFS-topology on Σ and let Z⊆Σ and ℒ⊆K. Then TZ=Ω,ℒ:Ω,ℒ=ℵ,ℒ∩^ZK,ℵ,ℒ∈T is called the SFS-subspace topology of T, where ZK is the absolute SFSS on Z. The doublet ZK,TZ is known as the SFS-subspace of the SFS-topological space ΣK,T.Example 3. Consider Example2. Suppose Z=ς1,ς3⊆^Σ. Now,(10) ThenTZ=ZK,∅K,Ω1,ℒ,Ω2,ℳ is a SFS-subspace topology of T.Definition 18. LetKK,T be a SFS-topological space with T=∅K,ΣK, then T is said to be the indiscrete SFS-topology on Σ and ΣK,T is called the indiscrete SFS-topological space. The indiscrete SFS-topology is the smallest SFS-topology on Σ.Definition 19. LetΣK,T be a SFS-topological space with T=SFSΣ,K, then T is called the discrete SFS-topology on Σ and ΣK,T is said to be the discrete SFS-topological space. The discrete SFS-topology is the largest SFS-topology on Σ.Example 4. LetΣ be the universal set and K be the parameter set, where Σ=ς1,ς2 and K=ϖ1,ϖ2,ϖ3,ϖ4. Let ℒ1,ℒ2,ℳ1,ℳ2⊆K with ℒ1=ℳ2=ϖ1,ϖ2, ℒ2=ϖ1, ℳ1=ϖ1,ϖ2,ϖ3. 
Consider the following SFSSs;(11) ThenT1=∅K,ΣK,ℵ1,ℒ1,ℵ2,ℒ2 and T2=∅K,ΣK,Ω1,ℳ1,Ω2,ℳ2 are two SFS-topologies. Consider T1∪T2=∅K,ΣK,ℵ1,ℒ1,ℵ2,ℒ2,Ω1,ℳ1,Ω2,ℳ2. Now.(12) Thus,ℵ1,ℒ1,Ω1,ℳ1∈T1∪T2, but ℵ1,ℒ1∩^Ω1,ℳ1∉T1∪T2. Therefore, T1∪T2 is not a SFS-topology on Σ.Theorem 1. SupposeT1 and T2 are two SFS-topologies on Σ, then T1∩T2 is also a SFS-topology on Σ. But, T1∪T2 need not be a SFS-topology on Σ.Proof. Suppose that,T1 and T2 are two SFS-topologies on Σ. Since∅K,ΣK∈T1 and ∅K,ΣK∈T2, then ∅K,ΣK∈T1∩T. Letℵ,ℒ,Ω,ℳ∈T1∩T2⇒ℵ,ℒ,Ω,ℳ∈T1 and ℵ,ℒ,Ω,ℳ∈T2⇒ℵ,ℒ∩^Ω,ℳ∈T1 and ℵ,ℒ∩^Ω,ℳ∈T2⇒ℵ,ℒ∩^Ω,ℳ∈T1∩T2. Letℵi,ℒi∈T1∩T2, i∈I, an index set. ⇒ℵi,ℒi∈T1 and ℵi,ℒi∈T2, ∀i∈I⇒∪^i∈Iℵi,ℒi∈T1 and ∪^i∈Iℵi,ℒi∈T2∪^i∈Iℵi,ℒi∈T1∩T2 ThusT1∩T2 satisfies all requirements of SFS-topology on Σ.Definition 20. Consider the two SFS-topologiesT1 and T2 on Σ. T1 is called weaker or coarser than T2 or T2 is called finer or stronger than T1 if and only if T1⊆T2.Remark 3.1. If eitherT1⊆T2 or T2⊆T1, then T1 and T2 are comparable. Otherwise T1 and T2 are not comparable.Example 5. ConsiderΣ=ς1,ς2 as the universal set with the attribute set K=ϖ1,ϖ2,ϖ3,ϖ4. Let ℒ1,ℒ2,ℒ1⊆K, where ℒ1=ϖ1,ϖ2,ϖ3, ℒ2=ϖ1,ϖ2 and ℒ3=ϖ1. The SFSSs ℵ1,ℒ1,ℵ2,ℒ2,ℵ3,ℒ3 are given as follows:(13) Here,T1=∅K,ΣK,ℵ1,ℒ1,ℵ2,ℒ2,ℵ3,ℒ3 and T2=∅K,ΣK,ℵ1,ℒ1 are two SFS-topologies on Σ. It is clear that T2⊆T1. Thus T1 is finer than T2 or T2 is weaker than T1.Definition 21. A SFSSℵ,ℒ is said to be a spherical fuzzy soft point (SFS-point), denoted by ϖℵ, if for every ϖ∈ℒ, ℵϖ≠ς,0,0,1|ς∈Σ and ℵϖ^=ς,0,0,1|ς∈Σ, ∀ϖ^∈ℒ−ϖ. Note that, any SFS-point ϖℵ (say) is also considered as a singleton SFS-subset of the SFSS ℵ,ℒ.Definition 22. A SFS-pointϖℵ is said to be in the SFSS Ω,ℒ, that is, ϖℵ∈Ω,ℒ, if ℵϖ⊆^Ωϖ, for every ϖ∈ℒ.Example 6. Suppose thatΣ=ς1,ς2,ς3 and ℒ=ϖ1,ϖ2,ϖ3⊆K=ϖ1,ϖ2,ϖ3,ϖ4. Consider the SFSS(14) Here,ϖ3∈ℒ and ℵϖ3≠ς,0,0,1|ς∈Σ. But, for ℒ−ϖ3=ϖ1,ϖ2, ℵϖ1=ℵϖ2=ς,0,0,1|ς∈Σ. Thus, ℵ,ℒ is a SFS-point in Σ and denoted by ϖ3ℵ. Let(15) Here,ℵϖ3⊆^Ωϖ3. Thus, we can say that ϖ3ℵ∈Ω,ℒ.Definition 23. LetΓ,ℒ be a SFSS over Σ. Γ,ℒ is said to be a spherical fuzzy soft neighbourhood (SFS-nbd) of the SFS-point ϖℵ over Σ, if there exist a SFS-open set Ω,ℳ such that ϖℵ∈Ω,ℳ⊆^Γ,ℒ.Definition 24. LetΓ,ℒ be a SFSS over Σ. Γ,ℒ is said to be a spherical fuzzy soft neighbourhood (SFS-nbd) of the SFSS ℵ,ℳ, if there exist a SFS-open set Ω,N such that ℵ,ℳ⊆^Ω,N⊆^Γ,ℒ.Theorem 2. LetΣK,T be a SFS-topological space. A SFSS ℵ,ℒ is open if and only if for each SFSS Ω,ℳ such that Ω,ℳ⊆^ℵ,ℒ, ℵ,ℒ is a SFS-nbd of Ω,ℳ.Proof. Suppose that the SFSSℵ,ℒ is SFS-open. That is, ℵ,ℒ∈T. Thus for eachΩ,ℳ⊆^ℵ,ℒ, ℵ,ℒ is a SFS-nbd of Ω,ℳ. Conversely, suppose that, for eachΩ,ℳ⊆^ℵ,ℒ, ℵ,ℒ is A SFS-nbd of Ω,ℳ. Sinceℵ,ℒ⊆^ℵ,ℒ, ℵ,ℒ is a SFS-nbd of ℵ,ℒ itself. Therefore, there exist an open setΓ,N such that ℵ,ℒ⊆^Γ,N⊆^ℵ,ℒ⇒ℵ,ℒ=Γ,N⇒ℵ,ℒ is open.Definition 25. LetΣK,T be a SFS-topological space. A sub-collection ℬ of the SFS-topology T is referred as a spherical fuzzy soft basis (SFS-basis) for T, if for each ℵ,ℒ∈T, ∃B∈ℬ such that Tex translation failed.Example 7. LetΣ=ς1,ς2 and K=ϖ1,ϖ2,ϖ3. Let ℒi⊆Ki=1to11 with ℒ1=ℒ2=ℒ3=ℒ4=ℒ5=ℒ6=ℒ7=K, ℒ8=ℒ9=ϖ1,ϖ2, and ℒ10=ℒ11=ϖ1. Consider the following SFSSs;(16) Then the sub-collection is a SFS-basis for the SFS-topologyT=∅K,ΣK,ℵi,ℒi,i=1to11.Theorem 3. Letℬ be a SFS-basis for a SFS-topology T, then for each ϖ∈ℒ,ℒ⊆K, ℬϖ=ℵϖ:ℵ,ℒ∈ℬ acts as a spherical fuzzy basis for the spherical fuzzy topology Tϖ=ℵϖ:ℵ,ℒ∈T.Proof. 
Suppose thatℵϖ∈Tϖ for some ϖ∈ℒ ⇒ℵ,ℒ∈T Sinceℬ is a SFS-basis for the SFS-topology T, ∃B′⊆^B such that ℵ,ℒ=∪^B′⇒ℵϖ=∪^B′ϖ, where B′ϖ=ℵϖ:ℵ,ℒ∈B′⊆^ℬϖℵϖ=∪^ℬϖ⇒ℬϖ is a spherical fuzzy basis for the spherical fuzzy topology Tϖ.Theorem 4. LetΣE,T be a SFS-topological space. Let ℬ=ℵi,ℒi:i∈I be a sub-collection of SFS-topology T. ℬ is a SFS-basis for T if and only if for any SFS-open set Ω,ℳ and a SFS-point ϖΓ∈Ω,ℳ, there exist a ℵi,ℒi∈ℬ for some i∈I, such that ϖΓ∈ℵi,ℒi⊆^Ω,ℳ.Proof. Suppose that,ℬ=ℵi,ℒi:i∈I⊆T is a SFS- basis for the SFS-topology T. For any SFS-open setΩ,ℳ, there exists SFSSs ℵj,ℒj,j∈J⊆I, where Tex translation failed Thus, for any SFS-pointϖΓ∈Ω,ℳ, there exist a ℵj,ℒj∈ℬ such that ϖΓ∈ℵj,ℒj⊆^Ω,ℳ. Conversely, suppose for any SFS-open setΩ,ℳ and a SFS-point ϖΓ∈Ω,ℳ, there exist a ℵi,ℒi∈ℬ such that ϖΓ∈ℵi,ℒi⊆^Ω,ℳ Thus,Ω,ℳ⊆^∪ϖΓ∈Ω,ℳ^ℵi,ℒi⊆^Ω,ℳΩ,ℳ∪ϖΓ∈Ω,ℳ^ℵi,ℒi Sinceℵi,ℒi∈ℬ, ℬ is a SFS-basis for the SFS-topology T.Definition 26. SupposeΣK,T is a SFS-topological space and ℵ,ℒ is a SFSS over Σ, where ℒ⊆K. Then(1) The SFS-union of all SFS-open subsets ofℵ,ℒ is known as spherical fuzzy soft interior (SFS-interior) of ℵ,ℒ, symbolized by ℵ,ℒ. It is the largest SFS-open set contained in ℵ,ℒ. That is, ℵ,ℒ°⊆^ℵ,ℒ.(2) The SFS-intersection of all SFS-closed supersets ofℵ,ℒ is known as spherical fuzzy soft closure (SFS-closure) of ℵ,ℒ, symbolized by ℵ,ℒ¯. It is the smallest SFS-closed set containing ℵ,ℒ. That is, ℵ,ℒ⊆^ℵ,ℒ¯.(3) The spherical fuzzy soft boundary (SFS-boundary) ofℵ,ℒ, denoted by ∂ℵ,ℒ, is defined as follows: ∂ℵ,ℒ=ℵ,ℒ¯∩^ℵ,ℒc¯(4) The spherical fuzzy soft exterior (SFS-exterior) ofℵ,ℒ, denoted by Extℵ,ℒ, is defined as follows: Extℵ,ℒ=ℵ,ℒc°Example 8. Suppose thatΣ=ς1,ς2 is the universal set with the attribute set K=ϖ1,ϖ2,ϖ3. Consider the SFS-topology T=∅K,ΣK,ℵ1,K,ℵ2,K,ℵ3,K, where(17) Clearly, the members ofT are the SFS-open sets. Now, the corresponding closed sets are given as follows: ∅Kc=ΣKΣKc=∅K(18) Consider the following SFSS.(19) Thus.(20) Then, the SFS-interior ofℵ,K, 4 The SFS-closure ofℵ,K, ℵ,K¯=ΣK(21) So that the SFS-boundary ofℵ,K,(22) The SFS-exterior ofℵ,K, Extℵ,K=ℵ,Kc°=∅K.Theorem 5. Suppose thatΣK,T is a SFS-topological space and ℵ,ℒ is a spherical fuzzy soft set over Σ, where ℒ⊆K. Then we have(1) ℵ,ℒ°c=ℵ,ℒc¯(2) ℵ,ℒ¯c=ℵ,ℒc°Proof. Proof is directTheorem 6. Suppose thatΣK,T is a SFS-topological space and ℵ,ℒ is a spherical fuzzy soft set over Σ, where ℒ⊆K. Then ∂ℵ,K=∂ℵ,KcProof. Proof is direct.Definition 27. LetϖΞ and ϖΨ be two SFS-points. ϖΞ and ϖΨ are said to be distinct, denoted by ϖΞ≠ϖΨ, if their corresponding SFSSs Ξ,ℒ and Ψ,ℳ are disjoint. That is, Ξ,ℒ∩^Ψ,ℳ=∅ℒ∩ℳ. ## 4. Spherical Fuzzy Soft Separation Axioms In this section, we define SFS-separation axioms by using the concepts SFS-point, SFS-open sets and SFS-closed sets.Definition 28. LetΣK,T be a SFS-topological space and let ϖΞ and ϖΨ be any two distinct SFS-points over Σ. If there exist SFS-open sets ℵ,ℒ and Ω,ℳ such that ϖΞ∈ℵ,ℒ and ϖΨ∉ℵ,ℒ or ϖΨ∈Ω,ℳ and ϖΞ∉Ω,ℳ, then ΣK,T is known as SFS T0-space.Example 9. All discrete SFS-topological spaces are SFST0-spaces. Because, for any two distinct SFS-points ϖΞ and ϖΨ over Σ, there exist a SFS-open set ϖΞ, such that ϖΞ∈ϖΞ and ϖΨ∉ϖΞ.Definition 29. LetΣK,T be a SFS-topological space and let ϖΞ,ϖΨ be two SFS-points over Σ with ϖΞ≠ϖΨ. If there exist two SFS-open sets ℵ,ℒ and Ω,ℳ such that ϖΞ∈ℵ,ℒ, ϖΨ∉ℵ,ℒ and ϖΨ∈Ω,ℳ, ϖΞ∉Ω,ℳ, then ΣK,T is known as SFS T1-space.Example 10. Every discrete SFS-topological space is a SFST1-space. 
Because, for any two distinct SFS -points ϖΞ and ϖΨ over Σ, there exist SFS-open sets ϖΞ and ϖΨ, such that ϖΞ∈ϖΞ, ϖΨ∉ϖΞ and ϖΞ∉ϖΨ, ϖΨ∈ϖΨ.Definition 30. LetΣK,T be a SFS-topological space and let ϖΞ and ϖΨ be any two distinct SFS-points over Σ. If there exist two SFS-open sets ℵ,ℒ and ℵ,ℒ such that ϖΞ∈ℵ,ℒ and ϖΨ∈Ω,ℳ, and ℵ,ℒ∩^Ω,ℳ=∅ℒ∩ℳ, then ΣK,T is said to be SFS T2-space or SFS-Hausdorff space.Example 11. Suppose thatΣK,T is a discrete SFS-topological space. If ϖΞ and ϖΨ are any two distinct SFS-points over Σ. Then there exists distinct SFS-open sets ϖΞ and ϖΨ such that ϖΞ∈ϖΞ and ϖΨ∈ϖΨ. Therefore, ΣK,T is a SFS-Hausdorff space.Theorem 7. LetΣK,T be a SFS-topological space with attribute set K. ΣK,T is a SFS-Hausdorff space if and only if for any two distinct SFS-points ϖΞ and ϖΨ, there exist SFS-closed sets Ω1,K and Ω2,K such that ϖΞ∈Ω1,K, ϖΨ∉Ω1,K and ϖΞ∉Ω2,K, ϖΨ∈Ω2,K, and also Ω1,K∪^Ω2,K=ΣK.Proof. Suppose thatΣK,T is a SFS-Hausdorff space, ϖΞ and ϖΨ are any two distinct SFS-points over Σ. That is, ϖΞ∩^ϖΨ=∅K. SinceΣK,T is SFS-Hausdorff space, there exist two SFS-open sets ℵ1,K and ℵ2,K such that ϖΞ∈ℵ1,K, ϖΨ∉ℵ1,K and ϖΞ∉ℵ2,K, ϖΨ∈ℵ2,K. And also ℵ1,K∩^ℵ1,K=∅K⇒ℵ1,Kc∪^ℵ1,Kc=ΣK and also both ℵ1,Kc and ℵ2,Kc are SFS-closed sets. Letℵ1,Kc=Ω1,K and ℵ2,Kc=Ω2,K Then,ϖΞ∉Ω1,K,ϖΨ∈Ω1,K and ϖΞ∉Ω2,K,ϖΨ∈Ω2,K. Conversely, suppose that for any two distinct SFS-pointsϖΞ and ϖΨ, there exist SFS-closed sets Ω1,K and Ω2,K such that ϖΞ∈Ω1,K, ϖΨ∉Ω1,K and ϖΞ∉Ω2,K, ϖΨ∈Ω2,K, and also Ω1,K∪^Ω2,K=ΣK. ⇒Ω1,Kc and Ω2,Kc are SFS-open sets and Ω1,Kc∩^Ω2,Kc=∅K Also,ϖΞ∉Ω1,Kc, ϖΨ∈Ω1,Kc and ϖΞ∈Ω2,Kc, ϖΨ∉Ω2,Kc. Thus,ΣK,T is a SFS-Hausdorff space. □Definition 31. LetΣK,T be a SFS-topological space, Ω,ℳ be a SFS-closed set ϖΞ and ϖΨ, be a SFS-point over Σ such that ϖΞ∉Ω,ℳ. If there is SFS-open sets ℵ1,ℒ1 and ℵ2,ℒ2 such that ϖΞ∈ℵ1,ℒ1, Ω,ℳ⊆^ℵ2,ℒ2 and ℵ1,ℒ1∩^ℵ2,ℒ2=∅ℒ1∩ℒ2, then ΣK,T is called a SFS-regular space.Example 12. LetΣK,T be a SFS-topological space over Σ=ς1,ς2 with SFS-topology T=ΣK,∅K,ℵ1,K,ℵ2,K, where,(23) ThenΣK,T is a SFS-regular space.Definition 32. LetΣK,T be a SFS-topological space. If ΣK,T is a SFS-regular T1-space, then it is called a SFS T3-space.Definition 33. LetΣK,T be a SFS-topological space and let Ω1,ℳ1 and Ω2,ℳ2 be two disjoint SFS-closed sets in ΣK,T. If there exist SFS-open sets ℵ1,ℒ1 and ℵ2,ℒ2 such that Ω1,ℳ1⊆^ℵ1,ℒ1, Ω2,ℳ2⊆^ℵ2,ℒ2 and ℵ1,ℒ1∩^ℵ2,ℒ2=∅ℒ1∩ℒ2, then ΣK,T is called a SFS-normal space.Example 13. LetΣK,T be a SFS-topological space over Σ=ς1,ς2 with SFS-topology (24) ThenΣK,T is a SFS-normal space.Definition 34. LetΣK,T be a SFS-topological space. If ΣK,T is a SFS-normal T1-space, then it is known as SFS T4-space.Theorem 8. Suppose thatΣK,T is a SFS-topological space and Z is a non-empty subset of Σ.(1) IfΣK,T is a SFS T0-space, then ZK,TZ is also a SFS T0-space.(2) IfΣK,T is a SFS T1-space, then ZK,TZ is also a SFS T1-space.(3) IfΣK,T is a SFS T2-space, then ZK,TZ is also a SFS T2-space.Proof. Here we provide the proof if (1). (2) and (3) can be proved in the similar way. Suppose thatϖΞ and ϖΨ are two distinct SFS-points over Z. SinceΣK,T is a SFS T0-space, there is SFS-open sets ℵ,ℒ and Ω,ℳ such that ϖΞ∈ℵ,ℒ, ϖΨ∉ℵ,ℒ or ϖΨ∈Ω,ℳ, ϖΞ∉Ω,ℳ Thus,ϖΞ∈ℵ,ℒ∩^ZK, ϖΨ∉ℵ,ℒ∩^ZK or ϖΨ∈Ω,ℳ∩^ZK, ϖΞ∉Ω,ℳ∩^ZK Therefore,ZK,TZ is also a SFS T0-space. ## 5. Group Decision Algorithm and Illustrative Example In this section, we utilize the proposed SFS-topology to the group decision-making (GDM) process under the spherical fuzzy soft environment. 
For it, we presented the concept of TOPSIS method and embedding it into the proposed SFS-topology. ### 5.1. Proposed Algorithm with TOPSIS Method Consider a GDM process which consist a certain set of alternativesK=ς1,ς2,…,ςm. Each alternative is evaluated under the different set of attributes denoted by K=ϖ1,ϖ2,…,ϖn by the different “p” decision-makers (or experts), say Dℳ1,Dℳ2,…,Dℳp. Each expert has evaluated the given alternatives and provide their ratings in terms of linguistic variables such as “Excellent,” “Good” etc. All the linguistic variables and their corresponding weights are considered in this work from the list which is summarized in Table 1.Table 1 Linguistic terms to determine the alternatives. Linguistic termsWeightsExcellent0.90Very good0.70Good0.50Bad0.30Very bad0.10Then to access the finest alternative(s) from the given alternative, we summarize the following steps of the proposed approach as below.Step 1: Create a weighted SFS parameter matrixAw=αijp×m by considering the linguistic terms from Table 1. That is,(25)where each elementαij is the linguistic rating given by the decision-maker Dℳi to the attribute ϖj.Step 2: Create the weighted normalized SFS parameter matrixNw as follows:(26)where,ρij=αij/∑i=1pαij2Step 3: Compute the weight vectorΘ=θ1,θ2,…,θn, where θi’s are obtained as(27)Step 4: Construct a SFS-topology by aggregating the SFSSsDℳi,K,i=1,2,…,p, accorded by each decision-makers in the matrix form as their evaluation value. The matrix corresponding to the SFSS Dℳi,K is denoted by DMi for all i=1,2,…,p and it is called the SFS-decision matrix, where the rows and columns of each DMi represents the alternatives and the attributes respectively.Step 5: Compute the aggregated SFS matrixDMAgg given as follows:(28)Step 6: Construct the weighted SFS-decision matrix(29)whereβpq=θq×dpq and each βpq=μϖqςp,ηϖqςp,ϑϖqςp, p=1,2,…,m and q=1,2,…,n.Step 7: Obtain SFS-valued positive ideal solutionSFSV+ and SFS-valued negative ideal solution SFSV−, where(30)(31)Step 8: Compute the SFS-separation measurementsEdp+ and Edp−, ∀p=1,2,…,m, defined as follows:(32)Edp+=∑q=1nμϖqςp−μq+2+ηϖqςp−ηq+2+ϑϖqςp−ϑq+2,(33)Edp−−=∑q=1nμϖqςp−μq−2+ηϖqςp−ηq+2+ϑϖqςp−ϑq−2.Step 9: Obtain the SFS-closeness coefficientCp^ of each alternatives. Where(34)Cp^=Edp−Edp++Edp−∈0,1.providedEdp+≠0.Step 10: Based on the SFS-closeness coefficient, rank the alternatives in decreasing (or increasing) order and choose the optimal object from the alternatives. ### 5.2. Illustrative Example An international company conducted a campus recruitment in a college and shortlisted four studentsΣ=ς1,ς2,ς3,ς4 through the first round of recruitment. There is only one vacancy and they have to select one student as their candidate out of these five students. Suppose there are six decision-makers Dℳ=Dℳ1,Dℳ2,Dℳ3,Dℳ4,Dℳ5,Dℳ6 for the final round and they must have select the candidate based on the parameter set K=ϖ1,ϖ2,ϖ3,ϖ4,ϖ5. For i=1,2,3,4,5, the parameters ϖj stand for “educational discipline,” “English speaking,” “writing skill,” “technical discipline,” and “general knowledge” respectively. 
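To make the ranking steps concrete before the numerical walk-through that follows, here is a minimal Python sketch of Steps 7–10 applied to a small, entirely hypothetical weighted SFS-decision matrix (the real matrix B would come out of Steps 1–6). This is not the authors' implementation: since equations (30) and (31) are not reproduced in the extracted text, the usual TOPSIS convention for the ideal solutions (componentwise best and worst values) is assumed here; equations (32)–(33) are followed as printed (a sum of squared differences, with no explicit square root), and the η_q^+ appearing in equation (33) is read as η_q^−.

```python
# Hypothetical weighted SFS-decision matrix B (Step 6 output), illustrative values only:
# B[p][q] = (mu, eta, vartheta) for alternative p under attribute q.
B = [
    [(0.52, 0.20, 0.31), (0.61, 0.18, 0.25), (0.48, 0.22, 0.35)],  # alternative 1
    [(0.45, 0.25, 0.38), (0.50, 0.21, 0.33), (0.55, 0.19, 0.30)],  # alternative 2
    [(0.40, 0.28, 0.42), (0.44, 0.26, 0.39), (0.41, 0.27, 0.40)],  # alternative 3
]
m, n = len(B), len(B[0])

# Step 7 (assumed convention, since equations (30)-(31) are not shown): positive ideal =
# componentwise (max mu, min eta, min vartheta); negative ideal = (min mu, max eta, max vartheta).
pos_ideal = [(max(B[p][q][0] for p in range(m)),
              min(B[p][q][1] for p in range(m)),
              min(B[p][q][2] for p in range(m))) for q in range(n)]
neg_ideal = [(min(B[p][q][0] for p in range(m)),
              max(B[p][q][1] for p in range(m)),
              max(B[p][q][2] for p in range(m))) for q in range(n)]

def separation(row, ideal):
    # Equations (32)-(33) as printed: sum of squared componentwise differences.
    return sum((a - b) ** 2
               for cell, ref in zip(row, ideal)
               for a, b in zip(cell, ref))

# Steps 8-9: separation measures and closeness coefficient C_p = Ed_p^- / (Ed_p^+ + Ed_p^-).
closeness = []
for p in range(m):
    d_plus = separation(B[p], pos_ideal)
    d_minus = separation(B[p], neg_ideal)
    closeness.append(d_minus / (d_plus + d_minus))  # equation (34); assumes d_plus + d_minus > 0

# Step 10: rank the alternatives by decreasing closeness coefficient.
ranking = sorted(range(m), key=lambda p: closeness[p], reverse=True)
print("closeness:", [round(c, 3) for c in closeness])
print("ranking (best first):", [p + 1 for p in ranking])
```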
Then the steps of the proposed approach have been executed to find the best alternative(s) as follows.Step 1: The weighted SFS parameter matrix Aw is formulated on the basis of equation (25) as follows:(35)Step 2: The weighted normalized SFS parameter matrix Nw is computed by using equation (26).(36)Step 3: By using equation (27), the weight vector of the given attributes is computed as(37)Step 4: For each decision-maker DMi, i = 1 to 6, and their corresponding SFS-decision matrices, we get a SFS-topology on Σ as(38)Thus, the collection DM1, DM2, DM3, DM4, DM5, DM6 gives a SFS-topology on Σ.Step 5: The aggregated SFS matrix DMAgg is obtained by using equation (28) and summarized as(39)Step 6: The weighted SFS-decision matrix B is obtained by using equation (29) and written as(40)Step 7: From the weighted matrix B and utilizing equations (30), (31), we obtain the ideal solutions SFSV+ and SFSV− as(41)Step 8: For each p = 1, 2, 3, 4, the SFS-separation measurements Edp+ and Edp− are calculated by using equations (32), (33) as(42)Step 9: Using equation (34), compute the SFS-closeness coefficients Cp^ for each p = 1, 2, 3, 4 and get(43)Step 10: Based on the ratings of Cp^’s, we obtain the ordering of the given alternatives as(44)which corresponds to the alternative ranking ς1 > ς2 > ς3 > ς4. Thus, we conclude that the international company should select the student ς1 as their candidate.
## 6. Comparison Analysis In this section, the proposed algorithm is compared with the existing algorithm (Algorithm 1: decision-making based on the adjustable soft discernibility matrix) [27]. Since the optimal solution of the study discussed in Section 5.2 using Algorithm 1 is also “ς1,” the proposed algorithm, based on the group decision-making method and the extension of the TOPSIS approach, is comparable to the previously known method, which validates its reliability and dependability.The advantages of the work drawn in earlier sections can be summarized as follows:(i) Topological structures on fuzzy soft sets are used in a variety of applications, including medical diagnosis, decision-making, pattern recognition, image processing, and so on.(ii) SFSS is one of the most generalized versions of the fuzzy soft set and is arguably more realistic, practical, and accurate.(iii) Introducing topology on SFSS is highly important in both theoretical and practical scenarios.(iv) While dealing with group decision-making problems of SFSSs, the proposed algorithm is more reliable and expressive. ## 7. Conclusions The spherical fuzzy soft set is the most generalized version of all other existing fuzzy soft set models.
This newest concept is more precise, accurate, and sensible, and the resulting models are thus capable of solving a wide range of problems more deftly and practically. In this paper, we probed into certain basic aspects of spherical fuzzy soft topological spaces. SFS-topology is developed by using the notions of SFS-union and SFS-intersection. The paper has also provided fundamental definitions pertaining to SFS-topology, including SFS-subspace, SFS-point, SFS-nbd, SFS-basis, SFS-interior, SFS-closure, SFS-boundary, and SFS-exterior, and on the basis of these definitions we have proven several theorems. Further, SFS-separation axioms are presented using the concepts of SFS-point, SFS-closed sets, and SFS-open sets, on the basis of which an algorithm is also proposed as an application to group decision-making; the model is presented as an extension of the TOPSIS approach as well. A numerical example is used to illustrate the efficiency of the proposed algorithm. In the future, we will explore algebraic properties of SFSSs and investigate their applications in decision-making, medical diagnosis, clustering analysis, pattern recognition, and information science. The relationship between SFSSs and T-SFSSs, as well as the algebraic and topological structures of T-SFSSs, can also be studied in future work. --- *Source: 1007133-2022-04-26.xml*
Consider the following SFSSs;(11) ThenT1=∅K,ΣK,ℵ1,ℒ1,ℵ2,ℒ2 and T2=∅K,ΣK,Ω1,ℳ1,Ω2,ℳ2 are two SFS-topologies. Consider T1∪T2=∅K,ΣK,ℵ1,ℒ1,ℵ2,ℒ2,Ω1,ℳ1,Ω2,ℳ2. Now.(12) Thus,ℵ1,ℒ1,Ω1,ℳ1∈T1∪T2, but ℵ1,ℒ1∩^Ω1,ℳ1∉T1∪T2. Therefore, T1∪T2 is not a SFS-topology on Σ.Theorem 1. SupposeT1 and T2 are two SFS-topologies on Σ, then T1∩T2 is also a SFS-topology on Σ. But, T1∪T2 need not be a SFS-topology on Σ.Proof. Suppose that,T1 and T2 are two SFS-topologies on Σ. Since∅K,ΣK∈T1 and ∅K,ΣK∈T2, then ∅K,ΣK∈T1∩T. Letℵ,ℒ,Ω,ℳ∈T1∩T2⇒ℵ,ℒ,Ω,ℳ∈T1 and ℵ,ℒ,Ω,ℳ∈T2⇒ℵ,ℒ∩^Ω,ℳ∈T1 and ℵ,ℒ∩^Ω,ℳ∈T2⇒ℵ,ℒ∩^Ω,ℳ∈T1∩T2. Letℵi,ℒi∈T1∩T2, i∈I, an index set. ⇒ℵi,ℒi∈T1 and ℵi,ℒi∈T2, ∀i∈I⇒∪^i∈Iℵi,ℒi∈T1 and ∪^i∈Iℵi,ℒi∈T2∪^i∈Iℵi,ℒi∈T1∩T2 ThusT1∩T2 satisfies all requirements of SFS-topology on Σ.Definition 20. Consider the two SFS-topologiesT1 and T2 on Σ. T1 is called weaker or coarser than T2 or T2 is called finer or stronger than T1 if and only if T1⊆T2.Remark 3.1. If eitherT1⊆T2 or T2⊆T1, then T1 and T2 are comparable. Otherwise T1 and T2 are not comparable.Example 5. ConsiderΣ=ς1,ς2 as the universal set with the attribute set K=ϖ1,ϖ2,ϖ3,ϖ4. Let ℒ1,ℒ2,ℒ1⊆K, where ℒ1=ϖ1,ϖ2,ϖ3, ℒ2=ϖ1,ϖ2 and ℒ3=ϖ1. The SFSSs ℵ1,ℒ1,ℵ2,ℒ2,ℵ3,ℒ3 are given as follows:(13) Here,T1=∅K,ΣK,ℵ1,ℒ1,ℵ2,ℒ2,ℵ3,ℒ3 and T2=∅K,ΣK,ℵ1,ℒ1 are two SFS-topologies on Σ. It is clear that T2⊆T1. Thus T1 is finer than T2 or T2 is weaker than T1.Definition 21. A SFSSℵ,ℒ is said to be a spherical fuzzy soft point (SFS-point), denoted by ϖℵ, if for every ϖ∈ℒ, ℵϖ≠ς,0,0,1|ς∈Σ and ℵϖ^=ς,0,0,1|ς∈Σ, ∀ϖ^∈ℒ−ϖ. Note that, any SFS-point ϖℵ (say) is also considered as a singleton SFS-subset of the SFSS ℵ,ℒ.Definition 22. A SFS-pointϖℵ is said to be in the SFSS Ω,ℒ, that is, ϖℵ∈Ω,ℒ, if ℵϖ⊆^Ωϖ, for every ϖ∈ℒ.Example 6. Suppose thatΣ=ς1,ς2,ς3 and ℒ=ϖ1,ϖ2,ϖ3⊆K=ϖ1,ϖ2,ϖ3,ϖ4. Consider the SFSS(14) Here,ϖ3∈ℒ and ℵϖ3≠ς,0,0,1|ς∈Σ. But, for ℒ−ϖ3=ϖ1,ϖ2, ℵϖ1=ℵϖ2=ς,0,0,1|ς∈Σ. Thus, ℵ,ℒ is a SFS-point in Σ and denoted by ϖ3ℵ. Let(15) Here,ℵϖ3⊆^Ωϖ3. Thus, we can say that ϖ3ℵ∈Ω,ℒ.Definition 23. LetΓ,ℒ be a SFSS over Σ. Γ,ℒ is said to be a spherical fuzzy soft neighbourhood (SFS-nbd) of the SFS-point ϖℵ over Σ, if there exist a SFS-open set Ω,ℳ such that ϖℵ∈Ω,ℳ⊆^Γ,ℒ.Definition 24. LetΓ,ℒ be a SFSS over Σ. Γ,ℒ is said to be a spherical fuzzy soft neighbourhood (SFS-nbd) of the SFSS ℵ,ℳ, if there exist a SFS-open set Ω,N such that ℵ,ℳ⊆^Ω,N⊆^Γ,ℒ.Theorem 2. LetΣK,T be a SFS-topological space. A SFSS ℵ,ℒ is open if and only if for each SFSS Ω,ℳ such that Ω,ℳ⊆^ℵ,ℒ, ℵ,ℒ is a SFS-nbd of Ω,ℳ.Proof. Suppose that the SFSSℵ,ℒ is SFS-open. That is, ℵ,ℒ∈T. Thus for eachΩ,ℳ⊆^ℵ,ℒ, ℵ,ℒ is a SFS-nbd of Ω,ℳ. Conversely, suppose that, for eachΩ,ℳ⊆^ℵ,ℒ, ℵ,ℒ is A SFS-nbd of Ω,ℳ. Sinceℵ,ℒ⊆^ℵ,ℒ, ℵ,ℒ is a SFS-nbd of ℵ,ℒ itself. Therefore, there exist an open setΓ,N such that ℵ,ℒ⊆^Γ,N⊆^ℵ,ℒ⇒ℵ,ℒ=Γ,N⇒ℵ,ℒ is open.Definition 25. LetΣK,T be a SFS-topological space. A sub-collection ℬ of the SFS-topology T is referred as a spherical fuzzy soft basis (SFS-basis) for T, if for each ℵ,ℒ∈T, ∃B∈ℬ such that Tex translation failed.Example 7. LetΣ=ς1,ς2 and K=ϖ1,ϖ2,ϖ3. Let ℒi⊆Ki=1to11 with ℒ1=ℒ2=ℒ3=ℒ4=ℒ5=ℒ6=ℒ7=K, ℒ8=ℒ9=ϖ1,ϖ2, and ℒ10=ℒ11=ϖ1. Consider the following SFSSs;(16) Then the sub-collection is a SFS-basis for the SFS-topologyT=∅K,ΣK,ℵi,ℒi,i=1to11.Theorem 3. Letℬ be a SFS-basis for a SFS-topology T, then for each ϖ∈ℒ,ℒ⊆K, ℬϖ=ℵϖ:ℵ,ℒ∈ℬ acts as a spherical fuzzy basis for the spherical fuzzy topology Tϖ=ℵϖ:ℵ,ℒ∈T.Proof. 
Suppose thatℵϖ∈Tϖ for some ϖ∈ℒ ⇒ℵ,ℒ∈T Sinceℬ is a SFS-basis for the SFS-topology T, ∃B′⊆^B such that ℵ,ℒ=∪^B′⇒ℵϖ=∪^B′ϖ, where B′ϖ=ℵϖ:ℵ,ℒ∈B′⊆^ℬϖℵϖ=∪^ℬϖ⇒ℬϖ is a spherical fuzzy basis for the spherical fuzzy topology Tϖ.Theorem 4. LetΣE,T be a SFS-topological space. Let ℬ=ℵi,ℒi:i∈I be a sub-collection of SFS-topology T. ℬ is a SFS-basis for T if and only if for any SFS-open set Ω,ℳ and a SFS-point ϖΓ∈Ω,ℳ, there exist a ℵi,ℒi∈ℬ for some i∈I, such that ϖΓ∈ℵi,ℒi⊆^Ω,ℳ.Proof. Suppose that,ℬ=ℵi,ℒi:i∈I⊆T is a SFS- basis for the SFS-topology T. For any SFS-open setΩ,ℳ, there exists SFSSs ℵj,ℒj,j∈J⊆I, where Tex translation failed Thus, for any SFS-pointϖΓ∈Ω,ℳ, there exist a ℵj,ℒj∈ℬ such that ϖΓ∈ℵj,ℒj⊆^Ω,ℳ. Conversely, suppose for any SFS-open setΩ,ℳ and a SFS-point ϖΓ∈Ω,ℳ, there exist a ℵi,ℒi∈ℬ such that ϖΓ∈ℵi,ℒi⊆^Ω,ℳ Thus,Ω,ℳ⊆^∪ϖΓ∈Ω,ℳ^ℵi,ℒi⊆^Ω,ℳΩ,ℳ∪ϖΓ∈Ω,ℳ^ℵi,ℒi Sinceℵi,ℒi∈ℬ, ℬ is a SFS-basis for the SFS-topology T.Definition 26. SupposeΣK,T is a SFS-topological space and ℵ,ℒ is a SFSS over Σ, where ℒ⊆K. Then(1) The SFS-union of all SFS-open subsets ofℵ,ℒ is known as spherical fuzzy soft interior (SFS-interior) of ℵ,ℒ, symbolized by ℵ,ℒ. It is the largest SFS-open set contained in ℵ,ℒ. That is, ℵ,ℒ°⊆^ℵ,ℒ.(2) The SFS-intersection of all SFS-closed supersets ofℵ,ℒ is known as spherical fuzzy soft closure (SFS-closure) of ℵ,ℒ, symbolized by ℵ,ℒ¯. It is the smallest SFS-closed set containing ℵ,ℒ. That is, ℵ,ℒ⊆^ℵ,ℒ¯.(3) The spherical fuzzy soft boundary (SFS-boundary) ofℵ,ℒ, denoted by ∂ℵ,ℒ, is defined as follows: ∂ℵ,ℒ=ℵ,ℒ¯∩^ℵ,ℒc¯(4) The spherical fuzzy soft exterior (SFS-exterior) ofℵ,ℒ, denoted by Extℵ,ℒ, is defined as follows: Extℵ,ℒ=ℵ,ℒc°Example 8. Suppose thatΣ=ς1,ς2 is the universal set with the attribute set K=ϖ1,ϖ2,ϖ3. Consider the SFS-topology T=∅K,ΣK,ℵ1,K,ℵ2,K,ℵ3,K, where(17) Clearly, the members ofT are the SFS-open sets. Now, the corresponding closed sets are given as follows: ∅Kc=ΣKΣKc=∅K(18) Consider the following SFSS.(19) Thus.(20) Then, the SFS-interior ofℵ,K, 4 The SFS-closure ofℵ,K, ℵ,K¯=ΣK(21) So that the SFS-boundary ofℵ,K,(22) The SFS-exterior ofℵ,K, Extℵ,K=ℵ,Kc°=∅K.Theorem 5. Suppose thatΣK,T is a SFS-topological space and ℵ,ℒ is a spherical fuzzy soft set over Σ, where ℒ⊆K. Then we have(1) ℵ,ℒ°c=ℵ,ℒc¯(2) ℵ,ℒ¯c=ℵ,ℒc°Proof. Proof is directTheorem 6. Suppose thatΣK,T is a SFS-topological space and ℵ,ℒ is a spherical fuzzy soft set over Σ, where ℒ⊆K. Then ∂ℵ,K=∂ℵ,KcProof. Proof is direct.Definition 27. LetϖΞ and ϖΨ be two SFS-points. ϖΞ and ϖΨ are said to be distinct, denoted by ϖΞ≠ϖΨ, if their corresponding SFSSs Ξ,ℒ and Ψ,ℳ are disjoint. That is, Ξ,ℒ∩^Ψ,ℳ=∅ℒ∩ℳ. ## 4. Spherical Fuzzy Soft Separation Axioms In this section, we define SFS-separation axioms by using the concepts SFS-point, SFS-open sets and SFS-closed sets.Definition 28. LetΣK,T be a SFS-topological space and let ϖΞ and ϖΨ be any two distinct SFS-points over Σ. If there exist SFS-open sets ℵ,ℒ and Ω,ℳ such that ϖΞ∈ℵ,ℒ and ϖΨ∉ℵ,ℒ or ϖΨ∈Ω,ℳ and ϖΞ∉Ω,ℳ, then ΣK,T is known as SFS T0-space.Example 9. All discrete SFS-topological spaces are SFST0-spaces. Because, for any two distinct SFS-points ϖΞ and ϖΨ over Σ, there exist a SFS-open set ϖΞ, such that ϖΞ∈ϖΞ and ϖΨ∉ϖΞ.Definition 29. LetΣK,T be a SFS-topological space and let ϖΞ,ϖΨ be two SFS-points over Σ with ϖΞ≠ϖΨ. If there exist two SFS-open sets ℵ,ℒ and Ω,ℳ such that ϖΞ∈ℵ,ℒ, ϖΨ∉ℵ,ℒ and ϖΨ∈Ω,ℳ, ϖΞ∉Ω,ℳ, then ΣK,T is known as SFS T1-space.Example 10. Every discrete SFS-topological space is a SFST1-space. 
Because, for any two distinct SFS -points ϖΞ and ϖΨ over Σ, there exist SFS-open sets ϖΞ and ϖΨ, such that ϖΞ∈ϖΞ, ϖΨ∉ϖΞ and ϖΞ∉ϖΨ, ϖΨ∈ϖΨ.Definition 30. LetΣK,T be a SFS-topological space and let ϖΞ and ϖΨ be any two distinct SFS-points over Σ. If there exist two SFS-open sets ℵ,ℒ and ℵ,ℒ such that ϖΞ∈ℵ,ℒ and ϖΨ∈Ω,ℳ, and ℵ,ℒ∩^Ω,ℳ=∅ℒ∩ℳ, then ΣK,T is said to be SFS T2-space or SFS-Hausdorff space.Example 11. Suppose thatΣK,T is a discrete SFS-topological space. If ϖΞ and ϖΨ are any two distinct SFS-points over Σ. Then there exists distinct SFS-open sets ϖΞ and ϖΨ such that ϖΞ∈ϖΞ and ϖΨ∈ϖΨ. Therefore, ΣK,T is a SFS-Hausdorff space.Theorem 7. LetΣK,T be a SFS-topological space with attribute set K. ΣK,T is a SFS-Hausdorff space if and only if for any two distinct SFS-points ϖΞ and ϖΨ, there exist SFS-closed sets Ω1,K and Ω2,K such that ϖΞ∈Ω1,K, ϖΨ∉Ω1,K and ϖΞ∉Ω2,K, ϖΨ∈Ω2,K, and also Ω1,K∪^Ω2,K=ΣK.Proof. Suppose thatΣK,T is a SFS-Hausdorff space, ϖΞ and ϖΨ are any two distinct SFS-points over Σ. That is, ϖΞ∩^ϖΨ=∅K. SinceΣK,T is SFS-Hausdorff space, there exist two SFS-open sets ℵ1,K and ℵ2,K such that ϖΞ∈ℵ1,K, ϖΨ∉ℵ1,K and ϖΞ∉ℵ2,K, ϖΨ∈ℵ2,K. And also ℵ1,K∩^ℵ1,K=∅K⇒ℵ1,Kc∪^ℵ1,Kc=ΣK and also both ℵ1,Kc and ℵ2,Kc are SFS-closed sets. Letℵ1,Kc=Ω1,K and ℵ2,Kc=Ω2,K Then,ϖΞ∉Ω1,K,ϖΨ∈Ω1,K and ϖΞ∉Ω2,K,ϖΨ∈Ω2,K. Conversely, suppose that for any two distinct SFS-pointsϖΞ and ϖΨ, there exist SFS-closed sets Ω1,K and Ω2,K such that ϖΞ∈Ω1,K, ϖΨ∉Ω1,K and ϖΞ∉Ω2,K, ϖΨ∈Ω2,K, and also Ω1,K∪^Ω2,K=ΣK. ⇒Ω1,Kc and Ω2,Kc are SFS-open sets and Ω1,Kc∩^Ω2,Kc=∅K Also,ϖΞ∉Ω1,Kc, ϖΨ∈Ω1,Kc and ϖΞ∈Ω2,Kc, ϖΨ∉Ω2,Kc. Thus,ΣK,T is a SFS-Hausdorff space. □Definition 31. LetΣK,T be a SFS-topological space, Ω,ℳ be a SFS-closed set ϖΞ and ϖΨ, be a SFS-point over Σ such that ϖΞ∉Ω,ℳ. If there is SFS-open sets ℵ1,ℒ1 and ℵ2,ℒ2 such that ϖΞ∈ℵ1,ℒ1, Ω,ℳ⊆^ℵ2,ℒ2 and ℵ1,ℒ1∩^ℵ2,ℒ2=∅ℒ1∩ℒ2, then ΣK,T is called a SFS-regular space.Example 12. LetΣK,T be a SFS-topological space over Σ=ς1,ς2 with SFS-topology T=ΣK,∅K,ℵ1,K,ℵ2,K, where,(23) ThenΣK,T is a SFS-regular space.Definition 32. LetΣK,T be a SFS-topological space. If ΣK,T is a SFS-regular T1-space, then it is called a SFS T3-space.Definition 33. LetΣK,T be a SFS-topological space and let Ω1,ℳ1 and Ω2,ℳ2 be two disjoint SFS-closed sets in ΣK,T. If there exist SFS-open sets ℵ1,ℒ1 and ℵ2,ℒ2 such that Ω1,ℳ1⊆^ℵ1,ℒ1, Ω2,ℳ2⊆^ℵ2,ℒ2 and ℵ1,ℒ1∩^ℵ2,ℒ2=∅ℒ1∩ℒ2, then ΣK,T is called a SFS-normal space.Example 13. LetΣK,T be a SFS-topological space over Σ=ς1,ς2 with SFS-topology (24) ThenΣK,T is a SFS-normal space.Definition 34. LetΣK,T be a SFS-topological space. If ΣK,T is a SFS-normal T1-space, then it is known as SFS T4-space.Theorem 8. Suppose thatΣK,T is a SFS-topological space and Z is a non-empty subset of Σ.(1) IfΣK,T is a SFS T0-space, then ZK,TZ is also a SFS T0-space.(2) IfΣK,T is a SFS T1-space, then ZK,TZ is also a SFS T1-space.(3) IfΣK,T is a SFS T2-space, then ZK,TZ is also a SFS T2-space.Proof. Here we provide the proof if (1). (2) and (3) can be proved in the similar way. Suppose thatϖΞ and ϖΨ are two distinct SFS-points over Z. SinceΣK,T is a SFS T0-space, there is SFS-open sets ℵ,ℒ and Ω,ℳ such that ϖΞ∈ℵ,ℒ, ϖΨ∉ℵ,ℒ or ϖΨ∈Ω,ℳ, ϖΞ∉Ω,ℳ Thus,ϖΞ∈ℵ,ℒ∩^ZK, ϖΨ∉ℵ,ℒ∩^ZK or ϖΨ∈Ω,ℳ∩^ZK, ϖΞ∉Ω,ℳ∩^ZK Therefore,ZK,TZ is also a SFS T0-space. ## 5. Group Decision Algorithm and Illustrative Example In this section, we utilize the proposed SFS-topology to the group decision-making (GDM) process under the spherical fuzzy soft environment. 
For this purpose, we present the TOPSIS method and embed it into the proposed SFS-topology.

### 5.1. Proposed Algorithm with TOPSIS Method

Consider a GDM process that consists of a set of alternatives Σ = {ς1, ς2, …, ςm}. Each alternative is evaluated under a set of attributes K = {ϖ1, ϖ2, …, ϖn} by "p" different decision-makers (or experts), say Dℳ1, Dℳ2, …, Dℳp. Each expert evaluates the given alternatives and provides ratings in terms of linguistic variables such as "Excellent," "Good," etc. All the linguistic variables and their corresponding weights considered in this work are summarized in Table 1.

Table 1 Linguistic terms used to rate the alternatives.

| Linguistic term | Weight |
|---|---|
| Excellent | 0.90 |
| Very good | 0.70 |
| Good | 0.50 |
| Bad | 0.30 |
| Very bad | 0.10 |

Then, to select the finest alternative(s) from the given alternatives, the steps of the proposed approach are summarized as follows.

Step 1: Create a weighted SFS parameter matrix Aw = [αij]p×n by assigning the linguistic weights from Table 1, as in (25), where each element αij is the weight of the linguistic rating given by decision-maker Dℳi to the attribute ϖj.

Step 2: Create the weighted normalized SFS parameter matrix Nw as in (26), where ρij = αij / ∑i=1..p αij².

Step 3: Compute the weight vector Θ = (θ1, θ2, …, θn), where the θi are obtained as in (27).

Step 4: Construct an SFS-topology by aggregating the SFSSs (Dℳi, K), i = 1, 2, …, p, provided by each decision-maker in matrix form as their evaluation values. The matrix corresponding to the SFSS (Dℳi, K) is denoted by DMi for all i = 1, 2, …, p and is called the SFS-decision matrix, where the rows and columns of each DMi represent the alternatives and the attributes, respectively.

Step 5: Compute the aggregated SFS matrix DMAgg as given in (28).

Step 6: Construct the weighted SFS-decision matrix B as in (29), where βpq = θq × dpq and each βpq = (μϖq(ςp), ηϖq(ςp), ϑϖq(ςp)), p = 1, 2, …, m and q = 1, 2, …, n.

Step 7: Obtain the SFS-valued positive ideal solution SFSV+ and the SFS-valued negative ideal solution SFSV−, as in (30) and (31).

Step 8: Compute the SFS-separation measurements Edp+ and Edp−, for all p = 1, 2, …, m, defined as follows:

(32) Edp+ = ∑q=1..n [(μϖq(ςp) − μq+)² + (ηϖq(ςp) − ηq+)² + (ϑϖq(ςp) − ϑq+)²],

(33) Edp− = ∑q=1..n [(μϖq(ςp) − μq−)² + (ηϖq(ςp) − ηq−)² + (ϑϖq(ςp) − ϑq−)²].

Step 9: Obtain the SFS-closeness coefficient Ĉp of each alternative, where

(34) Ĉp = Edp− / (Edp+ + Edp−) ∈ [0, 1], provided Edp+ ≠ 0.

Step 10: Based on the SFS-closeness coefficients, rank the alternatives in decreasing (or increasing) order and choose the optimal object from the alternatives.

### 5.2. Illustrative Example

An international company conducted a campus recruitment drive at a college and shortlisted four students Σ = {ς1, ς2, ς3, ς4} through the first round of recruitment. There is only one vacancy, and the company has to select one of these four students as its candidate. Suppose there are six decision-makers Dℳ = {Dℳ1, Dℳ2, Dℳ3, Dℳ4, Dℳ5, Dℳ6} for the final round, and they must select the candidate based on the parameter set K = {ϖ1, ϖ2, ϖ3, ϖ4, ϖ5}. For j = 1, 2, 3, 4, 5, the parameters ϖj stand for "educational discipline," "English speaking," "writing skill," "technical discipline," and "general knowledge," respectively.
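Before the steps are executed on this example, a minimal Python sketch of Steps 1–2 may help fix ideas. The linguistic-to-weight mapping follows Table 1; the ratings matrix is hypothetical; and the normalization follows the reading of eq. (26) given above (the printed equation is partially garbled in the source, so that reading is an assumption).

```python
import numpy as np

# Table 1: linguistic terms and their weights.
LINGUISTIC_WEIGHT = {"Excellent": 0.90, "Very good": 0.70, "Good": 0.50, "Bad": 0.30, "Very bad": 0.10}

# Hypothetical linguistic ratings: rows = decision-makers DM1..DMp, columns = attributes w1..wn.
ratings = [
    ["Excellent", "Good",      "Very good", "Good",      "Bad"],
    ["Very good", "Excellent", "Good",      "Very good", "Good"],
    ["Good",      "Very good", "Excellent", "Good",      "Very good"],
]

# Step 1: weighted SFS parameter matrix A_w = [alpha_ij].
A_w = np.array([[LINGUISTIC_WEIGHT[t] for t in row] for row in ratings])

# Step 2: weighted normalized SFS parameter matrix N_w with
# rho_ij = alpha_ij / sum_i(alpha_ij^2), the assumed reading of eq. (26).
N_w = A_w / (A_w ** 2).sum(axis=0)

print(np.round(N_w, 3))
```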
Then the steps of the proposed approach have been executed to find the best alternative(s) as follows.

Step 1: The weighted SFS parameter matrix Aw is formulated on the basis of equation (25), as shown in (35).

Step 2: The weighted normalized SFS parameter matrix Nw is computed by using equation (26), as shown in (36).

Step 3: By using equation (27), the weight vector of the given attributes is computed as in (37).

Step 4: For each decision-maker DMi, i = 1 to 6, and their corresponding SFS-decision matrices, we obtain an SFS-topology on Σ as in (38). Thus, the collection {DM1, DM2, DM3, DM4, DM5, DM6} gives an SFS-topology on Σ.

Step 5: The aggregated SFS matrix DMAgg is obtained by using equation (28) and summarized in (39).

Step 6: The weighted SFS-decision matrix B is obtained by using equation (29) and written as in (40).

Step 7: From the weighted matrix B and equations (30) and (31), the ideal solutions SFSV+ and SFSV− are obtained as in (41).

Step 8: For each p = 1, 2, 3, 4, the SFS-separation measurements Edp+ and Edp− are calculated by using equations (32) and (33), as in (42).

Step 9: Using equation (34), the SFS-closeness coefficients Ĉp are computed for each p = 1, 2, 3, 4, giving (43).

Step 10: Based on the values of Ĉp, the ordering of the given alternatives is obtained as in (44), which corresponds to the ranking ς1 > ς2 > ς3 > ς4. Thus, we conclude that the international company should select the student ς1 as its candidate.
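Since the numerical matrices (35)–(44) are not reproduced in this text, the core computations of Steps 7–10 can be illustrated with a short, self-contained sketch on hypothetical weighted spherical fuzzy triples. The ideal-solution convention used here (component-wise best and worst grades per attribute) is an assumption, because equations (30)–(31) are not shown above; the separation and closeness formulas follow eqs. (32)–(34) as given.

```python
import numpy as np

# Hypothetical weighted SFS-decision matrix B: B[p, q] = (mu, eta, theta) for alternative p, attribute q.
B = np.array([
    [[0.42, 0.10, 0.20], [0.38, 0.12, 0.25], [0.45, 0.08, 0.18]],
    [[0.35, 0.15, 0.30], [0.40, 0.10, 0.22], [0.33, 0.14, 0.28]],
    [[0.30, 0.18, 0.35], [0.28, 0.20, 0.33], [0.36, 0.12, 0.26]],
    [[0.25, 0.22, 0.40], [0.30, 0.18, 0.30], [0.27, 0.20, 0.34]],
])

mu, eta, theta = B[..., 0], B[..., 1], B[..., 2]

# Step 7 (assumed convention): positive ideal takes the best grade per attribute,
# negative ideal takes the worst; eqs. (30)-(31) are not reproduced in the text above.
pos_ideal = np.stack([mu.max(axis=0), eta.min(axis=0), theta.min(axis=0)], axis=-1)
neg_ideal = np.stack([mu.min(axis=0), eta.max(axis=0), theta.max(axis=0)], axis=-1)

# Step 8: SFS-separation measurements, eqs. (32)-(33): sum over attributes of squared component differences.
E_plus = ((B - pos_ideal) ** 2).sum(axis=(1, 2))
E_minus = ((B - neg_ideal) ** 2).sum(axis=(1, 2))

# Step 9: SFS-closeness coefficients, eq. (34).
C = E_minus / (E_plus + E_minus)

# Step 10: rank alternatives by decreasing closeness coefficient (1-based indices).
ranking = np.argsort(-C) + 1
print("closeness:", np.round(C, 3), "ranking (best first):", ranking)
```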
## 6. Comparison Analysis

In this section, the proposed algorithm is compared with an existing algorithm (Algorithm 1: decision-making based on the adjustable soft discernibility matrix) [27]. Since the optimal solution of the study discussed in Section 5.2 obtained with Algorithm 1 is also "ς1," the proposed algorithm based on the group decision-making method and the extension of the TOPSIS approach is comparable to the previously known method, which validates its reliability and dependability.

The advantages of the work presented in the earlier sections can be summarized as follows:

(i) Topological structures on fuzzy soft sets are used in a variety of applications, including medical diagnosis, decision-making, pattern recognition, and image processing.

(ii) The SFSS is one of the most generalized versions of the fuzzy soft set, and it is arguably more realistic, practical, and accurate.

(iii) Introducing a topology on SFSSs is highly important in both theoretical and practical scenarios.

(iv) When dealing with group decision-making problems involving SFSSs, the proposed algorithm is more reliable and expressive.

## 7. Conclusions

The spherical fuzzy soft set is the most generalized version of all other existing fuzzy soft set models.
This newest concept is more precise, accurate, and sensible, and the resulting models are thus capable of solving a wide range of problems more deftly and practically. In this paper, we probed into certain basic aspects of spherical fuzzy soft topological spaces. The SFS-topology is developed by using the notions of SFS-union and SFS-intersection. The paper has also provided fundamental definitions pertaining to the SFS-topology, including SFS-subspace, SFS-point, SFS-nbd, SFS-basis, SFS-interior, SFS-closure, SFS-boundary, and SFS-exterior, and on the basis of these definitions we have proven a few theorems. Further, SFS-separation axioms are presented by using the concepts of SFS-points, SFS-closed sets, and SFS-open sets, on the basis of which an algorithm is also proposed as an application to the group decision-making method. The model is presented as an extension of the TOPSIS approach as well. A numerical example is used to illustrate the efficiency of the proposed algorithm.

In the future, we will explore algebraic properties of SFSSs and investigate their applications in decision making, medical diagnosis, clustering analysis, pattern recognition, and information science. The relationship between SFSSs and T-SFSSs, as well as the algebraic and topological structures of T-SFSSs, can also be studied as future work.

---

*Source: 1007133-2022-04-26.xml*
2022
# The Coal Pillar Width Effect of Principal Stress Deflection and Plastic Zone Form of Surrounding Rock Roadway in Deep Excavation

**Authors:** Ji Li; Rongguang Zhang; Xubo Qiang
**Journal:** Geofluids (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1007222

---

## Abstract

In order to explore the influence of coal pillar width on the principal stress deflection and plastic zone form of the surrounding rock in deep roadway excavation, and taking the transportation roadway of the 11030 working face of Zhaogu No. 2 Coal Mine as the engineering background, theoretical analysis, numerical simulation, and field detection were used to study the coal pillar width effect on principal stress deflection and plastic zone form, and the plastic zone form of the surrounding rock of the 11030 transportation roadway was detected and verified in the field. The results show that the maximum principal stress of the surrounding rock deflects toward the vertical direction after roadway excavation. The coal pillar width effect of principal stress deflection on both sides of the roadway roof and floor and inside the coal pillar is more obvious than that in the middle of the roof and floor, at the coal pillar edge, and at the coal wall. The deflection of the principal stress affects the morphological distribution of the plastic zone of the surrounding rock, so the coal pillar width effect on the plastic zone of the roof and the two sides is more obvious than that on the floor. The principal stress deflection of the roadway surrounding rock is highly consistent with the maximum damage depth of the plastic zone, and the drilling peep results of the surrounding rock are basically consistent with the form characteristics of the plastic zone in the numerical simulation. On this basis, a surrounding rock reinforcement support scheme for the transportation roadway of the 11030 working face was proposed.

---

## Body

## 1. Introduction

With the increase of coal seam mining depth and mining intensity in China, and under the comprehensive influence of production geological conditions and other factors, the deformation and damage of the surrounding rock in deep roadways with different widths of roadway-protecting coal pillars show different distribution characteristics. Under the influence of mining and coal pillar width, the direction of the stress field of the roadway surrounding rock deflects, resulting in a change of the form distribution of the surrounding rock plastic zone, which in turn affects the stability of the roadway surrounding rock [1–3]. Therefore, studying the influence of roadway-protecting coal pillar width on the principal stress deflection and plastic zone form of the surrounding rock in deep roadway excavation has important guiding significance for the stability control of the surrounding rock in deep roadways.

By using the methods of theoretical analysis and numerical simulation, documents [4, 5] gave calculation formulas for the failure width of supporting coal pillars under different failure criteria, revealed the stress distribution characteristics of the working face, and provided a new idea for selecting the coal pillar width and roadway layout. Literature [6] showed that the larger the width of the reserved coal pillar, the smaller the proportion of the plastic zone, the vertical stress, and the horizontal deformation in the coal pillar, and the higher the stability of the coal pillar itself, which helps maintain the stability of the roadway surrounding rock.
Literature [7] studied the variation law of vertical stress in the surrounding rock and coal pillar width along the open cut tunnel and combined with the limit equilibrium theory of coal body, and the optimized design of coal pillar width along the open cut tunnel was carried out. Literatures [8–10] studied an integrated detection system, established an ultrasonic model, and coupled it with a mechanical model, and the results of the research can predict whether the excavation damage zone, stress distribution, stress rotation, and ultrasonic velocity evolution of the roadway are consistent with the actual situation in the field. Literatures [11–13] proposed the characteristic radii of the plastic zone in the horizontal axis, longitudinal axis, and medial axis according to the damage boundary characteristics of the plastic zone of the roadway, through theoretical calculations and other research methods, in order to reflect the shape characteristics of the plastic zone, evaluate the potential hazard location of the roadway enclosure and the critical point of the roadway dynamic hazard evaluation based on the characteristic radii. The literatures [14, 15], based on mining rock mechanics, focused on the failure behavior and deformation mechanism of rocks with large burial depths and initially established the surrounding rock stress gradient failure theory and research results to provide the theoretical basis and technical support for the future development of deep mineral resources. The above results studied the influence law of coal pillar width on its own stability, roadway deformation, and the size of surrounding rock stress field and displacement field, respectively. However, the related research results on the deflection of the main stress and the form of the plastic zone in the surrounding rock of deep roadway excavation under different widths of coal pillars of the protection roadway are less. Therefore, this paper used numerical simulation to study the characteristics of main stress deflection and plastic zone form distribution of deep roadway excavation surrounding rocks under different widths of coal pillar protecting the roadway and discovered the coal pillar width effect of main stress deflection and plastic zone form and conducted theoretical analysis on the influence of coal pillar width effect on the stability of roadway surrounding rocks. On this basis, the detection and verification of the damage zone of the surrounding rock was carried out in the transporting roadway of 11030 working face of Zhaogu No. 2 Coal Mine, and the corresponding countermeasures for the control of the surrounding rock were proposed. ## 2. Deformation Characteristics of the Surrounding Rock in Deep Roadway Excavation under Different Coal Pillar Widths ### 2.1. Deformation Characteristics’ Analysis of the Surrounding Rock In order to study the deformation characteristics of the surrounding rock, which is in deep roadway under different widths of coal pillars of roadway protection, five roadways under different widths of coal pillars of roadway protection were selected through field research, and the deformation characteristics of the roadway surrounding rock after excavation were analyzed, as shown in Table1.Table 1 Deformation characteristics of surrounding rock in roadway excavation. 
| Roadway | General information | Deformation features |
|---|---|---|
| Wulihou coal mine lower group coal tape transport roadway | Average depth of coal seam 550 m; straight wall semi-circular arch roadway: 4.74 m and 4.3 m (net width and net height), wall height 2 m, arch height 2.3 m; 5 m coal pillar | Maximum convergence of the roof and floor is about 45 mm; maximum convergence of the two sides is about 38 mm |
| Transportation roadway of 11030 working face in Zhaogu No. 2 coal mine | Average depth of coal seam 700 m; rectangular roadway: 4.8 m and 3.3 m (width and height); 8 m coal pillar | Maximum sinking of the roof is about 428 mm; maximum displacement of the two sides is about 270 mm |
| 7608 return air roadway of Wuyang coal mine of Lu'an group | Average depth of coal seam 750 m; rectangular roadway: 5.4 m and 3.2 m (width and height); 15 m coal pillar | Maximum sinking of the roof is about 200 mm; maximum displacement of the two sides is about 700 mm |
| Air return roadway in No. 9 mining area of Chensilou coal mine | Average depth of coal seam 900 m; straight wall semi-circular arch roadway: 4.2 m and 4.6 m (net width and net height), wall height 2.4 m, arch height 2.2 m; 13 m coal pillar | Maximum convergence of the roof and floor is about 200 mm; maximum convergence of the two sides is about 100 mm |
| Transportation roadway in No. 7 mining area of Zhaolou coal mine | Average depth of coal seam 910 m; flat-topped domed roadway: 5 m and 4.5 m (net width and net height), upper arc height 2 m, flat top 3 m; 70 m coal pillar | Maximum sinking of the roof is about 579 mm; maximum displacement of the two sides is about 600 mm |

Through the above case analysis, it can be seen that under different coal pillar widths, the deformation of the roadway surrounding rock shows differential distribution characteristics, but the coal pillar width and the amount of surrounding rock deformation are not directly correlated; that is, a larger coal pillar width does not necessarily mean a smaller deformation of the roadway surrounding rock. Moreover, the deformation of the surrounding rock at different locations of the same roadway section shows non-uniform distribution characteristics. For example, in case 5, the maximum deformation of the roadway is 600 mm with a 70 m coal pillar (convergence of the two sides); in case 1, the maximum deformation of the roadway is 45 mm with a 5 m coal pillar (roof subsidence); and in case 2, the deformation of the roadway differs at different locations of the same section with an 8 m coal pillar (maximum roof subsidence of 428 mm and maximum convergence of the two sides of 270 mm). The essence of the above phenomenon is that the width of the coal pillar affects the distribution of the plastic zone of the surrounding rock in the roadway.

### 2.2. Factors Influencing the Deformation of the Surrounding Rock

The width of the coal pillar protecting the roadway is an important factor influencing the deformation and damage of the roadway after excavation. The width of the coal pillar not only affects the magnitude of the surrounding rock stress field but also deflects the direction of the surrounding rock stress field, which changes the form of the plastic zone of the surrounding rock (as shown in Figure 1) [16–19]. Under different pillar widths, the surrounding rock of the mining roadway after excavation presents differential deformation characteristics; in essence, the surrounding rock forms a different plastic zone.
Therefore, this paper will focus on the deflection characteristics of the principal stress of the surrounding rock and the morphological distribution characteristics of the plastic zone under the condition of retaining different roadway pillar widths.

Figure 1 Deformation and failure mechanism of surrounding rock in roadway excavation.
## 3. Effect of Coal Pillar Width on Deflection of Principal Stress and Plastic Zone Form of Surrounding Rock in Deep Roadway Excavation

### 3.1. Establishment of Numerical Model

According to the production geological conditions of the transportation roadway in the 11030 working face of Zhaogu No. 2 coal mine, the FLAC3D numerical simulation software was used to build a model with a length of 250 m, a width of 50 m, and a height of 42 m, with a model grid cell size of 0.5 m, as shown in Figure 2.

Figure 2 Numerical calculation model.

Displacement constraints were applied to the top and bottom of the model in the vertical direction, displacement constraints in the horizontal direction were applied around the model, and the initial ground stresses were applied to the model based on the rock formation loads at a burial depth of 700 m, rock laboratory tests, and the measured ground stresses in the adjacent mine area: 15.30 MPa for the vertical stress, and 29.36 MPa (along the x-axis direction) and 16.82 MPa (along the y-axis direction) for the horizontal stresses. The model uses the Mohr-Coulomb constitutive model, and the physical and mechanical parameters of each rock layer are shown in Table 2.

Table 2 Rock physical and mechanical parameters.

| Rock formation | Internal friction angle (°) | Cohesion (MPa) | Density (kg/m³) | Shear modulus (GPa) | Bulk modulus (GPa) | Uniaxial tensile strength (MPa) |
|---|---|---|---|---|---|---|
| Sandstone | 25 | 2.8 | 1500 | 4.8 | 5.4 | 1.5 |
| Mudstone | 30 | 5.24 | 2200 | 5.04 | 8.82 | 1.48 |
| Coal seam | 34 | 5.36 | 2500 | 4.54 | 10.44 | 2.6 |
| Sandy mudstone | 38 | 16 | 2700 | 9.0 | 10.2 | 7.5 |
| Limestone | 35 | 7.9 | 2300 | 5.3 | 8.9 | 8.71 |

The plastic zone distribution characteristics of the roadway surrounding rock were simulated for coal pillar widths of 4 m, 6 m, 8 m, 10 m, 12 m, 14 m, 16 m, 18 m, and 20 m, respectively, and measurement lines for the principal stress direction were arranged around the excavated roadway to study the principal stress deflection characteristics. Each measurement line is 40 m long and arranged along the dip of the coal seam, with an interval of 0.5 m between lines; 31 measurement lines are arranged in total, as shown in Figure 3.

Figure 3 Layout of measuring lines in the principal stress direction of the roadway surrounding rock.
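The next subsection analyzes the angle between the maximum principal stress and the horizontal direction. As a minimal, illustrative sketch (not the FLAC3D post-processing used by the authors), the angle can be obtained from a plane stress state by eigen-decomposition of the stress tensor; the first example uses the far-field values quoted above, while the second stress state is hypothetical.

```python
import numpy as np

def max_principal_angle_deg(sxx: float, szz: float, txz: float) -> float:
    """Angle (degrees, 0-90) between the maximum principal stress and the horizontal
    direction for a plane (x-z) stress state, via eigen-decomposition."""
    stress = np.array([[sxx, txz],
                       [txz, szz]])
    vals, vecs = np.linalg.eigh(stress)      # eigenvalues in ascending order
    v = vecs[:, np.argmax(vals)]             # eigenvector of the maximum principal stress
    return float(np.degrees(np.arctan2(abs(v[1]), abs(v[0]))))

# Far-field stress state from the model (MPa): maximum principal stress horizontal.
print(round(max_principal_angle_deg(29.36, 15.30, 0.0), 1))   # 0.0 degrees

# Hypothetical stress state near the excavation (MPa): deflection toward the vertical.
print(round(max_principal_angle_deg(18.0, 26.0, 6.0), 1))     # roughly 62 degrees
```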
### 3.2. Coal Pillar Width Effect of Principal Stress Deflection of Roadway Surrounding Rock

According to the stress field imposed in the numerical simulation, the maximum principal stress is 29.36 MPa, along the horizontal direction, and the minimum principal stress is 15.30 MPa, along the vertical direction; the following analysis is therefore based on the angle between the maximum principal stress and the horizontal direction. Figure 4 shows the contour clouds of the angle between the maximum principal stress and the horizontal direction in the stress field of the roadway surrounding rock under different coal pillar widths. It can be seen from the figure that the distribution characteristics of the maximum principal stress direction in the roadway surrounding rock change significantly under different coal pillar widths. For the roof, the maximum principal stress is mainly in the horizontal direction and the minimum principal stress is mainly in the vertical direction; with the increase of the coal pillar width, the maximum principal stress of the overlying rocks near the coal pillar side and the coal wall side of the roof tends to deflect toward the vertical direction, while the direction of the maximum principal stress of the overlying rocks in the middle of the roof shows no obvious deflection. For the coal pillar side, with the increase of the coal pillar width, the direction of the maximum principal stress deflects obviously: when the coal pillar width is 4 m~14 m, the maximum principal stress of the whole coal pillar is mainly in the vertical direction and the minimum principal stress is mainly in the horizontal direction; when the coal pillar width is 16 m~20 m, the maximum principal stress within 4 m from the edge of the roadway is mainly in the vertical direction, while the maximum principal stress within 4~6 m from the edge of the roadway is mainly in the horizontal direction. For the coal wall, the maximum principal stress direction does not change significantly with the coal pillar width; the maximum principal stress at the edge of the coal wall is mainly in the vertical direction, while that in the deep part of the coal wall side is mainly in the horizontal direction. For the floor, when the coal pillar width is 4 m, the maximum principal stress is mainly in the vertical direction near the pillar side and in the horizontal direction near the coal wall side; when the coal pillar width is 6 m~20 m, the maximum principal stress is mainly in the horizontal direction, and with the increase of the coal pillar width, its direction is not significantly deflected.

Figure 4 Angle cloud charts of the maximum principal stress with respect to the horizontal direction in the surrounding rock (left side is the coal pillar, right side is the coal wall); panels (a)–(i) correspond to coal pillar widths of 4 m, 6 m, 8 m, 10 m, 12 m, 14 m, 16 m, 18 m, and 20 m.

Figure 5 shows the curves of the angle between the maximum principal stress and the horizontal direction at different locations of the roadway roof, floor, and two sides under different coal pillar widths. From the above analysis, the 4 m and 6 m coal pillars are in a plastic damage state, so no further analysis is made for them here.
By comparing the variation characteristics of the maximum principal stress direction in Figure 5 with the direction of the original rock stress field, the variation law of the maximum principal stress direction in the roadway surrounding rock with the coal pillar width is obtained, as shown in Table 3. (According to the deflection angle, three deflection degrees are defined: ① weak: ≤30°; ② medium: 30°~60°; and ③ strong: ≥60°.)

Figure 5 Variation curves of the maximum principal stress direction of the surrounding rock; panels (a)–(d) correspond to the roof, coal pillar, coal wall, and floor.

Table 3 The maximum principal stress deflection law of the surrounding rock in roadway excavation.

| Surrounding rock location | Deflection pattern |
|---|---|
| Roof | Continuous deflection toward the vertical direction, but the degree of deflection is weak. Degree of deflection angle change: both sides of the roof > middle of the roof. |
| Coal pillar | Deflection toward the vertical direction, with the degree of deflection decreasing; from the edge of the coal pillar to the middle of the coal pillar, the degree of deflection shows a "strong-medium-weak" transition. Degree of deflection angle change: middle of the coal pillar > edge of the coal pillar. |
| Floor | Continuous deflection toward the vertical direction, with medium deflection at 1.5 m~2 m below both sides of the floor and weak deflection at other depths and in the middle of the floor. Degree of deflection angle change: both sides of the floor > middle of the floor. |
| Coal wall | All positions deflect toward the vertical direction, and the degree of deflection does not change significantly; from the edge of the coal wall to the deep part of the solid coal, the degree of deflection shows a "strong-medium-weak" transition. Degree of deflection angle change: middle of the coal wall > edge of the coal wall. |

From the above analysis, it can be seen that the maximum principal stresses in the stress field of the roadway surrounding rock are deflected toward the vertical direction, but the deflection at different locations of the surrounding rock has different sensitivities to the coal pillar width, resulting in variability of the coal pillar width effect of the principal stress deflection at different locations.

(1) For the roof and floor, the maximum principal stress deflection at the positions of the roof and floor near the two sides of the roadway is more sensitive to the coal pillar width. With the increase of the coal pillar width, the maximum principal stress deflection angle changes obviously (maximum change of 20°) on both sides of the roof and floor, while the change in the middle position is smaller (maximum change of 8°). Therefore, the maximum principal stress deflection in the roof and floor near the edges of the two sides has an obvious coal pillar width effect, while the coal pillar width effect in the middle position is weaker.

(2) For the coal pillar, the maximum principal stress deflection inside the coal pillar is more sensitive to the change of the coal pillar width. With the increase of the coal pillar width, the change of the maximum principal stress deflection angle inside the coal pillar is obvious (maximum change of 70°), while the change at the edge position is relatively small (maximum change of 12°).
Therefore, the maximum principal stress deflection inside the coal pillar has obvious coal pillar width effect, while the coal pillar width effect of edge position is weaker(3) For the coal wall, the sensitivity of the maximum principal stress deflection of the coal wall to the coal pillar width is weaker. With the increase of coal pillar width, the change of coal wall maximum principal stress deflection angle is small (maximum change 15°). Therefore, the coal pillar width effect of coal wall maximum principal stress deflection is weaker ### 3.3. Coal Pillar Width Effect of Plastic Zone Form of Roadway Surrounding Rock Figure6 shows the distribution of the plastic zone form in the surrounding rock of the roadway under different coal pillar widths obtained by numerical simulation. For the convenience of analysis, two indicators, plastic zone maximum damage depth and plastic zone maximum damage depth location, are defined to characterize the plastic zone form, where the plastic zone maximum damage depth location is expressed by the angle between the plastic zone maximum damage depth boundary and the centerline of the top and bottom plates of the roadway and the two sides of the gang, and the counterclockwise direction is specified as positive. From Figure 6, it can be seen that with the increase of coal pillar width, the plastic zone form of the roadway surrounding rock changes to different degrees. For the roof, the maximum damage depth of plastic zone is 2.75 m, when the width of coal pillar is 4 m, and the position of the maximum damage depth (10°~32°) is close to the roof of coal pillar side. When the width of coal pillar is 6 m~20 m, the maximum damage depth of plastic zone decreases to 2.25 m and remains unchanged, but its position is deflected to clockwise direction (solid coal side) when the width of coal pillar is 6 m~14 m, and it is not deflected when the width of coal pillar is 16 m~20 m. The form of the plastic zone is symmetric about the centerline of the roof. For the coal pillar, when the width of coal pillar is 4 m~6 m, the whole coal pillar is in the plastic damage state; when the width of coal pillar is 8 m~20 m, the coal pillar is no longer in the plastic damage state completely, and the maximum damage depth of the plastic zone does not change significantly, which is about 1.75 m, but the position of the maximum damage depth of the plastic zone continues to deflect in the clockwise direction (roof direction) with the increase of the width of coal pillar. For the coal wall, when the width of coal pillar is 4 m, the maximum damage depth of plastic zone is 1.75 m, and the position of maximum break depth is 0°~36°. When the width of coal pillar is 6 m~14 m, the maximum damage depth of plastic zone decreases to 1.5 m and remains unchanged, but its position deflects significantly in the counterclockwise direction (roof direction). When the width of coal pillar is 16 m~20 m, the distribution of the plastic zone does not change significantly. For the floor, the maximum damage depth of the plastic zone does not change with the increase of coal pillar width, which is 3 m, and the location of the maximum damage depth also does not change significantly, and the form of the plastic zone is approximately symmetrical about the center line of the floor.Figure 6 The form distribution of plastic zone of roadway surrounding rock. (left side is coal pillar, right side is coal wall). 
(a) 4 m coal pillar(b) 6 m coal pillar(c) 8 m coal pillar(d) 10 m coal pillar(e) 12 m coal pillar(f) 14 m coal pillar(g) 16 m coal pillar(h) 18 m coal pillar(i) 20 m coal pillarFrom the analysis of Figure7 and Table 4, it can be seen that the form of plastic zone in different locations of the roadway surrounding rock shows different sensitivity to the change of coal pillar width, which leads to the variability of the coal pillar width effect of the form of plastic zone in different locations of the roadway surrounding rock.Figure 7 Location of maximum failure depth in plastic zone of roadway surrounding rock.Table 4 The variation law of the maximum failure depth position in plastic zone pillar. RoofWidth of coal pillarDeflection directionDeflection angle4 m~8 mCoal wall10°~13.5°8 m~12 mCoal wall2°~2.5°12 m~16 mCoal pillar1°~1.5°16 m~20 mNo change0°Coal pillarWidth of coal pillarDeflection directionDeflection angle4 m~12 mRoof0.5°~8.5°12 m~16 mRoof10°~13°16 m~18 mNo change0°18 m~20 mRoof4°FloorWidth of coal pillarDeflection directionDeflection angle4 m~8 mCoal pillar0.5°~6.5°8 m~10 mNo change0°10 m~14 mCoal pillar0.5°~1.5°14 m~20 mNo change0°Coal wallWidth of coal pillarDeflection directionDeflection angle4 m~6 mFloor12°6 m~10 mRoof10°~12°10 m~14 mRoof3°~5°14 m~20 mNo change0°From the above analysis, it can be seen that the plastic zone form at different locations of the roadway surrounding rock shows different sensitivities to the changes of coal pillar width, which leads to the variability of the coal pillar width effect of the plastic zone form at different locations of the roadway surrounding rock.(1) For the roof, when the width of coal pillar is less than 8 m, the sensitivity of the top plastic zone form to the change of coal pillar width is stronger, and with the increase of coal pillar width, the maximum damage depth of plastic zone decreases significantly, and the position of the maximum damage depth of plastic zone continues to deflect toward the coal wall, and the top plastic zone form has obvious coal pillar width effect. When the width of coal pillar is greater than 8 m, the sensitivity of roof plastic zone form to coal pillar width change is weak, with the increase of coal pillar width, the maximum damage depth of plastic zone does not change obviously, its position change is small (maximum change 2.5°), and the coal pillar width effect of roof plastic zone form is weak(2) For two sides of roadway, the form of plastic zone of two sides is sensitive to the change of coal pillar width. With the increase of coal pillar width, the maximum damage depth of plastic zone of coal pillar wall first decreases and then remains unchanged, and its position continues to deflect towards the roof. When the coal pillar width is less than 12 m, the maximum damage depth of the plastic zone of the coal wall first decreases and then remains unchanged, and its position first deflects to the floor direction and then to the roof direction. When the coal pillar width is greater than 12 m, the maximum damage depth and position of the plastic zone do not change significantly. The plastic zone form of two sides of roadway has obvious coal pillar width effect(3) For the floor, the form of the plastic zone of the floor is less sensitive to the change of the coal pillar width. 
(3) For the floor, the plastic zone form is less sensitive to the change of coal pillar width. With the increase of the coal pillar width, the maximum damage depth of the plastic zone does not change significantly and its position changes only slightly (maximum change 6.5°), so the coal pillar width effect of the floor plastic zone form is weak (see the brief aggregation sketch below).
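One way to make this sensitivity ranking explicit is to total the absolute deflection of the maximum-damage-depth position over the whole 4 m to 20 m range for each location, using the interval values listed in Table 4. The totals below take the upper bound of each quoted range; this is a crude aggregation introduced here only to illustrate the comparison, not a calculation reported in the study.

```python
# Upper-bound deflection of the maximum-damage-depth position per pillar-width
# interval, taken from Table 4 (degrees).
table4 = {
    "roof":        [13.5, 2.5, 1.5, 0.0],
    "coal pillar": [8.5, 13.0, 0.0, 4.0],
    "floor":       [6.5, 0.0, 1.5, 0.0],
    "coal wall":   [12.0, 12.0, 5.0, 0.0],
}
for location, angles in table4.items():
    print(f"{location:11s} total position deflection ~ {sum(angles):.1f} deg")
```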
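The analyses in Sections 3.2 and 3.3, as well as the comparison in Section 4.1 below, are all expressed through the angle between the maximum principal stress and the horizontal direction. As a minimal sketch, assuming the stress state at a measurement point is available as plane components in the roadway cross-section, this deflection angle can be obtained from an eigen-decomposition; the function name and interface here are illustrative only.

```python
import numpy as np

def deflection_from_horizontal(sxx, syy, sxy):
    """Angle (degrees, 0-90) between the maximum principal stress and the
    horizontal axis for a plane stress state in the roadway cross-section.
    Compression-positive components are assumed, so "maximum" means the
    algebraically largest (most compressive) principal value.
    """
    stress = np.array([[sxx, sxy],
                       [sxy, syy]], dtype=float)
    values, vectors = np.linalg.eigh(stress)   # eigenvalues in ascending order
    v = vectors[:, np.argmax(values)]          # direction of the maximum principal stress
    angle = abs(np.degrees(np.arctan2(v[1], v[0]))) % 180.0
    return min(angle, 180.0 - angle)           # fold into [0, 90]

# Far-field state applied to the model: 29.36 MPa horizontal, 15.30 MPa vertical.
print(deflection_from_horizontal(29.36, 15.30, 0.0))   # 0.0  (still horizontal)
print(deflection_from_horizontal(15.30, 29.36, 0.0))   # 90.0 (fully deflected)
```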
## 4. Analysis of the Stability of the Surrounding Rock Based on the Coal Pillar Width Effect

### 4.1. Mechanism of Coal Pillar Width Effect Formation

For the stability of the roadway surrounding rock, the key is to ensure the stability of the roadway roof and the coal pillar.
Therefore, based on the previous research results and taking the roadway roof and the coal pillar side as the research objects, the variation laws of the deflection angle of the principal stress direction and of the position of the maximum failure depth of the plastic zone are analyzed for different coal pillar widths, and the formation mechanism of the coal pillar width effect in the plastic zone of the roadway surrounding rock is revealed.

Figure 8 shows the curves of the deflection angle of the principal stress direction and of the position of the maximum damage depth in the plastic zone of the roadway roof and coal pillar at different coal pillar widths (the counterclockwise direction is specified as positive). From the figure, it can be seen that the position of the maximum damage depth in the plastic zone of the roadway surrounding rock and the deflection angle of the principal stress direction follow approximately the same trend: as the coal pillar width increases, the position of the maximum damage depth and the direction of the principal stress in the plastic zone of the floor and coal pillar both deflect in the clockwise direction.

Figure 8. Curve of maximum failure depth position and principal stress direction in the plastic zone of the roadway surrounding rock. (a) Roof; (b) coal pillar.

After the roadway is excavated, the coal pillar width effect produces an obvious deflection of the principal stress direction in the surrounding rock; this deflection changes the maximum damage depth and location of the plastic zone, which produces a differential distribution of the plastic zone form, so that the shape of the plastic zone itself exhibits a coal pillar width effect, as shown in Figure 9.

Figure 9. Formation mechanism of the coal pillar width effect.
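The observation above that the two curves in Figure 8 follow approximately the same trend can be checked quantitatively with a simple correlation. The sketch below uses placeholder arrays standing in for the deflection angles and maximum-damage-depth positions read off Figure 8 for one location; the numbers are illustrative only and are not values reported in this study.

```python
import numpy as np

# Placeholder series indexed by coal pillar width; the values are illustrative
# and would in practice be read from Figure 8 for the roof or the coal pillar.
widths = np.array([8, 10, 12, 14, 16, 18, 20])                              # m
stress_deflection = np.array([24.0, 22.0, 20.0, 15.0, 12.0, 11.0, 10.0])    # deg
zone_position = np.array([30.0, 27.0, 25.0, 18.0, 14.0, 13.0, 12.0])        # deg

r = np.corrcoef(stress_deflection, zone_position)[0, 1]
print(f"Pearson correlation across pillar widths: r = {r:.2f}")
```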
### 4.2. The Influence of the Coal Pillar Width Effect on the Stability of the Surrounding Rock

According to the above study, under the influence of the coal pillar width effect, the principal stress direction and the plastic zone form at different positions of the surrounding rock change to different degrees after roadway excavation. For the roof, the deflection of the principal stress direction near the two sides of the roof is significantly greater than at the central position, and the damage depth of the plastic zone differs considerably between positions, so the stability of the surrounding rock on the two sides of the roof differs from that at the center. For the coal pillar side, the deflection of the principal stress direction in the middle of the pillar is significantly greater than at its upper and lower parts, and the damage depth of the plastic zone also varies with position, so the stability of the surrounding rock in the middle of the pillar differs markedly from that at its upper and lower parts. Therefore, for roadways with different coal pillar widths, different control measures must be adopted at different positions of the roof and the coal pillar to ensure the stability of the surrounding rock. Taking the transportation roadway of the 11030 working face of Zhaogu No. 2 Mine as an example, the stability of the roadway surrounding rock is analyzed below in combination with the coal pillar width effect, which provides basic guidance for surrounding rock stability control.

For the roof, the maximum principal stress is mainly in the horizontal direction, but, affected by the coal pillar width effect, the maximum principal stress on both sides of the roof tends to rotate toward the vertical direction, and the maximum damage depth of the plastic zone is located in the middle of the roof. Therefore, the stability of the surrounding rock in the middle of the roof is poorer than that on the two sides, and to prevent roof-fall accidents the middle of the roadway roof needs to be reinforced and supported after excavation is completed. For the coal pillar, the maximum principal stress is mainly in the vertical direction, but, affected by the coal pillar width effect, the maximum principal stress at the upper and lower parts of the pillar rotates significantly toward the horizontal direction, and the maximum damage depth of the plastic zone deflects toward the roof. The comprehensive analysis shows that the stability of the upper and middle parts of the coal pillar is worse than that of the lower part, so to prevent failure and instability of the pillar, the surrounding rock of the middle and upper parts of the coal pillar needs to be reinforced after excavation.

## 5. Engineering Case

The transportation roadway of the 11030 working face in Zhaogu No. 2 Mine was excavated along the coal seam roof, and an 8 m coal pillar was left between it and the 11011 mined-out area (Figure 10). The roadway was designed as a rectangular section of 4.8 m × 3.3 m (width × height). During excavation, the roadway surrounding rock underwent nonuniform large deformation and the section contraction was serious: the maximum roof subsidence was about 428 mm, and the maximum displacement of the two sides was about 270 mm.

Figure 10. Layout plan of the transportation roadway along the 11030 working face.
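As a rough quantitative check on how serious this section contraction is, the quoted peak displacements can be compared with the designed section dimensions. This is a simple back-of-envelope calculation introduced here for illustration; it assumes the quoted maxima can be referred to the full design height and width.

```python
# Designed section and peak displacements quoted above for the 11030 roadway.
width_mm, height_mm = 4800, 3300
roof_subsidence_mm = 428
sides_closure_mm = 270

print(f"vertical convergence ratio:   {roof_subsidence_mm / height_mm:.1%}")  # about 13%
print(f"horizontal convergence ratio: {sides_closure_mm / width_mm:.1%}")     # about 6%
```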
### 5.1. Detection of Surrounding Rock Damage Areas

In order to assess the surrounding rock damage of the 11030 working face transportation roadway and compare it with the numerical simulation results (Figure 6(c)), the JL-IDOI (A) intelligent borehole TV imager was used to observe the damage of the roadway surrounding rock. Three boreholes were arranged in each of the roof, the floor, and the two sides at 260 m from the opening of the roadway in the 11030 working face, giving 12 boreholes in total, as shown in Figure 11; the borehole diameter was 32 mm and the borehole depth was 4 m.

Figure 11. Borehole layout plan of the transportation roadway in the 11030 working face.

Figure 12 compares the borehole observation results for the surrounding rock of the 11030 transportation roadway with the plastic zone form from the numerical simulation. The damage area of the roadway surrounding rock is asymmetric: the damage depth of the strata in the middle of the roof is large, the coal in the middle of the coal pillar is seriously broken, the damage depth of the strata near the roof of the coal wall is the largest, and the damage in the floor strata is distributed relatively uniformly. The nonuniform morphological characteristics of the plastic zone of the roadway surrounding rock are basically consistent with the borehole observation results.

Figure 12. Comparison of the damage range of the surrounding rock in the 11030 transportation roadway with the numerical simulation results. (a) Borehole observation results; (b) numerical simulation results.

### 5.2. Roadway Support Design

Combining the field observation results with the previous research results, it can be seen that the section convergence of the 11030 transportation roadway during excavation was serious and the surrounding rock deformation showed obviously nonuniform characteristics. Therefore, considering the influence of later mining activities on the roadway, the middle of the roof is reinforced and supported with lengthened anchor bolts and high-strength threaded-steel bolts to prevent roof falls [20–22], and the two sides are reinforced with additional high-strength threaded-steel bolts to prevent excessive deformation and convergence of the sides. The specific design parameters are shown in Figure 13.

Figure 13. Section diagram of the support design parameters for the 11030 working face transportation roadway.

### 5.3. Support Effect

In order to verify the stability of the roadway surrounding rock after reinforcement, a measuring station was arranged at 260 m from the opening of the 11030 transportation roadway to monitor the roadway surface displacement. The monitoring results show that, within 0 to 28 d after the roadway was reinforced and supported, the maximum roof-to-floor deformation was 268 mm and the maximum deformation of the two sides was 123 mm; after 28 d the displacement of the surrounding rock remained basically unchanged, with no significant further increase, indicating that the roadway surrounding rock tended to be stable. The monitoring results of the roadway surface displacement are shown in Figure 14.

Figure 14. Change of surface displacement of the roadway.
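The stabilization judgement made here (deformation essentially unchanged after about 28 d) can be automated for a convergence-station time series. The sketch below is a minimal illustration assuming daily cumulative readings; the function name, window, tolerance, and the synthetic series are all assumptions, not monitoring data from the site.

```python
import numpy as np

def is_stabilised(displacement_mm, window=7, tol_mm_per_day=1.0):
    """Return True when the mean daily increment of a cumulative displacement
    series over the last `window` days falls below `tol_mm_per_day`."""
    d = np.asarray(displacement_mm, dtype=float)
    increments = np.diff(d[-(window + 1):])
    return float(np.mean(increments)) < tol_mm_per_day

# Synthetic series shaped like the trend described above: rapid growth up to
# about 268 mm within 28 d, then essentially flat.
roof_to_floor = np.concatenate([np.linspace(0.0, 268.0, 29), np.full(14, 268.0)])
print(is_stabilised(roof_to_floor))        # True once the curve has flattened
print(is_stabilised(roof_to_floor[:20]))   # False while still converging
```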
## 6. Conclusion

(1) The deformation of the surrounding rock of a deep roadway under different coal pillar widths does not show a direct correlation between pillar width and deformation (that is, a wider coal pillar does not necessarily mean a smaller deformation of the roadway surrounding rock), and the deformation at different locations of the same roadway section is nonuniformly distributed.

(2) In the stress field of the surrounding rock of a deep roadway after excavation, the maximum principal stress deflects toward the vertical direction, but the deflection at different positions of the surrounding rock has different sensitivities to the coal pillar width. The coal pillar width effect of the principal stress deflection is obvious on both sides of the roof and floor and inside the coal pillar, and weak in the middle of the roof and floor, at the coal pillar edge, and in the coal wall.

(3) The plastic zone form of the surrounding rock of a deep roadway after excavation shows differential distribution characteristics as the coal pillar width changes, and the plastic zone form at different positions of the roadway has different sensitivities to the coal pillar width: the coal pillar width effect of the plastic zone form is obvious for the roadway roof and the two sides and weak for the floor.

(4) The principal stress in the surrounding rock of a deep roadway deflects to varying degrees after excavation, which affects the form of the plastic zone of the surrounding rock. The position of the maximum damage depth of the plastic zone changes approximately in step with the principal stress deflection, and the coal pillar width effect therefore influences the stability of the surrounding rock to different degrees at different positions of the roadway.

---
*Source: 1007222-2022-03-18.xml*
--- ## Abstract In order to explore the influence of coal pillar width on the principal stress deflection and plastic zone form of surrounding rock in deep roadway excavation, taking 11030 working face transportation roadway of Zhaogu No. 2 Coal Mine as engineering background, theoretical analysis, numerical simulation, and field detection were used to study the effect of coal pillar width on principal stress deflection and plastic zone form and field detection and verification of plastic zone form of surrounding rock in 11030 transportation roadway. The results show that the maximum principal stress is deflected in the vertical direction, which in roadway surrounding rock excavation. The coal pillar width effect of principal stress deflection on both sides of roadway roof and floor and inside coal pillar are more obvious than that of middle roof and floor, coal pillar edge and coal wall position. The deflection of the principal stress affects the morphological distribution of the plastic zone of the surrounding rock, which led to the width effect of coal pillar in roof, and two sides plastic zone are more obvious than that in floor. The principal stress deflection of roadway surrounding rock is highly consistent with the maximum damage depth of plastic zone, and at the same time, the drilling peep results of surrounding rock are basically consistent with the form characteristics of plastic zone in numerical simulation. On this basis, the surrounding rock reinforcement support scheme of 11030 working face transportation roadway was proposed. --- ## Body ## 1. Introduction With the increase of coal seam mining depth and mining intensity in China, and the comprehensive influence of production geological conditions and other factors, the deformation and damage of surrounding rock in deep roadways with different width of roadway protection coal pillars show different distribution characteristics. Under the influence of mining and coal pillar width, the stress field direction of roadway surrounding rock will deflect, resulting in the change of form distribution of surrounding rock plastic zone, and then affect the stability of roadway surrounding rock [1–3]. Therefore, the influence of roadway pillar width on principal stress deflection and plastic zone form of surrounding rock in deep roadway excavation is studied, which has important guiding significance for the stability control of surrounding rock in deep roadway.By using the methods of theoretical analysis and numerical simulation, documents [4, 5] gave the calculation formula of failure width of supporting coal pillar under different failure criteria, revealed the stress distribution characteristics of working face and provided a new idea for retaining coal pillar width and roadway layout. Literature [6] studies showed that the larger the width of the reserved coal pillar, the smaller the proportion of plastic zone form, vertical stress, and horizontal deformation in the coal pillar, and the higher the stability of the coal pillar itself, which was helpful to maintain the stability of roadway surrounding rock. Literature [7] studied the variation law of vertical stress in the surrounding rock and coal pillar width along the open cut tunnel and combined with the limit equilibrium theory of coal body, and the optimized design of coal pillar width along the open cut tunnel was carried out. 
Literatures [8–10] studied an integrated detection system, established an ultrasonic model, and coupled it with a mechanical model, and the results of the research can predict whether the excavation damage zone, stress distribution, stress rotation, and ultrasonic velocity evolution of the roadway are consistent with the actual situation in the field. Literatures [11–13] proposed the characteristic radii of the plastic zone in the horizontal axis, longitudinal axis, and medial axis according to the damage boundary characteristics of the plastic zone of the roadway, through theoretical calculations and other research methods, in order to reflect the shape characteristics of the plastic zone, evaluate the potential hazard location of the roadway enclosure and the critical point of the roadway dynamic hazard evaluation based on the characteristic radii. The literatures [14, 15], based on mining rock mechanics, focused on the failure behavior and deformation mechanism of rocks with large burial depths and initially established the surrounding rock stress gradient failure theory and research results to provide the theoretical basis and technical support for the future development of deep mineral resources. The above results studied the influence law of coal pillar width on its own stability, roadway deformation, and the size of surrounding rock stress field and displacement field, respectively. However, the related research results on the deflection of the main stress and the form of the plastic zone in the surrounding rock of deep roadway excavation under different widths of coal pillars of the protection roadway are less. Therefore, this paper used numerical simulation to study the characteristics of main stress deflection and plastic zone form distribution of deep roadway excavation surrounding rocks under different widths of coal pillar protecting the roadway and discovered the coal pillar width effect of main stress deflection and plastic zone form and conducted theoretical analysis on the influence of coal pillar width effect on the stability of roadway surrounding rocks. On this basis, the detection and verification of the damage zone of the surrounding rock was carried out in the transporting roadway of 11030 working face of Zhaogu No. 2 Coal Mine, and the corresponding countermeasures for the control of the surrounding rock were proposed. ## 2. Deformation Characteristics of the Surrounding Rock in Deep Roadway Excavation under Different Coal Pillar Widths ### 2.1. Deformation Characteristics’ Analysis of the Surrounding Rock In order to study the deformation characteristics of the surrounding rock, which is in deep roadway under different widths of coal pillars of roadway protection, five roadways under different widths of coal pillars of roadway protection were selected through field research, and the deformation characteristics of the roadway surrounding rock after excavation were analyzed, as shown in Table1.Table 1 Deformation characteristics of surrounding rock in roadway excavation. RoadwayGeneral informationScene photosDeformation featuresWulihou coal mine lower group coal tape transport roadwayAverage depth of coal seam 550 m;straight wall semi-circular arch roadway: 4.74 m and 4.3 m (net width and net height), wall height 2 m, arch height 2.3 m;5 m coal pillar.Maximum shifting in of the top and bottom plates is about 45 mm;the maximum shifting in of the two gangs is about 38 mm.Transportation roadway of 11030 working face in Zhaogu No. 
2 coal mineAverage depth of coal seam 700 m;rectangular roadway: 4.8 m and 3.3 m (width and height);8 m coal pillar.Maximum sinking of the roof is about 428 mm;maximum displacement of two sides is about 270 mm.7608 return air roadway of Wuyang coal mine of Lu’an groupAverage depth of coal seam 750 m;rectangular roadway: 5.4 m and 3.2 m (width and height);15 m coal pillar.Maximum sinking of the roof is about 200 mm;maximum displacement of two sides is about 700 mm.Air return roadway in No. 9 mining area of Chensilou coal mineAverage depth of coal seam 900 m;straight wall semi-circular arch roadway: 4.2 m and 4.6 m (net width and net height), wall height 2.4 m, arch height 2.2 m;13 m coal pillar.Maximum shifting in of the top and bottom plates is about 200 mm;the maximum shifting in of the two gangs is about 100 mm.Transportation roadway in No. 7 mining area of Zhaolou coal mineAverage depth of coal seam 910 m;flat-topped domed roadway: 5 m and 4.5 m (net width and net height), upper arc height 2 m, flat top 3 m;70 m coal pillar.Maximum sinking of the roof is about 579 mm;maximum displacement of two sides is about 600 mm.Through the above case analysis, it can be seen that under different widths of coal pillar, the deformation of the roadway surrounding rock shows differential distribution characteristics, but the width of coal pillar and the amount of surrounding rock deformation does not show a direct correlation, that is, the larger the width of coal pillar, the smaller the amount of deformation of the roadway surrounding rock. And the deformation of the surrounding rock at different locations of the same roadway section shows nonuniform distribution characteristics. For example, in case 5, the maximum deformation of the roadway is 600 mm when 70 m coal pillar is left (two gang convergence), in case 1, the maximum deformation of the roadway is 45 mm when 5 m coal pillar is left (roof subsidence), and in case 2, the deformation of the roadway is different at different locations of the same section when 8 m coal pillar is left (maximum subsidence of roof is 428 mm, and maximum two gang convergence is 270 mm). The essence of the above phenomenon is that the width of the coal pillar will affect the distribution of the plastic zone of the surrounding rock in the roadway. ### 2.2. Factors Influencing the Deformation of the Surrounding Rock The width of the coal pillar protecting the roadway is an important factor, which influencing the deformation and damage of the roadway excavation. The width of the coal pillar not only affects the size of the surrounding rock stress field but also deflects the direction of the surrounding rock stress field, which changes the form of the plastic zone of the surrounding rock (as shown in Figure1) [16–19]. Under different pillar widths, the surrounding rock of mining roadway excavation presents differential deformation characteristics. The essence is that the surrounding rock forms different plastic zone. Therefore, this paper will focus on the deflection characteristics of the principal stress of the surrounding rock and the morphological distribution characteristics of the plastic zone under the condition of retaining different roadway pillar widths.Figure 1 Deformation and failure mechanism of surrounding rock in roadway excavation. ## 2.1. 
Deformation Characteristics’ Analysis of the Surrounding Rock In order to study the deformation characteristics of the surrounding rock, which is in deep roadway under different widths of coal pillars of roadway protection, five roadways under different widths of coal pillars of roadway protection were selected through field research, and the deformation characteristics of the roadway surrounding rock after excavation were analyzed, as shown in Table1.Table 1 Deformation characteristics of surrounding rock in roadway excavation. RoadwayGeneral informationScene photosDeformation featuresWulihou coal mine lower group coal tape transport roadwayAverage depth of coal seam 550 m;straight wall semi-circular arch roadway: 4.74 m and 4.3 m (net width and net height), wall height 2 m, arch height 2.3 m;5 m coal pillar.Maximum shifting in of the top and bottom plates is about 45 mm;the maximum shifting in of the two gangs is about 38 mm.Transportation roadway of 11030 working face in Zhaogu No. 2 coal mineAverage depth of coal seam 700 m;rectangular roadway: 4.8 m and 3.3 m (width and height);8 m coal pillar.Maximum sinking of the roof is about 428 mm;maximum displacement of two sides is about 270 mm.7608 return air roadway of Wuyang coal mine of Lu’an groupAverage depth of coal seam 750 m;rectangular roadway: 5.4 m and 3.2 m (width and height);15 m coal pillar.Maximum sinking of the roof is about 200 mm;maximum displacement of two sides is about 700 mm.Air return roadway in No. 9 mining area of Chensilou coal mineAverage depth of coal seam 900 m;straight wall semi-circular arch roadway: 4.2 m and 4.6 m (net width and net height), wall height 2.4 m, arch height 2.2 m;13 m coal pillar.Maximum shifting in of the top and bottom plates is about 200 mm;the maximum shifting in of the two gangs is about 100 mm.Transportation roadway in No. 7 mining area of Zhaolou coal mineAverage depth of coal seam 910 m;flat-topped domed roadway: 5 m and 4.5 m (net width and net height), upper arc height 2 m, flat top 3 m;70 m coal pillar.Maximum sinking of the roof is about 579 mm;maximum displacement of two sides is about 600 mm.Through the above case analysis, it can be seen that under different widths of coal pillar, the deformation of the roadway surrounding rock shows differential distribution characteristics, but the width of coal pillar and the amount of surrounding rock deformation does not show a direct correlation, that is, the larger the width of coal pillar, the smaller the amount of deformation of the roadway surrounding rock. And the deformation of the surrounding rock at different locations of the same roadway section shows nonuniform distribution characteristics. For example, in case 5, the maximum deformation of the roadway is 600 mm when 70 m coal pillar is left (two gang convergence), in case 1, the maximum deformation of the roadway is 45 mm when 5 m coal pillar is left (roof subsidence), and in case 2, the deformation of the roadway is different at different locations of the same section when 8 m coal pillar is left (maximum subsidence of roof is 428 mm, and maximum two gang convergence is 270 mm). The essence of the above phenomenon is that the width of the coal pillar will affect the distribution of the plastic zone of the surrounding rock in the roadway. ## 2.2. Factors Influencing the Deformation of the Surrounding Rock The width of the coal pillar protecting the roadway is an important factor, which influencing the deformation and damage of the roadway excavation. 
The width of the coal pillar not only affects the size of the surrounding rock stress field but also deflects the direction of the surrounding rock stress field, which changes the form of the plastic zone of the surrounding rock (as shown in Figure1) [16–19]. Under different pillar widths, the surrounding rock of mining roadway excavation presents differential deformation characteristics. The essence is that the surrounding rock forms different plastic zone. Therefore, this paper will focus on the deflection characteristics of the principal stress of the surrounding rock and the morphological distribution characteristics of the plastic zone under the condition of retaining different roadway pillar widths.Figure 1 Deformation and failure mechanism of surrounding rock in roadway excavation. ## 3. Effect of Coal Pillar Width on Deflection of Principal Stress and Plastic Zone Form of Surrounding Rock in Deep Roadway Excavation ### 3.1. Establishment of Numerical Model According to the production geological conditions of transportation roadway in 11030 working face of Zhaogu No. 2 coal mine, FLAC3D numerical simulation software was used to build a model with a length of 250 m, a width of 50 m, and a height of 42 m, with a model grid cell size of 0.5 m, as shown in Figure 2.Figure 2 Numerical calculation model.Displacement constraints were applied to the top and bottom of the model in the vertical direction, and displacement constraints in the horizontal direction were applied around the model, and the initial ground stresses were applied to the model based on the rock formation loads at a burial depth of 700 m, rock laboratory test, and the measured ground stresses in the adjacent mine area: 15.30 MPa for vertical stresses, 29.36 MPa (along thex-axis direction), and 16.82 MPa (along the y-axis direction) for horizontal stresses. The model is used Mohr-Coulomb, and the physical and mechanical parameters of each rock layer are shown in Table 2.Table 2 Rock physical and mechanical parameters table. Rock formationInternal friction angle (°)Cohesion (MPa)Density (kg/m3)Shear modulus (GPa)Bulk modulus (GPa)Uniaxial tensile strength (MPa)Sandstone252.815004.85.41.5Mudstone305.2422005.048.821.48Coal seam345.3625004.5410.442.6Sandy mudstone381627009.010.27.5Limestone357.923005.38.98.71The plastic zone distribution characteristics of the roadway surrounding rock were simulated when the width of coal pillar was 4 m, 6 m, 8 m, 10 m, 12 m, 14 m, 16 m, 18 m, and 20 m, respectively, and the main stress direction measurement lines were arranged around the excavated roadway to study the main stress deflection characteristics. The length of each measurement line is 40 m, arranged along the tendency of coal seam, and the interval of measurement line is 0.5 m, and 31 measurement lines are arranged in total, as shown in Figure3.Figure 3 Layout of measuring line in principal stress direction of roadway surrounding rock. ### 3.2. Coal Pillar Width Effect of Principal Stress Deflection of Roadway Surrounding Rock According to the stress field imposed by numerical simulation, it is known that the maximum principal stress is 29.36 MPa, along the horizontal direction; the minimum principal stress is 15.30 MPa, along the vertical direction, and the following studies are analyzed by the angle between the maximum principal stress and the horizontal direction. 
Figure4 shows the contour cloud of the maximum principal stress and the angle between the horizontal directions in the stress field of the roadway enclosure under different coal column widths. From the figure, it can be seen that the distribution characteristics of the maximum principal stress direction in the roadway surrounding rock change significantly under different coal column widths. For the roof, the maximum principal stress is mainly in the horizontal direction, and the minimum principal stress is mainly in the vertical direction; with the increase of the width of the coal pillar, the maximum principal stress of the overlying rocks near the coal pillar side and the coal wall side of the top slab has a tendency to deflect in the vertical direction, and the direction of the maximum principal stress of the overlying rocks in the middle of the top slab has no obvious deflection. For the coal pillar wall, with the increase of the width of the coal pillar, the direction of the maximum principal stress is obviously deflected, when the width of the coal pillar is 4 m~14 m, the maximum principal stress of the whole coal pillar is mainly in the vertical direction, and the minimum principal stress is mainly in the horizontal direction, when the width of the coal pillar is 16 m~20 m, the maximum principal stress within 4 m from the edge of the roadway is mainly in the vertical direction, and the maximum principal stress within 4~6 m from the edge of the roadway is mainly in the horizontal direction. For the coal wall, the maximum principal stress direction does not change significantly with the change of coal pillar width, and the maximum principal stress at the edge of the coal wall is mainly in the vertical direction, while the maximum principal stress at the deep part of the coal wall gang is mainly in the horizontal direction. For the floor, when the width of coal pillar is 4 m, the maximum principal stress is mainly in the vertical direction near the pillar side and in the horizontal direction near the coal wall side; when the width of coal pillar is 6 m~20 m, the maximum principal stress is mainly in the horizontal direction, and with the increase of the width of coal pillar, the direction of the maximum principal stress is not significantly deflected.Figure 4 Angle cloud chart of maximum principal stress and horizontal direction of surrounding rock. (left side is coal pillar, right side is coal wall). (a) 4 m coal pillar(b) 6 m coal pillar(c) 8 m coal pillar(d) 10 m coal pillar(e) 12 m coal pillar(f) 14 m coal pillar(g) 16 m coal pillar(h) 18 m coal pillar(i) 20 m coal pillarFigure5 shows the curves of the maximum principal stresses, which are at different locations of road roof and floor and two surrounding rocks under different coal pillar widths with respect to the angle in the horizontal direction. From the above analysis, it can be seen that the 4 m and 6 m coal pillars are in plastic damage state, so no further analysis will be made here. By comparing the variation characteristics of the maximum principal stress direction in Figure 5 with the direction of the original rock stress field, it is obtained that the variation law of the maximum principal stress direction in the roadway surrounding rock area with the coal pillar width is shown in Table 3. (According to the deflection angle, three deflection degrees are defined: ① Weak: ≤30°; ② Medium: 30°≤60°; and ③ Strong: ≥60°).Figure 5 Variation curve of maximum principal stress direction of surrounding rock. 
(a) roof(b) coal pillar(c) coal wall(d) floorTable 3 The maximum principal stress deflection law of surrounding rock in roadway excavation. Surrounding rock locationDeflection patternRoofContinuous deflection in the vertical direction, but the degree of deflection is weak.Degree of deflection angle change: bothsidesoftheroof>middleoftheroof.Coal pillarThe deflection is in the vertical direction, and the degree of deflection is decreasing.The degree of deflection from the edge of the coal pillar to the middle of the coal pillar shows “strong-medium-weak” transition.Degree of deflection angle change: middleofcoalpillar>edgeofcoalpillar.FloorContinuous deflection in the vertical direction, with medium deflection at the position of 1.5 m~2 m under both sides of the floor, and weak deflection at other depth positions and the middle of the floor.Degree of deflection angle change: both sides of the floor > the middle of the floor.All deflected in the vertical direction and the degree of deflection did not change significantly.Coal wallThe degree of deflection from the edge of the coal wall to the deep part of the solid coal shows “strong-medium-weak” transition.Degree of deflection angle change: middleofcoalwall>edgeofcoalwall.From the above analysis, it can be seen that the maximum principal stresses in the stress field of the roadway enclosure area are deflected in the vertical direction, but the deflection of the maximum principal stresses at different locations in the roadway enclosure has different sensitivities to the width of the coal pillar, resulting in the variability of the coal pillar width effect of the deflection of the principal stresses at different locations in the roadway enclosure.(1) For the roof and floor, the sensitivity of the maximum principal stress deflection at the position of the roof and floor near the two sides of the roadway to the width of the coal column is stronger. With the increase of coal pillar width, the maximum principal stress deflection angle changes obviously (maximum change of 20°) on both sides of the roof and floor, and the change is smaller (maximum change of 8°) in the middle position. Therefore, the maximum principal stress deflection in the roof and floor near the edge of the two gangs has an obvious coal pillar width effect, while the coal pillar width effect in the middle position is weaker(2) For the coal pillar, the sensitivity of the maximum principal stress deflection inside the coal pillar to the change of the coal pillar width is stronger. With the increase of coal pillar width, the change of maximum principal stress deflection angle inside the coal pillar is obvious (maximum change of 70°), and the change of edge position is relatively small (maximum change of 12°). Therefore, the maximum principal stress deflection inside the coal pillar has obvious coal pillar width effect, while the coal pillar width effect of edge position is weaker(3) For the coal wall, the sensitivity of the maximum principal stress deflection of the coal wall to the coal pillar width is weaker. With the increase of coal pillar width, the change of coal wall maximum principal stress deflection angle is small (maximum change 15°). Therefore, the coal pillar width effect of coal wall maximum principal stress deflection is weaker ### 3.3. 
### 3.3. Coal Pillar Width Effect of the Plastic Zone Form of the Roadway Surrounding Rock

Figure 6 shows the distribution of the plastic zone in the roadway surrounding rock for different coal pillar widths obtained by numerical simulation. For convenience of analysis, two indicators are defined to characterize the plastic zone form: the maximum damage depth of the plastic zone and the location of that maximum damage depth. The location is expressed as the angle between the plastic zone boundary point of maximum damage depth and the centerline of the roof, floor, or the corresponding side, with the counterclockwise direction taken as positive. Figure 6 shows that the plastic zone form of the surrounding rock changes to different degrees as the coal pillar width increases. For the roof, when the pillar width is 4 m, the maximum damage depth of the plastic zone is 2.75 m and its location (10°~32°) is close to the pillar-side roof. For widths of 6 m~20 m, the maximum damage depth decreases to 2.25 m and then remains unchanged; its location deflects clockwise (toward the solid coal side) for widths of 6 m~14 m and does not deflect for widths of 16 m~20 m. The form of the plastic zone is symmetric about the centerline of the roof. For the coal pillar, when the width is 4 m~6 m, the whole pillar is in a plastic damage state; for widths of 8 m~20 m, the pillar is no longer completely in a plastic damage state, the maximum damage depth of the plastic zone remains about 1.75 m, and its location continues to deflect clockwise (toward the roof) as the width increases. For the coal wall, when the pillar width is 4 m, the maximum damage depth of the plastic zone is 1.75 m and its location is 0°~36°; for widths of 6 m~14 m, the maximum damage depth decreases to 1.5 m and remains unchanged, while its location deflects markedly counterclockwise (toward the roof); for widths of 16 m~20 m, the plastic zone distribution does not change significantly. For the floor, the maximum damage depth of the plastic zone, 3 m, does not change with the pillar width, its location also does not change significantly, and the plastic zone is approximately symmetric about the floor centerline.

Figure 6 Distribution of the plastic zone of the roadway surrounding rock (left side is the coal pillar, right side is the coal wall). (a)–(i) coal pillar widths of 4 m to 20 m in 2 m increments.
From the analysis of Figure 7 and Table 4, the plastic zone form at different locations of the roadway surrounding rock shows different sensitivities to the change of coal pillar width, which produces the variability of the coal pillar width effect of the plastic zone form at different locations.

Figure 7 Location of the maximum failure depth in the plastic zone of the roadway surrounding rock.

Table 4 Variation law of the location of the maximum failure depth in the plastic zone under different coal pillar widths.

| Surrounding rock location | Width of coal pillar | Deflection direction | Deflection angle |
| --- | --- | --- | --- |
| Roof | 4 m~8 m | Coal wall | 10°~13.5° |
| Roof | 8 m~12 m | Coal wall | 2°~2.5° |
| Roof | 12 m~16 m | Coal pillar | 1°~1.5° |
| Roof | 16 m~20 m | No change | 0° |
| Coal pillar | 4 m~12 m | Roof | 0.5°~8.5° |
| Coal pillar | 12 m~16 m | Roof | 10°~13° |
| Coal pillar | 16 m~18 m | No change | 0° |
| Coal pillar | 18 m~20 m | Roof | 4° |
| Floor | 4 m~8 m | Coal pillar | 0.5°~6.5° |
| Floor | 8 m~10 m | No change | 0° |
| Floor | 10 m~14 m | Coal pillar | 0.5°~1.5° |
| Floor | 14 m~20 m | No change | 0° |
| Coal wall | 4 m~6 m | Floor | 12° |
| Coal wall | 6 m~10 m | Roof | 10°~12° |
| Coal wall | 10 m~14 m | Roof | 3°~5° |
| Coal wall | 14 m~20 m | No change | 0° |

Specifically:

(1) For the roof, when the coal pillar width is less than 8 m, the roof plastic zone form is strongly sensitive to the change of pillar width: as the width increases, the maximum damage depth of the plastic zone decreases significantly and its location keeps deflecting toward the coal wall, so the roof plastic zone form has an obvious coal pillar width effect. When the width is greater than 8 m, the sensitivity is weak: the maximum damage depth does not change obviously and its location changes only slightly (maximum change of 2.5°), so the coal pillar width effect of the roof plastic zone form is weak.

(2) For the two sides of the roadway, the plastic zone form is sensitive to the change of coal pillar width. As the width increases, the maximum damage depth of the plastic zone in the coal pillar side first decreases and then remains unchanged, and its location keeps deflecting toward the roof. For the coal wall, when the pillar width is less than 12 m, the maximum damage depth first decreases and then remains unchanged, and its location deflects first toward the floor and then toward the roof; when the width is greater than 12 m, neither the maximum damage depth nor its location changes significantly. The plastic zone form of the two sides therefore has an obvious coal pillar width effect.

(3) For the floor, the plastic zone form is less sensitive to the change of coal pillar width.
With the increase of the coal pillar width, the maximum damage depth of the floor plastic zone does not change significantly, and its location changes only slightly (maximum change of 6.5°), so the coal pillar width effect of the floor plastic zone form is weak.
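The two plastic-zone indicators defined above (maximum damage depth and its angular location relative to the relevant centerline) are straightforward to extract from a sampled plastic-zone boundary. The sketch below is illustrative only; the boundary points are hypothetical and are not taken from Figure 6.

```python
def plastic_zone_indicators(boundary):
    """boundary: list of (angle_deg, depth_m) samples of the plastic-zone boundary,
    where angle_deg is measured from the centerline of the roof/floor or of a side
    (counterclockwise positive).  Returns the two indicators used in the text:
    the maximum damage depth and the angle at which it occurs."""
    angle_at_max, max_depth = max(boundary, key=lambda p: p[1])
    return max_depth, angle_at_max

# Hypothetical roof boundary samples for one pillar width.
roof_boundary = [(-40, 1.6), (-20, 2.0), (0, 2.1), (15, 2.25), (30, 1.9), (45, 1.4)]
depth, angle = plastic_zone_indicators(roof_boundary)
print(f"max damage depth {depth} m at {angle} deg from the roof centerline")
```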
## 4. Analysis of the Stability of the Surrounding Rock Based on the Coal Pillar Width Effect

### 4.1. Mechanism of Coal Pillar Width Effect Formation

For the stability of the roadway surrounding rock, the key is to ensure the stability of the roadway roof and the coal pillar.
Therefore, based on the preceding results and taking the roadway roof and the coal pillar side as the research objects, the variation of the deflection angle of the principal stress direction and of the location of the maximum failure depth of the plastic zone with coal pillar width is analyzed, and the formation mechanism of the coal pillar width effect on the plastic zone of the roadway surrounding rock is revealed. Figure 8 shows the curves of the deflection angle of the principal stress direction and of the location of the maximum damage depth of the plastic zone for the roadway roof and coal pillar at different coal pillar widths (the counterclockwise direction is taken as positive). The location of the maximum damage depth of the plastic zone and the deflection angle of the principal stress direction follow approximately the same trend: as the coal pillar width increases, both the location of the maximum damage depth and the principal stress direction in the roof and the coal pillar deflect in the clockwise direction.

Figure 8 Curves of the location of the maximum failure depth of the plastic zone and of the principal stress direction of the roadway surrounding rock. (a) roof; (b) coal pillar.

The coal pillar width effect of the principal stress deflection after roadway excavation causes an obvious deflection of the principal stress direction in the surrounding rock; this changes the maximum damage depth and location of the plastic zone, which produces the differential distribution of the plastic zone form and, in turn, the coal pillar width effect on the shape of the plastic zone, as shown in Figure 9.

Figure 9 Formation mechanism of the coal pillar width effect.
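One way to make the "approximately the same trend" observation in Figure 8 quantitative is to correlate the two angle series across pillar widths, for example with a rank correlation. This is a minimal sketch, not part of the original analysis; it assumes scipy is available, and the angle values are hypothetical rather than read from Figure 8.

```python
from scipy.stats import spearmanr

# Hypothetical series over pillar widths of 8-20 m (degrees, counterclockwise positive).
widths = [8, 10, 12, 14, 16, 18, 20]
stress_deflection = [2, 5, 9, 15, 22, 26, 28]        # principal stress direction
plastic_zone_position = [1, 3, 8, 12, 18, 22, 24]    # location of max damage depth

rho, p_value = spearmanr(stress_deflection, plastic_zone_position)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")  # rho close to 1 => same trend
```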
### 4.2. The Influence of the Coal Pillar Width Effect on the Stability of the Surrounding Rock

According to the above study, and influenced by the coal pillar width effect, the principal stress direction and the plastic zone form at different positions of the surrounding rock change to different degrees after roadway excavation. For the roof, the deflection of the principal stress direction near the two sides of the roof is significantly greater than at the central position, and the damage depth of the plastic zone differs considerably between positions, so the stability of the surrounding rock differs between the two sides and the center of the roof. For the coal pillar side, the deflection of the principal stress direction in the middle of the pillar is significantly greater than at its upper and lower parts, and the damage depth of the plastic zone also differs between positions, so the stability of the surrounding rock differs markedly between the middle and the upper and lower parts of the pillar. Therefore, for roadways with different coal pillar widths, different control measures should be adopted at different positions of the roof and the coal pillar to ensure the stability of the surrounding rock. Taking the transportation roadway of the 11030 working face of Zhaogu No. 2 Mine as an example, the stability of the roadway surrounding rock is analyzed in combination with the coal pillar width effect, which provides basic guidance for surrounding rock stability control.

For the roof, the maximum principal stress is mainly horizontal but, affected by the coal pillar width effect, the maximum principal stress on both sides of the roof tends toward the vertical direction, and the maximum damage depth of the plastic zone is located in the middle of the roof. The stability of the surrounding rock in the middle of the roof is therefore poorer than on both sides. To prevent roof fall, the middle of the roadway roof needs to be reinforced and supported after excavation is completed. For the coal pillar, the maximum principal stress is mainly vertical but, affected by the coal pillar width effect, the maximum principal stress at the upper and lower parts of the pillar changes markedly toward the horizontal direction, and the maximum damage depth of the plastic zone deflects toward the roof. The comprehensive analysis shows that the stability of the upper and middle parts of the coal pillar is worse than that of the lower part. To prevent failure and instability of the coal pillar, the surrounding rock of the middle and upper parts of the pillar needs to be reinforced after excavation.
## 5. Engineering Case

The transportation roadway of the 11030 working face in Zhaogu No. 2 Mine was excavated along the coal seam roof, and an 8 m coal pillar was left between it and the 11011 mined-out area (Figure 10). The roadway was designed with a rectangular section of 4.8 m × 3.3 m (width × height). During excavation, the surrounding rock underwent large, nonuniform deformation and severe section contraction: the maximum roof subsidence was about 428 mm and the maximum displacement of the two sides was about 270 mm.

Figure 10 Layout plan of the transportation roadway of the 11030 working face.

### 5.1. Detection of Surrounding Rock Damage Areas
To determine the extent of surrounding rock damage in the 11030 working face transportation roadway and compare it with the numerical simulation results (Figure 6(c)), a JL-IDOI (A) intelligent borehole TV imager was used to inspect the damage of the roadway surrounding rock. Three boreholes were arranged in each of the roof, the floor, and the two sides at 260 m from the opening of the roadway of the 11030 working face, giving 12 boreholes in total, as shown in Figure 11; the borehole diameter was 32 mm and the borehole depth was 4 m.

Figure 11 Borehole layout plan of the transportation roadway of the 11030 working face.

Figure 12 compares the borehole inspection results for the surrounding rock of the 11030 transportation roadway with the plastic zone form from the numerical simulation. The damage area of the roadway surrounding rock is asymmetric: the damage depth of the strata in the middle of the roof is large, the coal in the middle of the coal pillar is seriously broken, the damage depth of the strata near the roof of the coal wall is the largest, and the damaged range of the floor strata is relatively uniform. The nonuniform morphology of the plastic zone of the roadway surrounding rock is basically consistent with the borehole observations.

Figure 12 Comparison of the damage range of the surrounding rock in the 11030 transportation roadway with the numerical simulation results. (a) borehole inspection results; (b) numerical simulation results.

### 5.2. Roadway Support Design

Combining the field inspection results with the preceding analysis, the section convergence of the 11030 transportation roadway during excavation was severe and the deformation of the surrounding rock was clearly nonuniform. Therefore, considering the influence of later mining activities, the middle of the roof is reinforced and supported with extendable bolts and high-strength threaded-steel (rebar) bolts to prevent roof fall [20–22], and the two sides are reinforced with additional high-strength threaded-steel bolts to prevent excessive deformation of the sides. Specific design parameters are shown in Figure 13.

Figure 13 Section diagram of the support design parameters for the 11030 working face transportation roadway.

### 5.3. Support Effect

To verify the stability of the roadway surrounding rock after reinforcement, a monitoring station was arranged 260 m from the opening of the 11030 transportation roadway to monitor the change of the roadway surface displacement. The monitoring results show that the maximum roof-to-floor convergence from 0 to 28 d after reinforcement was 268 mm and the maximum convergence of the two sides was 123 mm; after 28 d the displacement of the surrounding rock remained essentially unchanged, with no significant further increase, indicating that the surrounding rock of the roadway tended to be stable. The monitoring results of the roadway surface displacement are shown in Figure 14.

Figure 14 Change of the surface displacement of the roadway.
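A convergence record like the one summarized above can be screened automatically for stabilization. The following is a minimal sketch under stated assumptions: the daily readings and the rate threshold are hypothetical and are not the monitoring data or criterion used in the study.

```python
def stabilised(displacements_mm, window=5, rate_limit_mm_per_day=1.0):
    """displacements_mm: cumulative convergence readings, one per day.
    The opening is treated as stabilised if the average daily increase over the
    last `window` days stays below `rate_limit_mm_per_day` (illustrative threshold)."""
    if len(displacements_mm) <= window:
        return False
    recent = displacements_mm[-(window + 1):]
    rate = (recent[-1] - recent[0]) / window
    return rate < rate_limit_mm_per_day

# Hypothetical roof-to-floor convergence record (mm), levelling off near ~268 mm.
record = [0, 30, 70, 110, 150, 185, 210, 230, 245, 255, 261, 265, 267, 268, 268, 268, 268]
print(stabilised(record))  # True once the curve flattens out
```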
## 6. Conclusion
(1) The deformation of the surrounding rock of the deep roadway under different coal pillar widths does not show a simple direct relationship between pillar width and deformation (i.e., a wider coal pillar does not necessarily mean smaller roadway deformation), and the deformation at different locations of the same roadway section is nonuniformly distributed.

(2) After excavation of the deep roadway, the maximum principal stress in the surrounding rock stress field deflects toward the vertical direction, but the deflection at different positions of the surrounding rock has different sensitivities to the coal pillar width. The coal pillar width effect of the principal stress deflection is obvious on both sides of the roof and floor and inside the coal pillar, and weak in the middle of the roof and floor, at the edge of the coal pillar, and in the coal wall.

(3) The plastic zone form of the surrounding rock of the deep roadway shows a differential distribution as the coal pillar width changes, and the plastic zone form at different positions of the roadway has different sensitivities to the width change: the coal pillar width effect of the plastic zone form is obvious for the roadway roof and the two sides and weak for the floor.

(4) The principal stresses in the surrounding rock of the deep roadway deflect to varying degrees after excavation, which affects the form of the plastic zone. The location of the maximum damage depth of the plastic zone follows approximately the same trend as the principal stress deflection, so the coal pillar width effect influences the stability of the surrounding rock at different positions of the roadway to different degrees.

---
*Source: 1007222-2022-03-18.xml*
2022
# Microbiota, Prostatitis, and Fertility: Bacterial Diversity as a Possible Health Ally

**Authors:** Jenniffer Puerta Suárez; Walter D. Cardona Maya
**Journal:** Advances in Urology (2021)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2021/1007366

---

## Abstract

Background. Microorganisms have historically been associated with disease, although current knowledge shows that the microbiota present at various anatomical sites is associated with multiple benefits. Objective. This study aimed to evaluate and compare the genitourinary microbiota of patients with chronic prostatitis symptoms and fertile men. Materials and Methods. In this preliminary study, ten volunteers were included: five with symptoms of chronic prostatitis (prostatitis group) and five fertile volunteers asymptomatic for urogenital infections (control group), matched by age. Bacterial diversity analysis was performed using the 16S molecular marker to compare the microbiota present in urine and semen samples from the two groups. Seminal quality, nitric oxide levels, and seminal and serum concentrations of proinflammatory cytokines were quantified. Results. Fertile men presented a greater variety of operational taxonomic units (OTUs) in semen (67.5%) and urine (17.6%) samples than men with chronic prostatitis symptoms. Men with chronic prostatitis symptoms presented a higher concentration of IL-12p70 in seminal plasma. No statistically significant differences were observed in conventional or functional seminal parameters. The species diversity in semen samples was similar in healthy men and prostatitis patients (inverted Simpson index, median 5.3 (5.0–10.7) vs. 4.5 (2.1–7.8), p=0.1508). Nevertheless, the microbiota present in the semen and urine samples of fertile men contained more OTUs. Lower microbial diversity could be associated with chronic prostatitis symptoms. The presence of bacteria in the genitourinary tract is not always associated with disease. Understanding the factors that affect the microbiota could help establish lifestyle habits that prevent chronic prostatitis. Conclusion. Chronic prostatitis does not seem to affect male fertility; however, studies with a larger sample size are required. Our preliminary results strengthen the potential role of greater bacterial diversity as a protective factor against chronic prostatitis.

---

## Body

## 1. Introduction

Chronic prostatitis is a frequent and multicausal condition. The causes of prostatitis include infections [1], mainly bacterial; however, identifying the causative agent is not always possible, and, in addition, the gland can harbour microbiota. With sequencing techniques, it is possible to identify a great variety of microbiota and to study its effects on health and on the immune system [2]. In men with chronic bacterial prostatitis, improvements in symptoms and quality of life and lower antibiotic consumption have been described with the oral administration of Lactobacillus casei [3], suggesting a close relationship between the gastrointestinal and urogenital microbiota [4]. Prostatitis is the most common urological diagnosis in men under 50 years and, after benign prostatic hyperplasia and prostate cancer, the third most common in older men [5]. Infections, immunologic status, urine reflux, and mental stress have been identified as primary causes of the illness [1].
Clinically, prostatitis is classified into four types: (i) acute bacterial prostatitis, (ii) chronic bacterial prostatitis, (iii) chronic pelvic pain syndrome, and (iv) asymptomatic inflammatory prostatitis [6–8]. Chronic bacterial prostatitis is responsible for 5–10% of total prostatitis cases, of which at least 30% are associated with recurrent urinary infections [8]. Up to 90% of patients are classified as having chronic pelvic pain syndrome because the cause cannot be identified. The gold standard for diagnosing prostatitis is the four-glass test [9]; however, at present it is not performed routinely because of the risk of causing bacteremia [7]. Furthermore, this test only detects culturable microorganisms, so new techniques are necessary to identify all the organisms in the gland responsible for the disease. The use of next-generation sequencing (NGS) techniques could improve the understanding of the microbiome [10], especially the dysbiosis caused by prostatitis and its impact on health. Bacterial diversity analysis (metataxonomics) uses the 16S molecular marker (variable regions V3–V4) to assess men's microbiota. Besides, the semen sample has higher sensitivity than expressed prostatic secretion (EPS) for diagnosing chronic bacterial prostatitis [11], and it is also practical and comfortable for the patient. Therefore, this preliminary study aimed to characterize the microbiota of men with prostatitis-like symptoms and its impact on seminal quality.

## 2. Materials and Methods

### 2.1. Study Participants

The protocol and informed consent form were approved by the Bioethics Committee for Research in Humans at the Institute of Medical Research, School of Medicine, University of Antioquia (act number 006), in April 2018. All patients provided written informed consent regarding their participation and the publication of their clinical data. Five volunteers with chronic prostatitis symptoms and five fertile men asymptomatic for urogenital infections were included. All men were generally in good health, without sexually transmitted diseases. None was under antibiotic treatment at the time of sampling. The National Institutes of Health Chronic Prostatitis Symptom Index (NIH-CPSI) [12], translated and validated into Spanish [13], was used to select the volunteers according to the criteria reported by Nickel et al. [5]. Each volunteer gave a semen sample and a midstream urine sample; both samples replaced the prostatic fluid sample obtained through stimulation of the gland through the rectum [14]. In addition, a blood sample was taken by qualified personnel in a nonanticoagulated Vacutainer tube (Becton Dickinson, NJ, USA) to obtain serum. The donors also filled out a survey covering sociodemographic factors, lifestyle, urinary symptoms, and other relevant aspects of sexual and reproductive health to identify factors associated with prostatitis symptoms.

### 2.2. Semen and Seminal Parameters

Prior to semen analysis, all donors were asked to abstain from sexual intercourse or masturbation for 3–5 days before semen collection; samples were delivered to the laboratory within 1 hour of ejaculation.

### 2.3. Conventional Seminal Parameters

After semen liquefaction was completed, each semen sample was analyzed for the conventional parameters sperm motility, vitality, concentration, total sperm concentration, and sperm morphology according to the criteria established by the World Health Organization in the fifth edition of its laboratory manual for the examination and processing of human semen; sperm concentration was evaluated using a Makler chamber [15, 16].
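For readers unfamiliar with the conventional parameters, the total sperm number per ejaculate is simply the concentration multiplied by the semen volume, which can then be checked against a lower reference limit. The sketch below is illustrative only; the sample values are hypothetical, and the reference limit is supplied by the caller rather than asserted here.

```python
def total_sperm_number(concentration_million_per_ml, volume_ml):
    """Total sperm number per ejaculate (millions): concentration x volume."""
    return concentration_million_per_ml * volume_ml

def flag_below_reference(value, lower_reference_limit):
    """True if a seminal parameter falls below a chosen lower reference limit.
    The limit is passed in by the caller (e.g., from the WHO manual used in the study)."""
    return value < lower_reference_limit

# Hypothetical sample: 80 x 10^6/mL in 4 mL of semen.
total = total_sperm_number(80, 4)
print(total, "million per ejaculate")
print(flag_below_reference(total, lower_reference_limit=39))  # illustrative limit
```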
### 2.4. Functional Seminal Parameters

Sperm mitochondrial membrane potential [17], sperm membrane integrity [18], sperm chromatin structure [19], sperm membrane lipoperoxidation [20], and intracellular levels of reactive oxygen species (ROS) [17] were evaluated by flow cytometry (Fortessa, Becton Dickinson, NJ, USA) according to previously established protocols, analyzing between 5,000 and 10,000 sperm cells. Data analysis was carried out with FlowJo 7.6 (TreeStar Inc., Oregon, USA).

### 2.5. Seminal Plasma Total Antioxidant Capacity

For this test, 3 mL of DPPH (2,2-diphenyl-1-picrylhydrazyl) was mixed with 200 μL of the sample. After one hour of incubation, the sample was read in a spectrophotometer (Spectronic 20, Genesys, Rochester, NY, USA) at 515 nm; ascorbic acid was used as a positive control [21].

### 2.6. Prostate-Specific Antigen (PSA)

PSA was quantified using the commercial total PSA kit (DiaMetra, Perugia, Italy) according to the manufacturer's instructions. PSA values greater than 4 ng/mL were considered positive.

### 2.7. Nitric Oxide Quantification

Nitrite quantification was performed using the commercial Griess reagent kit for nitrite determination (Molecular Probes, Oregon, USA) according to the manufacturer's instructions, after deproteinization of the semen and serum samples according to the Serafini method [22].

### 2.8. Cytokine Quantification

Quantification of the cytokines IL-12p70, IL-10, IL-1β, IL-6, IL-8, TNF, IL-2, IL-4, IL-10, IL-17, and IFN-γ was performed using the BD Cytometric Bead Array (CBA) Human Inflammatory Cytokines and Human Th1/Th2/Th17 Cytokine kits (Becton Dickinson, NJ, USA). The analysis was carried out in FlowJo 7.6.

### 2.9. Bacterial Diversity Analysis

DNA extraction was performed using the Stool DNA Isolation Kit (Norgen) to identify the microbiota, and gDNA was quantified with the PicoGreen method. Subsequently, control amplification of the 16S gene was performed using the primers 27F (AGAGTTTGATCCTGGCTCAG) and 1492R (TACGGYTACCTTGTTACGACTT), and a 1500 base pair (bp) fragment was successfully amplified for all samples. For sequencing of the 16S ribosomal gene variable regions V3 and V4, the Bakt_341F (CCTACGGGNGGCWGCAG) and Bakt_805R (GACTACHVGGGTATCTAATCC) oligonucleotides were used. Sequencing was performed on the Illumina MiSeq platform, generating paired-end reads of 300 bases each. Sequence quality analysis and classification were performed in MetaCoMET (Metagenomics Core Microbiome Exploration Tool).

### 2.10. Statistical Analysis

A chi-square test and a Mann–Whitney test were used to compare the dichotomous and numerical variables, respectively, between the groups. The data were analyzed using GraphPad Prism 6.0 (GraphPad, San Diego, CA, USA), and a value of p<0.05 was considered significant.
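The group comparisons reported below were run in GraphPad Prism; purely as an illustration of the same nonparametric test, here is a minimal sketch that assumes scipy is available and uses hypothetical values rather than the study data.

```python
from scipy.stats import mannwhitneyu

# Hypothetical inverted Simpson diversity values for two small groups (n = 5 each).
control = [5.0, 5.3, 6.1, 8.4, 10.7]
prostatitis = [2.1, 3.9, 4.5, 6.2, 7.8]

stat, p_value = mannwhitneyu(control, prostatitis, alternative="two-sided")
print(f"U = {stat}, p = {p_value:.4f}")  # p < 0.05 would be considered significant
```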
## 3. Results

The median (range) age, abstinence period, and body mass index of the fertile group and the prostatitis-like symptoms patients included in the study were 34 (21–44) vs. 34 (21–44) years (p>0.999), 2 (2–5) vs. 3 (2–5) days (p=0.9206), and 26.4 (24.4–30.9) vs. 22.9 (19.7–26.8) kg/m² (p=0.0952), respectively. The semen and urine microbiota of five fertile volunteers and five volunteers with chronic prostatitis symptoms, matched by age and classified using the NIH-CPSI, were compared (Table 1).

Table 1 NIH-CPSI classification of volunteers.

| Domain (score) | Fertile group, control, median (range) | Prostatitis-like symptoms, median (range) | P value |
| --- | --- | --- | --- |
| Pain (0–21) | 0 (0–1) | 11 (7–11) | 0.0079 |
| Urinary symptoms (0–10) | 1 (0–3) | 7 (6–10) | 0.0079 |
| Quality of life impact (0–12) | 0 (0–2) | 4 (2–5) | 0.0159 |
| Total score (0–43) | 1 (0–5) | 20 (20–26) | 0.0079 |

When comparing the conventional and functional semen analyses of prostatitis-like symptoms men and age-matched fertile men, we did not find statistically significant differences (Table 2). In both groups, all conventional seminal parameters were above the WHO lower reference limits [23]. In addition, no differences in serum PSA concentration were observed (Table 2). The median level of IL-12p70 in seminal plasma was significantly higher in volunteers with chronic prostatitis symptoms than in the fertile group; there were no differences in the other cytokine concentrations between the groups (Table 3).

Table 2 Seminal quality and oxidative stress in prostatitis-like symptoms men and fertile men.
Table 2. Seminal quality and oxidative stress in prostatitis-like symptoms men and fertile men.

| Parameter | Fertile group, control, median (range) | Prostatitis-like symptoms, median (range) | P value |
| --- | --- | --- | --- |
| Seminal volume (mL) | 4 (1.5–4.5) | 3 (1.5–7.5) | 0.7460 |
| Progressive motility (%) | 55.5 (41.5–81.0) | 57 (6.0–64.0) | 0.6667 |
| Nonprogressive motility (%) | 6.5 (2.0–16.0) | 6.0 (1.0–25.0) | 0.9524 |
| Immotile spermatozoa (%) | 39.0 (17.0–45.0) | 42.0 (29.0–90.0) | 0.8016 |
| Concentration (10⁶/mL) | 80.0 (40.5–270.0) | 205.0 (7.0–254.0) | 0.9444 |
| Total concentration (10⁶/ejaculate) | 178.2 (115.5–1080.0) | 431.8 (22.4–1538.0) | 0.8016 |
| Viability (%) | 79.0 (77.0–90.0) | 70.0 (49.0–85.0) | 0.1667 |
| Sperm with normal morphology (%) | 4.8 (4.2–8.6) | 4.4 (2.8–6.8) | 0.8016 |
| Teratozoospermia index | 1.4 (1.1–1.5) | 1.2 (1.1–1.5) | 0.2222 |
| High mitochondrial membrane potential (%) | 63.2 (44.2–7.5) | 50.0 (12.3–55.5) | 0.0556 |
| Plasma membrane integrity (%) | 64.8 (48.1–84.4) | 35.8 (6.2–69.7) | 0.1508 |
| ROS production (%) | 64.3 (50.5–86.2) | 55.7 (17.7–61.9) | 0.0556 |
| DNA fragmentation index (%) | 10.6 (10.4–11.4) | 11.1 (10.5–14.3) | 0.3889 |
| Membrane lipid peroxidation (%) | 49.8 (9.1–80.5) | 67.0 (44.9–93.3) | 0.4127 |
| Antioxidant capacity of seminal plasma (%) | 61.0 (45.3–81.4) | 62.0 (9.5–69.3) | 0.5317 |
| Serum nitric oxide (µM) | 3.7 (2.1–13.0) | 3.0 (1.7–4.1) | 0.5000 |
| Plasma nitric oxide (µM) | 1.5 (0.6–1.7) | 0.5 (0.2–1.9) | 0.3016 |
| PSA (ng/mL) | 0.0 (0.0–1.4) | 0.9 (0.0–120.0) | 0.3651 |

Mann–Whitney test. Data indicate median and range.

Table 3. Concentrations of cytokines in seminal plasma and serum samples of the control group and the prostatitis-like symptoms group.

| Cytokine (pg/mL) | Seminal plasma, fertile group | Seminal plasma, prostatitis-like symptoms | Serum, fertile group | Serum, prostatitis-like symptoms |
| --- | --- | --- | --- | --- |
| IL-12p70 | 0^a (0–1.9) | 45.5 (0–252.8) | 0 (0–210.3) | ND |
| IL-10 | 0 (0–0.8) | 1.0 (0–29.4) | 0 (0–199.1) | ND |
| IL-1β | 0 (0–4.5) | 1.6 (0–31.6) | 0 (0–455.7) | ND |
| IL-6 | 2.2 (0–32.7) | 6.0 (01–100.9) | 0 (0–304.2) | ND |
| IL-8 | 1812.0 (997.9–3299) | 1230.0 (684.1–2533) | 24.4 (0–313.5) | 2.2 (0.0–11.5) |
| TNF | 0 (0–7.1) | 47.75 (0–127.5) | 0 (0–50.8) | 0.0 (0.0–1.4) |
| IL-2 | 0 (0–27.09) | 0.0 (0.0–116.3) | 1.0 (0.0–176.1) | 0.0 (0.0–21.9) |
| IL-4 | 0 (0–6.3) | 0 (0–20.25) | 2.0 (0–13.5) | 4.5 (0–10.5) |
| IL-10 | 1.5 (0.0–18.3) | 0.9 (0.0–21.3) | 0.0 (0.0–7.3) | 0 (0–3.1) |
| IL-17 | 5.9 (5.3–10.2) | 8.1 (5.5–48.1) | 10.4 (9.1–24.8) | 17.1 (11.2–53.7) |
| IFN-γ | ND | 0 (0–22.1) | 0 (0–5.3) | 0 (0–27.3) |

Data indicate median (range). ND, not detected. ^a Significantly different from the control group: p<0.05 (Mann–Whitney U test).

When evaluating the genitourinary microbiota, species diversity in semen samples was similar between healthy men and prostatitis patients (inverted Simpson index, median 5.3 (5.0–10.7) vs. 4.5 (2.1–7.8), p=0.1508). Nevertheless, the semen and urine samples of fertile men presented 67.5% and 17.6% more OTUs, respectively, than those of prostatitis-like symptoms volunteers (Figure 1). Men with prostatitis and fertile men shared 144 operational taxonomic units (OTUs). We also found no difference in the urine samples (inverted Simpson index, median 6.2 (4.5–6.8) in controls vs. 4.8 (4.3–12.2) in prostatitis, p=0.8016). Finally, we observed statistically significant differences in 14 OTUs across the different samples of the groups (Figure 2). Figure 1 Venn diagram of OTU overlapping in urine and semen samples. (a) Venn diagram obtained with semen and urine samples from chronic prostatitis symptoms and fertile men showing that both types of samples in both groups share 144 OTUs. (b) Semen and urine samples from volunteers with chronic prostatitis showing a lower number of OTUs than samples from fertile men in the control group. Figure 2 Different OTUs between prostatitis-like symptoms men and fertile men. Semen and urine samples from chronic prostatitis symptoms and fertile men differ statistically in 14 OTUs. Most of these are not detectable by traditional culture techniques, which shows the importance of sequencing in the study of prostate disease.
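As a minimal sketch of the diversity comparison reported above, the code below computes the inverse (inverted) Simpson index, 1 / Σ pᵢ², from per-sample OTU count vectors and compares the two groups with a Mann–Whitney U test via SciPy. The OTU counts are hypothetical toy numbers, not the study's sequencing data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def inverse_simpson(counts):
    """Inverse Simpson diversity: 1 / sum(p_i^2) over OTU relative abundances."""
    counts = np.asarray(counts, dtype=float)
    p = counts / counts.sum()
    return 1.0 / np.sum(p ** 2)

# Hypothetical per-sample OTU count tables (rows = samples, columns = OTUs).
fertile_semen = [[40, 30, 20, 10, 5], [25, 25, 25, 15, 10]]
prostatitis_semen = [[70, 20, 5, 3, 2], [60, 25, 10, 3, 2]]

div_fertile = [inverse_simpson(s) for s in fertile_semen]
div_prostatitis = [inverse_simpson(s) for s in prostatitis_semen]

u_stat, p_value = mannwhitneyu(div_fertile, div_prostatitis, alternative="two-sided")
print(div_fertile, div_prostatitis, p_value)
```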
## 4. Discussion This preliminary study evaluated some factors associated with chronic prostatitis by comparing fertile men with no urogenital infections and men with chronic prostatitis in a small sample of volunteers. We explored the microbial content of the semen and urine of these men to evaluate the effect of prostatitis on seminal parameters and fertility. We compared the conventional and functional seminal parameters of men with prostatitis-like symptoms with those of men who had a pregnant partner or children under two years of age. We observed no differences in semen quality between the two groups. In fact, the seminal parameters of the volunteers in both groups were above the WHO lower reference limit. Additionally, men with prostatitis presented a higher median sperm concentration and a higher concentration of IL-12p70 in seminal plasma. This proinflammatory cytokine is secreted mainly by macrophages and monocytes and stimulates the production of IFN-γ, which suggests a predominance of Th1 lymphocyte activation, facilitating the establishment of an inflammatory environment that becomes chronic [24]. The microbiome comprises the microorganisms present at a given site together with their genetic material and is considered more complex than the human genome, whereas the microbiota refers to the population of bacteria present in various anatomical sites [10]. Although prostatitis is a multicausal condition, genitourinary infections are included among its causes, and the majority of bacterial prostatitis cases follow a urinary tract infection [1, 6]. However, the presence of microorganisms does not always imply disease, and caution is required in interpreting the microbiological results of urinary tract samples until the microbiota of this anatomical site is correctly established. An adequate association between symptoms and the detection of microorganisms should contribute to the diagnosis of prostatitis. Bacterial diversity is a crucial factor in preventing the appearance of genitourinary diseases. The urine of prostatitis-like symptoms men presented 17.6% less OTU diversity than that of fertile men. The difference was even larger in the semen samples, in which 67.5% fewer OTUs were observed in prostatitis-like symptoms men. The urogenital tract microbiota of men with and without symptoms of prostatitis includes the bacteria Rhizobiaceae, Burkholderia, Achromobacter, Delftia, Campylobacter, Ezakiella, Anaerococcus, Prevotella, Haemophilus, and Porphyromonas. Specifically, in urine, the most common bacterial genera in men with and without symptoms of prostatitis are Pantoea, Geobacillus, Kocuria, Veillonella, Brevibacterium, Pseudomonas, Acetobacteraceae, Neisseria, Chryseobacterium, and Dialister, and in semen they are Weissella, Proteobacteria, Burkholderiales, Achromobacter, Campylobacter, and Prevotella; many of these organisms cannot be grown with the traditional, increasingly outdated culture-based evaluation methods, which is why they are only detected with current sequencing techniques. Prevotella appears to exert a negative effect on sperm quality [10]. Firmicutes (especially Lactobacilli), Bacteroidetes, Proteobacteria, and Actinobacteria comprise the highest proportion of the seminal microbiome [25]. In this preliminary study, we only found statistically significant differences in the presence of fourteen OTUs.
These fourteen OTUs explain the difference in the microbiota of prostatitis-like symptoms and fertile men: Burkholderiaceae, Achromobacter, Aerococcus, Blautia, Burkholderiales, Propionibacterium, Betaproteobacteria, Haemophilus, Burkholderia, Massilia, Rhizobiaceae, and Neorhizobium. In general, these are little-known microorganisms in the clinical field, so sequencing is a powerful tool that allows us to explore the world surrounding enigmatic infectious diseases such as prostatitis [26–28]. Culture-based studies detect fewer anaerobic bacteria than NGS [10]. Few cases of prostatitis caused by the genus Burkholderia have been described; however, pulmonary infections by this microorganism can spread through the haematogenous route, reach the prostate, and cause chronic prostatitis [29]. The particularities of microbial culture have made the accurate diagnosis of genitourinary infections difficult, as in the case of Corynebacterium urealyticum, a Gram-positive, facultatively anaerobic bacillus that is difficult to grow and is responsible for prostatitis that is also associated with prostatic calcifications [30]. Molecular tools allow the diagnosis of infections that remain hidden from traditional microbiological culture techniques. Nevertheless, sequencing today costs more than traditional bacteriological methods, and the effect of these microorganisms on fertility is still under discussion. Much more research is required to establish the microbiota of health and disease and to validate powerful tools such as sequencing for daily use in the clinic. Microbiota studies are novel and have shown that bacteria can act as a protective factor against disease, although it should not be forgotten that bacteria can also negatively impact sperm function [10]. However, studies evaluating the microbiota have used semen or urine samples, which can be biased by contamination during sample collection [31]. Prostatitis is associated with impaired male fertility; during illness, semen quality can be affected, especially sperm concentration, motility, vitality, and morphology [25]. Semen and vaginal discharge are not sterile. The bacterial microbiome has an impact on fertility and pregnancy [10], and it changes continuously, depending on environmental factors and interactions with other organisms [31]. In addition to aerobic bacteria, anaerobes can also cause chronic bacterial prostatitis, mainly microorganisms such as Peptostreptococcus spp. and Bacteroides spp. Knowledge about these infections is limited by the available diagnostic methods, leading to an underestimation of the true role of anaerobes [6]. The main limitations affecting the quality of this study were the selection of participants using the NIH–CPSI without diagnostic imaging, digital rectal examination, or the four-vessel test, and the limited number of participants in both groups. The selection of participants and the similarities of the patients, primarily in terms of age and most of the evaluated characteristics, allow us to partially rule out an impact of lifestyle habits, personal characteristics, and sexual behaviours on the genitourinary microbiota. ## 5. Conclusion Chronic prostatitis does not seem to affect seminal quality; however, more studies are required. The greater bacterial diversity of the genitourinary microbiota could be a protective factor against chronic prostatitis in men. Studying the factors associated with this greater microbial diversity will allow the establishment of healthy behaviours that limit the appearance of genitourinary diseases in men.
--- *Source: 1007366-2021-09-28.xml*
# Discussion on Redundant Processing Algorithm of Association Rules Based on Hypergraph in Data Mining **Authors:** Jintan Zhu **Journal:** Journal of Robotics (2022) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2022/1007464 --- ## Abstract With the rapid advancement of big data, it is becoming increasingly difficult for people to find the information they actually need in a database, and association rule mining is one way to do so. Association rule mining is mainly studied along three dimensions: the dimensionality of the data, the level of data abstraction, and the type of variable being processed. With respect to the rules themselves, research focuses mainly on three kinds: positive association rules, negative association rules, and rare association rules. Association rule mining has become one of the most widely pursued research goals in data mining. As the scale of data grows, the efficiency of traditional association rule mining algorithms becomes too low, and how to improve this efficiency is the central research question of the field. Current association rule mining approaches have two limitations: (1) metrics such as support, confidence, and lift rely too much on expert knowledge or on complex tuning processes when their thresholds are selected; and (2) rare association rules are often difficult to explain. Building on existing research, this paper proposes a Markov logic network framework model of association rules to address these shortcomings. The theory of hypergraphs and hypergraph systems is presented, and a method for modelling hypergraphs with a three-dimensional matrix is studied. Aiming at the new characteristics of big data analysis, a new hyperedge definition is introduced based on the definition of the system, which greatly enhances the ability to solve such problems. For the cluster analysis of the hypergraph, this paper uses the hypergraph partitioning tool hMETIS in order to achieve higher accuracy in cluster analysis and calculation. For the detection of redundant and cyclic rules, which maps naturally onto directed hypergraphs, the paper offers a new approach: association rules are converted into a directed hypergraph with a newly defined adjacency matrix, and the detection of redundant and cyclic rules is transformed into the detection of connected components and circuits. This paper uses two datasets of different sizes to conduct rule prediction accuracy experiments on the Markov logic network framework model of association rules and on a traditional association rule algorithm. The results show that, compared with the traditional algorithm, the rules obtained by the Markov logic network framework model have higher prediction accuracy. --- ## Body ## 1. Introduction Since the beginning of the 21st century, Internet technology and computer hardware have developed rapidly and become widespread [1]. The amount of data on the Internet has grown exponentially, yet only a small fraction of it contains valuable information [2]. Therefore, mining valuable information from massive data has become one of the most essential tasks in society today [3].
Traditional data mining techniques can extract the association rules hidden in data only with limited efficiency; moreover, they scale poorly to massive datasets and offer little ability to predict what will happen far into the future [4]. The emergence of artificial intelligence has added new techniques to the mining process, and the mining of massive data has given rise to the notion of big data [5]. At present, data mining has important application value in many areas. By describing existing data, it can effectively predict the future pattern of the data. Data mining technology has become a promising research direction in today's world and has been applied in scientific research, production process monitoring, quality management, decision support, and design. The Internet, the Internet of Things, cloud computing, and other information technologies are constantly being updated and are increasingly integrated with the human world in fields such as the economy, politics, the military, scientific research, and daily life [6]. Information visualization is the use of images to express information clearly and effectively. Image representations can reveal relationships in the original data that cannot otherwise be observed. Data visualization can enhance users' understanding of multidimensional, large-scale information and plays an important role in the discovery, verification, and understanding of association rules. As a major information discovery and pattern recognition technique, association rule mining aims to find the most meaningful information that the data can describe. The visualization of association rules is an inseparable part of association rule theory; its main task is to display the mined information and help users grasp the association rules in order to interpret the results [7]. Data mining can discover hidden laws in data and effectively exploit the value of the data [8]. Association rule mining can extract potential and valuable frequent patterns and correlations between attributes from the data [9]. Frequent patterns and correlations can be displayed clearly and intuitively as text, but because of the limited cognitive capacity of users, the value of association rule mining cannot be fully realized in that form [10]. Therefore, there is an urgent need to study visualization methods for association rules in depth, combined with human-computer interaction technology, to help users analyze and process data resources from multiple perspectives, gain insight into valuable information, and support their decision-making and planning [11]. Hypergraphs are widely used in many fields of information science. In the past, information visualization and visual data analysis technologies mainly focused on analyzing simple binary relations inside data objects [12]. However, studies have shown that multi-way associations can more naturally represent the overall pattern of internal connections implicit in the data [13]. A hypergraph is a generalization of ordinary (binary) relations and can easily express multi-way relations [14]. This also provides strong conditions and theoretical support for the visualization of association rules.
The hypergraph model combines the characteristics of hypergraphs and multidigraphs and can visually describe association rules: nodes represent data items, and hyperedges represent association relationships. The support and confidence of the rules can also be encoded in different ways and with different values. Therefore, the intuitive display of multi-way relationships by the hypergraph provides strong theoretical support for further in-depth research on visualization methods for frequent itemsets and association rules. ## 2. State of the Art ### 2.1. Data Mining Overview and Research Status In 1990, the first KDD international workshop was held in Detroit, USA; since then, interest in data mining as a field has grown [15]. Data are used by people to describe specific events in society, that is, they are an abstract description of information. With the development and progress of the times, the range of human exploration has broadened, and data have become an indispensable tool supporting that exploration: as science and technology advance, human beings are probing the physical world ever more extensively, so the range and complexity of data are increasing rapidly [16]. In this situation, people can no longer find hidden laws through simple logical reasoning alone. People therefore began to pay attention to the importance of data and hoped to find the value and meaning hidden behind it; it is precisely to meet this demand that data mining technology was born. Data mining, also known as information mining, is the process of discovering potential, meaningful, and interesting patterns from large amounts of incomplete and noisy information stored in large transaction databases or data warehouses. The mined information can often help us conduct deeper exploration. Of course, data mining is not a knowledge exploration method that can retrieve arbitrary information on demand. Using a search engine to find web pages of interest, or querying a database for matching records, is not knowledge discovery: these methods only look for information that meets specific conditions and do not explore what lies behind big data. Nor is data mining a panacea; what is found in large transaction databases is not always correct or valuable [17]. Specific business conditions need to be met, and people study the business context and perform statistical analysis; under these premises, data mining is more likely to yield valuable and instructive results [18]. The data mining procedure is determined by business requirements and data characteristics [19], but it can generally be divided into data transformation, data preprocessing, data mining, and knowledge evaluation steps. Data preprocessing cleans and transforms the data and accounts for a large share of the total effort [20]. The data preprocessing process is shown in Figure 1. Figure 1 Data preprocessing process. Data mining technology originally came from abroad, and its research and development directions are varied. At present, the most common way to deal with classification and analysis problems is decision tree induction, with algorithms such as C4.5, ID3, ID4, IDS, and QUEST. For learning from complex, large-scale data, the SLIQ algorithm, the SPRINT algorithm, and the RainForest framework are used for building decision trees.
All of these emphasize the construction of scalable decision trees. Decision tree pruning algorithms include cost-complexity pruning, reduced-error pruning, and pessimistic pruning. Other methods, such as Bayesian classification, the back-propagation algorithm, neural network methods, machine learning methods, the CAEP classification method, and the rough set method, are also applied to analysis and data mining. There are also many data mining techniques and methods for data clustering. Common taxonomies divide clustering methods into partitioning approaches such as the k-means method and agglomerative hierarchical clustering; DBSCAN is a density-based clustering algorithm, and OPTICS is a density-based ordering algorithm built on similar ideas. ### 2.2. Research Status of Association Rules Up to now, extensive exploration has been carried out in the field of data mining, yet research in this field remains active. Although current mining methods are fairly mature, their efficiency is very low in the face of large-scale information, and the results are not ideal. The information accumulated by the online e-commerce industry, for example, is extremely rich and complicated and contains a large amount of useless and junk material. For such a large amount of information, the preprocessing steps of data mining become very difficult, and once preprocessing goes poorly, the whole mining process may even fail. At the same time, even when preprocessing goes relatively smoothly, conventional mining methods may still fail to extract valuable and meaningful knowledge from large databases, so that the mined results are useless and meaningless. All of this shows that there are still many open problems in data mining. Considering the differences in the main research directions and mining methods, there are still many challenging research topics in the field of data mining applications. These topics are closely linked and mainly involve the fusion of knowledge discovery and data warehouse technology, visual data mining, very large-scale mining of complex data types, web data mining, and network security technology. Two-dimensional matrix visualization is usually used to represent rules as bars: the items of the antecedent and the consequent are arranged along the two axes, and the width and color of each bar represent the support and confidence, respectively, as shown in Figure 2. Figure 2 Visualization based on a two-dimensional matrix. ### 2.3. Hypergraph Overview A hypergraph is a subset system over a finite set; it is a generalization of the graphs of graph theory and plays a very important role in discrete mathematics. The term "hypergraph" was first proposed by Berge in the monograph "Hypergraphs" in 1966, with the original purpose of generalizing some classical results in graph theory. Later, people gradually realized that some theorems in graph theory can be generalized in the unified form of hypergraphs, which opened the prelude to the study of hypergraph theory and made it a large new branch of graph theory. Compared with the study of ordinary graphs, the study of hypergraphs is more complicated: some important structures and properties of ordinary graphs no longer exist in hypergraphs, which complicates the discussion of many analogous problems.
At present, hypergraphs have been widely used in circuit partitioning, knowledge representation and organization, cellular communication systems, and the representation of the molecular structures of atypical compounds and polycyclic conjugated molecules. There are two types of hypergraphs: directed hypergraphs and undirected hypergraphs. Since the 1960s, after decades of unremitting effort, the development of hypergraph theory has made great progress. A hypergraph is a binary pair $H=(V,E)$, where $V=\{v_1,v_2,\dots,v_n\}$ denotes the $n$ vertices of the hypergraph and $E=\{e_1,e_2,\dots,e_m\}$ denotes its $m$ hyperedges. The hyperedge set $E$ is a family of subsets of the vertex set $V$, that is, $e_j \subseteq V$ for $j=1,2,\dots,m$, and it satisfies (1) $e_j \neq \emptyset,\ j=1,2,\dots,m$, and $\bigcup_{j=1}^{m} e_j = V$. The size of an ordinary graph is uniquely determined by the number of vertices $n$ and the number of edges $m$; in a hypergraph, the size also depends on the cardinality of each hyperedge. We can define the size of a hypergraph as the sum of the cardinalities of its hyperedges, given by (2) $\operatorname{size}(H)=\sum_{e_i \in E} |e_i|$. As an important branch of hypergraph theory, directed hypergraph theory was not studied deeply for a long time, and the results are relatively few. The foreign paper with the most far-reaching impact on the development of directed hypergraph theory is "Directed Hypergraphs and Applications" by Giorgio Gallo, in which the author systematically summarizes the earlier achievements in the field. In China, motivated by the needs of electrical engineering and automation research, Professor Huang provided another way to describe directed hypergraphs and carried out related research on this basis. Similar to the relationship between undirected and directed graphs, directed hypergraphs add directions to the hyperedges of undirected hypergraphs and then describe the arrangement of the vertices of each hyperedge. On this basis, some properties of undirected hypergraphs are carried over to directed hypergraphs, and the special properties of directed hypertrees are identified. Of course, the main purpose of studying directed hypergraphs is to solve problems in practical applications. In this paper, we try to describe association rules through a directed hypergraph and to solve the redundancy and cycle problems of association rule mining by using the properties of the directed hypergraph. Figure 3 is a representation of a directed hypergraph $H$. Figure 3 Representation of a directed hypergraph $H$. A directed hypergraph is $\bar{H}=(V,E)$, where $V$ is the vertex set and $E$ is the directed hyperedge set. Its adjacency matrix is given as follows: (3) $A=\begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nn} \end{pmatrix},\ a_{ij} \in \{0,1,2,\dots\}$, where $a_{ij}$ represents the number of directed hyperedges from the start vertex $v_i$ to the end vertex $v_j$, with $v_i$ and $v_j$ vertices of $\bar{H}=(V,E)$. As for an undirected hypergraph, the size of a directed hypergraph is defined as the sum of the cardinalities of its hyperedges, and the rank (antirank) is defined as the largest (smallest) cardinality of a hyperedge. However, because of the cluster-like nature of hyperedges, it is difficult to establish a one-to-one correspondence between a directed hypergraph and its adjacency matrix: a directed hypergraph determines an adjacency matrix, but the original directed hypergraph cannot be uniquely recovered from that matrix.
Therefore, how to reduce these factors is also an important topic in the study of directed hypergraph theory.
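To make the definitions above concrete, the following is a minimal Python sketch using hypothetical vertices and hyperedges rather than any example from the paper. It represents an undirected hypergraph as a list of vertex subsets and computes size(H) as in equation (2), and it represents each directed hyperedge as a (tail set, head set) pair, filling the adjacency matrix of equation (3) by counting, for every ordered vertex pair, how many directed hyperedges lead from the first vertex to the second; reading a directed hyperedge as "from every tail vertex to every head vertex" is an assumption made here for illustration.

```python
from itertools import product

# Undirected hypergraph H = (V, E): hypothetical vertices and hyperedges.
V = ["v1", "v2", "v3", "v4"]
E = [{"v1", "v2"}, {"v2", "v3", "v4"}, {"v1", "v3", "v4"}]

# size(H) = sum of the cardinalities of the hyperedges (equation (2)).
size_H = sum(len(e) for e in E)          # 2 + 3 + 3 = 8

# Directed hypergraph: each hyperedge is a (tail set, head set) pair.
directed_E = [({"v1"}, {"v2", "v3"}),
              ({"v2", "v3"}, {"v4"}),
              ({"v1"}, {"v2", "v3"})]    # repeated hyperedge, so entries can exceed 1

index = {v: i for i, v in enumerate(V)}
n = len(V)
A = [[0] * n for _ in range(n)]          # adjacency matrix of equation (3)

for tail, head in directed_E:
    for u, w in product(tail, head):
        A[index[u]][index[w]] += 1       # a_ij = number of hyperedges from v_i to v_j

print("size(H) =", size_H)
for row in A:
    print(row)
```

Because several distinct sets of hyperedges can produce the same counts, the matrix built this way illustrates why the original directed hypergraph cannot be uniquely recovered from its adjacency matrix.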
## 3. Methodology

### 3.1. Association Rule Mining Process

The purpose of association rule mining is to find credible rules in massive data.
These rules usually have potential value and can help enterprise managers analyze the current market situation and make correct decisions. Association rule mining searches for rules according to two minimum thresholds, the minimum support threshold min_sup and the minimum confidence threshold min_conf, which can usually be specified by the user. The mining work is divided into two stages: (1) find all itemsets whose support is not less than min_sup, that is, the frequent itemsets; (2) from each frequent itemset, derive the association rules whose confidence is not less than min_conf. Stage 2 proceeds as follows: for a frequent itemset $L$ and any nonempty proper subset $L' \subset L$, if $\mathrm{Support}(L)/\mathrm{Support}(L') \geq \mathrm{min\_conf}$, then the association rule $L' \Rightarrow L - L'$ is a credible rule.

The following example illustrates the process. Suppose all items in database $D$ form an itemset, and the records of transaction database $D$ are shown in Table 1.

Table 1: Transaction database $D$.

| TID | Item set |
| --- | --- |
| T01 | a, b, c |
| T02 | a, c, d |
| T03 | a, b |
| T04 | c, d |
| T05 | a, b, d |

From Table 1 it is easy to enumerate all 1-, 2-, 3-, and 4-itemsets. The itemsets and their supports are shown in Table 2.

Table 2: All itemsets and their support.

| Itemset | Support (%) |
| --- | --- |
| {a} | 88 |
| {b} | 65 |
| {c} | 65 |
| {d} | 65 |
| {a, b} | 65 |
| {a, c} | 48 |
| {a, d} | 45 |
| {b, c} | 21 |
| {b, d} | 20 |
| {c, d} | 45 |
| {a, b, c} | 25 |
| {a, b, d} | 25 |
| {b, c, d} | 25 |
| {a, b, c, d} | 0 |

Given the thresholds min_sup = 30% and min_conf = 70%, Table 2 gives the frequent itemsets {a}, {b}, {c}, {d}, {a, b}, {a, c}, {a, d}, and {c, d}. Association rules can be mined from every frequent itemset containing more than one item; for example, the two rules {a} ⇒ {b} and {b} ⇒ {a} can be mined from the frequent itemset {a, b}. Examining each such frequent itemset yields the association rules in Table 3.

Table 3: All association rules.

| Association rule | Confidence (%) |
| --- | --- |
| {a} ⇒ {b} | 77 |
| {b} ⇒ {a} | 90 |
| {a} ⇒ {c} | 55 |
| {c} ⇒ {a} | 65 |
| {a} ⇒ {d} | 51 |
| {d} ⇒ {a} | 65 |
| {c} ⇒ {d} | 65 |
| {d} ⇒ {c} | 66 |

Since the confidence threshold is min_conf = 70%, Table 3 shows that the strong association rules are {a} ⇒ {b} and {b} ⇒ {a}. At this point, the mining of association rules is complete.
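As a quick illustration of the two thresholds, the following Python sketch recomputes support and confidence directly from the five transactions of Table 1 and keeps the rules that clear min_sup = 0.3 and min_conf = 0.7. It is only a sketch of the textbook definitions (support as the fraction of transactions containing an itemset, confidence as Support(L)/Support(L')); the exact percentages it prints therefore follow from the raw transactions rather than reproducing the rounded values listed in Tables 2 and 3.

```python
from itertools import combinations

# Transactions from Table 1.
transactions = [
    {"a", "b", "c"},   # T01
    {"a", "c", "d"},   # T02
    {"a", "b"},        # T03
    {"c", "d"},        # T04
    {"a", "b", "d"},   # T05
]
min_sup, min_conf = 0.3, 0.7

def support(itemset):
    """Fraction of transactions that contain every item of `itemset`."""
    return sum(itemset <= t for t in transactions) / len(transactions)

# Stage 1: frequent itemsets (of any size) above the support threshold.
items = sorted(set().union(*transactions))
frequent = [
    frozenset(c)
    for k in range(1, len(items) + 1)
    for c in combinations(items, k)
    if support(set(c)) >= min_sup
]

# Stage 2: rules L' => L - L' with confidence Support(L) / Support(L').
for L in frequent:
    if len(L) < 2:
        continue
    for k in range(1, len(L)):
        for Lp in map(frozenset, combinations(L, k)):
            conf = support(L) / support(Lp)
            if conf >= min_conf:
                print(f"{set(Lp)} => {set(L - Lp)}  "
                      f"(support {support(L):.0%}, confidence {conf:.0%})")
```

On these five transactions the only rules that survive both thresholds are {a} ⇒ {b} and {b} ⇒ {a}, which agrees with the strong rules identified above.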
### 3.2. Apriori Algorithm

The Apriori algorithm is the most classic algorithm for association rule mining. Its principle is easy to understand, and its implementation is concise. The underlying idea is that the more often two items appear together in the transaction data, the stronger the correlation between them. The implementation scans the database multiple times. The first scan counts the occurrences of each individual item and deletes the items that do not meet the minimum support. Before the second scan, the items that survived the first scan are combined in pairs; the second scan then counts the occurrences of these combinations and deletes the combinations that do not meet the minimum support. Combination and scanning are repeated in this way until no new combination is generated, at which point the Apriori algorithm ends. Figure 4 shows the mining process of the Apriori algorithm when the minimum support is 0.2.

Figure 4: Example diagram of the mining process of the Apriori algorithm.
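A minimal level-wise sketch of the scan, combine, and prune loop described above is given below in Python. It follows the description in this section rather than any particular library implementation, reuses the Table 1 transactions, and takes the 0.2 minimum support mentioned for Figure 4; the function name `apriori` and its return format are our own choices for the example.

```python
from itertools import combinations

def apriori(transactions, min_sup=0.2):
    """Level-wise Apriori: scan, keep frequent itemsets, combine, repeat."""
    n = len(transactions)
    support = lambda s: sum(s <= t for t in transactions) / n

    # First scan: frequent 1-itemsets.
    items = sorted(set().union(*transactions))
    level = [frozenset([i]) for i in items if support({i}) >= min_sup]
    frequent = {s: support(s) for s in level}

    k = 2
    while level:
        # Combine surviving (k-1)-itemsets into candidate k-itemsets.
        candidates = {a | b for a in level for b in level if len(a | b) == k}
        # Next scan: keep only candidates that meet the minimum support.
        level = [c for c in candidates if support(c) >= min_sup]
        frequent.update({c: support(c) for c in level})
        k += 1
    return frequent

transactions = [
    {"a", "b", "c"}, {"a", "c", "d"}, {"a", "b"}, {"c", "d"}, {"a", "b", "d"},
]
for itemset, sup in sorted(apriori(transactions).items(), key=lambda x: -x[1]):
    print(set(itemset), f"{sup:.0%}")
```

The loop stops as soon as a scan produces no surviving combinations, mirroring the termination condition stated above.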
### 3.3. The Redundant Processing Method of Association Rules Based on Hypergraph

The rules dug out by association rule mining often include items that users do not need: rules that merely restate information the users already know, or several rules that express the same meaning. Such rules bring no new information to users and offer no effective help, and in most cases the number of redundant rules is larger than the number of meaningful rules.

Redundant rules generally fall into two types. The first is the subordination type: if the conclusion of rule $X_i$ agrees with that of rule $X_j$ and the premise of $X_i$ is a sufficient condition for the premise of $X_j$, then $X_j$ is redundant; repeated rules are treated as a special case of subordinate rules. The second is the repeated-path type: if selectors $X_i$ and $X_j$ both appear in the rule base and there are two distinct paths between $X_i$ and $X_j$, redundancy can be judged by this principle. The subordination principle can be expressed as

$$X_2 \longrightarrow X_4,\qquad X_2 X_3 \longrightarrow X_4, \tag{4}$$

and the repeated-path principle as

$$X_1 \longrightarrow X_2 X_3 \longrightarrow X_4,\qquad X_1 \longrightarrow X_5 \longrightarrow X_4. \tag{5}$$

The adjacency matrix of a directed hypergraph fully describes the adjacency between the nodes of the graph. In a directed hypergraph built from association rules, the association between the items being checked and the rules can be expressed by this adjacency matrix; borrowing the concept of redundant rules from circuit theory and its related properties, a redundancy-checking method can be realized on top of the directed hypergraph. Consider a graph

$$G=\bigl(V(G),E(G)\bigr). \tag{6}$$

A path in $G$ is a finite, nonempty sequence

$$W=v_0 e_1 v_1 e_2 \cdots e_k v_k, \tag{7}$$

that is, an alternating sequence of vertices and edges in which $e_i \in E(G)$, $v_j \in V(G)$, and each $e_i$ is incident with $v_{i-1}$ and $v_i$ for $1 \leq i \leq k$, $0 \leq j \leq k$. Such a sequence is denoted a $(v_0, v_k)$-path; the vertices $v_0$ and $v_k$ are called the starting point and end point of the path $W$, the vertices $v_1, v_2, \dots, v_{k-1}$ are the inner vertices of $W$, and $k$ is called the length of $W$.

From graph theory, the redundancy-processing procedure decomposes the hypergraph of rules into its connected components and turns each component into a spanning tree, because every edge in this hypergraph represents an association rule. When a connected component is turned into a spanning tree, any edge that has to be removed corresponds to a redundant rule. Given a hypergraph

$$H=(X,E),\quad X=\{X_1,X_2,\dots,X_n\},\quad E=\{E_1,E_2,\dots,E_m\}, \tag{8}$$

if $H$ is connected and does not contain any hyperloops, then $H$ is called a hyper-tree.
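The two redundancy types in Equations (4) and (5) can be checked mechanically once the mined rules are represented as premise/conclusion pairs. The Python sketch below is our own simplification of the idea: it drops a rule as subordination-redundant when another rule with the same conclusion has a premise that is a subset of its premise, and it flags a repeated path when two distinct premise-to-conclusion chains connect the same pair of items. It works on plain item sets rather than a full directed hypergraph, so it should be read as an illustration of the criterion, not as the paper's complete hypergraph-based procedure.

```python
from itertools import product

# Rules as (premise, conclusion) pairs, mirroring Equations (4) and (5).
rules = [
    ({"X2"}, "X4"),          # X2        -> X4
    ({"X2", "X3"}, "X4"),    # X2, X3    -> X4   (subordination-redundant)
    ({"X1"}, "X2"),          # X1        -> X2
    ({"X2", "X3"}, "X4"),    # exact duplicate of the rule above
    ({"X1"}, "X5"),          # X1        -> X5
    ({"X5"}, "X4"),          # X5        -> X4   (second path from X1 to X4)
]

def subordination_redundant(rules):
    """Indices of rules dominated by another rule with the same conclusion
    and a premise that is a subset; exact duplicates keep their first copy."""
    redundant = set()
    for i, (p_i, c_i) in enumerate(rules):
        for j, (p_j, c_j) in enumerate(rules):
            if i != j and c_i == c_j and p_j <= p_i and (p_j < p_i or j < i):
                redundant.add(i)
    return redundant

def repeated_paths(rules):
    """Item pairs connected by more than one path of length at most two."""
    edges = {(item, c) for premise, c in rules for item in premise}
    counts = {}
    for a, b in edges:                                  # direct edges
        counts[(a, b)] = counts.get((a, b), 0) + 1
    for (a, b), (b2, c) in product(edges, repeat=2):    # chains a -> b -> c
        if b == b2 and (a, b) != (b2, c):
            counts[(a, c)] = counts.get((a, c), 0) + 1
    return {pair for pair, k in counts.items() if k > 1}

print("redundant rule indices:", sorted(subordination_redundant(rules)))
print("item pairs joined by more than one path:", repeated_paths(rules))
```

On this toy rule base, the two {X2, X3} → X4 rules are reported as subordination-redundant, and the pair (X1, X4) is flagged because it is reached both through X2 and through X5, which is the situation of Equation (5).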
## 4. Result Analysis and Discussion

### 4.1. Model Establishment

In the Markov logic network, this paper regards the items in the transaction dataset as nodes of the network; the weight between two nodes can then be regarded as the degree of association between the two items. Attaching weights to the rules keeps the first-order predicate logic knowledge base from being overly rigid: the higher the weight attached to a rule, the stronger the constraint it imposes in the Markov logic network. When the weights of all rules in the knowledge base are infinite, the Markov logic network reduces to a standard first-order predicate logic reasoning framework.
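The modeling step just described treats items as nodes and pairwise weights as degrees of association. The Python sketch below illustrates that idea in the simplest possible way, using co-occurrence counts over the transactions as stand-in weights; this is an assumption made purely for illustration, since the paper learns the weights of logic rules rather than taking raw co-occurrence counts.

```python
from itertools import combinations
from collections import Counter

transactions = [
    {"a", "b", "c"}, {"a", "c", "d"}, {"a", "b"}, {"c", "d"}, {"a", "b", "d"},
]

# Nodes are items; the weight of an (item, item) pair is taken here to be the
# number of transactions in which the two items co-occur.
pair_weight = Counter()
for t in transactions:
    for u, v in combinations(sorted(t), 2):
        pair_weight[(u, v)] += 1

for (u, v), w in pair_weight.most_common():
    print(f"{u} -- {v}: weight {w}")
```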
To obtain the parameter values of the model from the data, the weights must be adjusted iteratively. The most common way to handle such extremum problems is gradient descent. In numerical computing, gradient descent methods generally include batch gradient descent, mini-batch gradient descent, and stochastic gradient descent. Batch gradient descent moves toward the lowest point along the descending gradient (or toward the highest point along the ascending gradient), where the gradient direction is obtained by differentiating the objective function, and the whole training set is used for the update in each iteration. Different from batch gradient descent, mini-batch gradient descent selects a subset of the training samples for the update in each iteration, while stochastic gradient descent randomly selects a single training sample for the update in each iteration.
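The difference between the three update schemes is easiest to see in code. The sketch below fits a one-parameter least-squares model in plain Python; the toy data, learning rate, and loss are invented for the illustration and are not the paper's actual Markov logic network objective.

```python
import random

# Toy data: y is roughly 2 * x; we fit y = w * x by minimizing squared error.
data = [(x, 2.0 * x + random.uniform(-0.1, 0.1)) for x in range(1, 21)]
lr = 0.001

def gradient(w, batch):
    """d/dw of the mean squared error over `batch` for the model y = w * x."""
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

def train(batch_picker, steps=200):
    w = 0.0
    for _ in range(steps):
        w -= lr * gradient(w, batch_picker())   # move against the gradient
    return w

# Batch: the whole training set in every iteration.
w_batch = train(lambda: data)
# Mini-batch: a random subset of the training samples in every iteration.
w_mini = train(lambda: random.sample(data, 5))
# Stochastic: a single randomly chosen training sample in every iteration.
w_sgd = train(lambda: [random.choice(data)])

print(f"batch {w_batch:.3f}, mini-batch {w_mini:.3f}, stochastic {w_sgd:.3f}")
```

All three runs converge toward the same weight on this toy problem; they differ only in how much data each update touches, which is exactly the distinction drawn in the paragraph above.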
### 4.2. Experimental Results and Analysis

To verify correctness, the parameters of the Markov logic network framework of association rules are learned from real data to see whether the model converges. This paper uses a grocery-store dataset to learn the parameters of the model; the dataset contains 75 variables and a total of 3,956 records. Learning the parameters with stochastic gradient descent yields the convergence curve of the Markov logic network framework model of association rules. Figure 5 indicates that when the number of iterations reaches about 1,000 the algorithm begins to converge, showing that the Markov logic network model of the association rules converges.

Figure 5: Convergence diagram of the Markov logic network model algorithm.

This chapter also analyzes a small example dataset, the BASKETS1n data shipped with SPSS Clementine 11.1. The data contain 18 fields and 1,000 records, mainly covering customer information, total purchase amount, and purchased items; the items include vegetables, fruit, meat, fish, and soft drinks. We use the purchase-item part of the data for association rule analysis. After preprocessing, we obtain the data in Table 4 (only part of the processed data is shown).

Table 4: Part of the BASKETS1n data sheet. Each record is a 0/1 purchase indicator over the fields Fruit/veg, Fresh meat, Dairy, Canned veg, Canned meat, Frozen meal, Beer, Wine, Soft drink, Fish, and Confectionery.

The experiment then compares the time efficiency of the Apriori algorithm and the improved algorithm. The dataset used is the retail.dat basket data of a Belgian retail store, obtained from the CSDN blog; it contains 88,162 records and 16,470 binary attributes. The test results are shown in Figure 6: the execution time of the improved algorithm is greatly reduced compared with the Apriori algorithm.

Figure 6: Comparison of the execution time of the two algorithms for different numbers of records.
## 5. Conclusion

In today's big data era, association rule mining is a popular research direction: more and more people study association rules, and the rules are employed in many areas. Research on association rules has produced many results and achieved good outcomes, but some shortcomings remain. To address these problems, this paper proposes a Markov logic network model of association rules; most current association rule mining algorithms are constructed under a unified framework model. The main contents of this paper include the following aspects: (1) combining the Markov logic network and association rules, a new model, the Markov logic network framework of association rules, is proposed; (2) the stochastic gradient method of the Markov logic network framework is employed, and by comparing the prediction accuracy of the Markov logic network framework model with the traditional association rule algorithm (the Apriori algorithm) on different datasets, the proposed model achieves higher accuracy than the traditional algorithm; (3) for checking rule redundancy with a hypergraph, the association rules are converted into a directed hypergraph and a new definition is given for its adjacency matrix, which turns the detection of redundant and cyclic rules into a check of connected components and cycles in the hypergraph and thus offers a new approach to the problem.

--- *Source: 1007464-2022-09-16.xml*
1007464-2022-09-16_1007464-2022-09-16.md
50,305
Discussion on Redundant Processing Algorithm of Association Rules Based on Hypergraph in Data Mining
Jintan Zhu
Journal of Robotics (2022)
Engineering & Technology
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2022/1007464
1007464-2022-09-16.xml
---

## Abstract

With the rapid advancement of big data, finding the information one actually needs in a database is becoming a serious problem, and association rule mining is one way to do so. Association rule mining is mainly studied along three aspects of the data: data dimensionality, data abstraction level, and the type of variable being processed. On the rule side, research mainly focuses on three kinds of rules: positive association rules, negative association rules, and rare association rules. Association rule mining is the most widely used task and goal of data mining. As the scale of data grows, the time efficiency of traditional association rule mining algorithms becomes too low, and improving that efficiency is the main research content of association rule mining. Current association rule mining has two limitations: (1) metrics such as support, confidence, and lift rely too much on expert knowledge or on a complex tuning process when their values are selected; (2) rare association rules are often difficult to explain. Based on existing research, this paper proposes a Markov logic network framework model of association rules to address the above shortcomings. The theory of hypergraphs and systems is introduced, and the use of hypergraphs in 3D matrix modeling is studied. Aiming at the new characteristics of big data analysis, a new hyperedge definition method is introduced according to the definition of the system, which greatly enhances the ability to solve such problems. For cluster analysis and computation on hypergraphs, this paper uses the hypergraph partitioning tool hMETIS in order to achieve higher accuracy in cluster analysis and computation. For the detection of cyclic rules, which relies on the directed hypergraph of association rules, the thesis offers a new approach: the association rules are converted into a directed hypergraph with a new definition of the adjacency matrix, and the detection of cyclic and redundant rules is turned into a check of connected components and cycles in the hypergraph, which is a new way to explore the problem. This paper uses two datasets of different sizes to conduct rule prediction accuracy experiments on the Markov logic network framework model of association rules and on the traditional association rule algorithm. The results show that, compared with the traditional algorithm, the rules obtained by the Markov logic network framework model of association rules have higher prediction accuracy.

---

## Body

## 1. Introduction

Since the start of the 21st century, Internet technology and computer hardware have developed rapidly and become widespread [1]. The data stock of the Internet has increased exponentially, but only a small fraction of the data contains valuable information [2]. Therefore, mining valuable information from massive data has become one of the most essential tasks in today's society [3]. Traditional data mining techniques analyze the association rules hidden in the data with limited efficiency; moreover, their results struggle with massive data because they cannot predict what will happen further into the future [4].
The appearance of artificial intelligence has added new capabilities to the information mining process, and mining data information on a large scale has given rise to the notion of big data [5]. At present, data mining has important application value in many areas: by describing existing data it can effectively predict the data's future patterns, and it has become a promising research direction, with applications in scientific research, production process monitoring, quality management, decision support, and design. The Internet, the Internet of Things, cloud computing, and other information technologies are constantly being updated and are constantly being integrated with human activity in fields such as the economy, politics, the military, scientific research, and daily life [6]. Information visualization uses images to express information clearly and effectively; an image representation can reveal relationships in the original data that cannot otherwise be observed. Data visualization enhances users' understanding of multidimensional, large-scale information and plays an important role in discovering, confirming, and understanding association rules. As a major information discovery and pattern recognition technology, association rule mining aims to find the most meaningful information that the data can describe. The visualization of association rules is an inseparable part of association rule theory; its main task is to display the information and help users grasp the mined rules so as to understand the results [7]. Data mining can discover the hidden laws in data and effectively exploit the value of the data [8]. Association rule mining can extract potential, valuable frequent patterns and correlations between attributes from the data [9]. Frequent patterns and correlations can be displayed clearly in textual form, but because users' cognitive capacity is limited, their value cannot be fully realized that way [10]. It is therefore urgent to study visualization methods for association rules in depth, combined with human-computer interaction technology, to help users analyze data resources from multiple perspectives, gain insight into valuable information, and support decision-making and planning [11]. Hypergraphs are widely used in many fields of information science. In the past, information visualization and visual data analysis mainly focused on analyzing simple binary relations inside data objects [12]. However, studies have shown that multiway associations represent the pattern of internal connections implicit in the data more naturally [13]. A hypergraph is a generalization of ordinary topological relations and can easily express multiway relations [14]. This provides strong conditions and theoretical support for the visualization of association rules. The hypergraph model combines the characteristics of hypergraphs and multidirected graphs and can describe association rules visually: nodes represent data items, edges represent association relationships, and the support and confidence of the rules can be depicted in different ways and with different values.
Therefore, the intuitive display of multiway relationships by the hypergraph provides strong theoretical support for further in-depth research on visualization methods for frequent itemsets and association rules.

## 2. State of the Art

### 2.1. Data Mining Overview and Research Status

In 1990, the first KDD International Workshop was held in Detroit, USA, and since then people have shown interest in data mining terminology [15]. Information is what people use to describe specific events in society; it is an abstract description of those events. With the development of the times, human exploration covers more and more aspects, and data have become an indispensable tool supporting that exploration: as science and technology advance, human beings explore the physical world ever more extensively, the range of numerical data grows wider and wider, and the complexity of the data increases rapidly [16]. In this situation, people can no longer find hidden laws through simple logical reasoning, so they began to pay attention to the importance of data and hoped to find the value and meaning hidden behind it. Data mining technology was born precisely to meet this demand. Data mining, also known as information mining, is the process of discovering potential, meaningful, and interesting patterns in the large amounts of incomplete and noisy information stored in large transaction databases or data warehouses. The information mined can often help us explore further. Of course, it is not a knowledge discovery method that simply retrieves information on demand: finding web pages of interest through a search engine or looking up records in a database is not knowledge discovery, since such methods only look up information that meets specific conditions and do not explore what lies behind big data. Data mining is also not a panacea: what is found in large transaction databases is not always correct or valuable [17]. Valuable results depend on the business context; when people study the business and perform statistical analysis under these premises, information mining is more likely to produce valuable and instructive findings [18]. The data mining procedure is determined by the business requirements and the characteristics of the data [19], but it can generally be divided into data transformation, data preprocessing, data mining, and knowledge evaluation steps. Data preprocessing cleans and transforms the data and takes a large amount of time [20]. The data preprocessing process is shown in Figure 1.

Figure 1: Data preprocessing process.

Data mining technology originally came from abroad, and its research and development directions are varied. At present, the most common way to deal with classification analysis problems is decision tree induction; the corresponding algorithms are the C4.5, ID3, ID4, and ID5 algorithms and the QUEST algorithm. For learning from complex, structured data, the SLIQ algorithm, the SPRINT algorithm, and the RainForest approach are used to build decision trees, and these lines of work emphasize building decision trees with scalability. Decision tree pruning algorithms include cost-complexity pruning, reduced-error pruning, and pessimistic-error pruning.
Methods such as Bayesian classification, the back-propagation algorithm, neural network methods, machine learning methods, the CAEP classification method, and rough set methods are all applied to classification analysis in data mining. There are also many data mining techniques for clustering: common taxonomies divide clustering methods into the k-means method and basic agglomerative hierarchical clustering, while DBSCAN and OPTICS are clustering algorithms based on density.

### 2.2. Research Status of Association Rules

Up to now, large-scale exploration has been carried out in the field of data mining, and the field remains an active research area. Although current mining methods are fairly mature, their efficiency is very low in the face of large-scale information, and the results are not ideal. The information accumulated by the online e-commerce industry is extremely rich and complicated, with a large amount of useless information and garbage material; for such a volume of information, the preprocessing steps of data mining become very difficult, and once preprocessing goes poorly the whole mining process may fail. Even when preprocessing goes relatively smoothly, conventional mining methods may still fail to obtain valuable and meaningful results from large databases, with the mined data turning out to be useless and meaningless. All of this shows that many problems remain in data mining. Considering the differences between the main research directions and mining methods, there are still many challenging research topics in data mining applications. These topics are closely linked and mainly involve information fusion for knowledge discovery and data warehouses, visual data mining, very large-scale mining of complex data types, web data mining, and network security technology. Two-dimensional matrix visualization is usually used to represent rules as bars on a bar chart: the items of the antecedent and the consequent are arranged in turn on two axes, and the width and color of each bar represent the support and confidence, respectively, as shown in Figure 2.

Figure 2: Visualization based on a two-dimensional matrix.

### 2.3. Hypergraph Overview

A hypergraph is a subset system over a finite set; it is a generalization of graph theory and plays a very important role in discrete mathematics. The term "hypergraph" was first proposed by Berge in the monograph Hypergraphs in 1966, with the original purpose of generalizing some classical results of graph theory. Later, people gradually realized that some theorems of graph theory can be generalized in the unified form of a hypergraph, which opened the prelude to the study of hypergraph theory and made it a large new branch of graph theory. Compared with the study of ordinary graphs, the study of hypergraphs is more complicated: some important structures and properties of ordinary graphs no longer exist in hypergraphs, which complicates the discussion of many problems familiar from graph theory. At present, hypergraphs are widely used in circuit partitioning, knowledge representation and organization, cellular communication systems, and the representation of the molecular structures of atypical compounds and polycyclic conjugated molecules. Hypergraphs come in two forms: directed and undirected.
Since the 1960s, decades of sustained effort have brought great progress to hypergraph theory. A hypergraph is a binary pair $H=(V,E)$, where $V=\{v_1,v_2,\dots,v_n\}$ denotes the $n$ vertices of the hypergraph and $E=\{e_1,e_2,\dots,e_m\}$ denotes its $m$ hyperedges. Each hyperedge is a subset of the vertex set $V$, that is, $e_j\subseteq V$ for $j=1,2,\dots,m$, and the hyperedges satisfy

$$e_j \neq \emptyset,\quad j=1,2,\dots,m, \qquad \bigcup_{j=1}^{m} e_j = V. \tag{1}$$

The size of an ordinary graph is determined uniquely by its number of vertices $N$ and number of edges $M$; for a hypergraph, the size also depends on the cardinality of each hyperedge. We therefore define the size of a hypergraph as the sum of the cardinalities of its hyperedges:

$$\mathrm{size}(H)=\sum_{e_i\in E}\left|e_i\right|. \tag{2}$$

As an important branch of hypergraph theory, directed hypergraph theory went unstudied in depth for a long time, and results remain relatively scarce. The paper with the most far-reaching influence on the field is "Directed hypergraphs and applications" by Giorgio Gallo, which systematically summarizes earlier achievements on directed hypergraphs. In China, motivated by problems in electrical engineering and automation, Professor Huang proposed another way of describing directed hypergraphs and carried out related work on that basis. As in the step from undirected to directed graphs, a directed hypergraph adds a direction to each hyperedge of an undirected hypergraph and then describes the arrangement of the vertices within that hyperedge. On this basis, properties of undirected hypergraphs can be carried over to directed hypergraphs, and the special properties of directed hyper-trees can be identified. The main purpose of studying directed hypergraphs is, of course, to solve practical problems. In this paper, we describe association rules with a directed hypergraph and use its properties to remove redundancy and cycles among the rules produced by data mining. Figure 3 shows a representation of a directed hypergraph $H$.

Figure 3: Representation of a directed hypergraph $H$.

A directed hypergraph is $\bar{H}=(V,E)$, where $V$ is the vertex set and $E$ is the directed hyperedge set. Its adjacency matrix is

$$A=\begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nn} \end{pmatrix},\quad a_{ij}\in\{0,1,2,\dots\}, \tag{3}$$

where $a_{ij}$ is the number of directed hyperedges running from start vertex $v_i$ to end vertex $v_j$, and $v_i$, $v_j$ are vertices of $\bar{H}=(V,E)$. As for an undirected hypergraph, the size of a directed hypergraph is defined as the sum of the cardinalities of its hyperedges, and its rank (respectively, lower rank) is the largest (smallest) hyperedge cardinality. However, because each hyperedge groups several vertices at once, a directed hypergraph does not correspond one-to-one with its adjacency matrix and related matrices: a directed hypergraph determines an adjacency matrix, but the original directed hypergraph cannot be uniquely recovered from that matrix. Therefore, how to reduce these factors is also an important topic in the study of directed hypergraph theory.
## 3. Methodology

### 3.1. Association Rule Mining Process

The purpose of association rule mining is to find credible rules in massive data. These rules usually have potential value and can help enterprise managers analyze the current market situation and make correct decisions. Association rule mining searches for rules according to two minimum thresholds, the minimum support threshold min_sup and the minimum confidence threshold min_conf, which can usually be specified by the user.
## 3. Methodology

### 3.1. Association Rule Mining Process

The purpose of association rule mining is to find credible rules in massive data. Such rules usually have potential value and significance and can help enterprise managers analyze the current market situation and make correct decisions. An association rule mining system searches for rules based on two minimum thresholds, the minimum support threshold min_sup and the minimum confidence threshold min_conf, which are usually specified by the user. The work is divided into two stages: (1) find all itemsets whose support is not less than min_sup, that is, the frequent itemsets; (2) for each frequent itemset, find the association rules whose confidence is not less than min_conf. Stage 2 proceeds as follows: given a frequent itemset $L$, for any proper subset $L'\subset L$, $L'\neq\emptyset$, if $\mathrm{Support}(L)/\mathrm{Support}(L')\geq \mathrm{min\_conf}$, then the association rule $L'\Rightarrow L-L'$ is a credible rule.

The following example illustrates the process. Suppose all items in the database constitute an itemset, and the records of transaction database $D$ are as shown in Table 1.

Table 1 Transaction database $D$.

| TID | Item set |
| --- | --- |
| T01 | a, b, c |
| T02 | a, c, d |
| T03 | a, b |
| T04 | c, d |
| T05 | a, b, d |

From Table 1 it is easy to compute all 1-, 2-, 3-, and 4-itemsets; these itemsets and their supports are listed in Table 2.

Table 2 All itemsets and support.

| Item set | Support (%) | Item set | Support (%) |
| --- | --- | --- | --- |
| {a} | 88 | {b, c} | 21 |
| {b} | 65 | {b, d} | 20 |
| {c} | 65 | {c, d} | 45 |
| {d} | 65 | {a, b, c} | 25 |
| {a, b} | 65 | {a, b, d} | 25 |
| {a, c} | 48 | {b, c, d} | 25 |
| {a, d} | 45 | {a, b, c, d} | 0 |

Given the two thresholds min_sup = 30% and min_conf = 70%, Table 2 yields the frequent itemsets {a}, {b}, {c}, {d}, {a, b}, {a, c}, {a, d}, and {c, d}. For each frequent itemset containing more than one item, association rules can be mined; for example, the two rules {a} ⇒ {b} and {b} ⇒ {a} can be mined from the frequent itemset {a, b}. Examining every such frequent itemset gives all the association rules shown in Table 3.

Table 3 All association rules.

| Association rule | Confidence (%) |
| --- | --- |
| {a} ⇒ {b} | 77 |
| {b} ⇒ {a} | 90 |
| {a} ⇒ {c} | 55 |
| {c} ⇒ {a} | 65 |
| {a} ⇒ {d} | 51 |
| {d} ⇒ {a} | 65 |
| {c} ⇒ {d} | 65 |
| {d} ⇒ {c} | 66 |

Since min_conf = 70%, the strong association rules in Table 3 are {a} ⇒ {b} and {b} ⇒ {a}, and the mining of association rules is complete.

### 3.2. Apriori Algorithm

The Apriori algorithm is the classic algorithm for association rule mining; its principle is easy to understand and its implementation is concise. The idea is that the more often two items appear together in the transaction data, the stronger the correlation between them. The algorithm scans the database several times. The first scan counts the occurrences of each single item and deletes the items that do not meet the minimum support. Before the second scan, the items retained by the first scan are combined in pairs; the second scan then counts the occurrences of these combinations and deletes those that do not meet the minimum support. Combination and scanning are repeated in this way until no new combination is generated, at which point the Apriori algorithm terminates. Figure 4 shows the mining process of the Apriori algorithm when the minimum support is 0.2.

Figure 4 Example diagram of the mining process of the Apriori algorithm.
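As a concrete illustration of the two-stage procedure described above, the short sketch below enumerates itemsets over the Table 1 transactions by brute force, keeps those that meet min_sup, and derives the rules that meet min_conf. It illustrates the support/confidence definitions only and is not the paper's optimized implementation; because it recomputes supports from the five transactions, its percentages may differ slightly from the values printed in Tables 2 and 3.

```python
from itertools import combinations

transactions = [
    {"a", "b", "c"},   # T01
    {"a", "c", "d"},   # T02
    {"a", "b"},        # T03
    {"c", "d"},        # T04
    {"a", "b", "d"},   # T05
]
min_sup, min_conf = 0.30, 0.70

def support(itemset):
    """Fraction of transactions containing every item of the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

# Stage 1: frequent itemsets (brute force over all non-empty candidate sets).
items = sorted(set().union(*transactions))
frequent = {
    frozenset(c): support(frozenset(c))
    for r in range(1, len(items) + 1)
    for c in combinations(items, r)
    if support(frozenset(c)) >= min_sup
}

# Stage 2: rules L' => L - L' with confidence Support(L)/Support(L') >= min_conf.
for L, sup_L in frequent.items():
    if len(L) < 2:
        continue
    for r in range(1, len(L)):
        for lhs in map(frozenset, combinations(L, r)):
            conf = sup_L / support(lhs)
            if conf >= min_conf:
                print(set(lhs), "=>", set(L - lhs), f"conf={conf:.2f}")
```

The Apriori algorithm replaces the brute-force Stage 1 with level-wise candidate generation, pruning any candidate that has an infrequent subset before it is counted.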
### 3.3. The Redundant Processing Method of Association Rules Based on Hypergraph

The rules obtained by association rule mining may include items that users do not need, may coincide with information that users already know, or may be different expressions of the same rule. Such rules bring no new information to users and offer no effective help, and in most cases the number of redundant rules is larger than the number of meaningful rules.

Redundant rules generally fall into two types. The first is the subordination rule: if the conclusion of rule $X_i$ is the same as that of rule $X_j$ and the premise of $X_i$ already satisfies a sufficient condition for $X_j$ to hold, then $X_j$ is redundant; repeated rules can be regarded as a special case of subordinate rules. The second is the repeated-path rule: if selectors $X_i$ and $X_j$ both exist in the rule base and there are two distinct paths between $X_i$ and $X_j$, redundancy can be judged by this principle.

The subordination principle can be expressed by formula (4):

$$X_2\longrightarrow X_4,\qquad X_2X_3\longrightarrow X_4. \tag{4}$$

The repeated-path rule can be expressed by formula (5):

$$X_1\longrightarrow X_2X_3\longrightarrow X_4,\qquad X_1\longrightarrow X_5\longrightarrow X_4. \tag{5}$$

The adjacency matrix of a directed hypergraph completely describes the adjacency between the nodes of the graph. In a directed hypergraph built from association rules, the association between the items being checked and the association rules can be expressed by the adjacency matrix, and, based on the concept of redundant rules in circuit theory and its related characteristics, a redundancy checking method can be realized on the directed hypergraph. Consider a graph

$$G=\left(V(G),E(G)\right). \tag{6}$$

A path in $G$ is a finite, nonempty sequence

$$W=v_0e_1v_1e_2\cdots e_kv_k, \tag{7}$$

that is, an alternating sequence of vertices and edges in which $e_i\in E(G)$, $v_j\in V(G)$, and $e_i$ is incident with $v_{i-1}$ and $v_i$, for $1\leq i\leq k$ and $0\leq j\leq k$. $W$ is denoted a $(v_0,v_k)$ path; the vertices $v_0$ and $v_k$ are called its starting point and end point, $v_1,v_2,\ldots,v_{k-1}$ are its inner vertices, and $k$ is its length.

From graph theory, redundant rules can be handled by decomposing the hypergraph into its connected components and converting each component into a spanning tree, because every hyperedge of the hypergraph represents an association rule: each edge that has to be removed to turn a connected component into a spanning tree corresponds to a redundant rule.

A hypergraph is given as

$$H=(X,E),\quad X=\{X_1,X_2,\ldots,X_n\},\quad E=\{E_1,E_2,\ldots,E_m\}. \tag{8}$$

If $H$ is connected and does not contain any hyperloops, then $H$ is called a hypertree.
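The subordination principle of formula (4) says that a rule is redundant when another rule with the same conclusion has a premise that is a subset of its premise. The following is a minimal sketch of such a check; the sample rules and helper names are illustrative and are not the paper's implementation.

```python
# Each rule is a (premise_set, conclusion_set) pair.
rules = [
    ({"X2"}, {"X4"}),
    ({"X2", "X3"}, {"X4"}),   # subordinate to the rule above -> redundant
    ({"X1"}, {"X5"}),
]

def redundant_by_subordination(rules):
    """Return the rules whose premise strictly contains the premise of another
    rule with the same conclusion (formula (4): X2 -> X4 makes X2X3 -> X4 redundant)."""
    flagged = []
    for i, (premise_i, conclusion_i) in enumerate(rules):
        for j, (premise_j, conclusion_j) in enumerate(rules):
            if i != j and conclusion_i == conclusion_j and premise_j < premise_i:
                flagged.append((premise_i, conclusion_i))
                break
    return flagged

print(redundant_by_subordination(rules))
```

The repeated-path principle of formula (5) would instead be detected on the directed hypergraph itself, for example by finding more than one path between the same pair of vertices while reducing each connected component to a spanning tree.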
## 4. Result Analysis and Discussion

### 4.1. Model Establishment

In the Markov logic network, this paper regards the items in the transaction dataset as nodes of the network, so that the weight between two nodes can be regarded as the degree of association between the two items. Attaching weights to the rules makes the knowledge base of first-order predicate logic less rigid: the higher the weight attached to a rule, the greater the restriction it places on each grounding in the Markov logic network, and when the weights of all rules in the knowledge base are infinite, the Markov logic network coincides with the standard first-order predicate logic reasoning framework.

To obtain the parameter values from the data, the weights must be adjusted. The most common way to handle this kind of optimization problem is gradient descent. Gradient descent methods generally include batch gradient descent, mini-batch gradient descent, and stochastic gradient descent. Batch gradient descent moves toward the lowest point along the direction of the descending gradient (or toward the highest point along the ascending gradient), where the gradient direction is obtained by differentiating the objective function, and the whole training set is used for the update in each iteration. Different from batch gradient descent, mini-batch gradient descent selects a subset of the training samples for the update in each iteration, and stochastic gradient descent randomly selects a single training sample in each iteration.
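To illustrate the difference between the gradient descent variants mentioned above, the toy sketch below updates a single weight under a squared-error objective with batch, mini-batch, and stochastic gradient descent. It is a generic illustration only, not the Markov logic network weight-learning code used in the experiments, and the data and learning rate are placeholders.

```python
import random

# Toy data: y is roughly 2*x, and the loss for one sample is 0.5*(w*x - y)^2.
data = [(x, 2.0 * x + random.uniform(-0.1, 0.1)) for x in range(1, 21)]

def grad(w, batch):
    """Gradient of the squared error averaged over the batch."""
    return sum((w * x - y) * x for x, y in batch) / len(batch)

def gradient_descent(w=0.0, lr=0.01, epochs=50, batch_size=1):
    for _ in range(epochs):
        random.shuffle(data)
        # batch_size = len(data): batch GD; 1: stochastic GD; in between: mini-batch.
        for start in range(0, len(data), batch_size):
            batch = data[start:start + batch_size]
            w -= lr * grad(w, batch)
    return w

print("stochastic:", gradient_descent(batch_size=1))
print("mini-batch:", gradient_descent(batch_size=5))
print("batch     :", gradient_descent(batch_size=len(data)))
```

All three variants converge toward the same weight here; they differ only in how much data each update uses, which is the trade-off between per-step cost and gradient noise discussed above.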
### 4.2. Experimental Results and Analysis

To verify its correctness, the parameters of the Markov logic network framework with association rules are learned from real data to see whether the model converges. This paper uses a grocery store dataset containing 75 variables and 3,956 records to learn the parameters of the Markov logic network framework model of association rules. Using stochastic gradient descent for this parameter learning yields the convergence curve of the model. Figure 5 indicates that the algorithm begins to converge when the number of iterations reaches 1,000, showing that the Markov logic network model of association rules converges.

Figure 5 Convergence diagram of the Markov logic network model algorithm.

This chapter then analyzes a small example dataset, the BASKETS1n data supplied with SPSS Clementine 11.1, which contains 18 fields and 1,000 records covering customer information, total purchase amount, and purchased items; the items include vegetables, fruit, meat, fish, and soft drinks. Only the purchase-item information is used for the association rule analysis. After processing, part of the data is shown in Table 4.

Table 4 Part of the BASKETS data sheet (columns: fruit/veg, fresh meat, dairy, canned veg, canned meat, frozen meal, beer, wine, soft drink, fish, confectionery; each row is a 0/1 purchase-indicator vector, and only part of the processed data is shown).

The experiment then compares the time efficiency of the Apriori algorithm and the improved algorithm. The dataset used is the retail.dat basket data of a Belgian retail store obtained from the CSDN blog, which contains 88,162 records and 16,470 binary attributes. The test results are shown in Figure 6: the execution time of the improved algorithm is greatly reduced compared with the Apriori algorithm.

Figure 6 Comparison of the execution time of the two algorithms for different numbers of records.
## 5. Conclusion

In today's big data era, association rule mining is a popular research direction: more and more people study association rules, and they are applied in many areas. Research on association rules has produced many results and good outcomes, but there are still some shortcomings. To address these problems, this paper proposes a Markov logic network model of association rules.
Most current association rule mining algorithms are constructed under a unified framework model. The main contents of this paper include the following aspects: (1) Combining Markov logic networks and association rules, a new model, the Markov logic network framework of association rules, is proposed. (2) The stochastic gradient method is employed to learn the Markov logic network framework; comparing the prediction accuracy of the Markov logic network framework model with that of the traditional association rule mining algorithm, the Apriori algorithm, on different datasets shows that the proposed model achieves higher accuracy than the traditional association rule algorithm. (3) For the redundancy check of association rules based on the hypergraph, the association rules are converted into a hypergraph and a new matrix concept is defined, offering a new way to test for cycles and connected components in the hypergraph.

---

*Source: 1007464-2022-09-16.xml*
2022
# An Improved BP Neural Network Algorithm for the Evaluation System of Innovation and Entrepreneurship Education in Colleges and Universities

**Authors:** Xuying Sun; Yu Zhang
**Journal:** Mobile Information Systems (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1007538

---

## Abstract

The innovation and entrepreneurship ability of college students is an important index for evaluating the training quality of universities, and domestic scholars have begun to study how to cultivate this spirit and ability in students. With the rapid development of the tourism industry and the continuous emergence of new "tourism +" business forms, there is an urgent demand for professionals with outstanding innovation and entrepreneurship ability, and China's education field urgently needs a system that can scientifically evaluate teaching quality. The purpose of this work is to enrich the theoretical evaluation methods available to universities. Taking S University as the research sample, a relevant evaluation index system is set up, and on this basis an evaluation model based on a BP neural network is established, providing a basis for the evaluation and cultivation work of universities. According to the chosen evaluation indicators, this paper constructs the main framework of teaching quality evaluation in colleges and universities. Seven representative universities in China are randomly selected, six of which serve as samples and one as the research target; MATLAB is used to calculate the scores of each index, the current quality situation of the target university is analyzed, and corresponding improvement suggestions are proposed. The analysis of the current education at the target university shows that both innovation and entrepreneurship knowledge and professional knowledge are taken into account, that the academic achievements are remarkable, and that a preliminary education system has formed, but it also shows that educational practicality is low, and corresponding suggestions for this problem are put forward. If the evaluation system is put into practical application, it will improve the level of education for cultivating innovative and entrepreneurial talents in the tourism major of universities.

---

## Body

## 1. Introduction

Education in innovation and entrepreneurship helps college students realize the value of life: students should have not only theoretical knowledge but also entrepreneurial talent, and such education helps them master entrepreneurial methods and develop the psychology and will needed to overcome difficulties and take risks.

Innovation and entrepreneurship in university-related industries provide a huge entrepreneurial space for students, and universities can effectively support the development of the entrepreneurial economy [1]. At present, however, entrepreneurship development at home and abroad has not met expectations: the entrepreneurship rate in the tourism industry is low, and the education offered in the tourism management major of universities is not conducive to entrepreneurship or tourism development. This requires colleges and universities to reform the tourism management major, pay more attention to cultivating students' entrepreneurial awareness, and teach students entrepreneurial skills and knowledge related to tourism.
The aim is to improve the innovation and entrepreneurship ability of students majoring in tourism management, stimulate their willingness to start their own businesses after graduation, and encourage them to build a platform for those businesses. Improving the quality of education in universities can reduce the employment pressure on college students and expand the development space of both the students and the tourism industry [2].

The demand for such talents is increasing day by day, which is also an urgent need of China's economic development, yet a scientific and systematic evaluation method for tourism education has not yet been formed. To solve this problem, this study starts from the nature of education, analyzes the influence of the indicators, and adopts a BP neural network method to establish a complete evaluation system for the education of tourism majors in universities [3].

## 2. State of the Art

The American Professor Tismon is known as a leader in entrepreneurship education. His research covers innovative curriculum development, venture capital, venture financing, entrepreneurial management, and other topics, with Parkson Business School as the place where the work was promoted. The results have obvious characteristics: they were forward-looking during the transition from traditional industry to new industry; the curriculum system is arranged systematically, with reasonable arrangement of entrepreneurs, business plans and resource supply, venture financing, and development speed, which strengthens the entrepreneurial ability of students; the case method is used to stimulate students' enthusiasm for thinking through problems; and students are provided with opportunities for entrepreneurial practice [4].

During the 1990s, UNESCO held many meetings to discuss higher education in the world and how it should meet the needs of development in the 21st century. It made clear the idea that "a degree is not equal to a job," emphasized that graduates should no longer be purely job seekers but should become job creators, and proposed that entrepreneurship should be part of the content given to university graduates, suggesting that students' entrepreneurial skills are as important as initiative and creativity [5].

After the 1990s, the perspective of entrepreneurship education in the United States and Canada shifted from the improvement of individual ability to an emphasis on the team, the company, and the industry, treating entrepreneurship as a management style. Its role is no longer only to establish new enterprises; large enterprises also need this quality. The National Education Policy published by India clearly states that students should be cultivated in the "attitude, knowledge and skills necessary for self-employment" [6].

In this study, the retrieval function of CNKI was used, with "journal" as the literature source, "innovation and entrepreneurship" and "education" as keywords, and only "core journal" and "CSSCI" sources selected; 200 relevant works were retrieved. With the same keywords and "master and doctor" theses as the literature source, 13 related articles were retrieved, as illustrated in Figure 1 [7].

Figure 1 Distribution of relevant literature.

At the level of research objects, vocational college students and undergraduates are the main focus of scholars, who concentrate on exploring training modes and constructing systems; for example, Giancristofaro et al. established an undergraduate education model by using a project-exploration model. Giancristofaro et al.
proposed and constructed an education system of "one core, three platforms, and nine modules" [8]. Research evaluating the curriculum system itself is very limited. Samuel et al. proposed a "four-in-one" evaluation system of education as theoretical guidance, and an education model of "mentor + project + team" was also proposed by Samuel et al., although its effect remains to be discussed [9].

The current state of this education in China reflects that, although it started late, it has developed rapidly: in the process of research, the fields explored have expanded and the level of exploration has deepened. In general, research in China still needs to be further integrated and made systematic, and it needs to be constantly adapted to the actual situation in China.

In Warner et al.'s work, innovation education refers to education that draws on the positive influences of heredity and environment and the leading role of teaching to bring out students' subjective initiative in cognition and practice, to foster students' consciousness of creation as the main body of learning, to cultivate the innovative spirit and innovative ability, and to form an innovative personality suited to students' individual development [10]. Geisner et al. believe that innovative education is an educational concept that emerged with the rise of the knowledge economy, based on creation and aiming to cultivate an innovative spirit in students' consciousness, ability, and personality [11]. Bejerholm et al. believe that innovative education is an educational concept that emerges with the knowledge economy and is an education model that shapes innovative consciousness and innovative ability through the means of the modern university [12]. Dillahunt-Aspillaga et al. from the Zhenjiang Institute of Education and Science understood entrepreneurship education from a functional perspective and believed that it could transform the new labor force from single-skilled to composite and from operational to intelligent, an important measure for the new generation of students to meet future challenges and adapt to market demand [13]. Zivin et al. believe that the cultivation of innovative talents should start from the training objectives, so that the attention paid to this education is reflected and its personal and social value can be brought into full play [14]. Wu et al. focused on the main line of what to cultivate, how to cultivate, and whom to cultivate, and opened up a maker education and teaching mode of "integration of doing, learning, and teaching" [15]. Based on "Internet +" and policies for secondary vocational students, this paper puts forward a set of targeted talent training strategies.

## 3. Methodology

### 3.1. Introduction to Innovation Education and Entrepreneurship Education

Both the educator and the educated need basic innovation spirit, innovation ability, and innovation personality. Through reflection on traditional education and the construction of applicable theories and models, education should aim to tap people's creative potential, carry forward their subject spirit, and promote the harmonious development of personality. Innovative education, which appears with the rise of the knowledge economy, is based on creation and aims to cultivate an innovative spirit in students' consciousness, ability, and personality [16]. Innovative education is an educational idea that comes into being with the information age and the knowledge economy.
It is an educational mode that shapes students' innovative spirit through the means of the modern university. In contrast to traditional "acceptance" education, it insists on a "creation orientation" and focuses on cultivating students' ability of secondary discovery and practice. It is a unified form of idea and practice, the core of modern education as well as the reflection and sublimation of the traditional educational idea, and at the same time it is an educational activity whose practice cultivates innovation ability [17].

Entrepreneurship education refers to educational activities oriented toward starting a career; it emphasizes teaching reform and cultivates innovative ability and entrepreneurial consciousness, aiming at comprehensive qualities such as spirit and knowledge. Entrepreneurship education can promote the development of students' career ambition, enterprising spirit, pioneering spirit, innovative spirit, and so on [18]. Compared with employment-oriented education, it is a new educational concept and mode that focuses on cultivating students' entrepreneurship and ability; simply put, it gives students the qualities and abilities needed in the process of entrepreneurship. In a broad sense, the purpose of entrepreneurship education is to stimulate students' entrepreneurial consciousness, and its biggest goal is to shape potential successful entrepreneurs; in a narrow sense, it is the training that cultivates the knowledge, qualities, and skills needed for independent entrepreneurship [19]. It cultivates people with entrepreneurial qualities. In the narrow sense, it transforms students from job seekers into entrepreneurs and provides the comprehensive abilities needed in that transformation; in the broad sense, it improves, through the relevant curriculum system, the overall qualities students need in the entrepreneurial process so that they become pioneering people with innovative spirit, entrepreneurial consciousness, a risk-taking spirit, a stable mentality, and sound decision making, and more attention is paid to the process of education. Entrepreneurship education in the narrow sense is a kind of vocational education whose purpose is to let learners successfully establish enterprises [20].

### 3.2. Introduction to Artificial Neural Network

An artificial neural network (ANN) is a model system composed of a large number of processing units (neurons). Such a system has strong independent, nonlinear, and nonlocal characteristics. By simulating the way the brain's neural network processes and memorizes information, it attempts to design a new kind of machine with the information processing ability of the human brain.

An artificial neural network takes the neuron as its basic processing unit. A neuron is a nonlinear device, and its structure is shown in Figure 2.

Figure 2 General description of a neuron.

In the figure, the $x_i$ are the input signals, the $w_{ij}$ are the connection weights, $\theta_j$ is the threshold, and $s_j$ is the external input signal from the set $S$.
The transformation performed by the $j$th neuron can be described as

$$y_j=f\left(\sum_i w_{ij}x_i-\theta_j+s_j\right). \tag{1}$$

The running process of the network is calculated layer by layer:

$$\mathrm{net}_{jp}^{\,l}=\sum_{i=1}^{n_{l-1}}W_{ij}^{\,l}\,O_{ip}^{\,l-1}, \tag{2}$$

$$O_{jp}^{\,l}=f_l\!\left(\mathrm{net}_{jp}^{\,l}\right). \tag{3}$$

The error energy function of the BP network is

$$E_p=\sum_{i=1}^{n}\phi\!\left(e_{i,p}\right)=\frac{1}{2}\sum_{i=1}^{n}\left(y_{i,p}-\hat{y}_{i,p}\right)^2. \tag{4}$$

The data are normalized to lie between 0 and 1 and the expected output values are determined:

$$\hat{y}_j=f\!\left(\sum_{i=1}^{n}w_{ij}x_i-\theta_j\right), \tag{5}$$

$$\hat{z}_k=f\!\left(\sum_{j=1}^{n}w_{jk}y_j-\theta_k\right), \tag{6}$$

and the weight adjustments are

$$W_{jk}(t+1)=W_{jk}(t)+\eta\,\delta_k\,V_j, \tag{7}$$

$$W_{ij}(t+1)=W_{ij}(t)+\eta\,\delta_j\,X_i, \tag{8}$$

with

$$\delta_k=\left(Z_k-\hat{Z}_k\right)\hat{Z}_k\left(1-\hat{Z}_k\right), \tag{9}$$

$$\delta_j=y_j\left(1-y_j\right)\sum_{k=0}^{L-1}\delta_k\,W_{jk}. \tag{10}$$

The learning and training of a BP network is a process of error back-propagation and correction. The total error $E$ is calculated; if $E\leq\varepsilon$, learning stops, and otherwise the computation returns to equation (3). In practical network design, a learning rate $\eta$ that is too small makes training slow, while one that is too large makes the network oscillate, so a momentum term $\alpha$ ($0<\alpha<1$) can be added, giving

$$w_{jk}(t+1)=w_{jk}(t)+\eta\,\delta_k\,y_j+\alpha\,\Delta W_{jk}, \tag{11}$$

$$W_{ij}(t+1)=W_{ij}(t)+\eta\,\delta_j\,y_i+\alpha\,\Delta W_{ij}. \tag{12}$$

The BP algorithm is iterative: each round adjusts $w$ again, and the iteration continues until the error meets the requirement.

The BP network is a multilayer feedforward network with a very strong nonlinear mapping capability. In this model, each layer is connected to the neurons of the adjacent layers, as shown in Figure 3.

Figure 3 Neural network learning flow chart.

In essence, the standard BP learning algorithm takes the sum of the squared network errors as the objective function and uses the gradient method to minimize it. The basic principle is to propagate the signal forward through the network, transmit the error back, and adjust the weights so as to minimize the error during the learning process (as shown in Figure 3).

The research framework of this article is shown in Figure 4.

Figure 4 Entrepreneurship system evaluation and research framework.
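The update rules in equations (1)–(12) can be read as a standard three-layer back-propagation pass with a momentum term. The following NumPy sketch implements that reading for a single hidden layer with sigmoid activations; the network sizes, placeholder data, and hyperparameters are assumptions for illustration and are not the evaluation model of Section 4.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy shapes: n inputs -> q hidden -> m outputs, as in the three-layer BP model.
n, q, m = 45, 8, 1
W1, W2 = rng.normal(0, 0.1, (n, q)), rng.normal(0, 0.1, (q, m))
dW1_prev, dW2_prev = np.zeros_like(W1), np.zeros_like(W2)
eta, alpha = 0.04, 0.9            # learning rate and momentum (eq. (11)-(12))

X = rng.random((96, n))           # placeholder: 96 normalized questionnaires
T = rng.random((96, m))           # placeholder: expected comprehensive scores

for epoch in range(200):
    # Forward pass (eq. (5)-(6); thresholds are folded into the weights here).
    H = sigmoid(X @ W1)
    Y = sigmoid(H @ W2)

    # Error terms (eq. (9)-(10)) for the squared-error objective (eq. (4)).
    delta_out = (T - Y) * Y * (1.0 - Y)
    delta_hid = H * (1.0 - H) * (delta_out @ W2.T)

    # Weight updates with momentum (eq. (11)-(12)).
    dW2 = eta * H.T @ delta_out + alpha * dW2_prev
    dW1 = eta * X.T @ delta_hid + alpha * dW1_prev
    W2, W1 = W2 + dW2, W1 + dW1
    dW2_prev, dW1_prev = dW2, dW1

print("final mean squared error:", float(np.mean((T - Y) ** 2)))
```

Training stops in practice when the total error falls below the tolerance ε or the maximum number of epochs is reached, exactly as described after equation (10).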
## 4. Result Analysis and Discussion

### 4.1. Construction of Evaluation Index System of Education

Referring to the teaching work evaluation index system of ordinary universities, the general content of an open questionnaire was extracted and result indexes were selected to reflect the scientific, comprehensive, accurate, and operational principles of this research. Five education evaluation experts and school supervision experts were then interviewed; after listening to their preliminary opinions on the tourism specialty in universities, their suggestions were adopted and a quality evaluation system for the education of the tourism specialty in universities was preliminarily formulated. Different subsystems should be set up for the teaching quality evaluation of tourism education in universities, as shown in Figure 5.

Figure 5 Dimensions of teaching quality evaluation of tourism education in universities.

This study conducted a survey: open questionnaires were issued, indicators were initially screened, and the final questionnaire was generated under the guidance of experts. Rigorous statistical methods were used to analyze the questionnaire for the tourism major of S University. In the next step, 100 copies of the questionnaire were randomly distributed in the university town where S University is located, and 96 valid copies were recovered, a response rate of 96%. The valid questionnaire data were input into SPSS for analysis.

#### 4.1.1. Reliability Analysis

According to the reliability analysis in Table 1, the Cronbach's alpha coefficient is higher than 0.8, indicating that the reliability of the developed questionnaire is ideal, the indicators are internally consistent, and the questionnaire is reliable.

Table 1 Reliability analysis table.

| Cronbach's alpha | Cronbach's alpha based on standardized items | No. of items |
| --- | --- | --- |
| 0.873 | 0.863 | 25 |

#### 4.1.2. Validity Analysis

Generally speaking, the measure of sampling adequacy reflects whether the questionnaire sample is adequate. The KMO and Bartlett results of this survey are shown in Table 2: the KMO coefficient of the questionnaire is 0.915 and the significance probability is 0.00 < 0.01, indicating that there are common factors among the questionnaire variables and that the data are suitable for factor analysis.

Table 2 Validity analysis.

| Kaiser-Meyer-Olkin measure of sampling adequacy | 0.915 |
| --- | --- |
| Bartlett's test of sphericity | 11015.038 |
| df | 378 |
| Sig. | 0.000 |

The characteristics of education in tourism majors at different levels are comprehensively analyzed by following the principles of strategic goal orientation, comprehensiveness and completeness, objectivity and scientific rigor, dynamism and flexibility, and systematization and operability. According to the students' personalities, their own characteristics, and other factors, the index design and the evaluation method are determined.
The resulting system is shown in Table 3.

Table 3 Teaching quality evaluation system of university tourism education (excerpt).

- Evaluation index system of tourism education — University link
  - Soft environment: entrepreneurial community (X1); degree of implementation of national policy (X2); number of entrepreneurship competitions held (X3); number of school-enterprise cooperations (X4); number of school-enterprise cooperation projects (X5)
  - Hardware support: number of innovation and entrepreneurship institutions (X6); percentage of students starting a business after taking entrepreneurship education courses (X7); student coverage of entrepreneurship funds (X8); service rate to students of infrastructure such as the entrepreneurship park (X9); number of students received by the entrepreneurship practice base (X10)
- Evaluation index system of tourism education — Teaching link
  - Curriculum design and teaching method: conversion rate of innovation achievements (X11); ratio of practical courses to theoretical courses (X12); participation rate of practical courses (X13); core curriculum ratio (X14); entrepreneurs involved in teaching (X15); degree of penetration of business management in the curriculum (X16); cross-disciplinary curriculum opening rate (X17)

### 4.2. Construction of BP Neural Network in the Quality Evaluation Model of Education

There are 45 evaluation indicators for the quality of tourism education in S University, so the number of input-layer nodes is 45.

Hidden layer nodes. The number of hidden-layer nodes is the basis of the constructed BP network. The mapping between the input and output layers and the characteristics of the sample data both affect the fault tolerance and generalization of the optimal network (and thus the test accuracy). The following empirical formula is generally used as a starting point:

$$q=\sqrt{n+m}+a, \tag{13}$$

where $n$ and $m$ are the numbers of input and output nodes and $a$ is an adjustment constant.

Output layer node. The evaluation result is the output node; in this case there is one node, the comprehensive score of the tourism major of S University. The grading standard for the comprehensive score is given in Table 4.

Table 4 Evaluation and grading standards.

| Comprehensive evaluation | 85–100 | 75–85 | 65–75 | 55–65 | Less than 55 |
| --- | --- | --- | --- | --- | --- |
| Grade | Superior | Good | Middle | Qualified | Unqualified |

There is at present no definitive theoretical basis for choosing the number of hidden neurons. When the network must adapt to new data, reducing the number of hidden-layer nodes improves the training speed. Therefore, on the premise of meeting the learning accuracy, a trial-and-error method is adopted: if the specified number of training rounds is exceeded or the convergence conditions are not met, training is stopped. According to the comparison in Table 5, the number of hidden-layer nodes is determined to be 8.

Table 5 Convergence comparison.

| Hidden layer elements | 3 | 4 | 5 | 6 | 7 | 8 |
| --- | --- | --- | --- | --- | --- | --- |
| Number of training times | 31 | 10 | 15 | 6 | 10 | 5 |
| Error | 9.7305 | 14.0013 | 13.9831 | 42.3544 | 47.3114 | 22.0122 |

The learning rate can only be selected on the basis of experience; in this model it lies between 0.005 and 0.9, and according to the learning results it is finally set to 0.04. Based on the indicators of each subsystem, the output is the teaching quality evaluation result of tourism innovation and entrepreneurship education, divided into superior, good, middle, qualified, and unqualified. The network is therefore a three-layer BP network with a single output node whose value range is [0, 1].
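A small sketch of the two rules of thumb used above follows: the empirical hidden-node formula, reconstructed here as q = sqrt(n + m) + a (a common reading of equation (13), since the source rendering is garbled), and the score-to-grade mapping of Table 4. The helper names and the choice of the constant a are assumptions for illustration.

```python
import math

def hidden_nodes(n_inputs, n_outputs, a=1):
    """Empirical starting point q = sqrt(n + m) + a (assumed reading of eq. (13));
    the final value is then refined by trial and error as in Table 5."""
    return round(math.sqrt(n_inputs + n_outputs) + a)

def grade(score):
    """Map a comprehensive score on [0, 100] to the Table 4 grade."""
    if score >= 85:
        return "Superior"
    if score >= 75:
        return "Good"
    if score >= 65:
        return "Middle"
    if score >= 55:
        return "Qualified"
    return "Unqualified"

print(hidden_nodes(45, 1))      # 45 indicators in, 1 comprehensive score out -> 8
print(grade(0.7261 * 100))      # network output rescaled from [0, 1] -> "Middle"
```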
### 4.3. Application of BP Neural Network in Quality Evaluation of Education

The neural network toolbox (NNT) of MATLAB 7.0 is used for modeling in this paper. MATLAB, short for Matrix Laboratory, is a set of scientific and engineering computing software based on matrix calculation developed by MathWorks in the 1980s. It offers numerical calculation, visualization, and programming functions and can draw on a variety of toolboxes to solve special scientific and engineering calculation problems; its calculation capability is strong and its programming efficiency is high. Used with the neural network toolbox, MATLAB provides analysis and design functions for neural network systems, directly callable functions, graphics, and simulation tools, simplifies the weight training process, and is excellent software for neural network training.

According to the tourism innovation and entrepreneurship education teaching system above, and after data processing, a neural network model is established for the evaluation indexes of the scale. Following the standardization requirements of the index system, the collected sample data are standardized: the scoring data on [0, 100] are converted into values between [0, 1], which is convenient for the neural network. The calculation with the MATLAB neural network toolbox takes three steps:

(1) Initialization: the weights and initial values are set through the init function, using the init() command format, where the returned Net represents the initialized neural network. The init() function sets weights and thresholds according to its arguments net.initFcn and net.initParam; for BP networks, the value of net.initFcn is initwb, as shown in Figure 6.

(2) Network training: the train function is used to train the network, as shown in Figure 7.

(3) Network simulation: the sim function is applied to the trained network to simulate it on the given data.

Figure 6 Changes in accuracy of the algorithm under different iterations.

Figure 7 Neural network training.

The test data are input into the trained BP model and compared with the expected values; the resulting errors are shown in Table 6.

Table 6 Error detection table.

| Test sample | C5 | C6 | C7 |
| --- | --- | --- | --- |
| Expected output | 0.68 | 0.75 | 0.72 |
| Network output | 0.6801 | 0.7123 | 0.7221 |
| Error value | 0.0001 | 0.0377 | 0.0021 |

The data collected on the university's education performance were then evaluated with the BP network trained and improved above, and the comprehensive network output value was 0.7261, which shows that the education in the university is at a medium level; the outputs of the individual indicators are listed in Table 7.

Table 7 Output results of each indicator.

| Indicator | Result | Indicator | Result | Indicator | Result | Indicator | Result | Indicator | Result |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| X1 | 0.80 | X11 | 0.72 | X21 | 0.40 | X31 | 0.20 | X41 | 0.65 |
| X2 | 0.61 | X12 | 0.49 | X22 | 0.60 | X32 | 0.42 | X42 | 0.87 |
| X3 | 0.71 | X13 | 0.96 | X23 | 0.72 | X33 | 0.91 | X43 | 0.92 |
| X4 | 0.84 | X14 | 0.51 | X24 | 0.81 | X34 | 0.87 | X44 | 0.67 |
| X5 | 0.96 | X15 | 0.81 | X25 | 0.45 | X35 | 0.62 | X45 | 0.97 |
| X6 | 0.81 | X16 | 0.67 | X26 | 0.62 | X36 | 0.90 |  |  |
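The preprocessing and error-checking steps described above, rescaling [0, 100] scores to [0, 1] and comparing network outputs with expected values as in Table 6, can be sketched as follows. This mirrors the described MATLAB workflow in Python; the rescaling bounds and the example raw scores are illustrative, while the expected/output values are taken from Table 6.

```python
def normalize(scores, lo=0.0, hi=100.0):
    """Rescale raw [0, 100] questionnaire scores to [0, 1] for the network."""
    return [(s - lo) / (hi - lo) for s in scores]

expected = {"C5": 0.68, "C6": 0.75, "C7": 0.72}          # Table 6, expected output
network  = {"C5": 0.6801, "C6": 0.7123, "C7": 0.7221}    # Table 6, network output

# Error check, reproducing the error row of Table 6.
for sample, target in expected.items():
    err = abs(network[sample] - target)
    print(f"{sample}: expected={target:.4f} output={network[sample]:.4f} error={err:.4f}")

# Example rescaling of raw scores (illustrative values only).
print(normalize([80, 61, 71, 84, 96]))
```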
The choice of the number of hidden neurons is largely empirical; there is no rigorous theoretical basis for it at present. The grading standards used for the evaluation output are shown in Table 4.

Table 4: Evaluation and grading standards.

| Comprehensive evaluation | 85–100 | 75–85 | 65–75 | 55–65 | Less than 55 |
| --- | --- | --- | --- | --- | --- |
| Grade | Superior | Good | Middle | Qualified | Unqualified |

When the number of hidden layer nodes is kept small, the network adapts better to new data and the training speed improves. Therefore, on the premise of meeting the learning accuracy, the "trial-and-error method" is adopted: if the network has not converged within the specified number of training iterations, training is stopped and another candidate is tried. According to the evaluation system described above and the comparison in Table 5, the number of hidden layer nodes is determined to be 8.

Table 5: Convergence comparison.

| Hidden layer elements | 3 | 4 | 5 | 6 | 7 | 8 |
| --- | --- | --- | --- | --- | --- | --- |
| Number of training times | 31 | 10 | 15 | 6 | 10 | 5 |
| Error | 9.7305 | 14.0013 | 13.9831 | 42.3544 | 47.3114 | 22.0122 |

The learning rate can likewise only be selected on the basis of experience; in this model it is searched between 0.005 and 0.9, and according to the learning results it is finally set to 0.04.

The output of the network is the evaluation result for the quality of tourism innovation and entrepreneurship teaching, divided into superior, good, middle, qualified, and unqualified. The model is therefore a three-layer BP network with a single output node whose value lies in the range [0, 1].

## 4.3. Application of the BP Neural Network in Quality Evaluation of Education

The Neural Network Toolbox (NNT) of Matlab 7.0 is used for modeling in this paper. MATLAB (short for Matrix Laboratory) is a suite of scientific and engineering computing software based on matrix calculation, developed by MathWorks in the 1980s. It provides numerical calculation, visualization, and programming functions, together with a variety of toolboxes for special scientific and engineering computing problems; its computational power is strong and its programming efficiency is high. Used together with the Neural Network Toolbox, MATLAB provides analysis and design functions for neural network systems, functions, plots, and simulation tools that can be called directly, and a simplified weight-training process, making it excellent software for neural network training.

According to the innovation and entrepreneurship education teaching system for the tourism major described above, a neural network model is established for the evaluation indicators of the scale after data processing.

According to the requirements of the standardization of the index system, the collected sample data are standardized: the scoring data in [0, 100] are converted into data in [0, 1], which is convenient for neural network operation. The Neural Network Toolbox in MATLAB is then used in three steps:

(1) Initialization: weights and initial values are set through the init function, using the init() command format; the returned Net is the initialized neural network. The init() function sets weights and thresholds according to its parameters net.initFcn and net.initParam; for BP networks, the value of net.initFcn is initwb, as shown in Figure 6.

(2) Network training: the network is trained with the train function, as shown in Figure 7.

(3) Network simulation: the sim function applies the trained network to the sample data for simulation.

Figure 6: Changes in accuracy of the algorithm under different iterations.

Figure 7: Neural network training.
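The three toolbox steps above can be put together roughly as follows. This is a hedged sketch against the legacy Neural Network Toolbox interface shipped with Matlab 7.0 (newff, init, train, sim); the paper does not publish its script, so the variable names, the placeholder data for the six sample universities, and the gradient-descent-with-momentum training function are assumptions.

```matlab
% Sketch of the init / train / sim workflow of Section 4.3 (legacy NNT calls).
% rawP / rawT are placeholders for the unpublished indicator and expert scores.
rawP = 100 * rand(45, 6);            % 45 indicator scores (0-100) for 6 sample universities
rawT = 100 * rand(1, 6);             % expert comprehensive scores (0-100)
P = rawP / 100;  T = rawT / 100;     % convert [0, 100] scores to [0, 1]

net = newff(minmax(P), [8 1], {'tansig', 'logsig'}, 'traingdm');  % 45-8-1 network
net = init(net);                     % step 1: initialise weights and thresholds
net.trainParam.lr = 0.04;            % learning rate chosen in Section 4.2
net.trainParam.mc = 0.9;             % momentum constant (assumed value)
net.trainParam.epochs = 1000;
net.trainParam.goal = 1e-3;
net = train(net, P, T);              % step 2: train the network
Y = sim(net, P);                     % step 3: simulate the trained network
```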
The test data are input into the trained BP model, and its outputs are compared with the expected values; the resulting errors are shown in Table 6.

Table 6: Error detection table.

| Test sample | C5 | C6 | C7 |
| --- | --- | --- | --- |
| Expected output | 0.68 | 0.75 | 0.72 |
| Network output | 0.6801 | 0.7123 | 0.7221 |
| Error value | 0.0001 | 0.0377 | 0.0021 |

The data collected on the education performance of the target university were then evaluated with the BP network trained and improved above, and the comprehensive network output was 0.7261, indicating that the innovation and entrepreneurship education of the tourism major in this university is at a medium level. The outputs for the individual indicators are shown in Table 7.

Table 7: Output results of each indicator.

| Indicators | Results | Indicators | Results | Indicators | Results | Indicators | Results | Indicators | Results |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| X1 | 0.8 | X11 | 0.72 | X21 | 0.40 | X31 | 0.20 | X41 | 0.65 |
| X2 | 0.61 | X12 | 0.49 | X22 | 0.60 | X32 | 0.42 | X42 | 0.87 |
| X3 | 0.71 | X13 | 0.96 | X23 | 0.72 | X33 | 0.91 | X43 | 0.92 |
| X4 | 0.84 | X14 | 0.51 | X24 | 0.81 | X34 | 0.87 | X44 | 0.67 |
| X5 | 0.96 | X15 | 0.81 | X25 | 0.45 | X35 | 0.62 | X45 | 0.97 |
| X6 | 0.81 | X16 | 0.67 | X26 | 0.62 | X36 | 0.90 | | |

## 5. Conclusion

### 5.1. Result Analysis

The evaluation model of education based on the BP network is established and optimized to evaluate the tourism specialty of S University. Based on the survey results, the status analysis, and the results of the quality evaluation model, this study found the following achievements and deficiencies in the innovation and entrepreneurship education and teaching quality of the tourism major in S University.

#### 5.1.1. Both Innovation and Entrepreneurship Knowledge and Professional Knowledge

The evaluation results for indicators X4 = 0.84, X5 = 0.96, X14 = 0.96, X34 = 0.92, and X36 = 0.90 show that teachers in S University have paid attention to encouraging students to start businesses in the field of tourism while teaching the courses of the tourism specialty, and that students have gradually begun to master the relevant knowledge and to use other disciplines and the Internet to research new tourism business models. Besides teaching professional theoretical knowledge, the university also pays attention to practical teaching. Combining the characteristics of the tourism specialty, it makes continuous improvements in discipline design, internships, and after-school experiments. Through these continuous efforts, the number of students attending special courses on innovation and entrepreneurship is increasing and the students are satisfied with the teaching effect, which provides a corresponding guarantee for the continued promotion of this education. School-enterprise cooperation has also gradually increased and deepened.

#### 5.1.2. Outstanding Academic Achievements

The evaluation indicators X24 = 0.81, X33 = 0.91, X37 = 0.91, and X38 = 0.92 show that S University has made great progress in academic research on tourism education: its research results are increasingly published in journals and have achieved remarkable results in social practice, the share of academic competitions in related fields is growing, and the social influence is increasing. By implementing a corresponding reward mechanism for award-winning teachers and students, the university encourages more academic investment in research in the relevant fields.

#### 5.1.3. A Preliminary System of Extracurricular Activities

According to the evaluation indicators X1 = 0.80, X3 = 0.71, and X15 = 0.81, S University regularly invites successful entrepreneurs and managers in related professional fields to participate in lectures, forums, training, and other innovation and entrepreneurship activities held by the university, which expands students' horizons and provides a primary channel for students to acquire knowledge in related fields.
Since the "entrepreneurial design competition" held by S University in 2004, the university has regularly provided information and teacher guidance for the competition and has set up generous prizes to encourage students to participate; the number of participants in this entrepreneurship competition has increased year by year. Through the simulated entrepreneurship course, students have experienced the process of transforming knowledge into practical results and deepened their understanding of entrepreneurship.

#### 5.1.4. Lack of Experience among Innovation and Entrepreneurship Teachers

The innovation and entrepreneurship teachers of S University are drawn from existing staff within the school. They either have theoretical knowledge but lack management experience, or lack teaching experience, so they cannot truly establish students' entrepreneurial awareness in the teaching process. Although S University has established a corresponding entrepreneurship guidance center and an entrepreneurship research center, these positions are temporarily held by teachers or leaders of other institutions. The teachers who impart entrepreneurship knowledge generally also have other teaching and scientific research tasks, so the quality of the education classes and of the related scientific research results cannot be guaranteed under realistic conditions.

### 5.2. Improvement Opinions on Education

#### 5.2.1. Establish an Excellent Team of Tourism Innovation and Entrepreneurship Teachers

We need to establish an excellent team of teachers with strong guidance ability. Such teachers need to be familiar with China's relevant policies, master the entrepreneurial process, understand its risks, and ideally have some entrepreneurial experience of their own, so as to guide and help students.

#### 5.2.2. Create a Good Education Environment

Practice has proved that the education environment students experience during their studies has a major influence on them. For example, some universities use their practical teaching conditions to establish college-run travel agency business departments and improve students' ability by operating practical projects. Actively participating in and hosting innovation and entrepreneurship competitions at all levels, such as the "Shandong Huang Yanpei vocational education innovation and entrepreneurship competition," is also a good approach.
--- *Source: 1007538-2022-06-30.xml*
# An Improved BP Neural Network Algorithm for the Evaluation System of Innovation and Entrepreneurship Education in Colleges and Universities

**Authors:** Xuying Sun; Yu Zhang
**Journal:** Mobile Information Systems (2022)
**Category:** Computer Science
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1007538
---

## Abstract

The innovation and entrepreneurship ability of college students is an important index for evaluating the training quality of universities, and domestic scholars have begun to study how to cultivate this spirit and ability in students. With the rapid development of the tourism industry and the continuous emergence of new "tourism +" business forms, there is an urgent demand for professionals with outstanding innovation and entrepreneurship ability, and China's education field urgently needs a system that can scientifically evaluate the corresponding teaching quality. The purpose of this work is to enrich the theoretical methods available to universities. Taking S University as the research sample, a relevant evaluation index system is set up, and on this basis an evaluation model based on a BP neural network is established, providing a basis for the evaluation and cultivation work of universities. Guided by the chosen evaluation indicators, this paper constructs the main framework of teaching quality evaluation in colleges and universities. Seven representative universities in China are randomly selected, six of which serve as samples while one university is the research target; MATLAB is used to calculate the scores of each index, the current state of education quality in the target university is analyzed, and corresponding improvement opinions are proposed. The analysis of the current education in this university shows that innovation and entrepreneurship knowledge and professional knowledge are both taken into account, the academic achievements are remarkable, and a preliminary education system has been formed, but it also reveals problems of low educational practicality, for which corresponding suggestions are put forward. If the evaluation system is put into practical application, it will improve the level of cultivating innovative and entrepreneurial talents in the tourism major in universities.

---

## Body

## 1. Introduction

Innovation and entrepreneurship education for college students responds to their need to realize the value of life: college students should not only have theoretical knowledge but also entrepreneurial talent, and such education helps them master entrepreneurial methods and develop the psychology and will to overcome difficulties and take risks.

Innovation and entrepreneurship in related industries provide a huge entrepreneurial space for students, and universities can effectively support the development of the entrepreneurial economy [1]. At present, however, entrepreneurship development at home and abroad has not met expectations: not only is the entrepreneurship rate in the tourism industry unsatisfactory, but the current state of education in the tourism management major of universities is also not conducive to entrepreneurship and tourism development. This requires colleges and universities to reform the tourism management major, pay more attention to cultivating students' entrepreneurial awareness, and teach students entrepreneurial skills and knowledge related to tourism, so as to improve the innovation and entrepreneurship ability of students majoring in tourism management, stimulate their willingness to start their own businesses after graduation, and encourage them to build platforms for those businesses. By improving the quality of this education, universities can reduce the employment pressure on college students and expand the development space of both college students and the tourism industry [2].

The demand for such talents is increasing day by day, which is also an urgent need of China's economic development.
A scientific and systematic evaluation method for tourism education has not yet been formed. In order to solve this problem, this study starts from the nature of this education, analyzes the impact of the indicators, and adopts the BP neural network calculation method to try to establish a sound evaluation system of innovation and entrepreneurship education for tourism majors in universities [3].

## 2. State of the Art

American Professor Tismon is known as a leader in entrepreneurship education. His research fields cover innovative curriculum development, venture capital, venture financing, entrepreneurial management, and other aspects, with Parkson Business School as the place where his approach was promoted. The results have obvious characteristics: the approach was forward-looking during the transition between traditional and new industries; the curriculum system is arranged systematically, with entrepreneurs, business plans, resource supply, venture financing, and development speed arranged reasonably so as to develop the entrepreneurial ability of students; the case method is used to stimulate students' enthusiasm for thinking through problems; and students are provided with entrepreneurial practice opportunities [4].

During the 1990s, UNESCO held many meetings to discuss higher education around the world and how it should meet the development needs of the 21st century. It was made clear that "a degree is not equal to a job," it was emphasized that graduates should no longer be purely job seekers but should also become job creators, and it was proposed that "entrepreneurship" should be part of what university graduates are given; students' entrepreneurial skills were held to be as important as their initiative and creativity [5].

After the 1990s, the perspective of entrepreneurship education in the United States and Canada changed from the improvement of individual ability to an emphasis on the team, the company, and the industry, taking entrepreneurship as a management style whose role is no longer only to establish new enterprises: large-scale enterprises also need this quality. The National Education Policy published by India clearly states that students should be cultivated in the "attitude, knowledge and skills necessary for self-employment" [6].

In this study, the retrieval function of CNKI was used: "journal" was selected as the literature source, "innovation and entrepreneurship" and "education" were set as keywords, only "core journal" and "CSSCI" sources were selected, and 200 relevant works were retrieved. With "innovation and entrepreneurship" and "education" as keywords and master's and doctoral theses as the literature source, 13 related articles were retrieved. This is illustrated in Figure 1 [7].

Figure 1: Distribution of relevant literature.

At the level of research objects, the education of vocational college students and undergraduates is the main focus of scholars. Scholars mainly concentrate on exploring training modes and constructing systems; for example, Giancristofaro et al. established an undergraduate education model using a project-exploration approach and proposed and constructed an education system of "one core, three platforms, and nine modules" [8]. Research evaluating the curriculum system is very limited. Samuel et al. proposed the "four-in-one" evaluation system of education as theoretical guidance, together with the "mentor + project + team" education model, whose effect remains to be discussed [9].

The current state of this education in China reflects that, although it started late, this has not hindered its rapid development.
In the process of this research, the fields explored have expanded and the levels of exploration have deepened. In general, research in China still needs to be further integrated and made systematic, and research in this field needs to stay close to the actual situation in China.

In Warner et al.'s work, innovation education refers to exerting the positive influence of heredity and environment and using the leading role of education to bring students' subjective initiative into their cognition and practice, to cultivate students' creative consciousness, innovative spirit, and innovative ability, and to form an innovative personality adapted to the students' individual development [10]. Geisner et al. believe that innovative education is an educational concept that emerged with the rise of the knowledge economy, based on creation and aimed at cultivating an innovative spirit in students' consciousness, ability, and personality [11]. Bejerholm et al. likewise regard innovative education as an educational concept of the knowledge economy, namely an educational model for shaping innovative consciousness and ability through the means of the modern university [12]. Dillahunt-Aspillaga et al. from the Zhenjiang Institute of Education and Science understood entrepreneurship education from its functional perspective and believed that it could transform the new labor force from single-skilled to composite and from operational to intellectual work, an important measure for the new generation of students to meet future challenges and adapt to market demand [13]. Zivin et al. believe that the cultivation of innovative talents should start from the training objectives, so that the attention paid to this education is reflected and its personal and social value can be brought into full play [14]. Wu et al. focused on the main line of how to cultivate and whom to cultivate and opened up a maker education and teaching mode of "integration of doing, learning, and teaching" [15], and, based on "Internet +" and policies for secondary vocational students, put forward a set of targeted talent training strategies.

## 3. Methodology

### 3.1. Introduction to Innovation Education and Entrepreneurship Education

Both the educator and the educated need to have a basic innovative spirit, innovative ability, and innovative personality. Through the exploration of traditional education and the construction of applicable theories and models, education should aim to tap people's creative potential, carry forward people's subject spirit, and promote the harmonious development of personality. Innovative education, which appeared with the rise of the knowledge economy, is based on creation and aims to cultivate an innovative spirit in students' consciousness, ability, and personality [16]. Innovative education is an educational idea that has come into being with the information age and the knowledge economy; it is an educational mode that shapes students' innovative spirit through the means of the modern university. In contrast to traditional receptive education, it insists on a "creation orientation" and focuses on cultivating students' ability of secondary discovery and practice. It is a unified form of idea and practice, which is the core of modern education as well as the reflection and sublimation of the traditional educational model.
At the same time, innovation education is also a form of educational practice, one of whose activities is the cultivation of practical innovation ability [17].

Entrepreneurship education refers to educational activities oriented toward starting a career; it emphasizes teaching reform and the cultivation of innovative ability and entrepreneurial consciousness, and it aims at comprehensive qualities such as spirit and knowledge. Entrepreneurship education can promote the development of students' career ambition, enterprising spirit, pioneering spirit, innovative spirit, and so on [18]. Compared with employment-oriented education, it is a new educational concept and mode that focuses on cultivating students' entrepreneurship and ability; simply put, it lets students acquire the qualities and abilities needed in the process of entrepreneurship. In a broad sense, the purpose of entrepreneurship education is to stimulate students' entrepreneurial consciousness, and its ultimate goal is to shape potentially successful entrepreneurs; in a narrow sense, it is training behavior that cultivates the knowledge, quality, and skills needed for independent entrepreneurship [19]. It cultivates people with entrepreneurial qualities: in the narrow sense, it transforms students from job seekers into entrepreneurs and provides them with the comprehensive abilities needed in that transformation; in the broad sense, it improves, through the relevant curriculum system, the overall qualities students need in the process of entrepreneurship, so that they become pioneering people with an innovative spirit, entrepreneurial consciousness, a risk-taking spirit, a stable mentality, and correct decision making. To sum up, the broad sense pays more attention to the process of education, while entrepreneurship education in the narrow sense is a kind of vocational education whose purpose is to let learners successfully establish enterprises [20].

### 3.2. Introduction to Artificial Neural Network

An artificial neural network (ANN) is a model system composed of a large number of processing units (neurons). Such a system has strong independent, nonlinear, and nonlocal characteristics. It tries to design a new kind of machine with the information-processing ability of the human brain by simulating the way the brain's neural network processes and memorizes information.

An artificial neural network takes the neuron as its basic processing unit. The neuron is a nonlinear device, and its structure is shown in Figure 2.

Figure 2: General description of a neuron.

In the figure, x_i denotes the input signals, w_ij the connection weights, θ_j the threshold, and s_j the external input signal from the set S. The transformation performed by the j-th neuron can be described as

(1) y_j = f(∑_i w_ij x_i − θ_j + s_j).

The running process of the network is calculated as

(2) net_jp^l = ∑_i W_ij^l O_ip^(l−1),

(3) O_jp^l = f(net_jp^l).

The error energy function of the BP network is

(4) E_p = ∑_{i=1}^{n} φ(e_{i,p}) = (1/2) ∑_{i=1}^{n} (y_{i,p} − ŷ_{i,p})².

The data are normalized so that they lie between 0 and 1, and the expected output values are determined:

(5) ŷ_j = f(∑_{i=1}^{n} w_ij x_i − θ_j),

(6) ẑ_k = f(∑_{j=1}^{n} w_jk y_j − θ_k),

and the weight adjustment rules are

(7) W_jk(t+1) = W_jk(t) + η δ_k y_j,

(8) W_ij(t+1) = W_ij(t) + η δ_j x_i,

where

(9) δ_k = (z_k − ẑ_k) ẑ_k (1 − ẑ_k),

(10) δ_j = y_j (1 − y_j) ∑_{k=0}^{L−1} δ_k W_jk.

The learning and training of a BP network is a process of error back-propagation and correction. The total error E is calculated; if E ≤ ε, learning stops; otherwise, the computation returns to equation (3) and the weights are adjusted again.
In practical network design, if the learning rate η is too small, convergence will be slow, but if it is too large, the network will oscillate. A momentum term α (0 < α < 1) can therefore be added to the weight corrections, i.e.,

(11) w_jk(t+1) = w_jk(t) + η δ_k y_j + α·Δw_jk,

(12) W_ij(t+1) = W_ij(t) + η δ_j y_i + α·ΔW_ij.

The BP algorithm is an iterative process: each round adjusts the weights w again, and the iteration continues until the error meets the requirements.

The BP network is a kind of multilayer feedforward network with a very strong nonlinear mapping capability. In this model, each layer is fully connected to its adjacent layers, while the neurons within a layer are not connected to each other. The learning procedure of these neurons is shown in Figure 3.

Figure 3: Neural network learning flow chart.

In essence, the standard BP learning algorithm takes the sum of squares of the network errors as the objective function and uses the gradient method to minimize it. Its basic principle is to propagate the signal forward through the network, transmit the error backward, and adjust the weights to minimize the error during the learning process (as shown in Figure 3).

The research framework of this article is shown in Figure 4.

Figure 4: Entrepreneurship system evaluation and research framework.
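To make the update rules concrete, the following stand-alone sketch runs one small BP network with the sigmoid transfer function and the momentum-smoothed corrections of equations (7)–(12). It is illustrative only: the paper trains its model with the MATLAB toolbox rather than hand-written loops, the data here are random placeholders, and the thresholds are kept fixed for brevity.

```matlab
% Stand-alone sketch of BP training with momentum, following equations (7)-(12).
n = 45; h = 8; m = 1;                    % input, hidden and output layer sizes
eta = 0.04; alphaM = 0.9;                % learning rate and momentum coefficient
x = rand(n, 1); z = rand(m, 1);          % one normalised sample and its target (placeholders)
W1 = 0.1 * randn(h, n); b1 = zeros(h, 1);      % input-to-hidden weights and thresholds
W2 = 0.1 * randn(m, h); b2 = zeros(m, 1);      % hidden-to-output weights and thresholds
dW1 = zeros(size(W1)); dW2 = zeros(size(W2));  % previous corrections, for the momentum term
f = @(u) 1 ./ (1 + exp(-u));             % sigmoid transfer function

for epoch = 1:5000
    y  = f(W1 * x - b1);                 % hidden-layer output, cf. eq. (5)
    zh = f(W2 * y - b2);                 % network output, cf. eq. (6)
    E  = 0.5 * sum((z - zh).^2);         % error energy, eq. (4)
    if E <= 1e-4, break; end             % stop once E <= epsilon
    dk = (z - zh) .* zh .* (1 - zh);     % output-layer delta, eq. (9)
    dj = y .* (1 - y) .* (W2' * dk);     % hidden-layer delta, eq. (10)
    dW2 = eta * dk * y' + alphaM * dW2;  % momentum-smoothed correction, eq. (11)
    dW1 = eta * dj * x' + alphaM * dW1;  % eq. (12)
    W2 = W2 + dW2;  W1 = W1 + dW1;       % apply the corrections (thresholds fixed for brevity)
end
```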
## 4. Result Analysis and Discussion
### 4.1. Construction of the Evaluation Index System of Education

Referring to the teaching work evaluation index systems of ordinary universities, the general content of the open questionnaire was extracted, and the result indexes were selected to reflect the scientific, comprehensive, accurate, and operational principles of this research. Then, five education evaluation experts and school supervision experts were interviewed; after listening to their preliminary opinions on the tourism specialty in universities, their opinions were adopted, and a quality evaluation system for the innovation and entrepreneurship education of the tourism specialty in universities was preliminarily formulated. Different subsystems should be set up for the teaching quality evaluation of tourism education in universities, as shown in Figure 5.

Figure 5: Dimensions of teaching quality evaluation of tourism education in universities.

This study conducted a survey: open questionnaires were issued, indicators were initially screened, and the final questionnaire was generated under the guidance of experts. Rigorous statistical methods were then used to analyze the questionnaire for the tourism major of S University. In the next step, 100 copies of the questionnaire were randomly distributed in the university town where S University is located, and 96 copies were effectively recovered, an effective response rate of 96%. The valid questionnaire data were entered into SPSS for analysis.

#### 4.1.1. Reliability Analysis

From Table 1, Cronbach's alpha coefficient is higher than 0.8, indicating that the reliability of the developed questionnaire is satisfactory, the indicators are internally consistent, and the questionnaire is reliable.

Table 1: Reliability analysis.

| Cronbach's alpha | Cronbach's alpha based on standardized items | No. of items |
| --- | --- | --- |
| 0.873 | 0.863 | 25 |

#### 4.1.2. Validity Analysis

Generally speaking, the measure of sampling adequacy reflects whether the questionnaire sample is adequate for factor analysis. The KMO and Bartlett results of this survey are shown in Table 2. The KMO coefficient of the questionnaire is 0.915 and the significance probability is 0.00 < 0.01, indicating that the questionnaire variables share common factors and are suitable for factor analysis.

Table 2: Validity analysis.

| Kaiser-Meyer-Olkin measure of sampling adequacy | 0.915 |
| --- | --- |
| Bartlett's test of sphericity | 11015.038 |
| df | 378 |
| Sig. | 0.000 |
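For completeness, the sketch below shows how the two quantities reported in Table 2 (the KMO measure of sampling adequacy and Bartlett's chi-square) are conventionally computed from the item correlation matrix. The formulas are the standard ones, not code from the paper; the placeholder data and the choice of p = 28 items (which matches the reported df of 378, since df = p(p−1)/2) are assumptions.

```matlab
% Illustrative sketch (standard formulas, not the paper's code): KMO measure
% and Bartlett's test of sphericity for an n-by-p item-score matrix X.
X = randi([1 5], 96, 28);               % placeholder responses; p = 28 matches df = 378
[n, p] = size(X);
R = corrcoef(X);                        % item correlation matrix
Rinv = inv(R);
D = diag(1 ./ sqrt(diag(Rinv)));
Pc = -D * Rinv * D;                     % partial correlations (anti-image), off-diagonal
off = ~eye(p);                          % mask selecting off-diagonal entries
kmo = sum(R(off).^2) / (sum(R(off).^2) + sum(Pc(off).^2))   % paper reports 0.915

chi2 = -(n - 1 - (2 * p + 5) / 6) * log(det(R));            % Bartlett's chi-square
df = p * (p - 1) / 2                                        % = 378 for p = 28
```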
The characteristics of tourism education at different levels are comprehensively analyzed by following the principles of strategic goal orientation, comprehensiveness and completeness, objectivity and scientific rigor, dynamism and flexibility, and systematization and operability. According to the students' personalities, their own characteristics, and other factors, the index design and evaluation method are determined. The resulting index system is shown in Table 3.

Table 3: Teaching quality evaluation system of university tourism education.

| Evaluation system | First-level index | Second-level index | Third-level index |
| --- | --- | --- | --- |
| Evaluation index system of tourism education | University link | Soft environment | X1 Entrepreneurial community |
| | | | X2 Degree of implementation of national policy |
| | | | X3 Number of entrepreneurship competitions held |
| | | | X4 Number of school-enterprise cooperations |
| | | | X5 Number of school-enterprise cooperation projects |
| | | Hardware support | X6 Number of innovation and entrepreneurship institutions |
| | | | X7 Percentage of students starting a business after participating in entrepreneurship education courses |
| | | | X8 Student coverage of entrepreneurship funds |
| | | | X9 Service rate of infrastructure such as the entrepreneurship park to students |
| | | | X10 Number of students received by the entrepreneurship practice base |
| | Teaching link | Curriculum design and teaching method | X11 Conversion rate of innovation achievements |
| | | | X12 Ratio of practical courses to theoretical courses |
| | | | X13 Participation rate of practical courses |
| | | | X14 Core curriculum ratio |
| | | | X15 Entrepreneurs |
| | | | X16 Degree of penetration of business management in the curriculum |
| | | | X17 Cross-disciplinary curriculum opening rate |

### 4.2. Construction of the BP Neural Network in the Quality Evaluation Model of Education

There are 45 evaluation indicators for the quality of tourism education in S University, so the number of nodes in the input layer is 45.

Hidden Layer Node. The number of hidden layer nodes is the basis of the constructed BP network. It depends on the numbers of input and output nodes and on the characteristics of the sample data, and it affects the fault tolerance and generalization ability of the optimal network (and therefore the test accuracy). The following empirical formula is generally used to determine it:

(13) q = √(n + m) + a,

where n and m are the numbers of input and output nodes and a is a constant.

Output Layer Node. The output nodes carry the evaluation result. In this case, the number of output nodes is 1, namely the comprehensive score of the tourism major in S University.

The choice of the number of hidden neurons is largely empirical; there is no rigorous theoretical basis for it at present. The grading standards used for the evaluation output are shown in Table 4.

Table 4: Evaluation and grading standards.

| Comprehensive evaluation | 85–100 | 75–85 | 65–75 | 55–65 | Less than 55 |
| --- | --- | --- | --- | --- | --- |
| Grade | Superior | Good | Middle | Qualified | Unqualified |

When the number of hidden layer nodes is kept small, the network adapts better to new data and the training speed improves. Therefore, on the premise of meeting the learning accuracy, the "trial-and-error method" is adopted: if the network has not converged within the specified number of training iterations, training is stopped and another candidate is tried. According to the evaluation system described above and the comparison in Table 5, the number of hidden layer nodes is determined to be 8.

Table 5: Convergence comparison.

| Hidden layer elements | 3 | 4 | 5 | 6 | 7 | 8 |
| --- | --- | --- | --- | --- | --- | --- |
| Number of training times | 31 | 10 | 15 | 6 | 10 | 5 |
| Error | 9.7305 | 14.0013 | 13.9831 | 42.3544 | 47.3114 | 22.0122 |

The learning rate can likewise only be selected on the basis of experience; in this model it is searched between 0.005 and 0.9, and according to the learning results it is finally set to 0.04.

The output of the network is the evaluation result for the quality of tourism innovation and entrepreneurship teaching, divided into superior, good, middle, qualified, and unqualified. The model is therefore a three-layer BP network with a single output node whose value lies in the range [0, 1].
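The "trial-and-error" comparison summarised in Table 5 can be reproduced in outline as below: the same training set is fitted with 3 to 8 hidden nodes, and the epochs used and the final error are recorded. This is a sketch under the same assumptions as before (legacy toolbox calls, placeholder data); the fields of the training record tr are named here on the assumption that the old toolbox's documented training record is used.

```matlab
% Sketch of the trial-and-error comparison behind Table 5 (placeholder data).
P = rand(45, 6);  T = rand(1, 6);                % stand-in normalised samples
results = zeros(6, 3);
for k = 3:8
    net = newff(minmax(P), [k 1], {'tansig', 'logsig'}, 'traingdm');
    net.trainParam.lr = 0.04;  net.trainParam.mc = 0.9;
    net.trainParam.epochs = 100;  net.trainParam.goal = 1e-3;
    [net, tr] = train(net, P, T);                % tr holds the training history
    results(k - 2, :) = [k, length(tr.epoch), tr.perf(end)];
end
results                                          % columns: hidden nodes, epochs used, final error
```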
### 4.3. Application of the BP Neural Network in Quality Evaluation of Education

The Neural Network Toolbox (NNT) of Matlab 7.0 is used for modeling in this paper. MATLAB (short for Matrix Laboratory) is a suite of scientific and engineering computing software based on matrix calculation, developed by MathWorks in the 1980s. It provides numerical calculation, visualization, and programming functions, together with a variety of toolboxes for special scientific and engineering computing problems; its computational power is strong and its programming efficiency is high. Used together with the Neural Network Toolbox, MATLAB provides analysis and design functions for neural network systems, functions, plots, and simulation tools that can be called directly, and a simplified weight-training process, making it excellent software for neural network training.

According to the innovation and entrepreneurship education teaching system for the tourism major described above, a neural network model is established for the evaluation indicators of the scale after data processing.

According to the requirements of the standardization of the index system, the collected sample data are standardized: the scoring data in [0, 100] are converted into data in [0, 1], which is convenient for neural network operation. The Neural Network Toolbox in MATLAB is then used in three steps:

(1) Initialization: weights and initial values are set through the init function, using the init() command format; the returned Net is the initialized neural network. The init() function sets weights and thresholds according to its parameters net.initFcn and net.initParam; for BP networks, the value of net.initFcn is initwb, as shown in Figure 6.

(2) Network training: the network is trained with the train function, as shown in Figure 7.

(3) Network simulation: the sim function applies the trained network to the sample data for simulation.

Figure 6: Changes in accuracy of the algorithm under different iterations.

Figure 7: Neural network training.

The test data are input into the trained BP model, and its outputs are compared with the expected values; the resulting errors are shown in Table 6.

Table 6: Error detection table.

| Test sample | C5 | C6 | C7 |
| --- | --- | --- | --- |
| Expected output | 0.68 | 0.75 | 0.72 |
| Network output | 0.6801 | 0.7123 | 0.7221 |
| Error value | 0.0001 | 0.0377 | 0.0021 |

The data collected on the education performance of the target university were then evaluated with the BP network trained and improved above, and the comprehensive network output was 0.7261, indicating that the innovation and entrepreneurship education of the tourism major in this university is at a medium level. The outputs for the individual indicators are shown in Table 7.

Table 7: Output results of each indicator.

| Indicators | Results | Indicators | Results | Indicators | Results | Indicators | Results | Indicators | Results |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| X1 | 0.8 | X11 | 0.72 | X21 | 0.40 | X31 | 0.20 | X41 | 0.65 |
| X2 | 0.61 | X12 | 0.49 | X22 | 0.60 | X32 | 0.42 | X42 | 0.87 |
| X3 | 0.71 | X13 | 0.96 | X23 | 0.72 | X33 | 0.91 | X43 | 0.92 |
| X4 | 0.84 | X14 | 0.51 | X24 | 0.81 | X34 | 0.87 | X44 | 0.67 |
| X5 | 0.96 | X15 | 0.81 | X25 | 0.45 | X35 | 0.62 | X45 | 0.97 |
| X6 | 0.81 | X16 | 0.67 | X26 | 0.62 | X36 | 0.90 | | |
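The comprehensive output of 0.7261 is interpreted by rescaling it back to the 0–100 scoring range and reading off the band in Table 4. The snippet below shows that mapping; the handling of band boundaries is an illustrative choice, not something specified in the paper.

```matlab
% Sketch: mapping the comprehensive network output onto the Table 4 grades.
y = 0.7261;                               % comprehensive output reported for the target university
score = 100 * y;                          % back to the 0-100 scale, i.e. 72.61
edges  = [0 55 65 75 85 100];             % band edges from Table 4
grades = {'Unqualified', 'Qualified', 'Middle', 'Good', 'Superior'};
idx = find(score >= edges(1:end-1) & score < edges(2:end), 1);
grade = grades{idx}                       % returns 'Middle', matching the medium-level conclusion
```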
Table 6 Error detection table.

| Test sample | C5 | C6 | C7 |
| --- | --- | --- | --- |
| Expected output | 0.68 | 0.75 | 0.72 |
| Network output | 0.6801 | 0.7123 | 0.7221 |
| Error value | 0.0001 | 0.0377 | 0.0021 |

The data collected on the university's education performance were evaluated using the BP network trained and improved above, and the comprehensive network output value was 0.7261, which shows that education at the university is at a medium level. The outputs for the individual indicators are shown in Table 7.

Table 7 Output results of each indicator.

| Indicators | Results | Indicators | Results | Indicators | Results | Indicators | Results | Indicators | Results |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| X1 | 0.80 | X11 | 0.72 | X21 | 0.40 | X31 | 0.20 | X41 | 0.65 |
| X2 | 0.61 | X12 | 0.49 | X22 | 0.60 | X32 | 0.42 | X42 | 0.87 |
| X3 | 0.71 | X13 | 0.96 | X23 | 0.72 | X33 | 0.91 | X43 | 0.92 |
| X4 | 0.84 | X14 | 0.51 | X24 | 0.81 | X34 | 0.87 | X44 | 0.67 |
| X5 | 0.96 | X15 | 0.81 | X25 | 0.45 | X35 | 0.62 | X45 | 0.97 |
| X6 | 0.81 | X16 | 0.67 | X26 | 0.62 | X36 | 0.90 | | |
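To make the link between the normalised network output and the grading bands of Table 4 explicit, the short helper below rescales a [0, 1] score back to the 0–100 scale and looks up its grade band. It is an illustrative sketch (the handling of scores falling exactly on a boundary is an assumption); with the reported output of 0.7261 it returns the middle grade, consistent with the conclusion above.

```matlab
% Map a normalised network output back to the 0-100 scale of Table 4 and
% look up its grade band (illustrative helper; boundary handling assumed).
y = 0.7261;                                   % comprehensive network output
score = 100 * y;                              % back to the 0-100 scoring scale
lower  = [85 75 65 55 0];                     % band lower limits from Table 4
grades = {'Superior', 'Good', 'Middle', 'Qualified', 'Unqualified'};
idx = find(score >= lower, 1, 'first');       % first band whose lower limit is met
disp(grades{idx});                            % prints 'Middle' for 72.61
```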
## 5. Conclusion

### 5.1. Result Analysis

The evaluation model of education based on BP is established and optimized to evaluate the tourism specialty of S University. Based on the survey results, status analysis, and quality model evaluation results, this study found the following achievements and deficiencies in the evaluation of the education and teaching quality of the tourism major in S University.

#### 5.1.1. Both Innovation and Entrepreneurship Knowledge and Professional Knowledge

By analyzing the evaluation results of indicators X4 = 0.84, X5 = 0.96, X14 = 0.96, X34 = 0.92, and X36 = 0.90, it can be seen that teachers in S University have paid attention to encouraging students to start businesses in the field of tourism while teaching the courses of the tourism specialty, and students have gradually begun to master relevant knowledge and to use other disciplines and the Internet to research new tourism business models. Besides teaching professional theoretical knowledge, the university also pays attention to practical teaching. Combining the characteristics of the tourism specialty, it makes continuous improvements in discipline design, internships, and after-school experiments. Through continuous efforts, the number of students attending special courses on innovation and entrepreneurship is increasing and they are satisfied with the teaching effect, which provides a corresponding guarantee for the continued promotion of this education. School-enterprise cooperation has also gradually increased and deepened.

#### 5.1.2. Outstanding Academic Achievements

According to the evaluation indicators X24 = 0.81, X33 = 0.91, X37 = 0.91, and X38 = 0.92, it can be seen that S University has made great progress in academic research on tourism education; its research results are increasingly published in journals and have achieved remarkable results in social practice. The proportion of academic competitions in related fields is increasing, and their social influence is growing. By implementing a corresponding reward mechanism for award-winning teachers and students, the university encourages more academic input into research in relevant fields.

#### 5.1.3. A Preliminary System of Extracurricular Activities

According to the evaluation indicators X1 = 0.80, X3 = 0.71, and X15 = 0.81, S University regularly invites successful entrepreneurs and managers in related professional fields to participate in lectures, forums, training, and other activities related to innovation and entrepreneurship held by the university, which expands students' horizons and provides a primary channel for students to acquire knowledge in related fields. Since the "entrepreneurial design competition" first held by S University in 2004, the university has regularly provided corresponding information and teacher guidance for the competition and set up generous bonuses to encourage students to participate. The number of participants in this entrepreneurship competition has increased year by year. Through the simulated entrepreneurship course, students have experienced the process of transforming knowledge into practical results and deepened their understanding of entrepreneurship.

#### 5.1.4. Lack of Experience among Innovation and Entrepreneurship Teachers

The innovation and entrepreneurship teachers of S University are mostly mid-career staff of the school. They either have theoretical knowledge but lack management experience or lack teaching experience, so they cannot truly establish students' entrepreneurial awareness in the teaching process. Although S University has established a corresponding entrepreneurship guidance center and an entrepreneurship research center, these positions are temporarily held by relevant teachers or leaders of other institutions. Teachers who impart entrepreneurship knowledge generally carry other teaching and scientific research tasks, so the quality of entrepreneurship classes and of the related research results cannot be fully guaranteed under present conditions.

### 5.2. Improvement Opinions on Education

#### 5.2.1. Establish an Excellent Team of Tourism Innovation and Entrepreneurship Teachers

We need to establish an excellent team of teachers with strong guidance ability. Such teachers need to be familiar with China's relevant policies, master the entrepreneurial process, understand its risks, and ideally have some entrepreneurial experience of their own, so as to guide and help students.

#### 5.2.2. Create a Good Education Environment

Practice has proved that the education environment has a major influence on students during their school education. For example, some universities use their practical teaching conditions to establish corresponding travel agency or college business departments and improve students' abilities by operating practical projects. Actively participating in and holding various innovation and entrepreneurship competitions at all levels, such as the "Shandong Huang Yanpei occupation education innovation and entrepreneurship competition," is also a good approach.
---
*Source: 1007538-2022-06-30.xml*
2022
# A Mini Review on Flotation Techniques and Reagents Used in Graphite Beneficiation **Authors:** N. Vasumathi; Anshuli Sarjekar; Hrishikesh Chandrayan; K. Chennakesavulu; G. Ramanjaneya Reddy; T. V. Vijaya Kumar; Nour Sh. El-Gendy; S. J. Gopalkrishna **Journal:** International Journal of Chemical Engineering (2023) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2023/1007689 --- ## Abstract Due to its numerous and major industrial uses, graphite is one of the significant carbon allotropes. Refractories and batteries are only a couple of the many uses for graphite. A growing market wants high-purity graphite with big flakes. Since there are fewer naturally occurring high-grade graphite ores, low-grade ores must be processed to increase their value to meet the rising demand, which is predicted to increase by >700% by 2025 due to the adoption of electric vehicles. Since graphite is inherently hydrophobic, flotation is frequently used to beneficiate low-grade ores. The pretreatment process, both conventional and unconventional; liberation/grinding methods; flotation methods like mechanical froth flotation, column flotation, ultrasound-assisted flotation, and electroflotation; and more emphasis on various flotation reagents are all covered in this review of beneficiation techniques. This review also focuses on the different types of flotation reagents that are used to separate graphite, such as conventional reagents and possible nonconventional environmentally friendly reagents. --- ## Body ## 1. Introduction Graphite is a natural crystalline allotrope of carbon that is greenish-black and shiny [1]. K. W. Scheele first chemically characterised graphite in 1779, and A. G. Werner later gave it the term “graphite,” which was derived from the Greek word “grapho,” which means “I write” [2]. There are many different physical and structural forms of carbon that exist in nature [3]. Van der Waals forces cause the parallel sheets that make up graphite’s structure to be weakly attracted to one another [3]. Carbon nanotubes, diamonds, and fullerenes are also crystalline materials [4]. Carbon atoms in graphite are sp2 hybrids with “pi” electrons in plane P-orbitals [5–8]. One out-of-plane electron per carbon atom gives carbon its metal-like characteristics, such as lustre and high electrical and thermal conductivities [9]. Being an allotrope of carbon, it also possesses nonmetallic qualities, including inertness and high lubricity. It is the perfect material for use in fuel cells, refractories, lithium-ion batteries, fibre optics, and electrical vehicles because it combines metallic and nonmetallic qualities [4]. Graphite can take on a variety of shapes, including hexagonal, rhombohedral, and turbostratic structures [6, 7]. According to Jin et al. [10], there are three different forms of naturally occurring graphite: crystalline (flake), microcrystalline (amorphous), and vein (lump) [10]. Their attributes are presented in Table 1. Metamorphism of carbon compounds in sedimentary rocks produces graphite. The majority of it is found in the rocks’ fractures, pockets, veins, and scattered forms [2, 13]. The most sought-after of these kinds, flake, is found inside the rock in flaky form. The viability of mining a graphite ore depends on how many flakes are there. Due to its superior heat and erosion resistance compared to other graphite forms, refractory manufacturers are the largest consumers of flaked graphite. 
In addition, the flake form of graphite is preferred since lump graphite is more expensive and less common. The most common type of graphite, amorphous graphite, has significant commercial value and is one of the most widely traded forms. With the right processing, purities of up to 99% can be achieved from this low-grade ore [10]. Lump graphite has a high market value since it is very rare and has higher purity and crystallinity than amorphous graphite; this limits its use to applications that require its exceptional qualities, including high electrical conductivity and purity.

Table 1 Properties of different graphite forms [4, 6, 11, 12].

| Property | Amorphous | Flake | Lump |
| --- | --- | --- | --- |
| Occurrence | Formed as a result of anthracite coal seams metamorphosing | The development of small flakes inside the rock is the result of regional metamorphism | Lump graphite is created when hydrothermal activity changes the carbon compounds inside the rock |
| Description | Most prevalent; carbon grade 20%–40% (low) | Good abundance; carbon grade <90% | Carbon grade >90%; rarest in supply |
| Major producers | China, North Korea, Mexico, and Austria | China, India, Madagascar, Mexico, and Brazil | Sri Lanka |
| International market value | Low | Good | Very high |
| Market price/ton | 300–500 USD | 500–3,000 USD | 3,000–6,000 USD |
| Morphology | Fine granular structure | Flaky, flat, plate-like particles | Massive fibrous aggregates |

By accident, Edward G. Acheson created synthetic graphite by heating carborundum (SiC) at a high temperature, which produced extremely pure graphite [12]. Graphite can currently be produced in a variety of ways, depending on the purity required.

More than 800 million tonnes of recoverable graphite are thought to exist globally, although the estimated global graphite reserves are only 300 million tonnes. Turkey (30%), Brazil (24%), China (24.34%), Mozambique (5.67%), Tanzania (5.67%), and India (2.67%) make up the world's graphite reserves [14]. Other significant producers of graphite include Sri Lanka, Canada, Europe, Mexico, and the United States [4]. In 2017, 1.03 million tonnes of graphite were produced, with 88% of that production coming from China, 8% from Brazil, and 3% from India, according to the US Geological Survey. The top nations looking into new natural supply sources are Canada and Brazil [6, 14].

Natural graphite is used in lubricants, brake linings, batteries, steel production, foundries, and refractories [3, 8, 15]. In addition to its particle size, the proportion of fixed carbon in graphite is extremely important for a variety of graphite uses (Table 2). Synthetic graphite is utilized in neutron moderators, radar-absorbing materials, electrodes, graphite powder, and other products [3, 16], and there are numerous ongoing research projects investigating novel uses for graphite, a few of which include preventing fires, cleaning up oil spills, and removing arsenic from water [17]. It is clear from Table 2 that the majority of industries that use natural graphite demand purity levels of at least 90%. Since lump graphite with a purity of >90% is scarce, beneficiating lower-quality graphite to the desired grade (typically >90%) becomes imperative. Because they rely on mineral acids, current conventional procedures such as acid leaching are detrimental to the environment [4]. Hydrochloric acid is the only mineral acid that is safe, but it is ineffective for acid leaching [4, 18].
Due to innovative uses for the material and the possibility of much higher future output, the beneficiation of low-quality graphite ore reserves around the world will increase. Methods of beneficiation that do not harm the environment will therefore be very important for the growth of the graphite industry.

Table 2 Specifications and uses of graphite [14].

| End products | Percentage of graphite used | Fixed carbon (F.C.) (%) | Size (micron) |
| --- | --- | --- | --- |
| Mag-carb refractories | 12 | 87–90 | 150–710 |
| Alumina-carb (graphitised) alumina refractories | 8–10 | 85 min | 150–500 |
| Clay-bonded crucibles | 60–65 | ∼80 | 149–841 |
| Expanded (or flexible) graphite foils and products based thereon (e.g., sealing gaskets in refineries, fuel pumps, and automobiles) | 100 | 90 min (preferably +99) | 250–1800 |
| Pencil | 50–60 | +95–98 | 50 max |
| Brake-linings | 1–15 | 98 nub | 75 max |
| Foundry | — | 40–70 | 53–75 |
| Batteries: (a) dry cells | — | 88 min | 75 max |
| Batteries: (b) alkaline | — | 98 min | 5–75 |
| Brushes | — | Usually 99 | Usually less than 53 |
| Lubricants | — | 98–99 | 53–106 |
| Sintered products (e.g., clog wheels) | — | 98–99 | 5 |
| Paint | Up to 75 | 50–55 | 75% nub; amorphous powder flake |
| Braid used for sealing (e.g., on a ship) | 40–50 | 95 min | — |
| Graphitized grease (used in seamless steel tube manufacturing) | — | +99 | 38 max |
| Colloidal graphite | 100 | 99.9 | Colloidal |

## 2. Microwave Pretreatment

It is necessary to separate or break down the impurities that are present in the ores. Microwave irradiation, first proposed by Walkiewicz in 1991, is an effective, environmentally friendly way to heat the ore. The process produces isolated fractures in the intergranular and transgranular regions without triggering catastrophic failure, resulting in cracks along the grain boundaries; this breaks down the impurities that are locked in a "honeycomb" pattern [19]. According to Özbayoğlu et al. [20], the eliminated contaminants are primarily moisture, sulphur, and other volatile salts. For a low-grade graphite sample, effective impurity removal results in a marked rise in the fixed carbon content of the concentrates. The technique has important advantages such as lowering the work index and reducing the wear and tear on the mill, mill liners, and grinding media [19, 21–23]. It has been found to be quite effective for the flotation of coal and ilmenite ores, so it can be expected to have good potential for beneficiating graphite as well [24, 25].

## 3. Grinding

### 3.1. Conventional Grinding Methods: Ball and Rod Milling

The two most used methods for grinding graphite ore are ball and rod mills.

A cylindrical shell that spins about its axis is used in ball milling. Graphite ore and chrome steel or stainless steel balls are placed inside the shell, whose interior is lined with an abrasion-resistant material, usually a manganese-steel alloy. Grinding is mostly accomplished through the impact of the balls on the edges of the ore particle pile at the bottom; secondarily, the ore is ground by the friction created by the slipping balls [26]. This results in size reduction by means of compression and shearing forces, which arrange the ore particles in an anisotropic manner [6, 27]. High-intensity milling, on the other hand, produces amorphous carbon, compromising flake size [28, 29]. Ball mills are easy to use and cheap to run [6, 30].

A rod mill has a cylindrical body similar to that of a ball mill, except that instead of balls it uses rods. The rod charge occupies 35–40% of the mill's capacity [31]. Large particles are crushed to a smaller average particle size in a rod mill by being trapped between the rods while moving toward the discharge end.
As a result, the size of the particles at the two ends of the rods varies and is dependent on the mill's length, feed rate, and grinding speed [32]. A crucial aspect of this approach is its improved ability to grind big particles. Rod mills are chosen over other types of grinding whenever the ore is sticky. Compared to ball milling, rod milling generally does less harm to the size and shape of the flake. Due to the relatively smaller surface area of the rods, the rod mill uses more energy [31].

### 3.2. Nonconventional Grinding and Regrinding: Stirred Milling, Jet Milling, Delamination, and Attrition Milling

Traditional grinding methods like ball milling or rod milling do not take into account impurities that exist in the layered graphite [6]. In contrast to strong intralayer covalent bonding, the weak Van der Waals forces between the graphite layers are easily overcome [3, 33]. Such layered minerals produce an anisotropic structure upon grinding. Thus, abrasive forces must be applied instead of compressive or shearing forces [34] that can be achieved using a stirred mill, which consists of a cylindrical shell, stirrer, shear blade, and distribution disc. The stirred mill rotates the pulp while agitating the ore to grind it while minimizing impact force and preventing overgrinding. It was reported that stirred milling causes less damage to flake shape and size as compared to rod milling and ball milling because the grinding media or pulp rubs against each other due to various rotational speeds, resulting in an active force leading to the release of layers of the mineral [33]. Jet milling, on the other hand, uses high-impact stress to keep the ore's flaky structure and nonuniform shape on large particles [35].

Earlier, it was believed that graphite regrinding, which limited the maximum fixed carbon concentration to 95%, was useless because graphite coats gangue particles and makes them floatable [36]. The graphite middling delaminates when it is reground in a slow-speed ball mill employing a flint pebble grinding media. This method is superior to traditional milling for the preparation of flake graphite since it has little effect on the size and form of the graphite flakes. This method produced fixed carbon contents of up to 98% [37].

Attrition milling can be utilized as the final stage of liberation by grinding. The attrition method is used to selectively separate tiny particles. This is a very effective method if a small particle size of graphite is required for its usage as a lubricant. Attrition can keep the flaky form and crystallinity of graphite while reducing 90% of a 150 μm sample to 1 μm [38].
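The grinding routes above are usually compared in terms of the specific energy they demand, which is governed by the ore's work index (the quantity that the microwave pretreatment section notes can be lowered). As general comminution background rather than a result from this review, Bond's law estimates that energy from the work index and the 80% passing sizes of feed and product; the values in the sketch below are arbitrary examples.

```matlab
% Bond's law: specific grinding energy W (kWh/t) from the work index Wi and
% the 80% passing sizes of feed (F80) and product (P80), both in microns.
% Values below are arbitrary examples, not data from the review.
Wi  = 12;                                 % work index of the ore, kWh/t
F80 = 2000;                               % feed 80% passing size, microns
P80 = 150;                                % product 80% passing size, microns
W = 10 * Wi * (1/sqrt(P80) - 1/sqrt(F80));
fprintf('Specific grinding energy: %.2f kWh/t\n', W);
```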
## 4. Gravity Separation

This technique is based on the fundamental idea of the disparity between the specific gravities of the ore and contaminants. The simplicity of the process, cost-effectiveness, and environmental friendliness are this method's defining characteristics [39].
It has been reported that when fine graphite particles (less than 150 μm) are passed through a hydraulic classifier, they consume 34% less acid in the leaching process.Due to the technique’s increased environmental friendliness, it is thought to be even more environmentally beneficial than traditional froth flotation [40, 41]. Gravity separation can also be used if it is necessary to concentrate a specific component from the tailings [6]. ## 5. Froth Flotation The separation of hydrophobic minerals from hydrophilic contaminants is an important application of this technique. It is a physiochemical technique that August and Adolph carried out for graphite for the first time in 1877 [40, 42]. This concentrates them by taking advantage of the hydrophobic properties of graphite and related ores. The studies indicated that when compared to other concentration processes, the froth flotation process required three to four times fewer samples to produce the same amount of concentrated ore. When compared to alternative techniques, it can also dramatically lower the expense of tailing management and treatment [43]. Depending on the application, two alternative types of flotation methods are used for graphite on an industrial scale: mechanical and column flotation. ### 5.1. Column Flotation This occurs in a cylindrical flotation column. With the aid of a bubble generator, gas bubbles are introduced from the bottom of the column while the slurry is fed from the top. Gas particles disperse into the slurry as a result of a concentration differential. A counter-current laminar flow results from this. As a result of this action, there is a high relative velocity and an increased likelihood of a collision [44]. The froth zone, collection zone, and scavenger cyclone zone are the three zones that form in the flotation column, as shown in Figure 1. When water (a diluent) is added, froth with hydrophobic pure graphite particles forms at the top of the column [45]. The froth selectivity increases with the froth height. The improved ore and wash water mixing caused by the closer proximity of the wash water inlet to the pulp-froth interface is what accounts for the increased selectivity [46]. The froth zone has an air holdup (volume of liquid displaced by air) of about 80% [44].Figure 1 Schematic diagram of flotation column.Superficial gas velocity is one of the most significant variables in the column flotation process. Low gas holdup due to a lower gas velocity of the flow shows a negative impact on the grade and recovery of the ore. The reduced turbulence caused by column flotation, in addition to its low operating cost, is a significant benefit. The energy that the turbulence contributes leads to the unwanted dissociation of the mineral particle from the bubble. Negative bias is used to keep the flow of tailings lower than the feed flow to recover more flaky graphite [44, 47, 48]. The bubbles in column flotation have a large lower limit in terms of diameter. As a result, microparticles find it challenging to collide with the froth. The water’s streamlines allow the fine particles to move. The float recovery is decreased in this way, but only for fine and ultrafine particles [48, 49]. ### 5.2. Flotation by Mechanical Cell This process is used to beneficiate mineral particles that are difficult to remove. These are the tiny particles of graphite. Figure2 shows an example of flotation cell, as reported by Kuan [50]. In mechanical flotation cells, froth is produced by agitation with a rotating impeller. 
The high-speed impellers release air bubbles into the system [5]. The impeller rotation speed is a crucial process parameter; studies show that bubble-particle adhesion depends on this component [51]. In this sort of cell, only the relative velocity of bubbles and particles close to the impeller matters, so there are fewer collisions.

Figure 2 A representation of a flotation cell.

In addition, the residence period is shorter than the overall amount of time a bubble spends in the pulp. For these reasons, the concentration of graphite particles achieved is lower. pH, promoter addition, feed size, impeller speed, viscosity, and collector dose are the major factors [44, 52, 53]. The lack of spargers in mechanical cells gives them a significant advantage over flotation columns. In flotation columns, spargers are employed to create bubbles; they need constant upkeep because they are vulnerable to particle obstruction, degradation, and frequent malfunctions. Another benefit of mechanical cells is their ease of use and lower cost for small-scale applications [54].

### 5.3. Reagents Used in Graphite Flotation

Flotation reagents are used to create a pulp environment that is favourable for separating undesirable gangue particles from valuable minerals [55]. Reagents such as collectors, frothers, and depressants are crucial to the effectiveness of the flotation concentration process because they govern properties such as mineral hydrophobicity, bubble size, contact angle, bubble formation, and particle adherence. These properties are discussed in detail below.

#### 5.3.1. Frother

Nonionic heteropolar compounds known as "frothers" can stabilise froth and selectively restrict the entrainment of gangue particles into it. They promote greater bubble formation when disseminated. While the hydrophobic end of the frother selectively adsorbs on air, the polar end forms a hydrogen bond with water. This reduces the water's surface tension and improves foam stability [5]. Figure 3 shows a schematic diagram of the interaction of a frother with graphite and the hydrophilic and hydrophobic parts [5]. Only frother-acting surfactants are used when there is no oxidation of the graphite ore in any shape or form. Alcohols, alkoxy paraffins, polyglycols, and polyglycol ethers are some of the commercially available frothers [56].

Figure 3 Interaction of frother with graphite and hydrophilic and hydrophobic parts.

When frothers are added, graphite that has little to no contamination floats with ease. Methyl isobutyl carbinol (MIBC) and fuel oil are frequently used frothers [37]. Isoctanol, pine oil, MIBC, and tri(propylene glycol) butyl ether were the four frothers compared with each other for recovery and grade in graphite flotation. The fixed carbon content of the recovered graphite is seen to decrease as the frother dosage is increased. According to Öney and Samanli [57], MIBC was the best of the four frothers.

Pine oil and Dowfroth, for example, are considered better alternatives to MIBC because they are safer for the environment [58]. A popular frother is pine oil, which is obtained from pine stumps or turpentine [4]. However, due to its irritating properties, which are indicated in Table 3, the use of pine oil is gradually decreasing and frequently prohibited. In addition, when employed as a frother, coal oil is more selective than fuel oil [2].

Table 3 Safety risk ratings on flotation frothers [58].
| Reagent type | Reagent | Flash point (°C) | Risk rating | Exposure risk | Risk rating | Environmental risk | Risk rating | Total risk rating |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Aliphatic alcohol | MIBC | 39 | 5 | Odour irritant | 3 | Minimal | 2 | 10 |
| Cyclic alcohols | Pine oil (C10H18O) | 78 | 3 | Irritant | 2 | Minimal | 2 | 7 |
| Aromatic alcohols | Cresylic acid (CH3C6H4OH) | 81 | 3 | Irritant | 2 | Harmful | 4 | 9 |
| Alkoxy-type | 1,1,3-Triethoxy butane (C10H22O3) | 80 | 3 | Irritant | 2 | Nonhazardous | 2 | 7 |
| Polyglycol-type | Dowfroth 200 | 195 | 1 | Minimal | 1 | Minimal | 2 | 4 |

For graphite flotation, the MIBC, pine oil, cresylic acid, TEB, and Dowfroth 200 frothers were examined and compared [56]. They are compared on risk-rated attributes such as flash point, exposure risk, and environmental risk, as seen in Table 3. According to the information given in Table 3, Dowfroth 200 is one of the safest frothers.

A numeric term called the "hydrophilic-lipophilic balance" (HLB) comes into play when the frothing ability of a frother is considered; it indicates the ratio of hydrophilic to lipophilic groups present in the frother. Dowfroth 400, Nasfroth AHE, Dowfroth 200, Nasfroth 301, and MIBC were evaluated as frothers for graphite with HLB values of 9.9, 8.2, 8, 6.5, and 6.1, respectively. The flotation performance increases in the order Nasfroth AHE < MIBC < Dowfroth 400 < Dowfroth 200 < Nasfroth 301. Low HLB and low molecular weight of the frother increase the flotation yield [59].

The polar interactions that result from the wettability of hydrophobic solids (like graphite) by polar adsorbates are detrimental to the flotation process because they reduce the adherence of the mineral to the air bubbles. It is therefore important to develop an approach that takes into account the polar interactions between substances that have previously been adsorbed and those in solution. Jańczuk et al. [59] list as the relevant variables the type of frother, its concentration, the surface tension of the solvent, and the contact angle.

#### 5.3.2. Collector

Compounds that have a polar and a nonpolar group joined together are called collectors. The primary purpose of the collector is to make the mineral's surface hydrophobic to improve the mineral's capacity to float [37, 55]. Since graphite is a hydrophobic mineral, it should float on the water's surface by nature [60]. However, contamination of the graphite surface in an oxidising environment, whether from simple oxidation, nitration, or the presence of other hydrophilic impurities, causes the deposition of excess charges on the surface, similar to those of a hydrophilic solid, and this calls for the addition of a collector, as shown in Figure 4. The hydrophobic layer of the nonpolar surfactant that serves as the collector is applied to this contaminated hydrophilic surface [56]. In the flotation of graphite, hydrocarbons such as paraffin, diesel, and kerosene are used, as well as ionic collectors such as potassium amyl (or ethyl) xanthates, dithiocarbamates, and dithiophosphates [57, 61, 62]. Typically, hydrophobic materials like graphite are floated using nonpolar collectors such as kerosene, diesel, and fuel oil [53, 55, 61, 63, 64]. When the collector properties of diesel, n-dodecane, and kerosene oil were compared, diesel produced the best outcomes. The performance of dodecane and kerosene is dose dependent; kerosene was found to be more efficient at low collector dosages [57]. Diesel and kerosene are poorly soluble in aqueous media, and their emulsification improves flotation performance. Hexanol and octanol were used as coemulsifiers in that work.
Diesel-hexanol systems are more efficient than diesel-octanol systems because diesel is more evenly dispersed in hexanol than in octanol, which leads to smaller collector droplet sizes and more collisions between mineral particles and the coemulsified collector, improving recovery [65]. In contrast to the diesel-pine oil system, which fixed only 90% of the carbon, the IBM/07 mixture of different hydrocarbons and terpenes used as a collector fixed 96% of the carbon [66].

Figure 4 Polar ends of the collector interacting with the surface of a graphite ore particle.

The Greenness Index, an evaluation tool based on the safety data sheet (SDS), assesses reagents on the parameters of health impact, general properties, odor, fire safety, and stability. It offers useful guidance for making the most sustainable reagent choice [67]. Most commercially available surfactants are chemically synthesized; they are toxic, poorly biodegradable, and may generate harmful by-products [68].

It has been found that using a single reagent, such as the alcohol-ether-based collector "Sokem 705C," is more environmentally friendly, more effective, and more economical than using a dual-reagent system such as diesel-pine oil [64]. The carbohydrate-attached lipids known as glycolipids, which are generated from amino acids, carbohydrates, and vegetable oils, have the potential to replace nonrenewable petroleum-based products like diesel and kerosene. One group of researchers, for example, replaced oleic acid with soybean oil as the collector and achieved comparable results [69]. In addition, biosurfactants are molecules with a polar and a nonpolar group linked to them that are biologically manufactured by microorganisms and are capable of producing outcomes similar to those of synthetic surfactants [68].

A secondary source of graphite is the depleted Li-ion battery. Lithium carbonate and graphite, which are extracted via a unique method termed "grinding flotation," are the two valuable products obtained after purification. As flotation collectors, decane, dodecane, and Fenton reagent (a combination of ferrous sulphate and hydrogen peroxide) can be employed [9, 70]. If lithium is subsequently produced chemically from the recovered lithium carbonate, additional economic benefits can be obtained.

When microflotation of amorphous graphite was carried out with different droplet sizes of the collector, it was observed that the rate constant of the process and the recovery increased as the droplet size decreased. This behaviour is explained by the principle that the smaller the droplet size of the collector, the more surface area is available for exposure to the hydrophobic mineral. Thus, collisions of graphite particles with the bubbles become faster and more numerous, and the bubble strength is also found to increase [4, 71, 72]. A smaller droplet size also means a smaller contact area is required, resulting in lower collector consumption and hence savings in reagent costs.

#### 5.3.3. Depressant

The chemical compounds known as depressants, often referred to as inhibitors, specifically block the flotation of other minerals while not affecting the flotation of the desired mineral. In general, there are two types of depressants: organic and inorganic [17]. In graphite flotation, inorganic depressants such as sodium silicate, sodium cyanide, lime, and sodium sulphite are used [53, 55, 64]. These act through electrostatic interactions to keep the foam stable [73].
In contrast to sodium cyanide, which is used to depress pyrite, sodium silicate is utilized to depress siliceous gangue particles. Dextrin, starch, tannic acid, and carboxymethyl cellulose are examples of organic reagents that also possess depressant characteristics [6, 55, 73–75]. They perform very well with hydrophobic minerals like graphite, and they stabilise the pulp suspension through steric effects [73].
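Throughout this flotation section, performance is discussed in terms of grade and recovery without the defining formulas. As standard mineral-processing background (not taken from this review), the sketch below computes concentrate yield and carbon recovery from feed, concentrate, and tailings assays using the two-product formula; the assay values are invented for illustration.

```matlab
% Standard two-product formula: recovery of graphite (carbon) to the
% concentrate from feed, concentrate, and tailings assays (% fixed carbon).
% Assay values are invented for illustration only.
f = 12;                                   % feed grade, % fixed carbon
c = 90;                                   % concentrate grade, % fixed carbon
t = 2;                                    % tailings grade, % fixed carbon
massYield = 100 * (f - t) / (c - t);      % concentrate mass pull, % of feed
recovery  = massYield * c / f;            % % of carbon reporting to concentrate
fprintf('Yield %.1f%%, recovery %.1f%%\n', massYield, recovery);
```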
They need constant upkeep because they are vulnerable to particle obstruction, degradation, and frequent malfunctions. Another benefit of mechanical cells is their ease of use and lower cost for small-scale applications [54]. ## 5.3. Reagents Used in Graphite Flotation To create a pulp environment that is favourable for separating undesirable gangue particles from valuable minerals, flotation reagents are used [55]. The qualities of the reagents, such as collectors, frothers, and depressants, which are crucial to the effectiveness of the flotation process of concentration, include mineral hydrophobicity, bubble size, contact angle, bubble formation, and particle adherence. These properties are discussed below in detail. ### 5.3.1. Frother Nonionic heteropolar compounds known as “frothers” can stabilise froth and selectively restrict the entrainment of gangue particles into it. They promote greater bubble formation when disseminated. While the hydrophobic end of the frother selectively adsorbs on air, the polar end forms a hydrogen bond with water. This reduces the water’s surface tension and improves foam stability [5]. Figure 3 shows a schematic diagram for the interaction of frother with graphite and the hydrophilic and hydrophobic parts [5]. Only frother-acting surfactants are used when there is no oxidation in the graphite ore in any shape or form. Alcohols, alkoxy paraffin, polyglycols, and polyglycol ethers are some of the frothers that are easy to buy [56].Figure 3 Interaction of frother with graphite and hydrophilic and hydrophobic parts.When frothers are added, graphite that has little to no contamination floats with ease. Methyl isobutyl carbinol (MIBC) and fuel oil are frequently used frothers [37]. Isoctanol, pine oil, MIBC, and tri (propylene glycol) butyl ether were the four frothers that were compared to each other to provide better recovery and grade on graphite flotation. The fixed carbon content of the recovered graphite is seen to decrease as the frother dosage is increased. According to Öney and Samanli [57], MIBC was the best of the four blenders.For example, pine oil and Dowfroth are better alternatives to MIBC because they are safer for the environment [58]. A popular frother is made of pine oil, which is obtained from pine stumps or turpentine [4]. However, due to its irritating properties, which are indicated in Table 3, the use of pine oil is gradually decreasing and frequently prohibited. In addition, when employed as a frother, coal oil is more discerning than fuel oil [2].Table 3 Safety risk ratings on flotation frothers [58]. ReagentsFlash point °CRisk ratingExposure riskRisk ratingEnvironmental riskRisk ratingTotal risk ratingAliphatic alcoholMIBC395Odour irritant3Minimal210Cyclic alcoholsPine oil C10H18O783Irritant2Minimal27Aromatic alcoholsCresylic acid CH3C6H4OH813Irritant2Harmful49Alkoxy-type1,1,3-Triethoxy butane C10H22O3803Irritant2Nonhazardous27Polyglycol-typeDowfroth 2001951Minimal1Minimal24For graphite flotation, the MIBC, pine oil, cresylic acid, TEB, and Dowfroth 200 frothers were examined and compared [56]. These are compared for risk-rated attributes such as flash point, exposure, and environmental risks, as seen in Table 3. According to the information given in Table 3, Dowfroth 200 is one of the safest frothers.A numeric term called “hydrophilic-lipophilic balance” (HLB) comes into play as the frothing ability of a frother is considered. This indicates the ratio of hydrophilic to lipophilic groups present in the frother. 
Dowfroth 400, Nasfroth AHE, Dowfroth 200, Nasfroth 301, and MIBC were evaluated as frothers for graphite with HLB values of 9.9, 8.2, 8, 6.5, and 6.1, respectively. The flotation performance increases in the order of Nasfroth AHE < MIBC < Dowfroth 400 < Dowfroth 200 < Nasfroth 301. Low HLB and low molecular weight of the frother increase the flotation yield [59].The polar interactions that result from the wettability of hydrophobic solids (like graphite) with polar adsorbates are detrimental to the flotation process because they reduce the adherence of the mineral to the air bubbles. It is important to develop a technique that takes into account the polar interactions between substances that have previously been adsorbed and those in solution. Jańczuk et al. [59] list as variables the type of frother, the concentration, the surface tension of the solvent, and the contact angle. ### 5.3.2. Collector The compounds that have a polar and a nonpolar group joined together are called collectors. The primary purpose of the collector is to make the mineral’s surface hydrophobic to improve the mineral’s capacity to float [37, 55]. Since graphite is a hydrophobic mineral, it should float on the water’s surface by nature [60]. But contamination of the graphite surface in an oxidising environment, whether from simple oxidation, nitration, or the presence of other hydrophilic impurities, causes the deposition of excess charges on the surface, similar to those of a hydrophilic solid, and this calls for the addition of a collector, as shown in Figure 4. The hydrophobic layer of the nonpolar surfactant that serves as the collector is applied to this polluted hydrophilic surface [56]. In the flotation of graphite, hydrocarbons such as paraffin, diesel, and kerosene are used, as well as ionic collectors such as potassium amyl (or ethyl) xanthates, dithiocarbamates, and dithiophosphate [57, 61, 62]. Typically, hydrophobic materials like graphite are floated using nonpolar collectors such as kerosene, diesel, and fuel oil [53, 55, 61, 63, 64]. When the collector properties of diesel, n-dodecane, and kerosene oil were compared, diesel produced the best outcomes. The performance of dodecane and kerosene is dose dependent. Kerosene was discovered to be more efficient at low collector dosages [57]. In aqueous media, diesel and kerosene are less soluble, and their emulsification improves flotation performance. Hexanol and octanol were used as coemulsifiers in this experiment. Diesel-hexanol systems are more efficient than diesel-octanol systems because diesel is more evenly dispersed in hexanol than in octanol, which leads to smaller collector droplet sizes and more collisions between mineral particles and the coemulsified collector, improving recovery [65]. In contrast to the diesel-pine oil system, which only fixed 90% of the carbon, the IBM/07 mixture of different hydrocarbons and terpenes was used as a collector to fix 96% of the carbon [66].Figure 4 Polar ends of collector interaction on the surface of graphite ore particle.According to the safety data sheet (SDS), the Greenness Index, an evaluation tool, assesses the reagents based on the parameters of health impact, general properties, odor, fire safety, and stability. It offers useful guidance for making the most sustainable reagent choice [67]. 
Most commercially available surfactants are chemically synthesised; they are toxic, poorly biodegradable, and may generate harmful by-products [68]. It has been found that using a single reagent, such as the alcohol-ether-based collector "Sokem 705C," is more environmentally friendly, more effective, and more economical than using a dual-reagent system such as diesel-pine oil [64]. Glycolipids, carbohydrate-attached lipids derived from amino acids, carbohydrates, and vegetable oils, have the potential to replace nonrenewable petroleum-based products like diesel and kerosene; one group of researchers, for example, replaced oleic acid with soyabean oil as the collector and achieved comparable results [69]. In addition, biosurfactants, molecules with linked polar and nonpolar groups that are biologically produced by microorganisms, are capable of delivering outcomes similar to those of synthetic surfactants [68].

A secondary source of graphite is the depleted Li-ion battery. Lithium carbonate and graphite, extracted via a method termed "grinding flotation," are the two valuable products obtained after purification. As flotation collectors, decane, dodecane, and Fenton reagent (a combination of ferrous sulphate and hydrogen peroxide) can be employed [9, 70]. If lithium is then produced chemically from this lithium carbonate, it can benefit the economy in other ways as well.

When microflotation of amorphous graphite was carried out with different droplet sizes of the collector, the rate constant and the recovery were observed to increase as the droplet size decreased. This behaviour is explained by the principle that the smaller the collector droplet, the more surface area is available for exposure to the hydrophobic particle surface. The collision of graphite particles with the bubbles therefore becomes faster and more frequent, and the bubble strength is also found to increase [4, 71, 72]. Because a smaller droplet size exposes more collector surface per unit volume, less collector is consumed, saving reagent costs.

### 5.3.3. Depressant

Depressants, often referred to as inhibitors, are chemical compounds that specifically block the flotation of other minerals while not affecting the flotation of the desired mineral. In general, there are two types of depressants: organic and inorganic [17]. In graphite flotation, inorganic depressants such as sodium silicate, sodium cyanide, lime, and sodium sulphite are used [53, 55, 64]. These act through electrostatic interactions to keep the foam stable [73]. Sodium silicate is used to depress siliceous gangue particles, whereas sodium cyanide is used to depress pyrite. Dextrin, starch, tannic acid, and carboxymethyl cellulose are examples of organic reagents that also possess depressant characteristics [6, 55, 73–75]. They perform extremely well with hydrophobic minerals like graphite and stabilise the pulp suspension through steric effects [73].
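The droplet-size argument above can be illustrated with two standard relations: the surface area per unit volume of a spherical droplet scales as 6/d, and batch flotation recovery is often described by a first-order model, R(t) = R_max(1 − e^(−kt)). The Python sketch below is purely illustrative; the droplet diameters, R_max, and the baseline rate constant are hypothetical, and the assumption that the rate constant scales with the droplets' specific surface area is an idealisation of the trend reported in [4, 71, 72], not a fitted model.

```python
import math

def specific_surface_area(d_um: float) -> float:
    """Surface area per unit volume of a spherical droplet (1/um): 6 / d."""
    return 6.0 / d_um

def first_order_recovery(k: float, t_min: float, r_max: float = 0.95) -> float:
    """Classical first-order batch flotation model: R(t) = R_max * (1 - exp(-k t))."""
    return r_max * (1.0 - math.exp(-k * t_min))

# Hypothetical collector droplet diameters (um) and a hypothetical baseline rate constant (1/min).
coarse_d, fine_d = 20.0, 5.0
k_coarse = 0.5
# Illustrative assumption: the rate constant grows in proportion to the droplets' specific surface area.
k_fine = k_coarse * specific_surface_area(fine_d) / specific_surface_area(coarse_d)

for label, k in [("coarse emulsion", k_coarse), ("fine emulsion", k_fine)]:
    print(f"{label}: k = {k:.2f} 1/min, recovery after 3 min = {first_order_recovery(k, 3.0):.2%}")
```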
## 6. Electroflotation

Conventional flotation methods cannot recover a considerable fraction of fine and ultrafine graphite particles because of poor bubble-particle collision efficiency, so reduced recovery is observed for low-grade graphite ores [49, 76]. This lowers the net profitability of the process in large-scale operations. In electroflotation, microbubbles are produced by the electrolysis of water, during which hydrogen and oxygen are released at the cathodic and anodic surfaces, respectively [49, 77, 78]. The smaller bubbles trigger flocculation, which results in a greater "apparent particle size" [49, 79, 80]. This speeds up bubble-particle adhesion. The hydrogen and oxygen bubbles formed in this way are also more active towards hydrophobic materials than the air bubbles created by conventional approaches. The method offers higher efficiency and better control over the bubble flow. Consequently, electroflotation can be a very successful method for the flotation of ultrafine graphite.
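Because the bubbles originate from water electrolysis, the gas generation rate, and hence the bubble supply, scales with the applied current through Faraday's law (two electrons per H2 molecule at the cathode, four per O2 molecule at the anode). The sketch below gives a rough, order-of-magnitude estimate under an assumed cell current and ideal-gas conditions; the current value is hypothetical and not taken from the cited studies.

```python
# Rough estimate of electrolytic gas generation in an electroflotation cell using
# Faraday's law. The cell current and conditions are assumed for illustration only.
F = 96485.0        # Faraday constant, C/mol
R = 8.314          # gas constant, J/(mol K)
T = 298.15         # K, assumed cell temperature
P = 101325.0       # Pa, assumed cell pressure

current_A = 5.0    # assumed cell current

# Moles of gas produced per second: H2 at the cathode (2 e- per molecule),
# O2 at the anode (4 e- per molecule).
n_h2 = current_A / (2 * F)
n_o2 = current_A / (4 * F)

# Ideal-gas volume generated per minute, converted to millilitres.
ml_per_min = lambda n: n * R * T / P * 60 * 1e6
print(f"H2: {ml_per_min(n_h2):.1f} mL/min, O2: {ml_per_min(n_o2):.1f} mL/min at {current_A} A")
```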
## 7. Ultrasound-Assisted Flotation

High-purity flake graphite has a high economic value in the market. Because the trapped impurities are transported with the graphite into the froth during the traditional flotation process, obtaining high-purity flake graphite is quite challenging [81–83]. The maximum purity of graphite that can be produced using conventional flotation is about 95%, although this also depends on parameters such as the ore grade, the number of flotation stages, and the percentage crystallinity [4]. The three stages of ultrasound treatment, applied at the rougher, cleaner, and recleaner stages, exploit the difference in strength between the attachment of locked impurities and the graphite flake structure. In contrast to traditional grinding, the treatment selectively removes the trapped impurities while causing relatively little damage to the flake size and shape. Consequently, it is a promising industrial tool [84].

## 8. Air Elutriation

Compared to amorphous graphite, flake graphite has a much higher market value (Table 1). However, the graphite produced by the froth flotation method is a blend of flake and amorphous graphite, and the two should be separated to realise the greatest economic potential. Air elutriation is employed for this purpose. Figure 5 shows the air elutriation mechanism for coarse particles [85]. In the large-scale configuration, blowers create an upward air flow at the base of vertical pipes, and the mixed graphite feed is added at roughly one-third of the pipe length from the bottom. The air flow velocity is restricted to between 0.9 and 4.6 m/s. The amorphous graphite, together with any remaining impurities, falls to the bottom of the pipe, while pure flake graphite rises to the top [6, 86].

Figure 5 Air elutriation mechanism of coarse particles.

## 9. Flushing Process

Where higher quality is required, the amorphous graphite obtained as the bottom product of air elutriation can be further cleaned of adhering contaminants by a flushing procedure. In this process, the graphite particles are passed through a kerosene-based oil phase before being transferred to an aqueous suspension using sodium carbonate as a surfactant. High-speed centrifugation then removes the fine clay and ash contaminants by generating small oil droplets [6, 87].

## 10. Techno Economics

This section considers the cost and energy requirements of beneficiation by flotation. For the flotation of graphite ore, two main costs are involved: the cost of the ore material and the cost incurred in the beneficiation process. The beneficiation costs include plant maintenance, consumables, electricity supplies, utilities, flotation reagents (collector, frother, and depressant), and skilled and unskilled labour.

According to the prefeasibility report of the Bainibasa Graphite Mining and Beneficiation Project, Orissa, the feed throughput of crude graphite ore is 13,272 TPA, which is processed to obtain 841 TPA of clean graphite with 85% and 65% fixed carbon (FC) content. The total water requirement is 130 kilolitres/day. The cost of the ore material is between 11,141 and 14,766 INR/ton of finished product for the first five years after the flotation plant installation, while the cost of the beneficiation process is between 10,467 and 11,266 INR/ton of finished product over the same period. The total amount spent on processing the crude ore into the desired purified product is therefore between 21,608 and 26,032 INR/ton of finished product.
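The total processing cost quoted above is simply the sum of the ore-material cost and the beneficiation cost per tonne of finished product. The short sketch below restates that arithmetic using only the figures given in the prefeasibility report; the implied overall mass yield is added as a convenience calculation.

```python
# Minimal sketch of the techno-economic arithmetic above (figures in INR per ton
# of finished product, taken from the prefeasibility report quoted in the text).
ore_material_cost = (11_141, 14_766)      # low/high estimate for the first five years
beneficiation_cost = (10_467, 11_266)     # low/high estimate for the first five years

total_cost = tuple(o + b for o, b in zip(ore_material_cost, beneficiation_cost))
print(f"Total processing cost: {total_cost[0]:,} - {total_cost[1]:,} INR/ton of finished product")
# -> Total processing cost: 21,608 - 26,032 INR/ton of finished product

# Overall mass yield implied by the plant figures (13,272 TPA feed -> 841 TPA clean graphite).
yield_pct = 841 / 13_272 * 100
print(f"Overall plant yield: {yield_pct:.1f}% of the crude ore feed")   # about 6.3%
```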
### 10.1. Energy Requirement

The operational capacity of the beneficiation plant is 30 TPH, so the plant operates for 13,272 TPA / 30 TPH = 442.4 ≈ 443 hours per annum. The power supply is 400 kVA, i.e., approximately 400 kW (400 kJ/s). The energy requirement of the beneficiation plant alone is therefore (power of the supply) × (hours of operation) = 400 kJ/s × 443 h × 3,600 s/h = 637,920,000 kJ ≈ 637,920 MJ per annum.
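The same energy estimate can be expressed in a few lines of code. This is only a restatement of the arithmetic above, using the plant figures already quoted in the text.

```python
import math

# Restating the plant energy estimate above; all figures come from the text.
feed_tpa = 13_272          # crude ore feed, tonnes per annum
capacity_tph = 30          # plant throughput, tonnes per hour
power_kw = 400             # supply power, taken as ~400 kW (400 kJ/s)

hours_per_year = feed_tpa / capacity_tph             # 442.4 h, rounded up to 443 h in the text
energy_mj = power_kw * math.ceil(hours_per_year) * 3600 / 1000   # kW x h x (s/h) -> kJ, then /1000 -> MJ

print(f"Operating hours: {hours_per_year:.1f} h/annum (≈ {math.ceil(hours_per_year)} h)")
print(f"Energy requirement: {energy_mj:,.0f} MJ/annum")   # about 637,920 MJ
```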
## 11. Comparative Assessment of Various Beneficiation Techniques

The merits and demerits of the different beneficiation techniques are summarized in Table 4.

Table 4 Merits and demerits of different beneficiation techniques.

| Beneficiation technique | Merits | Demerits |
| --- | --- | --- |
| Microwave pretreatment | (i) Selective and easy heating; (ii) energy efficient; (iii) environment-friendly output | (i) Low absorption of microwaves by the ore; (ii) overall high energy requirement; (iii) longer processing time |
| Comminution | (i) Enhanced surface area; (ii) modifies particle-size distribution as per industry requirements | (i) Very low gangue elimination; (ii) can alter and damage particle shape; (iii) further processing is required |
| Flotation | (i) Cost effective; (ii) suitable for hydrophobic material; (iii) specific gangue removal is possible; (iv) high-purity output obtained, up to 98% | (i) Multiple rougher/cleaner stages required; (ii) use of chemicals, which might be hazardous |
| Gravity separation | (i) Inexpensive; (ii) chemical free; (iii) low heating involved | (i) Limited extent of separation; (ii) high capital investment; (iii) large space requirement |
| Air elutriation | (i) Easy segregation by varying air velocities; (ii) simple and easy maintenance | (i) Feed must have low moisture; (ii) very fine input feed is unfavourable |
| Electroflotation | (i) Higher efficiency; (ii) faster bubble-particle interaction compared with air bubble-particle interaction | (i) Complex process; (ii) high electric supply requirements |
| Ultrasound-assisted flotation | (i) High-purity product obtained; (ii) releases entrapped impurities | (i) Expensive; (ii) complex; (iii) less research done |

Comminution is responsible only for reducing the particle size and hence for producing a different particle-size distribution. Its benefits are that the ore particles can be brought into the size range demanded by industry and that the enhanced surface area improves the impact of downstream processes. It can eliminate gangue particles only to a very limited extent, so further processing is recommended to meet high-end purity standards for industrial use. Microwave irradiation facilitates easy and selective heating, is more energy efficient, and makes the output feed environmentally more benign by driving off sulphurous and nitrous constituents in gaseous form; however, the released gases are themselves harmful and toxic. Another major issue with microwave treatment of graphite is the poor absorption of microwave radiation by the graphite ore, and the overall energy requirement is high while a longer treatment duration is needed. This method is a pretreatment step suggested before comminution or chemical treatment.

Gravity separation is a cheap pretreatment method with only a limited ability to reduce gangue. Its advantages are that no heating is involved and no chemicals are used. However, the space requirements of a gravity separator are very high, and despite low operating costs, employing this physical method of beneficiation requires large capital investments.

Flotation is the most widely adopted beneficiation technique. It is highly cost effective and well suited to the natural hydrophobicity of graphite. It can reduce the gangue in the ore to a large extent, and specific gangue removal becomes easier with the help of depressants and conditioning reagents, depending on the type and nature of the gangue particles present in the crude graphite ore. However, it requires multiple rougher/cleaner stages and a few regrinding steps to obtain a high-purity end product in the desired size range. In addition, the use of chemicals is discouraged because it affects safety standards and releases residual pollutants that require further treatment. Compared with conventional flotation, electroflotation claims higher efficiency because bubble-particle adherence is faster (hydrogen and oxygen bubbles are more active towards hydrophobic material than air bubbles), which aids faster and better separation of ore from gangue. It is especially beneficial for extracting ultrafine graphite from low-grade ores. However, the process is complex and has high electrical supply requirements. Ultrasound-assisted flotation aims to release entrapped impurities and is beneficial if the objective is to produce a very high purity end product while preserving the flaky nature of the graphite.

Air elutriation helps segregate the graphite ore into different size ranges by adjusting the air velocity. Cleaning and maintenance are simple, and it outperforms traditional sieving techniques. However, wet feed is difficult to handle, and feeds of very small particle size cannot be processed.

## 12. Conclusions and Future Directions

### 12.1. Pretreatment Techniques

Pretreatment techniques for graphite flotation include comminution and microwave pretreatment. They loosen the contaminants that are locked in place, enabling more effective grinding, and comminution brings the product particles into the desirable size range. The most effective grinding processes are stirred milling and jet milling. For the treatment of middlings, a moderate-speed ball mill with a flint pebble grinding medium is suggested. Delamination, when used for regrinding, can produce coarse flake graphite with up to 98% fixed carbon. Attrition milling has been found to work well for generating smaller graphite flakes while causing the least degradation of form and shape. Although microwave irradiation has the ability to eliminate nitrous and sulphurous gangue constituents, very little research has so far been carried out on it.

### 12.2. Froth Flotation and Flotation Reagents

Froth flotation is one of the most economical, energy-efficient, and reliable beneficiation techniques for graphite. The studies indicate that froth flotation, which outperforms acid leaching when environmental considerations are kept in mind, can produce graphite samples of high purity with fixed carbon contents of up to 98%. Column flotation outperforms mechanical cell flotation when several cleaning stages are involved. When selecting the reagents, special attention must be paid to the "Greenness Index." Ethers and polyglycols, such as Nasfroth 301 and Dowfroth 200, have been determined to represent the least environmental risk and to be extremely safe to handle.
In addition, they have a higher flash point (195°C in the case of Dowfroth 200) and perform better than conventional industrial frothers like MIBC and pine oil. Despite pine oil's lower flash point, Nasfroth 301 is superior to Dowfroth 200 according to the HLB values. Kerosene and diesel are hazardous to the environment, and their use is therefore discouraged. A more effective collector reagent is Sokem 705C, an alcohol-ether mixture that is both environmentally friendly and economically viable. It has also been shown that IBM/07, a mix of terpenes and hydrocarbons, yields up to 96% fixed carbon and can be used as a collector in the graphite processing industry. Sodium silicate and sodium cyanide are acceptable inorganic depressants for siliceous gangue particles and pyrite, respectively. Organic polymers like starch are recommended to stabilise the pulp because of their steric effect. Newer research into environmentally benign flotation reagents points to vegetable oils (such as soyabean oil) and biosurfactants produced by microorganisms as potential substitutes for present-day collectors.

### 12.3. Other Beneficiation Techniques

Air elutriation and flushing can be performed sequentially after the froth flotation procedure to obtain flake and amorphous graphite of the best quality, suited to a high market value. Electroflotation is suggested as a means of enhancing fine and ultrafine particle recovery, which is difficult to achieve by conventional flotation. To meet the increasing demand for flaky graphite, ultrasound-assisted flotation is a promising tool because, unlike conventional grinding, it preserves the flaky shape and form. One or more beneficiation techniques can be combined to obtain the desired benefits of the different technologies.
---

*Source: 1007689-2023-03-21.xml*
--- ## Abstract Due to its numerous and major industrial uses, graphite is one of the significant carbon allotropes. Refractories and batteries are only a couple of the many uses for graphite. A growing market wants high-purity graphite with big flakes. Since there are fewer naturally occurring high-grade graphite ores, low-grade ores must be processed to increase their value to meet the rising demand, which is predicted to increase by >700% by 2025 due to the adoption of electric vehicles. Since graphite is inherently hydrophobic, flotation is frequently used to beneficiate low-grade ores. The pretreatment process, both conventional and unconventional; liberation/grinding methods; flotation methods like mechanical froth flotation, column flotation, ultrasound-assisted flotation, and electroflotation; and more emphasis on various flotation reagents are all covered in this review of beneficiation techniques. This review also focuses on the different types of flotation reagents that are used to separate graphite, such as conventional reagents and possible nonconventional environmentally friendly reagents. --- ## Body ## 1. Introduction Graphite is a natural crystalline allotrope of carbon that is greenish-black and shiny [1]. K. W. Scheele first chemically characterised graphite in 1779, and A. G. Werner later gave it the term “graphite,” which was derived from the Greek word “grapho,” which means “I write” [2]. There are many different physical and structural forms of carbon that exist in nature [3]. Van der Waals forces cause the parallel sheets that make up graphite’s structure to be weakly attracted to one another [3]. Carbon nanotubes, diamonds, and fullerenes are also crystalline materials [4]. Carbon atoms in graphite are sp2 hybrids with “pi” electrons in plane P-orbitals [5–8]. One out-of-plane electron per carbon atom gives carbon its metal-like characteristics, such as lustre and high electrical and thermal conductivities [9]. Being an allotrope of carbon, it also possesses nonmetallic qualities, including inertness and high lubricity. It is the perfect material for use in fuel cells, refractories, lithium-ion batteries, fibre optics, and electrical vehicles because it combines metallic and nonmetallic qualities [4]. Graphite can take on a variety of shapes, including hexagonal, rhombohedral, and turbostratic structures [6, 7]. According to Jin et al. [10], there are three different forms of naturally occurring graphite: crystalline (flake), microcrystalline (amorphous), and vein (lump) [10]. Their attributes are presented in Table 1. Metamorphism of carbon compounds in sedimentary rocks produces graphite. The majority of it is found in the rocks’ fractures, pockets, veins, and scattered forms [2, 13]. The most sought-after of these kinds, flake, is found inside the rock in flaky form. The viability of mining a graphite ore depends on how many flakes are there. Due to its superior heat and erosion resistance compared to other graphite forms, refractory manufacturers are the largest consumers of flaked graphite. In addition, the flake form of graphite is preferred since lump graphite is more expensive and less common. The most common type of graphite, amorphous graphite, has significant commercial value and is one of the top stocks. With the right processing, one can achieve up to 99% purity from this low-grade ore [10]. Lump graphite has a high market value since it is very rare and has higher purity and crystallinity than amorphous graphite. 
This limited its use to applications that required its exceptional qualities, including high electrical conductivity and purity.Table 1 Properties of different graphite forms [4, 6, 11, 12]. PropertyAmorphousFlakeLumpOccurrenceFormed as a result of anthracite coal seams metamorphosingThe development of small flakes inside the rock is the result of regional metamorphismLump graphite is created when hydrothermal activity changes the carbon compounds inside the rockDescriptionMost prevalent, carbon quality: 20%–40% (low)Good abundance, carbon grade <90%Carbon grade >90%, rarest in supplyMajor producersChina, North Korea, Mexico, and AustriaChina, India, Madagascar, Mexico, and BrazilSri LankaInternational market valueLowGoodVery highMarket price/ton300–500 USD500–3,000 USD3,000–6,000 USDMorphologyFine granular structureFlaky, flat, plate-like particlesMassive fibrous aggregatesBy accident, Edward G. Acheson created synthetic graphite by heating carborundum (SiC) at a high temperature, which produced extremely pure graphite [12]. Graphite can currently be produced in a variety of ways, depending on the purity required.More than 800 million tonnes of viable graphite are thought to be in global reserves. However, the estimated global graphite reserves are only 300 million tonnes. Turkey (30%), Brazil (24%), China (24.34%), Mozambique (5.67%), Tanzania (5.67%), and India (2.67%) make up the world’s graphite reserves [14]. Other significant countries that produce graphite are Sri Lanka, Canada, Europe, Mexico, and the United States [4]. In 2017, 1.03 million tonnes of graphite were produced, with 88% of that production coming from China, 8% from Brazil, and 3% from India, according to the US Geological Survey. The top nations looking into new natural supply sources are Canada and Brazil [6, 14].Natural graphite is used in lubricants, brake linings, batteries, steel production, foundries, and refractories [3, 8, 15]. In addition to its particle size, the proportion of fixed carbon in graphite is extremely important for a variety of graphite uses (Table 2). While synthetic graphite is utilized as a neutron moderator, radar adsorbent materials, electrodes, graphite powder, and other things [3, 16], there are numerous ongoing research projects to investigate novel uses for graphite. A few of them include preventing fires, cleaning up oil spills, and removing arsenic from water, among others [17]. It is clear from Table 2 that the majority of businesses use natural graphite demand purity levels of at least 90%. Since lump graphite with a purity of >90% is uncommonly abundant, beneficiating lower-quality graphite to the desired grade (typically >90%) becomes imperative. Due to the presence of mineral acids in them, current conventional procedures like acid leaching are detrimental to the environment [4]. Hydrochloric acid is the only mineral acid that is safe, but it is ineffective for acid leaching [4, 18]. Due to innovative uses for the material and possible high future output, there will be an increase in the beneficiation of low-quality graphite ore reserves around the world. Therefore, methods of beneficiation that does not hurt the environment will be very important for the graphite industry to grow.Table 2 Specifications and uses of graphite [14]. End productsPercentage of graphite usedThe quality of the graphite usedFixed carbon (F.C.) 
(%)Size (micron)Mag-carb refractories1287–90150–710Alumina-carb (graphitised) alumina refractories8–1085 min150–500Clay-bonded crucibles60–65∼80149–841Expanded (or flexible) graphite foils and products-based thereon (e.g., sealing gaskets in refineries, fuel pumps, and automobiles)10090 min. (Preferably + 99)250–1800Pencil50–60+95–9850 maxBrake-linings1–1598 nub,75 maxFoundry—40–7053–75Batteries(a) Dry cells—88 min75 max(b) Alkaline—98 min5–75Brushes—Usually 99Usually less than 53Lubricants—98–9953–106Sintered products (e.g., clog wheels)—98-995PaintUp to 7550–5575% nubAmorphous powder flakeBraid used for sealing (e.g., on a ship)40–5095 min—Graphitized grease (used in seamless steel tube manufacturing)—+9938 maxColloidal graphite10099.9Colloidal ## 2. Microwave Pretreatment It is necessary to separate or break down the impurities that are present in the ores. Microwave irradiation, first proposed by Walkiewicz in 1991, is a great, environmentally friendly way to heat the ore. That process results in isolated fractures at the intergranular and transgranular regions without triggering catastrophic failure, which results in cracks along the grain boundaries. This breaks down the impurities that are stuck in a “honeycomb” pattern [19]. According to Özbayoğlu et al. [20], the eliminated contaminants are primarily moisture, sulphur, and other volatile salts. For a low-grade graphite sample, effective impurity removal results in a marked rise in the fixed carbon content in the concentrates. That technique has important advantages like lowering the work index and reducing the wear and tear on the mill, mill liners, and grinding media [19, 21–23]. It has been discovered that such a method is now quite effective for the flotation of coal and ilmenite ores, so it can be expected that it has good potential for beneficiating graphite as well [24, 25]. ## 3. Grinding ### 3.1. Conventional Grinding Methods: Ball and Rod Milling The two most used methods for grinding graphite ore are ball and rod mills:A cylindrical shell that spins about its axis is used in ball milling. Graphite ore and chrome steel or stainless steel balls are placed inside the shell. An abrasion-resistant material, usually a manganese-steel alloy, is used to make the interior of the object. Grinding is mostly accomplished through the impact of the balls on the edges of the ore particle pile at the bottom. Secondarily, the ore is ground by the friction created by the slipping balls [26]. This results in size reduction by means of compression and shearing forces, which arrange the ore particles in an anisotropic manner [6, 27]. High-intensity milling, on the other hand, results in amorphous carbon, compromising flake size [28, 29]. Ball mills are easy to use and cheap to run [6, 30].A rod mill has a cylindrical body similar to a ball mill, except instead of balls, it uses rods. The rod system consumes 35–40% of the mill’s capacity [31]. Large particles are crushed to a smaller average particle size in a rod mill by being trapped between the rods and moving toward the discharge end of the rod. As a result, the size of the particles at the two ends of the rods varies and is dependent on the mill’s length, feed rate, and grinding speed [32]. A crucial aspect of this approach is its improved ability to grind big particles. Rod mills are chosen over other types of grinding whenever the ore is sticky. Compared to ball milling, rod milling generally does less harm to the size and shape of the flake. 
Due to the relatively smaller surface area of the rods, the rod mill uses more energy [31]. ### 3.2. Nonconventional Grinding and Regrinding: Stirred Milling, Jet Milling, Delamination, and Attrition Milling Traditional grinding methods like ball milling or rod milling do not take into account impurities that exist in the layered graphite [6]. In contrast to strong intralayer covalent bonding, the weak Van der Waals forces between the graphite layers are easily overcome [3, 33]. Such layered minerals produce an anisotropic structure upon grinding. Thus, abrasive forces must be applied instead of compressive or shearing forces [34] that can be achieved using a stirred mill, which consists of a cylindrical shell, stirrer, shear blade, and distribution disc. The stirred mill rotates the pulp while agitating the ore to grind it while minimizing impact force and preventing overgrinding. It was reported that stirred milling causes less damage to flake shape and size as compared to rod milling and ball milling because the grinding media or pulp rubs against each other due to various rotational speeds, resulting in an active force leading to the release of layers of the mineral [33]. Jet milling, on the other hand, uses high-impact stress to keep the ore’s flaky structure and nonuniform shape on large particles [35].Earlier, it was believed that graphite regrinding, which limited the maximum fixed carbon concentration to 95%, was useless because graphite coats gangue particles and makes them floatable [36]. The graphite middling delaminates when it is reground in a slow-speed ball mill employing a flint pebble grinding media. This method is superior to traditional milling for the preparation of flake graphite since it has little effect on the size and form of the graphite flakes. This method produced fixed carbon contents of up to 98% [37].Attrition milling can be utilized as the final stage of liberation by grinding. The attrition method is used to selectively separate tiny particles. This is a very effective method if a small particle size of graphite is required for its usage as a lubricant. Attrition can keep the flaky form and crystallinity of graphite while reducing 90% of a 150μm sample to 1 μm [38]. ## 3.1. Conventional Grinding Methods: Ball and Rod Milling The two most used methods for grinding graphite ore are ball and rod mills:A cylindrical shell that spins about its axis is used in ball milling. Graphite ore and chrome steel or stainless steel balls are placed inside the shell. An abrasion-resistant material, usually a manganese-steel alloy, is used to make the interior of the object. Grinding is mostly accomplished through the impact of the balls on the edges of the ore particle pile at the bottom. Secondarily, the ore is ground by the friction created by the slipping balls [26]. This results in size reduction by means of compression and shearing forces, which arrange the ore particles in an anisotropic manner [6, 27]. High-intensity milling, on the other hand, results in amorphous carbon, compromising flake size [28, 29]. Ball mills are easy to use and cheap to run [6, 30].A rod mill has a cylindrical body similar to a ball mill, except instead of balls, it uses rods. The rod system consumes 35–40% of the mill’s capacity [31]. Large particles are crushed to a smaller average particle size in a rod mill by being trapped between the rods and moving toward the discharge end of the rod. 
As a result, the size of the particles at the two ends of the rods varies and is dependent on the mill’s length, feed rate, and grinding speed [32]. A crucial aspect of this approach is its improved ability to grind big particles. Rod mills are chosen over other types of grinding whenever the ore is sticky. Compared to ball milling, rod milling generally does less harm to the size and shape of the flake. Due to the relatively smaller surface area of the rods, the rod mill uses more energy [31]. ## 3.2. Nonconventional Grinding and Regrinding: Stirred Milling, Jet Milling, Delamination, and Attrition Milling Traditional grinding methods like ball milling or rod milling do not take into account impurities that exist in the layered graphite [6]. In contrast to strong intralayer covalent bonding, the weak Van der Waals forces between the graphite layers are easily overcome [3, 33]. Such layered minerals produce an anisotropic structure upon grinding. Thus, abrasive forces must be applied instead of compressive or shearing forces [34] that can be achieved using a stirred mill, which consists of a cylindrical shell, stirrer, shear blade, and distribution disc. The stirred mill rotates the pulp while agitating the ore to grind it while minimizing impact force and preventing overgrinding. It was reported that stirred milling causes less damage to flake shape and size as compared to rod milling and ball milling because the grinding media or pulp rubs against each other due to various rotational speeds, resulting in an active force leading to the release of layers of the mineral [33]. Jet milling, on the other hand, uses high-impact stress to keep the ore’s flaky structure and nonuniform shape on large particles [35].Earlier, it was believed that graphite regrinding, which limited the maximum fixed carbon concentration to 95%, was useless because graphite coats gangue particles and makes them floatable [36]. The graphite middling delaminates when it is reground in a slow-speed ball mill employing a flint pebble grinding media. This method is superior to traditional milling for the preparation of flake graphite since it has little effect on the size and form of the graphite flakes. This method produced fixed carbon contents of up to 98% [37].Attrition milling can be utilized as the final stage of liberation by grinding. The attrition method is used to selectively separate tiny particles. This is a very effective method if a small particle size of graphite is required for its usage as a lubricant. Attrition can keep the flaky form and crystallinity of graphite while reducing 90% of a 150μm sample to 1 μm [38]. ## 4. Gravity Separation This technique is based on the fundamental idea of the disparity between the specific gravities of the ore and contaminants. The simplicity of the process, cost-effectiveness, and environmental friendliness are this method’s defining characteristics [39]. It has been reported that when fine graphite particles (less than 150 μm) are passed through a hydraulic classifier, they consume 34% less acid in the leaching process.Due to the technique’s increased environmental friendliness, it is thought to be even more environmentally beneficial than traditional froth flotation [40, 41]. Gravity separation can also be used if it is necessary to concentrate a specific component from the tailings [6]. ## 5. Froth Flotation The separation of hydrophobic minerals from hydrophilic contaminants is an important application of this technique. 
It is a physiochemical technique that August and Adolph carried out for graphite for the first time in 1877 [40, 42]. This concentrates them by taking advantage of the hydrophobic properties of graphite and related ores. The studies indicated that when compared to other concentration processes, the froth flotation process required three to four times fewer samples to produce the same amount of concentrated ore. When compared to alternative techniques, it can also dramatically lower the expense of tailing management and treatment [43]. Depending on the application, two alternative types of flotation methods are used for graphite on an industrial scale: mechanical and column flotation. ### 5.1. Column Flotation This occurs in a cylindrical flotation column. With the aid of a bubble generator, gas bubbles are introduced from the bottom of the column while the slurry is fed from the top. Gas particles disperse into the slurry as a result of a concentration differential. A counter-current laminar flow results from this. As a result of this action, there is a high relative velocity and an increased likelihood of a collision [44]. The froth zone, collection zone, and scavenger cyclone zone are the three zones that form in the flotation column, as shown in Figure 1. When water (a diluent) is added, froth with hydrophobic pure graphite particles forms at the top of the column [45]. The froth selectivity increases with the froth height. The improved ore and wash water mixing caused by the closer proximity of the wash water inlet to the pulp-froth interface is what accounts for the increased selectivity [46]. The froth zone has an air holdup (volume of liquid displaced by air) of about 80% [44].Figure 1 Schematic diagram of flotation column.Superficial gas velocity is one of the most significant variables in the column flotation process. Low gas holdup due to a lower gas velocity of the flow shows a negative impact on the grade and recovery of the ore. The reduced turbulence caused by column flotation, in addition to its low operating cost, is a significant benefit. The energy that the turbulence contributes leads to the unwanted dissociation of the mineral particle from the bubble. Negative bias is used to keep the flow of tailings lower than the feed flow to recover more flaky graphite [44, 47, 48]. The bubbles in column flotation have a large lower limit in terms of diameter. As a result, microparticles find it challenging to collide with the froth. The water’s streamlines allow the fine particles to move. The float recovery is decreased in this way, but only for fine and ultrafine particles [48, 49]. ### 5.2. Flotation by Mechanical Cell This process is used to beneficiate mineral particles that are difficult to remove. These are the tiny particles of graphite. Figure2 shows an example of flotation cell, as reported by Kuan [50]. In mechanical flotation cells, froth is produced by agitation with a rotating impeller. The high-speed impellers release air bubbles into the system [5]. The impeller rotation speed is a crucial process parameter. Studies show that controlling bubble-particle adhesion depends on this component [51]. The relative velocity of bubble and particles close to the impeller only matters in this sort of cell. As a result, there are fewer collisions.Figure 2 A representation of flotation cell.In addition, the residence period is shorter than the overall amount of time a bubble spends in the pulp. For these reasons, less concentration of graphite particles is caused. 
pH, promoter addition, feed size, impeller speed, viscosity, and collector dose are the major factors [44, 52, 53]. The lack of spargers in mechanical cells gives them a significant advantage over flotation columns. In flotation columns, spargers are employed to create bubbles. They need constant upkeep because they are vulnerable to particle obstruction, degradation, and frequent malfunctions. Another benefit of mechanical cells is their ease of use and lower cost for small-scale applications [54]. ### 5.3. Reagents Used in Graphite Flotation To create a pulp environment that is favourable for separating undesirable gangue particles from valuable minerals, flotation reagents are used [55]. The qualities of the reagents, such as collectors, frothers, and depressants, which are crucial to the effectiveness of the flotation process of concentration, include mineral hydrophobicity, bubble size, contact angle, bubble formation, and particle adherence. These properties are discussed below in detail. #### 5.3.1. Frother Nonionic heteropolar compounds known as “frothers” can stabilise froth and selectively restrict the entrainment of gangue particles into it. They promote greater bubble formation when disseminated. While the hydrophobic end of the frother selectively adsorbs on air, the polar end forms a hydrogen bond with water. This reduces the water’s surface tension and improves foam stability [5]. Figure 3 shows a schematic diagram for the interaction of frother with graphite and the hydrophilic and hydrophobic parts [5]. Only frother-acting surfactants are used when there is no oxidation in the graphite ore in any shape or form. Alcohols, alkoxy paraffin, polyglycols, and polyglycol ethers are some of the frothers that are easy to buy [56].Figure 3 Interaction of frother with graphite and hydrophilic and hydrophobic parts.When frothers are added, graphite that has little to no contamination floats with ease. Methyl isobutyl carbinol (MIBC) and fuel oil are frequently used frothers [37]. Isoctanol, pine oil, MIBC, and tri (propylene glycol) butyl ether were the four frothers that were compared to each other to provide better recovery and grade on graphite flotation. The fixed carbon content of the recovered graphite is seen to decrease as the frother dosage is increased. According to Öney and Samanli [57], MIBC was the best of the four blenders.For example, pine oil and Dowfroth are better alternatives to MIBC because they are safer for the environment [58]. A popular frother is made of pine oil, which is obtained from pine stumps or turpentine [4]. However, due to its irritating properties, which are indicated in Table 3, the use of pine oil is gradually decreasing and frequently prohibited. In addition, when employed as a frother, coal oil is more discerning than fuel oil [2].Table 3 Safety risk ratings on flotation frothers [58]. ReagentsFlash point °CRisk ratingExposure riskRisk ratingEnvironmental riskRisk ratingTotal risk ratingAliphatic alcoholMIBC395Odour irritant3Minimal210Cyclic alcoholsPine oil C10H18O783Irritant2Minimal27Aromatic alcoholsCresylic acid CH3C6H4OH813Irritant2Harmful49Alkoxy-type1,1,3-Triethoxy butane C10H22O3803Irritant2Nonhazardous27Polyglycol-typeDowfroth 2001951Minimal1Minimal24For graphite flotation, the MIBC, pine oil, cresylic acid, TEB, and Dowfroth 200 frothers were examined and compared [56]. These are compared for risk-rated attributes such as flash point, exposure, and environmental risks, as seen in Table 3. 
According to the information given in Table 3, Dowfroth 200 is one of the safest frothers.A numeric term called “hydrophilic-lipophilic balance” (HLB) comes into play as the frothing ability of a frother is considered. This indicates the ratio of hydrophilic to lipophilic groups present in the frother. Dowfroth 400, Nasfroth AHE, Dowfroth 200, Nasfroth 301, and MIBC were evaluated as frothers for graphite with HLB values of 9.9, 8.2, 8, 6.5, and 6.1, respectively. The flotation performance increases in the order of Nasfroth AHE < MIBC < Dowfroth 400 < Dowfroth 200 < Nasfroth 301. Low HLB and low molecular weight of the frother increase the flotation yield [59].The polar interactions that result from the wettability of hydrophobic solids (like graphite) with polar adsorbates are detrimental to the flotation process because they reduce the adherence of the mineral to the air bubbles. It is important to develop a technique that takes into account the polar interactions between substances that have previously been adsorbed and those in solution. Jańczuk et al. [59] list as variables the type of frother, the concentration, the surface tension of the solvent, and the contact angle. #### 5.3.2. Collector The compounds that have a polar and a nonpolar group joined together are called collectors. The primary purpose of the collector is to make the mineral’s surface hydrophobic to improve the mineral’s capacity to float [37, 55]. Since graphite is a hydrophobic mineral, it should float on the water’s surface by nature [60]. But contamination of the graphite surface in an oxidising environment, whether from simple oxidation, nitration, or the presence of other hydrophilic impurities, causes the deposition of excess charges on the surface, similar to those of a hydrophilic solid, and this calls for the addition of a collector, as shown in Figure 4. The hydrophobic layer of the nonpolar surfactant that serves as the collector is applied to this polluted hydrophilic surface [56]. In the flotation of graphite, hydrocarbons such as paraffin, diesel, and kerosene are used, as well as ionic collectors such as potassium amyl (or ethyl) xanthates, dithiocarbamates, and dithiophosphate [57, 61, 62]. Typically, hydrophobic materials like graphite are floated using nonpolar collectors such as kerosene, diesel, and fuel oil [53, 55, 61, 63, 64]. When the collector properties of diesel, n-dodecane, and kerosene oil were compared, diesel produced the best outcomes. The performance of dodecane and kerosene is dose dependent. Kerosene was discovered to be more efficient at low collector dosages [57]. In aqueous media, diesel and kerosene are less soluble, and their emulsification improves flotation performance. Hexanol and octanol were used as coemulsifiers in this experiment. Diesel-hexanol systems are more efficient than diesel-octanol systems because diesel is more evenly dispersed in hexanol than in octanol, which leads to smaller collector droplet sizes and more collisions between mineral particles and the coemulsified collector, improving recovery [65]. 
In contrast to the diesel-pine oil system, which fixed only 90% of the carbon, the IBM/07 mixture of different hydrocarbons and terpenes, used as a collector, fixed 96% of the carbon [66].

Figure 4 Polar ends of collector interaction on the surface of graphite ore particle.

The Greenness Index, an evaluation tool based on safety data sheet (SDS) information, assesses the reagents on the parameters of health impact, general properties, odor, fire safety, and stability. It offers useful guidance for making the most sustainable reagent choice [67]. Most commercially available surfactants are chemically synthesised; they are toxic, do not break down, and may generate harmful by-products [68]. It has been discovered that using a single reagent, such as the alcohol-ether-based collector "Sokem 705C," is more environmentally friendly, more effective, and more economical than using a dual reagent system like the diesel-pine oil system [64]. Glycolipids, carbohydrate-attached lipids generated from amino acids, carbohydrates, and vegetable oils, have the potential to replace nonrenewable petroleum-based products like diesel and kerosene. One group of researchers, for example, replaced oleic acid with soyabean oil as the collector and achieved comparable results [69]. In addition, biosurfactants are molecules with a polar and a nonpolar group linked together that are biologically manufactured by microorganisms and are capable of producing outcomes similar to those of synthetic surfactants [68].

A secondary source of graphite is the depleted Li-ion battery. Lithium carbonate and graphite, which are extracted via a unique method termed "grinding flotation," are the two valuables that result after purification. As a flotation collector, decane, dodecane, and Fenton reagent (a combination of ferrous sulphate and hydrogen peroxide) can be employed [9, 70]. If lithium is recovered chemically from the lithium carbonate, it can also bring additional economic benefit.

When microflotation of amorphous graphite was carried out with different drop sizes of the collector, it was observed that the rate constant of the process and the recovery increased as the droplet size decreased. This behaviour is explained by the principle that the smaller the droplet size of the collector, the more surface area is available to be exposed to the hydrophobic part. Thus, the collisions of graphite particles with the bubbles become faster and more numerous, and the bubble strength is also found to increase [4, 71, 72]. A smaller droplet size also reduces the contact surface area required, resulting in less collector consumption and hence lower reagent costs.

#### 5.3.3. Depressant

The chemical compounds known as depressants, often referred to as inhibitors, specifically block the flotation of other minerals while not affecting the flotation of the desired mineral. In general, there are two types of depressants: organic and inorganic [17]. Graphite is depressed using inorganic substances such as sodium silicate, sodium cyanide, lime, and sodium sulphite [53, 55, 64]. These use electrostatic interactions to keep the foam stable [73]. Sodium silicate is utilized to depress siliceous gangue particles, whereas sodium cyanide is used to depress pyrite. Dextrin, starch, tannic acid, and carboxymethyl cellulose are examples of organic compounds that also possess depressant-like characteristics [6, 55, 73–75]. When used with hydrophobic minerals like graphite, they perform incredibly well.
They use steric effects to stabilise the pulp suspension [73].
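As a compact recap of this section, the following Python sketch groups the reagent roles discussed above with representative reagents named in the text. It is only an illustrative data structure: actual reagent selection and dosages must be established by flotation test work for the specific ore, so dosages are deliberately omitted.

```python
# Illustrative grouping of flotation reagent roles for graphite, using only
# reagents named in Section 5.3; dosages are ore-specific and intentionally omitted.
reagent_scheme = {
    "frother":    ["MIBC", "pine oil", "Dowfroth 200", "Nasfroth 301"],           # stabilise froth, aid bubble formation
    "collector":  ["kerosene", "diesel", "Sokem 705C", "soyabean oil"],            # render the contaminated graphite surface hydrophobic
    "depressant": ["sodium silicate", "sodium cyanide", "starch", "dextrin"],      # suppress siliceous gangue, pyrite, and other gangue
}

for role, examples in reagent_scheme.items():
    print(f"{role}: {', '.join(examples)}")
```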
## 6. Electroflotation

Because conventional flotation methods cannot recover a considerable proportion of fine and ultrafine graphite particles, owing to poor bubble-particle collision efficiency, reduced recovery is observed in low-grade graphite ores [49, 76]. This lowers the process's net profitability in large-scale operations. In electroflotation, microbubbles are produced by the electrolysis of water, during which hydrogen and oxygen are released at the cathodic and anodic surfaces, respectively [49, 77, 78]. The smaller bubbles trigger flocculation, which yields a greater "apparent particle size" [49, 79, 80]. This speeds up bubble-particle adhesion. The hydrogen and oxygen bubbles formed are more active toward hydrophobic materials than the air bubbles created by conventional approaches, so this method claims higher efficiency and more control over the bubble flow. Consequently, electroflotation can be a very successful method for the flotation of ultrafine graphite.

## 7. Ultrasound-Assisted Flotation

High-purity flake graphite has a high economic value in the market. Because trapped impurities are transported with the graphite into the froth during the traditional flotation process, obtaining high-purity flake graphite is quite challenging [81–83]. The maximum purity of graphite that can be produced using conventional flotation is about 95%, although this also depends on parameters including the ore's grade, the number of flotation stages, and the percentage crystallinity [4].
Ultrasound treatment is applied in three stages (rougher, cleaner, and recleaner) and relies on the difference in strength between the attachment of locked impurities and the graphite flake structure. In contrast to traditional grinding, it selectively removes the trapped impurities while causing relatively little harm to the flake size and shape. Consequently, it promises to be a useful industrial tool [84].

## 8. Air Elutriation

Compared to amorphous graphite, flake graphite has a much higher market value (Table 1). However, the graphite produced by the froth flotation method is a blend of flake and amorphous graphite. For the greatest economic potential, amorphous graphite should be separated from flake graphite, and air elutriation is employed for this purpose. Figure 5 shows the air elutriation mechanism for coarse particles [85]. In the large-scale configuration, blowers create an upward air flow at the base of vertical pipes. The mixed graphite feed is added at roughly one-third of the pipe length from the bottom. The air flow velocity is restricted to between 0.9 and 4.6 m/s. The amorphous graphite with any remaining impurities falls to the bottom of the pipe, while pure graphite flake rises to the top [6, 86].

Figure 5 Air elutriation mechanism of coarse particles.

## 9. Flushing Process

For high-quality requirements, the amorphous graphite obtained as the bottom product of air elutriation can be further cleaned of other adhering contaminants by a flushing procedure. In this process, the graphite particles are passed through a kerosene-based oil phase before being transferred to an aqueous suspension using sodium carbonate as a surfactant. High-speed centrifugation, which generates small oil droplets, removes the fine clay and ash contaminants [6, 87].

## 10. Techno Economics

Cost and energy requirements of beneficiation by flotation: for the flotation of graphite ore, two main costs are involved, the cost of the ore material and the cost incurred in the beneficiation process. The beneficiation costs include plant maintenance, consumables, electricity supplies, utilities, flotation reagents (collector, frother, and depressant), and skilled and unskilled labour. As per the prefeasibility report of the Bainibasa Graphite Mining and Beneficiation Project, Orissa, the feed throughput of crude graphite ore is 13,272 TPA, which is processed to obtain 841 TPA of clean graphite with fixed carbon (FC) contents of 85% and 65%. The total water requirement is 130 kilolitre/day. The cost of the ore material is between 11,141 and 14,766 INR/ton of the finished product for the first five years after the flotation plant installation, while the beneficiation cost is between 10,467 and 11,266 INR/ton of the finished product over the same period, so the total amount spent on processing the crude ore to the desired purified ore is between 21,608 and 26,032 INR/ton of the finished product.

### 10.1. Energy Requirement

The operational capacity of the beneficiation plant is 30 TPH. The hours of plant operation are thus 13,272 TPA / 30 TPH = 442.4 hrs/annum ≈ 443 hrs/annum.
The power supply is 400 kVA, taken as 400 kW (400 kJ/s). The energy requirement of the beneficiation plant alone is (power of the supply) × (hours of operation of the beneficiation plant) = 400 kJ/s × 443 hrs × 60 min/hr × 60 s/min = 637,920,000 kJ = 637,920 MJ.

## 11. Comparative Assessment of Various Beneficiation Techniques

Merits and demerits of different beneficiation techniques are summarized in Table 4.

Table 4 Merits and demerits of different beneficiation techniques.

| Beneficiation technique | Merits | Demerits |
|---|---|---|
| Microwave pretreatment | (i) Selective and easy heating (ii) Energy efficient (iii) Environment-friendly output | (i) Low absorption of microwaves (ii) Overall high energy requirement (iii) Longer processing time |
| Comminution | (i) Enhanced surface area (ii) Modifies particle-size distribution as per industry requirements | (i) Very low gangue elimination (ii) Can alter and damage particle shape (iii) Further processing is required |
| Flotation | (i) Cost effective (ii) Suitable for hydrophobic material (iii) Specific gangue removal is possible (iv) High-purity output obtained, up to 98% | (i) Multiple rougher/cleaner stages required (ii) Use of chemicals which might be hazardous |
| Gravity separation | (i) Inexpensive (ii) Chemical free (iii) Low heating involved | (i) Less extent of separation (ii) High capital investments (iii) Large space requirement |
| Air elutriation | (i) Easy segregation by varying air velocities (ii) Simple and easy maintenance | (i) Feed must have low moisture (ii) Low particle size of input feed is unfavourable |
| Electroflotation | (i) Higher efficiency (ii) Faster bubble-particle interaction compared to air bubble-particle interaction | (i) Complex process (ii) High electric supply required |
| Ultrasound-assisted flotation | (i) High-purity product obtained (ii) Releases entrapped impurities | (i) Expensive (ii) Complex (iii) Less research done |

Comminution only reduces the particle size and hence changes the particle-size distribution. Its benefits are that the ore particles can be brought into the size range demanded by industry and that the surface area is enhanced, improving the impact of downstream processes. This method eliminates gangue particles only to a very limited extent, so further processing is recommended to meet the high-end purity standards for industrial uses. Microwave irradiation facilitates easy and selective heating, is more energy efficient, and makes the output feed environmentally more benign because sulphurous and nitrous constituents are driven off in gaseous form; however, these released gases are harmful and toxic. Another major issue with microwave treatment of graphite is the low absorption of microwave radiation by graphite ore. Also, the overall energy requirement is high and a longer treatment time is required. This method is a pretreatment process suggested before comminution or chemical treatment methods. Gravity separation is a cheap pretreatment method with a limited ability to reduce gangue. Its advantages are that no heating and no chemicals are involved. However, the space requirements for the gravity separator are too high.
Despite low operating costs, huge capital investments are required to employ a physical method of beneficiation.

The most widely adopted beneficiation technique is flotation. It is highly cost effective and well suited to the natural hydrophobicity of graphite, and it can reduce the gangue in the ore to a large extent. Specific gangue removal becomes easier, depending on the type and nature of the gangue particles present in the crude graphite ore, with the help of the depressants and conditioning reagents used. However, it requires multiple rougher/cleaner steps and a few regrinding steps to obtain a high-purity end product in the desirable size range. Also, the use of chemicals is discouraged, as it affects safety standards and releases residual pollutants which require further treatment. Compared with conventional flotation, electroflotation claims higher efficiency because hydrogen and oxygen bubbles are more active toward hydrophobic material than air bubbles, so bubble-particle adherence is faster and separation of ore from gangue is better. It is specifically more beneficial for the extraction of ultrafine graphite from low-grade graphite ores. However, the process is complex and requires a high electric power supply. Ultrasound-assisted flotation aims to release entrapped impurities and is beneficial if the objective is to produce a very high-purity end product with the flaky nature of graphite preserved.

Air elutriation helps segregate the graphite ore into different size ranges by tweaking the air velocities. Cleaning and maintenance are simple, and it outperforms traditional sieving techniques. However, wet feed is difficult to handle, and too low a particle size of the input feed cannot be processed.

## 12. Conclusions and Future Directions

### 12.1. Pretreatment Techniques

Pretreatment techniques for graphite flotation include "comminution" and "microwave pretreatment." They disintegrate the contaminants that are locked in place to enable more effective grinding, and with the help of comminution, the product particles can be brought to the desirable size range. The most effective grinding processes are stirred milling and jet milling. For the treatment of middlings, a moderate-speed ball mill and a flint pebble grinding medium are suggested. Delamination, when used as a way to regrind, can produce coarse flake graphite with up to 98% fixed carbon. Attrition milling has been found to work well for generating smaller graphite flakes while causing the least degradation of form and shape. Relatively little research has been carried out on microwave irradiation, even though it has the ability to eliminate nitrous and sulphurous gangue constituents.

### 12.2. Froth Flotation and Flotation Reagents

Froth flotation is one of the most economical, energy-efficient, and reliable beneficiation techniques for graphite. The studies indicate that froth flotation, which outperforms acid leaching when environmental considerations are kept in mind, can produce graphite samples with high purity and a fixed carbon content of up to 98%. Column flotation outperforms mechanical cell flotation when several cleaning stages are involved. When selecting the reagents, special attention must be paid to the "Greenness Index." Ethers and polyglycols, such as Nasfroth 301 and Dowfroth 200, have been determined to represent the least environmental risk and to be extremely safe to handle.
In addition, they have a higher flash point (195°C in the case of Dowfroth 200) and perform better than conventional industrial frothers like MIBC and pine oil. Despite pine oil's lower flash point, Nasfroth 301 is superior to Dowfroth 200 according to HLB values. Kerosene and diesel are hazardous to the environment, and hence their use is discouraged. A more effective collector reagent is Sokem 705C, an alcohol-ether mixture that is both environmentally friendly and economically viable. It has also been shown that IBM/07, a mix of terpenes and hydrocarbons, yields up to 96% fixed carbon and can be used as a collector in the graphite processing industry. Sodium silicate and sodium cyanide are both acceptable inorganic depressants for siliceous gangue particles and pyrite, respectively. Organic polymers like starch are recommended to stabilise the pulp because of their steric effect. New research into environmentally benign flotation reagents points to vegetable oils (such as soyabean oil) and biosurfactants produced by microorganisms as potential substitutes for present-day collectors.

### 12.3. Other Beneficiation Techniques

Air elutriation and flushing can be performed sequentially after the froth flotation procedure to obtain flake and amorphous graphite of the best quality, suitable for high market value. Electroflotation is suggested as a means of enhancing fine and ultrafine particle recovery, which is difficult to achieve by conventional flotation. To meet the increased demand for flaky graphite, ultrasound-assisted flotation is a promising tool, as it preserves the flaky shape and form, unlike conventional grinding. One or more beneficiation techniques can be employed to obtain the desired benefits of the different technologies.
Despite the pine oil’s lower flash point, Nasfroth 301 is superior to Dowfroth 200 according to HLB values.Kerosene and diesel are hazardous to the environment, and hence their use is discouraged. A more effective collector reagent is Sokem 705C, an alcohol-ether mixture that is both environmentally friendly and economically viable. It has also been shown that IBM/07, a mix of terpenes and hydrocarbons, yield up to 96% fixed carbon and can be used as a collector in the graphite processing industry.Sodium silicate and sodium cyanide are both acceptable inorganic depressants for siliceous gangue particles and pyrite, respectively. Organic polymers like starch are recommended to stabilise the pulp because of their steric effect.New research for environmentally benign flotation reagents is vegetable oil (like soyabean oil) and biosurfactants produced by microorganisms which can be potential substitute for present day collectors. ## 12.3. Other Beneficiation Techniques Air elutriation and flushing can be performed sequentially after the froth flotation procedure to obtain flake and amorphous graphite of the best quality, suitable for high market value. Electroflotation is suggested as a means of enhancing fine and ultrafine particle recovery which is difficult to be carried out by conventional flotation. To meet the increased demands of flaky graphite ultrasound-assisted flotation is a promising tool as it preserves the flaky shape and form unlike conventional grinding. One or more beneficiation techniques can be employed to get the desired benefits of different technologies. --- *Source: 1007689-2023-03-21.xml*
2023
# Analytical Method for Capped Pile–Soil Interaction considering the Load Action of Soil under the Pile Cap

**Authors:** Shilin Luo; Mingquan Liu; Jianqing Jiang; Ailifeila Aierken; Jin Chang; Xuewen Zhang; Rui Zhang
**Journal:** Geofluids (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1007731

---

## Abstract

To study the pile–soil interaction mechanism of the capped pile, an analytical method considering the load action of soil under the pile cap is proposed. The shearing displacement method is used to derive the lateral friction of the pile body under the pile top load, and the Boussinesq solution is used to derive the lateral friction of the pile body considering the load of soil under the cap. The theoretical expressions of the axial force and the load-settlement curve are also obtained by establishing and solving the equilibrium differential equation of the pile body. Comparison of the calculation results with those for the ordinary pile indicates that the soil load under the pile cap reduces the lateral friction value; the influenced depth is about four times the load action radius. The axial force and the load-settlement curve are verified against case history data. The results show that the computed data agree well with the measured data. The proposed method can direct the design of capped pile composite foundations.

---

## Body

## 1. Introduction

The capped pile is a type of reinforcement body in a foundation; it is formed by placing a pile cap of a certain size on top of an ordinary pile body. The pile body is usually a prestressed pipe pile or a solid concrete pile. The pile cap is normally a round or square concrete slab. Capped piles have been widely used in soft soil treatment [1–3]. The original purpose of setting a cap on the pile top is to reduce the pile top penetration into the upper cushion and regulate the load sharing ratio of the pile and soil [4, 5]. With increasing engineering applications, pile caps have been found to raise the bearing capacity of piles and influence the working behavior of composite foundations, which many scholars have further researched [6–10]. The research methods include field tests [11], model tests [12–16], experimental tests [17–19], and numerical simulation approaches [20–29].

When the pile settles under a vertical load, there is a displacement difference between the pile body and the soil for the ordinary pile, while there is no such displacement difference between the pile body and the soil of the capped pile due to the coordination of the cap. This leads to the difference in pile body–soil interaction between the capped pile and the ordinary pile. Shang et al. analyzed the interaction of a flexible subsoil matrix and a piled box (raft) using the Geddes stress factor [30]. The results show that the method is reliable. Cui et al. studied the response of pile groups in layered soil under the assumption that the pile–cap–soil interaction is linear elastic [31]. The study used the shearing displacement method, and the computed load-settlement curves were consistent with but beneath the measured ones. Jin-bo and Cong-xin analyzed pile–soil interaction with a stiffness matrix method based on the differential displacement of pile and soil. The pile lateral friction and pile tip load were assumed to be proportional to the respective displacements [32]. Wang et al.
assumed that the distribution of skin friction on a capped pile can be simplified as two force triangles; by combining the Mindlin-Geddes and Boussinesq solutions, the equation of the additional side stresses for a single capped pile foundation was derived [33]. Chen et al. used a slider displacement method to consider the overall bearing capacity of capped piles. In their study, the Prandtl shear slip failure model is used in the bearing capacity analysis of the upper foundation soil, and the load transfer method is used to analyze the bearing capacity of the lower pile body. This method is suitable for short, rigid piles with large pile caps under the condition of a smooth soil interface with the pile cap [34].

To sum up, the pile–soil interaction is often considered in the settlement calculation of composite foundations. Compared with the large number of engineering applications, theoretical research on capped pile–soil interaction is not extensive.

In this paper, we propose an analytical method to further study the pile–soil interaction mechanisms of capped piles with consideration of the load action of the soil under the cap. The upper load is transmitted to the pile body and the soil under the cap through the pile cap. The lateral frictional resistance of the capped pile is the superposition of the friction caused by the load on the pile body and that caused by the load on the soil under the pile cap. By establishing the differential equilibrium equation of the pile body, the theoretical expressions of the pile axial force and the load-settlement curve are deduced. The feasibility of this method is verified by comparing the computed results with other methods and with field test data from a case history. We anticipate that the present research could help to better understand the pile–soil interaction mechanisms of capped piles, such as the relationship of load-settlement curves and the treatment of soft soil foundations, and then provide some practical experience and insight for the study of pile engineering.

## 2. Analysis Method of Pile–Soil Interaction

### 2.1. Analysis Model

When a vertical load acts on the top of a pile cap, the load is transmitted downward by the pile cap. Because the pile cap is directly in contact with the pile body and the soil, the upper load is transferred to the pile body and the soil under the cap simultaneously. Due to the different resistance stiffnesses of the pile body and the soil, each of them supports a different amount of load. Existing test results and finite element simulation results have confirmed that the load sharing phenomenon occurs throughout the pile body settlement process. The influence of the load on the soil under the pile cap cannot be ignored when analyzing the pile–soil interaction.
To appropriately simplify the analysis, the following assumptions are used: (1) the pile cap is rigid, without bending and shearing deformation; (2) the pile body is rigid, without radial deformation; (3) the load transmitted to the pile top is equivalent to a concentrated load, and the load transmitted to the soil under the cap is equivalent to a uniformly distributed load; (4) the stress in the foundation soil is calculated according to elastic theory; and (5) the pile tip stress is calculated using the Winkler foundation model.

The mechanical model of a pile with cap is established, as shown in Figure 1, where $R$ and $r$ are the radii of the pile cap and pile body, respectively, $q$ and $q_1$ are the uniform loads acting on the pile top and the soil under the cap, respectively, and $P_t$ is the equivalent concentrated load acting on the pile top. The relationship among these variables is

(1) $P_t + q_1 \times \pi (R^2 - r^2) = q \times \pi R^2$.

Figure 1 Mechanical model of capped pile. (a) Prototype of load. (b) Load distribution model.

According to Figure 1 and assumption (3), a pile–soil interaction analysis model (Figure 2) is established on the above basis, where $P_b$ is the equivalent concentrated load at the pile tip and $q_b$ is the uniform load at the bottom of the soil under the cap.

Figure 2 Pile–soil interaction analysis model.

When a concentrated load $P_t$ acts on the pile body alone, lateral friction appears on the pile side. When a uniform load $q_1$ acts on the soil under the cap alone, there is also friction on the pile side. When they act together, the above two states can be superimposed. Then, the lateral friction on the pile–soil interface can be calculated as the superposition of $\tau_p$ and $\tau_q$, which are derived from the concentrated load $P_t$ acting on the top of the pile and the uniform load $q_1$ acting on the top of the soil under the cap, respectively.

### 2.2. Solution for Lateral Friction

First, we study the case of the concentrated load $P_t$ acting on the top of a pile. Randolph and Wroth assumed that when a concentrated load acts on a single pile, the soil around the pile only produces annular shear displacement, which can be approximately treated as a plane problem [35]. Thus, a rectangular coordinate system is employed to establish the calculation schematic diagram (Figure 3), and the origin of the coordinate axes is located at the pile body center.

Figure 3 Calculation schematic diagram of $P_t$. (a) Load schematic. (b) Calculation schematic [35].

According to the shearing displacement method, if $l$ is the pile length, $A_p$ is the cross-sectional area of the pile body, $U_p$ is the pile perimeter, $E_p$ is the elastic modulus of the pile body, $G_s$ is the shear deformation modulus of the soil, $\nu$ is the Poisson ratio of the soil, $\omega_b$ is the displacement of the pile tip, and $r_m$ is the maximum influence radius, which can be taken as $0.5(1-\nu)l$, then the lateral frictional resistance of the pile body $\tau_p(z)$, the displacement of the pile body $\omega_p(z)$, the axial force of the pile body $N_P(z)$, and the equivalent concentrated load at the pile tip $P_b$ are, respectively,

(2) $\tau_p(z) = \dfrac{k_1}{U_p}\,\omega_p(z)$,

(3) $\omega_p(z) = \omega_b \cosh\left[\lambda(l-z)\right]$,

(4) $N_P(z) = P_b\left\{\dfrac{k_1}{k_2}\sinh\left[\lambda(l-z)\right] + \cosh\left[\lambda(l-z)\right]\right\}$,

(5) $P_b = k_2\,\omega_b$,

where $\lambda^2$ is the quotient of $k_1$ and $A_p E_p$, and $k_1$ is the shear stiffness between the pile and soil, calculated as

(6) $k_1 = \dfrac{2\pi G_s}{\ln\left(r_m/r\right)}$.

The compressive stiffness of the soil beneath the pile tip, $k_2$, is calculated as

(7) $k_2 = \dfrac{4 r G_s}{1-\nu}$.
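As a quick numerical illustration of Equations (6) and (7) and the definition of $\lambda$, the hedged Python sketch below evaluates the single-pile stiffness quantities; the pile and soil parameter values are assumed purely for illustration and are not taken from the paper.

```python
import math

# Illustrative (assumed) pile and soil parameters -- not values from the paper.
l = 20.0      # pile length (m)
r = 0.25      # pile radius (m)
Ep = 30e9     # elastic modulus of pile body (Pa)
Gs = 8e6      # shear deformation modulus of soil (Pa)
nu = 0.3      # Poisson ratio of soil

Ap = math.pi * r**2             # cross-sectional area of pile body
Up = 2.0 * math.pi * r          # pile perimeter
rm = 0.5 * (1.0 - nu) * l       # maximum influence radius, as defined above

k1 = 2.0 * math.pi * Gs / math.log(rm / r)   # Equation (6): pile-soil shear stiffness
k2 = 4.0 * r * Gs / (1.0 - nu)               # Equation (7): compressive stiffness below the pile tip
lam = math.sqrt(k1 / (Ap * Ep))              # lambda, from lambda^2 = k1 / (Ap * Ep)

print(f"k1 = {k1:.3e}, k2 = {k2:.3e}, lambda = {lam:.3e} 1/m")
```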
Then, we study the case of the uniform load $q_1$ acting on the soil under the cap. The same coordinate system is employed to establish the calculation schematic diagram (Figure 4). The pile–soil interface lies on the vertical plane at distance $r$ from the origin; $q_1$ acts over the range between $r$ and $R$, and the coordinates of an arbitrary underground point $P$ on the pile–soil interface are $(r, z)$. The load on the micro unit $dx$ is replaced by the linear load $q_1\,dx$ ($r \le x \le R$).

Figure 4 Calculation schematic of soil load $q_1$. (a) Load schematic. (b) Calculation schematic.

According to the plane stress solution [36], the additional shear stress $\tau_q(z)$ induced by the linear load $q_1$ at point $P$ at depth $z$ ($0 \le z \le l$) on the pile–soil interface is

(8) $\tau_q(z) = \dfrac{q_1}{\pi}\sin^2\theta$,

where $\theta$ is the angle between the line connecting point $P$ to the unit $dx$ and the pile–soil interface ($0 \le \theta \le \pi/2$).

Substituting the geometric relations in Figure 4(b) into Equation (8) produces

(9) $\tau_q(z) = \dfrac{q_1}{\pi}\cdot\dfrac{(R-r)^2}{z^2+(R-r)^2}$.

Finally, the lateral friction on the pile–soil interface $\tau(z)$ is the sum of $\tau_p(z)$ and $\tau_q(z)$:

(10) $\tau(z) = \tau_p(z) + \tau_q(z)$.

Considering the directions of $\tau_p(z)$ and $\tau_q(z)$ produces

(11) $\tau(z) = \dfrac{k_1}{U_p}\cosh\left[\lambda(l-z)\right]\omega_b - \dfrac{q_1}{\pi}\cdot\dfrac{(R-r)^2}{z^2+(R-r)^2}$.

### 2.3. Solutions for Axial Force and Settlement

With the expression for lateral friction obtained, take the pile body in Figure 2 as a calculation isolator (Figure 5).

Figure 5 Calculation isolator.

According to the force balance of the pile body, the equilibrium equation is established:

(12) $N(z) = P_t - \displaystyle\int_0^z U_p\,\tau(z)\,dz = P_t - \displaystyle\int_0^z U_p\left[\tau_p(z) + \tau_q(z)\right]dz$.

Solving the integral gives the axial force expression $N(z)$:

(13) $N(z) = N_P(z) + 2 r q_1\left[(R-r) - z\arctan\dfrac{R-r}{z}\right]$,

where $N_P(z)$ is the axial force of an ordinary pile body.

According to the elastic compression conditions of the pile body, the relationship between displacement and pile-side frictional resistance is expressed by Equation (14), and the relationship between axial force and displacement of the pile body is expressed by Equation (15):

(14) $\omega(z) = \dfrac{1}{A_p E_p}\displaystyle\int_l^z U_p\,\tau(z)\,dz$,

(15) $N(z) = A_p E_p\,\dfrac{d\omega(z)}{dz}$.

By substituting Equations (2), (3), and (9) into Equation (10), the lateral friction of the capped pile is obtained as

(16) $\tau(z) = \dfrac{P_t\,k_2\cosh\left[\lambda(l-z)\right]}{U_p\left[k_1\sinh(\lambda l) + k_2\cosh(\lambda l)\right]} - \dfrac{q_1 (R-r)^2}{\pi\left[z^2+(R-r)^2\right]}$.

By substituting Equation (11) into Equation (14), the settlement of the capped pile is calculated:

(17) $\omega(z) = \dfrac{P_t\cosh\left[\lambda(l-z)\right]}{k_1\sinh(\lambda l) + k_2\cosh(\lambda l)} + \dfrac{2 q_1}{\pi r E_p}(R-r)\left[\arctan\dfrac{l}{R-r} - \arctan\dfrac{z}{R-r}\right]$.

Then, substituting Equation (17) into Equation (15), the axial force of the capped pile is calculated:

(18) $N(z) = \dfrac{P_t\cosh\left[\lambda(l-z)\right]}{k_1\sinh\left[\lambda(l-z)\right] + k_2\cosh\left[\lambda(l-z)\right]} + 2 r q_1\left[(R-r) - z\arctan\dfrac{R-r}{z}\right]$.

When $z = 0$, the settlement of the pile top $s_t$ is equal to $\omega(0)$:

(19) $\omega(0) = s_t = \dfrac{P_t\cosh(\lambda l)}{k_1\sinh(\lambda l) + k_2\cosh(\lambda l)} + \dfrac{2 q_1}{\pi r E_p}(R-r)\arctan\dfrac{l}{R-r}$.

Equations (18) and (19) are the theoretical expressions of the axial force and the load-settlement curve of a capped pile, respectively. Based on the above analysis, the corresponding computational program is compiled to facilitate the comparison and verification of the calculation results.
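The sketch below is a minimal Python version of such a program, implementing Equations (16)–(19) exactly as reconstructed above. All pile, cap, soil, and load values are assumed for illustration only, and the grouping of terms follows the reconstruction rather than a verified transcription of the original typeset equations.

```python
import math

# --- Assumed illustrative parameters (not taken from the paper) ---
l, r, R = 20.0, 0.25, 0.60        # pile length (m), pile radius (m), cap radius (m)
Ep, Gs, nu = 30e9, 8e6, 0.3       # pile modulus (Pa), soil shear modulus (Pa), Poisson ratio
q = 100e3                         # uniform load on top of the cap (Pa)
Pt_share = 0.7                    # assumed fraction of the total cap load carried by the pile top

Ap, Up = math.pi * r**2, 2.0 * math.pi * r
rm = 0.5 * (1.0 - nu) * l
k1 = 2.0 * math.pi * Gs / math.log(rm / r)   # Equation (6)
k2 = 4.0 * r * Gs / (1.0 - nu)               # Equation (7)
lam = math.sqrt(k1 / (Ap * Ep))              # lambda^2 = k1 / (Ap * Ep)

# Split the total cap load between the pile top (Pt) and the annular soil (q1), Equation (1).
total_load = q * math.pi * R**2
Pt = Pt_share * total_load
q1 = (total_load - Pt) / (math.pi * (R**2 - r**2))

denom = k1 * math.sinh(lam * l) + k2 * math.cosh(lam * l)

def tau(z):
    """Equation (16): lateral friction of the capped pile at depth z."""
    return (Pt * k2 * math.cosh(lam * (l - z))) / (Up * denom) \
        - q1 * (R - r) ** 2 / (math.pi * (z ** 2 + (R - r) ** 2))

def settlement(z):
    """Equation (17): settlement of the capped pile at depth z (Equation (19) at z = 0)."""
    return Pt * math.cosh(lam * (l - z)) / denom \
        + 2.0 * q1 / (math.pi * r * Ep) * (R - r) * (math.atan(l / (R - r)) - math.atan(z / (R - r)))

def axial_force(z):
    """Equation (18): axial force of the capped pile at depth z."""
    d = k1 * math.sinh(lam * (l - z)) + k2 * math.cosh(lam * (l - z))
    z_term = z * math.atan((R - r) / z) if z > 0.0 else 0.0  # limit of z*arctan((R-r)/z) is 0 at z = 0
    return Pt * math.cosh(lam * (l - z)) / d + 2.0 * r * q1 * ((R - r) - z_term)

print(f"pile-top settlement s_t = {settlement(0.0):.4e}")
for z in (0.5, 5.0, 10.0, 20.0):
    print(f"z = {z:5.1f} m: tau = {tau(z):.4e}, N = {axial_force(z):.4e}, w = {settlement(z):.4e}")
```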
## 3. Comparison and Validation

A case history is introduced to validate the computed results of this method. The Su-Kun-Tai Expressway is located beside Yangcheng Lake in China, and the majority of the route is constructed on a deep soft-soil foundation. The distribution and physical parameters of the foundation soil layers are shown in Table 1.

Table 1 Distribution and physical parameters of soil layers.

| Soil layer | Depth (m) | Water content (%) | Bulk density (kN/m³) | Compression modulus (MPa) | Poisson ratio |
| --- | --- | --- | --- | --- | --- |
| Sandy silt | 0–1.2 | 30.9 | 19.2 | 5.83 | 0.3 |
| Silty clay | 1.2–1.7 | 33.9 | 18.4 | 3.47 | 0.3 |
| Clay | 1.7–8.9 | 38.3 | 18.1 | 2.62 | 0.3 |
| Muddy clay | 8.9–10.6 | 48.3 | 17.2 | 2.8 | 0.3 |
| Silty clay | 10.6–12.1 | 35.8 | 18 | 3.18 | 0.3 |
| Silty clay | 12.1–26.4 | 24 | 19.8 | 6.97 | 0.3 |
| Silty clay | 26.4–31.4 | 23.6 | 19.9 | 8.23 | 0.3 |

Capped piles were used to reduce settlement under the embankment load.
The pile body is a PTC-A400-65 concrete pipe pile with lengths of 25 m (T2) and 29 m (T4) in two different test sections. The caps are prefabricated 1.5 m square concrete slabs with a thickness of 40 cm. Full-scale tests of the capped piles were performed for the lateral friction, axial force, and bearing capacity of the T2 and T4 piles. The axial force was measured by stress gauges prefabricated at different depths on the steel bars in the pile body, and the soil pressure under the pile cap was measured by soil pressure boxes embedded under the cap. The bearing capacity of the capped piles was determined by load testing [11]. The measured data of the T2 and T4 piles are shown in Table 2 and are used to compare and validate the proposed method.

Table 2 Measured data of T2 and T4.

| Pile | Total load (kN) | Settlement (mm) | Load on pile top Pt (kN) | Uniform load on soil under the cap q1 (kN/m²) |
| --- | --- | --- | --- | --- |
| T2 | 400 | 2.0 | 363 | 17.6 |
| T2 | 600 | 2.9 | 545 | 25.9 |
| T2 | 800 | 4.8 | 731 | 32.6 |
| T2 | 1000 | 7.0 | 917 | 38.9 |
| T2 | 1200 | 8.7 | 1100 | 47.1 |
| T2 | 1400 | 10.4 | 1284 | 54.9 |
| T2 | 1600 | 14.6 | 1448 | 71.5 |
| T2 | 1800 | 35.1 | 1107 | 326.3 |
| T2 | 2000 | 56.9 | 1258 | 349.1 |
| T4 | 500 | 1.7 | 469 | 15.0 |
| T4 | 750 | 2.6 | 695 | 25.9 |
| T4 | 1000 | 4.3 | 929 | 33.6 |
| T4 | 1250 | 5.5 | 1156 | 44.0 |
| T4 | 1500 | 7.4 | 1381 | 55.9 |
| T4 | 1750 | 9.6 | 1600 | 71.0 |
| T4 | 2000 | 12.4 | 1810 | 89.1 |
| T4 | 2250 | 18.0 | 1978 | 128.5 |
| T4 | 2500 | 49.2 | 1840 | 310.8 |

### 3.1. Comparison

In order to compare the influence of the pile cap on the lateral friction, both the capped pile and an ordinary pile are considered. The influence of the load acting on the soil under the pile cap is included in the lateral friction analysis. The shearing displacement method [35] is adopted for calculating the ordinary pile, and the proposed method is adopted for calculating the capped pile. The lateral friction of the two types of piles is calculated under the same conditions, and the results are shown in Figure 6.

Figure 6 Comparison of lateral friction along pile length of T4.

The lateral friction curves of the two types of piles in Figure 6 show an obvious difference near the pile top. The lateral friction of the capped pile is less than that of the ordinary pile. The reason for this phenomenon is that the load on the soil under the pile cap (q1) reduces the relative deformation between the pile and the surrounding soil, so the lateral friction becomes smaller. The difference between them is largest at the pile top and decreases with depth, which indicates that the load q1 affects the lateral friction only within a certain depth. In this case, the equivalent action radius of q1 is 0.645 m, and the maximum influenced depth is 2.5 m, about 4 times the load action radius. Beyond this depth, the effect of q1 on the lateral friction is very small and can be ignored. In addition, the load level has a great influence on the lateral friction difference: the higher the load level, the greater the difference. For example, the difference is 8.5 kPa, 19.5 kPa, and 32.5 kPa at load levels of 500 kN, 1000 kN, and 1500 kN, respectively.

All of the above indicates that when the soil under the pile cap carries a load, the lateral friction is affected by both the load level and the action radius.

### 3.2. Validation

#### 3.2.1. Axial Force and Settlement

In order to verify the validity of this method, the calculated axial force and settlement are compared with the measured data. The parameters used in the calculation are shown in Table 3.

Table 3 Calculation parameters.

| Pile number | Pile length l (m) | Pile radius r (m) | Soil deformation modulus Es (MPa) | Pile deformation modulus Ep (MPa) | Soil shear modulus Gs (MPa) | Soil Poisson ratio ν |
| --- | --- | --- | --- | --- | --- | --- |
| T2 | 25 | 0.2 | 5.21 | 30000 | 2.21 | 0.3 |
| T4 | 29 | 0.2 | 5.45 | 30000 | 2.23 | 0.3 |

As seen from Figure 7, the variation of the axial force calculated by this method is consistent with the measured data.
As the load increases, the axial force of the pile body increases steadily in most stages. But when the load changes from 2000 kN to 2250 kN, the increase of the axial force at the pile tip is smaller. Under these two load levels the pile begins to settle faster, and the measured settlement data in Table 2 reflect this phenomenon clearly. This indicates that the soil under the pile tip began to yield and produce plastic deformation, so the increase of the axial force at the pile tip is small. At this time, the load increment carried by the pile body decreases, and the load on the soil under the pile cap (q1) increases correspondingly to balance the total load. When q1 increases, the lateral friction near the pile top decreases, which causes the axial force to increase. On the whole, the axial forces computed by this method are consistent with the experimental phenomena and the theoretical analysis.

Figure 7 Axial force curves of T4 along pile length. (a) Load 500 kN-1250 kN. (b) Load 1500 kN-2000 kN.

The calculated load-settlement curves are shown in Figure 8.

Figure 8 Load-settlement curves of T2 and T4.

Figure 8 demonstrates that the variation of the load-settlement curves computed by this method is consistent with that of the measured curves, but the calculated settlements are slightly less than the measured data. In the two cases (T2 and T4), the error in settlement increases when the soil is close to failure. The reason for this phenomenon is that when the capped pile approaches the ultimate load, the differential displacement between the pile and the soil increases gradually and the equal-strain condition can no longer be strictly satisfied, which leads to a larger error.
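As a quick numerical cross-check of the data above, each row of Table 2 satisfies the load-sharing relation of Equation (1) once the area of the soil under the 1.5 m square cap is used. Representing that cap by an equivalent circular radius R = √(2.25/π) ≈ 0.85 m (an assumption introduced here for illustration, not stated by the authors) also gives R − r ≈ 0.65 m, which is presumably the origin of the 0.645 m action radius quoted in Section 3.1. The snippet below, which assumes the capped_pile_model() sketch from Section 2.3 is available, verifies one Table 2 row and shows how the model would be called with the T4 parameters of Table 3; the absolute response values depend on how the garbled closed-form equations are resolved and are for illustration only.

```python
import math

r = 0.2                                    # pile radius (m), Table 3
A_cap = 1.5 * 1.5                          # area of the 1.5 m square cap (m^2)
A_soil = A_cap - math.pi * r ** 2          # area of soil under the cap (~2.12 m^2)
R = math.sqrt(A_cap / math.pi)             # assumed equivalent circular cap radius (~0.846 m)
print(round(R - r, 3))                     # ~0.646 m, close to the 0.645 m action radius in Section 3.1

# Load-sharing check (Eq. (1)) for the first T2 row of Table 2: Pt = 363 kN, q1 = 17.6 kPa
print(363.0 + 17.6 * A_soil)               # ~400.4 kN, i.e. the measured 400 kN total load is recovered

# Illustrative call with the T4 parameters of Table 3 (moduli converted to kPa) at the 1500 kN
# level of Table 2; omega_b is set to the measured pile-top settlement purely as a stand-in value.
model = capped_pile_model(r=r, R=R, l=29.0, Ep=3.0e7, Gs=2230.0, nu=0.3,
                          omega_b=0.0074, q1=55.9)
N_5m = model["axial_force"](z=5.0, Pt=1381.0)   # axial force 5 m below the pile top (kN)
```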
## 4. Conclusions

Compared with the ordinary pile, the pile–soil interaction of the capped pile is more complicated. In this study, we proposed an analysis method of pile–soil interaction for capped piles considering the load action of the soil under the pile cap. We established an analysis model of pile–soil interaction according to the mechanical characteristics of capped piles and analyzed the lateral friction by combining the shearing displacement method and the Boussinesq solutions. Theoretical analytical expressions of the lateral friction, axial force, and load-settlement curves were obtained by establishing the equilibrium differential equation of the pile body and solving it. By comparing the variation of the lateral friction curves between the ordinary pile and the capped pile, it is found that the load acting on the soil under the cap reduces the lateral friction within a depth of about 4 times the load action radius; beyond this depth, the influence can be ignored. This is unfavorable to the bearing capacity of the pile body, and for short piles with large pile caps this phenomenon should be fully considered in design. The axial force and load-settlement curves calculated by this method are validated by a case history of reinforcing a subgrade with capped piles. The results show that the variations of the axial forces agree with the experimental data and the theoretical analysis, and the variation of the load-settlement curves is consistent with that of the measured curves. The error in the load-settlement curve increases when the soil is close to failure, which needs further study.

---
*Source: 1007731-2022-03-24.xml*
# Realization of Music-Assisted Interactive Teaching System Based on Virtual Reality Technology **Authors:** Yan Gao; Lin Gao **Journal:** Occupational Therapy International (2022) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2022/1007954 --- ## Abstract Virtual reality technology has attracted researchers’ attention because it can provide users with a virtual interactive learning environment. Based on the theory of virtual reality technology, this paper proposes the system model design and architecture of virtual interactive music-assisted interactive teaching and realizes key technologies such as modeling, music-assisted interactive teaching scene interaction, and database access. In the simulation process, based on the VRML/X3D bottom interactive system template, after comprehensive application research, comparative analysis of various modeling methods, the system verified the use of digital cameras combined with the modeling technology based on music elements to collaboratively establish VRML virtual model connections. For inline node function, we combined it with Outline3D to realize VRML integration and then use VizX3D, X3D-Edit to build X3D model and realize the conversion from VRML to X3D, which solves the system completeness problem of music-assisted interactive teaching. The experimental results show that, according to the statistical analysis of the data after the experiment, when the position changes in the virtual 3D music-assisted interactive teaching scene, it will be displayed in the plane layer, and the real-time coordinates of the virtual music-assisted interactive teaching scene displayed in HTML have case. By analyzing the scenes and dynamic effects in the works, the effects of the virtual world can be better displayed through the performance of details. The better accuracy and delay error reached 89.7% and 3.11%, respectively, which effectively improved the effect and feasibility of applying virtual reality technology to music-assisted interactive teaching. --- ## Body ## 1. Introduction Virtual reality technology is one of the hot areas of research at home and abroad and has played an active auxiliary role in the field of music-assisted interactive teaching [1–4]. Under the guidance of the constructivist music-assisted interactive teaching theory, it is connected with various technical means of multimedia technology and network technology, combined with the discipline of music-assisted interactive teaching, and helps learners to construct a learning environment in the virtual music-assisted interactive teaching scene. The environment and the construction of meaning are the new learning methods of interactive teaching in the new era. Constructivist learning theory emphasizes the central position of students, the important role of rich “situations” in meaning construction, and the important role of multiple learning music-assisted interactive teaching scenarios [5–7]. The virtual reality technology provides a technical guarantee for the realization of a simulated work and learning environment, which is deeply in line with the constructivist learning theory [8–11]. In the current situation at home and abroad, we can fully understand the wide range of virtual technology applications and lead the pace of the information age. 
The learners can interact with the learning situation through the simulation of the interactive teaching environment assisted by music, so as to realize the autonomous learning in the interactive teaching mode.Radianti et al. [12] believes that the use of virtual reality technology in music-assisted interactive teaching can fully mobilize the learner’s senses and thinking organs, so that the observed scenery is vividly displayed in front of them. Especially for some inaccessible music-assisted interactive teaching or experimental content, through virtual reality technology, they can be placed in front of them like real objects and can be carefully observed from the front, side, and back, and even inside the scene for observation and research. Odrekhivskyy et al. [13] believed that for the scenes that are difficult to restore in the real music-assisted interactive teaching, the reproduction of state can also be realized by means of virtual reality technology. Aithal and Aithal [14] found that in the music-assisted interactive teaching experiment, virtual reality can also be integrated with other multimedia technology to play a greater role. The so-called “interactivity” means that the human-computer interaction in the virtual reality system is a nearly natural interaction. Users can not only interact with a computer keyboard and mouse but also through special helmets, data gloves, and other sensing devices. Chen et al. [15] believe that the computer can adjust the musical elements presented by the system and the auxiliary interactive musical elements according to the movements of the user’s head, hands, eyes, language, and body. Users can inspect or operate objects in the virtual environment through their natural skills such as language, body movements, or actions. The information felt by the user in the virtual world, through the thinking and analysis of the brain, forms the action or strategy that they want to implement, and feeds it back to the system through the input interface to realize the function of interacting with the system and independently controlling the operation of the system [16–21].This paper starts with constructivist learning theory. From the learning point of view of learning, it analyzes the important role of virtual reality technology in the field of music-assisted interactive teaching, introduces what virtual reality technology is, the current application and development of virtual reality, VRML (virtual reality modeling language) to build the common technical means in the virtual reality music-assisted interactive teaching scene, and explain in detail the role and significance of the design in the author’s actual work. Secondly, starting from the completed music-assisted interactive teaching media simulation system, the key technologies used to complete the system and the design ideas of the system are expounded, and the specific methods for completing the model making of the virtual system and optimizing the model through 3DS MAX are expressed. In the process of establishing a virtual music-assisted interactive teaching environment, the process of using VRMLPad to interact with the established music-assisted interactive teaching environment model is explained, and the music-assisted interactive teaching equipment model established by Cult3D on 3DS MAX is explained for ideas and methods for interaction. 
The system implemented in this paper can become an effective supplementary means for music-assisted interactive teaching in course instruction and can help students achieve digital learning. ## 2. Methods ### 2.1. Virtual Reality Level The Script node in the system hierarchy transmits the value of the VR event to the script specified by the URL. If it accepts a group of trigger events, the Script node processes them in sequence according to different methods. If using JavaScript, each input event corresponds to a custom function of the same name, and the browser calls the functions in that order [22] (1)dψ′xdx−dψ′ydydx+dy−dψ′xdxdψ′ydy=0. This method can effectively help the file creator build more complex models and music-assisted interactive teaching scenarios without writing large amounts of code, by directly using the various controls integrated in the visual editor to complete the corresponding operations. During the construction of this system, this method was used extensively to build the models and music-assisted interactive teaching scenarios [23–26]. Action and autonomy are inseparable from the user and the objects in the scene: whether the motion is individual or mutual, it arises from their interaction, because these two characteristics are themselves derived from interaction. Character-assisted interaction is an auxiliary interaction mode in the virtual-assisted interaction system. The user is incarnated as an animated character in the virtual environment, and a third-person camera is placed behind the character as the user's viewpoint. The user can use the keyboard to control the movement of the character and thereby move the viewpoint. Character-assisted interaction is a freer auxiliary interaction mode, which breaks the time and space limitations that automatic assisted interaction places on the viewpoint. This example illustrates a typical process of writing a 3D event with a text editor: from defining a geometric model, to adding trigger nodes, to adding script events, the VRML authoring process is described. ### 2.2. Music-Assisted Dependency In the object-oriented design of the music-assisted components, data and code form a whole, and this whole is the object; the member data and member functions of the object can be hidden as needed, and other objects cannot modify them directly. All of an object's data can be modified only through its own member functions, which avoids unwanted coupling between program modules. VRML uses a descriptive text language to define the shapes of basic three-dimensional objects and combines these basic shapes, through the interpretation and execution of certain controls, into a virtual three-dimensional music-assisted interactive teaching scene. The biggest feature of VRML is the use of text to describe the three-dimensional space, which greatly reduces the amount of data transmitted on the Internet. When the scan line sweeps across the many polygons of the environment in Figure 1, the scan-conversion process for each polygon is similar. At this time, however, there is mutual occlusion between polygons, so it is necessary to determine and calculate the visible scan-line segments on each scan line, that is, to perform blanking processing.
This process can be divided into two steps: calculating the scan-line segments and determining the visibility of each segment. The first step calculates all intersecting line segments of the sweep line with the polygons formed by the objects among the auxiliary interactive music elements:

$$\frac{dx_{i,j}}{d_i\,d_j} = \frac{x_i/x_j}{x_j - 1} - \frac{1 - i - j}{x\,x_i - 1}. \quad (2)$$

The second step eliminates the invisible line segments or parts of line segments.

Figure 1: A brief description of music-assisted reliance on browsing information.

### 2.3. Music-Assisted Interactive Teaching Media

Routing is the connection channel between the nodes that generate music auxiliary events and those that receive them, and the routing of events (ROUTE) is what makes VRML programs interactive. The routing transmits the events generated by some nodes to other nodes and, by changing the attribute values of certain fields, makes the objects in the three-dimensional space produce motion or special effects, that is, animation and interaction, making the virtual world more realistic. The sense of presence in virtual reality comes from the thing that exists: it is the real feedback that the real thing can give the experiencer, and the virtual scene itself is built on the basis of the real thing.

The music element program algorithm for loading the music auxiliary interface is basically the same as that of the main interface. Its highlight is that when the system loads the music-assisted interactive teaching scene, an operation instruction display function is added, which not only lets the user understand the specific operation steps while waiting but also diverts the user's attention from the waiting time, thus strengthening the user's sensory experience. In the auxiliary interactive programming, in order to reduce the model burden of the system, when the music-assisted interactive teaching scene is loaded into the program, we store the scene in a Virtools file (Table 1), and the scene is dynamically added to the system according to the auxiliary interaction pattern selected by the user.

Table 1: Attributes of music-assisted interactive teaching.

| Music-assisted number | Experiment system data ratio | Control system data ratio |
| --- | --- | --- |
| 1 | 0.865, 0.346, 0.463 | 0.661, 0.742, 0.559 |
| 2 | 0.396, 0.719, 0.127 | 0.318, 0.338, 0.295 |
| 3 | 0.449, 0.433, 0.187 | 0.006, 0.086, 0.412 |
| 4 | 0.770, 0.764, 0.824 | 0.807, 0.525, 0.673 |
| 5 | 0.431, 0.426, 0.490 | 0.484, 0.437, 0.429 |

The z-buffer is used in conjunction with the frame buffer to complete the blanking function. The initial value of each unit in the z-buffer is taken as the background color or gray value of the corresponding picture, and blanking of the musical elements is achieved while they are drawn. When the image is drawn and the attribute (color or grayscale) value of each point (pixel) of the displayed object is filled into the corresponding unit of the frame buffer, the z-coordinate of that point is compared with the value stored in the corresponding unit of the z-buffer. If the former is greater than the latter, the value of the corresponding unit of the frame buffer is changed and the corresponding unit of the z-buffer is updated to the z-coordinate of the drawn point. On the contrary, if the z-coordinate of the point is smaller than the value of the corresponding unit in the z-buffer, it means that the point is farther from the viewer than the point already displayed and is therefore occluded.
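To make the Script node and ROUTE mechanism described in Sections 2.1 and 2.3 concrete, the following is a minimal VRML97 sketch; it is not taken from the system itself, and the node names and the colour change are illustrative assumptions:

```
#VRML V2.0 utf8
# Minimal event chain: TouchSensor -> Script (JavaScript) -> Material colour.
Group {
  children [
    DEF TOUCH TouchSensor {}
    Shape {
      appearance Appearance {
        material DEF MAT Material { diffuseColor 1 1 0 }  # starts yellow
      }
      geometry Box { size 2 1 1 }
    }
  ]
}
DEF CHANGER Script {
  eventIn  SFBool  clicked      # receives the TouchSensor output
  eventOut SFColor newColor     # forwarded to the Material
  url "javascript:
    function clicked(value, time) {
      if (value) newColor = new SFColor(1, 0, 0);  // red while pressed
      else       newColor = new SFColor(1, 1, 0);  // back to yellow
    }"
}
ROUTE TOUCH.isActive   TO CHANGER.clicked
ROUTE CHANGER.newColor TO MAT.set_diffuseColor
```

Each input event (here `clicked`) is handled by the JavaScript function of the same name, exactly as described above, and the ROUTE statements form the connection channels between nodes.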
## 3. Results

### 3.1. Virtual Reality Data Pooling

In the process of completing the construction of the virtual music-assisted interactive teaching environment, after completing the model, save the file format as wrl, and add interaction.
When modeling each piece of music-assisted interactive teaching equipment, it is convenient to use VRMLPad for later modification. Since there are many interactive processes in this part, it would be a very large project if the VRML were written entirely by hand; therefore, we choose to use Cult3D for the interactive designs. For this part, the equipment models established in 3DS MAX are saved in the C3D format shown in Figure 2 to facilitate the subsequent steps. Combined with the relevant experimental data of inverse kinematics, the key frames are simulated to achieve three-dimensional modeling and display, which can intuitively reproduce the standard technical movements of athletes and provide key elements for course teaching.

Figure 2: Virtual reality-assisted interactive music element data pooling.

The material has four property settings: ambient, diffuse, specular, and emissive, which represent ambient light, diffuse reflection, specular reflection, and self-emission, respectively; emissive represents the luminosity of the auxiliary interactive music element's own light. The four parameters are all values of type color. There is also a power property, a floating-point value: the larger the value, the greater the difference between the highlight intensity and the surrounding brightness. The vanishing point formed on the surface of the auxiliary interactive musical element by lines parallel to a coordinate axis is called the main vanishing point. According to the number of main vanishing points, perspective auxiliary interactive music elements are further divided into one-point, two-point, and three-point perspective. Compared with the parallel auxiliary interactive music elements, the perspective ones have a stronger sense of depth and look more realistic, but perspective projection cannot truly reflect the exact size and shape of objects.

Students' movements can also be recorded through sensors and motion-sensing devices, and the comparison of data differences can be used to judge the standardization of students' movements and realize the interaction between the virtual world and the real world. The higher the sample rate and resolution of the auxiliary interactive musical elements, the more memory is consumed when the file is loaded into the computer. Not all sound cards support high sampling rates, and it is questionable whether high-sampling-rate auxiliary interactive music elements are necessary at all: human ears are not that sensitive, so an excessively high resolution is simply a waste of memory. In order to save memory and improve the efficiency of the CPU, the virtual auxiliary interactive system uses an auxiliary interactive music element stream to process the music module, that is, the auxiliary interactive music element file is streamed. The principle is as follows: the first part of the file is loaded into memory, and the rest of the file is then loaded piece by piece during playback.

### 3.2. Music Helper Function Recursion

In the function recursion of computer music elements, the homogeneous coordinate technique is widely used to study the transformation of music elements; that is, in the (n+1)-dimensional space, the transformation of n-dimensional vectors is discussed, and a normalization process is performed to observe the transformation result in the n-dimensional space.
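As a standard worked illustration of this homogeneous-coordinate technique (a generic 2D example, not a formula taken from the system), a point (x, y) is written as (x, y, 1), the individual transformation matrices are multiplied into one combined matrix, and the result is normalized by the last coordinate:

$$
\begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix}
=
\underbrace{\begin{pmatrix} 1 & 0 & t_x \\ 0 & 1 & t_y \\ 0 & 0 & 1 \end{pmatrix}}_{\text{translation}}
\underbrace{\begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}}_{\text{rotation}}
\begin{pmatrix} x \\ y \\ 1 \end{pmatrix},
\qquad
(x, y, w) \sim \left(\tfrac{x}{w}, \tfrac{y}{w}\right),\; w \neq 0 .
$$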
It is precisely because the geometric transformation of music elements can be expressed as the multiplication of a vector representing the point set of the music elements by a certain transformation matrix that the transformed music elements can be obtained quickly, which makes the dynamic display of computer music elements possible. Whether in the two-dimensional plane or in three-dimensional space, the defined geometric musical elements can be transformed continuously many times to obtain the new required musical elements. In this case, it is only necessary to multiply the corresponding transformation matrices to form a combined transformation matrix and then apply it to the geometric music elements:

$$X_{a,b} = \begin{pmatrix} \psi'_a d_a & 1-a & 0 \\ 1-a & \psi'_a d_a & 1-a \\ 0 & 1-a & \psi'_a d_a \end{pmatrix}. \quad (3)$$

The tracking mainly calculates the specific direction and position of the tracked object by using the video camera, the X-Y plane array, and the surrounding light or the projection of the tracking light onto the projection plane at different times and positions. A path curve in Virtools is composed of several 3D nodes and curves; these elements can be made through functions and, once made and configured, can be used as a path curve function. The key code is introduced first. X3D text editors mainly include X3D-Edit, an open-source X3D development tool and toolkit developed by the Web3D Consortium. X3D-Edit is an Extensible 3D (X3D) text editor for musical elements; it customizes a general XML editor on the Java platform through the X3D 3.0 tag set defined by the X3D 3.0 DTD and uses IBM Xeena as the customized X3D music-assisted interactive teaching scene graph editor. Supported platforms include Windows, MacOS X, Linux, Unix, and others. The higher a 2D frame is in the hierarchy, the higher its display priority, and the hierarchical relationship of 2D frames can be adjusted at any time to ensure that no logical errors occur during their real-time display.

### 3.3. Deep Fit of Music-Assisted Interactive Teaching

The interactive teaching output is divided into three parts: static output, dynamic output, and online output. Different output forms have different effects; with the support of different technologies, the effects that can be displayed differ, and there are also gaps in practical effect. There are two technical forms of virtual reality technology in such works. One is operational technology, which conducts in-depth research and development through existing tools, software technology, and the successful cases of previous researchers. For example, the touch-type structure, which uses the mouse and keyboard to operate the virtual space on the computer screen, is a relatively traditional form of virtual assisted interaction technology and also one of the forms most quickly accepted by users:

$$U_{i,j} = \frac{u_{i,j} - u_i - u_j}{u_i - u_j}, \qquad W_{i,j} = \frac{w_{i,j} - w_i - w_j}{w_i - w_j}. \quad (4)$$

The main functions realized by the interactive teaching interface module design algorithm are: system initialization and loading of textures, materials, and other assets; display of the initial interface and its buttons; clicking a button to enter the corresponding function interface; initialization of the function interface and loading of its materials; display of the function interface and its buttons; clicking a button in the function interface to switch back to the initial interface; and, after the system terminates, destroying the data and releasing the memory.
When editing VRML or X3D music-assisted interactive teaching scene graph files, X3D-Edit can provide a simplified and error-free way of creating and editing them; it customizes context-sensitive tooltips through XML files, providing a summary of each node and attribute to facilitate authoring and editing of music-assisted interactive teaching scene graphs. When students use virtual reality technology, they can concentrate faster and for longer, the number of questions asked of teachers is relatively reduced, and the work intensity of teachers is reduced; the interactive teaching effect is viewed optimistically by all. In this virtual environment, students are not students in the classroom in the physical sense but rather those who need to obtain learning information and resources, enter the virtual teaching environment, and then participate in learning activities through their avatars.

In the virtual music-assisted interactive teaching scene, the characters cannot pass through the boundary scope of Figure 3, such as the music-assisted interactive teaching building and the music-assisted interactive teaching facilities. Because of the interaction between the system and the user and the movement of the character, the character may collide with virtual stationary objects. In order to maintain the authenticity of the virtual auxiliary interaction system, it is necessary to detect possible collisions in time and design the corresponding collision response; otherwise, the character will pass through objects, which affects the realism of the auxiliary interactive system and the user's immersion. The grid contains 255 × 255 cells horizontally and vertically, and the value of each cell ranges from 0 to 255. When a cell's value is 255, it turns yellow, indicating that the cell is forbidden to be crossed. These values are used to plan the limited boundary of the character's movement: the straight-line distance between the character and the boundary is calculated in real time, and when the distance falls below 0, the character's progress is stopped, indicating that the character has reached the boundary. The function that bounds the character's motion is setBoundary().

Figure 3: Distribution of deep fit of music-assisted interactive teaching.

### 3.4. Parameter Configuration of Music-Assisted Interactive Teaching Scene

For the three-dimensional music-assisted interactive teaching scenes and objects displayed on the Internet, VRML provides a standard file format for their description, and any editor can be used as a writing tool. The role of virtual reality technology in teaching and its application in teaching were analyzed and discussed, experts' opinions were listened to carefully and adopted, and multiple perspectives were provided to promote the research of this paper and give it a more realistic basis. To enter the virtual reality world, users only need access through a VRML browser and download the music elements and the audio and video resources placed on the server. In addition, VRML's rendering of musical elements is performed in real time as the user interacts with objects in the virtual world. VRML provides 6+1 degrees of freedom, that is, rotation and movement along three directions each, as well as hyperlinks (Anchors) to other 3D spaces. Users can intuitively feel the impact and changes of their own behavior on the virtual world.
$$D_{\mathrm{ertimers}}\left(t_{c},\, c-1,\, c\right), \qquad x \in \{0, 1, 2, \cdots, c-1\}^{3}. \quad (5)$$

User domain refers to the entire natural space that programmers use to define sketches. The definition referred to here (see the sketch at the end of this subsection) specifies a yellow cube with length 2 and width and height 1. The main attribute of the box node is size, which determines the dimensions along the length (x direction), height (y direction), and width (z direction). It is better to save the file in a folder with an English name, because in a folder (path) with a non-English name only files that do not use Java classes and music-assisted interactive teaching scenes can be browsed and run; otherwise, the X3D music-assisted interactive teaching scene cannot even be seen. The windows can be nested; that is, a second-layer window can be defined in the first-layer window, an (i+1)-layer window can be defined in the i-layer window, and so on. In some cases, users can also define, as needed, a circular window with a center and a radius or a polygonal window represented by a boundary.
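For reference, a minimal VRML fragment defining the yellow cube mentioned above (length 2 along x, height and width 1) could look as follows; this is an illustrative sketch rather than the system's actual code:

```
#VRML V2.0 utf8
# The "yellow cube": size is given as x (length), y (height), z (width).
Shape {
  appearance Appearance {
    material Material { diffuseColor 1 1 0 }  # yellow
  }
  geometry Box { size 2 1 1 }
}
```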
## 4. Discussion

### 4.1. Extraction of Virtual Reality Factors

A VRML browser can, in theory, process multiple virtual reality factor objects distributed on the Internet, enhancing the quality of the musical elements and simulations when performance allows, or reducing that quality when performance is poor. The node "level of detail (LOD)" provided by VRML can simulate what the human eye observes in reality: when the distance from an object is relatively far, the details of the object are not clear, but as the distance decreases, the level of detail becomes clearer and clearer. The selection criterion for the level of detail in VRML is the distance from the viewpoint to the geometry, and the system selects different LOD levels according to this distance. At the beginning of the formal experiment, a questionnaire survey on the application of virtual reality technology in teaching was conducted; the questionnaire was distributed over the network. After the experiment, a questionnaire survey was conducted among the students in the experimental group.
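A minimal VRML sketch of the LOD node just described might look as follows; the range thresholds and the inlined file names are illustrative assumptions, not values from the system:

```
#VRML V2.0 utf8
# The browser switches between the listed children as the viewpoint recedes.
LOD {
  center 0 0 0
  range [ 10, 50 ]                          # distance thresholds from the viewpoint
  level [
    Inline { url "instrument_high.wrl" }    # closer than 10: full-detail model
    Inline { url "instrument_low.wrl"  }    # between 10 and 50: simplified model
    Shape { geometry Box { size 1 1 1 } }   # beyond 50: a simple stand-in
  ]
}
```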
When the distance is greater than or equal to a certain value (Figure 4), the observer is moved to the next level of detail.

Figure 4: Extraction of virtual reality network environment factors.

In most cases, the client and the server run on the same machine, but they can also run in a network environment, so OpenGL has network transparency. The music element function library is encapsulated in the win32 graphics device interface (GDI) dynamic link library, and the OpenGL music element library is likewise encapsulated in an OpenGL dynamic link library. An OpenGL function call issued by the client application is first processed by the OpenGL DLL and then sent to the server, where the winsrv DLL processes it further and passes it to the win32 device driver interface (DDI); finally, the processed music element command is sent to the video display driver.

### 4.2. Simulation of Music-Assisted Interactive Teaching System

The core part of the interactive teaching system is designed to complete all the main functions of the system, including the preprocessing method for music-assisted interactive teaching scenes, the system interface module, the character-assisted interaction module, the automatic auxiliary interaction module, the sound effect module, and the release of the system after completion. After the development of this virtual assisted interaction system is completed, it is only a primary version, and more functions will be expanded as needed. For example, the system can be ported to a tablet computer and adopt a touch-screen operation mode; a database module can be added to record the key of each assisted interaction; and a multirole simultaneous auxiliary interaction function can be added. The powerful secondary development capability of Virtools can fully meet the needs of these system extensions. Using the index scoring method and the chart evaluation method, with a full score of 100 points, the behavior of the testee is divided into several indicators, and each indicator is assigned a certain share of the total score.

The system abstracts each geometric body into a music element class according to the idea of object-oriented programming, and each interactive teaching class encapsulates the unique attributes and behaviors of the corresponding body. The music element class mainly completes the establishment of the music element model and the generation of wireframes, entity maps, material maps, and texture maps; in addition, music element transformation methods (translation, rotation, and scaling) are provided. First, the object of the simulation is a certain floor of a music-assisted interactive teaching building, which includes multiple areas such as multimedia classrooms, offices, linear editing rooms, nonlinear editing rooms, and recording rooms. According to the actual dimensions, AutoCAD 2004 is used to draw the architectural plan, the file-import command is used to import the dwg drawing into 3DS MAX, and the extrude modifier is used to build the walls after the lines are drawn. The modeling process uses basic geometry for modeling.

Any area less than or equal to the screen area is called the view area. The view area can be defined by the device coordinates in the screen area. If the music elements in the window area selected by the user are to be displayed in the view area, they must also be converted by the program into coordinate values in the device coordinate system.
The view area is generally defined as a rectangle, specified either by the coordinates of its lower-left and upper-right corners or by the coordinates of the lower-left corner and the edge lengths of the view area in the x and y directions. View areas can be nested, and the nesting depth of the music element processing software is fixed. For graphics and polygon windows, users can also define circular and polygonal view areas for different applications.

### 4.3. Example Application and Analysis

For the design of interactive teaching virtual comics, the most convenient delivery form is an EXE executable file, but EXE files cannot be generated directly by Virtools; the external VirtoolsMakeExe.exe and CustomerPlayer.exe files are needed to achieve this. When the VirtoolsMakeExe.exe file is run, a setting interface pops up: click the designated button corresponding to the Virtools project file option, select the completed CMO file, and set the resolution in the window setting option. Here, we use 1366 × 768 and click the "Generate" button; the generated EXE file can then be run independently of the Virtools environment. The base class of music elements is used to implement operations common to all music elements, such as translation, rotation, and zoom operations of music elements and the material settings. It also contains two virtual functions: the music element drawing function Draw() and the music element storage function Save(). Then, we use mathematical methods to establish mathematical models such as charts, analyze the various data and materials obtained from the investigation and scoring through mathematical statistics, and finally form quantitative conclusions.

The functions of each voxel class are roughly the same, mainly completing voxel modeling, generating various forms of 3D models, and storing the data unique to each geometric shape. For models that appear repeatedly in multiple places, if each were modeled individually, the size of the file's code would greatly increase and the real-time performance of the virtual music-assisted interactive teaching scene would be greatly reduced. Interactive teaching techniques are therefore used to apply transparent textures directly to established 2D surfaces, or onto criss-cross patches, and the 2D object is set to automatically adjust its normal direction according to the position of the camera.

From the perspective of the self-rating items, the average score was 4.68 points higher than that of the control group. Using SPSS software for t-test analysis, P = 0.002 < 0.05 and P < 0.01, so there is a very significant difference. Vizx3D includes a complete 3D modeling function and the interactive editing function for music elements, as shown in Figure 5. Vizx3D supports the establishment and editing of H-Anim, NURBS, and textured music-assisted interactive teaching scenes and supports XML- or VRML97-encoded X3D files. Through Vizx3D, users can intuitively create music-assisted interactive teaching scenes, create animations with key frames, and create music-assisted interactive teaching scene interactions through its unique system. It can output both wrl files and x3dv files.

Figure 5: Interactive teaching virtual display data editing.

In the corresponding tab, the resolution of the virtual auxiliary interactive system can be set. The resolution is related to the size of the design window of this system and to the display effect of the published work after the system design is completed.
The resolution selected by this system is 1366 × 768. After the lights are added, the music-assisted interactive teaching scene is bright, but all the models appear to have no textures on them; at this time the models carry shadow maps. Go to the Meshes settings tab, double-click the mesh under Meshes to open it, and in its blending settings change the mode to overlay DestColor; the texture then appears, but the color is darker. Next, enter the corresponding material level, brighten its color, and adjust all models to the ideal state. This shows that the students in the experimental group have improved their learning and understanding of the teaching content, are more satisfied with their self-expression, have improved their self-confidence and interest in dancing, and gave themselves higher scores. If the texture of a model is transparent, change the mode to mask in the material. If it is a double-sided decal, tick the both-sided box below.
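A hedged VRML sketch of the camera-facing transparent patch described above is given below; the texture file name and patch dimensions are illustrative assumptions:

```
#VRML V2.0 utf8
# A flat patch that always turns toward the viewer, textured with a transparent image.
Billboard {
  axisOfRotation 0 0 0                      # 0 0 0 = always face the viewer
  children [
    Shape {
      appearance Appearance {
        texture ImageTexture { url "prop.png" }   # alpha channel gives the cut-out
      }
      geometry IndexedFaceSet {
        coord Coordinate { point [ -1 0 0, 1 0 0, 1 2 0, -1 2 0 ] }
        coordIndex [ 0 1 2 3 -1 ]
        solid FALSE                         # render both sides of the patch
      }
    }
  ]
}
```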
## 5. Conclusion

This paper starts from the role of virtual technology in the auxiliary interactive music teaching environment, breaks through traditional forms of expression, lets technology and auxiliary interaction complement each other in the trend of digital information, and expresses a unique auxiliary interaction personality.
The main research contents of this paper include the theory of virtual reality technology in the auxiliary interactive music teaching environment, the auxiliary interactive expression of virtual reality in that environment, the interactive presentation of virtual reality technology and auxiliary interaction in that environment, auxiliary interactive music itself, and the realization of works combining virtual reality technology and auxiliary interaction in the teaching environment. In order to meet the requirements of real-time interaction, scientific computing visualization, and dynamic simulation for real-time rendering of music elements, reducing the number of triangle faces of the rendered entity is a key technical issue for real-time rendering of the music-assisted interactive teaching scenes outside the training center and inside the training room. The article therefore applies the real-time-generation form of LOD technology to the objects of the virtual music-assisted interactive teaching scene within a certain range, so that they show different details at different viewing distances and the browser can switch automatically between the different levels of detail. The research on virtual reality technology theory and virtual reality-assisted interactive expression in the auxiliary interactive music teaching environment guides the creation of virtual design based on that theory, finds breakthroughs in the creation process, and summarizes new theoretical knowledge and valuable achievements. The development of virtual reality technology should not be hampered by equipment; more and more convenient forms can also bring an immersive experience from reality to virtuality.

---

*Source: 1007954-2022-06-11.xml*
--- ## Abstract Virtual reality technology has attracted researchers’ attention because it can provide users with a virtual interactive learning environment. Based on the theory of virtual reality technology, this paper proposes the system model design and architecture of virtual interactive music-assisted interactive teaching and realizes key technologies such as modeling, music-assisted interactive teaching scene interaction, and database access. In the simulation process, based on the VRML/X3D bottom interactive system template, after comprehensive application research, comparative analysis of various modeling methods, the system verified the use of digital cameras combined with the modeling technology based on music elements to collaboratively establish VRML virtual model connections. For inline node function, we combined it with Outline3D to realize VRML integration and then use VizX3D, X3D-Edit to build X3D model and realize the conversion from VRML to X3D, which solves the system completeness problem of music-assisted interactive teaching. The experimental results show that, according to the statistical analysis of the data after the experiment, when the position changes in the virtual 3D music-assisted interactive teaching scene, it will be displayed in the plane layer, and the real-time coordinates of the virtual music-assisted interactive teaching scene displayed in HTML have case. By analyzing the scenes and dynamic effects in the works, the effects of the virtual world can be better displayed through the performance of details. The better accuracy and delay error reached 89.7% and 3.11%, respectively, which effectively improved the effect and feasibility of applying virtual reality technology to music-assisted interactive teaching. --- ## Body ## 1. Introduction Virtual reality technology is one of the hot areas of research at home and abroad and has played an active auxiliary role in the field of music-assisted interactive teaching [1–4]. Under the guidance of the constructivist music-assisted interactive teaching theory, it is connected with various technical means of multimedia technology and network technology, combined with the discipline of music-assisted interactive teaching, and helps learners to construct a learning environment in the virtual music-assisted interactive teaching scene. The environment and the construction of meaning are the new learning methods of interactive teaching in the new era. Constructivist learning theory emphasizes the central position of students, the important role of rich “situations” in meaning construction, and the important role of multiple learning music-assisted interactive teaching scenarios [5–7]. The virtual reality technology provides a technical guarantee for the realization of a simulated work and learning environment, which is deeply in line with the constructivist learning theory [8–11]. In the current situation at home and abroad, we can fully understand the wide range of virtual technology applications and lead the pace of the information age. The learners can interact with the learning situation through the simulation of the interactive teaching environment assisted by music, so as to realize the autonomous learning in the interactive teaching mode.Radianti et al. [12] believes that the use of virtual reality technology in music-assisted interactive teaching can fully mobilize the learner’s senses and thinking organs, so that the observed scenery is vividly displayed in front of them. 
Especially for some inaccessible music-assisted interactive teaching or experimental content, through virtual reality technology, they can be placed in front of them like real objects and can be carefully observed from the front, side, and back, and even inside the scene for observation and research. Odrekhivskyy et al. [13] believed that for the scenes that are difficult to restore in the real music-assisted interactive teaching, the reproduction of state can also be realized by means of virtual reality technology. Aithal and Aithal [14] found that in the music-assisted interactive teaching experiment, virtual reality can also be integrated with other multimedia technology to play a greater role. The so-called “interactivity” means that the human-computer interaction in the virtual reality system is a nearly natural interaction. Users can not only interact with a computer keyboard and mouse but also through special helmets, data gloves, and other sensing devices. Chen et al. [15] believe that the computer can adjust the musical elements presented by the system and the auxiliary interactive musical elements according to the movements of the user’s head, hands, eyes, language, and body. Users can inspect or operate objects in the virtual environment through their natural skills such as language, body movements, or actions. The information felt by the user in the virtual world, through the thinking and analysis of the brain, forms the action or strategy that they want to implement, and feeds it back to the system through the input interface to realize the function of interacting with the system and independently controlling the operation of the system [16–21].This paper starts with constructivist learning theory. From the learning point of view of learning, it analyzes the important role of virtual reality technology in the field of music-assisted interactive teaching, introduces what virtual reality technology is, the current application and development of virtual reality, VRML (virtual reality modeling language) to build the common technical means in the virtual reality music-assisted interactive teaching scene, and explain in detail the role and significance of the design in the author’s actual work. Secondly, starting from the completed music-assisted interactive teaching media simulation system, the key technologies used to complete the system and the design ideas of the system are expounded, and the specific methods for completing the model making of the virtual system and optimizing the model through 3DS MAX are expressed. In the process of establishing a virtual music-assisted interactive teaching environment, the process of using VRMLPad to interact with the established music-assisted interactive teaching environment model is explained, and the music-assisted interactive teaching equipment model established by Cult3D on 3DS MAX is explained for ideas and methods for interaction. The system implemented in this paper can become an effective supplementary means of music-assisted interactive teaching in the course music-assisted interactive teaching and can help students achieve digital learning. ## 2. Methods ### 2.1. Virtual Reality Level The Script node in the system hierarchy transmits the value of the VR event to the script specified by the URL. If it accepts a group of trigger events, the Script node processes it in sequence according to different methods. If using JavaScript, each input event corresponds to a custom function of the same name. 
The browser calls the functions in order [22]:

$$\frac{d\psi'_x}{dx}-\frac{d\psi'_y\,dy}{dx+dy}-\frac{d\psi'_x}{dx}\,\frac{d\psi'_y}{dy}=0. \tag{1}$$

This method can effectively help the file creator to establish more complex models and music-assisted interactive teaching scenarios without writing large amounts of code: the various controls integrated in the visual editor are used directly to complete the corresponding operations. During the establishment of this system, this method was used extensively to build the models and teaching scenarios [23–26]. Action and autonomy are inseparable from the user and the things in the scene. Whether the motion is individual or mutual, it arises from the interaction between them, because these two characteristics are themselves derived from interaction. Character-assisted interaction is an auxiliary interaction mode of the virtual-assisted interaction system. The user is incarnated as an animated character in the virtual environment, and a third-person camera is placed behind the character as the user’s viewpoint. The user can use the keyboard to control the movement of the character and thereby move the viewpoint. Character-assisted interaction is a freer mode, which breaks the limitation that automatic assisted interaction places on the temporal and spatial perspective. In this example, a typical process of writing a 3D event with a text editor is illustrated: from defining a geometric model, to adding trigger nodes, to adding script events, the VRML writing process is described.

### 2.2. Music-Assisted Dependency

In the object-oriented design method used for music auxiliary objects, data and code form a whole, and this whole is the object. The member data and member functions of the object can be hidden as needed, and other objects cannot modify them directly; all data must instead be modified through the object’s member functions, so that program modules do not interfere with one another. VRML uses a descriptive text language to describe the shapes of basic three-dimensional objects and, through the interpretation and execution of control statements, combines these basic shapes into a virtual three-dimensional music-assisted interactive teaching scene. The biggest feature of VRML is the use of text to describe three-dimensional space, which greatly reduces the amount of data transmitted over the Internet. When the scan line sweeps across the many polygons of the environment in Figure 1, the capture and transformation process for each polygon is similar. At this point, however, the polygons occlude one another, so the visible segments within each scan line must be determined and calculated, that is, blanking processing must be performed. This process can be divided into two steps: calculating the scan-line segments and determining the visibility of each segment. The first step calculates all intersecting segments of the scan line with the polygons formed by the objects in the auxiliary interactive music elements:

$$\frac{dx_{i,j}}{d_i\,d_j}=\frac{x_i/x_j}{x_{j-1}}-\frac{1-(i-j)\,x}{x_{i-1}}. \tag{2}$$

The second step eliminates the invisible segments or parts of segments.

Figure 1: A brief description of music-assisted reliance on browsing information.
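To make the Script-node mechanism described in Section 2.1 concrete, the following is a minimal VRML97 sketch (illustrative only, not taken from the system's source; all node names are made up) of the model, trigger node, and script workflow: a TouchSensor event is routed into a Script node whose ECMAScript function shares the name of the eventIn, and the script's output is routed back to the material.

```vrml
#VRML V2.0 utf8
# 1. Geometric model with a sibling trigger node (TouchSensor)
DEF BALL Transform {
  children [
    Shape {
      appearance Appearance {
        material DEF MAT Material { diffuseColor 0 0 1 }
      }
      geometry Sphere { radius 1 }
    }
    DEF TOUCH TouchSensor { }
  ]
}
# 2. Script node: each eventIn maps to an ECMAScript function of the same name
DEF COLORIZER Script {
  eventIn  SFTime  clicked
  eventOut SFColor newColor
  url "javascript:
    function clicked(value, timestamp) {
      newColor = new SFColor(1.0, 0.0, 0.0);  // respond to the click event
    }"
}
# 3. Routes carry events between nodes
ROUTE TOUCH.touchTime    TO COLORIZER.clicked
ROUTE COLORIZER.newColor TO MAT.set_diffuseColor
```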
### 2.3. Music-Assisted Interactive Teaching Media

Routing is the connection channel between the nodes that generate music auxiliary events and the nodes that receive them, and event routing (ROUTE) is what makes VRML programs interactive. Routing transmits the events generated by some nodes to other nodes and, by changing the values of certain fields, makes objects in three-dimensional space produce motion or special effects, that is, animation and interaction, making the virtual world more realistic. The sense of presence in virtual reality comes from what actually exists: it is the real feedback that the real thing gives the experiencer, and the virtual scene itself is built on the basis of the real thing. The music element program algorithm of the interface that loads the music auxiliary content is basically the same as that of the main interface. Its highlight is that, when the system loads the music-assisted interactive teaching scene, an operation instruction display function is added, which not only lets the user understand the specific operation steps while waiting but also diverts the user’s attention from the waiting time, thus strengthening the sensory experience. In the auxiliary interactive programming, in order to reduce the model burden of the system, the music-assisted interactive teaching scene is stored in a Virtools file (Table 1) when the scene is loaded into the program design; according to the auxiliary interaction mode selected by the user, patterns are dynamically added to the system.

Table 1: Attributes of music-assisted interactive teaching.

| Music-assisted number | Experiment system data ratio | Control system data ratio |
|---|---|---|
| 1 | 0.865, 0.346, 0.463 | 0.661, 0.742, 0.559 |
| 2 | 0.396, 0.719, 0.127 | 0.318, 0.338, 0.295 |
| 3 | 0.449, 0.433, 0.187 | 0.006, 0.086, 0.412 |
| 4 | 0.770, 0.764, 0.824 | 0.807, 0.525, 0.673 |
| 5 | 0.431, 0.426, 0.490 | 0.484, 0.437, 0.429 |

The z-buffer is used in conjunction with the frame buffer to complete the blanking function. The initial value of each unit in the z-buffer is taken as the background color or gray value of the corresponding picture, and blanking of the musical elements is achieved while they are drawn. As the image is drawn, when the attribute (color or grayscale) value of each pixel is about to be filled into the corresponding unit of the frame buffer, the z-coordinate of that point is first compared with the value stored in the corresponding z-buffer unit. If the former is greater than the latter, the value of the corresponding frame-buffer unit is changed, the corresponding z-buffer unit is updated to this z-coordinate, and the point is drawn. Conversely, if the z-coordinate of the point is smaller than the value in the corresponding z-buffer unit, the point is farther from the viewer than the point already displayed and is therefore occluded.
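As a concrete illustration of the event routing described at the start of this subsection, the sketch below (hypothetical; the file name piano.wrl and the node names are made up for illustration) routes a TimeSensor's clock fraction into an OrientationInterpolator and the resulting rotation into a Transform, producing a continuously spinning model without any script code.

```vrml
#VRML V2.0 utf8
DEF PIANO Transform {
  children Inline { url "piano.wrl" }   # hypothetical model file
}
DEF CLOCK TimeSensor { cycleInterval 4 loop TRUE }
DEF SPIN  OrientationInterpolator {
  key      [ 0, 0.5, 1 ]
  keyValue [ 0 1 0 0,  0 1 0 3.14159,  0 1 0 6.28318 ]
}
# Event routing: clock ticks drive the interpolator, which drives the rotation
ROUTE CLOCK.fraction_changed TO SPIN.set_fraction
ROUTE SPIN.value_changed     TO PIANO.set_rotation
```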
## 3. Results

### 3.1. Virtual Reality Data Pooling

In the process of completing the construction of the virtual music-assisted interactive teaching environment, after completing the model, save the file format as wrl, and add interaction. In the modeling of each music-assisted interactive teaching equipment, it is convenient to use VRMLPAD to modify it later. Since there are a lot of interactive processes in this part, it will be a very huge project if the VRML language is written manually. Therefore, we choose to use CULT3D for interactive designs.
For this part, save the established equipment model in 3DMAX as the C3D format shown in Figure2 to facilitate the use of subsequent steps. Combined with the relevant experimental data of inverse kinematics, the key frames are simulated to achieve three-dimensional modeling and display, which can intuitively reproduce the standard technical movements of athletes and provide key elements for course teaching.Figure 2 Virtual reality-assisted interactive music element data pooling.The material has four property settings: ambient, diffuse, specular, and emissive, which represent ambient light, diffuse reflection light, and specular reflection properties, respectively. Emissive represents the luminosity of the auxiliary interactive music element light itself. The four parameters are all values of type color. There is also a power property, which is a floating point type. The larger the value, the greater the difference between the highlight intensity and the surrounding brightness. The parallel line parallel to the coordinate axis, the vanishing point formed on the surface of the auxiliary interactive musical element is called the main vanishing point. According to the number of main vanishing points, perspective auxiliary interactive music elements are further divided into one-point perspective, two-point perspective, and three-point perspective. Compared with the parallel auxiliary interactive music elements, the perspective auxiliary interactive music elements have a stronger sense of depth and look more realistic. But perspective projection cannot truly reflect the exact size and shape of objects.Students’ movements can also be recorded through sensors and motion sensing devices, and the comparison of data differences can be used to judge the standardization of students’ movements and realize the interaction between the virtual world and the real world. The higher the sample rate and resolution of the auxiliary interactive musical elements, the more memory is consumed when the auxiliary interactive musical elements file is loaded into the computer. Not all sound cards can support high sampling rates and whether it is necessary to use high sampling rate auxiliary interactive music elements, because human ears are not that powerful, too high resolution is just a waste of memory. In order to save memory and improve the efficiency of the CPU, the virtual auxiliary interactive system uses the auxiliary interactive music element stream to process the music module, that is, the auxiliary interactive music element file is streamed. The principle is this: first load the first point of the auxiliary interactive music element file into the memory, and then load the rest of the file into the memory one after another during playback. ### 3.2. Music Helper Function Recursion In the function recursion of computer music elements, the homogeneous coordinate technique is widely used to study the transformation of music elements, that is, in then+1 dimensional space, the transformation of n-dimensional vectors is discussed, and the normalization process is performed to observe its transformation in the n-dimensional space result. It is precisely because the geometric transformation of music elements can be transformed into the multiplication of a vector representing the point set of music elements and a certain transformation matrix, so the transformed music elements can be obtained quickly, which provides the possibility for the dynamic display of computer music elements. 
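As a standard worked illustration of this point (a textbook example, not data from the paper), a planar rotation by an angle $\theta$ followed by a translation by $(t_x, t_y)$, each written as a $3\times 3$ homogeneous matrix, collapses into a single combined matrix acting on the homogeneous point $(x, y, 1)^{\mathsf T}$:

$$
\begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix}
=
\underbrace{\begin{pmatrix} 1 & 0 & t_x \\ 0 & 1 & t_y \\ 0 & 0 & 1 \end{pmatrix}}_{\text{translation}}
\underbrace{\begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}}_{\text{rotation}}
\begin{pmatrix} x \\ y \\ 1 \end{pmatrix}
=
\begin{pmatrix} \cos\theta & -\sin\theta & t_x \\ \sin\theta & \cos\theta & t_y \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} x \\ y \\ 1 \end{pmatrix}.
$$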
Whether in the two-dimensional plane or in three-dimensional space, the defined geometric musical elements can be transformed repeatedly to obtain the new required musical elements. In that case, it is only necessary to multiply the corresponding transformation matrices to form a combined transformation matrix and then apply it to the geometric music elements:

$$X_{a,b}=\begin{pmatrix}\psi'_a d_a & 1-a & 0\\ 1-a & \psi'_a d_a & 1-a\\ 0 & 1-a & \psi'_a d_a\end{pmatrix}. \tag{3}$$

The tracking method mainly calculates the specific direction and position of the tracked object by using the video camera to capture the X-Y plane array and the projection of the ambient or tracking light onto the projection plane at different times and positions. A path curve function in Virtools is composed of some 3D nodes and curves. These elements can be made through functions; after they are made and configured, they can be used as a path curve function. The key code is introduced first. X3D’s text editors mainly include X3D-Edit, an open-source X3D development tool (and toolkit) developed by the Web3D Consortium. X3D-Edit is an Extensible 3D (X3D) text editor for musical elements; it customizes a general XML editor under the Java platform through the X3D 3.0 tag set defined by the X3D 3.0 DTD and uses IBM Xeena as the customized X3D teaching scene graph editor. Supported platforms include Windows, MacOS X, Linux, Unix, and others. The higher an item’s hierarchy level, the higher its display priority. The hierarchical relationship of 2D frames can be set at any time to ensure that no logical errors occur during the real-time display of 2D frames.

### 3.3. Deep Fit of Music-Assisted Interactive Teaching

The interactive teaching output is divided into three parts: static output, dynamic output, and an online version. Different output forms have different effects; with the support of different technologies, the effects that can be displayed differ, and there are also gaps in practical effect. There are two technical forms of virtual reality technology in the works. One is operational technology, which is developed in depth from existing tools, software technology, and the successful cases of previous researchers. For example, the touch-type structure, which uses the mouse and keyboard to operate the virtual space on the computer screen, is a relatively traditional form of virtual assisted interaction technology and also one of the forms most quickly accepted by users.

$$U_{i,j}=\frac{u_{i,j}-u_i-u_j}{u_i-u_j},\qquad W_{i,j}=\frac{w_{i,j}-w_i-w_j}{w_i-w_j}. \tag{4}$$

The main functions realized by the interactive teaching interface module design algorithm are: system initialization and loading of textures, materials, and other assets; display of the initial interface and buttons; clicking a button to enter the corresponding function interface; initialization of the function interface and loading of its materials; display of the function interface and its buttons; clicking a button under the function interface to switch back to the initial interface; and, after the system terminates, destroying the data and releasing the memory. When editing VRML or X3D teaching scene graph files, X3D-Edit provides a simplified and error-free way of creating and editing; it customizes context-sensitive tooltips through XML files, providing a summary of each node and attribute to facilitate authoring and editing of music-assisted interactive teaching scene graphs.
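For orientation, the X3D scene graphs that X3D-Edit manipulates are plain XML; a minimal, hand-written fragment (illustrative only, not taken from the system) encoding a single shape looks like this:

```xml
<X3D profile='Interchange' version='3.0'>
  <Scene>
    <!-- One Shape node: appearance plus geometry -->
    <Shape>
      <Appearance>
        <Material diffuseColor='1 1 0'/>
      </Appearance>
      <Box size='2 1 1'/>
    </Shape>
  </Scene>
</X3D>
```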
When students use virtual reality technology, they can concentrate faster and for longer, the number of questions asked of teachers is relatively reduced, and the teachers’ workload is reduced; the interactive teaching effect is viewed unanimously as positive. In this virtual environment, students are not students in the classroom in the biological sense but refer to those who need to bring learning information and resources into the virtual teaching environment and then participate in learning activities through their avatars. In the virtual music-assisted interactive teaching scene, the characters cannot pass through anything outside the boundary scope of Figure 3, such as the teaching building and the teaching facilities. Because of the interaction between the system and the user and the movement of the character, the character and a virtual stationary object may collide. In order to maintain the authenticity of the virtual auxiliary interaction system, it is necessary to detect possible collisions in time and design the corresponding collision response; otherwise, the character will pass through objects, which affects the realism of the auxiliary interactive system and the user’s immersion. The grid contains 255 × 255 horizontal and vertical cells, and the value of each cell ranges from 0 to 255. When a cell’s value is 255, it turns yellow, indicating that the cell is forbidden to be crossed. These numerical values are used to plan the limited boundary of the character’s movement and to calculate the straight-line distance between the character and the boundary in real time: when the distance is less than 0, the character’s progress is stopped, indicating that the character has reached the boundary. The function that bounds the character’s motion is setBoundary().

Figure 3: Distribution of deep fit of music-assisted interactive teaching.

### 3.4. Parameter Configuration of Music-Assisted Interactive Teaching Scene

For the three-dimensional teaching scenes and objects displayed on the Internet, VRML is described using a file format standard, and any editor can be used as a writing tool. We analyze and discuss the role of virtual reality technology in teaching and its applications, listen carefully to experts’ opinions and adopt them, and provide multiple perspectives to promote the research of this paper and give it a more realistic basis. To enter the virtual reality world, users only need to access it through a VRML browser and download the music elements and the audio and video resources placed on the server. In addition, VRML’s rendering of musical elements is real-time as the user interacts with objects in the virtual world. VRML provides 6 + 1 degrees of freedom, that is, rotation and movement in three directions each, as well as hyperlinks (Anchors) to other 3D spaces. Users can intuitively feel the impact and changes of their own behavior on the virtual world:

$$\mathrm{Der}(\mathrm{timers})=\frac{t_{c,c-1}}{c},\qquad x\in\{0,1,2,\cdots,c-1\}\times\{0,1,2,\cdots,c-1\}\times\{0,1,2,\cdots,c-1\}. \tag{5}$$

The user domain refers to the entire natural space that programmers use to define sketches. The listing below defines a yellow cube with length 2 and width and height 1. The main attribute of the Box node is size, which determines the size in the length (x direction), height (y direction), and width (z direction).
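The listing itself is not preserved in the extracted text; a plausible VRML97 reconstruction (not the authors' original code) of the described yellow cube is:

```vrml
#VRML V2.0 utf8
Shape {
  appearance Appearance {
    material Material { diffuseColor 1 1 0 }  # yellow
  }
  geometry Box { size 2 1 1 }  # x (length) = 2, y (height) = 1, z (width) = 1
}
```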
It is better to use an English folder name to save the file, because in a folder (path) with a non-English name you can only browse and run files that do not use Java classes or music-assisted interactive teaching scenes; otherwise, you cannot even see the X3D music-assisted interactive teaching scene. The windows can be nested; that is, a second-layer window can be defined in the first-layer window, an (i+1)-layer window can be defined in the i-layer window, and so on. In some cases, users can also define a circular window with a center and a radius, or a polygonal window represented by a boundary, as needed.
## 4. Discussion

### 4.1. Extraction of Virtual Reality Factors

A VRML browser can, in theory, process multiple virtual-reality factor objects distributed across the Internet, raising the quality of the musical elements and simulations when performance allows and reducing their quality in the case of poor performance. The “multiple levels of detail” (LOD) node provided by VRML can simulate what the human eye observes in reality: when an object is relatively far away, its details are not clear, but as the distance to the object decreases, its level of detail becomes clearer and clearer. The selection criterion for the level of detail in VRML is the distance from the viewpoint to the geometry, and the system selects different LOD levels according to that distance. At the beginning of the formal experiment, a questionnaire survey on the application of virtual reality technology in teaching was conducted and distributed through the network; after the experiment, a questionnaire survey was conducted among the students in the experimental group. When the distance is greater than or equal to a certain value, as in Figure 4, the observer is moved to the next level of detail.

Figure 4: Extraction of virtual reality network environment factors.

In most cases the client and the server run on the same machine, but they can also be used in a network environment, so OpenGL has network transparency, and the music element function library is encapsulated in the Win32 graphics device interface (GDI) dynamic link library.
Like in DLL, the music element library of OpenGL is also encapsulated in a dynamic link library OpenGL DLL. The call of the OpenGL function released from the client application program is firstly opened by OpenGL DLL processing, and then sent to the server by winsrv. The DLL is further processed, and then passed to the win32 device driver interface (DDI), and finally, the processed music element command is sent to the video display driver. ### 4.2. Simulation of Music-Assisted Interactive Teaching System The core part of the interactive teaching system is designed to complete all the main functions of the system, including the premusic-assisted interactive teaching scene processing method, the system interface module, the role-assisted interaction module, the automatic auxiliary interaction module, the sound effect module and the release after the system is completed. After the development of this virtual assistant interaction system is completed, it is only a primary version, and more functions will be expanded as needed. For example, the system can be imported into a tablet computer and adopted the touch screen operation mode; a database module is added to record the key of each assistant interaction, add multirole simultaneous auxiliary interactive interaction function. The powerful secondary development function of Virtools can fully meet the needs of this system expansion function. Using the index scoring method and the chart evaluation method, with a full score of 100 points, the behavior of the testee is divided into several indicators, and each indicator is assigned a certain score, total score.The system abstracts each geometric body into a music element class according to the idea of object-oriented programming, and each interactive teaching class encapsulates the unique attributes and behaviors of the corresponding body. The music element class mainly completes the establishment of the music element model, the generation of wireframes, entity maps, material maps, and texture maps. In addition, music element transformation methods (translation, rotation, and scaling) are provided. First, the object of the simulation is a certain floor of a music-assisted interactive teaching building, which includes multiple areas such as multimedia classrooms, offices, line editing rooms, non-editing rooms, and recording keys. According to the actual size. Using AUTOCAD2004 to draw the architectural plan, and use the file-import command to import the dwg text drawing into 3DSMAX, and use the extrude modifier to build the wall after the line is drawn. The modeling process uses basic geometry for modeling.Any area less than or equal to the screen area is called the view area. The view area can be defined by the device coordinates in the screen area. If the music elements in the window area selected by the user are to be displayed in the view area, they must also be converted into the coordinate values in the device coordinate system by the program. The view area is generally defined as a rectangle, which is defined by the coordinates of the lower left corner and the upper right corner, or by the coordinates of the lower left corner and the x and y directions of the view area: the length of the edge. The view area can be nested, and the nested level of the music element processing software is fixed. For graphics and polygon windows, users can also define circular and polygonal view areas for different applications. ### 4.3. 
Example Application and Analysis For the design of interactive teaching virtual comics, the most convenient way is to generate EXE executable files, but EXE files cannot be directly generated by Virtools, and the external VirtoolsMakeExe.exe and CustomerPlayer.exe files are needed to achieve this, it may run the VirtoolsMakeExe.exe file, the following setting interface will pop up, click the designated button corresponding to the Virtools project file option, select the CMO file we have completed, and set the resolution in the window setting option. Here, we use1366∗768, click “Generate” button; then, the generated EXE file can be run independently of the Virtools environment. The base class of music elements is used to implement operations common to all music elements, such as translation, rotation, and zoom operations of music elements, and shapeless material settings. It also contains two virtual functions: the music element drawing function Draw() and the music element storage function Save(). Then, we use mathematical methods to establish mathematical models such as charts, analyze various data and materials obtained from the investigation and scoring through mathematical statistics, and finally form quantitative conclusions.The functions of each voxel class are roughly the same, mainly completing voxel modeling, generating various forms of 3D models, and storing the unique data of each geometric shape. For models that appear repeatedly in multiple places, if a single modeling method is used, the code of the file will be greatly increased, and the real-time performance of the virtual music-assisted interactive teaching scene will be greatly reduced. Interactive teaching techniques are used here to apply transparent textures directly to the established 2D surfaces, or onto criss-cross patches. When setting, let the 2D object automatically adjust the normal direction according to the position of the camera.From the perspective of self-rating items, the average score was 4.68 points higher than that of the control group. Using SPSS software fort-test analysis, P=0.002<0.05 and P<0.01, it can be seen that there is a very significant difference. Vizx3D includes a complete 3D modeling function and the interactive editing function of music elements as shown in Figure 5. Vizx3D supports the establishment and editing of H-anim, NURBS, texture music-assisted interactive teaching scenes, and supports XML or VRML97 encoded X3D files. Through Vizx3D, users can intuitively create music-assisted interactive teaching scenes, create animations with key frames, and create music-assisted interactive teaching scene interactions through its unique system. It can output both wrl files and x3dv files.Figure 5 Interactive teaching virtual display data editing.In the tab, you can set the resolution of the virtual auxiliary interactive system. The resolution is related to the size of the design window of this system and the display effect of the published work after the system design is completed. The resolution selected by this system is1366∗768. After adding the lights, the music-assisted interactive teaching scene is bright, but all the models seem to have no textures on them. At this time, the models are pasted with shadow maps. Go to the Meshes settings tab, double-click the mesh under Meshes to open it, and mix it, change the mode to overlay DestColor. At this time, the texture will come out, and the color will be darker. 
Next, we enter the corresponding material level, brighten its color, and adjust all models to the ideal state. The results show that the students in the experimental group improved their learning and understanding of the teaching content, were more satisfied with their self-expression, improved their self-confidence and interest in dancing, and gave themselves higher scores. If the texture of a model is transparent, change the mode to mask in the material. If it is a double-sided texture, check the Both Sided box below.
## 5. Conclusion

This paper starts from the role of virtual technology in the auxiliary interactive music teaching environment, breaks through the traditional form of expression so that technology and auxiliary interaction complement each other in the trend of digital information, and expresses a unique auxiliary interaction personality.
The main research contents of this paper include the theory of virtual reality technology in the auxiliary interactive music teaching environment, the auxiliary interactive expression of virtual reality in that environment, the interactive presentation of virtual reality technology and auxiliary interaction in that environment, auxiliary interactive music, and the realization of works combining virtual reality technology and auxiliary interaction in the teaching environment. To meet the requirements of real-time rendering of music elements during real-time interaction, scientific computing visualization, and dynamic simulation, reducing the number of triangle faces of the rendered entities is a key technical issue for real-time rendering of music-assisted interactive teaching scenes, both in the outdoor areas of the training center and inside the training room. The article therefore applies LOD technology with a real-time generation method: objects in the virtual music-assisted interactive teaching scene are set within a certain range so that they show different levels of detail at different viewing distances, and the browser can switch automatically between the represented levels of detail. The research on virtual reality technology theory and virtual reality-assisted interactive expression in the auxiliary interactive music teaching environment guides the creation of virtual design based on theory, finds breakthroughs in that creation, and summarizes new theoretical knowledge and value achievements. The development of virtual reality technology should not be hampered by equipment; increasingly convenient forms can also bring the immersive experience from reality to virtuality. --- *Source: 1007954-2022-06-11.xml*
2022
# Influence of Process Parameters and Convective Heat Transfer on Thermophysical Properties of SiO2 Nanofluids by Multiobjective Function Analysis (DFA)

**Authors:** R. Suprabha; C. R. Mahesha; C. E. Nanjundappa
**Journal:** Journal of Nanomaterials (2023)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2023/1008046

---

## Abstract

High-performance cooling systems are required to improve efficiency in solar, heat-exchanger, and chemical-industry applications. Metal-oxide-based nanofluids provide better thermal properties for heat-transfer mechanisms. Therefore, this study focused on nanosilicon dioxide material to produce nanofluids blended with a water-base medium by the two-step technique. Scanning electron microscopy was used to confirm the presence of silicon dioxide nanoparticles in the procured material. In the two-step method, input aspects such as weight fraction of silicon dioxide (1.5–4.5 wt%), particle size (10–30 nm), pH range of water (5–9), and sonication process time (2–4 hr) were considered. Thermal conductivity, specific heat capacity, and viscosity were selected as the thermophysical property outcomes. Desirability techniques were implemented to identify the best optimal input parameters for the nanofluid processing. From the desirability outcomes, the processed nanosilicon dioxide fluid at the respective input parameters gave 0.9623 W/mK thermal conductivity, 688 J/kg K specific heat capacity, and 0.00162 Cp viscosity, respectively. The heat-transfer coefficient was successfully identified for the processed nanosilicon dioxide fluids, and the Nusselt number and Reynolds number were obtained from the respective heat-transfer coefficients.

---

## Body

## 1. Introduction

In recent decades, the energy crisis has intensified with the growing requirements of energy applications such as electronic products, the chemical industry, power plant engineering, and the food and beverage industries [1]. Progress in these fields is strongly driven by requirements such as greater efficiency, multimode function, lower weight, and compact shape [2]. Enhanced heat-transfer media, or thermic liquids, therefore have significant scope in thermal engineering fields [3]. Thermal fluids play an essential role in process and system-maintenance industries and are employed to keep equipment at an ideal running temperature [4]. To improve thermal efficiency, nanofluids are a rich medium for heat-transfer liquids [5]: they are colloidal dispersions of solid nanoparticles, whose sizes must lie in the range of 1–100 nm, in a base liquid [6]. Selecting an appropriate nanofluid requires some unique characteristics: compatibility, availability, good thermal attributes, chemical stability, and low price [7]. Conventional heat-transfer fluids are insufficient for exchanging heat because of their poorer thermophysical properties; overcoming this issue in thermal and heat-transfer management by identifying an effective nanofluid is the central objective of this research [8]. Nanofluids are therefore utilized to increase heat transfer through enhanced thermophysical properties. Hence, the efficiency of cooling systems depends strongly on these thermophysical properties.
Researchers have therefore been highly engaged in finding better thermal conductivity through nanofluids [9]. Nanoparticles of metal oxides are dispersed at different concentrations to obtain nanofluids by conventional techniques. The various nanomaterials include graphite, carbon nanotubes, metal oxides, graphene oxides, metals, diamonds, carbides, graphenes, and nitrides [10]. In metal-based nanofluid formulations, the thermal conductivity is typically lower than expected from the base metals, and settling issues develop during nanofluid preparation [11]. In comparison with metals, metal-oxide-based nanomaterials show better thermophysical properties. A well-formulated nanofluid is very effective in producing maximum thermal conductivity without an excessive penalty in pressure drop [12]. Therefore, improving the thermophysical properties enhances the heat-transfer mechanism. A hybrid nanofluid is composed of two or more metal oxide nanoparticles suspended in a traditional base liquid, whereas single (mono) nanofluids are prepared from one metal oxide nanoparticle dispersed in the base fluid [13]. Base liquids such as glycols with water, glycols without water, oils, and water have been used for nanofluid formulation [14]. Among these, water is the most widely used coolant, and many investigations have focused on water-based nanofluids [15]. The overview of the experimental plan is displayed in Figure 1.

Figure 1 Experimental layout.

Abu-Hamdeh et al. [16] investigated SiO2 nanofluids in various preparations for heat exchangers. The main objective was to improve the efficiency of solar power plant systems, and the efficiency was increased by 25% when using SiO2. Singh et al. [17] analyzed various nanofluids, such as aluminum oxide, silicon dioxide, and magnesium oxide, to identify their thermal behaviors, using a latent heat thermal energy storage system; among the metal oxides, SiO2 provided the best efficiency. Jang et al. [18] studied nano-SiO2 coatings on steel in concrete technology to improve tensile and interfacial bonding. A comparison between plain steel and nano-SiO2-coated steel showed improved corrosion behavior, and the coated SiO2 steel increased the bonding strength by 50%. Yu et al. [19] increased thermophysical properties in thermal storage applications through the influence of SiO2 nanofluids; the specific heat was enhanced to 2.58 times that of the eutectic nitrate. Iqbal et al. [20] addressed the performance of engine-oil flow using nanosilicon carbide and silicon dioxide powders suspended in engine oil as the base fluid, with heat transfer driven by the imposed temperature conditions. Esfe et al. [21] studied the improvement of viscosity in engine oil with the help of multiwall carbon nanotubes and silicon dioxide nanoparticles; the engine performance was improved by 42.53% in the statistical analysis, and response surface methodology was used to forecast the experimental values. Awais et al. [22] enhanced the thermal conductivity of kerosene oil with the help of silicon dioxide and titanium dioxide, using the Riga wedge technique to create the fluid formulations; the SiO2 nanofluid provided better thermal conductivity.
In this research, the two-step technique was implemented to produce the nanofluids using sonication and a mechanical stirring process. Even dispersion of the nanoparticles is very important for enhancing interfacial bonding and achieving maximum stability in the base fluid, and this was accomplished with the two-step technique [23]. The stability is strongly influenced by the frequency, temperature, and time of the sonication step. Increasing the nanoparticle content and its volume concentration improves the thermal conductivity of the prepared nanofluids. Based on the detailed literature review, SiO2 was selected as the nanoparticle material for producing nanofluids in a water-base fluid by the two-step technique. This work aims to investigate the optimum process parameters for nanosilicon dioxide materials to produce nanofluids blended with a water-base medium by the two-step technique.

## 2. Material and Preparation of Nanofluids

In this study, silicon dioxide was chosen as the metal oxide and water (H2O) as the base fluid in which it is suspended. A particle size analyzer was used to check the nanoparticle size, and the selected sizes were 10–30 nm. The silicon dioxide was purchased from Coimbatore Metal Mart. The thermophysical properties of SiO2 and water reported for this study were as follows: densities of 2,220 and 3,885 kg/m3, specific heat capacities of 703 and 731 J/kg K, and thermal conductivities of 1.2 and 40 W/mK, respectively. SEM analysis was employed to confirm the presence of the particles, as displayed in Figure 2; from Figure 2, it is clear that silicon dioxide particles were observed.

Figure 2 SiO2 SEM micrograph.

An appropriate weighing machine (from the manufacturer Nitiraj) was used to measure the weight of the SiO2 particles. The two-step technique was used for the nanofluid preparation, combining mechanical stirring and ultrasonication to break up agglomerated particles in the working base fluid. A 0.5 ml measuring flask was used to evaluate the required volume concentration. An amplitude of 123 µm, a power of 130 W, and a frequency of 40 kHz were maintained in the ultrasonic mixer. To improve the efficiency of heat-transfer systems, the influencing properties considered are specific heat capacity (SHC), thermal conductivity, pH, and dynamic viscosity. These properties were measured with a differential scanning calorimeter, a thermal property meter (KD2-Pro), a pH meter, and a Brookfield viscometer, respectively. The volume concentration is a significant aspect for quantifying the particles blended with the base (working) fluid so as to maintain the thermophysical properties, and the forecasted thermal conductivity of the prepared nanofluids was not uniform [24]. Various techniques, such as the transient hot wire (THW) method, temperature oscillation, steady-state parallel plate, and optical beam deflection, can be used to verify the thermal conductivity of nanofluids. Among these, THW is the method best suited to determining the thermal conductivity of the composed SiO2 nanofluids. Clustering and high temperature strongly affect the measured thermal conductivity [25]. Similarly, the pH value was fine-tuned to improve the even dispersion of the nanoparticles in the working fluid during processing. Finally, the thermal conductivity measurement was carried out with the KD2-Pro thermal meter, which is based on Fourier's law of heat conduction [26].
Normally, a working base fluid with lower viscosity gives better thermal conductivity behavior, and the viscosity is related to the pumping force and thus affects the pressure drop. This test was conducted with a Brookfield (cone-and-plate) viscometer. The pH and SHC were measured with a pH meter and a differential scanning calorimeter, respectively. Meanwhile, the thermal conductivity is enhanced at moderate pH values [27]. The weight fraction of SiO2, particle size, pH range, and sonication process time are the input factors, which are displayed in Table 1.

Table 1 Input factors for preparing nanofluids.

| Process runs | Weight fraction of SiO2 (vol.%) | SiO2 reinforcement size (nm) | Water pH range | Sonication processing time (hr) |
|---|---|---|---|---|
| 1 | 1.5 | 10 | 5 | 2 |
| 2 | 3 | 20 | 7 | 3 |
| 3 | 4.5 | 30 | 9 | 4 |

## 3. Results and Discussion

### 3.1. Desirability Function Analysis on Processed Nanofluid Parameters

Based on the processing procedures described above, the thermophysical properties are obtained for each set of processing parameters. The processing parameters need to be validated in order to enhance the physical properties, so desirability approaches were used to identify the optimal process parameters. The composed nanofluids should satisfy the conditions of maximum thermal conductivity together with minimum SHC and viscosity [28]. The influencing process parameters are the weight fraction of SiO2, particle size, pH range, and sonication process time, and the outcomes are thermal conductivity, SHC, and viscosity. The Taguchi L9 design was utilized to frame the processing levels [29]. The input aspects and their outcomes are displayed in Table 2 as per the L9 Taguchi technique.

Table 2 Outcomes of thermal properties with L9 input.

| Runs | Weight fraction of SiO2 (%) | SiO2 size (nm) | pH range (H2O) | Sonication processing time (hr) | TC (W/mK) | SHC (J/kg K) | V (Cp) |
|---|---|---|---|---|---|---|---|
| 1 | 2 | 10 | 5 | 0.2 | 0.9101 | 686 | 0.00145 |
| 2 | 3 | 10 | 7 | 0.4 | 0.9213 | 679 | 0.00168 |
| 3 | 4 | 10 | 9 | 0.6 | 0.9528 | 681 | 0.00194 |
| 4 | 2 | 20 | 9 | 0.4 | 0.9122 | 690 | 0.00159 |
| 5 | 3 | 20 | 5 | 0.6 | 0.9357 | 695 | 0.00174 |
| 6 | 4 | 20 | 7 | 0.2 | 0.9212 | 689 | 0.00152 |
| 7 | 2 | 30 | 7 | 0.6 | 0.9185 | 696 | 0.00146 |
| 8 | 3 | 30 | 9 | 0.2 | 0.9119 | 691 | 0.00151 |
| 9 | 4 | 30 | 5 | 0.4 | 0.9623 | 688 | 0.00162 |

### 3.2. Contour Analysis of Thermal Properties with Input Factors

#### 3.2.1. Contour Plots of Weight Fraction and SiO2 Particle Size on Thermal Properties

The contour plots of SiO2 weight fraction and particle size against thermal conductivity, SHC, and viscosity are displayed in Figures 3(a)–3(c), respectively. From the figures, the thermal conductivity increased as the weight fraction increased from 3% to 4% and the particle size increased from 20 to 30 nm. Similarly, SHC was enhanced as the weight fraction increased from 3% to 4% at the low-level particle size of 10 nm, and viscosity increased at low weight fraction as the particle size increased from 10 to 30 nm.

Figure 3 (a) Weight fraction and SiO2 particle size on TC; (b) weight fraction and SiO2 particle size on SHC; (c) weight fraction and SiO2 particle size on viscosity.

#### 3.2.2. Contour Plots of SiO2 Particle Size and pH Range on Thermal Properties

The contour plots of SiO2 particle size and pH range against thermal conductivity, SHC, and viscosity are displayed in Figures 4(a)–4(c), respectively. From the figures, increasing the SiO2 particle size and raising the pH from 7 to 9 improved the thermal conductivity. Meanwhile, decreasing the SiO2 particle size to 10 nm and raising the pH from 7 to 9 enhanced the SHC.
The viscosity increased as the particle size increased from 10 to 30 nm and the pH rose from 7 to 9.

Figure 4 (a) SiO2 particle size and pH range of water on TC; (b) SiO2 particle size and pH range of water on SHC; (c) SiO2 particle size and pH range of water on viscosity.

#### 3.2.3. Contour Plots of pH Range of Water and Sonication Processing Time on Thermal Properties

Figures 5(a)–5(c) display the contour plots of water pH range and sonication processing time against thermal conductivity, SHC, and viscosity, respectively. From the figures, increasing the pH from 5 to 9 and the processing time from the low level (0.2 hr) to the medium level (0.4 hr) enhanced the thermal conductivity. The SHC improved on increasing the pH from 7 to 9 at the low-level processing time. The viscosity was higher on increasing the pH from 5 to 9 and the processing time from 0.4 to 0.6 hr.

Figure 5 (a) pH range of water and processing time on TC; (b) pH range of water and processing time on SHC; (c) pH range of water and processing time on viscosity.

#### 3.2.4. Contour Plots of Sonication Processing Time and Weight Fraction of SiO2 on Thermal Properties

Figures 6(a)–6(c) show the contour plots of sonication processing time and weight fraction against thermal conductivity, SHC, and viscosity, respectively. From the figures, the thermal conductivity improved at the high level of processing time (0.6 hr) as the weight fraction increased from the low level (2.0%) to the moderate level (3.0%). The SHC was enhanced from the low level (0.2 hr) to the moderate level (0.4 hr) of processing time with the weight fraction between 2.0% and 3.0%. Finally, the viscosity was enhanced at the low level of processing time and the low level of weight fraction.

Figure 6 (a) Processing time and weight fraction of SiO2 on TC; (b) processing time and weight fraction of SiO2 on SHC; (c) processing time and weight fraction of SiO2 on viscosity.

#### 3.2.5. Desirability Analysis on the Thermal Properties of Nanofluid SiO2

In this study, the desirability approach was applied to the thermal properties to find the optimal process parameters. The input factors were first arranged in the Taguchi L9 array as discussed in the Materials and Methods section. The main objective of the desirability approach is to convert the multiple responses into a single response for better interpretation. In the desirability analysis, thermal conductivity was treated as a larger-the-better response, while SHC and viscosity were treated as smaller-the-better responses. The normalized signal-to-noise ratios of thermal conductivity, SHC, and viscosity, the corresponding desirability coefficients, and the composite desirability values are listed in Table 3. A weight of 0.5 was used in the ranking procedure, and the experimental runs show that the 9th run achieved the best rank. The optimum process parameters were found to be a weight fraction of 4%, an SiO2 particle size of 30 nm, a pH of 5, and 0.4 hr of sonication processing. The ranking of all processing runs is displayed in Figure 7. At these settings, the SiO2 nanofluid achieved the best combination of thermal conductivity, SHC, and viscosity.

Figure 7 Nanofluids SiO2 and DFA rank.

Table 3 Desirability values with their input factors and thermal outcomes.
| Run | Normalized S/N: TC | Normalized S/N: SHC | Normalized S/N: V | DFA coefficient: TC | DFA coefficient: SHC | DFA coefficient: V | DFA | Grade |
|---|---|---|---|---|---|---|---|---|
| 1 | 1.0000 | −0.5882 | −1.0000 | 0.0000 | 1.5882 | 2.0000 | 0.0000 | 9 |
| 2 | 0.7854 | −1.0000 | −0.5306 | 0.2146 | 2.0000 | 1.5306 | 0.8104 | 4 |
| 3 | 0.1820 | −0.8824 | 0.0000 | 0.8180 | 1.8824 | 1.0000 | 1.2409 | 2 |
| 4 | 0.9598 | −0.3529 | −0.7143 | 0.0402 | 1.3529 | 1.7143 | 0.3055 | 7 |
| 5 | 0.5096 | −0.0588 | −0.4082 | 0.4904 | 1.0588 | 1.4082 | 0.8551 | 3 |
| 6 | 0.7874 | −0.4118 | −0.8571 | 0.2126 | 1.4118 | 1.8571 | 0.7467 | 5 |
| 7 | 0.8391 | 0.0000 | −0.9796 | 0.1609 | 1.0000 | 1.9796 | 0.5644 | 6 |
| 8 | 0.9655 | −0.2941 | −0.8776 | 0.0345 | 1.2941 | 1.8776 | 0.2895 | 8 |
| 9 | 0.0000 | −0.4706 | −0.6531 | 1.0000 | 1.4706 | 1.6531 | 1.5592 | 1 |
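To make the ranking procedure concrete, the following minimal Python sketch reproduces the quantities in Table 3 from the Table 2 responses. The exact conventions (min–max normalization of the raw responses, a coefficient defined as 1 minus the normalized value, and a composite formed as the product of the coefficients raised to the 0.5 weight) are inferred from the tabulated values rather than stated explicitly in the text, so this should be read as a reconstruction, not the authors' own code.

```python
import numpy as np

# Responses from Table 2 (runs 1-9): thermal conductivity (W/mK, larger is
# better), specific heat capacity (J/kg K, smaller is better), and viscosity
# (Cp, smaller is better).
tc = np.array([0.9101, 0.9213, 0.9528, 0.9122, 0.9357, 0.9212, 0.9185, 0.9119, 0.9623])
shc = np.array([686.0, 679.0, 681.0, 690.0, 695.0, 689.0, 696.0, 691.0, 688.0])
visc = np.array([0.00145, 0.00168, 0.00194, 0.00159, 0.00174, 0.00152, 0.00146, 0.00151, 0.00162])

# Min-max normalization as it appears in Table 3: the larger-the-better
# response is measured as distance below its maximum, the smaller-the-better
# responses as signed distance above their maxima.
n_tc = (tc.max() - tc) / (tc.max() - tc.min())
n_shc = (shc - shc.max()) / (shc.max() - shc.min())
n_visc = (visc - visc.max()) / (visc.max() - visc.min())

# Desirability coefficients (1 - normalized value) and the composite
# desirability, combined with the 0.5 weight mentioned in the text
# (i.e., the square root of the product of the coefficients).
c_tc, c_shc, c_visc = 1.0 - n_tc, 1.0 - n_shc, 1.0 - n_visc
composite = (c_tc * c_shc * c_visc) ** 0.5

# Rank the runs: grade 1 goes to the largest composite desirability (run 9).
grade = composite.argsort()[::-1].argsort() + 1

for run, (d, g) in enumerate(zip(composite, grade), start=1):
    print(f"run {run}: composite desirability = {d:.4f}, grade = {g}")
```

Running this sketch reproduces the DFA and Grade columns of Table 3 to rounding, with run 9 ranked first, consistent with the optimum reported above.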
## 4. Convective Heat Transfer with Prepared Nanofluids through a Straight Tube

Based on the L9 array, desirability functions were used to select the best optimal process parameters. These parameters were then used to carry out convective heat-transfer tests in a straight-tube test section with specified parameters. The designed heat-transfer system was a loop: one section contained the nanofluid reservoir, nanofluid pump, flow meter, heated chamber with insulator, K-type thermocouples, and manometer, and the other section contained the cooling water reservoir, flow meter, cooling water pump, and heat exchanger. A copper tube was used for the heat-transfer test chamber, with a wall thickness of 2.5 mm, an inner diameter of 8.2 mm, and a length of 0.873 m. K-type thermocouples with 0.2°C precision were used for the wall temperature. Similarly, two T-type thermocouples of the same precision were used to estimate the bulk temperature on both sides of the section. Throughout the process, a constant heat flux condition was maintained by electrical heating from a power supply. A plate-type heat exchanger was used to hold the inlet temperature in the range of 26 ± 1°C, and the volume flow rate was measured by a flow meter with 0.02 L/min precision.

### 4.1. Data Processing

Using the mixture relations for nanofluids, and assuming thermal equilibrium between the SiO2 nanoparticles and the working fluid, the density, SHC, and thermal conductivity of each nanofluid were determined and the optimal parameters were identified.
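The mixture relations themselves are not written out in the paper; the sketch below shows the volume-fraction weighting commonly used for nanofluid density and specific heat under the thermal-equilibrium assumption, which is one plausible reading of the step described above. The SiO2 property values are those quoted in Section 2, the water values are standard room-temperature properties, and the 2 vol.% loading is illustrative only.

```python
def nanofluid_density(phi, rho_p, rho_bf):
    """Mixture density: volume-fraction weighting of particle and base fluid."""
    return phi * rho_p + (1.0 - phi) * rho_bf


def nanofluid_specific_heat(phi, rho_p, cp_p, rho_bf, cp_bf):
    """Mixture specific heat under thermal equilibrium (mass-weighted form)."""
    rho_nf = nanofluid_density(phi, rho_p, rho_bf)
    return (phi * rho_p * cp_p + (1.0 - phi) * rho_bf * cp_bf) / rho_nf


# Illustrative example: 2 vol.% SiO2 in water.
phi = 0.02
rho_sio2, cp_sio2 = 2220.0, 703.0     # SiO2 values quoted in Section 2
rho_water, cp_water = 998.0, 4182.0   # standard water properties near 25 C

print(nanofluid_density(phi, rho_sio2, rho_water))                       # kg/m3
print(nanofluid_specific_heat(phi, rho_sio2, cp_sio2, rho_water, cp_water))  # J/kg K
```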
The heat inputs Q1 and Q2 were first determined, and the total heat flux and heat-transfer coefficient were then computed from them:

$$q = \frac{Q_1 + Q_2}{2\pi D L}, \tag{1}$$

$$h(x) = \frac{q}{T_w(x) - T_b(x)}, \tag{2}$$

where q is the total heat flux, Q1 is the heat input from the heater, Q2 is the heat input to the fluid, D and L are the tube diameter and length, and Tw(x) and Tb(x) are the wall and bulk fluid temperatures at axial position x from the inlet of the heat-transfer section. The heat-transfer coefficient h was thus obtained from Equation (2) using the wall and bulk temperatures along the axial length. The Nusselt number was obtained from the heat-transfer coefficient, tube diameter, and thermal conductivity, and the Reynolds number from the fluid density, velocity, pipe diameter, and viscosity. Equations (3) and (4) give the Nusselt and Reynolds number formulas:

$$Nu = \frac{hD}{K}, \tag{3}$$

$$Re = \frac{\rho \, u \, D}{\mu}, \tag{4}$$

where K is the thermal conductivity of the fluid, ρ its density, u the fluid velocity, and μ its viscosity. From these equations, the heat-transfer coefficients and the Nusselt and Reynolds numbers were determined using the measured responses, namely thermal conductivity and viscosity, for each run of the L9 table. Table 4 lists the heat-transfer coefficient and the Nusselt and Reynolds numbers for convective heat transfer through the straight tube under the specified conditions.

Table 4 Heat-transfer coefficient, Nusselt number, and Reynolds number.

| Runs | TC (W/mK) | SHC (J/kg K) | V (Cp) | Fluid velocity (m/s) | Heat-transfer coefficient (W/m² K) | Nusselt number | Reynolds number |
|---|---|---|---|---|---|---|---|
| 1 | 0.9101 | 686 | 0.00145 | 0.36 | 1.45 | 0.013092 | 2.035862 |
| 2 | 0.9213 | 679 | 0.00168 | 0.52 | 1.55 | 0.013796 | 2.538095 |
| 3 | 0.9528 | 681 | 0.00194 | 0.81 | 1.69 | 0.014545 | 3.423711 |
| 4 | 0.9122 | 690 | 0.00159 | 0.59 | 1.85 | 0.01663 | 3.042767 |
| 5 | 0.9357 | 695 | 0.00174 | 0.458 | 1.91 | 0.016738 | 2.158391 |
| 6 | 0.9212 | 689 | 0.00152 | 0.235 | 1.44 | 0.012818 | 1.267763 |
| 7 | 0.9185 | 696 | 0.00146 | 0.67 | 1.73 | 0.015445 | 3.763014 |
| 8 | 0.9119 | 691 | 0.00151 | 0.298 | 1.57 | 0.014118 | 1.618278 |
| 9 | 0.9623 | 688 | 0.00162 | 0.369 | 1.39 | 0.011845 | 1.867778 |

Figures 8 and 9 show the Nusselt and Reynolds numbers for the various runs of the nano-SiO2 fluids. The correlations between the different process parameters and these numbers are clearly visible. The attained Reynolds numbers are well below 2,000, so the flows are in the laminar regime. From the figures, the fifth and seventh runs attained the highest values.

Figure 8 Various runs of nanofluids and Nusselt number.

Figure 9 Various runs of nanofluids and Reynolds number.
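As a concrete illustration of Equations (1)–(4), the short sketch below evaluates the total heat flux, heat-transfer coefficient, Nusselt number, and Reynolds number for one operating point. The tube geometry is taken from Section 4; the heater inputs, temperatures, and fluid properties are placeholder values for demonstration, not measurements reported in the paper.

```python
import math

# Tube geometry from Section 4 (inner diameter and heated length).
D = 8.2e-3      # m
L = 0.873       # m

def total_heat_flux(q1, q2):
    """Equation (1): combined heat input divided by 2*pi*D*L."""
    return (q1 + q2) / (2.0 * math.pi * D * L)

def heat_transfer_coefficient(q, t_wall, t_bulk):
    """Equation (2): local coefficient from wall and bulk temperatures."""
    return q / (t_wall - t_bulk)

def nusselt_number(h, k):
    """Equation (3): Nu = h * D / K."""
    return h * D / k

def reynolds_number(rho, velocity, mu):
    """Equation (4): Re = rho * u * D / mu."""
    return rho * velocity * D / mu

# Placeholder operating point (illustrative values only).
q = total_heat_flux(q1=150.0, q2=140.0)               # W -> W/m2
h = heat_transfer_coefficient(q, t_wall=38.0, t_bulk=30.0)
print(f"q  = {q:.1f} W/m2")
print(f"h  = {h:.1f} W/m2 K")
print(f"Nu = {nusselt_number(h, k=0.9623):.2f}")
print(f"Re = {reynolds_number(rho=1000.0, velocity=0.369, mu=1.62e-3):.1f}")
```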
## 5. Conclusion

(1) In this investigation, SiO2 nanofluids were successfully prepared with the two-step technique.
(2) The desirability function approach was applied effectively to improve the thermal properties through appropriate process parameters.
(3) The weight fraction of SiO2, particle size, pH range of water, and sonication processing duration were the input parameters that most influenced the thermal properties.
(4) The contour analysis between the input parameters of the two-step method and the outcomes, namely thermal conductivity, SHC, and viscosity, was detailed in the analysis.
(5) The superior thermophysical properties were 0.9623 W/mK for thermal conductivity, 688 J/kg K for SHC, and 0.00162 Cp for viscosity, respectively.
(6) The heat-transfer coefficient was successfully determined from the total heat flux and the axial dimensions, and the designed heat-transfer system was effectively employed with the selected design parameters.
(7) In further studies, the heat-transfer section will be expanded and more thermocouples will be added to the planned design. --- *Source: 1008046-2023-01-27.xml*
1008046-2023-01-27_1008046-2023-01-27.md
37,315
Influence of Process Parameters and Convective Heat Transfer on Thermophysical Properties of SiO2 Nanofluids by Multiobjective Function Analysis (DFA)
R. Suprabha; C. R. Mahesha; C. E. Nanjundappa
Journal of Nanomaterials (2023)
Engineering & Technology
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2023/1008046
1008046-2023-01-27.xml
--- ## Abstract From the requirements, high-performance cooling systems in the applications of solar, heat exchangers, and chemical industries are needed to improve efficiency. The metal-oxides-based nanofluids composed the better thermal properties with exchange of heat-transfer mechanisms. Therefore, this study mainly focused on nanosilicon dioxide materials to produce the nanofluids with blending of water-base medium by two-step techniques. The scanning electron microscope was used to analyze the presence of silicon dioxide nanoparticles from the procured material. During the two-step method input aspects like weight fraction of silicon dioxide (1.5–4.5 wt%), particle sizes (10–30µm), pH range of water (5–9), and sonication process time (2–4 hr) were considered. The outcomes like thermal conductivity, specific heat capacity, and viscosity were selected as thermal physical properties. The desirability techniques were implemented to identify the best optimal input parameters from the nanofluid processing. From the desirability outcomes, the processed nanosilicon dioxide fluids with respective input parameters were 0.9623 W/mK for thermal conductivity, 688 J/kg K forspecific heat capacity, and 0.00162 for viscosity, respectively. The heat-transfer coefficient was successfully identified with processed nanosilicon dioxide fluids, and the Nusselt number and Reynolds number were attained with respective heat-transfer coefficients. --- ## Body ## 1. Introduction In the last decades, energy crisis is more consumed with requirements of energy applications such as electronic products, chemical industry, power plant engineering, food and beverage industries [1]. The subsequent achievements are highly influenced by the requirements like greater efficiency, multimode function, less weight, and compact shape [2]. The enhanced superior heat-transfer medium, or liquid, and the thermic liquid have significant scopes in the thermal-related engineering fields [3]. Thermal fluids have played an essential role in the process and system-based maintaining industries and employed with ideal running temperature [4]. Therefore, to improve the mechanical thermal efficiency, nanofluids are rich medium for heat-transfer liquid [5]. Due to this, it has secured colloidal scattering as compared with the solid-based nanoparticles; these particle sizes must be in the range of 1–100 nm during liquid process [6].Some unique requirement is needed to select appropriate nanofluids. Those characteristics are compatibility, availability, thermal attributes, stability in chemical process, and less in price [7]. The conventional heat-transfer technique is insufficient to exchange the heat resulting in less thermophysical properties. To overcome this issue in the thermal- and heat-transfer managing led to identification of the effective nanofluid, and it is the most admired objective in this research [8]. Therefore, nanofluid is utilized to increase heat transfer and enhanced thermophysical properties. Hence, the thermalophysical properties fully depends up on the efficiency of cooling systems. So the researchers were highly involved in finding the better thermal conductivity in the presence of nanofluids [9]. The medium of nanoparticle with metal oxides have different concentrations to achieve the nanofluids by the conventional technique. The different nanomaterials are graphite, carbon nanotubes, metal oxides, graphene oxides, metals, diamonds, carbides, graphenes, and nitrides [10]. 
Normally in the metal-based nano-fluids for nanoformulations, the thermal conductivity is lesser when compared with the base metals, and settling issues were developed during the nanofluid formulations [11]. In comparison with metals, metal-oxide-based nanomaterials showed better thermophysical properties. Formulation of nanofluid is very effective to produce maximum thermal conductivity and never compensates the pressure drop [12]. Therefore, the increasing of thermophysical properties enhances the heat-transfer mechanism. The hybrid nanofluid is composed between two or more than two metal oxide nanoparticles when suspended with traditional base liquids. Single- or mononano fluids were prepared from one nanoparticle of metal oxide dispersed in the base fluids [13]. Many techniques like glycols with water, glycols without water, and oils and water were used to compose the nanofluid formulation [14]. Among these methods, water was the majorly used to act as a coolant, and various investigations are focused more on water-based nanofluids by various authors [15]. The overview of experimental plan is displayed in Figure 1.Figure 1 Experimental layout.Abu-Hamdeh et al. [16] investigated the nanofluid of SiO2 in various preparations in the heat exchangers. The main objective was to improve the efficiency in the solar power plant systems. It is concluded that the efficiency was increased at 25% when using SiO2. Singh et al. [17] analyzed the various nanofluids like aluminum oxide, silicon dioxide and magnesium oxide to identify the thermal behaviors. The latent heat thermal energy storage system was used to complete the investigations properly. Among the metal oxides, SiO2 provided better efficiency in this study. Jang et al. [18] studied the nanocoated SiO2 on the steel in the concrete technology to improve the tensile and interfacial bonding. A comparision between the plain steel and coated nano-SiO2 steel to improved the corrosion rate. Finally, the coated SiO2 steel increased the bonding strength at 50%. Yu et al. [19] increased thermophysical properties in the thermal storage applications by the influence of SiO2 nanofluids. The specific heat was enhanced at 2.58 times than the eutectic nitrate. Iqbal et al. [20] addressed the performance of engine-oil flow by using the nanosilicon carbide and silicon dioxide powders. These two nanopowders were suspended in the base fluid of engine oil and, in addition to that, the heat mechanism was migrated through the temperature conditions. Esfe et al. [21] studied the improvement of viscosity in the engine oil with help of multiwall carbon nanotubes and silicon dioxide nanoparticles. The engine performance was improved by 42.53% in the statistical analysis. The response surface methodology was used to forecast the experimental values. Awais et al. [22] enhanced the thermal conductivity in the kerosene oil with help of silicon dioxide and titanium dioxide. The Riga wedge technique was used to create the fluid formulations. SiO2 nanofluid provides better thermal conductivity. In this research, two-step technique was implemented to produce the nanofluids with utilization of sonication and mechanical stirring process. The even dispersion of nanoparticles is very significant to enhance the interfacial bonding with maximum stability in the base fluids and was accomplished in the two-step technique [23]. These stability controls were fully influenced by the frequency, temperature, and time of sonication method. 
The rising of nanoparticles and its concentration volumes improves the thermal conductivity in the prepared nanofluids. Based on the detail literature reviews, SiO2 was considered for nanoparticle material and to produce the nanofluids along with water-base fluid by two-step technique. This research work aims to investigate the optimum process parameter of nano silicon dioxide materials to produce the nanofluids with the blending of water-base medium by two-step techniques. ## 2. Material and Preparation of Nanofluids Aspects In this study, silicon dioxide has been chosen as a base metal oxide element with water (H2O). It is suspended in base fluids. The particle size analyzer was used to check the size of the nanoparticle and selected sizes were 10–30 nm. The silicon dioxide was purchased at Coimbatore Metal Mart. The thermophysical properties of SiO2 and water were as follows: 2,220 and 3,885 kg/m3 densities, 703 and 731 J/kg K specific heat capacities, 1.2 and 40 W/mK thermal conductivities, respectively. The SEM analysis was employed to identify the presence of particle elements which is displayed in Figure 2. From Figure 2, it is clear that the silicon dioxide particles were observed.Figure 2 SiO2 SEM micrograph.The appropriate weight machine was used to measure the weight of SiO2 particles from the manufacturers of Nitiraj. The two-step technique was used to conduct the nanofluid preparation accompanying the mechanical stirring and ultrasonication process to vanish the agglomerated particles in the working base fluid. To evaluate the essential volume concentration, 0.5 ml measuring flask was used. The parameters like amplitude (123 µm), 130 W and 40 kHz power was maintained in the ultrasonic mixer.In this study, it is required to consider, by technically improving the efficiency of heat transfer systems, that the influencing properties are specific heat capacity (SHC), thermal conductivity, pH range, viscosity, and dynamic viscosity. These properties were measured with calorimeter with differential scanning, thermal meter (KD2-Pro), pH meter, and viscometer (Brookfield), respectively. Before that the volume concentrations are significant aspects to measure the quantity of particles blended with base fluid or working fluid to maintain the thermophysical properties. The forecasted thermal conductivity of prepared nanofluids was not uniform [24].Various techniques like transient hot wire (TWH) method, temperature oscillation, steady-state parallel plate, and optical beam and deflection were used to verify the thermal conductivity of nanofluids. Among these techniques, THW is a suited method to find out the thermal conductivity of composed SiO2 nanofluids. The clustering and maximum temperature were highly impacted with modifications of thermal conductivity [25]. Similarly, the pH value was fine-tuned to improve the nanoparticles’ dispersion evenly with the working fluid during the process. Finally, the thermal conductivity was carried with thermal meter (KD2-Pro) with Fourier’s law in support of transfer-heat conduction [26]. Normally, the working base fluid thermal conductivity was much finer with lesser viscosity, and this viscosity is related with pumping force to affect the pressure fall. This test was conducted by the Brookfield viscometer (plate and cone type). The pH range and SHC were measured with pH meter and the differential scanning calorimeter. Meanwhile, simultaneously thermal conductivity has enhanced with moderate pH values [27]. 
The weight fraction of SiO2, particle sizes, pH range, and sonication process time are the input factors which are displayed in Table 1.Table 1 Input factor for preparing nanofluids. Process runsWeight fraction of SiO2 (vol.%)SiO2 reinforcement size (µm)Water pH rangeSonication processing time11.5105223207334.53094 ## 3. Results and Discussion ### 3.1. Desirability Function Analysis on Processed Nanofluid Parameters Based on the detailed processing procedures, thermophysical properties are obtained with appropriate processing parameters. There is need to validate the processing parameters to enhance the physical properties. Therefore, desirability approaches were used to identify the optimal process parameters. The composed nanofluids contain conditions like maximum thermal conductivity, minimum SHC, and viscosity [28]. The influenced process parameters are weight fraction of SiO2, particle sizes, pH range, and sonication process time, and outcomes are thermal conductivity, SHC, and viscosity. The Taguchi L9 design was utilized to frame the processing levels [29]. The input aspects and its outcomes are displayed in Table 2 as per the L9 Taguchi technique.Table 2 Outcomes of thermal properties with L9 input. RunsWeight fraction of SiO2SiO2 sizepH range (H2O)Sonication processing time (hr)TC (W/mK)SHC (J/kg K)V (Cp)121050.20.91016860.00145231070.40.92136790.00168341090.60.95286810.00194422090.40.91226900.00159532050.60.93576950.00174642070.20.92126890.00152723070.60.91856960.00146833090.20.91196910.00151943050.40.96236880.00162 ### 3.2. Contour-Analysis of Thermal Properties with Input Factors #### 3.2.1. Contourplots of Weight Fraction and SiO2 Particle Size on Thermal Properties The contourplots of weight fraction of SiO2 and particle size on the thermal conductivity, SHC and viscosity are displayed in Figures 3(a) and 3(b), respectively. From the figures, the thermal conductivity was increased at increasing of weight fraction 3%–4% and sizes of the particles were increased from 20 to 30 nm. Similarly, SHC, was enhanced with increase of weight fraction 3%–4% and low-level particle size 10 nm, and viscosity was improved at low-weight fraction and increase of particle sizes from 10 to 30 nm, respectively.Figure 3 (a) Weight fraction and SiO2 particle size on TC; (b) weight fraction and SiO2 particle size on SHC; (c) weight fraction and SiO2 particle size on viscosity. (a)(b)(c) #### 3.2.2. Contourplots of Particle Size of SiO2 and pH Range on Thermal Properties The contourplots of SiO2 particle size and pH range on the thermal conductivity, SHC, and viscosity are displayed in Figures 4(a) and 4(b), respectively. From the figures, increase of particle size of SiO2 and pH range from 7 to 9 improved the thermal conductivity. Meanwhile, decreasing the particle size of SiO2 at 10 nm and raising the pH range from 7 to 9 enhanced the SHC. The viscosity was improved when the particle size was increased from 10–30 nm and pH range from 7 to 9.Figure 4 (a) SiO2 particle sizes and pH range of water on TC; (b) SiO2 particle sizes and pH range of water on SHC; (c) SiO2 particle sizes and pH range of water on viscosity. (a)(b)(c) #### 3.2.3. Contourplots of pH Range of Water and Sonication Processing Time on Thermal Properties Figure5(a)–5(c) displays the contourplots pH range of water and processing of sonication on the thermal conductivity, SHC, and viscosity, respectively. 
From the figures, increase of pH range from 5 to 9 and processing time from low level (0.2) to medium level (0.4) enhanced the thermal conductivity. The SHC was improved on increasing the pH range from 7 to 9 and with low-level processing time. The viscosity was superior on increasing the pH range from 5 to 9 and processing time from 0.4 to 0.6 hr.Figure 5 (a) pH range of water and processing time on TC; (b) pH range of water and processing time on SHC; (c) pH range of water and processing time on viscosity. (a)(b)(c) #### 3.2.4. Contourplots of Sonication Processing Time and Weight Fraction of SiO2 on Thermal Properties Figure6(a)–6(c) exhibited the contour plots sonication processing with durations and weight fraction on the thermal conductivity, SHC and viscosity, respectively. From the figures, it is understood that the thermal conductivity was improved at high level of processing time (0.6 hr) and weight fraction was increased from low (2.0) to moderate level (3.0). The SHC was enhanced at low level of processing time (0.2) to moderate level (0.4) and weight fraction from (2.0% to 3.0%). Finally, viscosity level was enhanced at low level processing time and low level weight fraction.Figure 6 (a) Processing time and weight fraction of SiO2 on TC; (b) processing time and weight fraction of SiO2 on SHC; (c) processing time and weight fraction of SiO2 on viscosity. (a)(b)(c) #### 3.2.5. Contourplots of Desirability on the Thermal Properties of Nanofluid SiO2 In this study, desirability was approached on the thermal properties to find out the optimal process parameters. Before that the input factors were arranged in the Taguchi design with L9 array as per the discussion in Materials and Method section. The main objective of this desirability is to convert the multiresponses into a single response for better understanding. During the desirability, the outcomes like thermal conductivity at maintaining the maximum the better concept, and SHC and viscosity were considered as the smaller the better condition, respectively. Normalized signal to noise ratios of thermal conductivity, SHC, and viscosity values and other related values of desirability coefficient and its composite desirability values are listed in Table3. From Table 3, the ranking methods from the weight of 0.5 values were used to find out the ranking procedures. Experimental runs show that the 9th sample gives a better rank. Optimum process parameters found that weight fraction 4%, 30 nm of particles of SiO2, pH-5 range and 0.4 hr of sonication processing. The ranking graph and its overall processing runs are displayed in Figure 7. The metal oxide of SiO2 achieved better thermal conductivity, SHC, and viscosity as mentioned ranges.Figure 7 Nanofluids SiO2 and DFA rank.Table 3 Desirability values with their input factors and thermal outcomes. RunSignal to noise ratios—NormalizedDFA-coefficientDFAGradeTCSHCVTCSHCV11.0000−0.5882−1.00000.00001.58822.00000.0000920.7854−1.0000−0.53060.21462.00001.53060.8104430.1820−0.88240.00000.81801.88241.00001.2409240.9598−0.3529−0.71430.04021.35291.71430.3055750.5096−0.0588−0.40820.49041.05881.40820.8551360.7874−0.4118−0.85710.21261.41181.85710.7467570.83910.0000−0.97960.16091.00001.97960.5644680.9655−0.2941−0.87760.03451.29411.87760.2895890.0000−0.4706−0.65311.00001.47061.65311.55921 ## 3.1. Desirability Function Analysis on Processed Nanofluid Parameters Based on the detailed processing procedures, thermophysical properties are obtained with appropriate processing parameters. 
There is need to validate the processing parameters to enhance the physical properties. Therefore, desirability approaches were used to identify the optimal process parameters. The composed nanofluids contain conditions like maximum thermal conductivity, minimum SHC, and viscosity [28]. The influenced process parameters are weight fraction of SiO2, particle sizes, pH range, and sonication process time, and outcomes are thermal conductivity, SHC, and viscosity. The Taguchi L9 design was utilized to frame the processing levels [29]. The input aspects and its outcomes are displayed in Table 2 as per the L9 Taguchi technique.Table 2 Outcomes of thermal properties with L9 input. RunsWeight fraction of SiO2SiO2 sizepH range (H2O)Sonication processing time (hr)TC (W/mK)SHC (J/kg K)V (Cp)121050.20.91016860.00145231070.40.92136790.00168341090.60.95286810.00194422090.40.91226900.00159532050.60.93576950.00174642070.20.92126890.00152723070.60.91856960.00146833090.20.91196910.00151943050.40.96236880.00162 ## 3.2. Contour-Analysis of Thermal Properties with Input Factors ### 3.2.1. Contourplots of Weight Fraction and SiO2 Particle Size on Thermal Properties The contourplots of weight fraction of SiO2 and particle size on the thermal conductivity, SHC and viscosity are displayed in Figures 3(a) and 3(b), respectively. From the figures, the thermal conductivity was increased at increasing of weight fraction 3%–4% and sizes of the particles were increased from 20 to 30 nm. Similarly, SHC, was enhanced with increase of weight fraction 3%–4% and low-level particle size 10 nm, and viscosity was improved at low-weight fraction and increase of particle sizes from 10 to 30 nm, respectively.Figure 3 (a) Weight fraction and SiO2 particle size on TC; (b) weight fraction and SiO2 particle size on SHC; (c) weight fraction and SiO2 particle size on viscosity. (a)(b)(c) ### 3.2.2. Contourplots of Particle Size of SiO2 and pH Range on Thermal Properties The contourplots of SiO2 particle size and pH range on the thermal conductivity, SHC, and viscosity are displayed in Figures 4(a) and 4(b), respectively. From the figures, increase of particle size of SiO2 and pH range from 7 to 9 improved the thermal conductivity. Meanwhile, decreasing the particle size of SiO2 at 10 nm and raising the pH range from 7 to 9 enhanced the SHC. The viscosity was improved when the particle size was increased from 10–30 nm and pH range from 7 to 9.Figure 4 (a) SiO2 particle sizes and pH range of water on TC; (b) SiO2 particle sizes and pH range of water on SHC; (c) SiO2 particle sizes and pH range of water on viscosity. (a)(b)(c) ### 3.2.3. Contourplots of pH Range of Water and Sonication Processing Time on Thermal Properties Figure5(a)–5(c) displays the contourplots pH range of water and processing of sonication on the thermal conductivity, SHC, and viscosity, respectively. From the figures, increase of pH range from 5 to 9 and processing time from low level (0.2) to medium level (0.4) enhanced the thermal conductivity. The SHC was improved on increasing the pH range from 7 to 9 and with low-level processing time. The viscosity was superior on increasing the pH range from 5 to 9 and processing time from 0.4 to 0.6 hr.Figure 5 (a) pH range of water and processing time on TC; (b) pH range of water and processing time on SHC; (c) pH range of water and processing time on viscosity. (a)(b)(c) ### 3.2.4. 
### 3.2.4. Contour Plots of Sonication Processing Time and Weight Fraction of SiO2 on Thermal Properties

Figures 6(a)-6(c) exhibit the contour plots of sonication processing duration and weight fraction against thermal conductivity, SHC, and viscosity, respectively. From the figures, it is understood that the thermal conductivity was improved at the high level of processing time (0.6 hr) when the weight fraction increased from the low (2.0%) to the moderate level (3.0%). The SHC was enhanced from the low level of processing time (0.2 hr) to the moderate level (0.4 hr) with weight fractions from 2.0% to 3.0%. Finally, the viscosity was enhanced at low-level processing time and low-level weight fraction.

Figure 6 (a) Processing time and weight fraction of SiO2 on TC; (b) processing time and weight fraction of SiO2 on SHC; (c) processing time and weight fraction of SiO2 on viscosity.

### 3.2.5. Contour Plots of Desirability on the Thermal Properties of Nanofluid SiO2

In this study, a desirability approach was applied to the thermal properties to find the optimal process parameters. Before that, the input factors were arranged in the Taguchi design with the L9 array as per the discussion in the Materials and Methods section. The main objective of this desirability analysis is to convert the multiple responses into a single response for better understanding. During the desirability analysis, thermal conductivity was treated under the larger-the-better concept, while SHC and viscosity were considered under the smaller-the-better condition, respectively. The normalized signal-to-noise ratios of thermal conductivity, SHC, and viscosity, together with the desirability coefficients and the composite desirability values, are listed in Table 3. From Table 3, a weight of 0.5 was used in the ranking procedure. The experimental runs show that the 9th sample gives the best rank. The optimum process parameters were found to be a weight fraction of 4%, SiO2 particles of 30 nm, a pH of 5, and 0.4 hr of sonication processing. The ranking graph over all processing runs is displayed in Figure 7. The metal oxide SiO2 achieved better thermal conductivity, SHC, and viscosity in the mentioned ranges.

Figure 7 Nanofluids SiO2 and DFA rank.

Table 3 Desirability values with their input factors and thermal outcomes.

| Run | S/N normalized: TC | S/N normalized: SHC | S/N normalized: V | DFA coefficient: TC | DFA coefficient: SHC | DFA coefficient: V | DFA | Grade |
|---|---|---|---|---|---|---|---|---|
| 1 | 1.0000 | −0.5882 | −1.0000 | 0.0000 | 1.5882 | 2.0000 | 0.0000 | 9 |
| 2 | 0.7854 | −1.0000 | −0.5306 | 0.2146 | 2.0000 | 1.5306 | 0.8104 | 4 |
| 3 | 0.1820 | −0.8824 | 0.0000 | 0.8180 | 1.8824 | 1.0000 | 1.2409 | 2 |
| 4 | 0.9598 | −0.3529 | −0.7143 | 0.0402 | 1.3529 | 1.7143 | 0.3055 | 7 |
| 5 | 0.5096 | −0.0588 | −0.4082 | 0.4904 | 1.0588 | 1.4082 | 0.8551 | 3 |
| 6 | 0.7874 | −0.4118 | −0.8571 | 0.2126 | 1.4118 | 1.8571 | 0.7467 | 5 |
| 7 | 0.8391 | 0.0000 | −0.9796 | 0.1609 | 1.0000 | 1.9796 | 0.5644 | 6 |
| 8 | 0.9655 | −0.2941 | −0.8776 | 0.0345 | 1.2941 | 1.8776 | 0.2895 | 8 |
| 9 | 0.0000 | −0.4706 | −0.6531 | 1.0000 | 1.4706 | 1.6531 | 1.5592 | 1 |
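To make the ranking procedure behind Table 3 concrete, the sketch below reproduces a generic desirability-function workflow in Python: per-run Taguchi signal-to-noise ratios (larger-the-better for TC, smaller-the-better for SHC and viscosity) are normalized, combined with the 0.5 weights mentioned in the text into a composite desirability, and ranked. The exact normalization used by the authors differs in sign convention and is only summarized in the text, so treat this as an illustrative reconstruction rather than the study's own calculation.

```python
import numpy as np

# Responses of the nine L9 runs (Table 2).
tc  = np.array([0.9101, 0.9213, 0.9528, 0.9122, 0.9357, 0.9212, 0.9185, 0.9119, 0.9623])   # W/mK, larger-the-better
shc = np.array([686.0, 679.0, 681.0, 690.0, 695.0, 689.0, 696.0, 691.0, 688.0])            # J/kg K, smaller-the-better
vis = np.array([0.00145, 0.00168, 0.00194, 0.00159, 0.00174, 0.00152, 0.00146, 0.00151, 0.00162])  # Cp, smaller-the-better

# Taguchi S/N ratio per run (single observation per run):
sn_tc  =  20 * np.log10(tc)    # larger-the-better: -10*log10(1/y^2)
sn_shc = -20 * np.log10(shc)   # smaller-the-better: -10*log10(y^2)
sn_vis = -20 * np.log10(vis)
sn = np.column_stack([sn_tc, sn_shc, sn_vis])

# Min-max normalize each column so that 1 is always the most desirable run.
d = (sn - sn.min(axis=0)) / (sn.max(axis=0) - sn.min(axis=0))

# Composite desirability: weighted geometric mean with equal 0.5 weights.
w = np.array([0.5, 0.5, 0.5])
composite = np.prod(d ** w, axis=1) ** (1.0 / w.sum())

rank = (-composite).argsort().argsort() + 1   # rank 1 = most desirable run
for i, (c, r) in enumerate(zip(composite, rank), 1):
    print(f"run {i}: composite = {c:.3f}, rank = {r}")
```

Run with these data, the 9th run comes out on top, which is consistent with the optimum reported in the text.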
## 4. Convective Heat Transfer with Prepared Nanofluids through Straight Tube

Based on the L9 array, desirability functions were used to compose the optimal process parameters. These parameters were then used to carry out the convective heat transfer study in a straight-tube heat transfer section with the specified parameters. The designed heat-transfer system was a loop which contained a nanofluid reservoir, a nanofluid pump, a flow meter, a heated chamber with an insulator, K-type thermocouples, and a manometer; the other section contained a cooling water reservoir, a flow meter, a cooling water pump, and a heat exchanger. The copper tube used in the heat-transfer test chamber had a wall thickness of 2.5 mm, an inner diameter of 8.2 mm, and a length of 0.873 m, respectively. K-type thermocouples with 0.2°C precision measured the wall temperature. Similarly, two T-type thermocouples of the same precision estimated the bulk temperature on both sides of the section. Throughout the process, a constant heat flux condition was attained by electrical heating using a power supply. A plate heat exchanger was used to keep the inlet temperature in the range of 26 ± 1°C, and the volume flow rate was measured by the flow meter with 0.02 L/min precision.

### 4.1. Data Processing

As per the required formulas, the density and specific heat were found from the mixture characteristics of the nanofluids, and, assuming thermal equilibrium between the SiO2 nanoparticles and the working fluid, the SHC and thermal conductivity were also identified and the optimal parameters were pointed out. For the total heat flux, Q1 and Q2 were found from the heat input values:

$$q = \frac{Q_{1} + Q_{2}}{2\pi D L} \quad (1)$$

$$h(x) = \frac{q}{T_{\text{wall}}(x) - T_{\text{bulk}}(x)} \quad (2)$$

where Q1 is the heat input from the heater, Q2 is the heat input to the fluid, and D and L are the tube diameter and length. The heat transfer coefficient in Equation (2) is obtained from the heat flux of Equation (1) together with the wall and bulk fluid temperatures at the axial length (x) from the inlet of the heat transfer section. The Nusselt number was identified from the heat transfer coefficient, tube diameter, and thermal conductivity, and the Reynolds number was evaluated from the fluid density, velocity, pipe diameter, and viscosity. Equations (3) and (4) give the Nusselt and Reynolds number formulas:

$$Nu = \frac{h D}{K} \quad (3)$$

$$Re = \frac{\rho\, v\, D}{\mu} \quad (4)$$

where K is the thermal conductivity of the fluid, ρ is the fluid density, v is the fluid velocity, D is the pipe diameter, and μ is the viscosity (denoted V in Table 4).
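A minimal Python sketch of this data-reduction step is shown below; it evaluates Equations (1)-(4) with the tube geometry quoted above. The operating values (heat inputs, temperatures, velocity, and fluid properties) are placeholders for illustration, not measurements from the study.

```python
import math

# Tube geometry from the test-section description.
D = 0.0082      # inner diameter, m
L = 0.873       # heated length, m

def total_heat_flux(q1_heater_w, q2_fluid_w):
    """Equation (1): q = (Q1 + Q2) / (2*pi*D*L), in W/m^2."""
    return (q1_heater_w + q2_fluid_w) / (2 * math.pi * D * L)

def heat_transfer_coeff(q_flux, t_wall_c, t_bulk_c):
    """Equation (2): h = q / (T_wall - T_bulk), in W/(m^2 K)."""
    return q_flux / (t_wall_c - t_bulk_c)

def nusselt(h, k_fluid):
    """Equation (3): Nu = h*D / K."""
    return h * D / k_fluid

def reynolds(rho, velocity, mu):
    """Equation (4): Re = rho*v*D / mu."""
    return rho * velocity * D / mu

# Placeholder operating point (illustrative values only).
q = total_heat_flux(q1_heater_w=150.0, q2_fluid_w=140.0)
h = heat_transfer_coeff(q, t_wall_c=41.5, t_bulk_c=29.0)
print(f"q  = {q:.1f} W/m^2")
print(f"h  = {h:.1f} W/m^2K")
print(f"Nu = {nusselt(h, k_fluid=0.9623):.2f}")
print(f"Re = {reynolds(rho=1010.0, velocity=0.369, mu=0.00162):.0f}")
```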
From these equations, the heat transfer coefficients and the Nusselt and Reynolds numbers were identified using the output responses such as thermal conductivity and viscosity. These values were tabulated against the L9 runs. Table 4 exhibits the heat-transfer coefficients and the Nusselt and Reynolds numbers for convective heat transfer throughout the straight tube under the specified conditions.

Table 4 Heat transfer coefficient, Nusselt and Reynolds number.

| Runs | TC (W/mK) | SHC (J/kg K) | V (Cp) | Fluid velocity (m/s) | Heat transfer coefficient (W/m2 K) | Nusselt number | Reynolds number |
|---|---|---|---|---|---|---|---|
| 1 | 0.9101 | 686 | 0.00145 | 0.36 | 1.45 | 0.013092 | 2.035862 |
| 2 | 0.9213 | 679 | 0.00168 | 0.52 | 1.55 | 0.013796 | 2.538095 |
| 3 | 0.9528 | 681 | 0.00194 | 0.81 | 1.69 | 0.014545 | 3.423711 |
| 4 | 0.9122 | 690 | 0.00159 | 0.59 | 1.85 | 0.01663 | 3.042767 |
| 5 | 0.9357 | 695 | 0.00174 | 0.458 | 1.91 | 0.016738 | 2.158391 |
| 6 | 0.9212 | 689 | 0.00152 | 0.235 | 1.44 | 0.012818 | 1.267763 |
| 7 | 0.9185 | 696 | 0.00146 | 0.67 | 1.73 | 0.015445 | 3.763014 |
| 8 | 0.9119 | 691 | 0.00151 | 0.298 | 1.57 | 0.014118 | 1.618278 |
| 9 | 0.9623 | 688 | 0.00162 | 0.369 | 1.39 | 0.011845 | 1.867778 |

Figures 8 and 9 show the Nusselt and Reynolds numbers for the various runs of the nano-SiO2 fluids. From the figures, the correlations between the different process parameters and these numbers are clearly noted. The attained Reynolds number values are in the <2,000 range, so the flows correspond to laminar flow. From the figures, it is implicit that the fifth and seventh runs attained the highest values.

Figure 8 Various runs of nanofluids and Nusselt number.

Figure 9 Various runs of nanofluids and Reynolds number.

## 5. Conclusion

(1) In this investigation, the preparation of SiO2 nanofluids was successfully carried out with the two-step technique.
(2) The desirability function was applied effectively to improve the thermal properties with appropriate process parameters.
(3) The weight fraction of SiO2, particle size, pH range of water, and sonication processing duration were the most influential input parameters for the thermal properties.
(4) The contour analysis between the input parameters of the two-step method and their outcomes, namely thermal conductivity, SHC, and viscosity, was detailed in the analysis.
(5) The superior thermophysical properties are 0.9623 W/mK for thermal conductivity, 688 J/kg K for SHC, and 0.00162 Cp for viscosity, respectively.
(6) The heat-transfer coefficient was successfully identified from the total heat flux and the axial dimensions. The designed heat-transfer system was effectively employed with the selected design parameters and heat-transfer coefficient.
(7) The heat-transfer section will be expanded and more thermocouples will be added to the planned design and implemented in further studies.

--- *Source: 1008046-2023-01-27.xml*
2023
# Application of Image Recognition Method Based on Diffusion Equation in Film and Television Production **Authors:** Liyuan Guo **Journal:** Advances in Mathematical Physics (2021) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2021/1008281 --- ## Abstract On the basis of studying the basic theory of anisotropic diffusion equation, this paper focuses on the application of anisotropic diffusion equation in image recognition film production. In order to further improve the application performance of P-M (Perona-Malik) anisotropic diffusion model, an improved P-M anisotropic diffusion model is proposed in this paper, and its application in image ultrasonic image noise reduction is discussed. The experimental results show that the model can effectively suppress the speckle noise and preserve the edge features of the image. Based on the image recognition technology, an image frame testing system is designed and implemented. The method of image recognition diffusion equation is used to extract and recognize the multilayer feature points of the test object according to the design of artificial neural network. To a certain extent, it improves the accuracy of image recognition and the audience rating of film and television. Use visual features of the film and television play in similarity calculation for simple movement scene segmentation problem, at the same time, the camera to obtain information, use the lens frame vision measuring the change of motion of the camera, and use weighted diffusion equation and the visual similarity of lens similarity calculation and motion information, by considering the camera motion of image recognition, effectively solve the sports scene of oversegmentation problem such as fighting and chasing. --- ## Body ## 1. Introduction With the advent of the Internet era, the network image traffic is increasing with each passing day. Most users like to watch movies or TV series with the characteristics of The Times; among which, most of the dramas account for a lot. Because the content of the play is old, the actors’ clothes and scenes in the film and television sometimes do not conform to the background at that time, resulting in great mistakes. Therefore, it is particularly important to ensure that the characters and scenes involved in each film or TV series are in line with the background at that time in order to improve users’ viewing experience and reduce the burden of producers of films and TV dramas. Through the image recognition of the diffusion equation, to detect the factors that do not conform to the historical scene in the film and television drama, greatly reduce the work of the director and producer, and solve some problems existing in the clothing and scene in the current film and television drama and life.Among various forms of digital images, film and television images are the most accessible and indispensable form of images in People’s Daily life. Like other digital images, film and television images are unstructured data in form, but they often have a strong plot structure in content, which is often shown as a combination of scenes connected or associated with each other in plot. This provides a certain factual basis for automatic segmentation of scene structure and scene recognition of film and television images. 
Generally, the description data of film and television image content includes the following forms: (1) For image metadata, image metadata records information about the image (including title and type) and the production of the image (director, cast, producer, distributor). The metadata of film and television images is generally provided by digital image distributors along with digital media resources [1]. (2) For image structure data, image structure usually refers to the logical structure between image frames and image fragments, such as the connection, switch, and transfer between shots [2]. (3) For image semantic data, semantic data describes the scene, action, story plot, theme, and other semantic content of the image itself [3], which is often obtained by identifying the features of image frames and audio data, as well as from subtitles and other auxiliary explanatory files.Using image processing and recognition technology, Muwen aims to develop software that can identify what an actor is wearing and whether the scene is appropriate for the shooting. Using the neural network technology of deep learning and based on the image recognition technology, the artificial neural network is designed according to the multilevel, which can extract, analyze, identify, and detect the feature points of the target and effectively find out the factors that do not conform to the historical scene in the film and television plays. ## 2. Related Work Image recognition has achieved great development with the help of deep learning technology and has been widely applied in various fields at home and abroad. Some scholars to apply image recognition technology content detection of image frames, in view of the existing retrieval scenario framework and clothing problems [4], the missing clothing information recognition for dress design optimization and scene design recognition problems, this paper proposes a new garment segmentation method and based on cross domain dictionary learning recognition of dress design. A new image information retrieval algorithm [5, 6] based on scale-invariant feature transform features is proposed, which is applied to content-based image information retrieval and improves the traditional image similarity method to accurately and quickly scan the content of the image. The classic deep network [7] is proposed to extract clothing and scene features, and the specialized data is trained repeatedly to make the network features more significant. Image recognition technology has more and more penetrated into daily life. In order to improve audience rating and make scenes of costume dramas more accurate, image recognition technology provides new ideas for whether scenes in films and TV dramas conform to historical background [8].The research on the structure analysis of film and television images can be divided into two categories: shot-based segmentation and scene-based segmentation from the granularity of time. In lense-based segmentation, the image is first detected and represented as a set of fixed or indefinite length image frames. A shot segmentation algorithm based on image color histogram [9] is proposed. This method and its deformation are widely used in image shot segmentation. Scene-based segmentation takes the background in the process of image plot development as a reference and clusters the temporal sequences with the same semantics to form a shot set representing the scene. Scene-based image segmentation is also called image scene boundary detection. 
In general, image scene boundary detection algorithms can be divided into two categories: methods based on domain-related prior models and domain-independent methods based on image production principles [10]. The method based on domain-related model needs to establish corresponding prior models according to image types and related domain knowledge [11]. Therefore, before this method is applied to a new domain, it is necessary to establish a prior model that depends on domain-related expertise, which is relatively limited. However, the domain-independent method is usually based on the principle of image production, clustering image shots according to the visual similarity and the characteristics of time constraints, clustering the shots that express the semantics of similar scenes in a time continuum into shot clusters, and then constructing the scene on this basis. This method does not require professional knowledge in related fields, so it is widely used in scene boundary detection of film and television images.Scholars proposed to use the evolution of the scene graph to describe the scene graph model [12]. Each vertex in the figure represents an image of a scene; each edge represents the visual similarity between two shots in position and time; the full link method is used to calculate the similarity of the shots, and the hierarchical clustering method is used to form subgraph; each cluster of subgraph represents a scene. Similar graph clustering method is used to find the story in the image unit [13], to use vision algorithms to detect changes in image scenes, it is necessary to calculate the similarity between the two shots of the image. When the image file or contains a large number of lenses (for example, the number of lens film images can reach thousands or more), a lot of time and calculations are required, which leads to low efficiency. At the same time, with images of the scene lens must be continuous on time; you can use the limited time approach to the shot clustering, and literature [14] puts forward within the fixed time window T lens clustering method based on similarity shots, but because in the image, based on the plot and story rhythm slow degree is different, the length of the image lens also varies greatly, so it is more reasonable to use a fixed lens window than a fixed time window. Literature [15] proposed an algorithm for image scene detection based on a sliding window with fixed lens window length. Literature [16] analyzes the background pictures in the same scene shot to complete scene boundary detection of film and television images. For scene recognition of image, the underlying features of the image are usually used to complete the mapping from the underlying features to the scene category through machine learning method. According to the type of characteristics and in general can be divided into scene recognition method based on global features and recognition method based on local features of the scene, typical global features are [17, 18] proposed C71St characteristics, the use of frequency domain information of image scene, said the global shape of this method in the recognition of outdoor scene, and good results have been achieved in the classification. However, the recognition effect of indoor scene is not very ideal. 
Relative to the global features, local feature better describes the detail of the image information; thus, a global feature in many recognition task has a better performance; the typical image scene recognition methods are mainly based on local feature hierarchical model classification method; the characteristics of the pyramid matching method and scene recognition method are based on the characteristics of CENTRIST descriptor [19]. The above methods can achieve certain results in specific image scene recognition, but the results are often poor when applied directly to image scene recognition. And most of the research on the current image scene recognition for a particular image category in the field of classification and identification, such as in the images of the sports football, table tennis, advertising image recognition, image stream, and video image scene classification problem, is relatively complex, often because of video image in the process of filming; for reasons of artistry, there will be a lot of shooting angle, light intensity, and other changes and often a lot of camera movement, long lens, and other different shooting techniques, all of which make it more difficult to identify the scene of film and television images. In literature [20, 21], action recognition and scene recognition images are combined to extract a variety of local features for action and scene training and recognition. At the same time, using the traditional underlying features to represent images or images and because of the underlying characteristics often contain semantic information is very limited, often when dealing with complex tasks based on semantic incompetence [22, 23]; as a result, the underlying characteristics of different often has good effect to the specific processing tasks and in other task performance in general. Trying to use intermediate semantics to represent images and images has been an important research direction for a long time. The proposed image-based representation method [24–26] achieves good results in scene recognition of images. It uses the objects contained in the image as the intermediate semantic features of the image and uses the SVM classifier for scene recognition of the image.However, all of the above methods have some limitations. For example, the time locality of the shot in the same scene is not taken into account, which requires a lot of calculation, but only the color similarity of the shot is considered, and the similarity of the shot in the moving state is smaller than that in the static state, which leads to the oversegmentation problem. The global similarity between the shots is used, and the motion characteristics of the shots are added, and then the sliding window technology based on the number of shots is used to detect the image scene. ## 3. Image Recognition Based on Anisotropic Diffusion Equation ### 3.1. Diffusion Model of Heat Conduction Equation The intermediate score vector graph of a set of object perception filters is used as the image feature, and a simple linear classifier is used for the scene recognition task. The proposed image recognition method is shown in Figure1. The image recognition using the diffusion equation can easily distinguish and classify different scenes [27]. At the same time, with the popularization of the Internet, it is becoming easier and easier to obtain large-scale digital images. 
The corresponding object recognition model can be easily trained by using the annotated object images.

Figure 1 Image recognition framework of diffusion equation.

After discretization, with $I_s^t$ denoting the image value at pixel $s$ and iteration $t$, and $\eta_s$ the neighborhood of $s$, the classical P-M equation is

$$I_s^{t+\Delta t} = I_s^{t} + \sum_{p \in \eta_s} c\!\left(\nabla I_{s,p}^{t}\right)\nabla I_{s,p}^{t} \quad (1)$$

By analyzing the characteristics of the anisotropic diffusion function $c(s)$ and following the design principle of the diffusion coefficient, where $s$ denotes the gradient magnitude, the following diffusion coefficient is constructed:

$$c(s) = \frac{1-\left(s/K\right)^{2}}{1+\left(s/K\right)^{2}} \quad (2)$$

where $K$ represents the threshold parameter of the edge amplitude. An automatic estimation method of the diffusion threshold is

$$K = T_{0}\, e^{-t s} \quad (3)$$

$$T_{0} = \frac{1}{MN}\sum_{i,j}^{M,N} \operatorname{var}(i,j) \quad (4)$$

$$\operatorname{var}(i,j) = \frac{1}{s}\sum_{s}\left(I - \overline{P_{I}(i,j)}\right)^{2} \quad (5)$$

where $\overline{P_{I}(i,j)}$ denotes the local mean of the image in the window around $(i,j)$.

### 3.2. Image Recognition Interaction of Diffusion Equation

For image matching based on feature points, the relevant elements of a scene picture similar to the image are obtained by locking a frame of the image. The feature points of the two images are edge-extracted, and the pixel gray level of the corresponding feature point area is calculated. The correlation coefficient is used to determine whether the two images are of the same period. The image matching process based on feature points is shown in Figure 2.

Figure 2 Image matching flow based on feature points.

The object model of deformable parts mainly solves the problem of object recognition under different angles and deformations. The object is represented as a set of deformable parts in relative positions. Each part model describes the local features of the object, and the relative positions of the parts are variable through spring-like connections. Discriminative learning is carried out from images annotated only with the bounding box of the whole object. The deformable parts model has continuously achieved the best results in PASCAL VOC recognition tasks.

The deformable parts object model uses the 3-dimensional gradient direction histogram feature and provides an alternative 3-dimensional feature vector obtained by using both contrast-sensitive and contrast-insensitive features. The detection scores of small and large objects in the image are calibrated. The gradient direction histogram feature is one of the most commonly used image feature descriptors for object detection at present. It uses the gradient direction histogram of the image in locally dense cells of uniform size and uses overlapping local contrast normalization to improve the accuracy of object detection. Let $(x, y)$ be a pixel of the image, and let $u(x,y)$ and $r(x,y)$ be the gradient direction and gradient amplitude of the image at position $(x,y)$, respectively:

$$T(x,y) = \operatorname{round}\!\left(\frac{t\, u(x,y)}{2\pi}\right) \bmod t \quad (6)$$

The wavelet denoising method, which is effective at removing Gaussian noise, is compared with the nonlinear total variation partial differential equation denoising method. In wavelet denoising, different thresholds are selected according to the decomposition scale:

$$t_{j} = \frac{\delta\, \lg T}{\lg (j+1)} \quad (7)$$

where $T$ is the signal length, $j$ is the current number of decomposition layers, and $J$ is the maximum number of decomposition layers. The wavelet coefficients are estimated by

$$\bar{w}_{j,k} = \begin{cases} w_{j,k}, & \left|w_{j,k}\right| \ge \delta \\ 0, & \text{otherwise} \end{cases} \quad (8)$$

The measurement parameters calculated by using the image recognition method of diffusion equations (1) to (8) are listed in Table 1.

Table 1 Performance analysis of wavelet threshold and nonlinear total variation image recognition.
| Variance | Image recognition method | P | PSNR |
|---|---|---|---|
| 0.01 | Coif2 | 0.802 | 19.69 |
| 0.01 | Sym4 | 0.8529 | 20.46 |
| 0.01 | Total variation | 0.973 | 26.58 |
| 0.05 | Coif2 | 0.785 | 16.89 |
| 0.05 | Sym4 | 0.835 | 16.77 |
| 0.05 | Total variation | 0.979 | 21.67 |
| 0.1 | Coif2 | 0.795 | 14.56 |
| 0.1 | Sym4 | 0.986 | 15.78 |
| 0.1 | Total variation | 0.967 | 18.56 |
| 0.15 | Coif2 | 0.702 | 12.78 |
| 0.15 | Sym4 | 0.795 | 13.65 |
| 0.15 | Total variation | 0.857 | 15.64 |
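To make the diffusion model of Section 3.1 concrete, the sketch below runs a few Perona-Malik style iterations on a noisy image using the edge-stopping coefficient as reconstructed in Equation (2). It is an illustrative NumPy implementation under that assumption, not the authors' code; the threshold K, step size, and iteration count are arbitrary demo values.

```python
import numpy as np

def edge_stop(g, k):
    """Edge-stopping function from Eq. (2): c(s) = (1 - (s/K)^2) / (1 + (s/K)^2)."""
    r = (g / k) ** 2
    return (1.0 - r) / (1.0 + r)

def pm_diffuse(img, k=0.25, dt=0.1, iters=5):
    """A few Perona-Malik style iterations; borders are handled periodically via np.roll."""
    u = img.astype(float).copy()
    for _ in range(iters):
        n = np.roll(u, 1, axis=0) - u   # differences to the four nearest neighbours
        s = np.roll(u, -1, axis=0) - u
        e = np.roll(u, -1, axis=1) - u
        w = np.roll(u, 1, axis=1) - u
        u = u + dt * (edge_stop(n, k) * n + edge_stop(s, k) * s
                      + edge_stop(e, k) * e + edge_stop(w, k) * w)
    return u

# Demo on a synthetic noisy step edge.
rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[:, 32:] = 1.0
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = pm_diffuse(noisy)
print("error std before:", round(float(np.std(noisy - clean)), 4))
print("error std after :", round(float(np.std(denoised - clean)), 4))
```

Because the coefficient becomes negative for gradients above K, the scheme performs backward diffusion at strong edges, which is the sharpening behavior described later in the text; small step sizes keep the demo stable.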
## 4. Image Scene Structure Analysis and Recognition Based on Film Diffusion Equation

With the rapid development of digital multimedia technology and the popularization of the Internet, digital image resources have increasingly become an important part of people's daily entertainment. At the same time, because film and television images are unstructured data, how to effectively organize and manage film and television resources and provide users with the ability to quickly locate content of interest is an important topic in the study of image content. The video image scene structure analysis and recognition system is aimed at segmenting the video image into scenes and obtaining structural units of the video with the scene as the semantic unit, so as to realize scene-based structured semantic storage and management. At the same time, scene recognition is carried out for the scene fragments of film and television images, and scene tags are automatically annotated to obtain the scene semantic content, so as to provide scene-semantic retrieval capability for film and television images.

For the input image, the scene structure is analyzed first. Scene change detection is based on shots: first, shots are segmented according to the visual similarity of image frames, and each shot is represented by multiple key frames; then the visual and motion characteristics of the shot key frames are used to cluster similar shots, and, by using the rules of scenario development, similar adjacent and interleaved shots are merged so that the scene structure units of the image are obtained.
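The first stage of this pipeline, shot segmentation by visual similarity of consecutive frames, can be sketched as below with a simple color-histogram comparison. The histogram intersection measure, bin count, and threshold are illustrative choices for the sketch, not parameters reported by the paper.

```python
import numpy as np

def color_histogram(frame, bins=8):
    """Normalized joint RGB histogram of an H x W x 3 uint8 frame."""
    hist, _ = np.histogramdd(frame.reshape(-1, 3), bins=(bins,) * 3, range=[(0, 256)] * 3)
    return hist / hist.sum()

def shot_boundaries(frames, threshold=0.6):
    """Indices i where frame i starts a new shot (histogram intersection drops below threshold)."""
    cuts = []
    prev = color_histogram(frames[0])
    for i, frame in enumerate(frames[1:], start=1):
        cur = color_histogram(frame)
        similarity = np.minimum(prev, cur).sum()   # histogram intersection, in [0, 1]
        if similarity < threshold:
            cuts.append(i)
        prev = cur
    return cuts

# Tiny synthetic demo: 10 dark frames followed by 10 bright frames -> one cut at frame 10.
rng = np.random.default_rng(1)
dark = rng.integers(0, 60, size=(10, 32, 32, 3), dtype=np.uint8)
bright = rng.integers(180, 255, size=(10, 32, 32, 3), dtype=np.uint8)
print(shot_boundaries(list(dark) + list(bright)))   # -> [10]
```

Clustering the resulting shots by key-frame similarity, as described above, then groups them into scene units.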
Then, object recognition over a set of predefined object categories is carried out on the key frames of each image scene clip, and the detection results are aggregated statistically to obtain an overall view of the image scene; the max-pooling and average-pooling results over all key frames are used as the scene clip feature for image scene training and recognition.

After the data set of key points is obtained, the times at which the left and right feet land are marked, and other preprocessing is performed manually, which forms the basis of the next stage of model training. We expect to fit the original data and the marked data through machine learning and finally train a model that can determine the footstep fall from the coordinate relationships of the human body nodes in each frame, which can be used for the recognition of images in actual work. In this step, two common machine learning algorithms, support vector machines (SVM) and the multilayer perceptron (MLP), were used. We observed their performance during the process, compared their results, and evaluated their differences.

For the acquisition of training samples, a GH5 camera was used to shoot individual motion image segments of several different states in the format of 1080p/60fps, including two camera positions and four dimensions of movement (Table 1), so as to ensure the diversity of character movement. Single-person rather than multi-person footage was used at the beginning of the experiment to control unnecessary variables as far as possible. In fact, the pose estimation accuracy of OpenPose for several people in the same image is basically the same as that for a single person, so if the final trained model can be applied to such results, multi-person processing only multiplies simple calculations. Also, the original reason for choosing 60 fps was to be consistent with the base of minutes and seconds, reducing some of the errors in the manual corrections that might occur in certain segments. However, the increase in the amount of test data caused unnecessary disturbance to the accuracy of the model in the experiment. After observation, it was found that a lower frame rate can solve the above two problems to some extent, which not only improves the accuracy but also reduces the calculation cost. Therefore, the frame rate and resolution of the shot footage were reduced to 720p/30fps. The data processing process is shown in Figure 3.

Figure 3 Data processing flow chart of film and television production.

The MLP or SVM algorithm has difficulty extracting features from the internal information of the picture. Each point is an absolute coordinate value, which means that when the character moves from left to right in the picture, the x value of the point will gradually increase, and when the character moves up and down in the picture, the y value will produce some noise. The absolute coordinate values of x and y can be converted to relative coordinate values by

$$x_{r} = \frac{x_{1} + x_{t}}{2}, \qquad x_{i}' = x_{i} - x_{r} \quad (9)$$

$$y_{r} = \frac{y_{1} + y_{t}}{2}, \qquad y_{i}' = y_{i} - y_{r} \quad (10)$$
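A small sketch of the coordinate conversion in Equations (9)-(10) is given below; the choice of the two reference keypoints (for example the two hips) and the array layout are illustrative assumptions about data such as OpenPose output, not details taken from the paper.

```python
import numpy as np

def to_relative(keypoints, ref_a=0, ref_b=1):
    """Convert absolute (x, y) keypoints of one frame to coordinates relative to the
    midpoint of two reference joints, as in Eqs. (9)-(10).

    keypoints: array of shape (num_joints, 2) with absolute pixel coordinates.
    ref_a, ref_b: indices of the two reference joints (illustrative default).
    """
    kp = np.asarray(keypoints, dtype=float)
    ref = (kp[ref_a] + kp[ref_b]) / 2.0   # (x_r, y_r)
    return kp - ref                       # (x_i - x_r, y_i - y_r) for every joint

# Demo: three joints of a single frame in absolute pixel coordinates.
frame = [(320, 180), (330, 260), (325, 400)]
print(to_relative(frame))
```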
Since OpenPose can infer the pose of occluded parts of the figure, when a body part is blocked or lies beyond the picture, the values of the corresponding points will be missing. As a result, the values of this part become discontinuous, which will also affect the fitting of the model. Therefore, the missing frame value is filled with the following method:

$$t_{n} = \frac{t_{n-1} + t_{n+1}}{2} \quad (11)$$

On the other hand, the characters will also move in the depth direction of the picture, which means that, in the same image, if the characters move in depth, the absolute distance between points will change according to the rule that nearer objects appear larger and farther objects appear smaller, which is equivalent to adding noise that cannot be ignored along the time axis. Before data input, this part of the noise is removed by the following normalization method:

$$d_{r} = \sqrt{\left(x_{1} - x_{r}\right)^{2} + \left(y_{1} - y_{r}\right)^{2}} \quad (12)$$

$$\left(x_{i}^{r},\, y_{i}^{r}\right) = \left(\frac{x_{i}'}{d_{r}},\, \frac{y_{i}'}{d_{r}}\right) \quad (13)$$

In the training process, one frame was taken as a unit sample, aiming to determine whether the movement period of the figure's footsteps was left or right from the joint coordinate information of each frame. In the training process, the movement modes of the characters were divided into the following categories for training:

(1) Fixed camera position, figure in a fixed position, footstep movement in four directions
(2) Fixed camera position, character stepping toward the depth of the picture
(3) Fixed camera position, character moving across the picture from left to right or from right to left
(4) The camera follows the character, and the character moves forward head-on
(5) The camera follows the character, and the character moves backward
(6) The camera follows the character, and the character moves sideways

The purpose is to make the generated image and the target image consistent in the low-dimensional space and to obtain a generated image with a clearer contour. The L1 loss between the generated image and the target image is divided into two parts. One part is the L1 loss between the output image and the target image after the damaged image passes through the generator network. The other part is the L1 loss between the output image G(y, z) and the target image after the target image itself passes through the generator network.

$$L_{x-G(x)} = \mathbb{E}_{x,y}\left[\left\| x - G(x, z) \right\|_{1}\right] \quad (14)$$

$$L_{y-G(y)} = \mathbb{E}_{x,y}\left[\left\| y - G(y, z) \right\|_{1}\right] \quad (15)$$

$$L = \lambda_{1}\left(L_{x-G(x)} + L_{y-G(y)}\right) \quad (16)$$

The application of the diffusion equation can be mainly divided into two categories. One is the basic iterative scheme, which makes the image gradually approach the desired effect by updating over time, represented by Perona and Malik's equation and the subsequent work on its improvement. This method performs backward diffusion as well as forward diffusion, so it is able to smooth the image and sharpen edges; however, it is not stable in practical applications because it is an ill-posed problem. The other category is based on the variational method, which smooths the image by defining an energy functional of the image and minimizing it, such as the widely used total variation (TV) model. It is more stable than the first method and has a clear geometric explanation; however, it does not sharpen edges because it lacks the ability of backward diffusion. By adjusting the scale parameters according to the noise intensity at each step of the iteration, the pseudo edges can be preserved, and the corresponding denoising method is adopted for the noise. Among the denoising methods for sonar images, the method based on the diffusion equation has some advantages that the classical algorithms do not have: it can remove the noise while keeping the details of the image. However, because diffusion equation denoising is an iterative operation, when the noise is large the operation speed is affected to a certain extent in order to remove the noise well. Nevertheless, after filtering, the noise information is obviously reduced, the image is clearer, the SNR is high, and the edges of the image are well maintained, with high peak signal-to-noise ratio and edge retention evaluation parameters.
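As a concrete reading of the two-part L1 objective in Equations (14)-(16) above, the sketch below evaluates the loss for a single sample. The generator is stubbed with a placeholder function and the weighting coefficient name is an assumption, since the paper gives no implementation details.

```python
import numpy as np

def l1(a, b):
    """Mean absolute error between two images (Eqs. (14)/(15) for a single sample)."""
    return float(np.mean(np.abs(a - b)))

def generator(img, z):
    """Placeholder generator G(., z): a blur-like stand-in used only for illustration."""
    return 0.5 * img + 0.5 * np.roll(img, 1, axis=0) + 0.01 * z

rng = np.random.default_rng(0)
x = rng.random((64, 64))           # damaged input image (illustrative)
y = rng.random((64, 64))           # target image (illustrative)
z = rng.standard_normal((64, 64))  # noise input to the generator

lam = 1.0                          # assumed weighting coefficient lambda_1 in Eq. (16)
loss_x = l1(x, generator(x, z))    # Eq. (14)
loss_y = l1(y, generator(y, z))    # Eq. (15)
total = lam * (loss_x + loss_y)    # Eq. (16)
print(f"L_x = {loss_x:.4f}, L_y = {loss_y:.4f}, L = {total:.4f}")
```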
## 5. Example Verification

Because different types of film and television images differ greatly in the length of shots and scene fragments, scene boundary detection algorithms also differ greatly in performance across different types of images. In order to evaluate accuracy and comprehensiveness, different types of movies are selected for evaluation. Among them, Hollywood's The Sixth Sense uses all the shots except the opening and closing credits, while for the other movies part of the footage in the middle of the film is selected for scene annotation. In the films A River Runs Through It, Forrest Gump, and Thanks.To.Family, the rhythm is relatively gentle and the changes between shots are relatively small. The Sixth Sense is among the violent material in the MediaEval Affect 2012 database; its plot is relatively compact, and some shots change greatly. Leo is an action movie that contains a lot of fighting, gun fighting, and similar shots, with drastic shot changes. These different genres and styles of films are used to ensure the reliability and comprehensiveness of the evaluation results. Table 2 shows the information about the reviewed movies.

Table 2 Image scene boundary detection test data information.

| Movie name | Movie style | Lens number | Scenario |
|---|---|---|---|
| A River Runs Through It (R.R) | Relieve | 145 | 18 |
| Forrest Gump (F.G) | Calm | 113 | 15 |
| The Sixth Sense (S.S) | Nervous | 962 | 67 |
| Leo (Leo) | Action | 172 | 25 |
| Thanks.To.Family (T.T.F) | Affection | 193 | 23 |

In the training process, the network often has difficulty converging because of excessive weight updates. To solve this problem, a cross-entropy loss function with a smoothing term is added in Lreal/false and Lreal/false∗. Figure 4 shows the comparison between the cross-entropy loss during training with and without the smoothing term, where the red solid line indicates that the smoothing term is added and the blue dotted line indicates that it is not. The sigmoid function outputs values between 0 and 1; when the output value approaches 0 or 1 during training, the corresponding log value fluctuates greatly, which makes gradient propagation unstable. Adding a smoothing term to restrain this fluctuation during training makes the network converge better and faster.

Figure 4 Comparison of the cross-entropy loss functions with and without smoothing.

Figure 5 shows how the data accuracy improves after each step of the data preprocessing method. The order is from left to right and from top to bottom. The ordinate of each plot is the precision value to which each step converges, and the abscissa is time (frames). The curves of different colors in each figure represent the fluctuation ranges of different feature values.

Figure 5 Accuracy of diffusion equation image recognition for film and television production.

Projection intervals of 10, 20, 30, 40, 50, 60, 70, 80, 90, and 100 are taken, and the relation between the projection interval and the between-class distance is plotted for the three types of curves, as shown in Figure 6. It can be seen that as the projection interval increases, the between-class distance gradually decreases to a certain level, while the calculation time of the invariant moments increases.
The higher the between-class distance, the higher the recognition rate; the shorter the calculation time, the smaller the computational load, so both the recognition time and the recognition rate need to be considered.

Figure 6 Relation curve between projection interval and between-class distance.

Object Bank features were extracted from the training data and statistics were computed. The statistical information obtained is shown in Figure 7, which presents the statistical results for the bedroom, dining room, living room, and street scenes, respectively. The statistical results show that the image features are consistent with people's cognition of images and image scenes.

Figure 7 Statistical results of Object Bank features in four different scenarios.

In view of the Gaussian noise in film and television production, this paper applies the commonly used image denoising methods to film and television production. It is found that these denoising methods destroy the details of the image while denoising, which will have an impact on subsequent recognition and detection work. The performance of image denoising methods based on the wavelet transform and of total variation denoising methods based on partial differential equations is analyzed. The experimental results show that both of them can preserve the details of the image, but the latter is better than the former. Aiming at the speckle noise caused by the imaging basis of film and television production, this paper applies the improved anisotropic diffusion equation to speckle suppression in film and television production and adopts the anisotropic diffusion model to remove the speckle noise. By modifying the diffusion coefficient, the diffusion equation can adjust the coefficient according to the noise of the image and be more sensitive to detail information such as the edges of the image.

## 6. Conclusion

On the basis of scene boundary detection and image scene recognition, and through structure analysis, a film and television image scene recognition system is formed: the film and television image is structured into scene semantic units and scene semantic annotation is automated, so as to realize scene-based semantic storage and indexing of images and provide a basis for content-based image retrieval. The improved anisotropic diffusion equation was applied to noise reduction of ultrasonic images; the experimental results show that, for ultrasonic images containing a great deal of speckle noise, the method reduces the noise, and its stability, anisotropy, computing time, and number of iterations are much better than those of the classical P-M equation and the Lin Shi operator, while its edge-preserving filtering performance has a clear advantage. As deep learning has begun to enter digital image processing, and convolutional neural networks in particular have achieved satisfactory initial results in image recognition and other fields, image recognition methods based on deep learning have also been proposed and developed in recent years, becoming a research hotspot that awaits further exploration.

--- *Source: 1008281-2021-10-04.xml*
1008281-2021-10-04_1008281-2021-10-04.md
36,486
Application of Image Recognition Method Based on Diffusion Equation in Film and Television Production
Liyuan Guo
Advances in Mathematical Physics (2021)
Mathematical Sciences
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2021/1008281
1008281-2021-10-04.xml
--- ## Abstract On the basis of studying the basic theory of anisotropic diffusion equation, this paper focuses on the application of anisotropic diffusion equation in image recognition film production. In order to further improve the application performance of P-M (Perona-Malik) anisotropic diffusion model, an improved P-M anisotropic diffusion model is proposed in this paper, and its application in image ultrasonic image noise reduction is discussed. The experimental results show that the model can effectively suppress the speckle noise and preserve the edge features of the image. Based on the image recognition technology, an image frame testing system is designed and implemented. The method of image recognition diffusion equation is used to extract and recognize the multilayer feature points of the test object according to the design of artificial neural network. To a certain extent, it improves the accuracy of image recognition and the audience rating of film and television. Use visual features of the film and television play in similarity calculation for simple movement scene segmentation problem, at the same time, the camera to obtain information, use the lens frame vision measuring the change of motion of the camera, and use weighted diffusion equation and the visual similarity of lens similarity calculation and motion information, by considering the camera motion of image recognition, effectively solve the sports scene of oversegmentation problem such as fighting and chasing. --- ## Body ## 1. Introduction With the advent of the Internet era, the network image traffic is increasing with each passing day. Most users like to watch movies or TV series with the characteristics of The Times; among which, most of the dramas account for a lot. Because the content of the play is old, the actors’ clothes and scenes in the film and television sometimes do not conform to the background at that time, resulting in great mistakes. Therefore, it is particularly important to ensure that the characters and scenes involved in each film or TV series are in line with the background at that time in order to improve users’ viewing experience and reduce the burden of producers of films and TV dramas. Through the image recognition of the diffusion equation, to detect the factors that do not conform to the historical scene in the film and television drama, greatly reduce the work of the director and producer, and solve some problems existing in the clothing and scene in the current film and television drama and life.Among various forms of digital images, film and television images are the most accessible and indispensable form of images in People’s Daily life. Like other digital images, film and television images are unstructured data in form, but they often have a strong plot structure in content, which is often shown as a combination of scenes connected or associated with each other in plot. This provides a certain factual basis for automatic segmentation of scene structure and scene recognition of film and television images. Generally, the description data of film and television image content includes the following forms: (1) For image metadata, image metadata records information about the image (including title and type) and the production of the image (director, cast, producer, distributor). The metadata of film and television images is generally provided by digital image distributors along with digital media resources [1]. 
(2) For image structure data, image structure usually refers to the logical structure between image frames and image fragments, such as the connection, switch, and transfer between shots [2]. (3) For image semantic data, semantic data describes the scene, action, story plot, theme, and other semantic content of the image itself [3], which is often obtained by identifying the features of image frames and audio data, as well as from subtitles and other auxiliary explanatory files.Using image processing and recognition technology, Muwen aims to develop software that can identify what an actor is wearing and whether the scene is appropriate for the shooting. Using the neural network technology of deep learning and based on the image recognition technology, the artificial neural network is designed according to the multilevel, which can extract, analyze, identify, and detect the feature points of the target and effectively find out the factors that do not conform to the historical scene in the film and television plays. ## 2. Related Work Image recognition has achieved great development with the help of deep learning technology and has been widely applied in various fields at home and abroad. Some scholars to apply image recognition technology content detection of image frames, in view of the existing retrieval scenario framework and clothing problems [4], the missing clothing information recognition for dress design optimization and scene design recognition problems, this paper proposes a new garment segmentation method and based on cross domain dictionary learning recognition of dress design. A new image information retrieval algorithm [5, 6] based on scale-invariant feature transform features is proposed, which is applied to content-based image information retrieval and improves the traditional image similarity method to accurately and quickly scan the content of the image. The classic deep network [7] is proposed to extract clothing and scene features, and the specialized data is trained repeatedly to make the network features more significant. Image recognition technology has more and more penetrated into daily life. In order to improve audience rating and make scenes of costume dramas more accurate, image recognition technology provides new ideas for whether scenes in films and TV dramas conform to historical background [8].The research on the structure analysis of film and television images can be divided into two categories: shot-based segmentation and scene-based segmentation from the granularity of time. In lense-based segmentation, the image is first detected and represented as a set of fixed or indefinite length image frames. A shot segmentation algorithm based on image color histogram [9] is proposed. This method and its deformation are widely used in image shot segmentation. Scene-based segmentation takes the background in the process of image plot development as a reference and clusters the temporal sequences with the same semantics to form a shot set representing the scene. Scene-based image segmentation is also called image scene boundary detection. In general, image scene boundary detection algorithms can be divided into two categories: methods based on domain-related prior models and domain-independent methods based on image production principles [10]. The method based on domain-related model needs to establish corresponding prior models according to image types and related domain knowledge [11]. 
Therefore, before this method can be applied to a new domain, a prior model that depends on domain-related expertise must be established, which limits its applicability. The domain-independent method, by contrast, is usually based on the principles of image production: image shots are clustered according to visual similarity under time constraints, shots that express the semantics of similar scenes within a continuous time span are grouped into shot clusters, and the scene is then constructed on this basis. This method does not require professional knowledge of related fields, so it is widely used in scene boundary detection of film and television images. Some scholars proposed describing the scene with an evolving scene graph model [12]. Each vertex in the graph represents a shot of a scene; each edge represents the visual similarity between two shots in position and time; the full-link method is used to calculate the similarity of shots, and hierarchical clustering is used to form subgraphs, each cluster of which represents a scene. A similar graph clustering method is used to find story units in the image [13]; to detect changes of image scenes with vision algorithms, the similarity between every pair of shots must be calculated. When the image file contains a large number of shots (for example, the number of shots in a feature film can reach thousands or more), a great deal of time and computation is required, which leads to low efficiency. At the same time, the shots of one scene must be continuous in time, so shot clustering can be restricted to a limited time range. Literature [14] puts forward a shot clustering method based on shot similarity within a fixed time window T, but because the pace of the plot and the story rhythm differ between images, the length of shots also varies greatly, so a window with a fixed number of shots is more reasonable than a fixed time window. Literature [15] proposed an image scene detection algorithm based on a sliding window with a fixed shot-window length. Literature [16] analyzes the background pictures within the shots of the same scene to complete scene boundary detection of film and television images. For scene recognition of images, the low-level features of the image are usually used, and the mapping from low-level features to scene categories is learned by machine learning methods. According to the type of feature, these methods can generally be divided into scene recognition methods based on global features and methods based on local features. A typical global feature is the GIST descriptor proposed in [17, 18], which uses frequency-domain information of the image scene to describe its global shape; this method achieves good results in the recognition and classification of outdoor scenes, but its recognition of indoor scenes is not very satisfactory. Compared with global features, local features describe the detailed information of the image better and therefore perform better in many recognition tasks. Typical image scene recognition methods based on local features include the hierarchical model classification method, the spatial pyramid matching method, and the scene recognition method based on the CENTRIST descriptor [19].
The above methods can achieve good results in recognizing specific image scene categories, but the results are often poor when they are applied directly to film and television image scene recognition. Moreover, most current research on image scene recognition addresses classification and identification within a particular image category, such as football and table tennis in sports images or advertisement recognition in image streams, while the scene classification problem for video images is comparatively complex: for artistic reasons, filming often involves many changes in shooting angle and light intensity, as well as a great deal of camera movement, long takes, and other shooting techniques, all of which make it more difficult to recognize the scenes of film and television images. In literature [20, 21], action recognition and scene recognition are combined, and a variety of local features are extracted for action and scene training and recognition. At the same time, traditional low-level features are used to represent images, and because low-level features often contain very limited semantic information, they are often inadequate when dealing with complex semantics-based tasks [22, 23]; as a result, a given low-level feature often works well for a specific task but performs only moderately on other tasks. Using intermediate semantics to represent images has long been an important research direction. The object-based representation method proposed in [24–26] achieves good results in image scene recognition: it uses the objects contained in the image as intermediate semantic features and uses an SVM classifier for scene recognition. However, all of the above methods have some limitations. For example, when the temporal locality of shots in the same scene is not taken into account, a large amount of calculation is required; when only the color similarity of shots is considered, the similarity of shots in a moving state is smaller than that in a static state, which leads to oversegmentation. In this paper, the global similarity between shots is used, the motion characteristics of the shots are added, and a sliding-window technique based on the number of shots is then used to detect image scenes.

## 3. Image Recognition Based on Anisotropic Diffusion Equation

### 3.1. Diffusion Model of Heat Conduction Equation

The intermediate score vector of a set of object perception filters is used as the image feature, and a simple linear classifier is used for the scene recognition task. The proposed image recognition method is shown in Figure 1. Image recognition using the diffusion equation can easily distinguish and classify different scenes [27]. At the same time, with the popularization of the Internet, it is becoming easier and easier to obtain large-scale digital images, and the corresponding object recognition model can easily be trained on annotated object images.

Figure 1 Image recognition framework of diffusion equation.

After discretization, with $I_s^t$ denoting the image value at pixel $s$ and iteration $t$, $\eta_s$ the neighborhood of $s$, and $\lambda$ the time step, the classical P-M equation is

$$I_s^{t+\Delta t}=I_s^{t}+\frac{\lambda}{\left|\eta_s\right|}\sum_{p\in\eta_s}c\left(\nabla I_{s,p}\right)\nabla I_{s,p}. \tag{1}$$

By analyzing the characteristics of the anisotropic diffusion function $c(s)$, where $s$ is the gradient magnitude that drives the diffusion, and following the design principles for diffusion coefficients, the following diffusion coefficient is constructed:

$$c\left(s\right)=\frac{1-\left(s/K\right)^{2}}{1+\left(s/K\right)^{2}}, \tag{2}$$

where $K$ represents the threshold parameter of the edge amplitude (a minimal numerical sketch of this iteration is given at the end of Section 3.2).
An automatic estimation method for the diffusion threshold is

$$K=T_{0}e^{-ts}, \tag{3}$$

$$T_{0}=\frac{1}{MN}\sum_{i,j}^{M,N}\operatorname{var}\left(i,j\right), \tag{4}$$

$$\operatorname{var}\left(i,j\right)=\frac{1}{S}\sum_{s}\left(I_{s}-\bar{I}_{i,j}\right)^{2}, \tag{5}$$

where $M\times N$ is the image size, the sum in (5) runs over the $S$ pixels of a local window centered at $(i,j)$, and $\bar{I}_{i,j}$ is the local mean.

### 3.2. Image Recognition Interaction of Diffusion Equation

For image matching based on feature points, the relevant elements of a scene picture similar to the image are obtained by locking onto one frame of the image. The feature points of the two images are extracted at edges, and the pixel gray levels of the corresponding feature point regions are calculated. The correlation coefficient is used to determine whether the two images belong to the same period. The image matching process based on feature points is shown in Figure 2.

Figure 2 Image matching flow based on feature points.

The deformable parts object model mainly solves the problem of object recognition under different viewing angles and deformations. The object is represented as a set of deformable parts in relative positions. Each part model describes the local features of the object, and the relative positions of the parts can vary through spring-like connections. Discriminative learning is carried out from images labelled only with the bounding box of the whole object. The deformable parts model has consistently achieved the best results in PASCAL VOC recognition tasks. The deformable parts model uses 3-dimensional gradient direction histogram features and, as an alternative, a feature vector obtained by combining both contrast-sensitive and contrast-insensitive features; the detection scores of small and large objects in the image are calibrated. The gradient direction histogram is currently one of the most commonly used image feature descriptors for object detection: it computes gradient direction histograms of the image over locally dense cells of uniform size and uses overlapping local contrast normalization to improve the accuracy of object detection. Let $(x,y)$ be a pixel of the image, and let $u(x,y)$ and $r(x,y)$ be the gradient direction and gradient amplitude of the image at position $(x,y)$, respectively. The quantized orientation is

$$T\left(x,y\right)=\operatorname{round}\!\left(\frac{t\,u\left(x,y\right)}{2\pi}\right)\bmod t, \tag{6}$$

where $t$ is the number of orientation bins. The wavelet denoising method, which is effective at removing Gaussian noise, is compared with the nonlinear total variation partial differential equation denoising method. In wavelet denoising, different thresholds are selected for different decomposition scales:

$$t_{j}=\frac{\delta\lg T}{\lg\left(j+1\right)}, \tag{7}$$

where $T$ is the signal length, $j$ is the current decomposition level, and $\delta$ is the base threshold. The wavelet coefficients are then estimated by hard thresholding:

$$\bar{w}_{j,k}=\begin{cases}w_{j,k}, & \left|w_{j,k}\right|\ge t_{j},\\ 0, & \text{otherwise}.\end{cases} \tag{8}$$

The measurement parameters obtained with the image recognition methods of equations (1) to (8) are listed in Table 1.

Table 1 Performance analysis of wavelet threshold and nonlinear total variation image recognition.

| Variance | Image recognition method | P | PSNR |
| --- | --- | --- | --- |
| 0.01 | Coif2 | 0.802 | 19.69 |
| 0.01 | Sym4 | 0.8529 | 20.46 |
| 0.01 | Total variation | 0.973 | 26.58 |
| 0.05 | Coif2 | 0.785 | 16.89 |
| 0.05 | Sym4 | 0.835 | 16.77 |
| 0.05 | Total variation | 0.979 | 21.67 |
| 0.1 | Coif2 | 0.795 | 14.56 |
| 0.1 | Sym4 | 0.986 | 15.78 |
| 0.1 | Total variation | 0.967 | 18.56 |
| 0.15 | Coif2 | 0.702 | 12.78 |
| 0.15 | Sym4 | 0.795 | 13.65 |
| 0.15 | Total variation | 0.857 | 15.64 |
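To make the procedure above concrete, the following is a minimal Python/NumPy sketch of one way the P-M iteration of equation (1) could be implemented with the diffusion coefficient of equation (2), an exponentially decaying edge threshold in the spirit of equation (3), and the scale-dependent hard thresholding of equation (8). The function names, the 4-neighbour discretization with periodic boundaries, and the parameter defaults are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def pm_coefficient(grad, K):
    # Diffusion coefficient of Eq. (2); it turns negative for |grad| > K,
    # which gives the backward-diffusion (edge-sharpening) behaviour
    # discussed later in the paper.
    r = (grad / K) ** 2
    return (1.0 - r) / (1.0 + r)

def pm_diffusion(image, n_iter=20, K0=None, lam=0.2, decay=0.05):
    """Minimal Perona-Malik iteration (Eq. (1)) on a 2-D float image.

    K0    : initial edge threshold (estimated from the image if None)
    lam   : time step, kept small for numerical stability
    decay : exponential decay rate of K over iterations (cf. Eq. (3))
    """
    u = np.asarray(image, dtype=np.float64).copy()
    if K0 is None:
        # crude stand-in for the variance-based estimate of Eqs. (4)-(5)
        K0 = float(np.std(u)) + 1e-8
    for t in range(n_iter):
        K = K0 * np.exp(-decay * t)
        # finite differences towards the four neighbours
        # (periodic boundary handling via np.roll, for brevity)
        dN = np.roll(u, -1, axis=0) - u
        dS = np.roll(u, 1, axis=0) - u
        dE = np.roll(u, -1, axis=1) - u
        dW = np.roll(u, 1, axis=1) - u
        u = u + lam * (pm_coefficient(np.abs(dN), K) * dN +
                       pm_coefficient(np.abs(dS), K) * dS +
                       pm_coefficient(np.abs(dE), K) * dE +
                       pm_coefficient(np.abs(dW), K) * dW)
    return u

def hard_threshold(coeffs, t_j):
    # Scale-dependent hard thresholding of wavelet coefficients, Eq. (8).
    coeffs = np.asarray(coeffs, dtype=np.float64)
    return np.where(np.abs(coeffs) >= t_j, coeffs, 0.0)
```

Applied to a noisy grayscale frame, `pm_diffusion(frame, n_iter=30)` smooths homogeneous regions while the sign change of the coefficient beyond the threshold tends to preserve and mildly sharpen edges, which is the behaviour that Table 1 compares against wavelet thresholding and total variation denoising.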
## 4. Image Scene Structure Analysis and Recognition Based on Film Diffusion Equation

With the rapid development of digital multimedia technology and the popularization of the Internet, digital image resources have increasingly become an important part of people's daily entertainment. At the same time, because film and television images are unstructured data, how to effectively organize and manage film and television resources and give users the ability to quickly locate content of interest is an important topic in the study of image content. The video image scene structure analysis and recognition system is aimed at segmenting the video image into scenes and obtaining structural units of the video whose semantics is the scene, so as to realize scene-based semantic structured storage and management of images. At the same time, scene recognition is carried out on the scene fragments of film and television images, and scene tags are automatically annotated to obtain the scene semantics, so as to provide scene-semantics-based retrieval for film and television images. For an input image sequence, the scene structure is analyzed first. Scene change detection is based on shots: the sequence is first segmented into shots according to the visual similarity of image frames, and each shot is represented by several key frames; the visual and motion characteristics of the shot key frames are then used to cluster similar shots, and by applying the rules of scene development, similar adjacent and interleaved shots are merged to obtain the scene structure units of the image. Then, the key frames of each scene fragment are tested against a predefined set of recognizable objects, and the detection statistics of these objects over the image frames are aggregated to obtain an overall view of the image scene; the max-pooled and average-pooled results over all key frames are used as the scene-fragment feature for scene training and recognition. After the data set of key points is obtained, the moments at which the left and right feet land are marked, and other preprocessing is performed manually, which forms the basis of the subsequent model training. We expect to fit the original data and the labelled data through machine learning and finally train a model that can determine the footfall from the coordinate relationship of the human body joints in each frame, which can then be used for the recognition of images in practical work. In this step, two common algorithms, support vector machines (SVM) and the multilayer perceptron (MLP), were used; we observed their performance during the process, compared their results, and evaluated their differences. For the acquisition of training samples, a GH5 camera was used to shoot motion image segments of an individual in several different states in 1080p/60fps format, including two camera positions and four dimensions of movement (Table 1), so as to ensure the diversity of character movement. A single person rather than a group was used at the beginning of the experiment to control unnecessary variables as far as possible. In fact, the pose estimation accuracy of Openpose for several people in the same image is basically the same as that for a single person.
If the final trained model can be applied to such results, multi-person image processing only amplifies simple calculations. Also, the original reason for choosing 60 fps was to be consistent with the base units of minutes and seconds, reducing errors in the manual corrections that might be needed in certain segments. However, the increase in the amount of test data introduced unnecessary disturbance into the accuracy of the model in the experiment. After observation, it was found that a lower frame rate can solve the above two problems to some extent, which not only improves the accuracy but also reduces the calculation cost. Therefore, the frame rate and resolution of the footage were reduced to 720p/30fps. The data processing flow is shown in Figure 3.

Figure 3 Data processing flow chart of film and television production.

It is difficult for the MLP or SVM algorithm to extract features from the internal information of the picture. Each point is an absolute coordinate value, which means that when the character moves from left to right in the picture, the $x$ value of the point gradually increases, and when the character moves up and down in the picture, the $y$ value produces corresponding noise. The absolute coordinate values of $x$ and $y$ can be converted to relative coordinate values by

$$x_{r}=\frac{x_{1}+x_{t}}{2},\qquad x_{i}^{r}=x_{i}-x_{r}, \tag{9}$$

$$y_{r}=\frac{y_{1}+y_{t}}{2},\qquad y_{i}^{r}=y_{i}-y_{r}, \tag{10}$$

where $(x_r, y_r)$ is the midpoint of two reference joints. Since Openpose can only infer the pose of the parts of the figure that appear inside the picture, when a body part of the figure is occluded or beyond the picture, the coordinates of those points are missing. As a result, the values of this part become discrete, which also affects the fitting of the model. Therefore, a missing frame is filled with the following method:

$$t_{n}=\frac{t_{n-1}+t_{n+1}}{2}. \tag{11}$$

On the other hand, the characters also move in the depth direction of the picture, which means that, within the same image, if the character moves in depth, the absolute distance between points changes according to the rule that near objects appear large and far objects appear small, which is equivalent to adding non-negligible noise along the time axis. Before the data are input, this part of the noise is removed by the following normalization method:

$$d_{r}=\sqrt{\left(x_{1}-x_{r}\right)^{2}+\left(y_{1}-y_{r}\right)^{2}}, \tag{12}$$

$$\left(\hat{x}_{i},\hat{y}_{i}\right)=\left(\frac{x_{i}^{r}}{d_{r}},\frac{y_{i}^{r}}{d_{r}}\right), \tag{13}$$

where $d_r$ is the distance from the reference joint $(x_1, y_1)$ to the reference midpoint and serves as a scale factor (a sketch of this preprocessing is given after the list of movement categories below). In the training process, one frame was taken as a unit sample, with the aim of determining whether the movement phase of the figure's footsteps was left or right from the joint coordinate information of each frame. During training, the movement modes of the characters are divided into the following categories:

(1) Fixed camera position, footstep movement with the figure fixed in place, in four directions
(2) Fixed camera position, the figure moving toward the depth of the frame
(3) Fixed camera position, the figure moving across the picture from left to right or from right to left
(4) The camera follows the character, and the character moves forward head-on
(5) The camera follows the character, and the character moves backward
(6) The camera follows the character, and the character moves sideways
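As a concrete illustration of the keypoint preprocessing in equations (9)–(13), the following is a minimal Python/NumPy sketch of how the absolute Openpose coordinates of one frame could be converted to translation- and scale-normalized relative coordinates, and how a missing frame could be filled according to equation (11). The choice of reference joints, the handling of degenerate frames, and all names are assumptions of this sketch rather than the authors' exact pipeline.

```python
import numpy as np

def normalize_frame(points, ref_a=8, ref_b=11):
    """points: (N, 2) array of absolute (x, y) keypoints for one frame.

    The frame is centred on the midpoint of two reference joints
    (Eqs. (9)-(10)) and divided by the distance from joint ref_a to that
    midpoint (Eqs. (12)-(13)), so that movement in the depth direction
    no longer changes the coordinate scale. Which joints serve as
    references (here indices 8 and 11, e.g. the hips) is an assumption.
    """
    pts = np.asarray(points, dtype=np.float64)
    center = (pts[ref_a] + pts[ref_b]) / 2.0      # (x_r, y_r)
    rel = pts - center                            # Eqs. (9)-(10)
    scale = np.linalg.norm(pts[ref_a] - center)   # Eq. (12)
    if scale == 0.0:
        scale = 1.0                               # degenerate frame
    return rel / scale                            # Eq. (13)

def fill_missing_frame(prev_frame, next_frame):
    # Eq. (11): a missing frame is replaced by the average of its
    # temporal neighbours.
    return (np.asarray(prev_frame, dtype=np.float64) +
            np.asarray(next_frame, dtype=np.float64)) / 2.0
```

Each normalized frame can then be flattened into a feature vector and fed, one frame per sample, to the SVM or MLP classifier described above.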
The purpose is to make the generated image and the target image consistent in a low-dimensional space and to obtain a generated image with a clearer contour. The L1 loss between the generated image and the target image is divided into two parts. One part is the L1 loss between the output image and the target image after the damaged image passes through the generation network, and the other part is the L1 loss between the output image and the target image after the target image itself passes through the generation network:

$$L_{x-G\left(x\right)}=\mathbb{E}_{x,y}\left[\left\|x-G\left(x,z\right)\right\|_{1}\right], \tag{14}$$

$$L_{y-G\left(y\right)}=\mathbb{E}_{x,y}\left[\left\|y-G\left(y,z\right)\right\|_{1}\right], \tag{15}$$

$$L=\lambda_{L1}\left(L_{x-G\left(x\right)}+L_{y-G\left(y\right)}\right), \tag{16}$$

where $\lambda_{L1}$ is the weighting coefficient of the L1 terms. The applications of the diffusion equation can be divided mainly into two categories. One is the basic iterative scheme, which makes the image gradually approach the desired effect by updating over time, represented by Perona and Malik's equation and the subsequent work on its improvement. This method performs backward diffusion as well as forward diffusion, so it can both smooth the image and sharpen edges; however, it is not stable in practical applications because it is an ill-posed problem. The other is based on the idea of the variational method, which smooths the image by defining an energy functional of the image and minimizing it, such as the widely used total variation (TV) model. It is more stable than the first method and has a clear geometric interpretation; however, it does not sharpen edges because it has no backward-diffusion capability. By adjusting the scale parameter according to the noise intensity at each step of the iteration, pseudo-edges can be avoided, and a denoising method matched to the noise is adopted. Among the denoising methods for sonar images, the method based on the diffusion equation has advantages that the classical algorithms do not have: it can remove the noise while keeping the details of the image. However, because diffusion equation denoising is an iterative operation, when the noise is strong the operation speed is affected to a certain extent in order to remove the noise well. Nevertheless, after filtering, the noise is obviously reduced, the image is clearer, the SNR is high, and the edges of the image are well maintained, with high peak signal-to-noise ratio and edge-retention evaluation parameters.

## 5. Example Verification

Because different types of film and television images differ greatly in the length of shots and scene fragments, different scene boundary detection algorithms perform very differently on different types of images. In order to evaluate accuracy and comprehensiveness, films of different types were selected for evaluation. Among them, Hollywood's The Sixth Sense uses all the shots except the opening and closing credits, while for the other movies part of the footage in the middle of the film was selected for scene annotation. In the films A River Runs Through It, Forrest Gump, and Thanks To Family, the rhythm is relatively gentle and the changes between shots are relatively small. The Sixth Sense is taken from the violent-scenes database of MediaEval Affect 2012; its plot is relatively compact, and some shots change greatly. Leo is an action movie that contains a lot of fighting, gunfights, and similar shots, with drastic changes between shots. These different genres and styles of films are used to ensure the reliability and comprehensiveness of the evaluation results. Table 2 shows the information about the evaluated movies.

Table 2 Image scene boundary detection test data information.

| Movie name | Movie style | Number of shots | Number of scenes |
| --- | --- | --- | --- |
| A River Runs Through It (R.R) | Relaxed | 145 | 18 |
| Forrest Gump (F.G) | Calm | 113 | 15 |
| The Sixth Sense (S.S) | Tense | 962 | 67 |
| Leo (Leo) | Action | 172 | 25 |
| Thanks.To.Family (T.T.F) | Affection | 193 | 23 |

During training, the network often has difficulty converging to a good local optimum because of excessively large weight updates.
To solve this problem, a cross-entropy loss function with a smoothing term is added to $L_{\text{real/false}}$ and $L_{\text{real/false}}^{*}$ (a sketch of such a smoothed objective is given at the end of this section). Figure 4 shows the comparison of the cross-entropy loss during training with and without the smoothing term, where the red solid line indicates that the smoothing term is added and the blue dotted line indicates that it is not. The sigmoid function outputs values between 0 and 1; when the output value approaches 0 or 1 during training, the corresponding log value fluctuates greatly, which makes gradient propagation unstable. Adding a smoothing term to restrain this fluctuation, i.e. adding a smoothed loss function during training, allows the network to converge better and faster.

Figure 4 Comparison of cross-entropy loss functions.

Figure 5 shows how the data accuracy improves after each step of the data preprocessing method. The order is from left to right and from top to bottom. The ordinate of each plot is the value to which the precision converges after each step, and the abscissa is time (frames). The curves of different colors in each figure represent the fluctuation ranges of different feature values.

Figure 5 Accuracy of diffusion equation image recognition for film and television production.

Projection intervals of 10, 20, 30, 40, 50, 60, 70, 80, 90, and 100 are taken, and the relation between the projection interval and the between-class distance for the three types of curves is shown in Figure 6. It can be seen that as the projection interval increases, the between-class distance gradually decreases to a certain level, while the computation time of the invariant moments increases. A larger between-class distance gives a higher recognition rate, and a shorter calculation time means less computation, so both recognition time and recognition rate must be considered.

Figure 6 Relation curve between projection interval and between-class distance.

Object Bank features were extracted from the training data and statistics were computed on them. The statistical information obtained is shown in Figure 7, which presents the statistical results for the bedroom, dining room, living room, and street scenes, respectively. The statistical results show that the image features are consistent with people's understanding of images and image scenes.

Figure 7 Statistical results of Object Bank features in four different scenarios.

In view of the Gaussian noise in film and television production, this paper applies commonly used image denoising methods to film and television production. It is found that these denoising methods destroy the details of the image while denoising, which affects subsequent recognition and detection. The performance of the image denoising method based on the wavelet transform and of the total variation denoising method based on the partial differential equation is analyzed. The experimental results show that both of them preserve the details of the image well, but the latter is better than the former. Aiming at the speckle noise caused by the imaging process of film and television production, this paper applies the improved anisotropic diffusion equation to speckle suppression and adopts the anisotropic diffusion model to remove the speckle noise. By modifying the diffusion coefficient, the diffusion equation can adapt the coefficient to the noise in the image and be more sensitive to detail information such as image edges.
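To tie together the L1 reconstruction terms of equations (14)–(16) and the smoothing term discussed at the beginning of this section, the following is a minimal Python/NumPy sketch of one way such a smoothed training objective could be written. The smoothing constant `eps`, the weighting coefficient `lambda_l1`, and the function names are assumptions of this sketch, not the exact objective used in the experiments.

```python
import numpy as np

def l1_term(target, generated):
    # One L1 reconstruction term, cf. Eqs. (14)-(15): mean absolute
    # difference between a target image and the generator output.
    return np.mean(np.abs(np.asarray(target) - np.asarray(generated)))

def smoothed_bce(pred, label, eps=0.05):
    """Binary cross-entropy on sigmoid outputs with smoothing.

    Clipping the predictions and pulling the labels away from exact
    0/1 bounds the log terms, which is the stabilising effect
    attributed to the smoothing term in the text (Figure 4).
    """
    pred = np.clip(np.asarray(pred, dtype=np.float64), eps, 1.0 - eps)
    label = np.asarray(label, dtype=np.float64) * (1.0 - eps) + 0.5 * eps
    return -np.mean(label * np.log(pred) + (1.0 - label) * np.log(1.0 - pred))

def total_loss(x, y, g_x, g_y, d_real, d_fake, lambda_l1=10.0):
    # Eq. (16)-style combination: weighted L1 terms for both image pairs
    # plus the smoothed real/false discrimination losses.
    rec = lambda_l1 * (l1_term(x, g_x) + l1_term(y, g_y))
    adv = (smoothed_bce(d_real, np.ones_like(d_real, dtype=np.float64)) +
           smoothed_bce(d_fake, np.zeros_like(d_fake, dtype=np.float64)))
    return rec + adv
```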
## 6. Conclusion

On the basis of scene boundary detection, scene recognition, and structure analysis of film and television images, a scene recognition system for film and television images is formed: the film and television image is structured into scene semantic units, and scene semantic annotation is automated, so as to realize scene-based semantic storage and indexing of images and provide a basis for content-based image retrieval. The improved anisotropic diffusion equation was applied to noise reduction of ultrasonic images; the experimental results show that for ultrasonic images containing a large amount of speckle noise, this noise reduction method improves the stability of the anisotropy and the computing time, its number of iterations is much lower than that of the classical P-M equation and the Lin Shi operator, and its edge-preserving filtering performance has a clear advantage. As deep learning has begun to enter digital image processing, and convolutional neural networks in particular have achieved satisfactory initial results in image recognition and related fields, image recognition methods based on deep learning have also been proposed and developed in recent years, becoming a research hotspot that awaits further exploration.

---

*Source: 1008281-2021-10-04.xml*
2021
# Ciprofloxacin-ResistantPseudomonas aeruginosa Lung Abscess Complicating COVID-19 Treated with the Novel Oral Fluoroquinolone Delafloxacin **Authors:** Jürgen Panholzer; Matthias Neuboeck; Guangyu Shao; Sven Heldt; Markus Winkler; Paul Greiner; Norbert Fritsch; Bernd Lamprecht; Helmut Salzer **Journal:** Case Reports in Pulmonology (2022) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2022/1008330 --- ## Abstract Purpose. We report the development of a lung abscess caused by a ciprofloxacin-resistant Pseudomonas aeruginosa in a patient with COVID-19 on long-term corticosteroid therapy. Successful antimicrobial treatment included the novel oral fluoroquinolone delafloxacin suggesting an oral administration option for ciprofloxacin-resistant Pseudomonas aeruginosa lung abscess. Case Presentation. An 86-year-old male was admitted to the hospital with fever, dry cough, and fatigue. PCR testing from a nasopharyngeal swab confirmed SARS-CoV-2 infection. An initial CT scan of the chest showed COVID-19 typical peripheral ground-glass opacities of both lungs. The patient required supplemental oxygen, and anti-inflammatory treatment with corticosteroids was initiated. After four weeks of corticosteroid therapy, the follow-up CT scan of the chest suddenly showed a new cavernous formation in the right lower lung lobe. The patient’s condition deteriorated requiring high-flow oxygen support. Consequently, the patient was transferred to the intensive care unit. Empiric therapy with intravenous piperacillin/tazobactam was started. Mycobacterial and fungal infections were excluded, while all sputum samples revealed cultural growth of P. aeruginosa. Antimicrobial susceptibility testing showed resistance to meropenem, imipenem, ciprofloxacin, gentamicin, and tobramycin. After two weeks of treatment with intravenous piperacillin/tazobactam, the clinical condition improved significantly, and supplemental oxygen could be stopped. Subsequently antimicrobial treatment was switched to oral delafloxacin facilitating an outpatient management. Conclusion. Our case demonstrates that long-term corticosteroid administration in severe COVID-19 can result in severe bacterial coinfections including P. aeruginosa lung abscess. To our knowledge, this is the first reported case of a P. aeruginosa lung abscess whose successful therapy included oral delafloxacin. This is important because real-life data for the novel drug delafloxacin are scarce, and fluoroquinolones are the only reliable oral treatment option for P. aeruginosa infection. Even more importantly, our case suggests an oral therapy option for P. aeruginosa lung abscess in case of resistance to ciprofloxacin, the most widely used fluoroquinolone in P. aeruginosa infection. --- ## Body ## 1. Introduction In December 2019, the novel severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) emerged causing coronavirus disease 2019 (COVID-19), which rapidly became a global health emergency. The clinical spectrum ranges from asymptomatic to COVID-19 typical symptoms including high fever, dry cough, shortness of breath, fatigue, myalgia, headache, abdominal discomfort, or loss of taste and smell [1]. Risk factors for a severe course of diseases or death are amongst others high age (>60 years), male gender, hypertension, COPD, and diabetes [2]. Diagnosis is based on PCR or antigen testing from a naso- or oropharyngeal swab. 
Laboratory findings often include leucopenia and thrombocytopenia, and the radiological pattern typically shows peripheral ground-glass opacities of both lungs [3]. Several agents are approved or are under investigation, including antivirals, antibodies, immunomodulators, and corticosteroids. The latter are established as standard of care in severe and critically ill COVID-19 patients since a multicenter randomized controlled trial has shown a significant reduction in the duration of mechanical ventilation and in mortality in moderate-to-severe ARDS [4, 5]. In about 7% of patients, COVID-19 is complicated by secondary coinfections. While bacterial coinfections emerge in up to 65% of influenza patients, COVID-19 seems to have a rather low coinfection rate [6, 7]. This is surprising as COVID-19 is associated with several immunosuppressive factors such as high age, diabetes, and corticosteroid therapy [3, 4]. If bacterial coinfection appears, it is often associated with a severe course of COVID-19 and with a poorer outcome [8]. Bacterial coinfection may be caused by Pseudomonas aeruginosa, which is usually treated with piperacillin/tazobactam, ceftazidime, carbapenems, or ciprofloxacin; however, in case of resistance, treatment options are very limited. Delafloxacin (Melinta Therapeutics, Connecticut, USA) is a novel fluoroquinolone for respiratory tract and skin infections addressing P. aeruginosa and other multidrug-resistant pathogens such as methicillin-resistant Staphylococcus aureus (MRSA). Compared to other fluoroquinolones, delafloxacin is also available in an oral formulation, while most other antibacterial alternatives for P. aeruginosa can only be administered intravenously [9, 10]. We report the development of a lung abscess caused by a ciprofloxacin-resistant Pseudomonas aeruginosa in a patient with COVID-19 on long-term corticosteroid therapy. Successful antimicrobial treatment included the novel oral fluoroquinolone delafloxacin.

## 2. Case Presentation

An 86-year-old man was admitted to the hospital on July 20th, 2020, with fever, dry cough, and fatigue. His medical history revealed arterial hypertension, recurrent pulmonary embolism in 1993 and 2007, and a prostate carcinoma. In the last days before admission, he had taken ciprofloxacin 500 mg two times daily (bid) because of a recent urinary tract infection. His vital signs were normal. Auscultation revealed fine crackles in both basal lung lobes. Cardiac, abdominal, and neurological examinations were unremarkable. Laboratory tests showed leucopenia of 4.07 G/L (3.9-8.8 G/L), a decreased lymphocyte count of 0.3 G/L (1.0-4.8 G/L), and an elevated C-reactive protein (CrP) level of 7.73 mg/dL (0-0.5 mg/dL), while the procalcitonin level was normal. Polymerase chain reaction (PCR, Roche, Basel, Switzerland) from a nasopharyngeal swab confirmed SARS-CoV-2 infection. A computed tomography (CT) scan of the chest showed peripherally distributed ground-glass opacities in all lung lobes (Figures 1(a) and 1(e)). Treatment with methylprednisolone 32 mg once a day (od) and empiric antimicrobial therapy with azithromycin 500 mg was initiated. Blood cultures showed no bacterial growth. The patient’s condition deteriorated, and antimicrobial therapy was switched several times (Figure 2). A follow-up chest CT scan on August 17th, 2020, showed a new cavernous formation in the right lower lung lobe (Figures 1(b) and 1(f)).
Due to the radiological presentation, a fungal infection was suspected, and fluconazole 200 mg was started.Figure 1 Axial chest CT scans showing peripheral ground-glass opacities of both lungs at hospital admission (a, e), a large cavitary lesion at the right inferior lung lobe at the end of corticosteroid treatment (b, f), a declining cavitary lesion at the end of antibacterial therapy (c, g), and a small consolidation at two-month follow-up (d, f). (a)(b)(c)(d)(e)(f)(g)(h)Figure 2 Timeline showing the key parameters and therapies. KUK = Kepler University Hospital; ICU = intensive care unit; Aug = August; Sept = September; Oct = October; Nov = November; Dec = December; HFNC = high-flow nasal cannula oxygen; CRP = C-reactive protein; SARS-CoV-2 = severe acute respiratory syndrome coronavirus type 2; PCR = polymerase chain reaction; Pip/Taz = piperacillin/tazobactam.As the patient’s condition further deteriorated and the oxygen demand increased, the patient was transferred to the local university hospital. Antibacterial treatment with meropenem 2 g three times a day (tid) i.v. and corticosteroid therapy was continued, while antifungal treatment was switched to voriconazole 200 mg bid intravenously after a loading dose on day 1.Again, blood samples were negative for bacterial growth. The galactomannan test (IMMY, Oklahoma, USA) from serum was negative. Microscopy for acid-fast bacilli as well as PCR forM. tuberculosis-complex from sputum was negative. Sputum cultures did not show fungal growth but revealed growth of P. aeruginosa on August 25th, 2020. Moreover, multiple skin swabs (throat, groin, and perianal) were also positive for P. aeruginosa. Bronchial secretion obtained during bronchoscopy from the lower right lung lobe also revealed cultural growth of P. aeruginosa, while the galactomannan test and fluorescence microscopy for Pneumocystis jirovecii from bronchoalveolar lavage were negative.Antimicrobial susceptibility testing (Thermo Fisher Scientific, Massachusetts, USA) ofP. aeruginosa showed resistance to meropenem, imipenem, ciprofloxacin, gentamicin, and tobramycin. Antibacterial treatment was switched on August 28th, 2020, to piperacillin/tazobactam 4.5 g tid i.v., while antifungal treatment with voriconazole was stopped. This targeted treatment regime resulted in rapid clinical and radiological improvement enabling to stop oxygen supplementation therapy on September 11th, 2020. Bacterial sputum conversion was achieved on September 21st, 2020. The first negative SARS-CoV-2 PCR was documented on September 1st, 2020.Since our patient did not require additional oxygen anymore, the only reason to be treated on the ward was the i.v. administration of piperacillin/tazobactam for the treatment of the ciprofloxacin-resistantP. aeruginosa lung abscess. A service for an outpatient parenteral therapy (OPAT) is not available in Austria so far. Therefore, the treatment strategy was changed to the novel fluoroquinolone delafloxacin characterized by a sufficient bioavailability when taken orally [9, 10].We discharged the patient at the beginning of October with oral delafloxacin 450 mg bid. Antibacterial treatment was well tolerated, and the patient showed continuous radiological improvement. Delafloxacin treatment was stopped by the end of October 2020. ## 3. Discussion We report the development of a lung abscess caused by a ciprofloxacin-resistantPseudomonas aeruginosa in a patient with COVID-19 on long-term corticosteroid therapy. 
Successful antimicrobial treatment included the novel oral fluoroquinolone delafloxacin.Current literature indicates an overall bacterial coinfection rate of 7 to 14% in COVID-19 [7]. These rates are lower compared to the frequency of coinfections in influenza, where most studies report rates ranging from 11 to 35% [6]. Hospital-associated coinfections occur in about 12% of COVID-19 patients, while community-associated coinfections are less frequently seen with around 6% of cases [11]. Surprisingly, there are several studies showing high usage of antibacterial treatment in COVID-19, while the risk of coinfections seems to be rather low. A recent study by Kubin et al. reported 67% of patients receiving at least 1 dose of antibacterial therapy [11, 12]. However, when bacterial coinfection does appear, it is associated with both increased severity of COVID-19 and poorer outcome [8]. Antimicrobial stewardship principles should not be neglected due to concerns of availability for the global supply chain preventing therapy for those who need it, the increased workload with parenteral administration, and long-term complications associated with antibacterial overuse including development of resistance [13].We suggest that our patient had several risk factors for developing a bacterial lung abscess including severe illness requiring ICU stay, high age, multiple antibacterial therapies, and long-term corticosteroid treatment. Our patient was on corticosteroid treatment for almost a month, while most guidelines including the Infectious Disease Society of America (IDSA) guideline recommend a total of 10 days for patients with severe COVID-19. Given the hyperinflammatory state in severe COVID-19 patients, immunomodulatory approaches including steroids, but also others such as tocilizumab, are continuously under investigation [4]. A multicenter randomized controlled trial investigating the use of dexamethasone demonstrated a reduction of the 28-day mortality and a reduction of ventilator-associated days [5]. Furthermore, there are studies showing increased coinfection and mortality rates in viral pneumonia other than COVID-19 treated with corticosteroids [14, 15].Cavitation can also be associated with fungal infections such as pulmonary aspergillosis [16]. The risk for COVID-19-associated pulmonary aspergillosis (CAPA) in critically ill COVID-19 patients is estimated about 5-30% and is associated with corticosteroid therapy [17]. Recent studies showed an increased mortality in COVID-19 complicated by a CAPA compared with patients without CAPA (36% versus 9.5%). Serum and bronchoalveolar lavage galactomannan testing yields high sensitivity and specificity for diagnosing CAPA offering early antifungal treatment [18].In our case, two weeks of intravenous piperacillin/tazobactam treatment improved the clinical condition significantly and supplemental oxygen could be stopped. Due to structural issues in Austria, it was not possible to organize OPAT. Delafloxacin is a novel fluoroquinolone, which is available in an oral formulation [9, 10]. Some experts indicate its low bioavailability, which may have the risk to provoke resistance. In fact, bioavailability for the oral formulation may be below 60%, while 35-50% of its oral administered dose is eliminated unchanged via feces. On the other hand, minimal inhibitory concentrations are three- to five-fold lower for gram-positive strains and four- to seven-fold lower for gram-negative strains compared to other fluoroquinolones. 
Additionally, in vitro tests showed resistance rates similar to moxifloxacin and lower than those of levofloxacin [19]. Furthermore, delafloxacin lacks clinically relevant QT prolongation and phototoxicity compared to other fluoroquinolones [9, 20]. An oral treatment alternative for P. aeruginosa could reduce the duration of hospitalization, complications, and costs compared to i.v. drugs. A limitation of our case report is that we were not able to perform antimicrobial susceptibility testing for delafloxacin, because the patient had already achieved sputum conversion by the time delafloxacin was first considered, and unfortunately no P. aeruginosa isolates had been stored. However, we are convinced that the isolate was susceptible to delafloxacin, since the radiological pattern further improved under the treatment. In conclusion, we suggest an increased risk of bacterial coinfections, including lung abscess, in COVID-19 patients on long-term corticosteroid therapy. Therefore, the decision to exceed the recommended duration of corticosteroid administration in severe COVID-19 patients should be critically evaluated based on individual circumstances. To our knowledge, this is the first reported case of a P. aeruginosa lung abscess whose successful therapy included oral delafloxacin. This is important because real-life data for the novel drug delafloxacin are scarce, and fluoroquinolones are the only reliable oral treatment option for P. aeruginosa infection. Even more importantly, our case suggests an oral therapy option for P. aeruginosa lung abscess in case of resistance to ciprofloxacin, the most widely used fluoroquinolone in P. aeruginosa infection. Oral treatment options are important because they enable outpatient care and reduce the duration of hospitalization, complications, and costs compared to i.v. drugs. The lack of clinically relevant QT prolongation and phototoxicity compared to other fluoroquinolones, as reported in the previous literature, makes the use of delafloxacin even more intriguing.

---

*Source: 1008330-2022-02-16.xml*
1008330-2022-02-16_1008330-2022-02-16.md
16,048
Ciprofloxacin-ResistantPseudomonas aeruginosa Lung Abscess Complicating COVID-19 Treated with the Novel Oral Fluoroquinolone Delafloxacin
Jürgen Panholzer; Matthias Neuboeck; Guangyu Shao; Sven Heldt; Markus Winkler; Paul Greiner; Norbert Fritsch; Bernd Lamprecht; Helmut Salzer
Case Reports in Pulmonology (2022)
Medical & Health Sciences
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2022/1008330
1008330-2022-02-16.xml
--- ## Abstract Purpose. We report the development of a lung abscess caused by a ciprofloxacin-resistant Pseudomonas aeruginosa in a patient with COVID-19 on long-term corticosteroid therapy. Successful antimicrobial treatment included the novel oral fluoroquinolone delafloxacin suggesting an oral administration option for ciprofloxacin-resistant Pseudomonas aeruginosa lung abscess. Case Presentation. An 86-year-old male was admitted to the hospital with fever, dry cough, and fatigue. PCR testing from a nasopharyngeal swab confirmed SARS-CoV-2 infection. An initial CT scan of the chest showed COVID-19 typical peripheral ground-glass opacities of both lungs. The patient required supplemental oxygen, and anti-inflammatory treatment with corticosteroids was initiated. After four weeks of corticosteroid therapy, the follow-up CT scan of the chest suddenly showed a new cavernous formation in the right lower lung lobe. The patient’s condition deteriorated requiring high-flow oxygen support. Consequently, the patient was transferred to the intensive care unit. Empiric therapy with intravenous piperacillin/tazobactam was started. Mycobacterial and fungal infections were excluded, while all sputum samples revealed cultural growth of P. aeruginosa. Antimicrobial susceptibility testing showed resistance to meropenem, imipenem, ciprofloxacin, gentamicin, and tobramycin. After two weeks of treatment with intravenous piperacillin/tazobactam, the clinical condition improved significantly, and supplemental oxygen could be stopped. Subsequently antimicrobial treatment was switched to oral delafloxacin facilitating an outpatient management. Conclusion. Our case demonstrates that long-term corticosteroid administration in severe COVID-19 can result in severe bacterial coinfections including P. aeruginosa lung abscess. To our knowledge, this is the first reported case of a P. aeruginosa lung abscess whose successful therapy included oral delafloxacin. This is important because real-life data for the novel drug delafloxacin are scarce, and fluoroquinolones are the only reliable oral treatment option for P. aeruginosa infection. Even more importantly, our case suggests an oral therapy option for P. aeruginosa lung abscess in case of resistance to ciprofloxacin, the most widely used fluoroquinolone in P. aeruginosa infection. --- ## Body ## 1. Introduction In December 2019, the novel severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) emerged causing coronavirus disease 2019 (COVID-19), which rapidly became a global health emergency. The clinical spectrum ranges from asymptomatic to COVID-19 typical symptoms including high fever, dry cough, shortness of breath, fatigue, myalgia, headache, abdominal discomfort, or loss of taste and smell [1]. Risk factors for a severe course of diseases or death are amongst others high age (>60 years), male gender, hypertension, COPD, and diabetes [2]. Diagnosis is based on PCR or antigen testing from a naso- or oropharyngeal swab. Laboratory findings often include leucopenia and thrombocytopenia, and the radiological pattern typically shows peripheral ground-glass opacities of both lungs [3]. Several agents are approved or are under investigation including antivirals, antibodies, immunomodulators, and corticosteroids. 
The latter is established as standard of care in severe and critically ill COVID-19 patients since a multicenter randomized controlled trial has shown significant reducing duration of mechanical ventilation mortality in moderate-to-severe ARDS [4, 5].In about 7% of patients, COVID-19 is complicated by secondary coinfections. While in influenza, bacterial coinfections emerge in up to 65% of patients, COVID-19 seems to have a rather low coinfection rate [6, 7]. This is surprising as COVID-19 is associated with several immunosuppressive factors such as high age, diabetes, and corticosteroid therapy [3, 4]. If bacterial coinfection appears, it is often associated with a severe course of COVID-19 and with a poorer outcome [8].Bacterial coinfection may be caused byPseudomonas aeruginosa, which is usually treated with piperacillin/tazobactam, ceftazidim, carbapenems, or ciprofloxacin; however, in case of resistance, treatment options are very limited. Delafloxacin (Melinta Therapeutics, USA, Connecticut) is a novel fluoroquinolone for the respiratory tract and skin infections addressing P. aeruginosa and other multidrug-resistant pathogens such as methicillin-resistant Staphylococcus aureus (MRSA). Compared to other fluoroquinolones, delafloxacin is also available in an oral formulation, while most other antibacterial alternatives for P. aeruginosa are only administrable intravenously [9, 10].We report the development of a lung abscess caused by a ciprofloxacin-resistantPseudomonas aeruginosa in a patient with COVID-19 on long-term corticosteroid therapy. Successful antimicrobial treatment included the novel oral fluoroquinolone delafloxacin. ## 2. Case Presentation An 86-year-old man was admitted to the hospital on July 20th, 2020, with fever, dry cough, and fatigue. Medical history revealed an arterial hypertension, recurrent pulmonary embolism in 1993 and in 2007, and a prostate carcinoma. The last days before admission, he took ciprofloxacin 500 mg two times daily (bid) because of a recent urinary tract infection. His vital signs were normal. Auscultation revealed fine crackles in both basal lung lobes. Cardiac, abdominal, and neurological examination was unremarkable. Laboratory tests showed a leucopenia of 4.07 G/L (3.9-8.8 G/L), a decreased lymphocyte count of 0.3 G/L (1.0-4.8 G/L), and an elevated C-reactive protein (CrP) level of 7.73 mg/dL (0-0.5 mg/dL), while the procalcitonin level was normal. Polymerase chain reaction (PCR, Roche, Basel, Switzerland) from a nasopharyngeal swab confirmed SARS-CoV-2 infection. A computed tomography (CT) scan of the chest showed peripheral distributed ground-glass opacities in all lung lobes (Figures 1(a) and 1(e)). Treatment with methylprednisolone 32 mg once a day (od) and empiric antimicrobial therapy with azithromycin 500 mg was initiated. Blood cultures showed no bacterial growth. The patient’s condition deteriorated, and antimicrobial therapy was switched several times (Figure 2). A follow-up chest CT scan on August 17th, 2020, showed a new cavernous formation in the right lower lung lobe (Figures 1(b) and 1(f)). 
Due to the radiological presentation, a fungal infection was suspected, and fluconazole 200 mg was started.Figure 1 Axial chest CT scans showing peripheral ground-glass opacities of both lungs at hospital admission (a, e), a large cavitary lesion at the right inferior lung lobe at the end of corticosteroid treatment (b, f), a declining cavitary lesion at the end of antibacterial therapy (c, g), and a small consolidation at two-month follow-up (d, f). (a)(b)(c)(d)(e)(f)(g)(h)Figure 2 Timeline showing the key parameters and therapies. KUK = Kepler University Hospital; ICU = intensive care unit; Aug = August; Sept = September; Oct = October; Nov = November; Dec = December; HFNC = high-flow nasal cannula oxygen; CRP = C-reactive protein; SARS-CoV-2 = severe acute respiratory syndrome coronavirus type 2; PCR = polymerase chain reaction; Pip/Taz = piperacillin/tazobactam.As the patient’s condition further deteriorated and the oxygen demand increased, the patient was transferred to the local university hospital. Antibacterial treatment with meropenem 2 g three times a day (tid) i.v. and corticosteroid therapy was continued, while antifungal treatment was switched to voriconazole 200 mg bid intravenously after a loading dose on day 1.Again, blood samples were negative for bacterial growth. The galactomannan test (IMMY, Oklahoma, USA) from serum was negative. Microscopy for acid-fast bacilli as well as PCR forM. tuberculosis-complex from sputum was negative. Sputum cultures did not show fungal growth but revealed growth of P. aeruginosa on August 25th, 2020. Moreover, multiple skin swabs (throat, groin, and perianal) were also positive for P. aeruginosa. Bronchial secretion obtained during bronchoscopy from the lower right lung lobe also revealed cultural growth of P. aeruginosa, while the galactomannan test and fluorescence microscopy for Pneumocystis jirovecii from bronchoalveolar lavage were negative.Antimicrobial susceptibility testing (Thermo Fisher Scientific, Massachusetts, USA) ofP. aeruginosa showed resistance to meropenem, imipenem, ciprofloxacin, gentamicin, and tobramycin. Antibacterial treatment was switched on August 28th, 2020, to piperacillin/tazobactam 4.5 g tid i.v., while antifungal treatment with voriconazole was stopped. This targeted treatment regime resulted in rapid clinical and radiological improvement enabling to stop oxygen supplementation therapy on September 11th, 2020. Bacterial sputum conversion was achieved on September 21st, 2020. The first negative SARS-CoV-2 PCR was documented on September 1st, 2020.Since our patient did not require additional oxygen anymore, the only reason to be treated on the ward was the i.v. administration of piperacillin/tazobactam for the treatment of the ciprofloxacin-resistantP. aeruginosa lung abscess. A service for an outpatient parenteral therapy (OPAT) is not available in Austria so far. Therefore, the treatment strategy was changed to the novel fluoroquinolone delafloxacin characterized by a sufficient bioavailability when taken orally [9, 10].We discharged the patient at the beginning of October with oral delafloxacin 450 mg bid. Antibacterial treatment was well tolerated, and the patient showed continuous radiological improvement. Delafloxacin treatment was stopped by the end of October 2020. ## 3. Discussion We report the development of a lung abscess caused by a ciprofloxacin-resistantPseudomonas aeruginosa in a patient with COVID-19 on long-term corticosteroid therapy. 
Successful antimicrobial treatment included the novel oral fluoroquinolone delafloxacin.Current literature indicates an overall bacterial coinfection rate of 7 to 14% in COVID-19 [7]. These rates are lower compared to the frequency of coinfections in influenza, where most studies report rates ranging from 11 to 35% [6]. Hospital-associated coinfections occur in about 12% of COVID-19 patients, while community-associated coinfections are less frequently seen with around 6% of cases [11]. Surprisingly, there are several studies showing high usage of antibacterial treatment in COVID-19, while the risk of coinfections seems to be rather low. A recent study by Kubin et al. reported 67% of patients receiving at least 1 dose of antibacterial therapy [11, 12]. However, when bacterial coinfection does appear, it is associated with both increased severity of COVID-19 and poorer outcome [8]. Antimicrobial stewardship principles should not be neglected due to concerns of availability for the global supply chain preventing therapy for those who need it, the increased workload with parenteral administration, and long-term complications associated with antibacterial overuse including development of resistance [13].We suggest that our patient had several risk factors for developing a bacterial lung abscess including severe illness requiring ICU stay, high age, multiple antibacterial therapies, and long-term corticosteroid treatment. Our patient was on corticosteroid treatment for almost a month, while most guidelines including the Infectious Disease Society of America (IDSA) guideline recommend a total of 10 days for patients with severe COVID-19. Given the hyperinflammatory state in severe COVID-19 patients, immunomodulatory approaches including steroids, but also others such as tocilizumab, are continuously under investigation [4]. A multicenter randomized controlled trial investigating the use of dexamethasone demonstrated a reduction of the 28-day mortality and a reduction of ventilator-associated days [5]. Furthermore, there are studies showing increased coinfection and mortality rates in viral pneumonia other than COVID-19 treated with corticosteroids [14, 15].Cavitation can also be associated with fungal infections such as pulmonary aspergillosis [16]. The risk for COVID-19-associated pulmonary aspergillosis (CAPA) in critically ill COVID-19 patients is estimated about 5-30% and is associated with corticosteroid therapy [17]. Recent studies showed an increased mortality in COVID-19 complicated by a CAPA compared with patients without CAPA (36% versus 9.5%). Serum and bronchoalveolar lavage galactomannan testing yields high sensitivity and specificity for diagnosing CAPA offering early antifungal treatment [18].In our case, two weeks of intravenous piperacillin/tazobactam treatment improved the clinical condition significantly and supplemental oxygen could be stopped. Due to structural issues in Austria, it was not possible to organize OPAT. Delafloxacin is a novel fluoroquinolone, which is available in an oral formulation [9, 10]. Some experts indicate its low bioavailability, which may have the risk to provoke resistance. In fact, bioavailability for the oral formulation may be below 60%, while 35-50% of its oral administered dose is eliminated unchanged via feces. On the other hand, minimal inhibitory concentrations are three- to five-fold lower for gram-positive strains and four- to seven-fold lower for gram-negative strains compared to other fluoroquinolones. 
Additionally, in vitro tests showed resistance rates similar to moxifloxacin and lower than levofloxacin [19]. Furthermore, delafloxacin lacks the clinically relevant QT prolongation and phototoxicity seen with other fluoroquinolones [9, 20]. An oral treatment alternative for P. aeruginosa could reduce the duration of hospitalization, complications, and costs compared with i.v. drugs. A limitation of our case report is that we were not able to perform antimicrobial susceptibility testing for delafloxacin, because the patient had already achieved sputum conversion by the time delafloxacin was first considered and, unfortunately, no P. aeruginosa isolates had been stored. However, we are convinced that the isolate was susceptible to delafloxacin, since the radiological pattern continued to improve under treatment. In conclusion, we suggest an increased risk of bacterial coinfections, including lung abscess, in COVID-19 patients on long-term corticosteroid therapy. Therefore, the decision to exceed the recommended duration of corticosteroid administration in severe COVID-19 patients should be critically evaluated based on individual circumstances. To our knowledge, this is the first reported case of a P. aeruginosa lung abscess whose successful therapy included oral delafloxacin. This is important because real-life data for the novel drug delafloxacin are scarce, and fluoroquinolones are the only reliable oral treatment option for P. aeruginosa infection. Even more importantly, our case suggests an oral therapy option for P. aeruginosa lung abscess in case of resistance to ciprofloxacin, the most widely used fluoroquinolone in P. aeruginosa infection. Oral treatment options are important because they enable outpatient care and reduce the duration of hospitalization, complications, and costs compared with i.v. drugs. The lack of clinically relevant QT prolongation and phototoxicity compared with other fluoroquinolones, as reported in the previous literature, makes the use of delafloxacin even more intriguing. --- *Source: 1008330-2022-02-16.xml*
2022
# The Application of Transcranial Electrical Stimulation in Sports Psychology

**Authors:** Shuzhi Chang
**Journal:** Computational and Mathematical Methods in Medicine (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1008346

---

## Abstract

The problem of sports psychological fatigue has become a focal point of concern among scholars both in China and abroad. Athletes face many problems and challenges in competition and training; if these are not handled properly, they produce negative experiences that reduce the benefits of training and give rise to psychological fatigue. Transcranial electrical stimulation (TES), which comprises transcranial direct current stimulation, transcranial alternating current stimulation, and transcranial random noise stimulation, is a noninvasive brain stimulation method. By applying specific patterns of low-intensity current to targeted brain regions through electrodes of different sizes, it modulates cortical neural activity and/or excitability and strengthens the connections between the brain, nerves, and muscles, thereby improving motor performance. TES technology is currently making the transition from laboratory research to applied research in sports science. In this paper, we first describe the neural mechanisms by which TES acts on the cerebral cortex, covering five aspects: body balance, endurance performance, exercise fatigue, muscle strength, and motor learning ability; we then review studies on the application of TES to the functional connectivity of brain networks and discuss the importance of this field for using TES to improve athletic performance. This research provides a machine learning-based transcranial electrical stimulation model for the problem of sports psychological fatigue in rock climbers, since rock climbing places great demands on athletes' psychological quality. Studying the factors that influence climbers' psychological fatigue and the corresponding intervention measures supports its early diagnosis, prevention, and treatment.

---

## Body

## 1. Introduction

In modern high-level competitive sports, the physical, technical, and tactical abilities of competing athletes are increasingly close, and stable psychological quality has become an important factor in winning. Climbing competitions in particular consume a great deal of the athletes' psychological energy; without good psychological quality, even athletes with strong physical, technical, and tactical abilities find it difficult to achieve excellent performance. As the American scholar Grubaugh pointed out, for junior athletes performance is roughly 80% biomechanical factors and 20% psychological factors, whereas for senior athletes the proportions are reversed: 80% psychological and 20% biomechanical.
Therefore, in the physical, technical, and tactical training of rock climbers, it is crucial to strengthen the training of athletes' psychological quality. Rock climbing is characterized by special venues and forms of movement: climbing sites mainly include rock cliff faces, fissured rock faces, boulders, and artificial walls; the rock faces usually have certain elevation and pitch angles; and the shapes of the wall and of the holds vary endlessly, giving the sport its diversity of movement forms, unconventional work at height, and complex technical operations [1–3]. Figure 1 depicts how rock climbing is used to enhance psychological quality. Rock climbing is a difficult and beautiful sport that is demanding and risky; its core traits can be summed up as dangerous, difficult, and beautiful. The angle and shape of the rock wall, the difficulty of the climbing route, the size and shape of the holds, and the ever-changing weather are all major obstacles, so athletes are required to have strong resilience and to adjust their physical and mental state in order to complete the competition well. A mistake can lead to slipping and falling and may cause casualties. Rock-climbing psychological training refers to purposefully and systematically influencing the psychological processes and personality characteristics of athletes during climbing training, using special methods and means so that athletes learn to regulate and control their own psychological state and, in turn, their climbing behavior. Because of the specificity of the climbing site, the variability of the wall holds, the diversity of equipment, and the complexity of technical actions, such training can refine athletes' psychological processes, form good personality and psychological characteristics compatible with rock climbing, build a high level of psychological energy reserves, enable athletes to adapt to thrilling climbing activities, and lay a sound psychological foundation for victory in competition. Climbing competitions are contested independently, without guidance during the climb; in applying techniques and tactics during the competition, athletes must adjust themselves in a timely manner according to the difficulty and angle of the route and its holds and to their own strengths and weaknesses. Good psychological quality gives athletes confidence in victory, energy, vigor, muscular strength, and greater resilience, so that they can give full play to their existing skills and tactics and even perform beyond their usual level. Psychological training is generally divided into general psychological training and psychological training in preparation for specific competitions. General psychological training aims to improve the psychological factors related to an athlete's specialty and is also called long-term psychological training because it can be arranged throughout the training process.
Psychological training in preparation for specific competitions provides psychological preparation for a particular competition; practice generally begins two or three months before the competition and continues through the competition period. Pregame special psychological training aims to enable athletes to master and use methods of self-regulating their mental state within a relatively short time in order to reach the best competitive state [4–6].

Figure 1 Schematic diagram of using rock climbing to enhance psychological quality.

Motor control is one of the important motor abilities of the human body. It affects not only athletes' performance but also the quality of life of healthy people in general, especially the elderly and patients with motor control deficits. Previous approaches to improving human motor control focused on interventional training that modifies the function of the skeletal muscle system. However, the nervous system plays a very important role in motor control, and in recent years research has focused more on the effects of neurological interventions on human motor control [7]. Transcranial electrical stimulation (TES) is a noninvasive technique that generates a weak current (1 to 2 mA) in the superficial layers of the skull through paired electrodes placed on the scalp; this current affects neural activity in the cerebral cortex and alters brain function to promote human motor performance. The main transcranial electrical stimulation techniques used today are transcranial direct current stimulation (tDCS), transcranial alternating current stimulation (tACS), and transcranial random noise stimulation (tRNS). Different stimulation effects can be achieved by changing the position of the electrodes and the intensity, duration, and frequency of stimulation. As a noninvasive and safe brain stimulation technique, TES has gradually been extended from the treatment of neurological and psychiatric disorders into sports science applications. In this paper, we focus on the research and application of TES in improving sports mental performance, especially in rock climbing. A machine learning-based approach to transcranial electrical stimulation in sports psychology is proposed for applications including improving mental balance, enhancing mental endurance and/or relieving exercise fatigue, improving muscle strength, and enhancing sports psychological learning ability, providing theoretical references for better understanding and applying TES techniques. The effectiveness of the proposed method was demonstrated in the corresponding experiments [8]. The usefulness of transcranial electrical stimulation in enhancing psychomotor balance, psychomotor endurance, and psychomotor strength and in decreasing motor fatigue is further examined in this paper. The remainder of the paper is organized as follows: Section 2 discusses related work; Section 3 describes the methods; Section 4 analyzes the experiments and results; Section 5 concludes the article.

## 2. Related Work
### 2.1. Transcranial Electrical Stimulation and Sports

Currently, TES is most often used in the rehabilitation of patients with neurological impairment or psychiatric disorders and has good effects on recovery from brain injury, mood regulation, and improvement of cognitive function. In recent years, some researchers have gradually applied TES in sports science to improve muscle coordination and enhance athletic performance by strengthening the connections between the brain, nerves, and muscles, for example by improving body balance, enhancing endurance performance, relieving exercise fatigue, and improving muscle strength and motor learning ability. Body balance is one of the main physical qualities of humans; it refers to the ability to maintain body posture during exercise or under external forces and is related to body structure, muscle coordination, and the regulation of the brain regions involved in balance [9–12]. It was found that 1 mA anodal tDCS over the motor cortex of healthy elderly people for 20 min per day improved gait and dynamic balance after 5 days of continuous intervention, and the improvement was maintained for one week. Stimulating the sensorimotor area of healthy adults with high-precision tDCS for 20 min improved the subjects' static balance. Body balance is one of the basic abilities of athletes, especially in nonperiodic events, and it directly affects the expression of athletes' skills and physical abilities; whether tDCS can improve dynamic and static balance in elite athletes needs further study. TES may also enhance endurance performance and/or relieve exercise fatigue. Endurance is the body's ability to perform muscle activity for long periods of time and is the basis for improving qualities such as speed and strength; the quality of endurance directly affects athletic performance. Some studies have investigated whether tDCS applied to the cerebral cortex has a positive effect on recovery from neurological and muscular fatigue [13]. The neurophysiological mechanisms of transcranial direct current stimulation can be grouped into the following mainstream views: altering cortical excitability, increasing synaptic plasticity, altering local cerebral blood flow, and regulating local cortical and brain network connections. Altering cortical excitability: tDCS changes brain excitability by shifting the membrane potential of neurons; depolarization under the anode lowers the threshold for action potential generation and increases the excitability of the cortex beneath the anode [14–16]. Increasing synaptic plasticity: tDCS applies subthreshold stimulation to the resting membrane potential of neurons, which induces the expression of N-methyl-D-aspartate (NMDA) receptors and the release of γ-aminobutyric acid; because NMDA receptors are involved in synapse formation, synaptic plasticity increases, which in turn produces long-lasting inhibitory or facilitatory effects. Altering local cerebral blood flow: when the tDCS anode is placed over the dorsolateral prefrontal cortex, cerebral blood flow increases in the stimulated area and decreases under the cathode, and these changes are highly correlated with the MEP amplitude at the stimulation site.
Regulating local cortical and brain network connections: EEG and functional magnetic resonance imaging studies have shown that anodal tDCS over area M1 significantly increases functional connectivity in the premotor, motor, and sensorimotor areas of the stimulated hemisphere and induces changes in connectivity between the left and right hemispheres, further supporting the view that tDCS modulates functional brain connectivity. The effects of tACS depend on the stimulation parameters. The mechanisms of tACS can be summarized as follows: exogenous oscillations induce endogenous oscillations in the brain and affect synaptic plasticity, thereby regulating brain function. When tACS is applied to the brain, part of the current reaches the cerebral cortex and changes the membrane potential of dendrites or axons in an oscillatory manner, making it possible for neurons to fire action potentials. When the tACS frequency matches the endogenous frequency, resonance occurs and excites neurons in the brain; when tACS is applied at a higher frequency, it can trigger neuronal oscillations over a wider frequency range. Stimulation at specific frequencies can also drive oscillations within the brain and induce synchronous neuronal oscillations at the rhythm imposed by tACS on a given cortical area. tACS also modulates brain function by affecting synaptic plasticity: when tACS causes presynaptic action potentials to precede postsynaptic action potentials, it produces long-lasting excitation of brain function, and when postsynaptic potentials precede presynaptic potentials, it causes long-lasting inhibition, a phenomenon confirmed by many studies.

### 2.2. Psychological Analysis of Rock Climbing

Rock climbing is a kind of mountaineering sport that fully exploits the human body's climbing potential: the climber relies on the coordination of hands, feet, waist, and muscle function and uses gouging, grasping, bracing, and other techniques to move the body upward. Both man-made and natural rock walls can be climbed, with man-made walls having the most participants. Given the psychological demands of rock climbing, climbers must undergo specialized psychological quality training to improve their capacity for psychological adjustment and for overcoming obstacles [17–19]. The sport demands exceptional physical, intellectual, and psychological qualities. Rock climbing is a sport of long duration, high density, and large exercise volume. The climbing process places extremely high demands on athletes' physical ability, and athletes need strong physical ability, intelligence, and good psychological quality to complete a full climb. During climbing, the athletes' psychological ability is extremely important: because physical energy is consumed rapidly, the athletes' functional reserve often declines quickly and central nervous fatigue sets in, leading to a series of negative emotions such as inattention, irritability, and a drop in willpower. If these negative emotions worsen further, the athletes move slowly, their muscles stiffen, and their hands and feet panic, affecting the completion of the climb.
In serious cases, this can lead to sports injury or even real danger. Therefore, special psychological training for rock climbers is extremely important: through effective psychological training, athletes learn to regulate themselves psychologically during climbing and to maintain emotional stability [20, 21]. Rock climbing carries a certain degree of danger, and fear during climbing arises for a variety of reasons; one of the most important factors is the climber's understanding of the difficulty of the obstacles and the climber's level of trust in the protective equipment and protection personnel. To train climbers to overcome psychological barriers, it is necessary to find the cause and apply the right remedy: first, make climbers fully understand the difficulty of the climb and the relevant background knowledge; second, have climbers communicate well with the staff so as to strengthen their trust in the protection workers and safety measures. Concentration training is extremely important for rock climbers. Only by concentrating on a single goal can the climber remain undisturbed by the external environment and by internal factors during the climb, avoid distraction, and reach the top successfully. Concentration training generally takes two forms. The first is the verbal-command training method: the trainer issues commands so quietly that they can barely be heard and asks the trainee to complete the task; following such faint commands requires the athlete to concentrate to a high degree. The second is the stopwatch training method [22]: the athlete focuses on the rotation of the stopwatch hand, the duration of each period of sustained attention is recorded, the target time is lengthened step by step until attention lapses, and training is then repeated against this time standard. Training can also regulate arousal level in rock climbing; there are two kinds of such training, one to raise the arousal level and one to lower it. Arousal level refers to the degree and state of physiological activation of the muscles, and it is directly related to the athlete's performance level. The human body can change and maintain the excitability of the brain's nervous system through arousal and use this state to sustain a high level of concentration and supply conscious energy to the muscles. In general, sports based mainly on speed and strength require a higher level of arousal, whereas sports regulated mainly by small muscle groups, with complex tasks that require fine coordination, call for a lower level of arousal [23]. So-called representational training refers to training that helps athletes improve their climbing skills and technical level: coaches use verbal explanation, video images, video materials, and other means to help athletes form a mental representation of the movement in their consciousness, so that the athletes can train through full imagination. The main purpose of representational training is to let athletes rehearse before actual performance.
In addition, representational movement also includes reproducing previously learned technical movements after training: by recalling and reproducing the movements in the mind, wrong movements can be corrected and correct movements consolidated. During representational training, athletes can categorize and organize the decomposed movements one by one in their mind and make corrections and improvements, so that they can complete the whole climbing process independently in their own mind. Representational training not only lets athletes achieve twice the result with half the effort during climbing but also increases their information reserve and develops their intelligence.
## 3. Methods

### 3.1. Model Architecture

Decoding and analyzing the event-related desynchronization/event-related synchronization (ERD/ERS) generated by the mental activity of rock climbing, in order to determine the user's motor intention and brain state, is the basis for implementing a brain-computer interface (BCI) based on rock-climbing psychology. In this BCI, users learn to actively modulate their sensorimotor rhythms in order to issue control commands. The modulation patterns induced by these rhythms in the brain serve as control input signals for the BCI, and through learning and training climbers can autonomously generate the corresponding EEG patterns. The structure of the rock-climbing psychological brain-computer interface is shown in Figure 2. Like other BCI systems, it generally consists of the following parts: signal acquisition, signal preprocessing, feature extraction, pattern classification, and an output module. Signal acquisition: records the EEG signals generated while subjects perform the rock-climbing motor-imagery experimental paradigm. Signal preprocessing: removes noise from the EEG signal, usually with a band-pass filter, and removes electrooculographic and motion artifacts with various methods as needed. Feature extraction: because the EEG signals produced during the mental activity of rock climbing are mixed with signals from other neural activity and buried in a large amount of spontaneous EEG, the features related to this mental activity must be extracted by reducing the dimensionality of the EEG signals. Pattern classification: the extracted feature information is analyzed and decoded into instructions, or interpreted as the user's rock-climbing motor intention.
The output module includes two parts: a control system and a feedback system. Some brain-computer interfaces based on rock-climbing psychology aim to help paralyzed climbers perform daily activities; in that case the extracted feature information is decoded into control commands for external devices such as robots, wheelchairs, and computer mice. Other such interfaces aim to help climbers with neurological damage rehabilitate neural pathways, or to help healthy climbers improve their athletic ability through psychological rock-climbing training, without controlling external devices; the decoded commands are then analyzed and returned to the climbers through the feedback system.

Figure 2 Model structure.

### 3.2. Preprocessing

In this paper, after the EEG data are acquired, a subject-specific band $r^2$ value is computed in the preprocessing stage to determine the subject's band-pass filter. $r^2$ is calculated as

$$r^2 = \frac{N_1 N_2}{N_1 + N_2}\left(\frac{\operatorname{MEAN}(P_1) - \operatorname{MEAN}(P_2)}{\operatorname{STD}(P_1 \cup P_2)}\right)^2. \qquad (1)$$

In the equation, $N_1$ and $N_2$ are the numbers of trials of each task contained in the EEG data: $N_1$ is the number of trials of the left-hand motor imagery task, and $N_2$ is the number of trials of the right-hand motor imagery task. $P_1$ and $P_2$ are the power spectra of the EEG data for the two motor imagery tasks: $P_1$ is the power spectrum of the left-hand motor imagery EEG data, and $P_2$ is that of the right-hand motor imagery EEG data. The larger the value of $r^2$, the greater the energy difference between the left- and right-hand motor imagery EEG data in that frequency band. According to the value of $r^2$, a suitable band is selected for band-pass filtering the motor imagery EEG data, followed by CSP feature extraction and LDA pattern classification.
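To make the band-selection step concrete, the following sketch (an illustrative Python example rather than code from the paper; the sampling rate, epoch shapes, and the use of `numpy`/`scipy` are assumptions) computes the band-power $r^2$ score of Equation (1) for one candidate frequency band from two sets of single-trial motor imagery epochs.

```python
import numpy as np
from scipy.signal import welch

def band_power(epochs, fs, band):
    """Mean power of each trial in the given frequency band.
    epochs: array (n_trials, n_channels, n_samples)."""
    freqs, psd = welch(epochs, fs=fs, nperseg=fs)      # PSD per trial and channel
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[..., mask].mean(axis=(1, 2))            # one power value per trial

def r_squared(p1, p2):
    """Band-power discriminability between two classes, cf. Equation (1)."""
    n1, n2 = len(p1), len(p2)
    diff = p1.mean() - p2.mean()
    pooled_std = np.concatenate([p1, p2]).std()
    return n1 * n2 / (n1 + n2) * (diff / pooled_std) ** 2

# Example with simulated data: 40 trials per class, 16 channels, 2 s at 250 Hz.
fs = 250
left = np.random.randn(40, 16, 2 * fs)    # placeholder for left-hand MI epochs
right = np.random.randn(40, 16, 2 * fs)   # placeholder for right-hand MI epochs
p_left = band_power(left, fs, (8, 12))    # candidate mu band
p_right = band_power(right, fs, (8, 12))
print("r^2 for 8-12 Hz:", r_squared(p_left, p_right))
```

In practice one would scan several candidate bands and keep the band with the largest $r^2$ before band-pass filtering and CSP feature extraction.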
### 3.3. Feature Extraction

The flow of signal data feature extraction is shown in Figure 3. For the collected EEG signals there are three main kinds of features: time-domain, frequency-domain, and spatial-domain features, and the corresponding feature extraction method must be chosen for each kind. For example, spatial-domain features are usually extracted with spatial filters such as the Common Spatial Pattern (CSP), while frequency-domain features are generally extracted by power spectrum analysis or by other methods such as the wavelet transform and the sample entropy method. Each of these methods has its own advantages and disadvantages, as shown in Table 1.

Figure 3 Signal data feature extraction process.

Table 1 Advantages and disadvantages of different feature extraction methods.

| Feature extraction method | Advantages | Disadvantages |
| --- | --- | --- |
| Power spectrum analysis | Simple algorithm, easy to operate | Low resolution; cannot capture the nonlinear information of the EEG signal |
| Wavelet transform | Multiple resolutions characterize the signal in more detail | Inflexible algorithm |
| Sample entropy | Stable algorithm, low computational effort | Cannot express the time-frequency characteristics of the signal |
| Common spatial pattern | Good feature extraction performance for binary classification | Requires multiple leads and is susceptible to noise interference |

In this study, the goal is to extract the two characteristic signals of left-hand and right-hand motion generated by the user's motor imagery, and the CSP algorithm fits this requirement well. The CSP algorithm operates on all channels, and since variance represents energy and motor imagery produces the ERD/ERS phenomenon, it is straightforward with CSP to extract the features with the greatest energy difference and use them to classify the motor imagery intention.

### 3.4. CSP Algorithm

The CSP algorithm is a feature extraction algorithm for two-class tasks. Its computational procedure is as follows. Assume that $X_1$ and $X_2$ are the matrices of single-trial evoked EEG signals recorded under the same experimental conditions for the left-hand and right-hand motor imagery tasks, respectively, with dimension $N \times T$, where $N$ is the number of EEG channels and $T$ is the number of sampling points. Commonly used channel counts are 8, 16, 32, 64, and 128 leads, and $N$ must be smaller than $T$ to satisfy the condition for computing the covariance matrix. Let $Y_1$ and $Y_2$ denote the two task types, left-hand and right-hand motion; ignoring noise interference, $X_1$ and $X_2$ can be represented as

$$X_1 = \begin{bmatrix} A_1 & A_M \end{bmatrix}\begin{bmatrix} Y_1 \\ Y_M \end{bmatrix},\qquad X_2 = \begin{bmatrix} A_2 & A_M \end{bmatrix}\begin{bmatrix} Y_2 \\ Y_M \end{bmatrix}. \qquad (2)$$

Assume that the source signals of the two tasks, $Y_1$ for left-hand motion and $Y_2$ for right-hand motion, are linearly independent of each other, and that $Y_M$ represents the common source signal shared by the two tasks, with $Y_1$ consisting of $m_1$ sources and $Y_2$ of $m_2$ sources; then $A_1$ consists of the $m_1$ covariance patterns associated with $X_1$, and $A_2$ consists of the $m_2$ covariance patterns associated with $X_2$. The purpose of the CSP algorithm is to design spatial filter parameters that yield the optimal projection matrix $W$. The EEG signals are passed through this spatial filter to obtain new signals, one with the largest variance and the other with the smallest variance, and the two classes of signals are then separated by a classification algorithm. First calculate the covariance matrices of $X_1$ and $X_2$:

$$R_1 = \frac{X_1 X_1^{T}}{\operatorname{tr}\!\left(X_1 X_1^{T}\right)},\qquad R_2 = \frac{X_2 X_2^{T}}{\operatorname{tr}\!\left(X_2 X_2^{T}\right)}, \qquad (3)$$

where $\operatorname{tr}(\cdot)$ denotes the trace of a matrix, i.e., the sum of the diagonal elements of $X_1 X_1^{T}$. Here $R_1$ and $R_2$ are the covariance matrices of a single trial; the average covariance matrices $\bar{R}_1$ and $\bar{R}_2$ are then computed over the total numbers of trials of the left- and right-hand tasks, denoted $n_1$ and $n_2$:

$$\bar{R}_1 = \frac{1}{n_1}\sum_{i=1}^{n_1} R_1^{(i)},\qquad \bar{R}_2 = \frac{1}{n_2}\sum_{i=1}^{n_2} R_2^{(i)}. \qquad (4)$$

Next, calculate the mixed-space covariance matrix $R$:
$$R = \bar{R}_1 + \bar{R}_2. \qquad (5)$$

Since the mixed-space covariance matrix $R$ is positive definite, its eigenvalue decomposition is performed according to the singular value theorem:

$$R = U \lambda U^{T}, \qquad (6)$$

where $\lambda$ denotes the diagonal matrix of eigenvalues arranged in descending order and $U$ denotes the matrix of the corresponding eigenvectors, which yields the whitening transformation matrix $P$:

$$P = \sqrt{\lambda^{-1}}\, U^{T}. \qquad (7)$$

For the formal experimental data, to avoid transient abrupt changes in the EEG signal caused by body movements, the variance of the characteristic signal obtained through the spatial filter is calculated and normalized, and the feature vectors are then extracted as

$$Z_i = W X_i,\qquad f_i = \frac{\operatorname{var}(Z_i)}{\sum_i \operatorname{var}(Z_i)}. \qquad (8)$$
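As an illustration of Equations (3)–(8), the sketch below (an assumed NumPy implementation, not the authors' code; the trial arrays and the choice of keeping the first and last spatial filter pairs are illustrative) computes CSP spatial filters from two classes of trials and converts a trial into the normalized variance feature vector of Equation (8).

```python
import numpy as np

def csp_filters(trials_1, trials_2, n_pairs=2):
    """CSP spatial filters for two classes of trials (n_trials, channels, samples)."""
    def mean_cov(trials):
        covs = [x @ x.T / np.trace(x @ x.T) for x in trials]   # Eq. (3)
        return np.mean(covs, axis=0)                            # Eq. (4)
    R = mean_cov(trials_1) + mean_cov(trials_2)                 # Eq. (5)
    lam, U = np.linalg.eigh(R)                                  # Eq. (6)
    lam, U = lam[::-1], U[:, ::-1]                              # descending eigenvalues
    P = np.diag(1.0 / np.sqrt(lam)) @ U.T                       # Eq. (7), whitening
    S1 = P @ mean_cov(trials_1) @ P.T
    d, B = np.linalg.eigh(S1)                                   # diagonalize whitened class 1
    B = B[:, np.argsort(d)[::-1]]
    W = B.T @ P                                                 # full projection matrix
    keep = np.r_[0:n_pairs, -n_pairs:0]                         # most discriminative rows
    return W[keep]

def csp_features(W, trial):
    """Normalized variance features of one trial, cf. Eq. (8)."""
    Z = W @ trial
    var = Z.var(axis=1)
    return var / var.sum()

# Example with simulated epochs: 40 trials per class, 16 channels, 500 samples.
rng = np.random.default_rng(0)
left = rng.standard_normal((40, 16, 500))
right = rng.standard_normal((40, 16, 500))
W = csp_filters(left, right)
print(csp_features(W, left[0]))
```

In practice the logarithm of the normalized variance is often taken before classification, but the ratio form above mirrors Equation (8) directly.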
### 3.5. Pattern Classification

To interpret the subject's motor imagery intention and distinguish the brain activity caused by left-hand and right-hand motor imagery, the features extracted from the EEG signal must be fed into a classifier for pattern classification. The algorithms used to classify the extracted features are both linear and nonlinear; the linear algorithms include linear discriminant classifiers and Mahalanobis-distance linear classifiers, among others. Based on these considerations and the characteristics of motor imagery EEG signals, linear discriminant analysis (LDA) was chosen. LDA is an effective method that can classify data and compress the dimensionality of the feature space. This section introduces the basic principle of LDA, starting from the simpler two-class case. First assume that the dataset contains $m$ samples, denoted

$$D = \{(x_1, y_1), (x_2, y_2), \ldots, (x_m, y_m)\}, \qquad (9)$$

where $x$ is an $n$-dimensional vector and $y_i \in \{0, 1\}$. We define $N_j$ ($j \in \{0, 1\}$) as the number of samples of the $j$-th class and $X_j$ ($j \in \{0, 1\}$) as the set of samples of the $j$-th class; the mean vector of the samples of the $j$-th class can then be expressed as

$$\mu_j = \frac{1}{N_j}\sum_{x \in X_j} x,\quad j \in \{0, 1\}. \qquad (10)$$

The covariance matrix of the $j$-th class of samples can be expressed as

$$\Sigma_j = \sum_{x \in X_j} (x - \mu_j)(x - \mu_j)^{T},\quad j \in \{0, 1\}. \qquad (11)$$

The projection vector is denoted by $w$; any sample $x_i$ becomes $w^{T} x_i$ after projection. As mentioned above, the purpose of the LDA algorithm is to make the distance between samples of the same class as small as possible and the distance between samples of different classes as large as possible. The within-class scatter is

$$G_1 = w^{T}\Sigma_0 w + w^{T}\Sigma_1 w. \qquad (12)$$

The distance between samples of different classes is expressed as the squared 2-norm:

$$G_2 = \left\lVert w^{T}\mu_0 - w^{T}\mu_1 \right\rVert_2^{2}. \qquad (13)$$

In summary, the optimization objective of the LDA algorithm is

$$w^{*} = \arg\max_{w}\frac{G_2}{G_1} = \frac{\left\lVert w^{T}\mu_0 - w^{T}\mu_1\right\rVert_2^{2}}{w^{T}\Sigma_0 w + w^{T}\Sigma_1 w} = \frac{w^{T}(\mu_0 - \mu_1)(\mu_0 - \mu_1)^{T} w}{w^{T}(\Sigma_0 + \Sigma_1) w}. \qquad (14)$$
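Continuing in the same illustrative vein (again an assumed Python sketch rather than the paper's implementation), the optimization in Equation (14) has the closed-form solution $w \propto (\Sigma_0 + \Sigma_1)^{-1}(\mu_0 - \mu_1)$, which can be implemented directly to train a two-class LDA on CSP feature vectors and classify new trials.

```python
import numpy as np

class TwoClassLDA:
    """Minimal two-class LDA following Eqs. (9)-(14)."""
    def fit(self, X0, X1):
        mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
        S0 = (X0 - mu0).T @ (X0 - mu0)          # Eq. (11) for class 0
        S1 = (X1 - mu1).T @ (X1 - mu1)          # Eq. (11) for class 1
        # Closed-form maximizer of Eq. (14): w ~ (S0 + S1)^-1 (mu0 - mu1)
        self.w = np.linalg.solve(S0 + S1, mu0 - mu1)
        # Decision threshold halfway between the projected class means
        self.b = 0.5 * (self.w @ mu0 + self.w @ mu1)
        return self

    def predict(self, X):
        # Returns 0 for the first class (e.g. left hand), 1 for the second
        return (X @ self.w < self.b).astype(int)

# Example: classify CSP feature vectors (random placeholders with a class shift).
rng = np.random.default_rng(1)
feats_left = rng.normal(0.0, 1.0, size=(40, 4)) + np.array([1, 0, 0, -1])
feats_right = rng.normal(0.0, 1.0, size=(40, 4)) + np.array([-1, 0, 0, 1])
clf = TwoClassLDA().fit(feats_left, feats_right)
acc = (clf.predict(np.vstack([feats_left, feats_right])) ==
       np.r_[np.zeros(40), np.ones(40)]).mean()
print("training accuracy:", acc)
```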
## 4. Experiments and Results

This section describes the experimental setup, defines the transcranial electrical stimulation protocol, and analyzes the experimental results.
### 4.1. Experimental Setup

This section investigates the effects of transcranial electrical stimulation on motor imagery task ability using the motor imagery brain-computer interface. Changes in classification accuracy and in the ERD/ERS data before and after transcranial electrical stimulation were analyzed separately to investigate the electrophysiological effects of stimulation on motor imagery. The experiment was single-blind; subjects were required to complete a total of four experiments, and the order of the stimulation and control experiments was arranged randomly by the experimenters. To avoid after-effects of transcranial electrical stimulation, an interval of at least 24 hours was ensured between each stimulation and control experiment.

Ten rock climbers (age range 23-25 years, mean age 24.4 ± 0.44 years) were recruited for this experiment; all participants were active climbers and received monetary compensation. Each subject received a total of four MI task experiments and three transcranial electrical stimulation sessions. Subjects first completed an MI task experiment to establish the baseline level of each outcome for comparison with the subsequent experiments, followed by three randomized transcranial electrical stimulation sessions, each with a corresponding MI task experiment and EEG recording. The after-effects of transcranial direct current stimulation last no more than 10 min; the duration of the after-effects of transcranial alternating current stimulation has not been studied systematically, but the after-effects of 10 Hz, 1 mA alternating current stimulation last about 30 min. The interval between stimulation sessions was therefore kept above 24 h so that after-effects would not influence the subsequent experiments. The experimental parameters are set as shown in Table 2.

Table 2 Experimental parameter setting.

| Parameter name | Parameter value |
| --- | --- |
| Initial learning rate | 0.002 |
| Optimizer | Adam |
| Initial momentum | 0.5 |
| Batch size | 30 |
| Maximum number of training sessions | 40 |

### 4.2. Transcranial Electrical Stimulation

(1) Electrical stimulation equipment: publicly available transcranial electrical stimulation equipment capable of three stimulation modes (tDCS, tACS, and pseudostimulation).
(2) Stimulation intensity: (a) tDCS: all subjects received a uniform current intensity of 1 mA through 5 × 7 cm saline-soaked sponge electrodes; (b) tACS: the stimulation intensity was determined according to each subject's stimulation threshold as described, with a 5 × 7 cm saline-soaked sponge electrode.
(3) Stimulation position: electrode placement is shown in Figure 4, with the anode over the sensorimotor area M1 and the cathode over the forehead.
(4) Stimulation frequency: 10 Hz (the mean of the μ rhythm) was used during the tACS experiment.
(5) Stimulation time: for both tACS and tDCS, current stimulation lasting 10 min was applied.

Figure 4 Stimulation electrode location map.

After the raw EEG data were collected and preprocessed, the LDA algorithm was used to calculate the classification accuracy of the EEG data for the two motor imagery tasks, and the power spectra of the EEG data in the prestimulation and poststimulation phases were computed to observe the changes in ERD/ERS.
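Putting the preceding pieces together, a hedged end-to-end sketch of the offline analysis just described (an assumed Python pipeline, not the authors' code; the band limits, the simple half/half train-test split, and the reuse of `csp_filters`, `csp_features`, and `TwoClassLDA` from the earlier sketches are all illustrative choices) band-pass filters the epochs, extracts CSP features, and scores LDA classification accuracy for one recording phase.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(epochs, fs, lo, hi, order=4):
    """Zero-phase band-pass filter applied along the sample axis."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, epochs, axis=-1)

def mi_accuracy(left, right, fs, band=(8, 30)):
    """Classification accuracy for one recording phase (e.g. pre- or post-stimulation)."""
    left_f, right_f = bandpass(left, fs, *band), bandpass(right, fs, *band)
    half = len(left_f) // 2                     # first half for training, second for testing
    W = csp_filters(left_f[:half], right_f[:half])                         # CSP sketch above
    feats = lambda trials: np.array([csp_features(W, t) for t in trials])
    clf = TwoClassLDA().fit(feats(left_f[:half]), feats(right_f[:half]))   # LDA sketch above
    X_test = np.vstack([feats(left_f[half:]), feats(right_f[half:])])
    y_test = np.r_[np.zeros(len(left_f) - half), np.ones(len(right_f) - half)]
    return (clf.predict(X_test) == y_test).mean()

# Illustrative call with simulated 250 Hz epochs (40 trials, 16 channels, 2 s each)
fs = 250
rng = np.random.default_rng(2)
print(mi_accuracy(rng.standard_normal((40, 16, 500)),
                  rng.standard_normal((40, 16, 500)), fs))
```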
The performance improvement over the training process is shown schematically in Figure 5.

Figure 5: Schematic diagram of the performance improvement of the training process.

### 4.3. Experimental Results

For motor imagery classification accuracy, this paper used a one-way repeated measures ANOVA to test the significance of the differences in subjects' motor imagery classification accuracy before and after the different experimental conditions; p < 0.05 was considered a significant difference. Among the ten subjects, one subject's data were excluded for excessive noise, so the EEG data of the remaining nine subjects were used for the analysis. The accuracy of subjects' psychological recovery during rock climbing exercise is shown in Table 3.

Table 3: Accuracy (%) of subjects' psychological recovery from rock climbing exercise.

| | Before stimulation | After pseudostimulation | After tACS | After tDCS |
| --- | --- | --- | --- | --- |
| Subject 1 | 91.25 | 93.21 | 98.75 | 96.88 |
| Subject 2 | 83.83 | 85.84 | 92.50 | 97.50 |
| Subject 3 | 74.68 | 76.25 | 86.25 | 80.00 |
| Subject 4 | 85.00 | 85.00 | 89.74 | 90.06 |
| Subject 5 | 82.49 | 85.00 | 80.00 | 86.25 |
| Subject 6 | 77.50 | 78.64 | 85.05 | 82.68 |
| Subject 7 | 77.61 | 71.25 | 81.27 | 87.49 |
| Subject 8 | 77.49 | 77.50 | 75.11 | 81.27 |
| Subject 9 | 88.77 | 91.28 | 94.93 | 96.31 |
| Mean ± standard deviation | 82.07 ± 5.67 | 82.66 ± 7.25 | 87.07 ± 7.63 | 88.71 ± 6.88 |

The average accuracy of task classification is shown in Figure 6. A one-way repeated measures ANOVA across the four experimental conditions gave F(3, 24) = 10.436, p < 0.05, indicating a significant main effect. Pairwise comparisons of the four conditions gave the following results: comparing the tACS and tDCS conditions with the prestimulation condition, p1 = 0.08 > 0.05 and p2 = 0.002 < 0.05, respectively; comparing the tACS and tDCS conditions with the pseudostimulation condition, p3 = 0.193 > 0.05 and p4 = 0.02 < 0.05, respectively. From the classification accuracy of the MI task in the four conditions, it can be seen that accuracy after tDCS was significantly higher than in the prestimulation and pseudostimulation conditions, whereas accuracy after tACS was higher than in the prestimulation and pseudostimulation conditions but the difference was not significant. In terms of the overall level of improvement, tDCS was more effective in raising classification accuracy than tACS.

Figure 6: Average accuracy of task classification.

The accuracy of mental recovery in rock climbing without and with transcranial electrical stimulation is visualized in Figures 7 and 8, respectively. At the level of individual subjects, the accuracy of motor imagery improved in all nine subjects after tDCS stimulation. For tACS, subjects 5 and 8 showed a decrease in accuracy after tACS compared with the prestimulation period, while subjects 1, 3, and 6 showed a larger increase in accuracy after tACS than after tDCS. Among all subjects, the highest accuracy was that of subject 1 after tACS stimulation, at 98.75%, and the lowest was that of subject 8 after tACS stimulation, at 75.11%.

Figure 7: Accuracy of mental recovery in rock climbing sports without transcranial electrical stimulation.

Figure 8: Accuracy of psychological recovery in rock climbing with transcranial electrical stimulation.
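As a check on the analysis described above, the one-way repeated measures ANOVA over the four conditions can be recomputed directly from the Table 3 accuracies. The following is a minimal NumPy sketch that partitions the sums of squares by hand rather than calling a statistics package; the exact F value obtained will depend on the rounding of the tabulated accuracies, and sphericity and any post hoc correction behind the reported pairwise p values are not addressed here.

```python
import numpy as np

# Table 3: rows = subjects 1-9, columns = [before, pseudostimulation, tACS, tDCS], accuracy in %
acc = np.array([
    [91.25, 93.21, 98.75, 96.88],
    [83.83, 85.84, 92.50, 97.50],
    [74.68, 76.25, 86.25, 80.00],
    [85.00, 85.00, 89.74, 90.06],
    [82.49, 85.00, 80.00, 86.25],
    [77.50, 78.64, 85.05, 82.68],
    [77.61, 71.25, 81.27, 87.49],
    [77.49, 77.50, 75.11, 81.27],
    [88.77, 91.28, 94.93, 96.31],
])
n_subj, n_cond = acc.shape
grand = acc.mean()

# Sum-of-squares partition for a one-way within-subject (repeated measures) design
ss_cond = n_subj * np.sum((acc.mean(axis=0) - grand) ** 2)    # condition effect
ss_subj = n_cond * np.sum((acc.mean(axis=1) - grand) ** 2)    # between-subject variability
ss_total = np.sum((acc - grand) ** 2)
ss_error = ss_total - ss_cond - ss_subj                        # condition x subject residual

df_cond, df_error = n_cond - 1, (n_cond - 1) * (n_subj - 1)
F = (ss_cond / df_cond) / (ss_error / df_error)
print(f"F({df_cond},{df_error}) = {F:.3f}")   # compare with the reported F(3, 24) = 10.436
```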
## 5. Conclusion

Rock climbing requires climbers to have good psychological qualities, including the ability to overcome anxiety, fear, and obstacles, as well as willpower and the ability to concentrate.
The psychological training of climbers should be targeted to the specific characteristics of each athlete so that training improves their psychological mechanisms. The brain controls most human learning and movement. Although sports training focuses on physical performance and motor abilities, it ultimately depends on the cerebral cortex forming neural connections that help the nervous system govern the muscles more effectively. Transcranial electrical stimulation has shown efficiency in enhancing human motor performance, and this paper further analyzes its effectiveness in improving psychomotor balance, psychomotor endurance performance, and psychomotor strength and in reducing motor fatigue.

--- *Source: 1008346-2022-07-13.xml*
--- ## Abstract The problem of sports psychological fatigue has become one of the focal points of common concern among scholars at home and abroad. Athletes will face many problems and challenges in competition or training, and if they are not handled properly, they will have negative experiences, which will affect the training benefits and develop psychological fatigue. Transcranial electrical stimulation (TES), which contains transcranial direct current stimulation, transcranial alternating current stimulation, and transcranial random noise stimulation, is a noninvasive brain stimulation method. By applying specific patterns of low-intensity electrical currents to specific brain regions through electrodes of different sizes, it modulates cortical neural activity and/or excitability and enhances the connections between the brain and nerves and muscles to achieve improved motor performance. TES technology is currently making the transition from laboratory research to applied research in sports science. In this paper, we first describe the neural mechanisms of TES action on the cerebral cortex, including five aspects of body balance, endurance performance, exercise fatigue, muscle strength, and motor learning ability; then, we review the relevant studies on the application of TES in functional connectivity of brain networks and explore the importance of this field for TES to improve athletic performance. This research provides a machine learning-based and transcranial electrical stimulation model for the locomotor psychological fatigue problem in rock climbers since rock climbing is a sport that places great demands on athletes’ psychological quality. The research on the factors influencing the psychological fatigue of climbers and the intervention measures is beneficial to the early diagnosis and the prevention and intervention of it. --- ## Body ## 1. Introduction In modern high water competitive sports, the physical and technical and tactical abilities of the participating athletes are getting closer and closer, and the stable psychological quality becomes an important factor to achieve the victory of the competition, especially the climbing sports competition has a greater consumption of the athletes’ psychological energy; if the athletes do not have good psychological quality, even if they have strong physical quality, technical, and tactical ability, it is difficult to achieve excellent sports performance. As the American scholar Grubaugh pointed out for junior athletes, 80% is biomechanical factors, 20% is psychological factors, senior athletes are the opposite, 80% is psychological factors, and 20% is biomechanical factors. Therefore, in the process of physical and technical and tactical training for rock climbers, it is crucial to strengthen the psychological quality training of athletes.In rock climbing sport characteristics and climbing psychological training, climbing site, and the sport form of special, climbing site mainly rock cliff face fissure rock face boulder and artificial rock wall, etc., the rock face mostly has a certain elevation angle and pitch angle, and the shape of the rock wall and rock point of the shape of a thousand changes, thus forming the form of rock-climbing sport diversity unconventional work at height and the complexity of technical operations and other characteristics [1–3]. Figure 1 depicts the diagram for using rock climbing to enhance psychological quality. A set of difficult and beautiful sports, rock climbing is demanding and risky. 
Its core traits can be summed up as dangerous, difficult, and beautiful. The angle and shape of the rock wall, the difficulty of the climbing route, the size and shape of the pivot points, and the ever-changing weather are all huge obstacles to climbing; so, athletes are required to have strong resilience and adjust their physical and mental state to better complete the competition. If you make a mistake, you may have the danger of slipping and falling, and you are prone to casualties. Rock-climbing psychological training refers to the process of rock-climbing training, purposely and systematically exerts influence on the psychological process and personality psychological characteristics of athletes, and through special methods and means to make athletes learn to regulate and control their own psychological state, and then regulates and controls their own climbing behavior process due to the specificity of the rock-climbing site the variability of the wall pivot points the diversity of equipment preparation the complexity of technical actions, and it can promote the perfection of athletes’ psychological process, form good personality and psychological characteristics compatible with rock climbing, obtain a high level of psychological energy reserves, enable athletes to adapt to the thrilling climbing activities, and lay a good psychological foundation for the victory of the climbing competition. The climbing competition is carried out under the condition of independent combat without guidance, and in the application of techniques and tactics in the process of the competition, the athletes should make timely self-adjustment according to the difficulty angle of the line pivot point and personal advantages and disadvantages of the competition. Good psychological quality makes athletes confident of victory, energetic, vigorous, muscular strength, and increased resilience, so that they can give full play to their existing skills and tactics, and even play beyond the level of psychological training is generally divided into psychological training and psychological training in preparation for specific competitions. General psychological training aims to improve the psychological factors of athletes related to special sports and is also called long-term psychological training because it can be arranged throughout the training process. Psychological training in preparation for specific competitions is mainly for the specific competition and psychological preparation, generally in the two or three months before the competition to start practicing, and continues until the competition period pregame special psychological training aims to enable athletes to master and use the method of self-regulation of mental state in a relatively short period of time, in order to form the best competitive state [4–6].Figure 1 Rock climbing to enhance the psychological quality of the schematic diagram.Human motor control ability is one of the important motor abilities of the human body. Motor control not only affects the athletes’ performance but also has a significant impact on the quality of life of healthy people in general, especially the elderly and patients with motor control deficits. Previous approaches to improve motor control in humans have focused on enhancing motor control through interventional training to modify the function of the skeletal muscle system. However, the nervous system plays a very important role in the process of motor control. 
In recent years, research on motor control has focused more on the effects of neurological interventions on human motor control [7]. Transcranial electrical stimulation (TES) is a noninvasive transcranial electrical stimulation technique that generates a weak current (1 to 2 mA) in the superficial layers of the skull through paired electrodes placed on the scalp, which affects neural activity in the cerebral cortex and alters brain function to promote motor performance in humans. The main transcranial electrical stimulation techniques used today are transcranial direct current stimulation (tDCS), transcranial alternating current stimulation (tACS), and transcranial random noise stimulation (TRS). These electrical stimulation techniques can be used to achieve different stimulation effects by changing the position of electrodes, the intensity of current, the duration of stimulation, and the frequency of stimulation. As a noninvasive and safe brain stimulation technique, TES has been gradually announced into sports science applications from the treatment of neurological or psychiatric disorders. In this paper, we focus on the research and application of TES in improving sports mental performance, especially in rock climbing. A machine learning algorithm-based transcranial electrical stimulation in sports psychology is proposed for applications including improving mental balance, enhancing mental endurance performance and/or relieving exercise fatigue, improving muscle strength, and enhancing sports psychological learning ability, which provides theoretical references for better understanding and application of TES techniques. The effectiveness of the proposed method was demonstrated in relevant experiments [8].The usefulness of transcranial electrical stimulation in enhancing psychomotor balance, psychomotor endurance performance, psychomotor strength, and decreasing motor fatigue is further examined in this paper.The arrangements of the paper are as follows: Section2 discusses the related work. Section 3 examines the various methods. Section 4 analyzes the experiments and results. Section 5 concludes the article. ## 2. Related Work ### 2.1. Transcranial Electrical Stimulation and Sports Currently, TES is more frequently used in the rehabilitation of patients with neurological impairment or psychiatric disorders and has good belongings on the recovery of brain injury, mood regulation, and improvement of cognitive function. In recent years, some researchers have gradually applied TES to the field of sports science to improve human muscle coordination in sports and enhance human sports performance by increasing the connection between the brain and nerves and muscles, such as improving body balance, enhancing endurance performance, relieving sports fatigue, and improving muscle strength and motor learning ability. Improve body balance ability is one of the main physical qualities of humans, which refers to the ability to maintain body posture even during exercise or under external forces, and it is related to body structure, muscle coordination, and the regulation of brain tissue involved in balance [9–12]. It was found that 1 mA tDCS anodal stimulation of the motor cortex of healthy elderly people for 20 min per day resulted in improved gait and dynamic balance after 5 d of continuous intervention and was maintained for 1 week. Stimulation of the sensorimotor area of healthy adults with high precision tDCS for 20 min better the static balance ability of the subjects. 
Body balance is one of the basic abilities that athletes possess, especially in nonperiodic events, and directly affects the performance of athletes’ skills and physical abilities, etc. Whether tDCS can improve dynamic and static balance in elite athletes needs further study. Enhance endurance performance and/or relieve exercise and fatigue. Endurance is the body’s ability to perform muscle activity for long periods of time and is the basis for improving qualities such as speed and strength. The goodness of endurance directly affects human athletic performance. Currently, some studies have investigated whether tDCS intervention in the cerebral cortex has a positive effect on neurological and muscular fatigue recovery [13].The exact neurophysiological mechanisms of transcranial direct current action can be divided into the following mainstream views: altering cortical excitability, increasing synaptic plasticity, altering local cerebral blood flow, and regulating local cortical and brain network connections. Changing cortical excitability is as follows: tDCS changes the excitability of the brain by stimulating the membrane potential of neurons, causing depolarization of the anode to lower the threshold of action potential generation, and increasing the excitability of the anodal cortex; [14–16]. Increase synaptic plasticity is as follows: tDCS performs subthreshold stimulation of the resting membrane potential of neurons, which induces the expression of N-methyl aspartate receptors and the release of y-aminobutyric acid, while NMDA is involved in synapse formation, increasing synaptic plasticity, which in turn produces long-duration inhibitory or enhancing effects. Alteration of local cerebral blood flow is as follows: when the tDCS anode acts on the dorsolateral prefrontal cortex, it increases cerebral blood flow in the stimulated area and decreases cerebral blood flow under the cathode, which is highly correlated with the MEP amplitude at the stimulation location. Modulation of local cortical and brain network connectivity is as follows: EEG and functional magnetic resonance imaging studies revealed that anodal tDCS stimulation of area M1 significantly increased functional connectivity in the premotor, motor, and sensorimotor areas within the stimulated hemisphere and induced changes in connectivity between the left and right hemispheres, further corroborating that tDCS induces functional brain connectivity. tACS stimulation is dependent on the stimulation. The mechanisms of tACS action can be classified as follows: exogenous oscillations induce endogenous oscillations in the brain, affecting synaptic plasticity to regulate brain function. When tACS acts on the brain, part of the current reaches the cerebral cortex and changes the membrane potential of dendrites or axons in an oscillatory manner, making it possible for neurons to develop action potentials. When the frequency of tACS is the same as the endogenous frequency, resonance occurs, causing excitation of neurons in the brain; when tACS is stimulated at a higher frequency, it can trigger neuronal oscillations in a wider frequency range. Stimulation at specific frequencies can also drive oscillatory frequencies within the brain and induce synchronous oscillations in neurons with rhythms imposed by tACS on specific cortices. tACS modulates brain function by affecting synaptic plasticity. 
When tACS causes presynaptic action potentials to precede postsynaptic action potentials, it produces long-lasting excitation of brain function, and when it causes postsynaptic potentials to precede presynaptic potentials, it causes long-lasting inhibition of brain function, a phenomenon that has been confirmed by many studies. ### 2.2. Psychological Analysis of Rock Climbing Rock climbing is a kind of mountaineering sport, which fully exploits the human body’s climbing potential, and the climber relies on the coordination of hands, feet, waist, and muscle functions and uses gouging, grasping, bracing, and other means to operate the body upward. Both man-made and natural rock walls can be climbed, with man-made rock walls having the most participants. According to the psychological requirements of rock climbing, climbers must perform well in specialized psychological quality training to improve their capacity for psychological adjustment and obstacle surmounting [17–19]. The sport of rock climbing demands exceptional physical, intellectual, and psychological qualities.Rock climbing is a sport with long time, high density, and large volume of transportation. The process of rock climbing puts forward extremely high requirements on athletes’ physical ability, and athletes need strong physical ability, intelligence, and good psychological quality to complete a complete climb. In the process of rock climbing, the athletes’ psychological ability is extremely important. In the process of rock climbing, due to the rapid consumption of physical energy, it often causes a rapid decline in the athletes’ functional reserve and central nervous fatigue, which leads to a series of bad emotions such as inattention, irritability, and decrease in the level of will. If these bad emotions deteriorate further, it will cause the athletes to move slowly, have stiff muscles, and panic in their hands and feet. These bad emotions, if further worsened, will cause the athletes to move slowly, muscle stiffness, and hands and feet panic, affecting the completion of rock climbing. In serious cases, it will lead to sports injury or even danger. Therefore, the special psychological training for rock climbers is extremely important, through effective psychological training, so that athletes can effectively psychologically regulate themselves in the process of rock climbing and help them maintain emotional stability [20, 21]. We know that rock climbing has a certain degree of danger, and the psychology of fear in the procedure of rock climbing is caused by a variety of reasons. One of the most important factors is the climber’s understanding of the degree of obstacles and the climber’s level of trust in protective events and protection personnel. To train and improve climbers to overcome psychological barriers, it is necessary to find the cause and prescribe the right remedy: first, to make climbers fully understand the degree of difficulty of this climb and some related knowledge and second, to make climbers do good communication with staff to enhance their trust in protection workers and safety measures.Concentration training is extremely important for rock climbers. Only by concentrating on one goal can the climber not be disturbed by the external environment and internal factors during the climbing process, not be distracted during the climbing, and reach the summit successfully. 
The concentration of attention training generally has two forms: one is the muzzle training method: the so-called muzzle training refers to the trainer in the usual training by issuing some volume weak, just can barely hear the sound, and let the trainee to complete the task, and such a weak voice muzzle needs athletes to focus on a high degree of attention to complete. Second is the stopwatch training method [22]. The so-called stopwatch training is to allow athletes to focus on the rotation of the stopwatch, record the time of each attentiveness, in the training time lengthened one by one, until the end of inattention, and then repeat the training in accordance with this time standard. Training regulates arousal level in rock climbing; there are two kinds of training to regulate arousal level in rock climbing: one is to increase the arousal level of AH, and the other is to decrease, the arousal level of sex. Arousal level refers to the physiological activation of the muscles in different degrees and states, and the state of arousal level is directly related to the level of athletes. The human body can change and maintain the excitability of the brain’s nervous system through arousal and use this state of arousal to maintain a high level of concentration and provide conscious energy to the muscles. In general, in sports that are mainly speed and strength-based, a higher level of arousal is required, while in sports that are mainly regulated by small muscle groups and have complex tasks. Exercise classes that require coordinated coordination require a lower level of arousal [23].The so-called representational training refers to the training to promote the athletes to improve their climbing skills and technical level of play; in the representational training, coaches can make the athletes form the representational movement in the brain consciousness through verbal explanation, video images, video materials, and other means, so that the athletes can train through the full imagination. The main purpose of representational training is to allow athletes to train in actual combat before. In addition, representational movement also includes the reproduction of previously learned technical movements after training, through the recollection and reproduction of movements in the brain, the wrong movements can be corrected, and the correct movements can be consolidated. In the process of representational training, athletes can categorize and organize the decomposed movements one by one in their mind and make corrections and improvements, so that they can complete the whole climbing process independently in their own mind. Representation training not only makes the athletes get twice the result with half the effort in the climbing process but also increases the information reserve and develops their intelligence. ## 2.1. Transcranial Electrical Stimulation and Sports Currently, TES is more frequently used in the rehabilitation of patients with neurological impairment or psychiatric disorders and has good belongings on the recovery of brain injury, mood regulation, and improvement of cognitive function. In recent years, some researchers have gradually applied TES to the field of sports science to improve human muscle coordination in sports and enhance human sports performance by increasing the connection between the brain and nerves and muscles, such as improving body balance, enhancing endurance performance, relieving sports fatigue, and improving muscle strength and motor learning ability. 
Improve body balance ability is one of the main physical qualities of humans, which refers to the ability to maintain body posture even during exercise or under external forces, and it is related to body structure, muscle coordination, and the regulation of brain tissue involved in balance [9–12]. It was found that 1 mA tDCS anodal stimulation of the motor cortex of healthy elderly people for 20 min per day resulted in improved gait and dynamic balance after 5 d of continuous intervention and was maintained for 1 week. Stimulation of the sensorimotor area of healthy adults with high precision tDCS for 20 min better the static balance ability of the subjects. Body balance is one of the basic abilities that athletes possess, especially in nonperiodic events, and directly affects the performance of athletes’ skills and physical abilities, etc. Whether tDCS can improve dynamic and static balance in elite athletes needs further study. Enhance endurance performance and/or relieve exercise and fatigue. Endurance is the body’s ability to perform muscle activity for long periods of time and is the basis for improving qualities such as speed and strength. The goodness of endurance directly affects human athletic performance. Currently, some studies have investigated whether tDCS intervention in the cerebral cortex has a positive effect on neurological and muscular fatigue recovery [13].The exact neurophysiological mechanisms of transcranial direct current action can be divided into the following mainstream views: altering cortical excitability, increasing synaptic plasticity, altering local cerebral blood flow, and regulating local cortical and brain network connections. Changing cortical excitability is as follows: tDCS changes the excitability of the brain by stimulating the membrane potential of neurons, causing depolarization of the anode to lower the threshold of action potential generation, and increasing the excitability of the anodal cortex; [14–16]. Increase synaptic plasticity is as follows: tDCS performs subthreshold stimulation of the resting membrane potential of neurons, which induces the expression of N-methyl aspartate receptors and the release of y-aminobutyric acid, while NMDA is involved in synapse formation, increasing synaptic plasticity, which in turn produces long-duration inhibitory or enhancing effects. Alteration of local cerebral blood flow is as follows: when the tDCS anode acts on the dorsolateral prefrontal cortex, it increases cerebral blood flow in the stimulated area and decreases cerebral blood flow under the cathode, which is highly correlated with the MEP amplitude at the stimulation location. Modulation of local cortical and brain network connectivity is as follows: EEG and functional magnetic resonance imaging studies revealed that anodal tDCS stimulation of area M1 significantly increased functional connectivity in the premotor, motor, and sensorimotor areas within the stimulated hemisphere and induced changes in connectivity between the left and right hemispheres, further corroborating that tDCS induces functional brain connectivity. tACS stimulation is dependent on the stimulation. The mechanisms of tACS action can be classified as follows: exogenous oscillations induce endogenous oscillations in the brain, affecting synaptic plasticity to regulate brain function. 
When tACS acts on the brain, part of the current reaches the cerebral cortex and changes the membrane potential of dendrites or axons in an oscillatory manner, making it possible for neurons to develop action potentials. When the frequency of tACS is the same as the endogenous frequency, resonance occurs, causing excitation of neurons in the brain; when tACS is stimulated at a higher frequency, it can trigger neuronal oscillations in a wider frequency range. Stimulation at specific frequencies can also drive oscillatory frequencies within the brain and induce synchronous oscillations in neurons with rhythms imposed by tACS on specific cortices. tACS modulates brain function by affecting synaptic plasticity. When tACS causes presynaptic action potentials to precede postsynaptic action potentials, it produces long-lasting excitation of brain function, and when it causes postsynaptic potentials to precede presynaptic potentials, it causes long-lasting inhibition of brain function, a phenomenon that has been confirmed by many studies. ## 2.2. Psychological Analysis of Rock Climbing Rock climbing is a kind of mountaineering sport, which fully exploits the human body’s climbing potential, and the climber relies on the coordination of hands, feet, waist, and muscle functions and uses gouging, grasping, bracing, and other means to operate the body upward. Both man-made and natural rock walls can be climbed, with man-made rock walls having the most participants. According to the psychological requirements of rock climbing, climbers must perform well in specialized psychological quality training to improve their capacity for psychological adjustment and obstacle surmounting [17–19]. The sport of rock climbing demands exceptional physical, intellectual, and psychological qualities.Rock climbing is a sport with long time, high density, and large volume of transportation. The process of rock climbing puts forward extremely high requirements on athletes’ physical ability, and athletes need strong physical ability, intelligence, and good psychological quality to complete a complete climb. In the process of rock climbing, the athletes’ psychological ability is extremely important. In the process of rock climbing, due to the rapid consumption of physical energy, it often causes a rapid decline in the athletes’ functional reserve and central nervous fatigue, which leads to a series of bad emotions such as inattention, irritability, and decrease in the level of will. If these bad emotions deteriorate further, it will cause the athletes to move slowly, have stiff muscles, and panic in their hands and feet. These bad emotions, if further worsened, will cause the athletes to move slowly, muscle stiffness, and hands and feet panic, affecting the completion of rock climbing. In serious cases, it will lead to sports injury or even danger. Therefore, the special psychological training for rock climbers is extremely important, through effective psychological training, so that athletes can effectively psychologically regulate themselves in the process of rock climbing and help them maintain emotional stability [20, 21]. We know that rock climbing has a certain degree of danger, and the psychology of fear in the procedure of rock climbing is caused by a variety of reasons. One of the most important factors is the climber’s understanding of the degree of obstacles and the climber’s level of trust in protective events and protection personnel. 
To train and improve climbers to overcome psychological barriers, it is necessary to find the cause and prescribe the right remedy: first, to make climbers fully understand the degree of difficulty of this climb and some related knowledge and second, to make climbers do good communication with staff to enhance their trust in protection workers and safety measures.Concentration training is extremely important for rock climbers. Only by concentrating on one goal can the climber not be disturbed by the external environment and internal factors during the climbing process, not be distracted during the climbing, and reach the summit successfully. The concentration of attention training generally has two forms: one is the muzzle training method: the so-called muzzle training refers to the trainer in the usual training by issuing some volume weak, just can barely hear the sound, and let the trainee to complete the task, and such a weak voice muzzle needs athletes to focus on a high degree of attention to complete. Second is the stopwatch training method [22]. The so-called stopwatch training is to allow athletes to focus on the rotation of the stopwatch, record the time of each attentiveness, in the training time lengthened one by one, until the end of inattention, and then repeat the training in accordance with this time standard. Training regulates arousal level in rock climbing; there are two kinds of training to regulate arousal level in rock climbing: one is to increase the arousal level of AH, and the other is to decrease, the arousal level of sex. Arousal level refers to the physiological activation of the muscles in different degrees and states, and the state of arousal level is directly related to the level of athletes. The human body can change and maintain the excitability of the brain’s nervous system through arousal and use this state of arousal to maintain a high level of concentration and provide conscious energy to the muscles. In general, in sports that are mainly speed and strength-based, a higher level of arousal is required, while in sports that are mainly regulated by small muscle groups and have complex tasks. Exercise classes that require coordinated coordination require a lower level of arousal [23].The so-called representational training refers to the training to promote the athletes to improve their climbing skills and technical level of play; in the representational training, coaches can make the athletes form the representational movement in the brain consciousness through verbal explanation, video images, video materials, and other means, so that the athletes can train through the full imagination. The main purpose of representational training is to allow athletes to train in actual combat before. In addition, representational movement also includes the reproduction of previously learned technical movements after training, through the recollection and reproduction of movements in the brain, the wrong movements can be corrected, and the correct movements can be consolidated. In the process of representational training, athletes can categorize and organize the decomposed movements one by one in their mind and make corrections and improvements, so that they can complete the whole climbing process independently in their own mind. Representation training not only makes the athletes get twice the result with half the effort in the climbing process but also increases the information reserve and develops their intelligence. ## 3. Methods ### 3.1. 
Model Architecture Decoding and analyzing the event-related desynchronization/event-related synchronization generated by the conceptual activity of rock climbing to determine the user’s motor intention and brain state are the basis for the implementation of a brain-machine interface based on the psychology of rock-climbing. Users gain the ability to actively regulate their sensorimotor rhythms to give control commands in the psychological brain-machine interface based on rock climbing. The modulation patterns induced by psychomotor rhythms in the brain can be used as control input signals for the BCI, and through learning and training, climbers can autonomously generate the corresponding EEG patterns. The model structure diagram of the psycho-brain-computer interface for rock climbing is shown in Figure2. The system is similar to other brain-computer interface systems and is generally composed of the following parts: signal acquisition part, signal preprocessing, feature extraction, pattern classification, and output module. Signal acquisition part is as follows: the main role is to record the EEG signals that the subjects go through the psychological experiment paradigm of rock-climbing exercise and thus generate. Signal preprocessing is as follows: the preprocessing stage is mainly to remove the noise from the EEG signal, usually using a band-pass filter, and to remove the electrooculography and motion artifacts using various methods as needed. Feature extraction is as follows: since the EEG signals of climbers during the mental activity of rock-climbing sports are mixed with those generated by other neural activities and are overwhelmed by a large amount of spontaneous EEG, the feature signals related to the mental activity of rock-climbing sports need to be extracted from these mixed EEG signals by reducing the dimensionality of the EEG signals so as to extract the relevant features. Pattern classification is as follows: the extracted to feature information is analyzed and judged, so that these features can be decoded into various instructions or parsed into the user’s intention of rock-climbing sports psychology. Output module includes two parts: control system and feedback system. There are some brain-machine interfaces based on rock-climbing psychology aiming at helping paralyzed climbers to perform some daily activities, and then the extracted feature information will be decoded into various control commands to control external devices, such as robots, wheelchairs, and mice. There are also some brain-machine interfaces based on the psychology of rock climbing, whose purpose is to help climbers with neurological damage to carry out neural pathway rehabilitation or healthy climbers to improve their athletic ability through the psychological training of rock-climbing, without controlling external devices, and then the decoded commands will be analyzed and returned to the climbers through the feedback system.Figure 2 Model structure. ### 3.2. Preprocessing In this paper, after the EEG data are acquired, the calculation of subject-specific bandr2 is performed in the preprocessing stage for the determination of the subject’s band-pass filter. r2 is calculated as follows. (1)r2=N1N2N1+N2MEANP1−MEANP2STDP1∪P22.In the equation,N1 and N2 are the number of tasks included in the EEG data, respectively, N1 represents the number of trails for the left-hand motor imagery task, and N2 represents the number of trails for the left-hand motor imagery task. 
P1 and P2 are the power spectra of the EEG data for the motor imagery task, P1 represents the power spectrum of the left-hand motor imagery EEG data, and P2 represents the power spectrum of the right-hand motor imagery EEG data. The larger the value of r2, the greater the energy difference between the EEG data of left- and right-handed motor imagery tasks in that frequency band. According to the value of r2, a suitable band-pass filtering band is selected to perform specific band-pass filtering on the motor imagery EEG data, followed by CSP feature extraction and LDA pattern classification. ### 3.3. Feature Extraction The flow of signal data feature extraction is shown in Figure3. For the together EEG signals, there are three main features: time domain features, frequency domain features, and spatial domain features, and we need to choose the corresponding feature extraction methods for dissimilar features. For example, spatial domain features are usually extracted by choosing spatial domain filters—Common Spatial Pattern (CSP), while frequency domain features are generally extracted by power spectrum analysis and some other wavelets transform, sample entropy method, etc. Each of these methods has its own advantages and disadvantages, as shown in Table 1.Figure 3 Signal data feature extraction process.Table 1 Advantages and disadvantages of different feature extraction methods. Feature extraction methodAdvantagesDisadvantagesPower spectrum analysis methodSimple algorithm, easy to operateLow resolution, inability to display brain telecommunication nonlinear information of the numberWavelet transformMultiple resolutions for more detailed characterize the signalInflexible algorithmSample entropy methodStable algorithm, low computational effortCannot express the time-frequency characteristics of the signalCommon spatial pattern methodGood performance in feature extraction for binary classificationRequires multiple leads for analysis and is susceptible to noise interferenceIn this study, the purpose is to extract the two characteristic signals of left-handed motion and right-handed motion generated by the user’s motion imagination, and using the CSP algorithm is more in line with our requirements. The CSP algorithm is to calculate all channels, and we know that the variance represents the energy, due to the phenomenon of ERD/ERS generated by motion imagination, and it is easier to use the CSP algorithm. Because of the ERD/ERS phenomenon, it is easier to extract the features with the greatest difference in energy to classify the motor imagery intention. ### 3.4. CSP Algorithm The CSP algorithm is a feature extraction algorithm for two arrangement tasks. The computational procedure of the CSP algorithm is as follows: assume thatX1 and X2 are the matrices of the single evoked EEG signals under the same experimental conditions of the left and right handed two motor imagery responsibilities, respectively, and the matrix dimension N∗T, N is the number of channels of EEG signals, and T is the number of sampling points of EEG signals, as the number of channels of EEG signals commonly used is 8 conductors, 16 conductors, 32 conductors, 64 conductors, and 128 conductors. N must be less than T to satisfy the condition of calculating the covariance matrix. Specify that Y1 and Y2 are two types of tasks for left-handed motion and right-handed motion, and X1 and X2 are represented as follows, respectively, when noise interference is ignored. 
(2)X1=A1AmY1YM,X2=A2AmY2YM.Assume that the source signals of the two tasks,Y1 for left-handed motion and Y2 for right-handed motion, are linearly independent of each other, Yu represents the common source signal possessed by these two tasks, Y1 consisting of m1 sources and Y2 consisting of m2 sources, then A1 consists of m1 covariance patterns associated with X1, A2 consists of m2 covariance patterns associated with X2. The purpose of the CSP algorithm is to design a spatial filter parameter to obtain the best projection matrix W. The EEG signals are passed through this spatial filter to obtain new signals where one has the largest variance, the other has the smallest variance, and then the two types of signals are classified by a classification algorithm. Calculate the covariance matrix of X1 and X2, respectively. (3)R1=X1X1TtrX1X1T,R2=X2X2TtrX2X2T,r denotes the trace of the matrix, which is the sum of the diagonal elements of the matrix X1X1T. Here, R1 and R2 are the covariance matrices for a single trial, and then the respective average covariance matrices R1 and R2 are calculated based on the total number of trials for each of the left- and right-hand tasks, denoted as n1 and n2: (4)R1¯=1n1∑i=1n1R1i,R2¯=1n2∑i=1n2R2i.Calculate the mixed-space covariance matrixR. (5)R=R1¯+R2¯.Since the obtained mixed-space covariance matrixR is a positive definite matrix, the eigenvalue decomposition of the obtained mixed-space covariance matrix R is performed according to the singular value theorem as follows. (6)R=UλUT.λ denotes the diagonal matrix composed of eigenvalues arranged in descending order, and U denotes the matrix composed of eigenvectors corresponding to the decomposed eigenvalues, resulting in the whitening transformation matrix P. (7)P=1λUT.For the formal experimental data, to avoid transient abrupt changes in the EEG signal caused by body movements, its variance is calculated and normalized for the eigensignal obtained through the spatial filter, and then the eigenvectors are extracted as follows.(8)Zi=WXi,fi=varZi∑varZi. ### 3.5. Pattern Classification To interpret the subject’s motor imagery intention and to differentiate the brain activity caused by left-handed and right-handed motor imagery activities, the recovered features from the EEG signal must be sent to a classifier for pattern classification after the signal has been retrieved. The algorithms used to categorize the extracted features into patterns are both linear and nonlinear, and the linear algorithms include linear discriminant classifiers and linear classifiers of Marxian distance, among others. Based on the above mentioned and the characteristics of motor imagery EEG signals.LDA is an effective feature extraction method that can classify data and compress the feature space dimension. This section introduces the basic principle of LDA starting from the simpler class II LDA. First assume that there arem samples in the dataset, denoted as (9)D=x1,y1,x2,y2,⋯,xm,ym,where x is an n-dimensional vector and yi∈0,1. We define Njj∈0,1 as the number of samples of the j-th class and Njj∈0,1 as the set of samples of the j-th class, and then the mean vector of samples of the j-th class can be expressed as (10)μj=1Nj∑x∈Xjx,j∈0,1.The covariance matrix of thej-th class of samples can be expressed as (11)Σj=∑x∈Xjx−μjx−μjT,j∈0,1.The projection vector is denoted byw. Then, any sample x becomes wTxi after projection. 
As mentioned above, the purpose of the LDA algorithm is to make the distance between similar data as small as possible, and the distance between different classes of data as large as possible. (12)G1=wTΣ0w+wTΣ1w.The distance between samples of different categories is expressed as the square of the second parametric number, as follows:(13)G2=wTμ0−wTμ122.In summary, the optimization objective of the LDA algorithm is(14)w∗=argmaxwG2G1=wTμ0−wTμ122wTΣ0w+wTΣ1w=wTμ0−μ1μ0−μ1TwwTΣ0+Σ1w. ## 3.1. Model Architecture Decoding and analyzing the event-related desynchronization/event-related synchronization generated by the conceptual activity of rock climbing to determine the user’s motor intention and brain state are the basis for the implementation of a brain-machine interface based on the psychology of rock-climbing. Users gain the ability to actively regulate their sensorimotor rhythms to give control commands in the psychological brain-machine interface based on rock climbing. The modulation patterns induced by psychomotor rhythms in the brain can be used as control input signals for the BCI, and through learning and training, climbers can autonomously generate the corresponding EEG patterns. The model structure diagram of the psycho-brain-computer interface for rock climbing is shown in Figure2. The system is similar to other brain-computer interface systems and is generally composed of the following parts: signal acquisition part, signal preprocessing, feature extraction, pattern classification, and output module. Signal acquisition part is as follows: the main role is to record the EEG signals that the subjects go through the psychological experiment paradigm of rock-climbing exercise and thus generate. Signal preprocessing is as follows: the preprocessing stage is mainly to remove the noise from the EEG signal, usually using a band-pass filter, and to remove the electrooculography and motion artifacts using various methods as needed. Feature extraction is as follows: since the EEG signals of climbers during the mental activity of rock-climbing sports are mixed with those generated by other neural activities and are overwhelmed by a large amount of spontaneous EEG, the feature signals related to the mental activity of rock-climbing sports need to be extracted from these mixed EEG signals by reducing the dimensionality of the EEG signals so as to extract the relevant features. Pattern classification is as follows: the extracted to feature information is analyzed and judged, so that these features can be decoded into various instructions or parsed into the user’s intention of rock-climbing sports psychology. Output module includes two parts: control system and feedback system. There are some brain-machine interfaces based on rock-climbing psychology aiming at helping paralyzed climbers to perform some daily activities, and then the extracted feature information will be decoded into various control commands to control external devices, such as robots, wheelchairs, and mice. There are also some brain-machine interfaces based on the psychology of rock climbing, whose purpose is to help climbers with neurological damage to carry out neural pathway rehabilitation or healthy climbers to improve their athletic ability through the psychological training of rock-climbing, without controlling external devices, and then the decoded commands will be analyzed and returned to the climbers through the feedback system.Figure 2 Model structure. ## 3.2. 
Preprocessing In this paper, after the EEG data are acquired, the calculation of subject-specific bandr2 is performed in the preprocessing stage for the determination of the subject’s band-pass filter. r2 is calculated as follows. (1)r2=N1N2N1+N2MEANP1−MEANP2STDP1∪P22.In the equation,N1 and N2 are the number of tasks included in the EEG data, respectively, N1 represents the number of trails for the left-hand motor imagery task, and N2 represents the number of trails for the left-hand motor imagery task. P1 and P2 are the power spectra of the EEG data for the motor imagery task, P1 represents the power spectrum of the left-hand motor imagery EEG data, and P2 represents the power spectrum of the right-hand motor imagery EEG data. The larger the value of r2, the greater the energy difference between the EEG data of left- and right-handed motor imagery tasks in that frequency band. According to the value of r2, a suitable band-pass filtering band is selected to perform specific band-pass filtering on the motor imagery EEG data, followed by CSP feature extraction and LDA pattern classification. ## 3.3. Feature Extraction The flow of signal data feature extraction is shown in Figure3. For the together EEG signals, there are three main features: time domain features, frequency domain features, and spatial domain features, and we need to choose the corresponding feature extraction methods for dissimilar features. For example, spatial domain features are usually extracted by choosing spatial domain filters—Common Spatial Pattern (CSP), while frequency domain features are generally extracted by power spectrum analysis and some other wavelets transform, sample entropy method, etc. Each of these methods has its own advantages and disadvantages, as shown in Table 1.Figure 3 Signal data feature extraction process.Table 1 Advantages and disadvantages of different feature extraction methods. Feature extraction methodAdvantagesDisadvantagesPower spectrum analysis methodSimple algorithm, easy to operateLow resolution, inability to display brain telecommunication nonlinear information of the numberWavelet transformMultiple resolutions for more detailed characterize the signalInflexible algorithmSample entropy methodStable algorithm, low computational effortCannot express the time-frequency characteristics of the signalCommon spatial pattern methodGood performance in feature extraction for binary classificationRequires multiple leads for analysis and is susceptible to noise interferenceIn this study, the purpose is to extract the two characteristic signals of left-handed motion and right-handed motion generated by the user’s motion imagination, and using the CSP algorithm is more in line with our requirements. The CSP algorithm is to calculate all channels, and we know that the variance represents the energy, due to the phenomenon of ERD/ERS generated by motion imagination, and it is easier to use the CSP algorithm. Because of the ERD/ERS phenomenon, it is easier to extract the features with the greatest difference in energy to classify the motor imagery intention. ## 3.4. CSP Algorithm The CSP algorithm is a feature extraction algorithm for two arrangement tasks. 
## 3.4. CSP Algorithm

The CSP algorithm is a feature extraction algorithm for two-class tasks. Its computational procedure is as follows. Assume that $X_{1}$ and $X_{2}$ are the single-trial EEG signal matrices recorded under the same experimental conditions for the left-hand and right-hand motor imagery tasks, respectively. Each matrix has dimension $N\times T$, where $N$ is the number of EEG channels (commonly 8, 16, 32, 64, or 128) and $T$ is the number of sampling points; $N$ must be less than $T$ to satisfy the condition for computing the covariance matrix. Let $Y_{1}$ and $Y_{2}$ denote the source signals of the left-hand and right-hand motion tasks. Ignoring noise interference, $X_{1}$ and $X_{2}$ can be represented as

$$X_{1}=\begin{bmatrix}A_{1} & A_{M}\end{bmatrix}\begin{bmatrix}Y_{1}\\ Y_{M}\end{bmatrix},\qquad X_{2}=\begin{bmatrix}A_{2} & A_{M}\end{bmatrix}\begin{bmatrix}Y_{2}\\ Y_{M}\end{bmatrix}. \tag{2}$$

Assume that the task-specific source signals $Y_{1}$ and $Y_{2}$ are linearly independent of each other and that $Y_{M}$ represents the source signals common to both tasks. With $Y_{1}$ consisting of $m_{1}$ sources and $Y_{2}$ consisting of $m_{2}$ sources, $A_{1}$ consists of the $m_{1}$ spatial patterns associated with $X_{1}$ and $A_{2}$ consists of the $m_{2}$ spatial patterns associated with $X_{2}$. The purpose of the CSP algorithm is to design a spatial filter, that is, to obtain an optimal projection matrix $W$. Passing the EEG signals through this spatial filter yields new signals in which one class has the largest variance and the other the smallest, and the two classes are then separated by a classification algorithm. First, the normalized covariance matrices of $X_{1}$ and $X_{2}$ are calculated:

$$R_{1}=\frac{X_{1}X_{1}^{T}}{\operatorname{tr}\left(X_{1}X_{1}^{T}\right)},\qquad R_{2}=\frac{X_{2}X_{2}^{T}}{\operatorname{tr}\left(X_{2}X_{2}^{T}\right)}, \tag{3}$$

where $\operatorname{tr}(\cdot)$ denotes the trace of a matrix, i.e., the sum of its diagonal elements. Here, $R_{1}$ and $R_{2}$ are covariance matrices of single trials; the average covariance matrices are then computed over the $n_{1}$ and $n_{2}$ trials of the left- and right-hand tasks:

$$\bar{R}_{1}=\frac{1}{n_{1}}\sum_{i=1}^{n_{1}}R_{1}^{(i)},\qquad \bar{R}_{2}=\frac{1}{n_{2}}\sum_{i=1}^{n_{2}}R_{2}^{(i)}. \tag{4}$$

Next, the composite spatial covariance matrix $R$ is calculated:

$$R=\bar{R}_{1}+\bar{R}_{2}. \tag{5}$$

Since $R$ is positive definite, its eigenvalue decomposition follows from the singular value theorem:

$$R=U\lambda U^{T}, \tag{6}$$

where $\lambda$ denotes the diagonal matrix of eigenvalues arranged in descending order and $U$ denotes the matrix of the corresponding eigenvectors. This yields the whitening transformation matrix $P$:

$$P=\lambda^{-1/2}U^{T}. \tag{7}$$

The whitened average covariance matrices $S_{1}=P\bar{R}_{1}P^{T}$ and $S_{2}=P\bar{R}_{2}P^{T}$ share common eigenvectors, and projecting onto the eigenvectors associated with the largest eigenvalues of $S_{1}$ (equivalently, the smallest of $S_{2}$) gives the spatial filter matrix $W$. For the formal experimental data, to suppress transient abrupt changes in the EEG signal caused by body movements, the variance of each spatially filtered component is calculated and normalized, and the feature vector is extracted as follows:

$$Z_{i}=WX_{i},\qquad f_{i}=\frac{\operatorname{var}\left(Z_{i}\right)}{\sum_{j}\operatorname{var}\left(Z_{j}\right)}. \tag{8}$$
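The following is a compact, hypothetical NumPy sketch of the covariance-based steps in Eqs. (3)-(8). The trial data are synthetic, and the choice of keeping the first and last two filters is an illustrative convention rather than a detail taken from the paper.

```python
import numpy as np

def csp_filters(trials_a, trials_b, n_pairs=2):
    """Compute CSP spatial filters from two classes of trials, each trial (N, T)."""
    def avg_cov(trials):
        covs = [x @ x.T / np.trace(x @ x.T) for x in trials]   # Eq. (3)
        return np.mean(covs, axis=0)                           # Eq. (4)
    R1, R2 = avg_cov(trials_a), avg_cov(trials_b)
    evals, U = np.linalg.eigh(R1 + R2)                         # Eqs. (5)-(6)
    order = np.argsort(evals)[::-1]
    evals, U = evals[order], U[:, order]
    P = np.diag(evals ** -0.5) @ U.T                           # Eq. (7), whitening
    S1 = P @ R1 @ P.T
    w, B = np.linalg.eigh(S1)                                  # joint diagonalization
    B = B[:, np.argsort(w)[::-1]]
    W = B.T @ P                                                # full filter bank
    keep = np.r_[0:n_pairs, -n_pairs:0]                        # most discriminative filters
    return W[keep]

def csp_features(W, trial):
    Z = W @ trial                                              # Eq. (8)
    v = Z.var(axis=1)
    return v / v.sum()

# Toy usage: 20 synthetic trials per class, 8 channels, 500 samples.
rng = np.random.default_rng(2)
A = rng.standard_normal((20, 8, 500))
Bcls = rng.standard_normal((20, 8, 500)) * np.linspace(0.5, 1.5, 8)[None, :, None]
W = csp_filters(A, Bcls)
print(csp_features(W, A[0]))
```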
## 3.5. Pattern Classification

To interpret the subject's motor imagery intention and to differentiate the brain activity caused by left-hand and right-hand motor imagery, the features recovered from the EEG signal must be sent to a classifier for pattern classification. The algorithms used to classify the extracted features can be linear or nonlinear; the linear algorithms include linear discriminant classifiers and Mahalanobis distance classifiers, among others. Given the above considerations and the characteristics of motor imagery EEG signals, LDA is an effective method that can both classify the data and compress the dimensionality of the feature space. This section introduces the basic principle of LDA, starting from the simpler two-class case. First, assume that there are $m$ samples in the dataset, denoted as

$$D=\left\{\left(x_{1},y_{1}\right),\left(x_{2},y_{2}\right),\ldots,\left(x_{m},y_{m}\right)\right\}, \tag{9}$$

where $x_{i}$ is an $n$-dimensional vector and $y_{i}\in\{0,1\}$. We define $N_{j}$ ($j\in\{0,1\}$) as the number of samples of the $j$-th class and $X_{j}$ ($j\in\{0,1\}$) as the set of samples of the $j$-th class; the mean vector of the samples of the $j$-th class can then be expressed as

$$\mu_{j}=\frac{1}{N_{j}}\sum_{x\in X_{j}}x,\quad j\in\{0,1\}. \tag{10}$$

The covariance matrix of the $j$-th class of samples can be expressed as

$$\Sigma_{j}=\sum_{x\in X_{j}}\left(x-\mu_{j}\right)\left(x-\mu_{j}\right)^{T},\quad j\in\{0,1\}. \tag{11}$$

The projection vector is denoted by $w$; any sample $x_{i}$ becomes $w^{T}x_{i}$ after projection. As mentioned above, the purpose of the LDA algorithm is to make the distance between samples of the same class as small as possible and the distance between samples of different classes as large as possible. The within-class scatter of the projected samples is

$$G_{1}=w^{T}\Sigma_{0}w+w^{T}\Sigma_{1}w. \tag{12}$$

The distance between samples of different classes is expressed as the square of the L2 norm of the difference of the projected means:

$$G_{2}=\left\lVert w^{T}\mu_{0}-w^{T}\mu_{1}\right\rVert_{2}^{2}. \tag{13}$$

In summary, the optimization objective of the LDA algorithm is

$$w^{*}=\arg\max_{w}\frac{G_{2}}{G_{1}}=\frac{\left\lVert w^{T}\mu_{0}-w^{T}\mu_{1}\right\rVert_{2}^{2}}{w^{T}\Sigma_{0}w+w^{T}\Sigma_{1}w}=\frac{w^{T}\left(\mu_{0}-\mu_{1}\right)\left(\mu_{0}-\mu_{1}\right)^{T}w}{w^{T}\left(\Sigma_{0}+\Sigma_{1}\right)w}. \tag{14}$$
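Equation (14) is a generalized Rayleigh quotient, so its maximizer is proportional to $(\Sigma_{0}+\Sigma_{1})^{-1}(\mu_{0}-\mu_{1})$. The following hypothetical NumPy sketch computes this projection and a simple midpoint threshold for two toy classes; the helper names and toy data are assumptions for illustration.

```python
import numpy as np

def lda_fit(X0, X1):
    """Two-class LDA per Eqs. (9)-(14): returns projection w and decision threshold."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    S0 = (X0 - mu0).T @ (X0 - mu0)
    S1 = (X1 - mu1).T @ (X1 - mu1)
    w = np.linalg.solve(S0 + S1, mu0 - mu1)        # maximizer of the Rayleigh quotient
    threshold = 0.5 * (w @ mu0 + w @ mu1)
    return w, threshold

def lda_predict(w, threshold, X):
    # Class 0 if the projection falls on the mu0 side of the threshold, else class 1.
    return np.where(X @ w > threshold, 0, 1)

# Toy usage on two Gaussian blobs standing in for CSP feature vectors.
rng = np.random.default_rng(3)
X0 = rng.normal(loc=[1.0, 0.2, 0.1, 0.05], scale=0.2, size=(40, 4))
X1 = rng.normal(loc=[0.2, 1.0, 0.05, 0.1], scale=0.2, size=(40, 4))
w, thr = lda_fit(X0, X1)
print((lda_predict(w, thr, X0) == 0).mean(), (lda_predict(w, thr, X1) == 1).mean())
```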
## 4. Experiments and Results

This section describes the experimental setup, defines the transcranial electrical stimulation protocol, and analyzes the experimental results.

### 4.1. Experimental Setup

This experiment investigates the effects of transcranial electrical stimulation on motor imagery task performance using the motor imagery brain-machine interface. Changes in classification accuracy before and after transcranial electrical stimulation were analyzed, together with the ERD/ERS data, to investigate the electrophysiological effects of the stimulation on motor imagery. The experiment was single-blind: subjects were required to complete a total of four experiments, and the sequence of stimulation and control experiments was randomly arranged by the experimenters. To avoid carry-over of the after-effects of transcranial electrical stimulation, the interval between each pair of stimulation and control experiments was at least 24 hours.

Ten rock climbers (age range 23-25 years, mean age 24.4 ± 0.44 years) were recruited for this experiment; all participants were active climbers and received monetary compensation. During the experiment, subjects received a total of four MI task experiments and three transcranial electrical stimulations. The subjects first underwent an MI task experiment to determine the baseline level of each outcome for comparison with the subsequent experiments, followed by three randomized transcranial electrical stimulation sessions, each with a corresponding MI task experiment and EEG recording. The after-effects of transcranial direct current stimulation last no more than 10 min; the after-effects of transcranial alternating current stimulation have not been systematically studied, but the after-effect of 10 Hz, 1 mA alternating current stimulation lasts about 30 min. The interval between stimulations was therefore kept above 24 h so that after-effects could not influence the results of subsequent experiments. The experimental parameters are set as shown in Table 2.

Table 2: Experimental parameter settings.

| Parameter name | Parameter value |
|---|---|
| Initial learning rate | 0.002 |
| Optimizer | Adam |
| Initial momentum | 0.5 |
| Batch size | 30 |
| Maximum number of training sessions | 40 |

### 4.2. Transcranial Electrical Stimulation

(1) Electrical stimulation equipment: publicly available transcranial electrical stimulation equipment capable of three stimulation modes: tDCS, tACS, and pseudostimulation. (2) Stimulation intensity: (a) tDCS: all subjects uniformly received a 1 mA current, delivered through 5 × 7 cm saline-soaked sponge electrodes; (b) tACS: the stimulation intensity was determined according to each subject's stimulation threshold specificity as described, also using 5 × 7 cm saline-soaked sponge electrodes. (3) Stimulation position: the electrode placement is shown in Figure 4, with the anode placed over the motor-sensory M1 area and the cathode over the forehead. (4) Stimulation frequency: 10 Hz (mean of the μ rhythm) was used during the tACS experiment. (5) Stimulation time: for both tACS and tDCS, the current stimulation lasted 10 min.

Figure 4: Stimulation electrode location map.

After the raw EEG data were collected and preprocessed, the LDA algorithm was used to calculate the classification accuracy of the EEG data for the two motor imagery tasks, and the power spectra of the EEG data in the prestimulation and poststimulation phases were calculated to observe the changes in ERD/ERS. The diagram of the training process performance improvement is shown in Figure 5.

Figure 5: Schematic diagram of the performance improvement of the training process.

### 4.3. Experimental Results

For motor imagery classification accuracy, this paper used a one-way repeated measures ANOVA to test the significance of subjects' classification accuracy before and after the different experimental conditions; p < 0.05 was considered a significant difference. Among the ten subjects, one subject's data was excluded because of excessive noise, and the EEG data of the remaining nine subjects were used in the analysis. The accuracy of subjects' psychological recovery during rock climbing exercise is shown in Table 3.

Table 3: Accuracy of subjects' psychological recovery from rock climbing exercise (%).

| Subject | Before stimulation | After pseudostimulation | After tACS | After tDCS |
|---|---|---|---|---|
| Subject 1 | 91.25 | 93.21 | 98.75 | 96.88 |
| Subject 2 | 83.83 | 85.84 | 92.50 | 97.50 |
| Subject 3 | 74.68 | 76.25 | 86.25 | 80.00 |
| Subject 4 | 85.00 | 85.00 | 89.74 | 90.06 |
| Subject 5 | 82.49 | 85.00 | 80.00 | 86.25 |
| Subject 6 | 77.50 | 78.64 | 85.05 | 82.68 |
| Subject 7 | 77.61 | 71.25 | 81.27 | 87.49 |
| Subject 8 | 77.49 | 77.50 | 75.11 | 81.27 |
| Subject 9 | 88.77 | 91.28 | 94.93 | 96.31 |
| Mean ± standard deviation | 82.07 ± 5.67 | 82.66 ± 7.25 | 87.07 ± 7.63 | 88.71 ± 6.88 |

The average accuracy of task classification is shown in Figure 6. A one-way repeated measures ANOVA was performed on the four experimental groups, F(3, 24) = 10.436, p < 0.05, indicating a significant main effect. Pairwise comparisons of the four groups gave the following results: compared with the prestimulation experiment, the tACS and tDCS groups yielded p1 = 0.08 > 0.05 and p2 = 0.002 < 0.05, respectively; compared with the pseudostimulation experiment, they yielded p3 = 0.193 > 0.05 and p4 = 0.02 < 0.05, respectively.
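As an illustration of how such a repeated measures comparison can be set up, the following hypothetical sketch runs a one-way repeated measures ANOVA on accuracy values laid out like Table 3 using statsmodels. Only three subjects are included below as placeholders, so the printed F statistic will not match the study's reported value.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long-format accuracy table: one row per (subject, condition), as in Table 3.
records = []
conditions = ["before", "pseudo", "tACS", "tDCS"]
accuracy = {
    "S1": [91.25, 93.21, 98.75, 96.88],   # placeholder rows copied from Table 3
    "S2": [83.83, 85.84, 92.50, 97.50],
    "S3": [74.68, 76.25, 86.25, 80.00],
}
for subj, values in accuracy.items():
    for cond, acc in zip(conditions, values):
        records.append({"subject": subj, "condition": cond, "accuracy": acc})
df = pd.DataFrame(records)

# One-way repeated measures ANOVA: condition is the within-subject factor.
result = AnovaRM(df, depvar="accuracy", subject="subject", within=["condition"]).fit()
print(result)
```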
From the classification accuracy of the MI task in the four groups, it can be seen that the accuracy of the subjects after tDCS stimulation was significantly improved compared with the prestimulation and pseudostimulation groups, while the accuracy after tACS stimulation was higher than in the prestimulation and pseudostimulation groups but not significantly so. In terms of the overall level of improvement, tDCS was more effective than tACS in raising classification accuracy.

Figure 6: Average accuracy of task classification.

The accuracy of mental recovery in rock climbing without and with transcranial electrical stimulation is visualized in Figures 7 and 8, respectively. At the level of individual subjects, the accuracy of motor imagery was effectively improved in all nine subjects after tDCS stimulation. For tACS, subjects 5 and 8 showed a decrease in accuracy after stimulation compared with the prestimulation period, whereas subjects 1, 3, and 6 showed a larger increase in accuracy after tACS than after tDCS. Among all subjects in the experimental group, the highest accuracy was achieved by subject 1 after tACS stimulation (98.75%), and the lowest was that of subject 8 after tACS stimulation (75.11%).

Figure 7: Accuracy of mental recovery in rock climbing sports without transcranial electrical stimulation.

Figure 8: Accuracy of psychological recovery in rock climbing with transcranial electrical stimulation.
## 5. Conclusion

Rock climbing requires climbers to have good psychological qualities, including the ability to overcome anxiety, fear, and obstacles, as well as strength of will and the ability to concentrate. The psychological training of climbers should be tailored to the specific characteristics of each athlete so that training improves their psychological mechanisms.

The brain controls most human learning and movement. Although sports training focuses on physical performance and motor abilities, it ultimately depends on the cerebral cortex creating neural connections that help the nervous system govern the muscles more effectively. Transcranial electrical stimulation has shown efficiency in enhancing human motor performance. This paper further analyzes the effectiveness of transcranial electrical stimulation in improving psychomotor balance, psychomotor endurance, and psychomotor strength and in reducing motor fatigue.

---
*Source: 1008346-2022-07-13.xml*
2022
# A Survey on Deep Learning for Building Load Forecasting **Authors:** Ioannis Patsakos; Eleni Vrochidou; George A. Papakostas **Journal:** Mathematical Problems in Engineering (2022) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2022/1008491 --- ## Abstract Energy consumption forecasting is essential for efficient resource management related to both economic and environmental benefits. Forecasting can be implemented through statistical analysis of historical data, application of Artificial Intelligence (AI) algorithms, physical models, and more, and focuses on two directions: the required load for a specific area, e.g., a city, and the required load for a building. Building power forecasting is challenging due to the frequent fluctuation of the required electricity and the complexity and alterability of each building’s energy behavior. This paper focuses on the application of Deep Learning (DL) methods to accurately predict building (residential, commercial, or multiple) power consumption, by utilizing the available historical big data. Research findings are compared to state-of-the-art statistical models and AI methods of the literature to comparatively evaluate their efficiency and justify their future application. The aim of this work is to review up-to-date proposed DL approaches, to highlight the current research status, and to point out emerging challenges and future potential directions. Research revealed a higher interest in residential building load forecasting covering 47.5% of the related literature from 2016 up to date, focusing on short-term forecasting horizon in 55% of the referenced papers. The latter was attributed to the lack of available public datasets for experimentation in different building types, since it was found that in the 48.2% of the related literature, the same historical data regarding residential buildings load consumption was used. --- ## Body ## 1. Introduction According to the statistical review on world energy (2020) of the International Energy Agency (IEA) [1], until the end of the year 2019 (although the report for 2021 is available [2], it is not used due to the unique impact of COVID-19 in the Energy sector that, due to the authors’ opinion, requires further research and is beyond the scope of this paper) almost 80% of the consumed electricity, in global scale, was produced by nonrenewable sources [3] (Figure 1(a)) such as coal, lignite, oil, natural gas, and the annual variation, with few exceptions, ranged between 1% and 4% of an increase, corresponding to 1% to 2% in average [4] as illustrated in Figure 1(b). These two factors alone are sufficient to conclude that electric energy is a product with an expiration date, high production costs, and derivatives harmful to the environment (nuclear waste, greenhouse gas (GHG) emissions, etc.). The lack of sufficient means of long-term and large-scale storage of produced electricity, rendered both the problems of successfully balancing the supply–demand scale and power consumption forecasting. These are the most significant problems that the electricity production industry has to address on a daily basis, in order to avoid energy shortages that could lead to line of production problems in industrial compounds, services disruption, residential outages, etc., or produced energy waste [5]. 
Effective management of the produced electric energy, avoidance of excessive production, and minimization of energy wastage constitute the main keys toward sustainable energy consumption [6].Figure 1 (a) Global electricity share by fuel source [3]. (b) Annual change in primary energy consumption [4]. (a)(b)Over the past years, in multiple scientific papers and articles, the term “smart grid” was frequently used [7] to describe a flexible grid regarding the production and distribution of electric energy [8]. It should be noted that a search on Scopus by using the term “smart grid” indicated 44.598 research articles until 2022. This flexibility is due to the dynamic adjustment in power demand and the cost-effective distribution of electricity produced from various sources, e.g., solar, wind, nuclear. [9]. A grid is considered “smart” when it is able to monitor, predict, schedule, understand, learn, and make decisions regarding power production and distribution, carrying valuable information along with electricity [8]. The upgrade of power grid requires the infusion of AI [10]; recent studies suggest the training of Artificial Neural Networks (ANNs) to recognize multiple energy patterns [11].Regarding the prediction time frame, power consumption forecasting can be divided into three main categories.(i) Short–term, which covers a time frame of up to one day; it is useful in supply and demand (SD) adjustment [12].(ii) Medium–term, which covers a time frame from one day up to a year; it is useful in maintenance and outage planning [13].(iii) Long–term, for a time frame longer than a year; it is useful in infrastructure development planning [14].In published papers/scientific literature, so far, the applied methodology in power consumption forecasting, regarding an area or a specific building, casts in two main categories.(i) Physics principles-based models.(ii) Statistical and Machine Learning (ML) models.Building power consumption forecasting is considered more demanding compared to the area of power consumption forecasting [15]; however, more and more researchers are turning to the application of DL to solve such problems due to the reported promising performances [16–18].According to estimates, residential and commercial buildings are responsible for the consumption of 20–40% of the global energy production [19–22], having a high-energy wastage rate due to insufficient management and planning, age of the building, lack of responsible energy usage, etc. [23]. High consumption percentages have motivated researchers to develop new ways or enhance and improve the existing ones in better savings of electric energy, focusing mainly on the development of a solid strategy for flexible and efficient energy supply–demand management. The success of the latter strategy is highly dependent on timely and accurate energy consumption forecasting [24, 25].Therefore, a “smart” grid should be able to predict not only the total amount of energy needed at a certain period of time in a specific area but to calculate further and with precision the electricity consumption needs of a specific building based on the building’s characteristics such as Heating, Ventilation, and Air Conditioning (HVAC) devices, and historic consumption data. [26]. 
It is worth mentioning that a research work [27] estimated that an increase of 1% in forecasting accuracy could result in approximately £10 million per year fewer expenses for the power system of the United Kingdom.Towards this end, the contribution of the present work is focused on the methodological searching, collecting, analyzing, and presenting DL methods proposed in the years 2016–2022 for solving the problem of building power consumption prediction. To the best of the authors’ knowledge, literature reviews of DL methodologies and approaches, focusing on forecasting energy use in buildings are limited [28, 29]. This work aims to address the issue of building energy load forecasting and load prediction further and in depth, to update existing literature reviews on the same subject, to shed more light on the current status of DL performance in this area, and to highlight the main challenges that need to be addressed in the future. The main contributions of the current work compared to existing reviews on building load forecasting [28, 29] are the following.(i) The research methodology, analyzed in Section2, is not limited to specific publishers, resulting in a wider range of publications on the subject under study.(ii) This work focuses on research papers that propose methodologies based on the total building energy load and it is not targeted on specific utility loads, such as HVAC load.The rest of the paper is organized as follows. In Section2, the methodology of the contacted literature search is presented, along with statistical analysis and graph displays of the results. In Section 3, all the nondeep learning methodologies proposed or used to date in building load forecasting are presented briefly. In Section 4, the deep learning methodologies that have been proposed and tested so far towards addressing the building energy consumption prediction problem, along with their results and conclusions, are presented in chronological order. Section 5 provides details regarding the datasets used in the referenced literature of Section 4. Section 6 discusses new reflections and concerns regarding the building load forecasting problem that were raised from the conducted research. Finally, Section 7 concludes the paper. ## 2. Materials and Methods The methodology that was followed consisted of four main steps, as illustrated in Figure2.(1) Extensive research in the published literature by using Scopus, a certified, academically approved search engine, to establish a solid baseline for our research. Scopus complies with the most important research features of recall, precision, and importance. Regarding “importance”, Scopus is considered the most effective research engine for an overview of a topic [30], and therefore selected for the scope of this review article. The application of several different combinations of the keywords indexed above, such as “Deep learning” and “Building load forecasting”, resulted initially in 71 papers.(2) By studying the Title–Abstract–Conclusion parts of each paper, we were able to narrow down the relevant, to our subject, papers. In this step, the study focused on papers that could provide potential solutions/suggestions in wider and more generalized applications and benefit in the upcoming research. After this phase, 48 papers remained.(3) Extensive and meticulous study of the remaining papers and categorization according to the suggested solution/methodology. 
In this phase remained 34 papers.(4) In the final stage of our research, to achieve a more thorough review, we traced/researched the references in the papers of the previous step, which were not included in the results of step 1. This indicated 6 more papers, significant to our research, resulting in a total of 40 papers relevant to the subject.Figure 2 Steps involved in the collection of research papers for this review.Several useful conclusions emerged from this literature review, regarding the up-to-date engagement of the scientific community in the current subject. The matter of the application of deep learning methods to the prognosis of the electrical load of buildings is a subject that first appeared in 2016 and since then continues to demonstrate an upward trend when it comes to the interest of researchers, as it appears in Figure3. It should be noted that only until February of 2022, seven papers relevant to the subject have been published. The latter trend is probably due to the promising results of DL architecture application in research, compared to traditional load forecasting methods.Figure 3 Published research papers relevant to the subject per year.Regarding the type of building, residential, commercial, or multiple types, research reveals an almost similar interest in multiple and commercial buildings, as it can be seen in Figure4, while the higher interest is in residential buildings. We assume this has to do with the limited dataset availability than the sole interest in a specific type of building, since approximately 48.2% of the papers experimenting in residential and multiple building load forecasting are using the same well-known dataset containing residential load consumption data, as it will be further discussed in an upcoming section.Figure 4 Paper rates by building type.Out of the main three categories of forecasting time horizon, short-medium-long, and multiple (more than one category), the one that was mostly expended and researched, as it is shown in Figure5, is short-term forecasting. This is probably due to dataset resolution and wide spread of smart meters installed in an increasing number of buildings. We also assume that since building load forecasting is highly connected to building occupants’ behavior, it is probably better to predict power consumption in short-term, and adjust models accordingly, since it is more sensitive in capturing variations in building consumption patterns.Figure 5 Paper rates by forecasting horizon.Regarding the methods–architectures of deep learning that were proposed and tested, the Long-term Short-Term Memory (LSTM) based architectures summoned the greatest interest, as displayed in Figure6. This is due to the adaptability that they present in maintaining “memory” for a big number of steps and in their ability to apply numerous parameters in order to achieve better accuracy and performance compared to most of the other models. In Figure 5, the category “Hybrid” refers mainly to LSTM Convolutional Neural Network (LSTM–CNN) hybrid architectures, the category “AE” refers to autoencoders, and the category “Other” refers to the rest of researched architectures.Figure 6 Paper rates by deep learning architecture. ## 3. 
Nondeep Learning Methods in Building Load Forecasting The methods/technics/approaches regarding building energy load forecasting, according to literature, can be divided into three main categories [31].(1) White Box or Physical methods, which include all methods that address the problem by interpreting the thermal behavior of a building. These complex methods require a detailed description of the building’s geometry, they do not require training data, and their results can be interpreted in physical terms. There are several limitations in this methodology regarding forecasting accuracy and reliability [32, 33]. There are three main approaches in this category, and due to their complexity, there are several software solutions simplifying and automating these complex procedures:(i) Computational Fluid Dynamics (CFD), which is considered a three-dimensional approach [34, 35].(ii) Zonal, a simplified CFD, which is considered a two-dimensional approach [36].(iii) Nodal approach, which is the simplest of the three, and is considered a one-dimensional approach [37].(2) Black Box or Statistical methods using traditional Machine Learning. These methods do not require a detailed description of the building geometry; they require a sufficient amount of training data, and their results can be difficult to interpret in physical terms. The most commonly used methods are:(iv) Conditional Demand Analysis (CDA), based on the Multiple Linear Regression method [38].(v) Genetic Algorithms, based on Darwin’s Theory of evolution of the species [39, 40].(vi) Artificial Neural Networks (ANN), inspired by brain neurons [41, 42].(vii) Support Vector Machine (SVM), a classification or regression problem solving method [43, 44].(viii) Autoregressive Integrated Moving Average (ARIMA) [45].(3) Grey Box or Hybrid models, which combine methods from the previous categories, in an effort to overcome their disadvantages and utilize their advantages [46]. These methods require a rough description of the building geometry, a small amount of training data compared to the previous category, and their results can be interpreted in physical terms. ## 4. Deep Learning Methods in Building Load Forecasting As displayed in Figure4, the methodologies proposed for building load forecasting are categorized into three main categories, regarding the type of buildings under investigation. In this section, following the same categorization, the examined DL methodologies are presented. ### 4.1. Residential Building Load Forecasting The first DL-based methodology was proposed by Elena Mocanu et al. [47] in 2016 for load forecasting of a residential building. The examined DL models were: (1) Conditional Restricted Boltzmann Machine (CRBM) [48] and (2) Factored Conditional Restricted Boltzmann Machine (FCRBM) [49], with reduced extra layers. The performance of both models was compared to that of the three most used Machine learning methods of that time [50–52]: (1) Artificial Neural Network - Non-Linear Autoregressive Model (ANN-NAR), (2) Support Vector Machine (SVM), and (3) Recurrent Neural Network (RNN). The used dataset entitled “Individual Household Electric Power Consumption” (IHEPC) [53] was collected from a household at a one-minute sampling rate. It contained 2.075.259 samples in an almost four-year period (47 months) of time, collected between December 2006 and November 2010. 
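For orientation, a minimal, hypothetical pandas sketch of loading and resampling this dataset is shown below. It assumes the standard UCI text export (household_power_consumption.txt, semicolon-separated, with '?' marking missing values), so the file name and column names should be checked against the actual download.

```python
import pandas as pd

# Load the UCI "Individual household electric power consumption" export.
df = pd.read_csv(
    "household_power_consumption.txt",
    sep=";",
    na_values="?",                      # missing readings are encoded as '?'
    low_memory=False,
)
df["timestamp"] = pd.to_datetime(df["Date"] + " " + df["Time"], dayfirst=True)
df = df.set_index("timestamp").drop(columns=["Date", "Time"]).astype(float)

# Aggregate the 1-minute readings to hourly means, as many of the surveyed
# works do before windowing the series for model training.
hourly = df.resample("1H").mean()
train, test = hourly.loc[:"2009-12-31"], hourly.loc["2010-01-01":]
print(hourly[["Global_active_power", "Sub_metering_3"]].head())
```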
The attributes, from the dataset, being used in the experiments were: Aggregated active power (household avg power excluding the devices in the following attributes), Energy Submetering 1 (Kitchen–oven, microwave, dishwasher, etc.), Energy Submetering 2 (Laundry room–washing machine, dryer, refrigerator, and a light bulb), and Energy Submetering 3 (water heater and air condition device). In all the implementations, the authors used the first three years of the dataset for model training and the fourth year for testing. Useful conclusions extracted from that research were the following: all five tested models produced comparable forecasting results, with the best performance attained in experiments predicting the aggregated energy consumption, rather than the other three submetering. It is also worth mentioning that in all the scenarios for submetering prediction the results were the most inaccurate, which could be attributed to the difficulty to predict user behavior. The proposed FCRBM deep learning model outperformed the other four prediction methods in most scenarios. All methods proved to be suitable for near real-time exploitation in power consumption prediction, but the researchers also concluded that when the prediction length was increasing, the accuracy of predictions was decreasing, reposting prediction errors half of that of the ANN. The authors also concluded that even though the use of the proposed deep learning methods was feasible and provided sufficient results, it could be further improved to achieve better accuracy in prediction by fine-tuning, the addition of extra information to the models, such as environmental temperature, time, and more [47].In the same year, Daniel L. Marino et al. [6] proposed another methodology using the LSTM DL model. More precisely, the authors examined three models: (1) Standard Long Short Term Memory (LSTM) [54], a Recurrent Neural Network (RNN) designed to store information for long time periods, that can successfully address the vanishing gradient issue of RNN; (2) LSTM-Based Sequence-to-Sequence (S2S) architecture [55], a more flexible than standard LSTM architecture consisting of two LSTM networks in encoder-decoder duties, which overcomes the naive mapping problem observed in standard LSTM; and (3) Factored Conditional Restricted Boltzmann Machine (FCRBM) method proposed in [47]. This work revealed that the standard LSTM failed in building load forecasting and a naïve mapping issue occurred. The proposed deep learning model LSTM Sequence-to-Sequence (S2S) network, based on standard LSTM network, overcame the naïve mapping issue and produced, comparable results to FCRBM model and to the other methods examined in [47], by using the same dataset [53]. A significant conclusion of this research was that when the prediction length increased, the accuracy of predictions decreased. The researchers also concluded that in order to have a better grasp of the effectiveness of those methods and improve their generalization, more experiments with different datasets and regularization methods had to be conducted. It is worth mentioning that the used dataset was the same as in [47].The following year, in 2017, Kasun Amarasinghe et al. [56] proposed a methodology based on the Convolutional Neural Network (CNN) model. The novelty of this work was the deployment of a grid topology for feeding the data to the CNN model, for the first time in this kind of problems. 
The authors compared the performance of the CNN model with that of: (1) standard Long Short-Term Memory (LSTM), (2) the LSTM-based Sequence-to-Sequence (S2S) architecture, (3) the Factored Conditional Restricted Boltzmann Machine (FCRBM), (4) Artificial Neural Networks with a Non-Linear Autoregressive Model (ANN-NAR), and (5) the Support Vector Machine (SVM). This research reached the following conclusions: all the tested deep learning architectures produced better results in energy load forecasting for a single residence than SVM, and similar or more accurate results than a standard ANN. Moreover, the best accuracy was achieved by LSTM (S2S). The results of the tested CNN architectures were similar to each other, with slight variations; they performed better than SVM and ANN, and even though they did not outperform the other deep learning methods, they remained a promising architecture. A more general observation that puzzled the researchers was that the results in training were better than in testing. The researchers also concluded, based on their recent and previous work [6], that the tested deep learning methods [57, 58] produced promising results in energy load forecasting. They also suggested that weather data should be considered in future forecasting works, due to its direct relationship with consumption and the fact that it had not been used to date elsewhere than in [57]. Finally, they came to the same conclusion as in their previous work: in order to get a better grasp of the effectiveness of their methods and to improve their generalization, more experiments with different datasets and regularization methods had to be conducted. Once again, the same dataset [53] was utilized.

In [59], Lei et al. in 2018 introduced a short-term residential load forecasting model named Residual Conventional Fusion Network (RCFNet). The proposed model consisted of three branches of residual convolutional units (proximity, tendency, and periodicity modeling), a fully connected NN (weekday or weekend modeling), and an RCN that performs load forecasting based on the fusion of the previous outputs. The dataset used in this research [60] covered a two-year period (April 2012 to March 2014) and contained half-hour sampled data from smart meters installed in 25 households in Victoria, Australia. For this research, only the 8 households with the most complete data series were used. Approximately 91.7% (22 months) of the dataset was used for training and the remaining 8.3% (2 months) for testing. Six different variations of the proposed RCFNet model were compared to four baseline forecasting models: History Average (HA), Seasonal ARIMA (SARIMA), MLP, and LSTM, and all models were evaluated by calculating the root mean-square error (RMSE) metric. The researchers concluded that their model outperformed all other models and achieved the best accuracy, scalability, and adaptability.

In [61], Kim et al. in 2019 introduced a deep learning model for building load forecasting based on the Autoencoder (AE). The main idea behind this approach was to devise a scheme capable of considering different features for different states/situations each time, to achieve more accurate and explainable energy forecasts. The model consisted of two main components based on the LSTM architecture: a projector that, given the input data and the current energy demand, defined the state of the model, and a predictor that performed the building load forecasting based on that state.
The user of the system had a key role and could affect the forecasting through parameter and condition choices. In this work, a well-known dataset [53] was used; 90% of the dataset was used for training and 10% for testing the model. The authors compared their model to traditional forecasting methods, ML methods and DL methods, and they concluded that the proposed model, evaluated by mean square error (MSE), mean absolute error (MAE), and mean relative estimation error (MRE) metrics, outperformed them in most cases. The authors also concluded that their models’ efficiency was enhanced due to the condition adjustment, giving each time the situation/state of the model. The main contribution of the proposed work was that the model could both predict future demand and define the current demand pattern as state.The same research team of Kim et al. [62] in the same year, 2019, proposed a hybrid model, where two DL architectures, a CNN most commonly used in image recognition, and an LSTM, most commonly used in speech recognition and natural language processing, were linearly combined in a CNN–LSTM model architecture. For the experiments, a popular dataset [53] was used. The proposed model was tested in minute–hour–day–week resolutions and it was discovered that as the resolution increased, accuracy improved. The CNN–LSTM model evaluated by MSE-RMSE-MAE-mean absolute percentage error (MAPE) metrics, as compared to several other traditional energy forecasting ML and DL models and produced the most accurate results. It should be noted that the proposed method introduced first a combination of CNN architectures with LSTM models for energy consumption prediction. The authors concluded that the proposed model could deal with noise drawbacks and displayed minimal loss of information. The authors also evaluated the attributes of the used dataset and the impact that each of them had on building load forecasting. Submetering 3 attributes, representing water heater and air conditioner consumption, had the highest impact followed by Global Active Power (GPA) attribute. Another observation of this research was on the lack of available relevant datasets and that future work should focus on data collection and the creation of an automated method for hyperparameter choosing.In [63], Le et al. in 2019 presented a DL model for building load forecasting, named EECP-CBL. The architecture of the model was a combination of Bi-LSTM and CNN networks. For the contacted experiments, the authors utilized the IHEPC dataset [53]. For each model, 60% of the data (first three years) was used for training and the rest 40% of the data (last two years) was used for testing. The EECP–CBL model was compared to several state-of-the-art models at the time, used in the industry or introduced by other researchers for energy load forecasting: Linear Regression, LSTM, and CNN-LSTM. After data optimization, the models were tested for real-time (1 minute), short (1 hour), medium (1 day), and long (1 week) term load prediction, and they were evaluated by MSE, RMSE, MAE, and MAPE metrics. The authors concluded that the proposed model outperformed all other models in terms of accuracy. In this research, the researchers also focused on the time consumed for training and prediction of each model and concluded that while the prediction horizon increased, the time required for each additional task decreased for each model, with the proposed model outperforming all other, reporting as a disadvantage a comparatively higher training time. 
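To ground the horizon terminology used throughout (real-time, short, medium, and long term), here is a hypothetical sketch of turning an hourly load series into supervised windows and fitting a small LSTM. The window length, forecast horizon, layer sizes, and synthetic series are illustrative assumptions, not configurations from the surveyed papers.

```python
import numpy as np
import tensorflow as tf

def make_windows(series, window=168, horizon=24):
    """Frame a 1-D load series as (past week -> next day) supervised pairs."""
    X, y = [], []
    for t in range(len(series) - window - horizon + 1):
        X.append(series[t:t + window])
        y.append(series[t + window:t + window + horizon])
    return np.array(X)[..., None], np.array(y)   # (samples, window, 1), (samples, horizon)

# Synthetic hourly load with daily seasonality, standing in for a smart-meter feed.
hours = np.arange(24 * 365)
load = 1.0 + 0.3 * np.sin(2 * np.pi * hours / 24) + 0.05 * np.random.randn(len(hours))
X, y = make_windows(load)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=X.shape[1:]),
    tf.keras.layers.Dense(y.shape[1]),            # one output per forecast hour
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=64, verbose=0)
print(model.predict(X[:1], verbose=0).shape)      # (1, 24): next-day hourly forecast
```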
The research team also concluded that EECP–CBL model achieved peak performance on long-term building load forecasting and could be utilized in intelligent power management systems.In [64], Mehdipour Pirbazari et al. in 2020, in order to explore the extent and the way several factors can affect short-term (1-hour) building load forecasting, performed several experiments on four data-driven prediction models: Support Vector Regression (SVR), Gradient Boosting Regression Trees (GBRT), Feed Forward Neural Networks (FFNNs), and LSTM. The authors focused mainly on the scalability of the models and the prediction accuracy if trained solely in historical consumption data. The dataset covered a four-year time period (November 2011 to February 2014) and contained smart meter hourly data from 5.567 individual households in London, UK [65]. After data normalization and parameter tuning, the dataset utilized in this research focused on the year 2013 (fewer missing values, etc.) regarding 75 households, 15 each out of five different consumer -type groups classified by Acorn [66]. The four models were evaluated by Cumulative Weighted Error (CWE), based on RMSE, MAE, MASE, and Daily Peak Mean Average Percentage Error (DpMAPE) metrics. The researchers concluded that among the four models, LSTM and FFNN presented better adaptability to consumption variations, and resulted in better accuracy, but LSTM had higher computation cost and was clearly outperformed by CBRT, which was significantly faster. According to the reported results, other factors that affected load forecasting, for all four models, were the variations in usage, average energy consumption, and forecasted season temperature. Also, changes in the number of features (input lags) or a total of tested households (size of training dataset) did not affect similarly all models. The developed models were expected to learn various load profiles aiming towards generalization abilities and increase of models’ robustness.In [67], Mlangeni et al. in 2020 introduced, for medium and long-term building load forecasting, Dense Neural Network (DNN), a deep learning architecture that consisted of multiple ANN layers. The dataset used for this research contained, approximately, 2 million records from households in the eThekwini metropolitan area that contained 38 attributes and covered a five-year period, from 2008 to 2013. After data optimization and preparation, only 709.021 samples remained, which contained 7 attributes. For model training, 75% of the data was used, and for testing, the remaining 25%. In order to model load forecasting for the campus buildings of the University of KwaZulu, the authors assigned the household readings to rooms inside university buildings. The proposed architecture was compared to SVM and Multiple Regression (MR) models and was evaluated by RMSE and normalized RMSE (nRMSE) metrics. The authors concluded that the proposed model outperformed the rest of the models, presented good generalization ability, and could follow the data consumption trends. Dispersion of values in the data resulted in inaccurate estimations of large values, probably due to them being outliers. The authors also concluded that their method could be further improved by implementing more ML architectures and then testing in more datasets against other models or even extending from building load forecasting to wider metropolitan areas.In [68], Estebsari et al. 
in 2020, inspired by the high performance of CNN networks in image recognition, proposed a 2-dimensional CNN model for short-term (15-minute) building load forecasting. In order to encode the 1-dimensional time series into 2-dimensional images, the authors presented and experimented with four well-known methods: recurrence plots (RP) [69], Gramian angular field (GAF), and Markov transition field (MTF) [70]. For the experiments, the Boston housing dataset [53] was used; 80% of the data was used for training and the remaining 20% for testing the models. The performance of three different versions of the proposed CNN-2D model, one per image-encoding method, was compared to SVM, ANN, and CNN-1D models. All architectures were evaluated by RMSE, MAPE, and MAE metrics. The researchers concluded that the CNN-2D-RP model outperformed all other models, displaying the best forecasting accuracy; however, because of the image-encoded data, it had a significantly higher computational complexity, making it inappropriate for real-time applications.

In [71], Wen et al. in 2020 presented a Deep RNN with Gated Recurrent Unit (DRNN-GRU) architecture, consisting of five layers, for short- to medium-term load forecasting in residential buildings. The proposed model's prediction accuracy was compared, using MAPE, RMSE, Pearson correlation coefficient (PCC), and MAE metrics, to several DL (DRNN, DRNN-LSTM) and non-DL schemes (MLP, ARIMA, SVM, MLR). The dataset used in this research contained 15 months of hourly gathered consumption data and was obtained from the Pecan Street Inc. Dataport web portal [72], while weather data were obtained from [73]. For the experimental evaluation of the method, 20 individual residential buildings were selected from the dataset; the first year of the dataset (80%) was used for training and the remaining three months (20%) for testing. The load demand was calculated for the aggregated load of a group of ten individual residential buildings. The researchers drew several conclusions from their work. The proposed model achieved a lower error rate compared to the other tested methods, almost 5% lower than the LSTM-layer variation of DRNN. The researchers also stated that the DRNN-GRU model achieved higher accuracy than the rest of the models for the aggregated load of 10 residential buildings as well as for the individual load of residences. There were, however, some issues to be taken into consideration regarding the use of the proposed scheme for building load forecasting. The weather attributes, based on historical data, could affect the load forecasting accuracy, since the weather cannot be predicted with high certainty. In addition, the aggregated load forecasting accuracy was higher than that of the individual residence load, since the factor of uncertain human behavior decreases as the total number of residences rises.

In 2021, Jin et al. [74] developed an attention-based encoder-decoder network based on a gated recurrent unit (GRU) NN with Bayesian optimization for short-term power forecasting. The contributions of the proposed method were the incorporation of a temporal attention mechanism able to adjust the nonlinear and dynamic adaptability of the network, and the automatic verification of the hyperparameters of the encoder-decoder model, resulting in improved prediction performance. The network was verified on 24-hour load forecasting with data acquired from American Electric Power (AEP) [75].
The dataset included 26,280 data points from 2017 to 2020, with a sampling frequency of one hour; 70% of the data was used for training, 10% for validation, and 20% for testing. The model was also tested for the load prediction of four special days: Spring Equinox, Easter, Halloween, and Christmas. The proposed method demonstrated high performance and stability compared to nine other models, considering various indicators to reflect their accuracy (RMSE, MAE, Pearson correlation coefficient (R), NRMSE, and symmetric mean absolute percentage error (SMAPE)). The proposed model outperformed all nine models in all cases.

In [15], a hybrid DL model was proposed for household-level energy forecasting in smart buildings. The model was based on stacking fully connected layers and unidirectional LSTMs on bidirectional LSTMs. The proposed model could learn exceedingly nonlinear and convoluted patterns and correlations in the data that cannot be reached by the classical up-to-date unidirectional architectures. The accuracy of the model was evaluated on two datasets through score metrics in comparison with existing relevant state-of-the-art approaches. The first dataset included temperature and humidity in different rooms, appliance energy use, light fixture energy use, weather data, outdoor temperature and relative humidity, atmospheric pressure, wind speed, visibility, and dewpoint temperature data [76]. The second dataset was the well-known IHEPC set of the University of California, Irvine (UCI) Machine Learning repository [53]. The performance comparison indicated the proposed model as the one with the highest accuracy, evaluated with RMSE, MAPE, and MAE, even in the case of multistep-ahead forecasting. The proposed method could be easily extended to long-term forecasting. Future work could focus on additional household occupancy data and on speeding up the training time of the model in order to facilitate its real-time application.

In the same year, Shirzadi et al. [13] developed and compared ML (SVM, RF) and DL models (nonlinear autoregressive exogenous NN (NARX), recurrent NN (RNN-LSTM)) for predicting electrical load demand. Ten years of historical data for Bruce County in Canada were used [77], covering hourly electricity consumption reported by the Independent Electricity System Operator (IESO), fed with temperature and wind speed information [78] recorded from 2010 to 2019; nine years of data were considered for training and one year for testing. Results revealed that DL models could predict the load demand more accurately, in terms of MAPE and R-squared metrics, for both peak and off-peak values. The windowing size of the analysis period was reported as a limitation of the method, affecting the computation time significantly.

Ozer et al. in 2021 [79] proposed a cross-correlation (XCORR)-based transfer learning approach on LSTM. The proposed model was location-independent, and global features were added to the load forecasting. Moreover, only one month of original data was considered. More specifically, the training data were obtained from the Dataport website [72], while the building data for which the load demand was estimated were collected from an academic building for one month. The evaluation metrics RMSE, MAE, and MAPE were calculated. The performance of the proposed model was not compared to different models; however, the effect of transfer learning on LSTM was emphasized.
In the same year, Shirzadi et al. [13] developed and compared ML (SVM, RF) and DL models (nonlinear autoregressive exogenous NN (NARX) and recurrent NN (RNN-LSTM)) for predicting electrical load demand. Ten years of historical data for Bruce County in Canada were used [77], covering hourly electricity consumption from the Independent Electricity System Operator (IESO) combined with temperature and wind speed information [78] recorded from 2010 to 2019; nine years of data were used for training and one year for testing. Results revealed that the DL models could predict the load demand more accurately, in terms of the MAPE and R-squared metrics, for both peak and off-peak values. The windowing size of the analysis period was reported as a limitation of the method, significantly affecting the computation time.

Ozer et al. in 2021 [79] proposed a cross-correlation (XCORR)-based transfer learning approach on LSTM. The proposed model was location-independent, and global features were added to the load forecasting. Moreover, only one month of original data was considered. More specifically, the training data were obtained from the Dataport website [72], while the building data for which the load demand was estimated were collected from an academic building for one month. The RMSE, MAE, and MAPE evaluation metrics were calculated. The performance of the proposed model was not compared to different models; however, the effect of transfer learning on LSTM was emphasized. The method produced accurate prediction results, paving the way for energy forecasting based on limited data.
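The transfer-learning step can be sketched as pre-training on a data-rich source building and then fine-tuning on roughly one month of target-building data. Note that [79] additionally uses cross-correlation (XCORR) to relate source and target series, which is not shown here; the layer names, sizes, and synthetic data below are assumptions for illustration only.

```python
import numpy as np
import tensorflow as tf

def make_model(timesteps: int = 24) -> tf.keras.Model:
    return tf.keras.Sequential([
        tf.keras.Input(shape=(timesteps, 1)),
        tf.keras.layers.LSTM(64, name="lstm_base"),
        tf.keras.layers.Dense(1, name="head"),
    ])

source_x = np.random.rand(1000, 24, 1); source_y = np.random.rand(1000)
target_x = np.random.rand(120, 24, 1);  target_y = np.random.rand(120)  # ~1 month

model = make_model()
model.compile(optimizer="adam", loss="mse")
model.fit(source_x, source_y, epochs=2, verbose=0)       # pre-train on the source building

model.get_layer("lstm_base").trainable = False           # keep the learned temporal features
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mse")
model.fit(target_x, target_y, epochs=2, verbose=0)        # fine-tune on the scarce target data
```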
More recently, in January 2022, Olu-Ajayi et al. [80] presented several techniques for predicting annual building energy consumption using a large dataset of residential buildings: ANN, GB, DNN, Random Forest (RF), Stacking, kNN, SVM, Decision Tree (DT), and Linear Regression (LR) were considered. The dataset included building information retrieved from the Ministry of Housing, Communities and Local Government (MHCLG) repository [81] and meteorological data from the Meteostat repository [82]. In addition to forecasting, the effect of building clusters on model performance was examined. The main novelty of that work was the introduction of key building-design input features, enabling designers to forecast the average annual energy consumption at the early stages of development. The effects on model performance of both the building clusters on the selected features and the data size were also investigated. Results indicated DNN as the most efficient model in terms of R-squared, MAE, RMSE, and MSE.

In the same month of 2022, in [83], Yan et al. proposed a bidirectional nested LSTM (MC-BiNLSTM) model, combined with the discrete stationary wavelet transform (SWT), for more accurate energy consumption forecasting. The integrated approach enabled enhanced precision through the processing of multiple subsignals, while the SWT eliminated signal noise by signal decomposition. The UK-DALE dataset [84] was used for the evaluation of the model by calculating MAE, RMSE, MAPE, and R-squared. The proposed method was compared to cutting-edge algorithms from the literature, such as AVR, MLP, LSTM, GRU, and seven hybrid DL models (ensembles combining LSTM and SWT, nested LSTM (NLSTM) and SWT, bidirectional LSTM (BLSTM) and SWT, LSTM and empirical mode decomposition (EMD), LSTM and variational mode decomposition (VMD), LSTM and empirical wavelet transform (EWT), and a multichannel framework combining LSTM and CNN (MC-CNN-LSTM)). The proposed model reduced the MAPE to less than 8% in most cases. The method was deployed at the edge of a centralized cloud system that integrated the edge models and could provide a universal IoT energy consumption prediction to multiple households. It was limited by the difficulty of integrating multiple models for different household consumption patterns, raising data privacy issues.

In [85], a DL model based on LSTM was implemented. The model consisted of two encoders, a decoder, and an explainer. The Kullback-Leibler divergence was the selected loss function, which introduced the long- and short-term dependencies into the latent space created by the second encoder. The experiments used the IHEPC dataset [53]; the first ten months of 2010 were used for training and the remaining two months for testing. The performance of the model was examined through three evaluation metrics: MSE, MAE, and MRE. Results were compared to conventional ML models such as LR, DT, and RF, and to DL models such as LSTM, stacked LSTM, the autoencoder proposed by Li [86], the state-explainable autoencoder (SAE) [61], and the hybrid autoencoder (HAE) proposed by Kim and Cho [87]. The proposed model performed similarly to the state-of-the-art methods while additionally providing an explanation for the prediction results. Temporal information was considered, paving the way for explanations based not only on temporal but also on spatial characteristics.

In January 2022, Huang et al. [88] proposed a novel NN based on a CNN-attention-bidirectional LSTM (BiLSTM) for residential energy consumption prediction. An attention mechanism was applied to assign different weights to the neurons' outputs so as to strengthen the impact of important information. The proposed method was evaluated on the IHEPC [53] household electricity consumption data. Moreover, different input timestamp lengths of 10, 60, and 120 minutes were selected to validate the performance of the model. The RMSE, MAE, and MAPE evaluation metrics were calculated for the proposed model and for traditional ML and DL time-series prediction methods, such as SVR, LSTM, GRU, and CNN-LSTM, for comparison. Results indicated the proposed method as the one with the highest forecasting accuracy, yielding the lowest average MAPE. Moreover, the proposed model could avoid the influence of long input-sequence time steps and was able to extract information from the features that most affect the energy forecast. The authors suggested considering weather factors [89] and electricity price policy as supplementary data in future work.

The main characteristics of all aforementioned DL-based approaches are summarized in Table 1. Comparative performance with respect to state-of-the-art methods is reported throughout this review instead of a numerical performance report for each method, since different evaluation metrics are calculated in each referenced work (root mean squared error (RMSE), correlation coefficient R, p-value, mean absolute error (MAE), mean relative estimation error (MRE), etc.) and different datasets and time frames are selected, which makes the results not directly comparable.

Table 1: Characteristics of DL methods for the case of residential building load forecasting.

| Ref. | Year | DL model | Time frame | Building type | Better results than | Advantages & disadvantages | Dataset |
|---|---|---|---|---|---|---|---|
| [47] | 2016 | CRBM | Short-, medium-, long-term | Residential building | ANN, SVM, RNN | Suitable for near real-time exploitation; needs fine-tuning and extra information to the models | IHEPC [53] |
| [47] | 2016 | FCRBM | Short-, medium-, long-term | Residential building | ANN, SVM, RNN, CRBM | Suitable for near real-time exploitation; needs fine-tuning and extra information to the models | IHEPC [53] |
| [6] | 2016 | LSTM; LSTM (S2S) | Short-term | Residential building | LSTM (S2S) comparable to [47] | Naïve mapping issue for the standard LSTM, unable to forecast at one-minute resolution; S2S performed well in all cases; needs to be tested on different datasets | IHEPC [53] |
| [56] | 2017 | CNN | Short-term | Residential building | ANN, SVM; comparable to [6] | Results did not vary much across different architectures; needs to be tested on different datasets with weather data | IHEPC [53] |
| [59] | 2018 | RCFNet | Short-term | Residential building | HA, SARIMA, MLP, LSTM | Considers proximity, periodicity, tendency of consumption, and influence of external factors; good scalability; possibility to further optimize its architecture | [60] |
| [61] | 2019 | DL based on AE | Short-term | Residential building | LR, DT, RF, MLP, LSTM, stacked LSTM, AE model | Definition of current demand pattern as state; able to predict very complex power demand values with stable and high performance | IHEPC [53] |
| [62] | 2019 | CNN-LSTM | Short-, medium-, long-term | Residential building | LR, DT, RF, MLP, LSTM, GRU, Bi-LSTM, attention LSTM | High performance at high resolution; analysis of the household-appliance variables that influence the prediction | IHEPC [53] |
| [63] | 2019 | EECP-CBL | Short-, medium-, long-term | Residential building | LR, LSTM, CNN-LSTM | High training time; good results in all time frame settings | IHEPC [53] |
| [64] | 2020 | SVR, GBRT, FFNN, LSTM | Short-term | Residential building | SVR, GBRT, FFNN | Improved generalization ability; lower seasonal predictions due to load variations | [65] |
| [67] | 2020 | DNN | Medium-, long-term | Residential building | SVM, MR | Good generalization; unable to predict large values | N/A |
| [68] | 2020 | CNN-2D-RP | Short-term | Residential building | SVM, ANN, CNN-1D | Computationally complex; inappropriate for real-time applications | IHEPC [53] |
| [71] | 2020 | DRNN-GRU | Short-, medium-term | Various combinations of max 20 residential buildings | MLP, ARIMA, SVM, MLR, DRNN, DRNN-LSTM | High accuracy with limited input variables for aggregated and disaggregated load demand; good for filling missing data | [72, 73] |
| [74] | 2021 | Attention-based encoder-decoder (GRU) with Bayesian optimization | Short-term | Residential buildings | Dense, RNN, LSTM, GRU, LstmSeq, GruSeq, LstmSeqAtt, GruSeqAtt, BLstmSeqAtt | Temporal attention layer towards greater robustness; optimal prediction through optimization | AEP [75] |
| [15] | 2021 | Hybrid stacked bidirectional-unidirectional fully connected (HSBUFC) model architecture | Short-term | Residential buildings | LR, extreme learning machine (ELM), NN, LSTM, CNN-LSTM, ConvLSTM, bidirectional LSTM | Allows learning of exceedingly nonlinear and convoluted patterns and correlations in data; slow training time | IHEPC [53], [76] |
| [13] | 2021 | RNN-LSTM, NARX | Medium-term | Residential buildings | SVM, RF | Accurate prediction of peak and off-peak values; computationally complex due to windowing size | IESO [77, 78] |
| [79] | 2021 | LSTM | Short-term | Residential buildings | - | A model independent of location and introduction of global features; use of limited data | [72] & custom |
| [80] | 2022 Jan. | DNN | Medium-term | Residential buildings | ANN, GB, RF, Stacking, kNN, SVM, DT, LR | Able to predict at the early design phase; not sensitive to a specific building type; size of data affects the performance | MHCLG [81, 82] |
| [83] | 2022 Jan. | SWT-MC-BiNLSTM | Short-term | Residential buildings | AVR, MLP, LSTM, GRU, 7 hybrid DL models | A centralized approach; difficulties in integrating multiple models for different energy patterns | UK-DALE [84] |
| [85] | 2022 Jan. | LSTM with two encoders, decoder, and explainer | Short-, long-term | Residential buildings | Similar results to LR, DT, RF, LSTM, stacked LSTM, autoencoder of [86], SAE of [61], HAE of [87] | Use of temporal information; explainable prediction results; spatial characteristics not considered | IHEPC [53] |
| [88] | 2022 Jan. | CNN-attention-BiLSTM | Short-term | Residential buildings | SVR, LSTM, GRU, CNN-LSTM | Avoids the influence of long input-sequence time steps | IHEPC [53] |

### 4.2. Commercial Building Load Forecasting

In 2017, Chengdong Li et al. [86] proposed a new DL model combining Stacked Autoencoders (SAE) [90] and an Extreme Learning Machine (ELM) [91]. The role of the SAE was to extract features related to the building's power consumption, while the ELM was responsible for accurate energy load forecasting. Only the pretraining of the SAE was needed, while the fine-tuning was carried out by least-squares learning of the parameters in the last fully connected layer. The authors compared the performance of the proposed Extreme SAE model with: (1) a back-propagation neural network (BPNN); (2) a support vector regressor (SVR); (3) a generalized radial basis function neural network (GRBFNN), a generalization of the radial basis function neural network (RBFNN); and (4) multiple linear regression (MLR), a widely used statistical regression and prediction method. The dataset was collected from a retail building in Freemont (California, USA) at a 15-minute sampling rate [92]; it contained 34.939 samples that were aggregated into 17.469 30-minute and 8.734 1-hour samples. The effectiveness of the examined methodologies was measured in terms of MAE, MRE, and RMSE for the 30- and 60-minute time periods. The researchers concluded that the proposed approach presented the best performance in energy load consumption forecasting, especially with abnormal testing data reflecting uncertainties in the building power consumption; the achieved accuracy, from best to worst, was Extreme SAE > SVR > GRBFNN > BPNN > MLR. The authors also concluded that the proposed SAE and ELM combination was superior to a standard SAE, mainly because it avoids fine-tuning of the entire network with the iterative BP algorithm, which could speed up the learning process and contribute significantly to the generalization performance. The ELM sped up the training procedure, avoiding iterations, and boosted the overall performance thanks to its deeper architecture and improved learning strategies.
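The key property of the ELM readout in the Extreme SAE is that only the last layer is learned, in closed form, by least squares. A minimal sketch follows, assuming a random nonlinear projection as a stand-in for the SAE-extracted features; the layer sizes and data are illustrative, not the configuration of [86].

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression data; a stand-in for SAE-extracted consumption features.
X = rng.random((500, 10))                     # 500 samples, 10 inputs
y = X @ rng.random(10) + 0.05 * rng.standard_normal(500)

W_hidden = rng.standard_normal((10, 64))      # random, untrained hidden weights
b_hidden = rng.standard_normal(64)
H = np.tanh(X @ W_hidden + b_hidden)          # hidden-layer activations

# ELM step: solve the output weights in closed form (least squares), no backprop.
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

y_hat = H @ beta
print(float(np.sqrt(np.mean((y - y_hat) ** 2))))  # training RMSE
```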
Widyaning Chandramitasari et al. [5] in 2018 proposed a model combining an LSTM network, used for the time-series forecasting, with a Feed-Forward Neural Network (FFNN) to increase the forecasting accuracy. The research focused on a time horizon of one day ahead with a 30-minute resolution, for a construction company in Japan. The proposed model was validated and compared against the standard LSTM and the Moving Average (MA) model used by a power supply company, and the effectiveness of the evaluated methodologies was measured by RMSE. The dataset covered a period of approximately one year and four months (August 2016 to November 2017) at a 30-minute resolution; additional time information considered in the experiments was the day, time, and season (low, middle, high). The authors concluded that separating the data into "weekday" and "all day" sets gave more accurate load forecasting results for weekdays. They also pointed out that the data analysis performed for forecasting should in each case be adapted to the type of client (residential, public, commercial, industrial, etc.).

In the same year, Nichiforov et al. [93] experimented on RNNs with LSTM layers, consisting of one sequence input layer, a layer of LSTM units in several configurations regarding the number of hidden units (from 5 up to 125), a fully connected layer, and a regression output layer. They compared the results for two different nonresidential buildings from university campuses, one in Chicago and the other in Zurich. The datasets used in their experiments were obtained from BUDS [94] and contained hourly samples over a one-year period; after data optimization, they resulted in two datasets of approximately 8.670 samples each. Results were promising, indicating that the method could be used in load management algorithms with limited overhead for periodic adjustments and model retraining.

The following year, the same authors in [95] experimented with the same dataset and the same RNN architectures, adding one more building located in New York. Useful conclusions extracted from both works were the following. The RNN architecture was a good candidate, showing promising accuracy for building load forecasting. The best performance, graded by the RMSE, coefficient of variation of the RMSE (CV-RMSE), MAPE, and MSE metrics, was achieved when the LSTM layer contained 50 hidden units, while the worst accuracy was observed when it contained 125 hidden units, for all buildings. DL-model testing in load forecasting has expanded in the past few years due to the availability of datasets and relevant algorithms, better and more affordable hardware and network modeling, and the joint efforts of industry and academic research teams leading to better results. Due to the complexity of the building energy forecasting problem (building architecture, materials, consumption patterns, weather conditions, etc.), experts' opinions in this domain could provide insights and guidance, along with further investigation and experimentation on a wide variation of models. The authors also suggested that on-site energy storage could tip the scale in favor of better energy management.
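Since the reviewed works are ranked by a recurring set of error metrics (RMSE, CV-RMSE, MAPE, MAE), minimal NumPy definitions are given below for reference; these follow the standard formulas, and individual papers may normalise or scale them differently.

```python
import numpy as np

def rmse(y, y_hat):
    return float(np.sqrt(np.mean((y - y_hat) ** 2)))

def cv_rmse(y, y_hat):
    return rmse(y, y_hat) / float(np.mean(y))      # coefficient of variation of RMSE

def mape(y, y_hat):
    return float(np.mean(np.abs((y - y_hat) / y))) * 100.0

def mae(y, y_hat):
    return float(np.mean(np.abs(y - y_hat)))

y     = np.array([10.0, 12.0, 11.0, 13.0])   # observed load
y_hat = np.array([ 9.5, 12.5, 10.0, 13.5])   # forecast load
print(rmse(y, y_hat), cv_rmse(y, y_hat), mape(y, y_hat), mae(y, y_hat))
```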
In 2019, Ljubisa Sehovac et al. [96] proposed the GRU (S2S) model [97], a simplified LSTM that maintains similar functionality. There are two main differences between the two models regarding their cells: (1) the GRU (S2S) has a single, all-purpose hidden state h instead of separate memory and hidden states, and (2) the input and forget gates are replaced by an update gate z. These modifications allow the GRU (S2S) model to train and converge in less time than the LSTM (S2S) model, while maintaining a sufficient hidden-state dimension and enough gates to preserve long-term memory. In this study, the authors experimented with power consumption forecasting in all time-frame categories (short, medium, long). The dataset used in the experiments was collected from a retail building at a 5-minute sampling rate; it contained 132.446 samples, covered a period of one year and three months, and included 11 features: Month, Day of Year, Day of Month, Weekday, Weekend, Holiday, Hour, Season, Temperature (°C), Humidity, and Usage (kW). The data were collected from "smart" sensors that are part of a "smart grid"; the first 80% was used for training and the remaining 20% for testing. The proposed method was compared to LSTM (S2S), RNN (S2S), and a deep neural network, and their effectiveness was measured by MAE and MAPE. The authors concluded that the GRU (S2S) and LSTM (S2S) models produced better accuracy in energy load consumption forecasting than the other two models. In addition, the GRU (S2S) model outperformed the LSTM (S2S) model and gave accurate predictions in all three cases. Finally, a significant conclusion, which verified the conclusions of related research [6, 47], was that as the prediction length increased, the accuracy of the predictions was expected to decrease.

Mengmeng Cai et al. [98] designed Gated CNN (GCNN) and Gated RNN (GRNN) models. In this research, they tested five different models in short-term (next-day) forecasting and compared them in terms of forecasting accuracy, ability to generalize, robustness, and computational efficiency. The tested models were: (1) GCNN1, a multistep recursive model that made one-hour predictions and applied them 24 times to predict a whole day; (2) GRNN1, the same as the previous but with an RNN model; (3) GCNN24, a multistep direct procedure that predicted the whole 24 hours at once; (4) GRNN24, the same as the previous but with an RNN model; and (5) SARIMAX, a non-DL method commonly used for time-series problems. The authors applied the five models to three different nonresidential buildings: Building A (Alexandria, VA, approx. 30.000 sqf, academic, dataset obtained from [99]), Building B (Shirley, NY, approx. 80.000 sqf, school, dataset obtained from [100]), and Building C (Uxbridge, MA, approx. 55.000 sqf, grocery store, dataset obtained from [100]). The datasets consisted of one-hour samples collected over a one-year period and contained meteorological data: temperature, humidity, air pressure, and wind speed. After data preprocessing (cleaning, segmentation, formation, normalization, etc.) to keep only the weekday samples, the researchers divided the remaining data into 90% training data, 5% validation data, and 5% testing data. Several useful conclusions were extracted. Building size, occupancy, and peak load mattered significantly in the results of GCNN1 and GRNN1, improving the accuracy of load prediction: as the number of people in a building rises, the uncertainty caused by each individual's behavior is averaged out, resulting in a more accurate prediction. Among GCNN1, GRNN1, and SARIMAX, the best performance was achieved by GCNN1, slightly poorer performance by GRNN1, and by far the worst by SARIMAX. In another experiment, GCNN24 outperformed GRNN24 and produced better accuracy (22.6% fewer errors compared to SARIMAX) and computational efficiency (8% faster compared to SARIMAX) than GCNN1, GRNN1, and SARIMAX, making the GCNN24 model the most suitable among the five for short-term (day-ahead) building load forecasting. As a more general conclusion, the researchers stated that DL methods fitted load forecasting better than previously used methods.
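To clarify the difference between the recursive (GCNN1/GRNN1) and direct (GCNN24/GRNN24) strategies described above, the following sketch contrasts the two with a trivial stand-in model; the stand-in forecaster and the toy series are illustrative assumptions, not the gated networks of [98].

```python
import numpy as np

history = np.sin(np.linspace(0, 8 * np.pi, 240))     # toy hourly load history

def one_step_model(window):
    """Stand-in for a trained one-hour-ahead model (recursive strategy)."""
    return 0.9 * window[-1] + 0.1 * window[-24]

def day_ahead_model(window):
    """Stand-in for a direct 24-output model (direct strategy)."""
    return 0.9 * window[-24:]

# Recursive strategy: predict 1 hour, append it, repeat 24 times.
window = history.copy()
recursive = []
for _ in range(24):
    nxt = one_step_model(window)
    recursive.append(nxt)
    window = np.append(window, nxt)

# Direct strategy: a single call returns the whole next day.
direct = day_ahead_model(history)
print(len(recursive), direct.shape)                  # 24 and (24,)
```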
In [101], Yuan Gao et al. in 2019 experimented with long-term (one-year) building load forecasting and proposed an LSTM architecture with an additional self-attention network layer [102]. The proposed model emphasized the inner logical relations within the dataset during prediction, and the attention layer was used to improve the ability of the model to convey and remember long-term information. The proposed model was compared to an LSTM model and a dense back-propagation neural network (DBPNN) and was evaluated for load forecasting accuracy by MAPE. All three models were applied to a nonresidential office building in China. The dataset used in this research contained 12 attributes (weather, time, energy consumption, etc.) as daily measurements spanning a two-year period. The main conclusion of this research was that the proposed method was able to address the issue of long-term memory and conveyed information better than the other two architectures, outperforming the LSTM by 2.9% and the DBPNN by 6.5%.
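A minimal sketch of placing a self-attention layer on top of an LSTM's sequence output, in the spirit of the architecture of [101]; the layer sizes, window length, and the use of the stock Keras Attention layer are illustrative assumptions rather than the published configuration.

```python
import tensorflow as tf

timesteps, n_features = 30, 12     # illustrative: 30 daily steps, 12 attributes

inputs = tf.keras.Input(shape=(timesteps, n_features))
seq = tf.keras.layers.LSTM(64, return_sequences=True)(inputs)

# Self-attention over the LSTM outputs: each step attends to every other step.
context = tf.keras.layers.Attention()([seq, seq])
pooled = tf.keras.layers.GlobalAveragePooling1D()(context)

outputs = tf.keras.layers.Dense(1)(pooled)   # next-period consumption
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")
model.summary()
```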
Benedikt Heidrich et al. [103] in 2020 proposed a combination of standard energy load profiles and CNNs, creating the Profile Neural Network (PNN). The proposed architecture consisted of three different profile modules (standard load profile, trend, and colorful noise) together with the use of CNNs, which, according to the authors, had never been proposed before. In this scheme, CNNs were used as data encoders for the second module (trend encoder) and the third module (external and historical data), in the prediction network of the third module (colorful noise calculation), and in the aggregation layer, where the results of the three modules were aggregated to perform the load forecast. The dataset used for the experiments resulted from merging two datasets: (a) historical load data gathered over a ten-year period from two different campus buildings (one with weak and one with strong seasonal variation) and (b) weather data obtained from Deutsche Wetterdienst (DWD) [104]. The merged dataset covered an eight-year period with one-hour resolution samples; 75% of the data was used for training and the remaining 25% for testing the models. In order to measure and better understand the performance of the PNN, the authors compared four variations of their model differing in time-window size (PNN0, PNN1 month, PNN6 month, and PNN12 month) to four state-of-the-art building load forecasting methods from the literature (RCFNet, CNN, LSTM, and stacked LSTM) and three naïve forecasting models (periodic persistence, profile forecast, and linear regression). All models were evaluated by the RMSE and MASE metrics and tested on short-term (one day) and medium-term (one week) building load forecasting. All PNN models besides PNN0 outperformed the rest of the tested models, and among them PNN1 achieved the best performance for both time horizons and both types of buildings. Regarding training time, the PNN models required the least time for both types of buildings in short-term forecasting but were outperformed by the CNN in medium-term forecasting. According to the authors, the extra time needed compared to the fastest model bought considerably better accuracy and was therefore an acceptable trade-off. The authors also concluded that the proposed model is flexible, since modules and encoders can be changed according to the case to achieve better results, and that it could also be used at a larger scale than a single building.

In [105], Sun et al. in 2020 introduced a novel deep learning architecture that combined input feature selection through the MRMR (Maximal Relevance Minimal Redundancy) criterion, based on Pearson's correlation coefficient, with an LSTM-RNN architecture. The dataset used for the short-term forecasting experiments covered one year of historical load data (2017) for three different types of buildings (office, hotel, and shopping mall), obtained from the Shanghai Power Department, while the weather-related data were collected from a local weather forecast website. To establish a baseline and demonstrate the proposed model's efficiency, the researchers conducted several experiments in which MRMR-based LSTM-RNN model variations competed against ARIMA, BPNN, and BPNN-SD forecasting models, evaluated with the RMSE and MAPE metrics. According to the results, the proposed model, and more specifically its two-time-step variation, outperformed all other models and provided the most accurate load forecasting results. The authors concluded that, due to the complexity of the building energy load prediction task, the right selection of input features plays a key role and, in combination with a hybrid prediction model, can produce more accurate results.
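A greedy relevance-minus-redundancy selection based on Pearson correlations, sketched below, conveys the general shape of an MRMR-style feature selection step; the scoring rule, feature count, and synthetic data are simplifying assumptions, not the exact criterion of [105].

```python
import numpy as np

def mrmr_select(X, y, k=3):
    """Greedy max-relevance / min-redundancy selection with Pearson correlation."""
    n_features = X.shape[1]
    relevance = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_features)])
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(n_features):
            if j in selected:
                continue
            redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1]) for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

rng = np.random.default_rng(2)
X = rng.random((200, 6))                     # e.g., weather and calendar features
y = 2 * X[:, 0] - X[:, 3] + 0.1 * rng.standard_normal(200)
print(mrmr_select(X, y, k=3))                # indices of the chosen inputs
```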
In [106], Gopal Chitalia et al. in 2020 presented their findings regarding deep learning architectures for short-term load forecasting, after experimenting with nine different DL models: an encoder-decoder scheme, LSTM, LSTM with attention, convolutional LSTM, CNN-LSTM, BiLSTM, BiLSTM with an attention mechanism, convolutional BiLSTM, and CNN-BiLSTM. The main idea was that RNNs with an attention layer could produce more robust and accurate results. All the above models were tested on five different types of buildings on two different continents, Asia and North America. Four out of the five datasets used in this research can be found in [100, 107, 108], while the weather data were collected from [109]. The authors investigated short-term building load forecasting from several aspects, including feature selection, data optimization, hyperparameter fine-tuning, learning-based clustering, and minimum dataset volume, with acceptable accuracy. All DL architectures were evaluated by RMSE, MAPE, CV, and the root-mean-square logarithmic error (RMSLE), providing a fair assessment of each building's load forecasting results. The researchers concluded that the implementation of the attention layer in RNNs increased the load forecasting accuracy of the model and could perform adequately across a variety of buildings, loads, locations, and weather conditions.

In January 2022, Xiao et al. [110] proposed an LSTM model to predict day-ahead energy consumption. Two data smoothing methods, Gaussian kernel density estimation and the Savitzky-Golay filter, were selected and compared. The data used in that work came from the Energy Detective 2020 dataset [111], including hourly consumption data from 20 office buildings and weather data from 2015 to 2017. The authors concluded that data smoothing could help enhance the prediction accuracy in terms of CV-RMSE; however, when the raw data were taken as the reference, the prediction accuracy decreased dramatically. A larger training set was recommended in the conclusions, if the computing cost is acceptable.

The main characteristics of the DL-based approaches of this section are summarized in Table 2.

Table 2: Characteristics of DL methods for the case of commercial building load forecasting.

| Ref. | Year | DL model | Time frame | Building type | Better results than | Advantages & disadvantages | Dataset |
|---|---|---|---|---|---|---|---|
| [86] | 2017 | Extreme SAE | Short-term | Retail building | SVR, GRBFNN, BPNN, MLR | Quicker learning speed and stronger generalization performance; does not consider periodicity of energy consumption | [92] |
| [5] | 2018 | LSTM-FFNN | Short-term | Construction company | LSTM, MA | Good energy consumption forecast of the next day for each 30 minutes | Data from a small power company in Japan |
| [93] | 2018 | RNN + LSTM layers | Medium-term | Commercial buildings | Various implementations of the reviewed model | A replicable case study; suitable for online optimization | [94] |
| [95] | 2019 | RNN + LSTM layers | Medium-term | Commercial buildings | Various implementations of the reviewed model | Tendency to overfit the input data and poor performance on the testing samples | [94] |
| [96] | 2019 | GRU (S2S) | Short-, medium-, long-term | Commercial buildings | LSTM (S2S), RNN (S2S), DNN | Accuracy decreases as the prediction length increases | Custom |
| [98] | 2019 | GCNN, GRNN | Short-term | Commercial buildings | Various implementations of the reviewed models and SARIMAX | Reduced forecasting error; able to handle high-level uncertainties; high computational efficiency | [99, 100] |
| [101] | 2019 | LSTM + self-attention layer | Long-term | Commercial buildings | LSTM, DBPNN | Resolved the problem of long-term memory dependencies | Custom |
| [103] | 2020 | PNN | Medium-term | Commercial buildings | LR, RCFNet, CNN, LSTM, stacked LSTM | Inserted statistical information about periodicities in the load time series | [104] |
| [105] | 2020 | LSTM-RNN + MRMR criterion | Short-term | Commercial buildings | ARIMA, BPNN, BPNN-SD | Feature-variable selection to capture distinct load characteristics | Shanghai Power Department |
| [106] | 2020 | LSTM + attention | Short-term | Commercial buildings | Encoder-decoder, LSTM, convolutional LSTM, CNN-LSTM, BiLSTM, BiLSTM with attention, convolutional BiLSTM, CNN-BiLSTM | Robust against different building types, locations, weather, and load uncertainties | [100, 107, 108] |
| [110] | 2022 Jan. | LSTM + data smoothing | Short-term | Office buildings | - | Prediction would decline for certain periods; data smoothing could help the accuracy of prediction | Energy Detective 2020 [111] |

### 4.3. Multiple Type of Buildings Load Forecasting

In [112], H. Shi et al. in 2018 introduced a pooling-based deep RNN architecture (PDRNN), boosted by LSTM units, for short-term household load forecasting. In the proposed PDRNN, the authors combined a DRNN with a new profile pooling technique that utilizes neighboring household data to address overfitting and insufficient data in terms of volume, diversity, etc. There were two stages in the proposed methodology: load profile pooling and load forecasting through the DRNN. The data used for model training and testing were obtained from the Commission for Energy Regulation (CER) in Ireland [113] and were collected from smart metering customer behavior trials (CBTs), covering a one-and-a-half-year period (July 2009 to December 2010). The proposed method was compared to other state-of-the-art forecasting methods (ARIMA, RNN, SVR, and DRNN models) and was evaluated by the RMSE, NRMSE, and MAE metrics. The researchers concluded that PDRNN outperformed the rest of the models, achieving better accuracy and successfully addressing overfitting issues.
In the same year, Aowabin Rahman et al. [114] proposed a methodology focused on medium- to long-term energy load forecasting, examining two LSTM-based (S2S) architecture models with six layers. The contributions of this work were: (1) energy load consumption forecasting for a time period ranging from a few months up to a year (medium to long term); (2) quantification of the performance of the proposed models on various consumption profiles, for load forecasting in commercial buildings and for the aggregated load at the small community scale; and (3) the development of an imputation scheme for missing historical consumption values using deep RNN models. Regarding the dataset, the authors followed different protocols to collect useful data. (1) A Public Safety Building at Salt Lake (PSB, Utah, USA): the dataset obtained from the PSB was at one-hour resolution for a time frame of 448 days (one year, two months, and three weeks), covering the period from the 18th of May 2015 until the 8th of August 2016. The proposed architectures were tested on several load profiles with a combination of variables (weather, day, month, hour of the day, etc.); the first year of the dataset was used for training and the remainder (approximately 83 days) for testing. (2) A number of residential buildings (combinations of at most 30) in Austin (Texas, USA): this dataset was acquired from the Pecan Street Inc. Dataport web portal [72], at one-hour resolution for an approximately two-year period from January 2015 to December 2016. It included data for 30 individual residential buildings, and the load consumption forecast was aggregated; the first year of the dataset was used for training and the remaining time for testing. The experiments revealed that the prediction accuracy of both models was limited and highly affected by the weather. Moreover, if the training data differ greatly from the testing and future weather data, a model that produces sufficient power load consumption predictions for a specific building cannot be applied successfully to a different building, and if major changes occur in that building regarding occupancy, building structure, consumer behavior, or the installed appliances/equipment, the same model will show decreased accuracy. According to the authors' findings, both proposed models performed better than a three-layer MLP model in commercial building energy load forecasting, but worse over a one-year forecasting period for the aggregated load of the residential buildings, with the MLP model performing even better as the number of residential buildings increased. As a final remark, the researchers concluded that deep RNN models have considerable potential in energy load forecasting over medium- to long-term horizons. It is worth mentioning that, besides the consumption history data, the authors considered several other variables (day of the week, month, time of the day, use frequency, etc.) and weather conditions acquired from the Mesowest web portal [73].
In [115], Y. Pang et al. in 2019 proposed the use of the Generative Adversarial Network (GAN) method to overcome the limited historical consumption data available for most buildings for training short-term load forecasting models. The researchers introduced the GAN-BE model, an LSTM-unit-based RNN (LSTM-RNN) deep learning architecture, and experimented with different variations of it, with or without an attention layer. The experiments used data collected from four different types of buildings: an office building, a hotel, a mall, and a comprehensive building. The variations of the proposed model were compared to four LSTM variations and evaluated by the MAPE, RMSE, and dynamic time warping (DTW) metrics. The proposed model, with and without the attention layer, outperformed the other models, displaying better accuracy and robustness.

In [116], Khan et al. in 2020 developed a hybrid CNN with an LSTM autoencoder architecture (CNN with LSTM-AE), consisting of ten layers, for short-term load forecasting in residential and commercial buildings. The load forecasting accuracy of the proposed model was compared (by the MAPE, RMSE, MSE, and MAE metrics) to other DL schemes (CNN, LSTM, CNN-LSTM, LSTM-AE). Two datasets were used in this research: (1) the UCI repository dataset [53] and (2) a custom dataset regarding a Korean commercial building, collected from a single sensor (instead of the four used in the UCI dataset), sampled in a 15-minute window, with a total of 960.000 records. For this experiment, the first 75% of the dataset (three years) was used for training and the remaining 25% (one year) for testing. All models were tested on both datasets at hourly and daily resolution. The authors extracted several conclusions from their research. When they tested the above DL models on the UCI dataset at hourly resolution, they discovered that some cross-combinations among them produced better results than each one individually; this inspired them to develop the proposed model, which outperformed all the tested DL models. They also experimented with the same dataset at daily resolution, and the proposed model again achieved the best forecasting accuracy. In the next step of their research, they tested their model on their own dataset at hourly and daily resolution: the model produced less accurate results than the LSTM and LSTM-AE models at hourly resolution but outperformed all other models at daily resolution. The general conclusion of their research was that the proposed hybrid model performed better in the experiments, especially at daily resolution, compared to other DL and more traditional building load forecasting methods.

A kCNN-LSTM deep learning framework was proposed in [117]. The model combined k-means clustering, for analyzing energy consumption patterns, with CNNs for feature extraction and an LSTM NN to deal with long-term dependencies. The method was tested with real-time energy data of a four-story academic building, containing more than 30 electricity-related features. The performance of the model was assessed in terms of MAE, MSE, MAPE, and RMSE for the considered year, weekdays, and weekends. The authors observed that the proposed model provided accurate energy demand forecasts, attributed to its ability to learn the spatiotemporal dependencies in the energy consumption data. The kCNN-LSTM was compared to k-means variants of state-of-the-art energy demand forecasting models, revealing better performance in terms of computational time and forecasting accuracy.
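The clustering stage of a kCNN-LSTM-style pipeline can be sketched as follows; the synthetic daily profiles, the cluster count, and the use of scikit-learn's KMeans are illustrative assumptions, and the CNN-LSTM forecaster that [117] trains downstream is only indicated in a comment.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)

# Toy data: 90 days x 24 hourly readings of building consumption.
days = np.vstack([
    np.sin(np.linspace(0, 2 * np.pi, 24)) + rng.normal(0, 0.1, 24) + offset
    for offset in rng.choice([0.0, 1.0, 2.0], size=90)
])

# Stage 1 of a kCNN-LSTM-style pipeline: cluster the daily load patterns.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(days)
labels = kmeans.labels_

# Stage 2 (not shown): train a CNN-LSTM forecaster per cluster, or feed the
# cluster label as an extra input feature to a single model.
print(np.bincount(labels))
```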
In the same year, Lei et al. [118] developed an energy consumption prediction model based on rough set theory and a deep belief NN (DBN). The data were collected from 100 civil public buildings (office, commercial, tourist, science, education, etc.) for the rough set reduction and from a laboratory building to train and test the DL model. The public building data covered five months of data collection over a total of 20 inputs, while the laboratory building data comprised fewer than 20 energy consumption inputs, obtained over approximately a year, including building consumption and meteorological data. Short-term and medium-term predictions were included. The prediction results, in terms of MAPE and RMSPE, were compared to those of a back-propagation NN, an Elman NN, and a fuzzy NN, revealing higher accuracy in all cases. The authors concluded that rough set theory was able to eliminate unnecessary factors affecting building energy consumption, and that the DBN with a reduced number of inputs resulted in improved prediction accuracy.

In [119], Khan et al. introduced a hybrid model, DB-Net, incorporating a dilated CNN (DCNN) with a bidirectional LSTM (BiLSTM). The proposed method used a moving-average filter for noise reduction and handled missing values via the substitution method. Two energy consumption datasets were used: the IHEPC dataset [53], consisting of four years of energy data (three years for training and one year for testing), and the Korean dataset of the advanced institutes of convergence technology (AICT) [120] for commercial buildings, consisting of three years of energy data (two years for training and one year for testing). The proposed DB-Net model was evaluated using the MAE, MSE, RMSE, and MAPE error metrics and was compared to various ML and DL models. It outperformed the referenced approaches, forecasting multistep power consumption, including hourly, daily, weekly, and monthly output, with higher accuracy. However, the method was limited by the fixed-size input data and the use of the invariance of the time-series data in a supervised sense. The authors suggested applying several alternative methods to boost the performance of the model, more challenging datasets, and more dynamic learning approaches as future work.
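As a rough illustration of the preprocessing described for DB-Net (moving-average noise filtering and substitution of missing values), the sketch below uses linear interpolation as the substitution rule and a three-sample window; both choices are assumptions for illustration, not the exact filter of [119].

```python
import numpy as np

def fill_missing(series: np.ndarray) -> np.ndarray:
    """Substitute NaNs by interpolating between the nearest valid samples."""
    filled = series.copy()
    gaps = np.where(np.isnan(filled))[0]
    valid = np.where(~np.isnan(filled))[0]
    filled[gaps] = np.interp(gaps, valid, filled[valid])
    return filled

def moving_average(series: np.ndarray, window: int = 5) -> np.ndarray:
    """Simple centred moving-average filter for noise reduction."""
    kernel = np.ones(window) / window
    return np.convolve(series, kernel, mode="same")

raw = np.array([3.1, np.nan, 3.4, 3.2, 10.0, 3.3, np.nan, 3.5, 3.4, 3.2])
clean = moving_average(fill_missing(raw), window=3)
print(np.round(clean, 2))
```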
Wang et al. [121] proposed a DCNN based on ResNet for hour-ahead building load forecasting. The main contribution of their work was the design of a branch that integrates the hourly temperature into the forecasting branch, and the learning capability of the model was enhanced by an innovative feature fusion. The genome project building dataset was adopted [122], including load and weather conditions of nonresidential buildings; the focus was on two laboratories and an office. The performance of five DL models was considered for comparison. Comparison results for single-step and 24-step building load forecasting revealed that the proposed DCNN could provide more accurate forecasting results, higher computational efficiency, and stronger generalization across different buildings.

In January 2022, Jogunola et al. [123] introduced an architecture, named CBLSTM-AE, combining a CNN and an autoencoder (AE) with a bidirectional LSTM (BLSTM). The effectiveness of the proposed architecture was tested with the well-known UCI dataset, IHEPC [53], while the Q-Energy [124] platform dataset was used to further evaluate the generalization ability of the proposed framework. From the Q-Energy dataset, a private part was used, including two small-to-medium enterprises (SME), a hospital, a university, and residences. The time resolution of both datasets was converted to 24 hours for short-term consumption prediction. The IHEPC data was further used to compare the proposed method with state-of-the-art frameworks. The proposed model achieved lower MSE, RMSE, and MAE and improved computational time compared to the other models: LSTM, GRU, BLSTM, attention LSTM, CNN-LSTM, and the electric energy consumption prediction model based on CNN and BLSTM (EECP-CBL). Results demonstrated good generalization ability and robustness, providing an effective prediction tool over various datasets.

In February 2022, the most recent research on energy consumption forecasting covered here was presented by Sujan Reddy et al. in [125]. The authors proposed a stacking ensemble model for short-term load consumption forecasting. ML and DL models (RF, LSTM, DNN, and evolutionary trees (EvTree)) were used as base models, and their prediction results were combined using Gradient Boosting (GBM) and Extreme Gradient Boosting (XGB). Experimental observations on the combinations revealed two different ensemble models with optimal forecasting abilities. The proposed models were tested on a standard dataset [126], available upon request, containing approximately 500000 load consumption values at periodic intervals over more than 9 years. Experimental results pointed out the XGB ensemble model as the optimal one, resulting in reduced training time and higher accuracy compared to the state of the art (EvTree, RF, LSTM, NN, ARMA, ARIMA, the ensemble model of [126], the feed-forward NN (FNN-H20) of [127], and the DNN-smoothing of [127]). Five regression measures were used: MRE, R-squared, MAE, RMSE, and SMAPE, and a reduction of 39% in RMSE was reported. A minimal sketch of the stacking mechanics is given after Table 3.

The main characteristics of the DL-based approaches of this section are summarized in Table 3.

Table 3: Characteristics of DL methods for the case of multiple type of buildings load forecasting.

| Ref. | Year | DL model | Time frame | Building type | Better results than | Advantages & disadvantages | Dataset |
|---|---|---|---|---|---|---|---|
| [112] | 2018 | PDRNN | Short-term | Residential, small and medium enterprises | ARIMA, RNN, SVR, DRNN | Prone to overfitting due to more parameters and fewer data | [113] |
| [114] | 2018 | LSTM 1, LSTM 2 | Long-term | Commercial building; various combinations of max 30 residential buildings | MLP | Missing-data imputation scheme; decrease of accuracy if weather changes, for other building structures, or when data is aggregated | [72, 73] |
| [115] | 2019 | GAN-BE (LSTM-RNN based) | Short-term | Office building, hotel, mall, comprehensive building | LSTM variations | Able to capture distinct load characteristics and choose accurate input variables | Custom |
| [116] | 2020 | CNN with LSTM-AE | Short-, medium-term | Residential and commercial buildings | CNN, LSTM, CNN-LSTM, LSTM-AE | Outlier detection and data normalization; spatial feature extraction for better accuracy | IHEPC [53] & custom |
| [117] | 2021 | kCNN-LSTM | Long-term | Academic building | ARIMA, DBN, MLP, LSTM, CNN, CNN-LSTM | Able to learn spatiotemporal dependencies in the energy consumption data | Custom |
| [118] | 2021 | DBN | Short-, medium-term | 100 civil public buildings; laboratory building | Back-propagation NN, Elman NN, fuzzy NN | Requires a large amount of training data; uses uncalibrated data and does not need feature extraction | Custom |
| [119] | 2021 | DB-Net | Short-, long-term | Residential and commercial buildings | SVR, CNN-LSTM, CNN-BiLSTM, DCNN-LSTM, DCNN-BiLSTM | Ability for multistep forecasting; noise reduction and handling of missing values; small inference time; suitable for real-time applications; limited by the fixed-size input data | IHEPC [53], [120] |
| [121] | 2021 | RCNN | Short-term | Two laboratories and an office | GRU, ResNet, LSTM, GCNN | Increased depth of the model; enhanced ability to learn nonlinear relations; able to integrate information on external factors; fast convergence | [122] |
| [123] | 2022 Jan. | CBLSTM-AE | Short-term | Commercial and residential buildings | LSTM, GRU, BLSTM, attention LSTM, CNN-LSTM, EECP-CBL | Generalizes well to varying data, building types, locations, weather, and load distributions | IHEPC [53] & private Q-Energy [124] data |
| [125] | 2022 Feb. | Ensemble with GBM; ensemble with XGB | Short-term | Various buildings | EvTree, RF, LSTM, NN, ARMA, ARIMA, ensemble of [126], FNN-H20 of [127], DNN-smoothing of [127] | Reduced training time; can be applied to any stationary time-series data | Custom |
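As noted above for [125], the stacking mechanics can be sketched as level-0 base learners whose predictions a boosting meta-learner combines. The scikit-learn regressors standing in for the RF/LSTM/DNN/EvTree base models and for the GBM/XGB meta-learner, as well as the synthetic data, are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.random((600, 8))                               # lagged load + calendar features
y = X[:, 0] * 3 + np.sin(6 * X[:, 1]) + 0.1 * rng.standard_normal(600)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Level-0 base learners (stand-ins for the base models of the stacking ensemble).
base_models = [RandomForestRegressor(n_estimators=50, random_state=0),
               Ridge(alpha=1.0)]
for m in base_models:
    m.fit(X_tr, y_tr)

# Level-1 meta-learner: gradient boosting over the base predictions
# (the role played by GBM/XGB in the ensemble).
meta_train = np.column_stack([m.predict(X_tr) for m in base_models])
meta_test = np.column_stack([m.predict(X_te) for m in base_models])
meta = GradientBoostingRegressor(random_state=0).fit(meta_train, y_tr)

pred = meta.predict(meta_test)
print(float(np.sqrt(np.mean((y_te - pred) ** 2))))     # ensemble RMSE
```

In a rigorous setup, the level-1 training features would be produced with out-of-fold predictions rather than in-sample predictions, to avoid information leakage into the meta-learner.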
RFLSTMNNARMAARIMAEnsemble [126]FNN-H20 [127]DNN-smoothing [127]Reduced training time, can be applied to any stationary time series dataCustom ## 4.1. Residential Building Load Forecasting The first DL-based methodology was proposed by Elena Mocanu et al. [47] in 2016 for load forecasting of a residential building. The examined DL models were: (1) Conditional Restricted Boltzmann Machine (CRBM) [48] and (2) Factored Conditional Restricted Boltzmann Machine (FCRBM) [49], with reduced extra layers. The performance of both models was compared to that of the three most used Machine learning methods of that time [50–52]: (1) Artificial Neural Network - Non-Linear Autoregressive Model (ANN-NAR), (2) Support Vector Machine (SVM), and (3) Recurrent Neural Network (RNN). The used dataset entitled “Individual Household Electric Power Consumption” (IHEPC) [53] was collected from a household at a one-minute sampling rate. It contained 2.075.259 samples in an almost four-year period (47 months) of time, collected between December 2006 and November 2010. The attributes, from the dataset, being used in the experiments were: Aggregated active power (household avg power excluding the devices in the following attributes), Energy Submetering 1 (Kitchen–oven, microwave, dishwasher, etc.), Energy Submetering 2 (Laundry room–washing machine, dryer, refrigerator, and a light bulb), and Energy Submetering 3 (water heater and air condition device). In all the implementations, the authors used the first three years of the dataset for model training and the fourth year for testing. Useful conclusions extracted from that research were the following: all five tested models produced comparable forecasting results, with the best performance attained in experiments predicting the aggregated energy consumption, rather than the other three submetering. It is also worth mentioning that in all the scenarios for submetering prediction the results were the most inaccurate, which could be attributed to the difficulty to predict user behavior. The proposed FCRBM deep learning model outperformed the other four prediction methods in most scenarios. All methods proved to be suitable for near real-time exploitation in power consumption prediction, but the researchers also concluded that when the prediction length was increasing, the accuracy of predictions was decreasing, reposting prediction errors half of that of the ANN. The authors also concluded that even though the use of the proposed deep learning methods was feasible and provided sufficient results, it could be further improved to achieve better accuracy in prediction by fine-tuning, the addition of extra information to the models, such as environmental temperature, time, and more [47].In the same year, Daniel L. Marino et al. [6] proposed another methodology using the LSTM DL model. More precisely, the authors examined three models: (1) Standard Long Short Term Memory (LSTM) [54], a Recurrent Neural Network (RNN) designed to store information for long time periods, that can successfully address the vanishing gradient issue of RNN; (2) LSTM-Based Sequence-to-Sequence (S2S) architecture [55], a more flexible than standard LSTM architecture consisting of two LSTM networks in encoder-decoder duties, which overcomes the naive mapping problem observed in standard LSTM; and (3) Factored Conditional Restricted Boltzmann Machine (FCRBM) method proposed in [47]. This work revealed that the standard LSTM failed in building load forecasting and a naïve mapping issue occurred. 
The proposed deep learning model LSTM Sequence-to-Sequence (S2S) network, based on standard LSTM network, overcame the naïve mapping issue and produced, comparable results to FCRBM model and to the other methods examined in [47], by using the same dataset [53]. A significant conclusion of this research was that when the prediction length increased, the accuracy of predictions decreased. The researchers also concluded that in order to have a better grasp of the effectiveness of those methods and improve their generalization, more experiments with different datasets and regularization methods had to be conducted. It is worth mentioning that the used dataset was the same as in [47].The following year, in 2017, Kasun Amarasinghe et al. [56] proposed a methodology based on the Convolutional Neural Network (CNN) model. The novelty of this work was the deployment of a grid topology for feeding the data to the CNN model, for the first time in this kind of problems. The authors compared the performance of the CNN model with that of: (1) Standard Long-Short-Term Memory (LSTM), (2) LSTM-Based Sequence-to-Sequence (S2S) Architecture network, (3) Factored Conditional Restricted Boltzmann Machine (FCRBM), (4) Artificial Neural Networks with Non-Linear Autoregressive Model (ANN-NAR), and (5) Support Vector Machine (SVM). This research extracted the following conclusions: all the tested deep learning architectures produced better results in energy load forecasting for a single residence than SVM, and similar or more accurate results than standard ANN. Moreover, the best accuracy has been achieved by LSTM (S2S). The results of the tested CNN architectures were similar, with slight variations, to each other, performed better than SVM and ANN, and even though they did not outperform the other deep learning methods, they managed to remain a promising architecture. A more general observation that puzzled the researchers was that the results in training were better than in testing. The researchers also concluded, based on their recent and previous work [6], that the tested deep learning methods [57, 58] produced promising results in energy load forecasting. They also suggested that weather data should be considered in future works regarding forecasting due to the direct relationship between the two and the fact that it had not been used to date elsewhere than in [57]. Finally, they came to the same conclusion as in their previous work that in order to report a better grasp of the effectiveness of their methods and to improve their generalization, more experiments with different datasets and regularization methods had to be conducted. Once again, the same dataset [53] was utilized.In [59], Lei et al. In 2018 introduced a short-term residential load forecasting model, named Residual Conventional Fusion Network (RCFNet). The proposed model consisted of three branches of residual convolutional units (proximity, tendency, and periodicity modeling), a fully connected NN (weekday or weekend modeling) and an RCN to perform load forecasting based on the fusion of the previous outputs. The dataset used in this research [60], covered a two-year time period (April of 2012 to March 2014) and contained half hour sampled data from smart meters installed in 25 households, in Victoria, Australia. For this research purpose, only 8 households that contained the most complete data series were used. Approximately, 91.7% (22 months) of the dataset was used for training and the remaining 8.3% (2 months) for testing. 
Six different variations of the proposed RCFNet model were compared to four baseline forecasting models: History Average (HA), Seasonal ARIMA (SARIMA), MLP and LSTM, and all models were evaluated by calculating the round mean-square-error (RMSE) metric. The researchers concluded that their model outperformed all other models and achieved the best accuracy, scalability, and adaptability.In [61], Kim et al. in 2019 introduced a deep learning model for building load forecasting based on the Autoencoder (AE) model. The main idea behind this approach was to devise a scheme capable of considering different features for different states/situations each time, to achieve more accurate and explanatory energy forecasts. The model consisted of two main components, based on LSTM architecture, a projector that gave the input data, the energy current demand that defined the state of the model and a predictor for the building load forecasting, based on that state. The user of the system had a key role and could affect the forecasting through parameter and condition choices. In this work, a well-known dataset [53] was used; 90% of the dataset was used for training and 10% for testing the model. The authors compared their model to traditional forecasting methods, ML methods and DL methods, and they concluded that the proposed model, evaluated by mean square error (MSE), mean absolute error (MAE), and mean relative estimation error (MRE) metrics, outperformed them in most cases. The authors also concluded that their models’ efficiency was enhanced due to the condition adjustment, giving each time the situation/state of the model. The main contribution of the proposed work was that the model could both predict future demand and define the current demand pattern as state.The same research team of Kim et al. [62] in the same year, 2019, proposed a hybrid model, where two DL architectures, a CNN most commonly used in image recognition, and an LSTM, most commonly used in speech recognition and natural language processing, were linearly combined in a CNN–LSTM model architecture. For the experiments, a popular dataset [53] was used. The proposed model was tested in minute–hour–day–week resolutions and it was discovered that as the resolution increased, accuracy improved. The CNN–LSTM model evaluated by MSE-RMSE-MAE-mean absolute percentage error (MAPE) metrics, as compared to several other traditional energy forecasting ML and DL models and produced the most accurate results. It should be noted that the proposed method introduced first a combination of CNN architectures with LSTM models for energy consumption prediction. The authors concluded that the proposed model could deal with noise drawbacks and displayed minimal loss of information. The authors also evaluated the attributes of the used dataset and the impact that each of them had on building load forecasting. Submetering 3 attributes, representing water heater and air conditioner consumption, had the highest impact followed by Global Active Power (GPA) attribute. Another observation of this research was on the lack of available relevant datasets and that future work should focus on data collection and the creation of an automated method for hyperparameter choosing.In [63], Le et al. in 2019 presented a DL model for building load forecasting, named EECP-CBL. The architecture of the model was a combination of Bi-LSTM and CNN networks. For the contacted experiments, the authors utilized the IHEPC dataset [53]. 
For each model, 60% of the data (first three years) was used for training and the rest 40% of the data (last two years) was used for testing. The EECP–CBL model was compared to several state-of-the-art models at the time, used in the industry or introduced by other researchers for energy load forecasting: Linear Regression, LSTM, and CNN-LSTM. After data optimization, the models were tested for real-time (1 minute), short (1 hour), medium (1 day), and long (1 week) term load prediction, and they were evaluated by MSE, RMSE, MAE, and MAPE metrics. The authors concluded that the proposed model outperformed all other models in terms of accuracy. In this research, the researchers also focused on the time consumed for training and prediction of each model and concluded that while the prediction horizon increased, the time required for each additional task decreased for each model, with the proposed model outperforming all other, reporting as a disadvantage a comparatively higher training time. The research team also concluded that EECP–CBL model achieved peak performance on long-term building load forecasting and could be utilized in intelligent power management systems.In [64], Mehdipour Pirbazari et al. in 2020, in order to explore the extent and the way several factors can affect short-term (1-hour) building load forecasting, performed several experiments on four data-driven prediction models: Support Vector Regression (SVR), Gradient Boosting Regression Trees (GBRT), Feed Forward Neural Networks (FFNNs), and LSTM. The authors focused mainly on the scalability of the models and the prediction accuracy if trained solely in historical consumption data. The dataset covered a four-year time period (November 2011 to February 2014) and contained smart meter hourly data from 5.567 individual households in London, UK [65]. After data normalization and parameter tuning, the dataset utilized in this research focused on the year 2013 (fewer missing values, etc.) regarding 75 households, 15 each out of five different consumer -type groups classified by Acorn [66]. The four models were evaluated by Cumulative Weighted Error (CWE), based on RMSE, MAE, MASE, and Daily Peak Mean Average Percentage Error (DpMAPE) metrics. The researchers concluded that among the four models, LSTM and FFNN presented better adaptability to consumption variations, and resulted in better accuracy, but LSTM had higher computation cost and was clearly outperformed by CBRT, which was significantly faster. According to the reported results, other factors that affected load forecasting, for all four models, were the variations in usage, average energy consumption, and forecasted season temperature. Also, changes in the number of features (input lags) or a total of tested households (size of training dataset) did not affect similarly all models. The developed models were expected to learn various load profiles aiming towards generalization abilities and increase of models’ robustness.In [67], Mlangeni et al. in 2020 introduced, for medium and long-term building load forecasting, Dense Neural Network (DNN), a deep learning architecture that consisted of multiple ANN layers. The dataset used for this research contained, approximately, 2 million records from households in the eThekwini metropolitan area that contained 38 attributes and covered a five-year period, from 2008 to 2013. After data optimization and preparation, only 709.021 samples remained, which contained 7 attributes. 
In [67], Mlangeni et al. in 2020 introduced, for medium- and long-term building load forecasting, a Dense Neural Network (DNN), a deep learning architecture consisting of multiple ANN layers. The dataset used for this research contained approximately 2 million records from households in the eThekwini metropolitan area, with 38 attributes, covering a five-year period from 2008 to 2013. After data optimization and preparation, only 709,021 samples remained, with 7 attributes. For model training, 75% of the data was used, and the remaining 25% for testing. In order to model load forecasting for the campus buildings of the University of KwaZulu-Natal, the authors assigned the household readings to rooms inside university buildings. The proposed architecture was compared to SVM and Multiple Regression (MR) models and was evaluated by RMSE and normalized RMSE (nRMSE) metrics. The authors concluded that the proposed model outperformed the rest of the models, presented good generalization ability, and could follow the consumption trends in the data. Dispersion of values in the data resulted in inaccurate estimations of large values, probably because these were outliers. The authors also concluded that their method could be further improved by implementing more ML architectures and then testing on more datasets against other models, or even by extending from building load forecasting to wider metropolitan areas.

In [68], Estebsari et al. in 2020, inspired by the high performance of CNN networks in image recognition, proposed a 2-dimensional CNN model for short-term (15-minute) building load forecasting. In order to encode the 1-dimensional time series into 2-dimensional images, the authors presented and experimented with well-known methods: recurrence plots (RP) [69], Gramian angular field (GAF), and Markov transition field (MTF) [70]. For the experimental results, the IHEPC dataset [53] was used; 80% of the data was used for training and the remaining 20% for testing the models. The performance of three different versions of the proposed CNN-2D model, one per image encoding method, was compared to SVM, ANN, and CNN-1D models. All architectures were evaluated by RMSE, MAPE, and MAE metrics. The researchers concluded that the CNN-2D-RP model outperformed all other models, displaying the best forecasting accuracy; however, due to the image-encoded data, it had a significantly higher computational complexity, making it inappropriate for real-time applications. The recurrence-plot encoding is sketched below.
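A recurrence plot, one of the encodings examined in [68], marks which pairs of time steps in a window have similar values, turning a 1-D series into a binary image a CNN can consume. The following is a minimal sketch of that idea; the window length and threshold are arbitrary assumptions.

```python
import numpy as np

def recurrence_plot(window, eps=0.1):
    """Binary recurrence plot: R[i, j] = 1 when |x_i - x_j| < eps."""
    x = np.asarray(window, dtype=float)
    dist = np.abs(x[:, None] - x[None, :])    # pairwise distances between time steps
    return (dist < eps).astype(np.uint8)       # 2-D image of shape (len(x), len(x))

# Example: a 96-step window of 15-minute load readings (synthetic).
rng = np.random.default_rng(1)
window = np.sin(np.linspace(0, 4 * np.pi, 96)) + 0.05 * rng.standard_normal(96)
image = recurrence_plot(window, eps=0.2)
print(image.shape)  # (96, 96), ready to be stacked as CNN input channels
```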
In [71], Wen et al. in 2020 presented a Deep RNN with Gated Recurrent Unit (DRNN-GRU) architecture, consisting of five layers, for short- to medium-term load forecasting in residential buildings. The proposed model's prediction accuracy was compared, using MAPE, RMSE, the Pearson correlation coefficient (PCC), and MAE metrics, to several DL (DRNN, DRNN-LSTM) and non-DL schemes (MLP, ARIMA, SVM, MLR). The dataset used in this research contained 15 months of hourly gathered consumption data and was obtained from the Pecan Street Inc. Dataport web portal [72], while weather data were obtained from [73]. For the experimental evaluation of the method, 20 individual residential buildings were selected from the dataset; the first year of the dataset (80%) was used for training and the remaining three months (20%) for testing. The load demand was calculated for the aggregated load of a group of ten individual residential buildings. The researchers extracted several conclusions from their work. The proposed model achieved a lower error rate compared to the other tested methods, and almost 5% lower than the LSTM variation of the DRNN. The researchers also stated that the DRNN-GRU model achieved higher accuracy than the rest of the models, both for the aggregated load of 10 residential buildings and for the individual load of residences. There were, however, some issues to be taken into consideration regarding the use of the proposed scheme for building load forecasting. The weather attributes, based on historic data, could affect the load forecasting accuracy, since the weather could not be predicted with high certainty. In addition, the aggregated load forecasting accuracy was higher than that of the individual residence load, since the effect of uncertain human behavior decreased as the total number of residences increased.

In 2021, Jin et al. [74] developed an attention-based encoder-decoder network based on a gated recurrent unit (GRU) NN with Bayesian optimization for short-term power forecasting. The contributions of the proposed method were the incorporation of a temporal attention mechanism able to adjust the nonlinear and dynamic adaptability of the network, and the automatic tuning of the hyperparameters of the encoder-decoder model, resulting in improved prediction performance. The network was tested on 24-hour load forecasting with data acquired from the American Electric Power (AEP) [75]. The dataset included 26,280 records from 2017 to 2020, with a sampling frequency of one hour; 70% of the data was used for training, 10% for validation, and 20% for testing. The model was also tested on the load prediction of four special days: Spring Equinox, Easter, Halloween, and Christmas. The proposed method demonstrated high performance and stability compared to nine other models, considering various indicators of accuracy (RMSE, MAE, Pearson correlation coefficient (R), NRMSE, and symmetric mean absolute percentage error (SMAPE)). The proposed model outperformed all nine models in all cases.

In [15], a hybrid DL model was proposed for household-level energy forecasting in smart buildings. The model was based on stacking fully connected layers and unidirectional LSTMs on top of bidirectional LSTMs; a minimal sketch of this stacking pattern follows this paragraph. The proposed model could learn exceedingly nonlinear and convoluted patterns and correlations in the data that could not be captured by the classical, up-to-date unidirectional architectures. The accuracy of the model was evaluated on two datasets through score metrics, in comparison with existing relevant state-of-the-art approaches. The first dataset included temperature and humidity in different rooms, appliance energy use, light fixture energy use, weather data, outdoor temperature and relative humidity, atmospheric pressure, wind speed, visibility, and dewpoint temperature [76]. The second dataset was the well-known IHEPC set of the University of California, Irvine (UCI) Machine Learning repository [53]. The performance comparison indicated the proposed model as the one with the highest accuracy, evaluated with RMSE, MAPE, and MAE, even in the case of multistep-ahead forecasting. The proposed method could easily be extended to long-term forecasting. Future work could focus on additional household occupancy data and on speeding up the training time of the model in order to facilitate its real-time application.
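To make the stacking idea concrete, the sketch below stacks a bidirectional LSTM under a unidirectional LSTM and fully connected layers, in the spirit of the hybrid of [15]; layer sizes and the input window are arbitrary assumptions rather than the published configuration.

```python
from tensorflow.keras import layers, models

WINDOW, N_FEATURES = 48, 1  # hypothetical: 48 past readings of one load variable

model = models.Sequential([
    # Bidirectional layer reads the window forwards and backwards.
    layers.Bidirectional(layers.LSTM(64, return_sequences=True),
                         input_shape=(WINDOW, N_FEATURES)),
    # Unidirectional LSTM stacked on top of the bidirectional outputs.
    layers.LSTM(32),
    # Fully connected head producing the next-step forecast.
    layers.Dense(16, activation="relu"),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
```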
In the same year, Shirzadi et al. [13] developed and compared ML (SVM, RF) and DL models (nonlinear autoregressive exogenous NN (NARX) and recurrent NN (RNN-LSTM)) for predicting electrical load demand. Ten years of historical data for Bruce County in Canada were used [77], regarding hourly electricity consumption reported by the Independent Electricity System Operator (IESO), fed with temperature and wind speed information [78] recorded from 2010 to 2019; nine years of data were considered for training and one year for testing. Results revealed that the DL models could predict the load demand more accurately, in terms of MAPE and R-squared metrics, for both peak and off-peak values. The windowing size of the analysis period was reported as a limitation of the method, significantly affecting the computation time.

Ozer et al. in 2021 [79] proposed a cross-correlation (XCORR)-based transfer learning approach on LSTM. The proposed model was location-independent, and global features were added to the load forecasting. Moreover, only one month of original data was considered. More specifically, the training data were obtained from the Dataport website [72], while the data of the building for which the load demand was estimated were collected from an academic building for one month. The RMSE, MAE, and MAPE evaluation metrics were calculated. The performance of the proposed model was not compared to different models; instead, the effect of transfer learning on LSTM was emphasized. The method resulted in accurate prediction results, paving the way for energy forecasting based on limited data.

More recently, in January 2022, Olu-Ajayi et al. [80] presented several techniques for predicting annual building energy consumption utilizing a large dataset of residential buildings: ANN, GB, DNN, Random Forest (RF), Stacking, kNN, SVM, Decision Tree (DT), and Linear Regression (LR) were considered. The dataset included building information retrieved from the Ministry of Housing, Communities and Local Government (MHCLG) repository [81] and meteorological data from the Meteostat repository [82]. In addition to forecasting, the effect of building clusters on model performance was examined. The main novelty of that work was the introduction of key building design features as inputs, enabling designers to forecast the average annual energy consumption at the early stages of development. The effects of both the building clusters (through the selected features) and the data size on the performance of the model were also investigated. Results indicated DNN as the most efficient model in terms of R-squared, MAE, RMSE, and MSE.

In the same month of 2022, in [83], Yan et al. proposed a bidirectional nested LSTM (MC-BiNLSTM) model. The model was combined with the discrete stationary wavelet transform (SWT) towards more accurate energy consumption forecasting. The integrated approach of the proposed method enabled enhanced precision due to the processing of multiple subsignals. Moreover, the use of the SWT was able to eliminate signal noise through signal decomposition. The UK-DALE [84] dataset was used for the evaluation of the model by calculating MAE, RMSE, MAPE, and R-squared. The proposed method was compared to cutting-edge algorithms of the literature, such as AVR, MLP, LSTM, GRU, and seven hybrid DL models (an ensemble model combining LSTM and SWT, an ensemble model combining nested LSTM (NLSTM) and SWT, an ensemble model combining bidirectional LSTM (BLSTM) and SWT, an ensemble model combining LSTM and empirical mode decomposition (EMD), an ensemble model combining LSTM and variational mode decomposition (VMD), an ensemble model combining LSTM and empirical wavelet transform (EWT), and a multichannel framework combining LSTM and CNN (MC-CNN-LSTM)). The proposed model achieved a reduction of MAPE to less than 8% in most of the cases. The method was developed on the edge of a centralized cloud system that integrated the edge models and could provide a universal IoT energy consumption prediction to multiple households.
The method was limited by the difficulty of integrating multiple models for different household consumption patterns, raising data privacy issues.

In [85], a DL model based on LSTM was implemented. The model consisted of two encoders, a decoder, and an explainer. The Kullback-Leibler divergence was the selected loss function, which introduced the long- and short-term dependencies in the latent space created by the second encoder. The experiments used the IHEPC dataset [53]. The first ten months of 2010 were used for training and the remaining two months for testing. The performance of the model was examined through three evaluation metrics: MSE, MAE, and MRE. Results were compared to conventional ML models such as LR, DT, and RF, and to DL models such as LSTM, stacked LSTM, the autoencoder proposed by Li [86], the state-explainable autoencoder (SAE) [61], and the hybrid autoencoder (HAE) proposed by Kim and Cho [87]. The proposed model performed similarly to the state-of-the-art methods, while additionally providing an explanation for the prediction results. Temporal information was considered, paving the way for explanations based not only on temporal but also on spatial characteristics.

In January 2022, Huang et al. [88] proposed a novel NN based on CNN-attention-bidirectional LSTM (BiLSTM) for residential energy consumption prediction. An attention mechanism was applied to assign different weights to the neurons' outputs so as to strengthen the impact of important information. The proposed method was evaluated on the IHEPC [53] household electricity consumption data. Moreover, different input timestamp lengths of 10, 60, and 120 minutes were selected to validate the performance of the model. The RMSE, MAE, and MAPE evaluation metrics were calculated for the proposed model and for traditional ML and DL methods for time-series prediction, such as SVR, LSTM, GRU, and CNN-LSTM, for comparison. Results indicated the proposed method as the one with the highest forecasting accuracy, resulting in the lowest average MAPE. Moreover, the proposed model could avoid the influence of a long input-sequence time step and was able to extract information from the features that most affect the energy forecast. The authors suggested the consideration of weather factors [89] and electricity price policy data as supplements in their future work.
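The attention idea in [88], weighting the time steps of the recurrent output before the final prediction, can be sketched as follows; this is an illustrative additive-attention variant with arbitrary sizes, not the authors' exact network.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn_attention_bilstm(window=60, n_features=1):
    inputs = layers.Input(shape=(window, n_features))
    x = layers.Conv1D(64, kernel_size=3, padding="same", activation="relu")(inputs)
    x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
    # Score every time step, normalize the scores, and build a weighted context vector.
    scores = layers.Dense(1, activation="tanh")(x)             # (batch, window, 1)
    weights = layers.Softmax(axis=1)(scores)                    # attention weights over time
    context = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([x, weights])
    outputs = layers.Dense(1)(context)                          # next-step consumption
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")
    return model

model = build_cnn_attention_bilstm()
```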
The main characteristics of all the aforementioned DL-based approaches are summarized in Table 1. Throughout this review, comparative performance against state-of-the-art methods is reported instead of a numerical performance figure for each method, since different evaluation metrics are calculated in each referenced work (root mean squared error (RMSE), correlation coefficient R, p-value, mean absolute error (MAE), mean relative estimation error (MRE), etc.) and different datasets and time frames are selected, so the results are not directly comparable.

Table 1. Characteristics of DL methods for the case of residential building load forecasting.

| Paper ref. | Pub. year | Deep learning model | Time frame | Building type | Better results than | Advantages & disadvantages | Dataset |
|---|---|---|---|---|---|---|---|
| [47] | 2016 | CRBM; FCRBM | Short-, medium-, long-term | Residential building | ANN, SVM, RNN (FCRBM also better than CRBM) | Suitable for near real-time exploitation; needs fine-tuning and extra information to the models | IHEPC [53] |
| [6] | 2016 | LSTM; LSTM (S2S) | Short-term | Residential building | Comparable results to [47] | Naïve mapping issue; LSTM unable to forecast using one-minute resolution; S2S performed well in all cases; needs to be tested on different datasets | IHEPC [53] |
| [56] | 2017 | CNN | Short-term | Residential building | ANN, SVM; comparable to [6] | Results did not vary much across different architectures; needs to be tested with different datasets with weather data | IHEPC [53] |
| [59] | 2018 | RCFNet | Short-term | Residential building | HA, SARIMA, MLP, LSTM | Considers proximity, periodicity, tendency of consumption, and influence of external factors; good scalability; possibility to further optimize its architecture | [60] |
| [61] | 2019 | DL based on AE | Short-term | Residential building | LR, DT, RF, MLP, LSTM, stacked LSTM, AE model | Definition of current demand pattern as state; able to predict very complex power demand values with stable and high performance | IHEPC [53] |
| [62] | 2019 | CNN-LSTM | Short-, medium-, long-term | Residential building | LR, DT, RF, MLP, LSTM, GRU, Bi-LSTM, attention LSTM | High performance in high resolution; analysis of the household-appliance variables that influence the prediction | IHEPC [53] |
| [63] | 2019 | EECP-CBL | Short-, medium-, long-term | Residential building | LR, LSTM, CNN-LSTM | High training time; good results in all time frame settings | IHEPC [53] |
| [64] | 2020 | SVR, GBRT, FFNN, LSTM | Short-term | Residential building | SVR, GBRT, FFNN | Improved generalization ability; lower seasonal predictions due to load variations | [65] |
| [67] | 2020 | DNN | Medium-, long-term | Residential building | SVM, MR | Good generalization; unable to predict large values | N/A |
| [68] | 2020 | CNN-2D-RP | Short-term | Residential building | SVM, ANN, CNN-1D | Computationally complex; inappropriate for real-time applications | IHEPC [53] |
| [71] | 2020 | DRNN-GRU | Short-, medium-term | Various combinations of max 20 residential buildings | MLP, ARIMA, SVM, MLR, DRNN, DRNN-LSTM | High accuracy with limited input variables for aggregated and disaggregated load demand; good for filling missing data | [72, 73] |
| [74] | 2021 | Attention-based encoder-decoder (GRU) with Bayesian optimization | Short-term | Residential buildings | Dense, RNN, LSTM, GRU, LstmSeq, GruSeq, LstmSeqAtt, GruSeqAtt, BLstmSeqAtt | Temporal attention layer towards greater robustness; optimal prediction through optimization | AEP [75] |
| [15] | 2021 | Hybrid stacked bidirectional-unidirectional fully connected (HSBUFC) model architecture | Short-term | Residential buildings | LR, extreme learning machine (ELM), NN, LSTM, CNN-LSTM, ConvLSTM, bidirectional LSTM | Allows learning of exceedingly nonlinear and convoluted patterns and correlations in data; slow training time | IHEPC [53], [76] |
| [13] | 2021 | RNN-LSTM, NARX | Medium-term | Residential buildings | SVM, RF | Accurate prediction of peak and off-peak values; computationally complex due to windowing size | IESO [77, 78] |
| [79] | 2021 | LSTM | Short-term | Residential buildings | - | Model independent of location and introduction of global features; use of limited data | [72] & custom |
| [80] | 2022 Jan. | DNN | Medium-term | Residential buildings | ANN, GB, RF, stacking, kNN, SVM, DT, LR | Able to predict at the early design phase; not sensitive to a specific building type; size of data affects the performance | MHCLG [81, 82] |
| [83] | 2022 Jan. | SWT-MC-BiNLSTM | Short-term | Residential buildings | AVR, MLP, LSTM, GRU, 7 hybrid DL models | A centralized approach; difficulties in integrating multiple models for different energy patterns | UK-DALE [84] |
| [85] | 2022 Jan. | LSTM with two encoders, decoder, and explainer | Short-, long-term | Residential buildings | Similar results with LR, DT, RF, LSTM, stacked LSTM, autoencoder of [86], SAE of [61], HAE of [87] | Use of temporal information; explainable prediction results; not considering spatial characteristics | IHEPC [53] |
| [88] | 2022 Jan. | CNN-attention-BiLSTM | Short-term | Residential buildings | SVR, LSTM, GRU, CNN-LSTM | Avoidance of the influence of a long input-sequence time step | IHEPC [53] |
## 4.2. Commercial Building Load Forecasting

In 2017, Chengdong Li et al. [86] proposed a new DL model from the combination of Stacked Autoencoders (SAE) [90] and an Extreme Learning Machine (ELM) [91]. The role of the SAE was to extract features related to the building's power consumption, while the role of the ELM was accurate energy load forecasting. Only the pretraining of the SAE was needed, while the fine-tuning was established by the least-squares learning of the parameters in the last fully connected layer. The authors compared the performance of the proposed Extreme SAE model with: (1) a Back Propagation Neural Network (BPNN); (2) a Support Vector Regressor (SVR); (3) a Generalized Radial Basis Function Neural Network (GRBFNN), a generalization of the RBFNN; and (4) Multiple Linear Regression (MLR), a famous, often used regression and prediction statistical method. The dataset was collected from a retail building in Fremont (California, USA) at a 15-minute sampling rate [92]. The dataset contained 34,939 samples that were aggregated into 17,469 30-minute and 8,734 1-hour samples. The effectiveness of the examined methodologies was measured in terms of MAE, MRE, and RMSE, for the 30- and 60-minute time periods. The researchers concluded that the proposed approach presented the best performance in energy load consumption forecasting, especially with abnormal testing data reflecting uncertainties in building power consumption. The achieved accuracy from best to worst was: Extreme SAE > SVR > GRBFNN > BPNN > MLR. The authors also concluded that the proposed SAE and ELM combination was superior to the standard SAE, mainly due to the lack of need for fine-tuning of the entire network (iterative BP algorithm), which could speed up the learning process and contribute significantly to the generalization performance. The ELM sped up the training procedure, without iterations, and boosted the overall performance, due to its deeper architecture and improved learning strategies.
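The ELM head described in [86] trains only the output layer: hidden weights are random and fixed, and the output weights are obtained in closed form by least squares, which is why no iterative backpropagation is required. A minimal numpy sketch of a single-hidden-layer ELM regressor (illustrative sizes and synthetic data) is shown below.

```python
import numpy as np

class ELMRegressor:
    """Single-hidden-layer extreme learning machine: random hidden layer + least-squares output."""

    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_features = X.shape[1]
        self.W = self.rng.standard_normal((n_features, self.n_hidden))  # fixed random weights
        self.b = self.rng.standard_normal(self.n_hidden)                # fixed random biases
        H = np.tanh(X @ self.W + self.b)                                # hidden activations
        self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)               # closed-form output weights
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# Toy usage with synthetic features (standing in for SAE-extracted features).
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 10))
y = X[:, 0] * 2.0 + np.sin(X[:, 1]) + 0.1 * rng.standard_normal(500)
print(ELMRegressor().fit(X[:400], y[:400]).predict(X[400:]).shape)  # (100,)
```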
Widyaning Chandramitasari et al. [5] in 2018 proposed a model constructed from the combination of an LSTM network, used for time-series forecasting, and a Feed Forward Neural Network (FFNN), to increase the forecasting accuracy. The research focused on a time horizon of one day ahead with a 30-minute resolution, for a construction company in Japan. The proposed model was validated and compared against the standard LSTM and the Moving Average (MA) model, which were used by a power supply company. The effectiveness of the evaluated methodologies was measured by RMSE. The dataset covered a time period of approximately one year and four months (August 2016 to November 2017) with a 30-minute resolution. Additional time information considered in the experiments was the day, time, and season (low–middle–high). The authors concluded that separating the data into "weekdays" and "all days" gave more accurate results in energy load forecasting for weekdays. They also pointed out that the data analysis performed for forecasting should be tailored, each time, to the type of client (residential, public, commercial, industrial, etc.).

In the same year, Nichiforov et al. [93] experimented on RNN networks with LSTM layers, consisting of one sequence input layer, a layer of LSTM units with several different configurations regarding the number of hidden units (from 5 up to 125), a fully connected layer, and a regression output layer. They compared the results for two different nonresidential buildings from university campuses, one in Chicago and the other in Zurich. The datasets used in their experiments were obtained from BUDS [94] and contained hourly samples over a one-year period; after data optimization, they resulted in two datasets of approximately 8,670 data samples each. Results were promising, pointing out that the method could be used in load management algorithms with limited overhead for periodic adjustments and model retraining.

The following year, the same authors in [95] also experimented with the same dataset and the same RNN architectures, adding to their research one more building, located in New York. Useful conclusions extracted from both works were the following. The RNN architecture was a good candidate, with promising accuracy results for building load forecasting. The best performance, graded by RMSE, Coefficient of Variation of the RMSE (CV-RMSE), MAPE, and MSE metrics, was achieved when the LSTM layer contained 50 hidden units, while the worst accuracy was observed when it contained 125 hidden units, for all buildings. DL model testing in load forecasting has increased in the past few years due to the availability of datasets and relevant algorithms, better and cheaper hardware for testing and network modeling, and the joint efforts of industry and academic research teams leading to better results. Due to the complexity of the building energy forecasting problem (building architecture, materials, consumption patterns, weather conditions, etc.), experts' opinions in this domain could provide insights and guidance, along with further investigation and experimentation on a wide variation of models. The authors also suggested that on-site energy storage could tip the scale in favor of better energy management.

In 2019, Ljubisa Sehovac et al. [96] proposed the GRU (S2S) model [97], a simplified LSTM that maintains similar functionality. There are two main differences between the two models regarding their cells: (1) GRU (S2S) has a single all-purpose hidden state h instead of two separate states, memory and hidden, and (2) the input and forget gates are replaced with an update gate z. These modifications allowed the GRU (S2S) model to train and converge in less time than the LSTM (S2S) model, while maintaining a sufficient hidden-state dimension and enough gates to preserve long-term memory; a small sketch of such a GRU-based forecaster follows.
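A minimal GRU-based forecaster in Keras is sketched below to illustrate the cell swap; it is a plain sequence-to-one variant with arbitrary sizes, not the full sequence-to-sequence model of [96].

```python
from tensorflow.keras import layers, models

WINDOW, N_FEATURES = 288, 11  # hypothetical: 288 five-minute steps, 11 input features

model = models.Sequential([
    layers.GRU(64, input_shape=(WINDOW, N_FEATURES)),  # GRU cell: single hidden state, update/reset gates
    layers.Dense(1),                                    # forecast of the next usage value (kW)
])
model.compile(optimizer="adam", loss="mae")
```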
In this study, the authors experimented with all time frame categories for power consumption forecasting (short, medium, and long). The dataset used in the experiments was collected from a retail building at a 5-minute sampling rate. It contained 132,446 samples and covered a time period of one year and three months. There are 11 features in this dataset: Month, Day of Year, Day of Month, Weekday, Weekend, Holiday, Hour, Season, Temperature (°C), Humidity, and Usage (kW). The data were collected from "smart" sensors that are part of a "smart grid"; the first 80% was used for training and the remaining 20% for testing. The proposed method was compared to LSTM (S2S), RNN (S2S), and a Deep Neural Network, and their effectiveness was measured by the use of MAE and MAPE. The authors concluded that the GRU (S2S) and LSTM (S2S) models produced better accuracy in energy load consumption forecasting than the other two models. In addition, the GRU (S2S) model outperformed the LSTM (S2S) model and gave accurate predictions for all three cases. Finally, a significant conclusion, which verified the conclusions of relevant research [6, 47], was that as the prediction length increased, the accuracy of the predictions was expected to decrease.

Mengmeng Cai et al. [98] designed Gated CNN (GCNN) and Gated RNN (GRNN) models. In this research, they tested five different models in short-term forecasting (next-day forecasting) and compared them in terms of forecasting accuracy, ability to generalize, robustness, and computational efficiency. The models they tested were: (1) GCNN1, a multistep recursive model that made one-hour predictions and applied them 24 times for a day-ahead prediction; (2) GRNN1, the same as the previous but with an RNN model; (3) GCNN24, a multistep, direct procedure that predicted the whole 24 hours at once; (4) GRNN24, the same as the previous but with an RNN model; and (5) SARIMAX, a non-DL method commonly used for time-series problems. The authors applied the five models to three different nonresidential buildings: Building A (Alexandria, VA, approx. 30,000 sq ft, academic, dataset obtained from [99]), Building B (Shirley, NY, approx. 80,000 sq ft, school, dataset obtained from [100]), and Building C (Uxbridge, MA, approx. 55,000 sq ft, grocery store, dataset obtained from [100]). The datasets used in their experiments were one-hour samples collected over a one-year time period and contained meteorological data: temperature, humidity, air pressure, and wind speed. After data preprocessing (cleaning, segmentation, formatting, normalization, etc.) to keep only the weekday samples, the researchers divided the remaining data into 90% training data, 5% validation data, and 5% testing data. Several useful conclusions were extracted. The building size, occupancy, and peak load mattered significantly in the results of GCNN1 and GRNN1, improving the accuracy of load prediction: as the number of people in the building rises, the uncertainty caused by each individual's behavior is averaged out, resulting in a more accurate prediction. Among GCNN1, GRNN1, and SARIMAX, the best performance was achieved by GCNN1, while GRNN1 was slightly poorer and SARIMAX was by far the worst. In another experiment, GCNN24 outperformed GRNN24 and produced better results in accuracy (22.6% fewer errors compared to SARIMAX) and computational efficiency (8% faster compared to SARIMAX) than GCNN1, GRNN1, and SARIMAX, establishing the GCNN24 model as the most suitable, among the five, for short-term (day-ahead) building load forecasting. As a more general conclusion, the researchers stated that DL methods fitted load forecasting better than previously used methods. The difference between the recursive and the direct multistep strategies compared in this work is sketched below.
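The two multistep strategies can be contrasted with a short sketch: a recursive forecaster feeds each one-step prediction back into its own input window 24 times, while a direct forecaster maps the window to all 24 outputs at once. The `one_step_model` and `direct_model` objects are assumed to expose a scikit-learn-style `predict`; everything here is illustrative rather than the GCNN/GRNN implementation of [98].

```python
import numpy as np

def recursive_day_ahead(one_step_model, last_window, horizon=24):
    """GCNN1/GRNN1 style: apply a one-step model repeatedly, feeding predictions back."""
    window = np.asarray(last_window, dtype=float).copy()
    forecasts = []
    for _ in range(horizon):
        next_value = float(one_step_model.predict(window.reshape(1, -1))[0])
        forecasts.append(next_value)
        window = np.roll(window, -1)
        window[-1] = next_value          # the prediction becomes the newest lag
    return np.array(forecasts)

def direct_day_ahead(direct_model, last_window):
    """GCNN24/GRNN24 style: one model call returns the full 24-hour profile."""
    return np.asarray(direct_model.predict(np.asarray(last_window).reshape(1, -1))[0])
```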
In [101], Yuan Gao et al. in 2019 experimented with long-term (one-year) building load forecasting and proposed an LSTM architecture with an additional self-attention network layer [102]. The proposed model emphasized the inner logical relations within the dataset during prediction. The attention layer was used to improve the ability of the model to convey and remember long-term information. The proposed model was compared to an LSTM model and a Dense Back Propagation Neural Network (DBPNN) and evaluated, regarding load forecasting accuracy, by MAPE. All three models were applied to a nonresidential office building in China. The dataset used in this research contained 12 attributes (weather, time, energy consumption, etc.) of daily measurements and covered a two-year time period. The main conclusion of this research was that the proposed method was able to address the issue of long-term memory and conveyed information better than the other two architectures, outperforming the LSTM by 2.9% and the DBPNN by 6.5%.

Benedikt Heidrich et al. [103] in 2020 proposed a combination of standard energy load profiles and CNNs, creating a Profile Neural Network (PNN). The proposed architecture consisted of three different profile modules, standard load profile, trend, and colorful noise, together with the utilization of CNNs, which according to the authors had never been proposed before. In this scheme, CNNs were used as data encoders for the second module (trend encoder) and the third module (external and historical data), in the prediction network of the third module (colorful noise calculation), and in the aggregation layer, where the results of the three modules were aggregated to perform load forecasting. The dataset used for the experiments was the result of merging two datasets: (a) one that contained historical load data gathered over a ten-year time period from two different campus buildings (one with weak and one with strong seasonal variation) and (b) weather data obtained from Deutscher Wetterdienst (DWD) [104]. The merged dataset covered an eight-year period with one-hour resolution samples; 75% of the data was used for training and the remaining 25% for testing the models. In order to measure and better comprehend the performance of the PNN, the authors compared the results of four different variations of their model regarding time window size (PNN0, PNN1 month, PNN6 month, and PNN12 month) to four state-of-the-art building load forecasting methods from the literature (RCFNet, CNN, LSTM, and stacked LSTM) and three naïve forecasting models (periodic persistence, profile forecast, and linear regression). All models were evaluated by RMSE and MASE metrics and tested in short-term (one day) and medium-term (one week) building load forecasting. All the PNN models, besides PNN0, outperformed the rest of the tested models, and among them, PNN1 achieved the best performance for both time horizons and both types of buildings. Regarding the training time, PNN models required the least time for both types of buildings for short-term forecasting but were outperformed by the CNN in medium-term forecasting. According to the authors, the excess time needed, compared to the fastest model, offered much better accuracy and was thus an acceptable trade-off. The authors also concluded that the proposed model was flexible, due to the ability to change modules and encoders according to the case in order to achieve better results, and could also be used on a higher scale than a single building. The profile-trend-noise decomposition underlying this idea is illustrated below.
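The separation of a load series into a recurring profile, a slow trend, and a residual ("colorful noise") component, which the PNN modules of [103] build on, can be approximated with a few lines of pandas; the weekly profile and 30-day rolling trend used here are simplifying assumptions for illustration only.

```python
import numpy as np
import pandas as pd

# Synthetic hourly load with a daily pattern and a slow upward trend.
idx = pd.date_range("2020-01-01", periods=24 * 7 * 20, freq="h")
rng = np.random.default_rng(2)
load = pd.Series(
    50 + 10 * np.sin(2 * np.pi * idx.hour / 24) + 0.01 * np.arange(len(idx))
    + rng.normal(0, 2, len(idx)),
    index=idx,
)

# 1) Standard profile: mean load for each (weekday, hour) slot.
profile = load.groupby([load.index.dayofweek, load.index.hour]).transform("mean")
# 2) Trend: 30-day centered rolling mean of what the profile does not explain.
trend = (load - profile).rolling(window=24 * 30, center=True, min_periods=1).mean()
# 3) Residual ("colorful noise"): whatever remains after profile and trend.
residual = load - profile - trend
print(residual.std())
```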
In [105], Sun et al. in 2020 introduced a novel deep learning architecture that combined input feature selection, through the MRMR (Maximal Relevance Minimal Redundancy) criterion based on Pearson's correlation coefficient, with an LSTM-RNN architecture. The dataset used for the short-term forecasting experiments covered one year of historic load data (2017) for three different types of buildings (office, hotel, and shopping mall), obtained from the Shanghai Power Department, while the weather-related data were collected from a local weather forecast website. In order to establish a baseline and prove the proposed model's efficiency, the researchers conducted several experiments in which MRMR-based LSTM-RNN model variations competed against ARIMA, BPNN, and BPNN-SD forecasting model variations, evaluated by RMSE and MAPE metrics. According to the results, the proposed model, and more specifically its two-time-step variation, outperformed all other models and provided the most accurate load forecasting results. The authors concluded that, due to the complexity of the building energy load prediction task, the right selection of input features played a key role in the procedure and, in combination with a hybrid prediction model, could produce more accurate results.

In [106], Gopal Chitalia et al. in 2020 presented their findings regarding deep learning architectures in short-term load forecasting, after experimenting on nine different DL models: an encoder-decoder scheme, LSTM, LSTM with attention, convolutional LSTM, CNN-LSTM, BiLSTM, BiLSTM with an attention mechanism, convolutional BiLSTM, and CNN-BiLSTM. The main idea was that RNN networks with an attention layer could produce more robust and accurate results. All the above models were tested on five different types of buildings on two different continents, Asia and North America. Four out of the five datasets used in this research can be found in [100, 107, 108], while the weather data were collected from [109]. The authors investigated short-term building load forecasting from several aspects, regarding feature selection, data optimization, hyperparameter fine-tuning, learning-based clustering, and minimum dataset volume, with acceptable accuracy results. All DL architectures were evaluated by RMSE, MAPE, CV, and Root-Mean-Square Logarithmic Error (RMSLE), providing a fair assessment of each building's load forecasting results. The researchers concluded that the implementation of the attention layer in RNN networks increased the load forecasting accuracy of the model and could perform adequately across a variety of buildings, loads, locations, and weather conditions.

In January 2022, Xiao et al. [110] proposed an LSTM model to predict day-ahead energy consumption. Two data smoothing methods, Gaussian kernel density estimation and the Savitzky-Golay filter, were selected and compared. The data used in that work came from the Energy Detective 2020 dataset [111], including hourly consumption data from 20 office buildings and weather data, from 2015 to 2017. The authors concluded that data smoothing could help enhance the prediction accuracy in terms of CV-RMSE; however, when raw data were taken as the reference, the prediction accuracy decreased dramatically. A larger training set was recommended in the conclusions, if the computing cost was acceptable.
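Smoothing of the kind compared in [110] is a one-liner with SciPy; the sketch below applies a Savitzky-Golay filter to an hourly load series, with window length and polynomial order chosen arbitrarily for illustration.

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(3)
hourly_load = 100 + 20 * np.sin(np.arange(24 * 60) * 2 * np.pi / 24) + rng.normal(0, 5, 24 * 60)

# Savitzky-Golay: fit a low-order polynomial in a sliding window (25 h here) and evaluate it.
smoothed = savgol_filter(hourly_load, window_length=25, polyorder=3)

# The smoothed series would then feed the forecasting model instead of the raw one.
print(np.std(hourly_load - smoothed))
```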
The main characteristics of the DL-based approaches of this section are summarized in Table 2.

Table 2. Characteristics of DL methods for the case of commercial building load forecasting.

| Paper ref. | Pub. year | Deep learning model | Time frame | Building type | Better results than | Advantages & disadvantages | Dataset |
|---|---|---|---|---|---|---|---|
| [86] | 2017 | Extreme SAE | Short-term | Retail building | SVR, GRBFNN, BPNN, MLR | Quicker learning speed and stronger generalization performance; not considering periodicity of energy consumption | [92] |
| [5] | 2018 | LSTM-FFNN | Short-term | Construction company | LSTM, MA | Good energy consumption forecast of the next day for each 30 minutes | Data from a small power company in Japan |
| [93] | 2018 | RNN + LSTM layers | Medium-term | Commercial buildings | Various implementations of the reviewed model | A replicable case study; suitable for online optimization | [94] |
| [95] | 2019 | RNN + LSTM layers | Medium-term | Commercial buildings | Various implementations of the reviewed model | Tendency to overfit the input data and poor performance on the testing samples | [94] |
| [96] | 2019 | GRU (S2S) | Short-, medium-, long-term | Commercial buildings | LSTM (S2S), RNN (S2S), DNN | Accuracy decreased as the prediction length increased | Custom |
| [98] | 2019 | GCNN, GRNN | Short-term | Commercial buildings | Various implementations of the reviewed models and SARIMAX | Reduced forecasting error; able to handle high-level uncertainties; high computational efficiency | [99, 100] |
| [101] | 2019 | LSTM + self-attention layer | Long-term | Commercial buildings | LSTM, DBPNN | Resolved the problem of long-term memory dependencies | Custom |
| [103] | 2020 | PNN | Medium-term | Commercial buildings | LR, RCFNet, CNN, LSTM, stacked LSTM | Inserted statistical information about periodicities in the load time series | [104] |
| [105] | 2020 | LSTM-RNN + MRMR criterion | Short-term | Commercial buildings | ARIMA, BPNN, BPNN-SD | Feature variable selection to capture distinct load characteristics | Shanghai Power Department |
| [106] | 2020 | LSTM + attention | Short-term | Commercial buildings | Encoder-decoder, LSTM, convolutional LSTM, CNN-LSTM, BiLSTM, BiLSTM with attention, convolutional BiLSTM, CNN-BiLSTM | Robust against different building types, locations, weather, and load uncertainties | [100, 107, 108] |
| [110] | 2022 Jan. | LSTM + data smoothing | Short-term | Office buildings | - | Prediction would decline for certain periods; data smoothing could help the accuracy of prediction | Energy Detective 2020 [111] |

## 4.3. Multiple Types of Buildings Load Forecasting

In [112], H. Shi et al. in 2018 introduced, for short-term household load forecasting, a pooling-based deep RNN architecture (PDRNN) boosted by LSTM units. In the proposed PDRNN, the authors combined a DRNN with a new profile pooling technique, utilizing neighboring household data to address overfitting and insufficient data in terms of volume, diversity, etc. There were two stages in the proposed methodology: load profile pooling and load forecasting through the DRNN. The data used for model training and testing were obtained from the Commission for Energy Regulation (CER) in Ireland [113] and were collected from smart metering customer behavior trials (CBTs). The data covered a one-and-a-half-year time period (July 2009 to December 2010). The proposed method was compared to other state-of-the-art forecasting methods, namely ARIMA, RNN, SVR, and DRNN models, and was evaluated by RMSE, NRMSE, and MAE metrics. The researchers concluded that PDRNN outperformed the rest of the models, achieving better accuracy and successfully addressing overfitting issues.

In the same year, Aowabin Rahman et al. [114] proposed a methodology focused on medium- to long-term energy load forecasting. The authors examined two LSTM-based (S2S) architecture models with six layers.
The contributions of this work were: (1) energy load consumption forecasting for a time period ranging from a few months up to a year (medium to long term); (2) quantification of the performance of the proposed models on various consumption profiles, for load forecasting in commercial buildings and for the aggregated load at the small community scale; and (3) development of an imputation scheme for missing historical consumption values using deep learning RNN models. Regarding the datasets, the authors followed different protocols to collect useful data. (1) A Public Safety Building (PSB) in Salt Lake City (Utah, USA): the dataset obtained from the PSB was at one-hour resolution for a time frame of 448 days (one year, two months, and three weeks), covering the period from the 18th of May 2015 to the 8th of August 2016. The proposed architectures were tested on several load profiles with a combination of variables (weather, day, month, hour of the day, etc.). The first year of the dataset was used for training and the remainder (approximately 83 days) for testing. (2) A number (combinations of a maximum of 30) of residential buildings in Austin (Texas, USA): the dataset used for this part of the paper was acquired from the Pecan Street Inc. Dataport web portal [72], at one-hour resolution for an approximately two-year time period from January 2015 to December 2016. The dataset included data for 30 individual residential buildings, and the load consumption forecast was aggregated. The first year of the dataset was used for training and the remaining time for testing. The experiments revealed that the prediction accuracy, for both models, was limited and highly affected by the weather. Moreover, if the training data differed greatly from the testing and future weather data, then a model that produced sufficient power load consumption predictions for a specific building could not be applied successfully to a different building. In addition, if major changes occurred in the specific building regarding occupancy, building structure, consumer behavior, or the installed appliances/equipment, the same model would show decreased accuracy. According to the authors' findings, both proposed models performed better than a three-layer MLP model in commercial building energy load forecasting, but worse over a one-year forecasting period regarding the aggregated load of the residential buildings, with the MLP model performing even better as the total number of residential buildings increased. As a final remark, the researchers concluded that there is a lot of potential in the use of deep RNN models for energy load forecasting over medium- to long-term time horizons. It is worth mentioning that, besides the consumption history data, the authors considered several other variables (day of the week, month, time of the day, use frequency, etc.) and weather conditions acquired from the MesoWest web portal [73].

In [115], Y. Pang et al. in 2019 proposed the utilization of the Generative Adversarial Network (GAN) method in order to overcome the limited historical consumption data available for training short-term load forecasting models for most buildings. The researchers introduced the GAN-BE model, an LSTM unit-based RNN (LSTM-RNN) deep learning architecture, and experimented with different variations of it, with or without an attention layer. For the experiments, data collected from four different types of buildings were used: an office building, a hotel, a mall, and a comprehensive building.
The different variations of the proposed model were compared to four LSTM variations and evaluated by MAPE, RMSE, and Dynamic Time Warping (DTW) metrics. The proposed model, with and without the attention layer, outperformed the other models, displaying better accuracy and robustness.

In [116], Khan et al. in 2020 developed a hybrid CNN with an LSTM autoencoder architecture (CNN with LSTM-AE), consisting of ten layers, for short-term load forecasting in residential and commercial buildings. The load forecasting accuracy of the proposed model was compared (by MAPE, RMSE, MSE, and MAE metrics) to other DL schemes (CNN, LSTM, CNN-LSTM, LSTM-AE). Two datasets were used in this research: (1) the UCI repository dataset [53] and (2) a custom dataset regarding a Korean commercial building, from a single sensor instead of the four used in the UCI dataset, sampled in a 15-minute window, with a total amount of 960,000 records. For this experiment, the first 75% of the dataset (three years) was used for training and the remaining 25% (one year) for testing. All models were tested, on both datasets, at hourly and daily resolution. The authors extracted several conclusions from their research. When they tested the above DL models on the UCI dataset at hourly data resolution, they discovered that some cross combinations among them produced better results than each one of them individually. The latter inspired them to develop the proposed model, which outperformed all the above tested DL models. They also experimented with the same dataset at daily data resolution, and the proposed model again achieved the best forecasting accuracy. In the next step of their research, they tested their model on their own dataset at hourly and daily data resolution. Their model produced less accurate results than the LSTM and LSTM-AE models at hourly data resolution but outperformed all other models at daily data resolution. The general conclusion of their research was that the proposed hybrid model performed better during the experiments, especially at daily data resolution, compared to other DL and more traditional building load forecasting methods.

A kCNN-LSTM deep learning framework was proposed in [117]. The proposed model combined k-means clustering for analyzing energy consumption patterns, CNNs for feature extraction, and an LSTM-NN to deal with long-term dependencies. The method was tested with real-time energy data of a four-story academic building, containing more than 30 electricity-related features. The performance of the model was assessed in terms of MAE, MSE, MAPE, and RMSE for the considered year, weekdays, and weekends. The authors observed that the proposed model provided accurate energy demand forecasting, attributed to its ability to learn the spatiotemporal dependencies in the energy consumption data. The kCNN-LSTM was compared to k-means variants of state-of-the-art energy demand forecast models, revealing higher performance in terms of computational time and forecasting accuracy. The clustering step that precedes forecasting in such pipelines is sketched below.
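As an illustration of the clustering stage used by kCNN-LSTM-style pipelines [117], the sketch below groups daily load profiles with k-means before a separate forecaster would be trained per cluster; the number of clusters and the synthetic data are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
# 200 synthetic days of hourly consumption (24 values per day).
days = np.vstack([
    50 + 15 * np.sin(2 * np.pi * (np.arange(24) - shift) / 24) + rng.normal(0, 2, 24)
    for shift in rng.integers(0, 12, size=200)
])

# Group days into consumption-pattern clusters; one forecaster can then be fit per cluster.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(days)
labels = kmeans.labels_
print([int((labels == k).sum()) for k in range(3)])  # days per pattern cluster
```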
In the same year, Lei et al. [118] developed an energy consumption prediction model based on rough set theory and a deep belief NN (DBN). The data used were collected from 100 civil public buildings (office, commercial, tourist, science, education, etc.) for the rough set reduction and from a laboratory building to train and test the DL model. The public building data covered five months of data collection for a total of 20 inputs. The laboratory building data referred to fewer than 20 energy consumption inputs, obtained over approximately a year, including building consumption and meteorological data. Short-term and medium-term predictions were included. The prediction results, in terms of MAPE and RMSPE, were compared to those of a back-propagation NN, an Elman NN, and a fuzzy NN, revealing higher accuracy in all cases. The authors concluded that rough set theory was able to eliminate unnecessary factors affecting building energy consumption. The DBN with a reduced number of inputs resulted in improved prediction accuracy.

In [119], Khan et al. introduced a hybrid model, DB-Net, by incorporating a dilated CNN (DCNN) with a bidirectional LSTM (BiLSTM). The proposed method used a moving average filter for noise reduction and handled missing values via the substitution method. Two energy consumption datasets were used: the IHEPC dataset [53], consisting of four years of energy data (three years for training and one year for testing), and the Korean dataset of the Advanced Institutes of Convergence Technology (AICT) [120] for commercial buildings, consisting of three years of energy data (two years for training and one year for testing). The proposed DB-Net model was evaluated using MAE, MSE, RMSE, and MAPE error metrics and was compared to various ML and DL models. The proposed model outperformed the referenced approaches by forecasting multistep power consumption, including hourly, daily, weekly, and monthly output, with higher accuracy. However, the method was limited by the fixed-size input data and the use of invariant time-series data in a supervised sense. The authors suggested applying several alternative methods to boost the performance of the model, more challenging datasets, and more dynamic learning approaches as their future work. The simple preprocessing steps mentioned here, noise filtering and missing-value substitution, are sketched below.
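A minimal pandas sketch of that preprocessing, substituting missing readings and smoothing with a moving average filter before windowing the series for a model, is shown below; the window size and fill strategy are assumptions rather than the exact choices of [119].

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
idx = pd.date_range("2021-01-01", periods=24 * 30, freq="h")
load = pd.Series(80 + 20 * np.sin(2 * np.pi * idx.hour / 24) + rng.normal(0, 4, len(idx)), index=idx)
load.iloc[rng.choice(len(load), size=20, replace=False)] = np.nan  # simulate missing readings

# 1) Missing-value substitution: interpolate gaps in time.
filled = load.interpolate(method="time")
# 2) Noise reduction: centered moving average over a 5-hour window.
smoothed = filled.rolling(window=5, center=True, min_periods=1).mean()

print(int(load.isna().sum()), int(smoothed.isna().sum()))  # 20 missing before, 0 after
```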
Wang et al. [121] proposed a DCNN based on ResNet for hour-ahead building load forecasting. The main contribution of their work was the design of a branch that integrated the hourly temperature into the forecasting branch. The learning capability of the model was enhanced by an innovative feature fusion. The Building Data Genome Project dataset was adopted [122], including load and weather conditions of nonresidential buildings; the focus was on two laboratories and an office. The performance of five DL models was considered for comparison. Comparison results for single-step and 24-step building load forecasting revealed that the proposed DCNN could provide more accurate forecasting results, higher computational efficiency, and stronger generalization for different buildings.

In January 2022, Jogunola et al. [123] introduced an architecture, named CBLSTM-AE, comprising a CNN and an autoencoder (AE) with a bidirectional LSTM (BLSTM). The effectiveness of the proposed architecture was tested with the well-known UCI dataset, IHEPC [53], and the Q-Energy [124] platform dataset was used to further evaluate the generalization ability of the proposed framework. From the Q-Energy dataset, a private part was used, including two small-to-medium enterprises (SMEs), a hospital, a university, and residences. The time resolution of both datasets was converted to 24 hours towards short-term consumption prediction. The IHEPC data were further used for comparison of the proposed method with state-of-the-art frameworks. The proposed model achieved lower MSE, RMSE, and MAE and improved computational time compared to the other models: LSTM, GRU, BLSTM, attention LSTM, CNN-LSTM, and the electric energy consumption prediction model based on CNN and BLSTM (EECP-CBL). Results demonstrated good generalization ability and robustness, providing an effective prediction tool over various datasets.

In February 2022, the most recent research on energy consumption forecasting covered by this review was presented by Sujan Reddy et al. in [125]. The authors proposed a stacking ensemble model for short-term load consumption forecasting. ML and DL models (RF, LSTM, DNN, evolutionary trees (EvTree)) were used as base models. Their prediction results were combined using Gradient Boosting (GBM) and Extreme Gradient Boosting (XGB). Experimental observations on the combinations revealed two different ensemble models with optimal forecasting abilities. The proposed models were tested on a standard dataset [126], available upon request, containing approximately 500,000 load consumption values at periodic intervals over more than 9 years. Experimental results pointed out the XGB ensemble model as the optimal one, resulting in reduced training time and higher accuracy compared to the state-of-the-art (EvTree, RF, LSTM, NN, ARMA, ARIMA, the ensemble model of [126], the feed-forward NN (FNN-H2O) of [127], and the DNN-smoothing of [127]). Five regression measures were used: MRE, R-squared, MAE, RMSE, and SMAPE. A reduction of 39% was reported in RMSE.

The main characteristics of the DL-based approaches of this section are summarized in Table 3.

Table 3. Characteristics of DL methods for the case of multiple types of buildings load forecasting.

| Paper ref. | Pub. year | Deep learning model | Time frame | Building type | Better results than | Advantages & disadvantages | Dataset |
|---|---|---|---|---|---|---|---|
| [112] | 2018 | PDRNN | Short-term | Residential, small and medium enterprises | ARIMA, RNN, SVR, DRNN | Prone to overfitting due to more parameters and fewer data | [113] |
| [114] | 2018 | LSTM 1, LSTM 2 | Long-term | Commercial building; various combinations of max 30 residential buildings | MLP | Missing data imputation scheme; decrease of accuracy if weather changes, for other building structures, or when data is aggregated | [72, 73] |
| [115] | 2019 | GAN-BE (LSTM-RNN based) | Short-term | Office building, hotel, mall, comprehensive building | LSTM variations | Able to capture distinct load characteristics and choose accurate input variables | Custom |
| [116] | 2020 | CNN with LSTM-AE | Short-, medium-term | Residential building; commercial buildings | CNN, LSTM, CNN-LSTM, LSTM-AE | Outlier detection and data normalization; spatial feature extraction for better accuracy | IHEPC [53] & custom |
| [117] | 2021 | kCNN-LSTM | Long-term | Academic building | ARIMA, DBN, MLP, LSTM, CNN, CNN-LSTM | Able to learn spatiotemporal dependencies in the energy consumption data | Custom |
| [118] | 2021 | DBN | Short-, medium-term | 100 civil public buildings; laboratory building | Back-propagation NN, Elman NN, fuzzy NN | Requires a large amount of training data; uses uncalibrated data and does not need feature extraction | Custom |
| [119] | 2021 | DB-Net | Short-, long-term | Residential building; commercial buildings | SVR, CNN-LSTM, CNN-BiLSTM, DCNN-LSTM, DCNN-BiLSTM | Ability for multistep forecasting; noise reduction and handling of missing values; small inference time; suitable for real-time applications; limited by the fixed-size input data | IHEPC [53, 120] |
| [121] | 2021 | RCNN | Short-term | Two laboratories and an office | GRU, ResNet, LSTM, GCNN | Increased depth of the model; enhanced ability to learn nonlinear relations; able to integrate information of external factors; fast convergence | [122] |
| [123] | 2022 Jan. | CBLSTM-AE | Short-term | Commercial buildings; residential buildings | LSTM, GRU, BLSTM, attention LSTM, CNN-LSTM, EECP-CBL | Generalizes well to varying data, building types, locations, weather, and load distributions | IHEPC [53] & private Q-Energy [124] data |
| [125] | 2022 Feb. | Ensemble with GBM; ensemble with XGB | Short-term | Various buildings | EvTree, RF, LSTM, NN, ARMA, ARIMA, ensemble [126], FNN-H2O [127], DNN-smoothing [127] | Reduced training time; can be applied to any stationary time series data | Custom |
## 5. Datasets

The dataset is the key element of all deep learning methods. In order to train a model to understand the data and produce useful results, the dataset should be selected carefully. The user has to weigh the options of choosing certain features of each dataset in accordance with the result that the model is expected to produce. For the problem of building load forecasting, we encounter in the existing literature a finite number of datasets being used by researchers, mostly acquired from the building under investigation. Data collection is labor-intensive and presupposes a metering infrastructure installed in the buildings for effective energy consumption monitoring. Moreover, historical data spanning several years are usually necessary. In most research papers, the datasets consist of consumption history data (thousands of samples covering a time period of over a year), focusing on the major power-consuming devices/appliances (kitchen, water heater, HVAC, etc.) and different load profiles. In some research papers, the authors considered the weather conditions, but the required data were not part of the same dataset as the consumption history data and had to be acquired from different sources. Some experiments, driven mostly by the results of each methodology, led the researchers to add weather conditions and to cast the time-series data into categories such as weekday, weekend, and hour of the day, achieving in that way more promising results. Global and local climate change, as well as urban overheating, can seriously affect the energy consumption of urban buildings, making weather datasets unreliable over the years.

So far, research has focused on testing and experimenting with different deep learning models, sometimes involving the same dataset, in order to better understand and conclude comparatively which model provides better results on the same data. It should be noted here that approximately 48.2% of the papers reviewed in this work that experiment on residential and multiple-building load forecasting use the same dataset. Towards this end, research efforts are focusing on load forecasting based on limited input variables, which would additionally lead to less computationally complex models appropriate for real-time applications. An interesting observation regarding the datasets is that they can be used efficiently only for the building they were acquired from. Any effort to adapt them to a different building will not produce the desired results. This limitation in generalization is a major drawback that needs to be addressed in future research in the field. Reliable forecasting models for varying data, building types, locations, weather, and load distributions need to be developed. The lack of detailed datasets for numerous buildings could be addressed by the rapid growth of the Internet of Things (IoT) and the growing capability of the research community to make use of and better comprehend Big Data. The evolution of the home/building and the grid to "smart home/building" and "smart grid", by applying a number of sensors and actuators (IoT), will provide researchers with a vast amount of data (volume), rich in features (variety), in almost real time (velocity), better described as Big Data.

## 6. Discussion
Building load forecasting is an emerging area of building performance simulation (BPS), characterized by technical complexity and major significance to a variety of stakeholders, since it supports future operational and energy efficiency improvements in existing buildings [128]. Deep learning models have entered the load forecasting field in recent years due to their ability to deal with big data and to achieve high forecasting accuracy. Reviewing the relevant literature regarding building load forecasting with deep learning methods, several interesting findings became apparent. To date, most DL models have been applied to residential buildings (47.5%). Residential buildings account for almost 70% of total energy consumption [129]. The increase in population and in floor area per person in urban cities has subsequently resulted in an increase in residential energy consumption. The latter motivated the research community to investigate further the energy load forecasting of residential buildings, so as to account for the spent energy and propose energy conservation measures and future green policies. Furthermore, most DL models were applied to a short-term forecasting horizon (55%), e.g., a day or an hour ahead. Short-term forecasting may lead to more accurate results, since a longer forecasting horizon would significantly increase the possibility of alterations of the input data, which are not known beforehand and can severely impact the forecasting accuracy. The most popular architecture in the literature was found to be the LSTM model. LSTM models provide a great number of parameters, e.g., learning rates, and input and output biases; thus, they do not require fine adjustments. Although the results of DL models appear promising, many challenges need to be addressed, mainly related to data availability and to improvements of the DL models.

The human factor is one of the most defining factors that add to the difficulty of the building load forecasting problem. On the small scale of a building with a number of offices or flats, or even on the scale of a single household, human behavior can challenge even the most efficient DL load forecasting methods. It is a problem that several researchers pointed out in their work and tried to handle by aggregating the load of several homes together before proceeding to forecast. The larger the scale of the studied structure, or the number of people working or living in it, the smaller the impact on forecasting, as individual human behavior averages into a more general behavior that is easier to predict.

It is also important to point out that the first DL models utilized in building load forecasting were plainer than later ones; as the research progressed, more complex schemes, DL model combinations, and hybrid models were encountered, producing more efficient and accurate results. In general, guidelines for developing and testing a DL model are missing from the literature. For example, trial and error was applied in many cases for tuning the hyperparameters, resulting in methodologies that cannot be reproduced easily. Moreover, the models balance high accuracy against training time and model complexity.
The more computationally complex the model, the more accurate it is reported to be in most of the referenced cases; however, this comes at the cost of increased training time, making such models inappropriate for real-time applications.

In general, building energy models need to be improved so as to represent the actual performance of the building in more detail. One solution lies in model calibration techniques, i.e., calibrating several inputs of existing building simulation programs. Calibration could significantly improve the performance of the energy models; however, simulation accuracy is determined by multiple parameters, referring to the measured building energy data inserted as calibration inputs. The collection of such detailed data may require extensive time and cost. Therefore, another challenge that researchers had to face, as already mentioned, is the lack of detailed datasets. Additionally, the absence of publicly available datasets obstructs the reproduction of results and comparative studies. In a great number of research papers, in order to explore the impact of different features and to enhance the prediction accuracy by producing more robust and generalized models, researchers had to combine different datasets or process the existing data in several different ways. Once the lack of datasets is properly addressed, the greater challenge in this field will be the development of a DL model, or the combination of existing DL models, that can be applied to several different types of buildings (office, residential, academic, etc.), use detailed real-time data, proceed automatically to self-adjustments, and produce accurate and applicable results for the energy industry towards efficient energy management.

## 7. Conclusion

The application of deep learning methods to the forecasting of the electrical load of buildings is a subject that first appeared in 2016 and has since demonstrated an upward trend in researchers' interest. This trend is probably due to the promising results of the relevant research work compared to alternative existing methods. Several useful conclusions emerged from this literature review regarding the up-to-date engagement of the scientific community with the subject. The research revealed a higher interest in residential building load forecasting, covering 47.5% of the referenced literature, mainly towards short-term forecasting, in 55% of the papers. The latter was attributed to the lack of available public datasets for experimentation on different building types, since it was found that in 48.2% of the related literature the same historical data on residential building load consumption was used. Despite the several challenges encountered, researchers proved resourceful and resilient in their work and proposed or utilized several new or pre-existing methods to address most of the issues confronted along the way. The advancement of technology and the decrease in the price of the hardware required for DL applications and for managing the vast amounts of data also contributed to the wider adoption of DL methods. To conclude, considering the up-to-date published research, DL models produce accurate and promising results for building load forecasting, outperforming almost all other traditional forecasting methods such as physics-based and statistical models.
Most of the researchers concluded that further testing of their models, with different datasets and more features, would yield more accurate results. The latter can be supported by the Internet of Things (IoT) and smart sensors embedded in the grid, upgrading it to a "smart" grid and paving the way for future research work.

---

*Source: 1008491-2022-06-17.xml*
# A Survey on Deep Learning for Building Load Forecasting

**Authors:** Ioannis Patsakos; Eleni Vrochidou; George A. Papakostas

**Journal:** Mathematical Problems in Engineering (2022)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2022/1008491
---

## Abstract

Energy consumption forecasting is essential for efficient resource management, with both economic and environmental benefits. Forecasting can be implemented through statistical analysis of historical data, application of Artificial Intelligence (AI) algorithms, physical models, and more, and focuses on two directions: the required load for a specific area, e.g., a city, and the required load for a building. Building power forecasting is challenging due to the frequent fluctuation of the required electricity and the complexity and alterability of each building's energy behavior. This paper focuses on the application of Deep Learning (DL) methods to accurately predict building (residential, commercial, or multiple) power consumption by utilizing the available historical big data. Research findings are compared to state-of-the-art statistical models and AI methods of the literature to comparatively evaluate their efficiency and justify their future application. The aim of this work is to review up-to-date proposed DL approaches, to highlight the current research status, and to point out emerging challenges and future potential directions. Research revealed a higher interest in residential building load forecasting, covering 47.5% of the related literature from 2016 up to date, focusing on a short-term forecasting horizon in 55% of the referenced papers. The latter was attributed to the lack of available public datasets for experimentation in different building types, since it was found that in 48.2% of the related literature the same historical data regarding residential building load consumption was used.

---

## Body

## 1. Introduction

According to the statistical review on world energy (2020) of the International Energy Agency (IEA) [1], until the end of the year 2019, almost 80% of the electricity consumed on a global scale was produced from nonrenewable sources [3] (Figure 1(a)) such as coal, lignite, oil, and natural gas, and the annual variation, with few exceptions, ranged between a 1% and 4% increase, corresponding to 1% to 2% on average [4], as illustrated in Figure 1(b). (Although the report for 2021 is available [2], it is not used, due to the unique impact of COVID-19 on the energy sector, which, in the authors' opinion, requires further research and is beyond the scope of this paper.) These two factors alone are sufficient to conclude that electric energy is a product with an expiration date, high production costs, and derivatives harmful to the environment (nuclear waste, greenhouse gas (GHG) emissions, etc.). The lack of sufficient means for long-term and large-scale storage of produced electricity makes the successful balancing of supply and demand and the forecasting of power consumption the most significant problems that the electricity production industry has to address on a daily basis, in order to avoid energy shortages, which could lead to production-line problems in industrial compounds, service disruptions, residential outages, etc., or a waste of produced energy [5]. Effective management of the produced electric energy, avoidance of excessive production, and minimization of energy wastage constitute the main keys toward sustainable energy consumption [6].

Figure 1 (a) Global electricity share by fuel source [3]. (b) Annual change in primary energy consumption [4].
Over the past years, the term "smart grid" has frequently been used in multiple scientific papers and articles [7] to describe a flexible grid regarding the production and distribution of electric energy [8]. It should be noted that a search on Scopus using the term "smart grid" returned 44,598 research articles up to 2022. This flexibility is due to the dynamic adjustment of power demand and the cost-effective distribution of electricity produced from various sources, e.g., solar, wind, and nuclear [9]. A grid is considered "smart" when it is able to monitor, predict, schedule, understand, learn, and make decisions regarding power production and distribution, carrying valuable information along with electricity [8]. The upgrade of the power grid requires the infusion of AI [10]; recent studies suggest the training of Artificial Neural Networks (ANNs) to recognize multiple energy patterns [11].

Regarding the prediction time frame, power consumption forecasting can be divided into three main categories:

(i) Short-term, which covers a time frame of up to one day; it is useful in supply and demand (SD) adjustment [12].

(ii) Medium-term, which covers a time frame from one day up to a year; it is useful in maintenance and outage planning [13].

(iii) Long-term, for a time frame longer than a year; it is useful in infrastructure development planning [14].

In the published scientific literature, the methodologies applied so far to power consumption forecasting, for an area or a specific building, fall into two main categories:

(i) Physics principles-based models.

(ii) Statistical and Machine Learning (ML) models.

Building power consumption forecasting is considered more demanding than area-level power consumption forecasting [15]; however, more and more researchers are turning to the application of DL to solve such problems due to the reported promising performances [16–18]. According to estimates, residential and commercial buildings are responsible for the consumption of 20–40% of the global energy production [19–22], with a high energy wastage rate due to insufficient management and planning, the age of the building, lack of responsible energy usage, etc. [23]. These high consumption percentages have motivated researchers to develop new ways, or to enhance and improve existing ones, to save electric energy, focusing mainly on the development of a solid strategy for flexible and efficient energy supply-demand management. The success of the latter strategy is highly dependent on timely and accurate energy consumption forecasting [24, 25].

Therefore, a "smart" grid should be able not only to predict the total amount of energy needed at a certain period of time in a specific area but also to calculate with precision the electricity consumption needs of a specific building, based on the building's characteristics, such as Heating, Ventilation, and Air Conditioning (HVAC) devices, and historic consumption data [26]. It is worth mentioning that a research work [27] estimated that an increase of 1% in forecasting accuracy could result in approximately £10 million per year fewer expenses for the power system of the United Kingdom. Towards this end, the contribution of the present work is focused on methodically searching, collecting, analyzing, and presenting the DL methods proposed in the years 2016–2022 for solving the problem of building power consumption prediction.
To the best of the authors' knowledge, literature reviews of DL methodologies and approaches focusing on forecasting energy use in buildings are limited [28, 29]. This work aims to address the issue of building energy load forecasting and load prediction further and in depth, to update existing literature reviews on the same subject, to shed more light on the current status of DL performance in this area, and to highlight the main challenges that need to be addressed in the future. The main contributions of the current work, compared to existing reviews on building load forecasting [28, 29], are the following:

(i) The research methodology, analyzed in Section 2, is not limited to specific publishers, resulting in a wider range of publications on the subject under study.

(ii) This work focuses on research papers that propose methodologies based on the total building energy load and is not targeted at specific utility loads, such as the HVAC load.

The rest of the paper is organized as follows. In Section 2, the methodology of the conducted literature search is presented, along with statistical analysis and graph displays of the results. In Section 3, all the non-deep-learning methodologies proposed or used to date in building load forecasting are presented briefly. In Section 4, the deep learning methodologies that have been proposed and tested so far for addressing the building energy consumption prediction problem, along with their results and conclusions, are presented in chronological order. Section 5 provides details regarding the datasets used in the referenced literature of Section 4. Section 6 discusses new reflections and concerns regarding the building load forecasting problem that were raised by the conducted research. Finally, Section 7 concludes the paper.

## 2. Materials and Methods

The methodology that was followed consisted of four main steps, as illustrated in Figure 2:

(1) Extensive research in the published literature using Scopus, a certified, academically approved search engine, to establish a solid baseline for our research. Scopus complies with the most important research features of recall, precision, and importance. Regarding "importance", Scopus is considered the most effective search engine for an overview of a topic [30], and it was therefore selected for the scope of this review article. The application of several different combinations of the keywords, such as "Deep learning" and "Building load forecasting", resulted initially in 71 papers.

(2) By studying the Title, Abstract, and Conclusion of each paper, we were able to narrow down the papers relevant to our subject. In this step, the study focused on papers that could provide potential solutions/suggestions for wider and more generalized applications and benefit the upcoming research. After this phase, 48 papers remained.

(3) Extensive and meticulous study of the remaining papers and categorization according to the suggested solution/methodology. After this phase, 34 papers remained.

(4) In the final stage of our research, to achieve a more thorough review, we traced the references in the papers of the previous step that were not included in the results of step 1.
This indicated 6 more papers significant to our research, resulting in a total of 40 papers relevant to the subject.

Figure 2 Steps involved in the collection of research papers for this review.

Several useful conclusions emerged from this literature review regarding the up-to-date engagement of the scientific community with the current subject. The application of deep learning methods to the forecasting of the electrical load of buildings is a subject that first appeared in 2016 and has since demonstrated an upward trend in the interest of researchers, as shown in Figure 3. It should be noted that by February of 2022 alone, seven papers relevant to the subject had been published. This trend is probably due to the promising results of DL architectures in research compared to traditional load forecasting methods.

Figure 3 Published research papers relevant to the subject per year.

Regarding the type of building (residential, commercial, or multiple types), the research reveals an almost similar interest in multiple and commercial buildings, as can be seen in Figure 4, while the highest interest is in residential buildings. We assume this has more to do with the limited dataset availability than with a sole interest in a specific type of building, since approximately 48.2% of the papers experimenting on residential and multiple-building load forecasting use the same well-known dataset containing residential load consumption data, as will be further discussed in an upcoming section.

Figure 4 Paper rates by building type.

Out of the main categories of forecasting time horizon (short, medium, long, and multiple, i.e., more than one category), the one that was most extensively researched, as shown in Figure 5, is short-term forecasting. This is probably due to dataset resolution and the widespread installation of smart meters in an increasing number of buildings. We also assume that, since building load forecasting is highly connected to building occupants' behavior, it is probably better to predict power consumption in the short term and adjust models accordingly, since short-term prediction is more sensitive in capturing variations in building consumption patterns.

Figure 5 Paper rates by forecasting horizon.

Regarding the deep learning methods and architectures that were proposed and tested, the Long Short-Term Memory (LSTM) based architectures attracted the greatest interest, as displayed in Figure 6. This is due to the adaptability that they present in maintaining "memory" over a large number of steps and their ability to employ numerous parameters in order to achieve better accuracy and performance compared to most other models. In Figure 6, the category "Hybrid" refers mainly to LSTM Convolutional Neural Network (LSTM-CNN) hybrid architectures, the category "AE" refers to autoencoders, and the category "Other" refers to the rest of the researched architectures.

Figure 6 Paper rates by deep learning architecture.

## 3. Non-Deep-Learning Methods in Building Load Forecasting

The methods, techniques, and approaches to building energy load forecasting, according to the literature, can be divided into three main categories [31]:

(1) White Box or Physical methods, which include all methods that address the problem by interpreting the thermal behavior of a building. These complex methods require a detailed description of the building's geometry, they do not require training data, and their results can be interpreted in physical terms.
There are several limitations in this methodology regarding forecasting accuracy and reliability [32, 33]. There are three main approaches in this category, and, due to their complexity, several software solutions exist that simplify and automate these complex procedures:

(i) Computational Fluid Dynamics (CFD), which is considered a three-dimensional approach [34, 35].

(ii) Zonal, a simplified CFD, which is considered a two-dimensional approach [36].

(iii) Nodal, the simplest of the three, which is considered a one-dimensional approach [37].

(2) Black Box or Statistical methods using traditional Machine Learning. These methods do not require a detailed description of the building geometry; they require a sufficient amount of training data, and their results can be difficult to interpret in physical terms. The most commonly used methods are:

(iv) Conditional Demand Analysis (CDA), based on the Multiple Linear Regression method [38].

(v) Genetic Algorithms, based on Darwin's theory of the evolution of species [39, 40].

(vi) Artificial Neural Networks (ANN), inspired by brain neurons [41, 42].

(vii) Support Vector Machine (SVM), a classification or regression problem-solving method [43, 44].

(viii) Autoregressive Integrated Moving Average (ARIMA) [45].

(3) Grey Box or Hybrid models, which combine methods from the previous categories in an effort to overcome their disadvantages and utilize their advantages [46]. These methods require a rough description of the building geometry and a small amount of training data compared to the previous category, and their results can be interpreted in physical terms.

## 4. Deep Learning Methods in Building Load Forecasting

As displayed in Figure 4, the methodologies proposed for building load forecasting are categorized into three main categories with regard to the type of buildings under investigation. In this section, following the same categorization, the examined DL methodologies are presented.

### 4.1. Residential Building Load Forecasting

The first DL-based methodology was proposed by Elena Mocanu et al. [47] in 2016 for load forecasting of a residential building. The examined DL models were: (1) the Conditional Restricted Boltzmann Machine (CRBM) [48] and (2) the Factored Conditional Restricted Boltzmann Machine (FCRBM) [49], with reduced extra layers. The performance of both models was compared to that of the three most used machine learning methods of that time [50–52]: (1) the Artificial Neural Network with a Non-Linear Autoregressive Model (ANN-NAR), (2) the Support Vector Machine (SVM), and (3) the Recurrent Neural Network (RNN). The dataset used, entitled "Individual Household Electric Power Consumption" (IHEPC) [53], was collected from a household at a one-minute sampling rate. It contained 2,075,259 samples over an almost four-year period (47 months), collected between December 2006 and November 2010. The dataset attributes used in the experiments were: Aggregated active power (average household power excluding the devices covered by the following attributes), Energy Submetering 1 (kitchen: oven, microwave, dishwasher, etc.), Energy Submetering 2 (laundry room: washing machine, dryer, refrigerator, and a light bulb), and Energy Submetering 3 (water heater and air-conditioning device). In all the implementations, the authors used the first three years of the dataset for model training and the fourth year for testing.
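A minimal sketch of that chronological split, assuming the standard semicolon-separated UCI distribution of the IHEPC file with '?' marking missing readings; the exact preprocessing of [47] is not reproduced here.

```python
import pandas as pd

# Assumed UCI layout: Date;Time;Global_active_power;...;Sub_metering_3, '?' = missing
df = pd.read_csv("household_power_consumption.txt",
                 sep=";", na_values="?", low_memory=False)
df["datetime"] = pd.to_datetime(df["Date"] + " " + df["Time"], dayfirst=True)
df = (df.drop(columns=["Date", "Time"])
        .set_index("datetime")
        .astype(float)
        .sort_index())

# Roughly the split used in [47]: first three years for training, the rest for testing
train = df.loc[:"2009-12-31"]
test = df.loc["2010-01-01":]
print(train.shape, test.shape)
```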
Useful conclusions extracted from that research were the following: all five tested models produced comparable forecasting results, with the best performance attained in experiments predicting the aggregated energy consumption rather than the three submeterings. It is also worth mentioning that in all scenarios the submetering predictions were the most inaccurate, which could be attributed to the difficulty of predicting user behavior. The proposed FCRBM deep learning model outperformed the other four prediction methods in most scenarios. All methods proved to be suitable for near real-time exploitation in power consumption prediction, but the researchers also concluded that as the prediction length increased, the accuracy of the predictions decreased, with reported prediction errors about half those of the ANN. The authors also concluded that even though the use of the proposed deep learning methods was feasible and provided sufficient results, it could be further improved to achieve better prediction accuracy through fine-tuning and the addition of extra information to the models, such as environmental temperature, time, and more [47].

In the same year, Daniel L. Marino et al. [6] proposed another methodology using the LSTM DL model. More precisely, the authors examined three models: (1) the standard Long Short-Term Memory (LSTM) [54], a Recurrent Neural Network (RNN) designed to store information over long time periods that can successfully address the vanishing gradient issue of RNNs; (2) the LSTM-based Sequence-to-Sequence (S2S) architecture [55], a more flexible architecture than the standard LSTM, consisting of two LSTM networks in encoder and decoder roles, which overcomes the naive mapping problem observed in the standard LSTM; and (3) the Factored Conditional Restricted Boltzmann Machine (FCRBM) method proposed in [47]. This work revealed that the standard LSTM failed in building load forecasting, as a naive mapping issue occurred. The proposed deep learning model, the LSTM Sequence-to-Sequence (S2S) network based on the standard LSTM, overcame the naive mapping issue and produced results comparable to the FCRBM model and to the other methods examined in [47], using the same dataset [53]. A significant conclusion of this research was that when the prediction length increased, the accuracy of predictions decreased. The researchers also concluded that in order to get a better grasp of the effectiveness of those methods and improve their generalization, more experiments with different datasets and regularization methods had to be conducted. It is worth mentioning that the dataset used was the same as in [47].

The following year, in 2017, Kasun Amarasinghe et al. [56] proposed a methodology based on the Convolutional Neural Network (CNN) model. The novelty of this work was the deployment of a grid topology for feeding the data to the CNN model, for the first time in this kind of problem. The authors compared the performance of the CNN model with that of: (1) the standard Long Short-Term Memory (LSTM), (2) the LSTM-based Sequence-to-Sequence (S2S) architecture, (3) the Factored Conditional Restricted Boltzmann Machine (FCRBM), (4) the Artificial Neural Network with a Non-Linear Autoregressive Model (ANN-NAR), and (5) the Support Vector Machine (SVM). This research extracted the following conclusions: all the tested deep learning architectures produced better results in energy load forecasting for a single residence than the SVM, and similar or more accurate results than the standard ANN.
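A minimal Keras sketch of a sequence-to-sequence LSTM of the kind described for [6]; layer sizes, window lengths, and the tensorflow.keras API are illustrative assumptions rather than the exact configuration of that work.

```python
import numpy as np
from tensorflow.keras import layers, models

n_in, n_out = 48, 24   # assumed: 48 past steps in, 24 future steps out

model = models.Sequential([
    layers.Input(shape=(n_in, 1)),
    layers.LSTM(64),                          # encoder: summarizes the input window
    layers.RepeatVector(n_out),               # repeat the summary for each output step
    layers.LSTM(64, return_sequences=True),   # decoder
    layers.TimeDistributed(layers.Dense(1)),  # one load value per future step
])
model.compile(optimizer="adam", loss="mse")

# Dummy arrays only, to show the expected tensor shapes
X = np.random.rand(256, n_in, 1)
y = np.random.rand(256, n_out, 1)
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```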
The best accuracy was achieved by the LSTM (S2S) model. The results of the tested CNN architectures were similar to each other, with slight variations; they performed better than the SVM and the ANN and, even though they did not outperform the other deep learning methods, they remained a promising architecture. A more general observation that puzzled the researchers was that the results in training were better than in testing. The researchers also concluded, based on their recent and previous work [6], that the tested deep learning methods [57, 58] produced promising results in energy load forecasting. They also suggested that weather data should be considered in future forecasting work, due to the direct relationship between the two and the fact that it had not been used to date elsewhere than in [57]. Finally, they came to the same conclusion as in their previous work: in order to get a better grasp of the effectiveness of their methods and to improve their generalization, more experiments with different datasets and regularization methods had to be conducted. Once again, the same dataset [53] was utilized.

In [59], Lei et al. in 2018 introduced a short-term residential load forecasting model named Residual Convolutional Fusion Network (RCFNet). The proposed model consisted of three branches of residual convolutional units (proximity, tendency, and periodicity modeling), a fully connected NN (weekday or weekend modeling), and an RCN to perform load forecasting based on the fusion of the previous outputs. The dataset used in this research [60] covered a two-year time period (April 2012 to March 2014) and contained half-hour sampled data from smart meters installed in 25 households in Victoria, Australia. For the purpose of this research, only the 8 households with the most complete data series were used. Approximately 91.7% (22 months) of the dataset was used for training and the remaining 8.3% (2 months) for testing. Six different variations of the proposed RCFNet model were compared to four baseline forecasting models: History Average (HA), Seasonal ARIMA (SARIMA), MLP, and LSTM, and all models were evaluated by calculating the root mean square error (RMSE) metric. The researchers concluded that their model outperformed all other models and achieved the best accuracy, scalability, and adaptability.

In [61], Kim et al. in 2019 introduced a deep learning model for building load forecasting based on the Autoencoder (AE) model. The main idea behind this approach was to devise a scheme capable of considering different features for different states/situations each time, in order to achieve more accurate and explanatory energy forecasts. The model consisted of two main components based on the LSTM architecture: a projector that, given the input data, determined the current energy demand defining the state of the model, and a predictor performing the building load forecasting based on that state. The user of the system had a key role and could affect the forecasting through parameter and condition choices. In this work, the well-known dataset of [53] was used; 90% of the dataset was used for training and 10% for testing the model. The authors compared their model to traditional forecasting methods, ML methods, and DL methods, and they concluded that the proposed model, evaluated by the mean square error (MSE), mean absolute error (MAE), and mean relative estimation error (MRE) metrics, outperformed them in most cases.
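Since most of the works cited here score their models with the same handful of error metrics, a short NumPy sketch of the usual definitions is given below; the MRE variant shown is one common normalization, not necessarily the one used in [61].

```python
import numpy as np

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mae(y_true, y_pred):
    return float(np.mean(np.abs(y_true - y_pred)))

def mape(y_true, y_pred):
    # assumes y_true contains no zero loads
    return float(100.0 * np.mean(np.abs((y_true - y_pred) / y_true)))

def mre(y_true, y_pred):
    # one common definition: absolute error normalized by the mean actual load
    return float(np.mean(np.abs(y_true - y_pred)) / np.mean(np.abs(y_true)))
```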
The authors of [61] also concluded that the model's efficiency was enhanced by the condition adjustment reflecting, at each point in time, the situation/state of the model. The main contribution of the proposed work was that the model could both predict future demand and define the current demand pattern as a state.

The same research team of Kim et al. [62], in the same year, 2019, proposed a hybrid model in which two DL architectures, a CNN (most commonly used in image recognition) and an LSTM (most commonly used in speech recognition and natural language processing), were linearly combined into a CNN-LSTM model architecture. For the experiments, the popular dataset of [53] was used. The proposed model was tested at minute, hour, day, and week resolutions, and it was discovered that as the resolution increased, accuracy improved. The CNN-LSTM model, evaluated by the MSE, RMSE, MAE, and mean absolute percentage error (MAPE) metrics, was compared to several other traditional energy forecasting ML and DL models and produced the most accurate results. It should be noted that the proposed method was the first to combine CNN architectures with LSTM models for energy consumption prediction. The authors concluded that the proposed model could deal with noise drawbacks and displayed minimal loss of information. The authors also evaluated the attributes of the used dataset and the impact that each of them had on building load forecasting. The Submetering 3 attribute, representing water heater and air conditioner consumption, had the highest impact, followed by the Global Active Power (GAP) attribute. Another observation of this research concerned the lack of available relevant datasets; future work should focus on data collection and the creation of an automated method for hyperparameter selection.

In [63], Le et al. in 2019 presented a DL model for building load forecasting named EECP-CBL. The architecture of the model was a combination of Bi-LSTM and CNN networks. For the conducted experiments, the authors utilized the IHEPC dataset [53]. For each model, 60% of the data (the first three years) was used for training and the remaining 40% (the last two years) for testing. The EECP-CBL model was compared to several state-of-the-art models of the time, used in industry or introduced by other researchers for energy load forecasting: Linear Regression, LSTM, and CNN-LSTM. After data optimization, the models were tested for real-time (1 minute), short-term (1 hour), medium-term (1 day), and long-term (1 week) load prediction, and they were evaluated by the MSE, RMSE, MAE, and MAPE metrics. The authors concluded that the proposed model outperformed all other models in terms of accuracy. The researchers also focused on the time consumed for training and prediction by each model and concluded that as the prediction horizon increased, the time required for each additional task decreased for each model, with the proposed model outperforming all others; a comparatively higher training time was reported as a disadvantage. The research team also concluded that the EECP-CBL model achieved peak performance in long-term building load forecasting and could be utilized in intelligent power management systems.
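A minimal Keras sketch of a linearly combined CNN–LSTM stack of the kind used in [62] and, with a Bi-LSTM, in [63]; filter counts and window length are illustrative assumptions.

```python
from tensorflow.keras import layers, models

window = 60  # assumed: 60 past readings per input window

model = models.Sequential([
    layers.Input(shape=(window, 1)),
    layers.Conv1D(32, kernel_size=3, activation="relu"),  # local consumption patterns
    layers.MaxPooling1D(pool_size=2),
    layers.LSTM(64),                                      # longer-range temporal dependencies
    layers.Dense(32, activation="relu"),
    layers.Dense(1),                                      # next-step load estimate
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.summary()
```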
In [64], Mehdipour Pirbazari et al. in 2020, in order to explore the extent to which several factors can affect short-term (1-hour) building load forecasting, performed several experiments on four data-driven prediction models: Support Vector Regression (SVR), Gradient Boosting Regression Trees (GBRT), Feed-Forward Neural Networks (FFNNs), and LSTM. The authors focused mainly on the scalability of the models and on the prediction accuracy when trained solely on historical consumption data. The dataset covered the time period from November 2011 to February 2014 and contained hourly smart meter data from 5,567 individual households in London, UK [65]. After data normalization and parameter tuning, the dataset utilized in this research focused on the year 2013 (fewer missing values, etc.) and on 75 households, 15 from each of five different consumer-type groups classified by Acorn [66]. The four models were evaluated by a Cumulative Weighted Error (CWE), based on the RMSE, MAE, MASE, and Daily Peak Mean Average Percentage Error (DpMAPE) metrics. The researchers concluded that, among the four models, LSTM and FFNN presented better adaptability to consumption variations and resulted in better accuracy, but LSTM had a higher computation cost and was clearly outperformed in speed by GBRT, which was significantly faster. According to the reported results, other factors that affected load forecasting for all four models were the variations in usage, the average energy consumption, and the forecasted season temperature. Also, changes in the number of features (input lags) or in the total number of tested households (size of the training dataset) did not affect all models similarly. The developed models were expected to learn various load profiles, aiming towards generalization ability and increased robustness.

In [67], Mlangeni et al. in 2020 introduced, for medium- and long-term building load forecasting, a Dense Neural Network (DNN), a deep learning architecture consisting of multiple ANN layers. The dataset used for this research contained approximately 2 million records from households in the eThekwini metropolitan area, with 38 attributes, and covered a five-year period from 2008 to 2013. After data optimization and preparation, 709,021 samples with 7 attributes remained. For model training, 75% of the data was used, and for testing, the remaining 25%. In order to model load forecasting for the campus buildings of the University of KwaZulu, the authors assigned the household readings to rooms inside university buildings. The proposed architecture was compared to SVM and Multiple Regression (MR) models and was evaluated by the RMSE and normalized RMSE (nRMSE) metrics. The authors concluded that the proposed model outperformed the rest of the models, presented good generalization ability, and could follow the consumption trends in the data. Dispersion of values in the data resulted in inaccurate estimations of large values, probably because they were outliers. The authors also concluded that their method could be further improved by implementing more ML architectures and then testing on more datasets against other models, or even by extending it from building load forecasting to wider metropolitan areas.

In [68], Estebsari et al. in 2020, inspired by the high performance of CNN networks in image recognition, proposed a 2-dimensional CNN model for short-term (15-minute) building load forecasting.
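One common way to turn a load window into such an image is the recurrence plot; a minimal NumPy sketch follows, with the window length and threshold chosen arbitrarily for illustration.

```python
import numpy as np

def recurrence_plot(x, eps=0.1):
    """Binary recurrence plot: R[i, j] = 1 where |x[i] - x[j]| < eps."""
    x = np.asarray(x, dtype=float)
    dist = np.abs(x[:, None] - x[None, :])
    return (dist < eps).astype(np.uint8)

# A 96-step window (one day at 15-minute resolution) becomes a 96 x 96 image
window = np.random.rand(96)
image = recurrence_plot(window, eps=0.1)
print(image.shape)  # (96, 96)
```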
To encode the 1-dimensional time series into such 2-dimensional images, the authors presented and experimented with well-known methods: recurrence plots (RP) [69], the Gramian angular field (GAF), and the Markov transition field (MTF) [70]. For the experiments, the IHEPC dataset [53] was used; 80% of the data was used for training and the remaining 20% for testing the models. The performance of three different versions of the proposed CNN-2D model, based on the image encoding method used, was compared to SVM, ANN, and CNN-1D models. All architectures were evaluated by the RMSE, MAPE, and MAE metrics. The researchers concluded that the CNN-2D-RP model outperformed all other models, displaying the best forecasting accuracy; however, due to the image-encoded data, it had a significantly higher computational complexity, making it inappropriate for real-time applications.

In [71], Wen et al. in 2020 presented a Deep RNN with Gated Recurrent Unit (DRNN-GRU) architecture, consisting of five layers, for short- to medium-term load forecasting in residential buildings. The prediction accuracy of the proposed models was compared, using the MAPE, RMSE, Pearson correlation coefficient (PCC), and MAE metrics, to several DL (DRNN, DRNN-LSTM) and non-DL schemes (MLP, ARIMA, SVM, MLR). The dataset used in this research contained 15 months of hourly gathered consumption data and was obtained from the Pecan Street Inc. Dataport web portal [72], while weather data were obtained from [73]. For the experimental evaluation of the method, 20 individual residential buildings were selected from the dataset; the first year of the dataset (80%) was used for training and the remaining three months (20%) for testing. The load demand was calculated for the aggregated load of a group of ten individual residential buildings. The researchers extracted several conclusions from their work. The proposed model achieved a lower error rate than the other tested methods, almost 5% lower than the LSTM-layer variation of the DRNN. The researchers also stated that the DRNN-GRU model achieved higher accuracy than the rest of the models for the aggregated load of 10 residential buildings as well as for the individual load of residences. There were, though, some issues to be taken into consideration regarding the use of the proposed scheme for building load forecasting. The weather attributes, based on historic data, could affect the load forecasting accuracy, since the weather cannot be predicted with high certainty. In addition, the aggregated load forecasting accuracy was higher than that for an individual residence, since the factor of uncertain human behavior decreases as the total number of residences rises.

In 2021, Jin et al. [74] developed an attention-based encoder-decoder network based on a gated recurrent unit (GRU) NN with Bayesian optimization for short-term power forecasting. The contributions of the proposed method were the incorporation of a temporal attention mechanism able to adjust the nonlinear and dynamic adaptability of the network, and the automatic selection of the hyperparameters of the encoder-decoder model, resulting in improved prediction performance. The network was verified on 24-hour load forecasting with data acquired from American Electric Power (AEP) [75]. The dataset included 26,280 samples from 2017 to 2020, with a sampling frequency of one hour; 70% of the data was used for training, 10% for validation, and 20% for testing.
The model was also tested for the load prediction of four special days: the Spring Equinox, Easter, Halloween, and Christmas. The proposed method demonstrated high performance and stability compared to nine other models, considering various indicators to reflect their accuracy (RMSE, MAE, Pearson correlation coefficient (R), NRMSE, and symmetric mean absolute percentage error (SMAPE)). The proposed model outperformed all nine models in all cases.

In [15], a hybrid DL model was proposed for household-level energy forecasting in smart buildings. The model was based on the stacking of fully connected layers and unidirectional LSTMs on bidirectional LSTMs. The proposed model allows the learning of exceedingly nonlinear and convoluted patterns and correlations in data that cannot be captured by the classical up-to-date unidirectional architectures. The accuracy of the model was evaluated on two datasets through score metrics, in comparison with existing relevant state-of-the-art approaches. The first dataset included temperature and humidity in different rooms, appliance energy use, light fixture energy use, weather data, outdoor temperature and relative humidity, atmospheric pressure, wind speed, visibility, and dewpoint temperature data [76]. The second dataset was the well-known IHEPC set of the University of California, Irvine (UCI) Machine Learning repository [53]. The performance comparison, evaluated with RMSE, MAPE, and MAE, indicated the proposed model as the one with the highest accuracy, even in the case of multistep-ahead forecasting. The proposed method could be easily extended to long-term forecasting. Future work could focus on additional household occupancy data and on speeding up the training of the model in order to facilitate its real-time application.

In the same year, Shirzadi et al. [13] developed and compared ML (SVM, RF) and DL models (nonlinear autoregressive exogenous NN (NARX) and recurrent NN (RNN-LSTM)) for predicting electrical load demand. Ten years of historical data for Bruce County in Canada were used [77], regarding hourly electricity consumption recorded by the Independent Electricity System Operator (IESO), fed with temperature and wind speed information [78] recorded from 2010 to 2019; nine years of data were considered for training and one year for testing. Results revealed that the DL models could predict the load demand more accurately, in terms of MAPE and R-squared metrics, for both peak and off-peak values. The windowing size of the analysis period was reported as a limitation of the method, significantly affecting the computation time.

Ozer et al. in 2021 [79] proposed a cross-correlation (XCORR)-based transfer learning approach on LSTM. The proposed model was location-independent, and global features were added to the load forecasting. Moreover, only one month of original data was considered. More specifically, the training data were obtained from the Dataport website [72], while the building data for which the load demand was estimated were collected from an academic building for one month. The evaluation metrics RMSE, MAE, and MAPE were calculated. The performance of the proposed model was not compared to that of different models; however, the effect of transfer learning on LSTM was emphasized. The method resulted in accurate prediction results, paving the way for energy forecasting based on limited data.

More recently, in January 2022, Olu-Ajayi et al.
[80] presented several techniques for predicting annual building energy consumption utilizing a large dataset of residential buildings: ANN, GB, DNN, Random Forest (RF), Stacking, kNN, SVM, Decision Tree (DT), and Linear Regression (LR) were considered. The dataset included building information retrieved from the Ministry of Housing, Communities and Local Government (MHCLG) repository [81] and meteorological data from the Meteostat repository [82]. In addition to forecasting, the effect of building clusters on model performance was examined. The main novelty of that work was the introduction of key input features of building design, enabling designers to forecast the average annual energy consumption at the early stages of development. The effects of both the building clusters on the selected features and of the data size on the performance of the model were also investigated. Results indicated the DNN as the most efficient model in terms of R-squared, MAE, RMSE, and MSE.

In the same month of 2022, in [83], Yan et al. proposed a bidirectional nested LSTM (MC-BiNLSTM) model, combined with the discrete stationary wavelet transform (SWT), towards more accurate energy consumption forecasting. The integrated approach of the proposed method enabled enhanced precision due to the processing of multiple subsignals. Moreover, the use of the SWT was able to eliminate signal noise through signal decomposition. The UK-DALE dataset [84] was used for the evaluation of the model by calculating MAE, RMSE, MAPE, and R-squared. The proposed method was compared to cutting-edge algorithms of the literature, such as AVR, MLP, LSTM, GRU, and seven hybrid DL models (an ensemble model combining LSTM and SWT, an ensemble model combining nested LSTM (NLSTM) and SWT, an ensemble model combining bidirectional LSTM (BLSTM) and SWT, an ensemble model combining LSTM and empirical mode decomposition (EMD), an ensemble model combining LSTM and variational mode decomposition (VMD), an ensemble model combining LSTM and the empirical wavelet transform (EWT), and a multichannel framework combining LSTM and CNN (MC-CNN-LSTM)). The proposed model achieved a reduction of MAPE to less than 8% in most of the cases. The method was developed at the edge of a centralized cloud system that integrated the edge models and could provide a universal IoT energy consumption prediction to multiple households. The method was limited by the difficulty of integrating multiple models for different household consumption patterns, raising data privacy issues.

In [85], a DL model based on LSTM was implemented. The model consisted of two encoders, a decoder, and an explainer. The Kullback-Leibler divergence was the selected loss function, introducing the long-term and short-term dependencies in the latent space created by the second encoder. The experiments used the IHEPC dataset [53]. The first ten months of 2010 were used for training and the remaining two months for testing. The performance of the model was examined through three evaluation metrics: MSE, MAE, and MRE. Results were compared to conventional ML models such as LR, DT, and RF, and to DL models such as LSTM, stacked LSTM, the autoencoder proposed by Li [86], the state-explainable autoencoder (SAE) [61], and the hybrid autoencoder (HAE) proposed by Kim and Cho [87]. The proposed model performed similarly to the state-of-the-art methods, additionally providing an explanation for the prediction results.
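One building block of the hybrid scheme in [83] is the decomposition of the load signal into subsignals before forecasting; a minimal sketch with PyWavelets is shown below, where the wavelet, decomposition level, and series length are illustrative assumptions (the SWT requires the length to be divisible by 2 raised to the level).

```python
import numpy as np
import pywt

load = np.random.rand(1024)  # placeholder series; 1024 is divisible by 2**2

# Level-2 stationary wavelet transform: each level yields approximation and
# detail coefficients of the same length as the input signal.
coeffs = pywt.swt(load, wavelet="db4", level=2)
for approx, detail in coeffs:
    print(approx.shape, detail.shape)
```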
Regarding [85], temporal information was considered, paving the way for explanations that cover not only temporal but also spatial characteristics.

In January 2022, Huang et al. [88] proposed a novel NN based on a CNN-attention-bidirectional LSTM (BiLSTM) for residential energy consumption prediction. An attention mechanism was applied to assign different weights to the neurons' outputs so as to strengthen the impact of important information. The proposed method was evaluated on the IHEPC [53] household electricity consumption data. Moreover, different input timestamp lengths of 10, 60, and 120 minutes were selected to validate the performance of the model. The evaluation metrics RMSE, MAE, and MAPE were calculated for the proposed model and for traditional ML and DL time-series prediction methods, such as SVR, LSTM, GRU, and CNN-LSTM, for comparison. Results indicated the proposed method as the one with the highest forecasting accuracy, resulting in the lowest average MAPE. Moreover, the proposed model could avoid the influence of long input sequence time steps and was able to extract information from the features that most affect the energy forecasting. The authors suggested the consideration of weather factors [89] and of electricity price policy as supplementary data for their future work.

The main characteristics of all the aforementioned DL-based approaches are summarized in Table 1. Comparative performance with respect to state-of-the-art methods is provided throughout this review instead of a numerical performance report for each method, since different evaluation metrics are calculated in each referenced work (root mean squared error (RMSE), correlation coefficient R, p-value, mean absolute error (MAE), mean relative estimation error (MRE), etc.) and different datasets and time frames are selected, making the results not directly comparable.

Table 1 Characteristics of DL methods for the case of residential building load forecasting.

| Paper Ref. | Pub. Year | Deep Learning Model | Time frame | Building type | Better results than | Advantages & disadvantages | Dataset |
| --- | --- | --- | --- | --- | --- | --- | --- |
| [47] | 2016 | CRBM and FCRBM | Short-, medium-, long-term | Residential building | ANN, SVM, RNN (FCRBM also outperformed CRBM) | Suitable for near real-time exploitation; needs fine-tuning and extra information to the models | IHEPC [53] |
| [6] | 2016 | LSTM; LSTM (S2S) | Short-term | Residential building | Naive mapping issue (LSTM); comparable results to [47] (S2S) | LSTM unable to forecast using one-minute resolution; S2S performed well in all cases; needs to be tested on different datasets | IHEPC [53] |
| [56] | 2017 | CNN | Short-term | Residential building | ANN, SVM; comparable to [6] | Results did not vary much across different architectures; needs to be tested with different datasets with weather data | IHEPC [53] |
| [59] | 2018 | RCFNet | Short-term | Residential building | HA, SARIMA, MLP, LSTM | Considers proximity, periodicity, tendency of consumption, and influence of external factors; good scalability; possibility to further optimize its architecture | [60] |
| [61] | 2019 | DL based on AE | Short-term | Residential building | LR, DT, RF, MLP, LSTM, Stacked-LSTM, AE model | Definition of current demand pattern as state; able to predict very complex power demand values with stable and high performance | IHEPC [53] |
| [62] | 2019 | CNN-LSTM | Short-, medium-, long-term | Residential building | LR, DT, RF, MLP, LSTM, GRU, Bi-LSTM, Attention LSTM | High performance at high resolution; analysis of the household appliance variables that influence the prediction | IHEPC [53] |
| [63] | 2019 | EECP-CBL | Short-, medium-, long-term | Residential building | LR, LSTM, CNN-LSTM | High training time; good results in all time frame settings | IHEPC [53] |
| [64] | 2020 | SVR, GBRT, FFNN, LSTM | Short-term | Residential building | SVR, GBRT, FFNN | Improved generalization ability; lower seasonal predictions due to load variations | [65] |
| [67] | 2020 | DNN | Medium-, long-term | Residential building | SVM, MR | Good generalization; unable to predict large values | N/A |
| [68] | 2020 | CNN-2D-RP | Short-term | Residential building | SVM, ANN, CNN-1D | Computationally complex; inappropriate for real-time applications | IHEPC [53] |
| [71] | 2020 | DRNN-GRU | Short-, medium-term | Various combinations of max 20 residential buildings | MLP, ARIMA, SVM, MLR, DRNN, DRNN-LSTM | High accuracy with limited input variables for aggregated and disaggregated load demand; good for filling missing data | [72, 73] |
| [74] | 2021 | Attention-based encoder-decoder (GRU) with Bayesian optimization | Short-term | Residential buildings | Dense, RNN, LSTM, GRU, LstmSeq, GruSeq, LstmSeqAtt, GruSeqAtt, BLstmSeqAtt | Temporal attention layer towards greater robustness; optimal prediction through optimization | AEP [75] |
| [15] | 2021 | Hybrid stacked bidirectional/unidirectional fully connected (HSBUFC) model architecture | Short-term | Residential buildings | LR, Extreme Learning Machine (ELM), NN, LSTM, CNN-LSTM, ConvLSTM, bidirectional LSTM | Allows for learning of exceedingly nonlinear and convoluted patterns and correlations in data; slow training time | IHEPC [53, 76] |
| [13] | 2021 | RNN-LSTM, NARX | Medium-term | Residential buildings | SVM, RF | Accurate prediction of peak and off-peak values; computationally complex due to windowing size | IESO [77, 78] |
| [79] | 2021 | LSTM | Short-term | Residential buildings | - | A model independent of location and introduction of global features; use of limited data | Reference [72] & custom |
| [80] | 2022 Jan. | DNN | Medium-term | Residential buildings | ANN, GB, RF, Stacking, kNN, SVM, DT, LR | Able to predict at the early design phase; not sensitive to a specific building type; size of data affects the performance | MHCLG [81, 82] |
| [83] | 2022 Jan. | SWT-MC-BiNLSTM | Short-term | Residential buildings | AVR, MLP, LSTM, GRU, 7 hybrid DL models | A centralized approach; difficulties in integrating multiple models for different energy patterns | UK-DALE [84] |
| [85] | 2022 Jan. | LSTM with two encoders, a decoder, and an explainer | Short-, long-term | Residential buildings | Similar results with LR, DT, RF, LSTM, stacked LSTM, the autoencoder of [86], SAE of [61], HAE of [87] | Use of temporal information; explainable prediction results; not considering spatial characteristics | IHEPC [53] |
| [88] | 2022 Jan. | CNN-attention-BiLSTM | Short-term | Residential buildings | SVR, LSTM, GRU, CNN-LSTM | Avoidance of the influence of long input sequence time steps | IHEPC [53] |

### 4.2. Commercial Building Load Forecasting

In 2017, Chengdong Li et al. [86] proposed a new DL model combining Stacked Autoencoders (SAE) [90] and an Extreme Learning Machine (ELM) [91]. The role of the SAE was to extract features related to the building's power consumption, while the role of the ELM was to provide accurate energy load forecasting. Only the pretraining of the SAE was needed, while the fine-tuning was established by the least-squares learning of the parameters in the last fully connected layer. The authors compared the performance of the proposed Extreme SAE model with: (1) a Back Propagation Neural Network (BPNN); (2) a Support Vector Regressor (SVR); (3) a Generalized Radial Basis Function Neural Network (GRBFNN), a generalization of the radial basis function neural network (RBFNN); and (4) Multiple Linear Regression (MLR), a well-known and often used regression and prediction statistical method. The dataset was collected from a retail building in Fremont (California, USA) at a 15-minute sampling rate [92]. The dataset contained 34,939 samples, which were aggregated into 17,469 30-minute and 8,734 1-hour samples. The effectiveness of the examined methodologies was measured in terms of MAE, MRE, and RMSE, for the 30- and 60-minute time periods. The researchers concluded that the proposed approach to energy load consumption forecasting presented the best performance, especially with abnormal testing data reflecting uncertainties in the building power consumption. The best overall forecasting performance was achieved by the Extreme SAE model in comparison to the other models; the achieved accuracy from best to worst was: Extreme SAE > SVR > GRBFNN > BPNN > MLR. The authors also concluded that the proposed SAE and ELM combination was superior to the standard SAE, mainly due to the lack of need for fine-tuning of the entire network (iterative BP algorithm), which could speed up the learning process and contribute significantly to the generalization performance. The ELM sped up the training procedure, without iterations, and boosted the overall performance, due to its deeper architecture and improved learning strategies.

Widyaning Chandramitasari et al. [5] in 2018 proposed a model constructed from the combination of an LSTM network, used for time-series forecasting, and a Feed-Forward Neural Network (FFNN), to increase the forecasting accuracy. The research focused on a time horizon of one day ahead with a 30-minute resolution, for a construction company in Japan. The proposed model was validated and compared against the standard LSTM and the Moving Average (MA) model, which were used by a power supply company. The effectiveness of the evaluated methodologies was measured by RMSE. The dataset used covered a time period of approximately one year and four months (August 2016 to November 2017) with a 30-minute resolution. Additional time information considered in the experiments was the day, the time, and the season (low, middle, high). The authors concluded that separating the data into "weekday" and "all day" sets gave more accurate energy load forecasting results for weekdays.
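The appeal of the ELM head in [86] is that only the last layer is learned, in closed form, rather than by iterative backpropagation; a minimal NumPy sketch of that idea follows, with the hidden size and regularization as illustrative assumptions and the deep SAE feature extractor omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, hidden=128, reg=1e-3):
    """Random hidden layer plus ridge-regularized least-squares output layer."""
    W = rng.normal(size=(X.shape[1], hidden))
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)                    # fixed random features
    beta = np.linalg.solve(H.T @ H + reg * np.eye(hidden), H.T @ y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Dummy shapes only: 500 samples, 10 lagged-load features
X, y = rng.random((500, 10)), rng.random(500)
W, b, beta = elm_fit(X, y)
print(elm_predict(X, W, b, beta).shape)  # (500,)
```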
The authors of [5] also pointed out that the data analysis performed for forecasting should, in each case, be adapted to the type of client (residential, public, commercial, industrial, etc.).

In the same year, Nichiforov et al. [93] experimented with RNN networks with LSTM layers, consisting of one sequence input layer, a layer of LSTM units with several different configurations and variations in the number of hidden units (from 5 up to 125 units), a fully connected layer, and a regression output layer. They compared the results for two different nonresidential buildings from university campuses, one in Chicago and the other in Zurich. The datasets used in their experiments were obtained from BUDS [94] and contained hourly samples over a one-year period; after data optimization, they resulted in two datasets of approximately 8,670 data samples each. The results were promising, indicating that the method could be used in load management algorithms with limited overhead for periodic adjustments and model retraining.

The following year, the same authors in [95] also experimented with the same dataset and the same RNN architectures, adding to their research one more building, located in New York. Useful conclusions extracted from both works were the following. The RNN architecture was a good candidate, showing promising accuracy results for building load forecasting. The best performance, graded by the RMSE, Coefficient of Variation of the RMSE (CV-RMSE), MAPE, and MSE metrics, was achieved by the RNN network when the LSTM layer contained 50 hidden units, while the worst accuracy was observed when it contained 125 hidden units, for all buildings. DL model testing in load forecasting has increased in the past few years due to the availability of datasets and relevant algorithms, better hardware necessary for testing, network modeling that can be obtained at lower prices, and the joint efforts of industry and academic research teams leading to better results. Due to the complexity of the building energy forecasting problem (building architecture, materials, consumption patterns, weather conditions, etc.), experts' opinions in this domain could provide insights and guidance, along with further investigation and experimentation on a wide variation of models. The authors also suggested that on-site energy storage could tip the scale in favor of better energy management.

In 2019, Ljubisa Sehovac et al. [96] proposed the GRU (S2S) model [97], a simplified LSTM that maintains similar functionality. There are two main differences between the two models regarding their cells: (1) the GRU (S2S) has an all-purpose hidden state h instead of two different states, memory and hidden, and (2) the input and forget gates are replaced with an update gate z. These modifications allow the GRU (S2S) model to train and converge in less time than the LSTM (S2S) model, while maintaining a sufficient hidden state dimension and enough gates to preserve long-term memory. In this study, the authors experimented with all time frame categories for power consumption forecasting (short, medium, long). The dataset used in the experiments was collected from a retail building at a 5-minute sampling rate. It contained 132,446 samples and covered a time period of one year and three months. There are 11 features in this dataset: Month, Day of Year, Day of Month, Weekday, Weekend, Holiday, Hour, Season, Temperature (°C), Humidity, and Usage (kW).
The data were collected from "smart" sensors forming part of a smart grid; the first 80% was used for training and the remaining 20% for testing. The proposed method was compared to LSTM (S2S), RNN (S2S), and a Deep Neural Network, and their effectiveness was measured using MAE and MAPE. The authors concluded that the GRU (S2S) and LSTM (S2S) models produced better accuracy in energy load consumption forecasting than the other two models. In addition, the GRU (S2S) model outperformed the LSTM (S2S) model and gave accurate predictions in all three cases. Finally, a significant conclusion that verified the findings of related research [6, 47] was that, as the prediction length increased, the accuracy of predictions was expected to decrease.

Mengmeng Cai et al. [98] designed Gated CNN (GCNN) and Gated RNN (GRNN) models. In this research, they tested five different models in short-term (next-day) forecasting and compared them in terms of forecasting accuracy, ability to generalize, robustness, and computational efficiency. The models they tested were: (1) GCNN1, a multistep recursive model that made one-hour predictions and applied them 24 times for a day-ahead prediction; (2) GRNN1, the same as the previous but with an RNN model; (3) GCNN24, a multistep, direct procedure that predicted the whole 24 hours at once; (4) GRNN24, the same as the previous but with an RNN model; and (5) SARIMAX, a non-DL method commonly used for time-series problems. The authors applied the five models to three different nonresidential buildings: Building A (Alexandria, VA, approx. 30.000 sqf, academic, dataset obtained from [99]), Building B (Shirley, NY, approx. 80.000 sqf, school, dataset obtained from [100]), and Building C (Uxbridge, MA, approx. 55.000 sqf, grocery store, dataset obtained from [100]). The datasets used in their experiments were one-hour samples collected over a one-year period and contained meteorological data: temperature, humidity, air pressure, and wind speed. After data preprocessing (cleaning, segmentation, formation, normalization, etc.) to keep only the weekday samples, the researchers divided the remaining data into 90% training data, 5% validation data, and 5% testing data. Several useful conclusions were extracted. The building size, occupancy, and peak load mattered significantly in the results of GCNN1 and GRNN1, improving the accuracy of load prediction: as the number of people in the building rises, the uncertainty caused by each individual's behavior is averaged out, resulting in a more accurate prediction. Among GCNN1, GRNN1, and SARIMAX, the best performance was achieved by GCNN1, slightly poorer performance by GRNN1, and the worst by far by SARIMAX. In another experiment, GCNN24 outperformed GRNN24 and produced better results in accuracy (22.6% fewer errors compared to SARIMAX) and computational efficiency (8% faster compared to SARIMAX) than GCNN1, GRNN1, and SARIMAX, establishing the GCNN24 model as the most suitable, among the five, for short-term (day-ahead) building load forecasting. As a more general conclusion, the researchers stated that DL methods fitted load forecasting better than previously used methods.

In [101], Yuan Gao et al. in 2019 experimented on long-term (one-year) building load forecasting and proposed an LSTM architecture with an additional self-attention network layer [102]. The proposed model emphasized the inner logical relations of the dataset during prediction.
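A minimal sketch of an LSTM with a self-attention layer on top of its hidden states, in the spirit of [101] (window length, feature count, and attention settings are assumptions):

```python
# LSTM hidden states followed by self-attention over the timesteps; a 30-day
# window with 12 daily attributes is an illustrative assumption.
import numpy as np
from tensorflow.keras import layers, Model

def build_lstm_self_attention(in_steps=30, n_features=12):
    x_in = layers.Input(shape=(in_steps, n_features))
    h = layers.LSTM(64, return_sequences=True)(x_in)
    # Self-attention lets every timestep attend to every other timestep,
    # which is meant to help with long-range dependencies.
    att = layers.MultiHeadAttention(num_heads=2, key_dim=32)(h, h)
    h = layers.Add()([h, att])                  # residual connection
    h = layers.LayerNormalization()(h)
    h = layers.GlobalAveragePooling1D()(h)
    out = layers.Dense(1)(h)                    # next-step load value
    model = Model(x_in, out)
    model.compile(optimizer="adam", loss="mse")
    return model

model = build_lstm_self_attention()
x = np.random.rand(4, 30, 12).astype("float32")
y = np.random.rand(4, 1).astype("float32")
model.fit(x, y, epochs=1, verbose=0)
```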
The attention layer was used to improve the ability of the model to convey and remember long-term information. The proposed model was compared to an LSTM model and a Dense Back Propagation Neural Network (DBPNN) and evaluated, regarding load forecasting accuracy, by MAPE. All three models were applied to a nonresidential office building in China. The dataset used in this research contained 12 attributes (weather, time, energy consumption, etc.) as daily measurements over a two-year time period. The main conclusion of this research was that the proposed method was able to address the issue of long-term memory and conveyed information better than the other two architectures, outperforming the LSTM by 2.9% and the DBPNN by 6.5%.

Heidrich Benedikt et al. [103] in 2020 proposed a combination of standard energy load profiles and CNNs, creating the Profile Neural Network (PNN). The proposed architecture consisted of three profile modules (standard load profile, trend, and colorful noise) together with CNNs, a combination which, according to the authors, had never been proposed before. In this scheme, CNNs were used as data encoders for the second (trend encoder) and third modules (external and historical data), in the prediction network of the third module (colorful noise calculation), and in the aggregation layer, where the results of the three modules were aggregated to perform load forecasting. The dataset used for the experiments was the result of merging two datasets: (a) historical load data gathered over a ten-year time period from two different campus buildings (one with weak and one with strong seasonal variation) and (b) weather data obtained from Deutsche Wetterdienst (DWD) [104]. The merged dataset covered an eight-year period with one-hour resolution samples; 75% of the data was used for training and the remaining 25% for testing the models. In order to measure and better comprehend the performance of the PNN, the authors compared the results of four different variations of their model, regarding time window size (PNN0, PNN1 month, PNN6 month, and PNN12 month), to four state-of-the-art building load forecasting methods from the literature (RCFNet, CNN, LSTM, and Stacked-LSTM) and three naïve forecasting models (periodic persistence, profile forecast, and linear regression). All models were evaluated by RMSE and MASE metrics and tested in short-term (one day) and medium-term (one week) building load forecasting. All the PNN models besides PNN0 outperformed the rest of the tested models, and among them PNN1 achieved the best performance for both time horizons and both types of buildings. Regarding the training time, the PNN models required the least time for both types of buildings in short-term forecasting but were outperformed by the CNN in medium-term forecasting. According to the authors, the extra time needed, compared to the fastest model, bought a much better accuracy and was thus an acceptable trade-off. The authors also concluded that the proposed model was flexible, since modules and encoders could be changed case by case to achieve better results, and that it could also be used at a higher scale than a single building.

In [105], Sun et al. in 2020 introduced a deep learning architecture that combined input feature selection, through the MRMR (Maximal Relevance Minimal Redundancy) criterion based on Pearson's correlation coefficient, with an LSTM–RNN architecture.
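A generic illustration of MRMR-style selection based on Pearson correlation — each candidate feature is scored by its relevance to the load minus its mean redundancy with the features already chosen — is given below; it is not the exact criterion or implementation of [105]:

```python
# Greedy MRMR-style ranking using absolute Pearson correlations.
import numpy as np
import pandas as pd

def mrmr_rank(df: pd.DataFrame, target: str, k: int = 5):
    corr = df.corr().abs()                      # absolute Pearson correlations
    candidates = [c for c in df.columns if c != target]
    selected = []
    while candidates and len(selected) < k:
        def score(f):
            relevance = corr.loc[f, target]
            redundancy = corr.loc[f, selected].mean() if selected else 0.0
            return relevance - redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

rng = np.random.default_rng(0)
demo = pd.DataFrame(rng.normal(size=(500, 5)),
                    columns=["temp", "humidity", "hour", "weekday", "load"])
demo["load"] += 0.8 * demo["temp"]              # make one feature clearly relevant
print(mrmr_rank(demo, target="load", k=3))
```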
The dataset used for the short-term forecasting experiments covered one year of historical load data (2017) for three different types of buildings (office, hotel, and shopping mall), obtained from the Shanghai Power Department, while the weather-related data were collected from a local weather forecast website. In order to establish a baseline and demonstrate the proposed model's efficiency, the researchers conducted several experiments in which MRMR-based LSTM–RNN model variations competed against ARIMA, BPNN, and BPNN-SD forecasting models, evaluated with the RMSE and MAPE metrics. According to the results, the proposed model, and more specifically its two-time-step variation, outperformed all other models and provided the most accurate load forecasting results. The authors concluded that, due to the complexity of the building energy load prediction task, the right selection of input features played a key role in the procedure and, in combination with a hybrid prediction model, could produce more accurate results.

In [106], Gopal Chitalia et al. in 2020 presented their findings regarding deep learning architectures in short-term load forecasting, after experimenting on nine different DL models: an Encoder–Decoder scheme, LSTM, LSTM with attention, Convolutional LSTM, CNN–LSTM, BiLSTM, BiLSTM with an attention mechanism, Convolutional BiLSTM, and CNN–BiLSTM. The main idea was that RNN networks with an attention layer could produce more robust and accurate results. All the above models were tested on five different types of buildings on two different continents, Asia and North America. Four out of the five datasets used in this research can be found in [100, 107, 108], while the weather data were collected from [109]. The authors investigated short-term building load forecasting from several aspects regarding feature selection, data optimization, hyperparameter fine-tuning, learning-based clustering, and minimum dataset volume, with acceptable accuracy. All DL architectures were evaluated by RMSE, MAPE, CV, and Root-Mean-Square Logarithmic Error (RMSLE), providing a fair assessment of each building's load forecasting results. The researchers concluded that the implementation of the attention layer in RNN networks increased the load forecasting accuracy of the model and could perform adequately across a variety of buildings, loads, locations, and weather conditions.

In January 2022, Xiao et al. [110] proposed an LSTM model to predict day-ahead energy consumption. Two data smoothing methods, Gaussian kernel density estimation and the Savitzky-Golay filter, were selected and compared. The data used in that work came from the Energy Detective 2020 dataset [111], including hourly consumption data from 20 office buildings and weather data, from 2015 to 2017. The authors concluded that data smoothing could help enhance the prediction accuracy in terms of CVRMSE; however, when raw data were taken as the reference, the prediction accuracy decreased dramatically. A larger training set was recommended in the conclusions, if the computing cost was acceptable.

The main characteristics of the DL-based approaches of this section are summarized in Table 2.

Table 2: Characteristics of DL methods for the case of commercial building load forecasting.
| Paper ref. | Pub. year | Deep learning model | Time frame | Building type | Better results than | Advantages & disadvantages | Dataset |
|---|---|---|---|---|---|---|---|
| [86] | 2017 | Extreme SAE | Short-term | Retail building | SVR, GRBFNN, BPNN, MLR | Quicker learning speed and stronger generalization; does not consider periodicity of energy consumption | [92] |
| [5] | 2018 | LSTM–FFNN | Short-term | Construction company | LSTM, MA | Good next-day forecast at 30-minute resolution | Data from a small power company in Japan |
| [93] | 2018 | RNN + LSTM layers | Medium-term | Commercial buildings | Various implementations of the reviewed model | Replicable case study, suitable for online optimization | [94] |
| [95] | 2019 | RNN + LSTM layers | Medium-term | Commercial buildings | Various implementations of the reviewed model | Tendency to overfit the input data and poor performance on the testing samples | [94] |
| [96] | 2019 | GRU (S2S) | Short-, medium-, long-term | Commercial buildings | LSTM (S2S), RNN (S2S), DNN | Accuracy decreased as the prediction length increased | Custom |
| [98] | 2019 | GCNN, GRNN | Short-term | Commercial buildings | Various implementations of the reviewed models and SARIMAX | Reduced forecasting error, able to handle high-level uncertainties, high computational efficiency | [99, 100] |
| [101] | 2019 | LSTM + self-attention layer | Long-term | Commercial buildings | LSTM, DBPNN | Resolved the problem of long-term memory dependencies | Custom |
| [103] | 2020 | PNN | Medium-term | Commercial buildings | LR, RCFNet, CNN, LSTM, Stacked-LSTM | Inserts statistical information about periodicities in the load time series | [104] |
| [105] | 2020 | LSTM–RNN + MRMR criterion | Short-term | Commercial buildings | ARIMA, BPNN, BPNN-SD | Feature variable selection to capture distinct load characteristics | Shanghai Power Department |
| [106] | 2020 | LSTM + attention | Short-term | Commercial buildings | Encoder–Decoder, LSTM, Convolutional LSTM, CNN–LSTM, BiLSTM, BiLSTM with attention, Convolutional BiLSTM, CNN–BiLSTM | Robust against different building types, locations, weather and load uncertainties | [100, 107, 108] |
| [110] | 2022 Jan. | LSTM + data smoothing | Short-term | Office buildings | — | Prediction declines for certain periods; data smoothing can improve accuracy | Energy Detective 2020 [111] |

### 4.3. Multiple Types of Buildings Load Forecasting

In [112], H. Shi et al. in 2018 introduced a pooling-based deep RNN architecture (PDRNN), boosted by LSTM units, for short-term household load forecasting. In the proposed PDRNN, the authors combined a DRNN with a new profile pooling technique, utilizing neighboring household data to address overfitting and insufficient data in terms of volume, diversity, etc. There were two stages in the proposed methodology: load profile pooling and load forecasting through the DRNN (a rough data-level sketch of this pooling idea follows below). The data used for model training and testing were obtained from the Commission for Energy Regulation (CER) in Ireland [113] and were collected from smart metering customer behavior trials (CBTs). The data covered a one-and-a-half-year time period (July 2009 to December 2010). The proposed method was compared to other state-of-the-art forecasting methods (ARIMA, RNN, SVR, and DRNN models) and was evaluated by RMSE, NRMSE, and MAE metrics. The researchers concluded that PDRNN outperformed the rest of the models, achieving better accuracy and successfully addressing overfitting issues.

In the same year, Aowabin Rahman et al. [114] proposed a methodology focused on medium- to long-term energy load forecasting. The authors examined two LSTM-based (S2S) architecture models with six layers.
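As promised above, a minimal data-level sketch of the pooling idea in [112] simply stacks windows from several neighboring households into one training pool (window lengths are assumed; the actual pooling strategy of [112] is more elaborate):

```python
import numpy as np

def make_windows(series, in_steps=48, out_steps=48):
    """Slice one household's load series into (input, target) windows."""
    X, y = [], []
    for i in range(len(series) - in_steps - out_steps + 1):
        X.append(series[i:i + in_steps])
        y.append(series[i + in_steps:i + in_steps + out_steps])
    return np.array(X), np.array(y)

def pooled_training_set(households, in_steps=48, out_steps=48):
    """Pool windows from several neighboring households into one training set,
    increasing data volume and diversity for a single shared model."""
    Xs, ys = zip(*(make_windows(h, in_steps, out_steps) for h in households))
    return np.concatenate(Xs), np.concatenate(ys)

rng = np.random.default_rng(1)
households = [rng.random(1000) for _ in range(5)]   # five dummy load series
X, y = pooled_training_set(households)
print(X.shape, y.shape)                             # pooled windows from all homes
```

Returning to the two LSTM-based (S2S) models of [114]: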
The contributions of this work were: (1) energy load consumption forecasting for a time period ranging from a few months up to a year (medium to long term); (2) quantification of the performance of the proposed models on various consumption profiles, for load forecasting in commercial buildings and for aggregated load at the small community scale; and (3) development of an imputation scheme for missing historical consumption values using deep RNN models. Regarding the data, the authors followed different protocols: (1) a Public Safety Building in Salt Lake City (PSB, Utah, USA): the dataset obtained from the PSB was at one-hour resolution for a time frame of 448 days (one year, two months, and three weeks), covering the period from the 18th of May 2015 to the 8th of August 2016. The proposed architectures were tested on several load profiles with a combination of variables (weather, day, month, hour of the day, etc.). The first year of the dataset was used for training and the remainder (approximately 83 days) for testing. (2) A number (combinations of at most 30) of residential buildings in Austin (Texas, USA): the dataset for this part of the paper was acquired from the Pecan Street Inc. Dataport web portal [72], at one-hour resolution, for an approximately two-year time period from January 2015 to December 2016. The dataset included data for 30 individual residential buildings, and the load consumption forecasting was aggregated. The first year of the dataset was used for training and the remaining time for testing. The experiments revealed that the prediction accuracy, for both models, was limited and highly affected by the weather. Moreover, if the training data differed greatly from the testing and future weather data, then a model that produced sufficient power load consumption predictions for a specific building could not be applied successfully to a different building. In addition, if major changes occurred in the specific building regarding occupancy, building structure, consumer behavior, or the installed appliances/equipment, the same model would show decreased accuracy. According to the authors' findings, both proposed models performed better than a three-layer MLP model in commercial building energy load forecasting, but worse over a one-year forecasting period for the aggregated load of the residential buildings, with the MLP model performing even better as the number of residential buildings increased. As a final remark, the researchers concluded that there was a lot of potential in the use of deep RNN models for energy load forecasting over medium- to long-term time horizons. It is worth mentioning that, besides the consumption history data, the authors considered several other variables (day of the week, month, time of the day, use frequency, etc.) and weather conditions acquired from the Mesowest web portal [73].

In [115], Y. Pang et al. in 2019 proposed the use of the Generative Adversarial Network (GAN) method in order to overcome the limited historical consumption data available for most buildings for training short-term load forecasting models. The researchers introduced the GAN-BE model, an LSTM-unit-based RNN (LSTM-RNN) deep learning architecture, and experimented with different variations of it, with or without an attention layer. Data collected from four different types of buildings were used for the experiments: an office building, a hotel, a mall, and a comprehensive building.
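A very small GAN sketch for generating synthetic 24-point daily load profiles is shown below; it only illustrates the adversarial setup and is not the GAN-BE architecture of [115] (profile length, latent size, and training schedule are assumptions):

```python
import numpy as np
from tensorflow.keras import layers, Sequential, Model, Input

LATENT, PROFILE = 16, 24

def build_generator():
    return Sequential([Input((LATENT,)),
                       layers.Dense(64, activation="relu"),
                       layers.Dense(PROFILE, activation="sigmoid")])

def build_discriminator():
    d = Sequential([Input((PROFILE,)),
                    layers.Dense(64, activation="relu"),
                    layers.Dense(1, activation="sigmoid")])
    d.compile(optimizer="adam", loss="binary_crossentropy")
    return d

gen, disc = build_generator(), build_discriminator()
disc.trainable = False                       # freeze D inside the combined model
z_in = Input((LATENT,))
gan = Model(z_in, disc(gen(z_in)))
gan.compile(optimizer="adam", loss="binary_crossentropy")

real = np.random.rand(256, PROFILE).astype("float32")   # stand-in for real profiles
for step in range(100):
    z = np.random.normal(size=(32, LATENT)).astype("float32")
    fake = gen.predict(z, verbose=0)
    # Train the discriminator on real vs. generated profiles...
    disc.train_on_batch(real[np.random.choice(len(real), 32)], np.ones((32, 1)))
    disc.train_on_batch(fake, np.zeros((32, 1)))
    # ...then train the generator to fool it.
    gan.train_on_batch(z, np.ones((32, 1)))
```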
The different variations of the proposed model were compared to four LSTM variations and evaluated by MAPE, RMSE, and Dynamic Time Warping (DTW) metrics. The proposed model, with and without the attention layer, outperformed the other models, displaying better accuracy and robustness.

In [116], Khan et al. in 2020 developed a hybrid CNN with an LSTM autoencoder architecture (CNN with LSTM-AE) consisting of ten layers, for short-term load forecasting in residential and commercial buildings. The load forecasting accuracy of the proposed model was compared (by MAPE, RMSE, MSE, and MAE metrics) to other DL schemes (CNN, LSTM, CNN-LSTM, LSTM-AE). Two datasets were used in this research: (1) the UCI repository dataset [53] and (2) a custom dataset from a Korean commercial building, collected with a single sensor (instead of the four used in the UCI dataset), sampled in a 15-minute window, with a total of 960.000 records. For this experiment, the first 75% of the dataset (three years) was used for training and the remaining 25% (one year) for testing. All models were tested, on both datasets, at hourly and daily resolution. The authors extracted several conclusions from their research. When they tested the above DL models on the UCI dataset at hourly resolution, they discovered that some cross combinations among them produced better results than each one individually. This inspired them to develop the proposed model, which outperformed all the above tested DL models. They also experimented on the same dataset at daily resolution, and the proposed model again achieved the best forecasting accuracy. In the next step of their research, they tested their model on their own dataset at hourly and daily resolution. Their model produced less accurate results than the LSTM and LSTM-AE models at hourly resolution but outperformed all other models at daily resolution. The general conclusion of their research was that the proposed hybrid model performed better during the experiments, especially at daily resolution, compared to other DL and more traditional building load forecasting methods.

A kCNN-LSTM deep learning framework was proposed in [117]. The proposed model combined k-means clustering for analyzing energy consumption patterns, CNNs for feature extraction, and an LSTM NN to deal with long-term dependencies. The method was tested with real-time energy data of a four-story academic building, containing more than 30 electricity-related features. The performance of the model was assessed in terms of MAE, MSE, MAPE, and RMSE for the considered year, weekdays, and weekends. The authors observed that the proposed model provided accurate energy demand forecasting, attributed to its ability to learn the spatiotemporal dependencies in the energy consumption data. kCNN-LSTM was compared to k-means variants of state-of-the-art energy demand forecasting models, revealing better performance in terms of computational time and forecasting accuracy (a rough sketch of the cluster-then-forecast idea follows below).

In the same year, Lei et al. [118] developed an energy consumption prediction model based on rough set theory and a deep belief NN (DBN). The data used were collected from 100 civil public buildings (office, commercial, tourist, science, education, etc.) for the rough set reduction and from a laboratory building to train and test the DL model. The public building data covered five months of data collection over a total of 20 inputs.
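As promised above, a rough sketch of the cluster-then-forecast idea behind kCNN-LSTM [117]: daily profiles are grouped with k-means and one small CNN-LSTM is fitted per cluster (cluster count, layer sizes, and the previous-day-to-next-day framing are assumptions, not the configuration of [117]):

```python
import numpy as np
from sklearn.cluster import KMeans
from tensorflow.keras import layers, Sequential

def build_cnn_lstm(in_steps=24, n_features=1):
    return Sequential([
        layers.Input((in_steps, n_features)),
        layers.Conv1D(32, kernel_size=3, padding="same", activation="relu"),
        layers.MaxPooling1D(2),
        layers.LSTM(32),
        layers.Dense(24),                      # next-day hourly profile
    ])

rng = np.random.default_rng(2)
days = rng.random((365, 24))                   # one year of daily hourly profiles
prev, nxt = days[:-1], days[1:]                # previous day -> next day
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(prev)

models = {}
for c in range(3):
    X, y = prev[labels == c][:, :, None], nxt[labels == c]
    m = build_cnn_lstm()
    m.compile(optimizer="adam", loss="mse")
    m.fit(X, y, epochs=2, verbose=0)
    models[c] = m                              # route new days to their cluster's model
```

Returning to the rough-set/DBN study of [118]: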
The laboratory building data comprised fewer than 20 energy consumption inputs, obtained over approximately a year, including building consumption and meteorological data. Short-term and medium-term predictions were included. The prediction results, in terms of MAPE and RMSPE, were compared to those of a back-propagation NN, an Elman NN, and a fuzzy NN, revealing higher accuracy in all cases. The authors concluded that rough set theory was able to eliminate unnecessary factors affecting building energy consumption, and that the DBN with a reduced number of inputs resulted in improved prediction accuracy.

In [119], Khan et al. introduced a hybrid model, DB-Net, by incorporating a dilated CNN (DCNN) with a bidirectional LSTM (BiLSTM). The proposed method used a moving average filter for noise reduction and handled missing values via the substitution method (a loose sketch of this combination is given below). Two energy consumption datasets were used: the IHEPC dataset [53], consisting of four years of energy data (three years for training and one year for testing), and the Korean dataset of the Advanced Institutes of Convergence Technology (AICT) [120] for commercial buildings, consisting of three years of energy data (two years for training and one year for testing). The proposed DB-Net model was evaluated using the MAE, MSE, RMSE, and MAPE error metrics and was compared to various ML and DL models. The proposed model outperformed the referenced approaches by forecasting multistep power consumption, including hourly, daily, weekly, and monthly output, with higher accuracy. However, the method was limited by the fixed-size input data and the use of invariant time-series data in a supervised sense. The authors suggested applying several alternative methods to boost the performance of the model, more challenging datasets, and more dynamic learning approaches as their future work.

Wang et al. [121] proposed a DCNN based on ResNet for hour-ahead building load forecasting. The main contribution of their work was the design of a branch that integrated the temperature per hour into the forecasting branch. The learning capability of the model was enhanced by an innovative feature fusion. The Genome Project building dataset was adopted [122], including load and weather conditions of nonresidential buildings; the focus was on two laboratories and an office. The performance of five DL models was considered for comparison. Comparison results for single-step and 24-step building load forecasting revealed that the proposed DCNN could provide more accurate forecasting results, higher computational efficiency, and stronger generalization for different buildings.

In January 2022, Jogunola et al. [123] introduced an architecture named CBLSTM-AE, combining a CNN with an autoencoder (AE) and a bidirectional LSTM (BLSTM). The effectiveness of the proposed architecture was tested with the well-known UCI dataset, IHEPC [53], and the Q-Energy [124] platform dataset was used to further evaluate the generalization ability of the proposed framework. From the Q-Energy dataset, a private part was used, including two small-to-medium enterprises (SMEs), a hospital, a university, and residences. The time resolution of both datasets was converted to 24 hours for short-term consumption prediction. The IHEPC data was further used for comparison of the proposed method with state-of-the-art frameworks. The proposed model achieved lower MSE, RMSE, and MAE and improved computational time compared to the other models: LSTM, GRU, BLSTM, Attention LSTM, CNN-LSTM, and the electric energy consumption prediction model based on CNN and BLSTM (EECP-CBL).
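As noted above for DB-Net [119], a loose sketch combining moving-average smoothing, forward-fill of gaps, dilated 1-D convolutions, and a BiLSTM might look as follows (filter counts, dilation rates, and the output horizon are assumptions):

```python
import numpy as np
import pandas as pd
from tensorflow.keras import layers, Sequential

def preprocess(series: pd.Series, window: int = 5) -> np.ndarray:
    series = series.ffill()                          # substitute missing values
    return series.rolling(window, min_periods=1).mean().to_numpy()  # denoise

def build_db_net_like(in_steps=96, n_features=1):
    return Sequential([
        layers.Input((in_steps, n_features)),
        # Stacked dilated convolutions widen the receptive field cheaply.
        layers.Conv1D(32, 3, dilation_rate=1, padding="causal", activation="relu"),
        layers.Conv1D(32, 3, dilation_rate=2, padding="causal", activation="relu"),
        layers.Conv1D(32, 3, dilation_rate=4, padding="causal", activation="relu"),
        layers.Bidirectional(layers.LSTM(32)),
        layers.Dense(24),                            # multistep (e.g. hourly) output
    ])

raw = pd.Series(np.random.rand(2000))
raw.iloc[100:110] = np.nan                           # simulate missing readings
clean = preprocess(raw)
model = build_db_net_like()
model.compile(optimizer="adam", loss="mse")
print(model.output_shape)                            # (None, 24)
```

As for the evaluation of CBLSTM-AE [123]: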
Results demonstrated good generalization ability and robustness, providing an effective prediction tool over various datasets.

In February 2022, the most recent research on energy consumption forecasting covered here was presented by Sujan Reddy et al. in [125]. The authors proposed a stacking ensemble model for short-term load consumption. ML and DL models (RF, LSTM, DNN, evolutionary trees (EvTree)) were used as base models. Their prediction results were combined using Gradient Boosting (GBM) and Extreme Gradient Boosting (XGB). Experimental observations on the combinations revealed two different ensemble models with optimal forecasting abilities. The proposed models were tested on a standard dataset [126], available upon request, containing approximately 500.000 load consumption values at periodic intervals spanning more than 9 years. Experimental results pointed out the XGB ensemble model as the optimal one, resulting in reduced training time and higher accuracy compared to the state of the art (EvTree, RF, LSTM, NN, ARMA, ARIMA, the ensemble model of [126], the feed-forward NN (FNN-H2O) of [127], and the DNN-smoothing of [127]). Five regression measures were used: MRE, R-squared, MAE, RMSE, and SMAPE. A reduction of 39% was reported in RMSE.

The main characteristics of the DL-based approaches of this section are summarized in Table 3.

Table 3: Characteristics of DL methods for the case of multiple types of buildings load forecasting.

| Paper ref. | Pub. year | Deep learning model | Time frame | Building type | Better results than | Advantages & disadvantages | Dataset |
|---|---|---|---|---|---|---|---|
| [112] | 2018 | PDRNN | Short-term | Residential, small and medium enterprises | ARIMA, RNN, SVR, DRNN | Prone to overfitting due to more parameters and fewer data | [113] |
| [114] | 2018 | LSTM 1, LSTM 2 | Long-term | Commercial building; various combinations of max 30 residential buildings | MLP | Missing data imputation scheme; accuracy decreases if the weather changes, for other building structures, or when data is aggregated | [72, 73] |
| [115] | 2019 | GAN-BE (LSTM-RNN based) | Short-term | Office building, hotel, mall, comprehensive building | LSTM variations | Able to capture distinct load characteristics and choose accurate input variables | Custom |
| [116] | 2020 | CNN with LSTM-AE | Short-, medium-term | Residential building; commercial buildings | CNN, LSTM, CNN-LSTM, LSTM-AE | Outlier detection and data normalization, spatial feature extraction for better accuracy | IHEPC [53] & custom |
| [117] | 2021 | kCNN-LSTM | Long-term | Academic building | ARIMA, DBN, MLP, LSTM, CNN, CNN-LSTM | Able to learn spatiotemporal dependencies in the energy consumption data | Custom |
| [118] | 2021 | DBN | Short-, medium-term | 100 civil public buildings; laboratory building | Back-propagation NN, Elman NN, fuzzy NN | Requires a large amount of training data; uses uncalibrated data and does not need feature extraction | Custom |
| [119] | 2021 | DB-Net | Short-, long-term | Residential building; commercial buildings | SVR, CNN-LSTM, CNN-BiLSTM, DCNN-LSTM, DCNN-BiLSTM | Multistep forecasting, noise reduction and handling of missing values, small inference time, suitable for real-time applications; limited by the fixed-size input data | IHEPC [53], [120] |
| [121] | 2021 | RCNN | Short-term | Two laboratories and an office | GRU, ResNet, LSTM, GCNN | Increased model depth, enhanced ability to learn nonlinear relations, able to integrate information on external factors, fast convergence | [122] |
| [123] | 2022 Jan. | CBLSTM-AE | Short-term | Commercial buildings; residential buildings | LSTM, GRU, BLSTM, Attention LSTM, CNN-LSTM, EECP-CBL | Generalizes well to varying data, building types, locations, weather and load distributions | IHEPC [53] & private Q-Energy [124] data |
| [125] | 2022 Feb. | Ensemble with GBM; ensemble with XGB | Short-term | Various buildings | EvTree, RF, LSTM, NN, ARMA, ARIMA, Ensemble [126], FNN-H2O [127], DNN-smoothing [127] | Reduced training time; can be applied to any stationary time series data | Custom |

### 4.1. Residential Building Load Forecasting

The first DL-based methodology was proposed by Elena Mocanu et al. [47] in 2016 for load forecasting of a residential building. The examined DL models were: (1) the Conditional Restricted Boltzmann Machine (CRBM) [48] and (2) the Factored Conditional Restricted Boltzmann Machine (FCRBM) [49], with reduced extra layers. The performance of both models was compared to that of the three most used machine learning methods of that time [50–52]: (1) an Artificial Neural Network with a Non-Linear Autoregressive model (ANN-NAR), (2) a Support Vector Machine (SVM), and (3) a Recurrent Neural Network (RNN). The dataset, entitled "Individual Household Electric Power Consumption" (IHEPC) [53], was collected from a household at a one-minute sampling rate. It contained 2.075.259 samples over an almost four-year period (47 months), collected between December 2006 and November 2010. The attributes from the dataset used in the experiments were: aggregated active power (household average power excluding the devices covered by the following attributes), Energy Submetering 1 (kitchen: oven, microwave, dishwasher, etc.), Energy Submetering 2 (laundry room: washing machine, dryer, refrigerator, and a light bulb), and Energy Submetering 3 (water heater and air conditioning device). In all the implementations, the authors used the first three years of the dataset for model training and the fourth year for testing. Useful conclusions extracted from that research were the following: all five tested models produced comparable forecasting results, with the best performance attained in experiments predicting the aggregated energy consumption rather than the three submeterings. It is also worth mentioning that in all scenarios the submetering predictions were the most inaccurate, which could be attributed to the difficulty of predicting user behavior. The proposed FCRBM deep learning model outperformed the other four prediction methods in most scenarios. All methods proved to be suitable for near real-time exploitation in power consumption prediction, but the researchers also concluded that as the prediction length increased, the accuracy of predictions decreased, reporting prediction errors half of those of the ANN. The authors also concluded that, even though the use of the proposed deep learning methods was feasible and provided sufficient results, accuracy could be further improved by fine-tuning and by adding extra information to the models, such as environmental temperature, time, and more [47].

In the same year, Daniel L. Marino et al. [6] proposed another methodology using the LSTM DL model. More precisely, the authors examined three models: (1) the standard Long Short-Term Memory (LSTM) network [54], a Recurrent Neural Network (RNN) designed to store information over long time periods that successfully addresses the vanishing gradient issue of RNNs; (2) an LSTM-based Sequence-to-Sequence (S2S) architecture [55], more flexible than the standard LSTM, consisting of two LSTM networks in encoder-decoder roles, which overcomes the naïve mapping problem observed in the standard LSTM; and (3) the Factored Conditional Restricted Boltzmann Machine (FCRBM) method proposed in [47]. This work revealed that the standard LSTM failed in building load forecasting and a naïve mapping issue occurred.
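Since most studies in this subsection frame IHEPC as a supervised problem, a generic sketch of resampling the minute-level file to hourly values and slicing it into input/target windows may be useful; the column name 'Global_active_power' follows the public UCI file, and the window lengths are arbitrary choices:

```python
import numpy as np
import pandas as pd

def load_hourly(path="household_power_consumption.txt"):
    """Read the raw IHEPC file and return an hourly, gap-filled load series."""
    df = pd.read_csv(path, sep=";", na_values="?", low_memory=False)
    stamps = pd.to_datetime(df["Date"] + " " + df["Time"], dayfirst=True)
    power = pd.to_numeric(df["Global_active_power"], errors="coerce")
    s = pd.Series(power.to_numpy(), index=stamps)
    return s.resample("1H").mean().interpolate()

def to_supervised(series, in_steps=24, out_steps=1):
    """Slide a window over the series to build (X, y) pairs for a forecaster."""
    values = series.to_numpy()
    X, y = [], []
    for i in range(len(values) - in_steps - out_steps + 1):
        X.append(values[i:i + in_steps])
        y.append(values[i + in_steps:i + in_steps + out_steps])
    return np.array(X)[..., None], np.array(y)

# hourly = load_hourly()            # requires the downloaded IHEPC file
# X, y = to_supervised(hourly)      # X: (n, 24, 1), y: (n, 1)
```

Returning to [6]: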
The proposed LSTM Sequence-to-Sequence (S2S) deep learning model, based on the standard LSTM network, overcame the naïve mapping issue and produced results comparable to the FCRBM model and the other methods examined in [47], using the same dataset [53]. A significant conclusion of this research was that when the prediction length increased, the accuracy of predictions decreased. The researchers also concluded that, in order to get a better grasp of the effectiveness of those methods and improve their generalization, more experiments with different datasets and regularization methods had to be conducted.

The following year, in 2017, Kasun Amarasinghe et al. [56] proposed a methodology based on the Convolutional Neural Network (CNN) model. The novelty of this work was the deployment of a grid topology for feeding the data to the CNN model, used for the first time in this kind of problem. The authors compared the performance of the CNN model with that of: (1) the standard Long Short-Term Memory (LSTM) network, (2) the LSTM-based Sequence-to-Sequence (S2S) architecture, (3) the Factored Conditional Restricted Boltzmann Machine (FCRBM), (4) an Artificial Neural Network with a Non-Linear Autoregressive model (ANN-NAR), and (5) a Support Vector Machine (SVM). This research extracted the following conclusions: all the tested deep learning architectures produced better results in energy load forecasting for a single residence than the SVM, and similar or more accurate results than the standard ANN. Moreover, the best accuracy was achieved by the LSTM (S2S). The results of the tested CNN architectures were similar to each other, with slight variations; they performed better than the SVM and ANN and, even though they did not outperform the other deep learning methods, they remained a promising architecture. A more general observation that puzzled the researchers was that the results in training were better than in testing. The researchers also concluded, based on their recent and previous work [6], that the tested deep learning methods [57, 58] produced promising results in energy load forecasting. They also suggested that weather data should be considered in future forecasting work, due to its direct relationship with consumption and the fact that it had not been used to date elsewhere than in [57]. Finally, they came to the same conclusion as in their previous work: in order to get a better grasp of the effectiveness of their methods and to improve their generalization, more experiments with different datasets and regularization methods had to be conducted. Once again, the same dataset [53] was utilized.

In [59], Lei et al. in 2018 introduced a short-term residential load forecasting model named the Residual Convolutional Fusion Network (RCFNet). The proposed model consisted of three branches of residual convolutional units (proximity, tendency, and periodicity modeling), a fully connected NN (weekday or weekend modeling), and a residual convolutional network that performed load forecasting based on the fusion of the previous outputs. The dataset used in this research [60] covered a two-year time period (April 2012 to March 2014) and contained half-hour-sampled data from smart meters installed in 25 households in Victoria, Australia. For this research, only the 8 households with the most complete data series were used. Approximately 91.7% (22 months) of the dataset was used for training and the remaining 8.3% (2 months) for testing.
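A rough three-branch fusion sketch in the spirit of RCFNet [59] — convolutional branches over recent, daily, and weekly history fused with a day-type branch — is shown below; all window lengths and layer sizes are assumptions, and the residual units of the original model are simplified to plain convolutions:

```python
import numpy as np
from tensorflow.keras import layers, Model

def conv_branch(steps, name):
    """Small convolutional encoder over one slice of the load history."""
    inp = layers.Input(shape=(steps, 1), name=name)
    h = layers.Conv1D(32, 3, padding="same", activation="relu")(inp)
    h = layers.Conv1D(32, 3, padding="same", activation="relu")(h)
    return inp, layers.GlobalAveragePooling1D()(h)

prox_in, prox = conv_branch(48, "recent_half_hours")         # proximity
day_in, day = conv_branch(48, "same_time_previous_days")     # periodicity
week_in, week = conv_branch(48, "same_time_previous_weeks")  # tendency
dtype_in = layers.Input(shape=(2,), name="weekday_weekend_flags")
dtype = layers.Dense(8, activation="relu")(dtype_in)

fused = layers.Concatenate()([prox, day, week, dtype])
out = layers.Dense(48)(layers.Dense(64, activation="relu")(fused))  # next-day half-hours
model = Model([prox_in, day_in, week_in, dtype_in], out)
model.compile(optimizer="adam", loss="mse")
model.summary()
```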
Six different variations of the proposed RCFNet model were compared to four baseline forecasting models (History Average (HA), Seasonal ARIMA (SARIMA), MLP, and LSTM), and all models were evaluated with the root mean-square error (RMSE) metric. The researchers concluded that their model outperformed all other models and achieved the best accuracy, scalability, and adaptability.

In [61], Kim et al. in 2019 introduced a deep learning model for building load forecasting based on the Autoencoder (AE). The main idea behind this approach was to devise a scheme capable of considering different features for different states/situations each time, to achieve more accurate and more explanatory energy forecasts. The model consisted of two main components based on the LSTM architecture: a projector that mapped the input data to the current energy demand, defining the state of the model, and a predictor for the building load forecasting based on that state. The user of the system had a key role and could affect the forecasting through parameter and condition choices. In this work, the well-known dataset of [53] was used; 90% of the dataset was used for training and 10% for testing the model. The authors compared their model to traditional forecasting methods, ML methods, and DL methods, and concluded that the proposed model, evaluated by mean square error (MSE), mean absolute error (MAE), and mean relative estimation error (MRE) metrics, outperformed them in most cases. The authors also concluded that their model's efficiency was enhanced by the condition adjustment, given each time by the situation/state of the model. The main contribution of the proposed work was that the model could both predict future demand and define the current demand pattern as a state.

The same research team of Kim et al. [62], in the same year, 2019, proposed a hybrid model in which two DL architectures, a CNN (most commonly used in image recognition) and an LSTM (most commonly used in speech recognition and natural language processing), were linearly combined into a CNN–LSTM architecture. For the experiments, the popular dataset of [53] was used. The proposed model was tested at minute, hour, day, and week resolutions, and it was discovered that as the resolution increased, accuracy improved (a brief illustration of evaluating a series at several resolutions is given below). The CNN–LSTM model, evaluated by MSE, RMSE, MAE, and mean absolute percentage error (MAPE) metrics, was compared to several other traditional ML and DL energy forecasting models and produced the most accurate results. It should be noted that the proposed method was the first to combine CNN architectures with LSTM models for energy consumption prediction. The authors concluded that the proposed model could deal with noise drawbacks and displayed minimal loss of information. The authors also evaluated the attributes of the used dataset and the impact each of them had on building load forecasting: the Submetering 3 attribute, representing water heater and air conditioner consumption, had the highest impact, followed by the Global Active Power attribute. Another observation of this research concerned the lack of available relevant datasets; future work should focus on data collection and the creation of an automated method for hyperparameter selection.

In [63], Le et al. in 2019 presented a DL model for building load forecasting named EECP-CBL. The architecture of the model was a combination of Bi-LSTM and CNN networks. For the conducted experiments, the authors utilized the IHEPC dataset [53].
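As mentioned above for the resolution study of [62], evaluating one series at several resolutions can be illustrated with a simple persistence baseline (the "model" below is a placeholder, not the CNN–LSTM of that work):

```python
import numpy as np
import pandas as pd

idx = pd.date_range("2021-01-01", periods=60 * 24 * 90, freq="T")  # 90 days of minutes
load = pd.Series(np.random.rand(len(idx)), index=idx)

def persistence_forecast(s: pd.Series) -> pd.Series:
    return s.shift(1)                       # placeholder "model": repeat last value

for rule in ["T", "H", "D", "W"]:           # minute, hour, day, week resolutions
    resampled = load.resample(rule).mean()
    pred = persistence_forecast(resampled)
    mae = (pred - resampled).abs().mean()
    print(f"{rule}: {len(resampled)} samples, persistence MAE = {mae:.3f}")
```

Returning to EECP-CBL [63]: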
For each model, the first 60% of the data (the first three years) was used for training and the remaining 40% (the last two years) for testing. The EECP–CBL model was compared to several models that were state of the art at the time, used in industry or introduced by other researchers for energy load forecasting: Linear Regression, LSTM, and CNN-LSTM. After data optimization, the models were tested for real-time (1 minute), short (1 hour), medium (1 day), and long (1 week) term load prediction, and they were evaluated by MSE, RMSE, MAE, and MAPE metrics. The authors concluded that the proposed model outperformed all other models in terms of accuracy. The researchers also examined the time consumed for training and prediction by each model and concluded that, as the prediction horizon increased, the time required for each additional task decreased for each model, with the proposed model outperforming all others while reporting a comparatively higher training time as a disadvantage. The research team also concluded that the EECP–CBL model achieved peak performance in long-term building load forecasting and could be utilized in intelligent power management systems.

In [64], Mehdipour Pirbazari et al. in 2020, in order to explore the extent to which, and the way in which, several factors can affect short-term (1-hour) building load forecasting, performed several experiments on four data-driven prediction models: Support Vector Regression (SVR), Gradient Boosting Regression Trees (GBRT), Feed Forward Neural Networks (FFNNs), and LSTM. The authors focused mainly on the scalability of the models and the prediction accuracy when trained solely on historical consumption data. The dataset covered a four-year time period (November 2011 to February 2014) and contained hourly smart meter data from 5.567 individual households in London, UK [65]. After data normalization and parameter tuning, the dataset utilized in this research focused on the year 2013 (fewer missing values, etc.) and on 75 households, 15 from each of five different consumer-type groups classified by Acorn [66]. The four models were evaluated by a Cumulative Weighted Error (CWE) based on RMSE, MAE, MASE, and Daily Peak Mean Average Percentage Error (DpMAPE) metrics. The researchers concluded that, among the four models, LSTM and FFNN presented better adaptability to consumption variations and resulted in better accuracy, but LSTM had a higher computation cost and was clearly outperformed by GBRT, which was significantly faster. According to the reported results, other factors that affected load forecasting, for all four models, were the variations in usage, the average energy consumption, and the forecasted season temperature. Also, changes in the number of features (input lags) or in the total number of tested households (size of the training dataset) did not affect all models in the same way. The developed models were expected to learn various load profiles, aiming towards generalization ability and increased robustness.

In [67], Mlangeni et al. in 2020 introduced, for medium- and long-term building load forecasting, a Dense Neural Network (DNN), a deep learning architecture consisting of multiple ANN layers. The dataset used for this research contained approximately 2 million records from households in the eThekwini metropolitan area, with 38 attributes, and covered a five-year period from 2008 to 2013. After data optimization and preparation, only 709.021 samples remained, containing 7 attributes.
For model training, 75% of the data was used, and for testing the remaining 25%. In order to model load forecasting for the campus buildings of the University of KwaZulu-Natal, the authors assigned the household readings to rooms inside university buildings. The proposed architecture was compared to SVM and Multiple Regression (MR) models and was evaluated by RMSE and normalized RMSE (nRMSE) metrics. The authors concluded that the proposed model outperformed the rest of the models, presented good generalization ability, and could follow the consumption trends in the data. Dispersion of values in the data resulted in inaccurate estimations of large values, probably because they were outliers. The authors also concluded that their method could be further improved by implementing more ML architectures and then testing on more datasets against other models, or even by extending from building load forecasting to wider metropolitan areas.

In [68], Estebsari et al. in 2020, inspired by the high performance of CNN networks in image recognition, proposed a 2-dimensional CNN model for short-term (15-minute) building load forecasting. In order to encode the 1-dimensional time series into 2-dimensional images, the authors presented and experimented with three well-known methods: recurrence plots (RP) [69], the Gramian angular field (GAF), and the Markov transition field (MTF) [70]. For the experiments, the household consumption dataset of [53] was used; 80% of the data was used for training and the remaining 20% for testing the models. The performance of three different versions of the proposed CNN-2D model, one per image encoding method, was compared to SVM, ANN, and CNN-1D models. All architectures were evaluated by RMSE, MAPE, and MAE metrics. The researchers concluded that the CNN-2D-RP model outperformed all other models, displaying the best forecasting accuracy; however, due to the image-encoded data, it had a significantly higher computational complexity, making it inappropriate for real-time applications.

In [71], Wen et al. in 2020 presented a Deep RNN with Gated Recurrent Units (DRNN-GRU) architecture, consisting of five layers, for short- to medium-term load forecasting in residential buildings. The proposed model's prediction accuracy was compared, using MAPE, RMSE, Pearson correlation coefficient (PCC), and MAE metrics, to several DL (DRNN, DRNN-LSTM) and non-DL schemes (MLP, ARIMA, SVM, MLR). The dataset used in this research contained 15 months of hourly consumption data and was obtained from the Pecan Street Inc. Dataport web portal [72], while weather data were obtained from [73]. For the experimental evaluation of the method, 20 individual residential buildings were selected from the dataset; the first year of the dataset (80%) was used for training and the remaining three months (20%) for testing. The load demand was calculated for the aggregated load of a group of ten individual residential buildings. The researchers extracted several conclusions from their work. The proposed model achieved a lower error rate compared to the other tested methods and almost 5% less than the LSTM-layer variation of the DRNN. The researchers also stated that the DRNN-GRU model achieved higher accuracy than the rest of the models for the aggregated load of 10 residential buildings as well as for the individual load of residences. There were, though, some issues to be taken into consideration regarding the use of the proposed scheme for building load forecasting.
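Before turning to the open issues of [71], the recurrence-plot encoding used in [68] can be illustrated in a few lines (the threshold is an arbitrary choice, not taken from the paper):

```python
import numpy as np

def recurrence_plot(window: np.ndarray, eps: float = 0.1) -> np.ndarray:
    """Binary recurrence matrix R[i, j] = 1 if |x_i - x_j| < eps."""
    diff = np.abs(window[:, None] - window[None, :])
    return (diff < eps).astype(np.float32)

window = np.sin(np.linspace(0, 6 * np.pi, 96))        # dummy 15-minute day profile
image = recurrence_plot(window, eps=0.2)
print(image.shape)                                    # (96, 96) image for a CNN-2D
```

As for the open issues of the DRNN-GRU scheme of [71]: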
The weather attributes, being based on historical data, could affect the load forecasting accuracy, since the weather cannot be predicted with high certainty. In addition, the aggregated load forecasting accuracy was higher than that of an individual residence, since the effect of uncertain human behavior decreases as the total number of residences rises.

In 2021, Jin et al. [74] developed an attention-based encoder-decoder network based on a gated recurrent unit (GRU) NN with Bayesian optimization for short-term power forecasting. The contributions of the proposed method were the incorporation of a temporal attention mechanism able to adjust the nonlinear and dynamic adaptability of the network, and the automatic tuning of the hyperparameters of the encoder-decoder model, resulting in improved prediction performance (a sketch of one possible hyperparameter search setup follows below). The network was verified on 24-hour load forecasting with data acquired from American Electric Power (AEP) [75]. The dataset included 26.280 samples from 2017 to 2020, with a sampling frequency of one hour; 70% of the data was used for training, 10% for validation, and 20% for testing. The model was also tested on the load prediction of four special days: Spring Equinox, Easter, Halloween, and Christmas. The proposed method demonstrated high performance and stability compared to nine other models, considering various indicators of accuracy (RMSE, MAE, Pearson correlation coefficient (R), NRMSE, and symmetric mean absolute percentage error (SMAPE)). The proposed model outperformed all nine models in all cases.

In [15], a hybrid DL model was proposed for household-level energy forecasting in smart buildings. The model was based on stacking fully connected layers and unidirectional LSTMs on bidirectional LSTMs. The proposed model could learn exceedingly nonlinear and convoluted patterns and correlations in data that cannot be captured by the classical unidirectional architectures. The accuracy of the model was evaluated on two datasets through score metrics, in comparison with existing relevant state-of-the-art approaches. The first dataset included temperature and humidity in different rooms, appliance energy use, light fixture energy use, weather data, outdoor temperature and relative humidity, atmospheric pressure, wind speed, visibility, and dewpoint temperature [76]. The second dataset was the well-known IHEPC set of the University of California, Irvine (UCI) Machine Learning repository [53]. The performance comparison indicated the proposed model as the one with the highest accuracy, evaluated with RMSE, MAPE, and MAE, even in the case of multistep-ahead forecasting. The proposed method could easily be extended to long-term forecasting. Future work could focus on additional household occupancy data and on speeding up the training of the model in order to facilitate its real-time application.

In the same year, Shirzadi et al. [13] developed and compared ML (SVM, RF) and DL models (a nonlinear autoregressive exogenous NN (NARX) and a recurrent NN (RNN-LSTM)) for predicting electrical load demand. Ten years of historical data for Bruce County in Canada were used [77], regarding hourly electricity consumption from the Independent Electricity System Operator (IESO), fed with temperature and wind speed information [78] recorded from 2010 to 2019; nine years of data were considered for training and one year for testing.
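The tooling behind the Bayesian optimization in [74] is not described here; purely as one possible setup, the keras-tuner library can search unit counts and learning rates for a small stacked-GRU forecaster (all ranges and shapes are assumptions):

```python
import numpy as np
import keras_tuner as kt
from tensorflow.keras import layers, Sequential, optimizers

def build_model(hp):
    units = hp.Int("units", 32, 128, step=32)
    lr = hp.Float("lr", 1e-4, 1e-2, sampling="log")
    model = Sequential([
        layers.Input((24, 1)),
        layers.GRU(units, return_sequences=True),
        layers.GRU(units),
        layers.Dense(24),                       # 24-hour-ahead load
    ])
    model.compile(optimizer=optimizers.Adam(lr), loss="mse")
    return model

tuner = kt.BayesianOptimization(build_model, objective="val_loss",
                                max_trials=10, overwrite=True,
                                directory="tuner_logs", project_name="gru_load")
X = np.random.rand(512, 24, 1).astype("float32")
y = np.random.rand(512, 24).astype("float32")
tuner.search(X, y, validation_split=0.2, epochs=2, verbose=0)
best = tuner.get_best_models(1)[0]
```

Returning to the comparison of [13]: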
Results revealed that the DL models could predict the load demand more accurately, in terms of MAPE and R-squared metrics, for both peak and off-peak values. The windowing size of the analysis period was reported as a limitation of the method, significantly affecting the computation time.

Ozer et al. in 2021 [79] proposed a cross-correlation (XCORR)-based transfer learning approach on top of LSTM. The proposed model was location-independent, and global features were added to the load forecasting. Moreover, only one month of original data was considered. More specifically, the training data were obtained from the Dataport website [72], while the data of the building whose load demand was estimated were collected from an academic building for one month. The RMSE, MAE, and MAPE evaluation metrics were calculated. The performance of the proposed model was not compared to different models; however, the effect of transfer learning on LSTM was emphasized. The method resulted in accurate prediction results, paving the way for energy forecasting based on limited data.

More recently, in January 2022, Olu-Ajayi et al. [80] presented several techniques for predicting annual building energy consumption utilizing a large dataset of residential buildings: ANN, GB, DNN, Random Forest (RF), Stacking, kNN, SVM, Decision Tree (DT), and Linear Regression (LR) were considered. The dataset included building information retrieved from the Ministry of Housing, Communities and Local Government (MHCLG) repository [81] and meteorological data from the Meteostat repository [82]. In addition to forecasting, the effect of building clusters on model performance was examined. The main novelty of that work was the introduction of key building design input features, enabling designers to forecast the average annual energy consumption at the early stages of development. The effects on model performance of the building clusters underlying the selected features and of the data size were also investigated. Results indicated the DNN as the most efficient model in terms of R-squared, MAE, RMSE, and MSE.

In the same month of 2022, in [83], Yan et al. proposed a multichannel bidirectional nested LSTM (MC-BiNLSTM) model, combined with the discrete stationary wavelet transform (SWT), for more accurate energy consumption forecasting. The integrated approach enabled enhanced precision due to the processing of multiple subsignals, while the use of the SWT was able to eliminate signal noise through signal decomposition. The UK-DALE dataset [84] was used for the evaluation of the model by calculating MAE, RMSE, MAPE, and R-squared. The proposed method was compared to cutting-edge algorithms from the literature, such as AVR, MLP, LSTM, and GRU, and to seven hybrid DL models (LSTM combined with SWT, Nested LSTM (NLSTM) combined with SWT, bidirectional LSTM (BLSTM) combined with SWT, LSTM combined with empirical mode decomposition (EMD), LSTM combined with variational mode decomposition (VMD), LSTM combined with the empirical wavelet transform (EWT), and a multichannel framework combining LSTM and CNN (MC–CNN–LSTM)). The proposed model achieved a reduction of MAPE to less than 8% in most of the cases. The method was developed at the edge of a centralized cloud system that integrated the edge models and could provide a universal IoT energy consumption prediction to multiple households.
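A loose sketch of the wavelet idea in [83]: decompose the load series with a stationary wavelet transform (using the PyWavelets library) so that each subband can be modeled separately; the wavelet, level, and signal length are assumptions, and the per-subband forecasters are omitted:

```python
import numpy as np
import pywt

signal = np.random.rand(1024)                        # length must be divisible by 2**level
coeffs = pywt.swt(signal, wavelet="db4", level=3)    # list of (approx, detail) pairs
for cA, cD in coeffs:
    print(cA.shape, cD.shape)                        # each subband keeps the original length
# Each detail band (plus the final approximation) would be windowed and fed to
# its own recurrent model; the subband predictions are then recombined.
```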
The method was limited by the difficulty to integrate multiple models for different household consumption patterns, raising data privacy issues.In [85], a DL model based on LSTM was implemented. The model consisted of two encoders, a decoder, and an explainer. Kullback-Leibler divergence was the selected loss function that introduced the long-term short-term dependencies in latent space created by the second encoder. Experimental results used the IHEPC dataset [53]. The first ten months of 2010 were used for training and the remaining two months for testing. The performance of the model was examined through three evaluation metrics, MSE, MAE, and MRE. Results were compared to conventional ML models such as LR, DT, and RF, and DL models such as LSTM, stacked LSTM, the autoencoder proposed by Li [86], the state-explainable autoencoder (SAE) [61], and the hybrid autoencoder (HAE) proposed by Kim and Cho [87]. The proposed model performed similarly to the state-of-the-art methods, providing additionally an explanation for the prediction results. Temporal information has been considered, paving the way for additional explanation for not only time but also for spatial characteristics.In January 2022, Huang et al. [88] proposed a novel NN based on CNN-attention-bidirectional LSTM (BiLSTM) for residential energy consumption prediction. An attention mechanism was applied to assign different weights to the neurons’ outputs so as to strengthen the impact of important information. The proposed method was evaluated on IHEPC [53] household electricity consumption data. Moreover, different input timestamp lengths, of 10, 60, and 120 minutes, were selected to validate the performance of the model. Evaluation metrics of RMSE, MAE, and MAPE were calculated for the proposed model and traditional ML and DL methods for time-series prediction, such as SVR, LSTM, GRU, and CNN-LSTM, for comparison. Results indicated the proposed method as the one with the higher forecasting accuracy, resulting in the lowest average MAPE. Moreover, the proposed model could avoid the influence of the input sequence long time step and was able to extract information from the features that most affect the energy forecasting. The authors suggested the consideration of weather factors [89] and electricity price policy supplementary data for their future work.The main characteristics of all aforementioned DL-based approaches are summarized in Table1. Comparative performance to state-of-the-art methods is provided throughout this review instead of a numerical performance report for each method, since different evaluation metrics are calculated in each referenced work (round mean squared error (RMSE), correlation coefficient R, p-value, mean absolute error (MAE), mean relative estimation error (MRE), etc.), different datasets and different time frames are selected, not making the results directly comparable.Table 1 Characteristics of DL methods for the case of residential building load forecasting. PaperRef.Pub.YearDeep LearningModelTime frameBuilding typeBetter results thanAdvantages &disadvantagesDataset[47]2016CRBMShort-termResidential buildingANNSVMRNNSuitable for near real-time exploitation. 
Needs fine-tuning and extra information to the modelsIHEPC [53]Medium-termLong-termFCRBMShort-termResidential buildingANNSVMRNNCBRMMedium-termLong-term[6]2016LSTMShort-termResidential buildingNaïve mapping issueLSTM unable to forecast using one-minute resolution, S2S performed well in all cases, needs to be tested on different datasetsIHEPC [53]LSTM (S2S)Residential buildingComparable results to [47][56]2017CNNShort-termResidential buildingANNSVMComparable [6]Results did not vary much across different architectures, need to be tested with different datasets with weather dataIHEPC [53][59]2018RCFNetShort-termResidential buildingHASARIMAMLPLSTMConsiders proximity, periodicity, tendency of consumption, and influence of external factors, good scalability, possibility to further optimize its architecture[60][61]2019DL based on AEShort-termResidential buildingLRDTRFMLPLSTMStacked – LSTMAE modelDefinition of current demand pattern as state, able to predict very complex power demand values with stable and high performanceIHEPC [53][62]2019CNN-LSTMShort-termResidential buildingLRDTRFMLPLSTMGRUBi-LSTMAttention LSTMHigh performance in high resolution, analysis of variableso household appliances that influence the predictionIHEPC [53]Medium-termLong-term[63]2019EECP-CBLShort-termResidential buildingLRLSTMCNN-LSTMHigh training time, good results in all time frame settingsIHEPC [53]Medium-termLong-term[64]2020SVRGBRTFFNNLSTMShort-termResidential buildingSVRGBRTFFNNImproved generalization ability, lower seasonal predictions due to load variations[65][67]2020DNNMedium- termResidential buildingSVMMRGood generalization, unable to predict large valuesN/ALong-term[68]2020CNN-2d-RPShort-termResidential buildingSVMANNCNN-1DComputational complex, inappropriate for real-time applicationsIHEPC [53][71]2020DRNN-GRUShort-termVarious combinations of max 20 residential buildingsMLPARIMASVMMLRDRNNDRNN-LSTMHigh accuracy with limited input variables for aggregated and disaggregated load demand, good for filling missing data[72, 73]Medium- term[74]2021Attention-based encoder-decoder (GRU) with Bayesian optimizationShort-termResidential buildingsDenseRNNLSTMGRULstmSeqGruSeqLstmSeqAttGruSeqAttBLstmSeqAttTemporal attention layer towards greater robustness, optimal prediction through optimizationAEP [75][15]2021Hybrid stacked Bi-directionalUnidirectional fully connected (HSBUFC) model architectureShort-termResidential buildingsLRExtreme learning machine (ELM)NNLSTMCNN-LSTMConvLSTMBi-directional LSTMAllows for learning of exceedingly nonlinear anc convoluted patterns and correlations in data, slow training timeIHEPC [53, 76][13]2021RNN-LSTMNARXMedium-termResidential buildingsSVMRFAccurate prediction of peak and off-peak values, computationally complex due to windowing sizeIESO [77, 78][79]2021LSTMShort-termResidential buildings-A model independent of location and introduction of global features, use of limited dataReference [72] & custom[80]2022 Jan.DNNMedium- termResidential buildingsANNGBRFStackingkNNSVMDTLRAble to predict at the early design phase, not sensitive to a specific building type, size of data affects the performanceMHCLG [81, 82][83]2022 Jan.SWT-MC-BiNLSTMShort-termResidential buildingsAVRMLPLSTMGRU7 hybrid DL modelsA centralized approach, difficulties in integrating multiple models for different energy patternsUK-DALE [84][85]2022 Jan.LSTM with two encoders, decoder and explainerShort-termResidential buildingsSimilar results with LRDTRFLSTM stacked LSTM autoencoder of [86] SAE of [61]HAE of [87]Use of temporal 
## 4.2. Commercial Building Load Forecasting

In 2017, Chengdong Li et al. [86] proposed a new DL model from the combination of Stacked Autoencoders (SAE) [90] and an Extreme Learning Machine (ELM) [91]. The role of the SAE was to extract features relative to the building's power consumption, while the role of the ELM was accurate energy load forecasting. Only the pretraining of the SAE was needed, while the fine-tuning was established by the least-squares learning of the parameters in the last fully connected layer. The authors compared the performance of the proposed Extreme SAE model with: (1) a Back Propagation Neural Network (BPNN); (2) a Support Vector Regressor (SVR); (3) a Generalized Radial Basis Function Neural Network (GRBFNN), a generalized version of the radial basis function neural network (RBFNN); and (4) Multiple Linear Regression (MLR), a well-known, often-used regression and prediction statistical method. The dataset was collected from a retail building in Fremont (California, USA) at a 15-minute sampling rate [92]. The dataset contained 34,939 samples that were aggregated to 17,469 30-minute and 8,734 1-hour samples. The effectiveness of the examined methodologies was measured in terms of MAE, MRE, and RMSE, for 30- and 60-minute time periods. The researchers concluded that the proposed approach to energy load consumption forecasting presented the best performance, especially with abnormal testing data reflecting uncertainties in the building power consumption. The best overall performance in forecasting was achieved by the Extreme SAE model in comparison to the other models. The achieved accuracy, from best to worst, was: Extreme SAE > SVR > GRBFNN > BPNN > MLR. The authors also concluded that the proposed SAE and ELM combination was superior to a standard SAE, mainly due to the lack of need for fine-tuning of the entire network (iterative BP algorithm), which could speed up the learning process and contribute significantly to the generalization performance. The ELM sped up the training procedure, without iterations, and boosted the overall performance, due to its deeper architecture and improved learning strategies.
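The least-squares step that replaces iterative fine-tuning can be illustrated with a minimal numpy sketch of an ELM-style layer; the sizes and activation below are illustrative, not those of the Extreme SAE in [86].

```python
# Minimal ELM sketch: a random, untrained hidden layer followed by a
# closed-form least-squares solve for the output weights (no back-propagation).
import numpy as np

def elm_fit(X, y, n_hidden=200, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))    # random input weights (kept fixed)
    b = rng.normal(size=n_hidden)                  # random biases (kept fixed)
    H = np.tanh(X @ W + b)                         # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```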
Widyaning Chandramitasari et al. [5] in 2018 proposed a model constructed by the combination of an LSTM network, used for time-series forecasting, and a Feed Forward Neural Network (FFNN), to increase the forecasting accuracy. The research focused on a time horizon of one day ahead with a 30-minute resolution, for a construction company in Japan. The proposed model was validated and compared against the standard LSTM and the Moving Average (MA) model, which were used by a power supply company. The effectiveness of the evaluated methodologies was measured by RMSE. The dataset used covered a time period of approximately one year and four months (August 2016 to November 2017) with a 30-minute resolution. Additional time information considered in the experiments was the day, time, and season (low-middle-high). The authors concluded that separating the days into "weekdays" and "all days" data gave more accurate results in energy load forecasting for weekdays. They also pointed out that the data analysis performed for forecasting should be done, each time, according to the type of the client (residential, public, commercial, industrial, etc.).

In the same year, Nichiforov et al. [93] experimented on RNN networks with LSTM layers, consisting of one sequence input layer, a layer of LSTM units with several different configurations and variations regarding the number of hidden units used (from 5 up to 125 units), a fully connected layer, and a regression output layer. They compared the results for two different nonresidential buildings from university campuses, one in Chicago and the other in Zurich. The datasets used in their experiments were obtained from BUDS [94] and contained hourly samples over a one-year period; after data optimization, they resulted in two datasets of approximately 8,670 data samples each. Results were promising, pointing out that the method could be used in load management algorithms with limited overhead for periodic adjustments and model retraining.

The following year, the same authors in [95] also experimented with the same dataset and the same RNN architectures, adding to their research one more building located in New York. Useful conclusions extracted from both works were the following. The RNN architecture was a good candidate, producing promising accuracy results for building load forecasting. The best performance, graded by RMSE, Coefficient of Variation of the RMSE (CV-RMSE), MAPE, and MSE metrics, was achieved with the RNN network when the LSTM layer contained 50 hidden units, while the worst accuracy was observed when it contained 125 hidden units, for all buildings. DL model testing in load forecasting has expanded in the past few years due to the availability of datasets and relevant algorithms, better and cheaper hardware necessary for testing and network modeling, and the joint efforts of industry and academic research teams leading to better results. Due to the complexity of the building energy forecasting problem (buildings' architecture, materials, consumption patterns, weather conditions, etc.), experts' opinions in this domain could provide insights and guidance, along with further investigation and experimentation on a wide variety of models. The authors also suggested that on-site energy storage could balance the scale in favor of better energy management.

In 2019, Ljubisa Sehovac et al. [96] proposed the GRU (S2S) model [97], a simplified LSTM that maintained similar functionality. There are two main differences between the two models regarding their cells: (1) GRU (S2S) has an all-purpose hidden state h instead of two different states, memory and hidden, and (2) the input and forget gates are replaced with an update gate z. These modifications allowed the GRU (S2S) model to train and converge in less time than the LSTM (S2S) model, while maintaining a sufficient hidden state dimension and enough gates to preserve long-term memory. In this study, the authors experimented in all time frame categories for power consumption forecasting (short-medium-long). The dataset used in the experiments was collected from a retail building at a 5-minute sampling rate. It contained 132,446 samples and covered a time period of one year and three months. There are 11 features in this dataset: Month, Day of Year, Day of Month, Weekday, Weekend, Holiday, Hour, Season, Temperature (°C), Humidity, and Usage (kW).
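Calendar features such as these are usually derived directly from the timestamp itself; a small pandas sketch with illustrative column names (not the dataset's actual schema) is shown below.

```python
# Deriving calendar features from a timestamped consumption series (illustrative).
import pandas as pd

usage = pd.Series(
    range(24 * 7),
    index=pd.date_range("2019-01-01", periods=24 * 7, freq="H"),
    name="usage_kw",
)
features = pd.DataFrame({
    "usage_kw": usage,
    "month": usage.index.month,
    "day_of_year": usage.index.dayofyear,
    "day_of_month": usage.index.day,
    "weekday": usage.index.dayofweek,
    "weekend": (usage.index.dayofweek >= 5).astype(int),
    "hour": usage.index.hour,
    "season": (usage.index.month % 12) // 3,   # 0 = winter ... 3 = autumn
})
```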
The data were collected from "smart" sensors that are part of a "smart grid"; the first 80% was used for training and the remaining 20% for testing. The proposed method was compared to LSTM (S2S), RNN (S2S), and a Deep Neural Network, and their effectiveness was measured by the use of MAE and MAPE. The authors concluded that the GRU (S2S) and LSTM (S2S) models produced better accuracy in energy load consumption forecasting than the other two models. In addition, the GRU (S2S) model outperformed the LSTM (S2S) model and gave an accurate prediction for all three cases. Finally, a significant conclusion that verified the conclusions of relevant research [6, 47] was that when the prediction length increased, the accuracy of predictions was expected to decrease.

Mengmeng Cai et al. [98] designed Gated CNN (GCNN) and Gated RNN (GRNN) models. In this research, they tested five different models in short-term forecasting (next-day forecasting) and compared them in terms of forecasting accuracy, ability to generalize, robustness, and computational efficiency. The models they tested were: (1) GCNN1, a multistep recursive model that made one-hour predictions and applied them 24 times for a day-ahead prediction; (2) GRNN1, the same as the previous but for the RNN model; (3) GCNN24, a multistep, direct procedure that predicted the whole 24 hours at once; (4) GRNN24, the same as the previous but for the RNN model; and (5) SARIMAX, a non-DL, commonly used method for time-series problems. The authors applied the five models to three different nonresidential buildings: Building A (Alexandria, VA, approx. 30,000 sq ft, academic, dataset obtained from [99]), Building B (Shirley, NY, approx. 80,000 sq ft, school, dataset obtained from [100]), and Building C (Uxbridge, MA, approx. 55,000 sq ft, grocery store, dataset obtained from [100]). The datasets used in their experiments were one-hour samples collected over a one-year time period and contained meteorological data: temperature, humidity, air pressure, and wind speed. After data preprocessing (cleaning, segmentation, formation, normalization, etc.) to keep only the weekday samples, the researchers divided the remaining data into 90% training data, 5% validation data, and 5% testing data. Several useful conclusions were extracted. The building size, occupancy, and peak load mattered significantly in the results of GCNN1 and GRNN1, improving the accuracy of load prediction. As the number of people in the building rises, the uncertainty caused by each individual's behavior is averaged out, resulting in a more accurate prediction. Among GCNN1, GRNN1, and SARIMAX, the best performance was achieved by GCNN1, while GRNN1 performed slightly worse and SARIMAX was the worst by far. In another experiment, GCNN24 outperformed GRNN24, and produced better results in accuracy (22.6% fewer errors compared to SARIMAX) and computational efficiency (8% faster compared to SARIMAX) than GCNN1, GRNN1, and SARIMAX, making the GCNN24 model the most suitable, among the five, for short-term (day-ahead) building load forecasting. As a more general conclusion, the researchers stated that DL methods fitted load forecasting better than previously used methods.
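The difference between the recursive and the direct multi-step strategies can be sketched in a framework-agnostic way as follows; `model_1h` and `model_24h` are assumed to be already trained regressors with a scikit-learn-style `predict` and are placeholders, not the authors' gated networks.

```python
# Recursive one-hour-ahead prediction applied 24 times vs. a direct model
# that emits all 24 hours at once (schematic comparison).
import numpy as np

def forecast_recursive(model_1h, window, horizon=24):
    window = list(window)
    preds = []
    for _ in range(horizon):
        y_next = float(model_1h.predict(np.array(window)[None, :])[0])
        preds.append(y_next)
        window = window[1:] + [y_next]   # slide the window over its own prediction
    return preds

def forecast_direct(model_24h, window):
    return model_24h.predict(np.array(window)[None, :])[0]  # all 24 values at once
```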
In [101], Yuan Gao et al. in 2019 experimented in long-term (one year) building load forecasting and proposed an LSTM architecture with an additional self-attention network layer [102]. The proposed model emphasized the inner logical relations within the dataset during prediction. The attention layer was used to improve the ability of the model to convey and remember long-term information. The proposed model was compared to an LSTM model and a Dense Back Propagation Neural Network and evaluated for load forecasting accuracy by MAPE. All three models were applied to a nonresidential office building in China. The dataset used in this research contained 12 attributes (weather, time, energy consumption, etc.) from daily measurements and spanned a two-year time period. The main conclusion of this research was that the proposed method was able to address the issue of long-term memory and conveyed information better than the other two architectures, outperforming LSTM by 2.9% and DBPNN by 6.5%.

Benedikt Heidrich et al. [103] in 2020 proposed a combination of standard energy load profiles and CNNs, and created a Profile Neural Network (PNN). The proposed architecture consisted of three different profile modules: standard load profile, trend, and colorful noise, together with the utilization of CNNs, which according to the authors had never been proposed before. In this scheme, CNNs were used as data encoders for the second module (trend encoder) and the third module (external and historical), in the prediction network of the third module (colorful noise calculation), and in the aggregation layer, where the results of the three modules were aggregated to perform load forecasting. The dataset used for the experiments was the result of merging two datasets: (a) one that contained historical load data gathered over a ten-year time period from two different campus buildings (one with weak and one with strong seasonal variation) and (b) weather data obtained from Deutsche Wetterdienst (DWD) [104]. The merged dataset covered an eight-year period with one-hour resolution samples; 75% of the data was used for training and the remaining 25% for testing the models. In order to measure and better comprehend the performance of the PNN, the authors compared the results of four different variations of their model regarding time window size (PNN0, PNN1 month, PNN6 month, and PNN12 month) to four state-of-the-art building load forecasting methods from the literature (RCFNet, CNN, LSTM, and stacked LSTM) and three naive forecasting models (periodic persistence, profile forecast, and linear regression). All models were evaluated by RMSE and MASE metrics and tested in short-term (one day) and medium-term (one week) building load forecasting. All the PNN models, besides PNN0, outperformed the rest of the tested models, and among them, PNN1 achieved the best performance for both time horizons and both types of buildings. Regarding the training time, PNN models required the least time for both types of buildings for short-term forecasting but were outperformed by CNN in medium-term forecasting. According to the authors, the excess time needed, compared to the fastest model, offered much better accuracy and thus was an acceptable trade-off. The authors also concluded that the proposed model was flexible due to the ability to change modules and encoders according to the case in order to achieve better results, and could also be used on a higher scale than a single building.

In [105], Sun et al. in 2020 introduced a novel deep learning architecture that combined input feature selection, through the MRMR (Maximal Relevance Minimal Redundancy) criterion based on Pearson's correlation coefficient, with an LSTM-RNN architecture.
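A rough pandas/numpy sketch of an MRMR-style selection based on Pearson correlation is shown below as an illustration; the exact criterion used in [105] may differ, and the greedy scheme here is only one common way to apply it.

```python
# Greedy MRMR-style feature selection: pick features with high correlation to
# the target (relevance) and low mean correlation to already selected features
# (redundancy). Feature names and k are illustrative.
import pandas as pd

def mrmr_select(X: pd.DataFrame, y: pd.Series, k: int = 5):
    relevance = X.corrwith(y).abs()
    selected, remaining = [], list(X.columns)
    for _ in range(k):
        def score(f):
            if not selected:
                return relevance[f]
            redundancy = X[selected].corrwith(X[f]).abs().mean()
            return relevance[f] - redundancy      # relevance minus redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```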
The dataset used for the short-term forecasting experiments covered one year of historic load data (2017) for three different types of buildings (office, hotel, and shopping mall), obtained from the Shanghai Power Department, while the weather-related data were collected from a local weather forecast website. In order to establish a baseline and prove the proposed model's efficiency, the researchers conducted several experiments in which MRMR-based LSTM-RNN model variations competed against ARIMA, BPNN variations, and BPNN-SD variation forecasting models, evaluated based on RMSE and MAPE metrics. According to the results, the proposed model, and more specifically the two-time-step variation of the model, outperformed all other models and provided the most accurate load forecasting results. The authors concluded that, due to the complexity of the building energy load prediction task, the right selection of input features played a key role in the procedure and, in combination with a hybrid prediction model, could produce more accurate results.

In [106], Gopal Chitalia et al. in 2020 presented their findings regarding deep learning architectures in short-term load forecasting, after experimenting on nine different DL models: an encoder-decoder scheme, LSTM, LSTM with attention, convolutional LSTM, CNN-LSTM, BiLSTM, BiLSTM with an attention mechanism, convolutional BiLSTM, and CNN-BiLSTM. The main idea was that RNN networks with an attention layer could produce more robust and accurate results. All the above models were tested on five different types of buildings on two different continents, Asia and North America. Four out of the five datasets used in this research can be found in [100, 107, 108], while the weather data were collected from [109]. The authors investigated short-term building load forecasting through several aspects regarding feature selection, data optimization, hyperparameter fine-tuning, learning-based clustering, and minimum dataset volume, with acceptable accuracy. All DL architectures were evaluated by RMSE, MAPE, CV, and Root-Mean-Square Logarithmic Error (RMSLE), which provided a fair assessment of each building's load forecasting results. The researchers concluded that the implementation of the attention layer in RNN networks increased the load forecasting accuracy of the model and could perform adequately across a variety of buildings, loads, locations, and weather conditions.

In January 2022, Xiao et al. [110] proposed an LSTM model to predict the day-ahead energy consumption. Two data smoothing methods, Gaussian kernel density estimation and the Savitzky-Golay filter, were selected and compared. Data used in that work were from the Energy Detective 2020 dataset [111], including hourly consumption data from 20 office buildings and weather data, from 2015 to 2017. The authors concluded that data smoothing could help enhance the accuracy of prediction in terms of CVRMSE; however, when raw data were taken as the reference, the prediction accuracy decreased dramatically. A larger training set was recommended in the conclusions, if the computing cost was acceptable.
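As an illustration of one of the two smoothing options, a minimal SciPy sketch of Savitzky-Golay filtering is shown below (the Gaussian kernel density estimation variant is not shown); the signal, window length, and polynomial order are illustrative and not taken from [110].

```python
# Savitzky-Golay smoothing of an hourly consumption series (illustrative values).
import numpy as np
from scipy.signal import savgol_filter

hourly_load = np.random.rand(24 * 30)            # stand-in for one month of hourly data
smoothed = savgol_filter(hourly_load, window_length=25, polyorder=3)
```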
The main characteristics of the DL-based approaches of this section are summarized in Table 2.

Table 2: Characteristics of DL methods for the case of commercial building load forecasting.

| Paper ref. | Pub. year | Deep learning model | Time frame | Building type | Better results than | Advantages & disadvantages | Dataset |
|---|---|---|---|---|---|---|---|
| [86] | 2017 | Extreme SAE | Short-term | Retail building | SVR, GRBFNN, BPNN, MLR | Quicker learning speed and stronger generalization performance; not considering periodicity of energy consumption | [92] |
| [5] | 2018 | LSTM-FFNN | Short-term | Construction company | LSTM, MA | Good energy consumption forecast of the next day for each 30 minutes | Data from a small power company in Japan |
| [93] | 2018 | RNN + LSTM layers | Medium-term | Commercial buildings | Various implementations of the reviewed model | A replicable case study; suitable for online optimization | [94] |
| [95] | 2019 | RNN + LSTM layers | Medium-term | Commercial buildings | Various implementations of the reviewed model | Tendency to overfit the input data and poor performance on the testing samples | [94] |
| [96] | 2019 | GRU (S2S) | Short-, medium-, long-term | Commercial buildings | LSTM (S2S), RNN (S2S), DNN | Accuracy was decreasing as the prediction length was increasing | Custom |
| [98] | 2019 | GCNN, GRNN | Short-term | Commercial buildings | Various implementations of the reviewed models and SARIMAX | Reduced forecasting error; able to handle high-level uncertainties; high computational efficiency | [99, 100] |
| [101] | 2019 | LSTM + self-attention layer | Long-term | Commercial buildings | LSTM, DBPNN | Resolved the problem of long-term memory dependencies | Custom |
| [103] | 2020 | PNN | Medium-term | Commercial buildings | LR, RCFNet, CNN, LSTM, stacked LSTM | Inserted statistical information about periodicities in the load time series | [104] |
| [105] | 2020 | LSTM-RNN + MRMR criterion | Short-term | Commercial buildings | ARIMA, BPNN, BPNN-SD | Feature variable selection to capture distinct load characteristics | Shanghai Power Department |
| [106] | 2020 | LSTM + attention | Short-term | Commercial buildings | Encoder-decoder, LSTM, convolutional LSTM, CNN-LSTM, BiLSTM, BiLSTM with attention, convolutional BiLSTM, CNN-BiLSTM | Robust against different building types, locations, weather, and load uncertainties | References [100, 107, 108] |
| [110] | 2022 Jan. | LSTM + data smoothing | Short-term | Office buildings | - | Prediction would decline for certain periods; data smoothing could help the accuracy of prediction | Energy Detective 2020 [111] |

## 4.3. Multiple Types of Buildings Load Forecasting

In [112], H. Shi et al. in 2018 introduced, for short-term household load forecasting, a pooling-based deep RNN architecture (PDRNN) boosted by LSTM units. In the proposed PDRNN, the authors combined a DRNN with a new profile pooling technique, utilizing neighboring household data to address overfitting and insufficient data in terms of volume, diversity, etc. There were two stages in the proposed methodology: load profile pooling and load forecasting through the DRNN. The data used for model training and testing were obtained from the Commission for Energy Regulation (CER) in Ireland [113] and were collected from smart metering customer behavior trials (CBTs). The data covered a one-and-a-half-year time period (July 2009 to December 2010). The proposed method was compared to other state-of-the-art forecasting methods, namely ARIMA, RNN, SVR, and DRNN models, and was evaluated by RMSE, NRMSE, and MAE metrics. The researchers concluded that PDRNN outperformed the rest of the models, achieving better accuracy and successfully addressing overfitting issues.
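The pooling stage can be pictured with a toy numpy sketch in which windows from several neighboring households are merged into a single training set; shapes, window width, and the number of households are illustrative, not the values used in [112].

```python
# Pool training windows from neighbouring households so one shared forecaster
# sees more diverse load profiles than any single, data-starved household model.
import numpy as np

def make_windows(series, width=48):
    return np.stack([series[i:i + width] for i in range(len(series) - width)])

households = [np.random.rand(1000) for _ in range(5)]       # 5 neighbouring homes
pooled_X = np.concatenate([make_windows(h)[:, :-1] for h in households])
pooled_y = np.concatenate([make_windows(h)[:, -1] for h in households])
# pooled_X / pooled_y can now train a single shared deep RNN.
```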
In the same year, Aowabin Rahman et al. [114] proposed a methodology focused on medium- to long-term energy load forecasting. The authors examined two LSTM-based (S2S) architecture models with six layers. The contributions of this work were: (1) energy load consumption forecasting for a time period ranging from a few months up to a year (medium to long term); (2) quantification of the performance of the proposed models on various consumption profiles for load forecasting in commercial buildings and for aggregated load at the small community scale; and (3) development of an imputation scheme for missing historical consumption data values using deep learning RNN models. Regarding the datasets used, the authors followed different protocols to collect useful data: (1) a Public Safety Building (PSB) at Salt Lake (Utah, USA). The dataset used for this part of the paper, obtained from the PSB, was at one-hour resolution for a time frame of 448 days (one year, two months, and three weeks), covering a time period from the 18th of May 2015 till the 8th of August 2016. The proposed architectures were tested on several load profiles with a combination of variables (weather, day, month, hour of the day, etc.). The first year of the dataset was used for training and the remainder (approximately 83 days) for testing. (2) A number (combinations of a maximum of 30) of residential buildings in Austin (Texas, USA). The dataset used for this part of the paper was acquired from the Pecan Street Inc. Dataport web portal [72], at one-hour resolution for an approximately two-year time period from January 2015 till December 2016. The dataset included data for 30 individual residential buildings, and the load consumption forecasting was aggregated. The first year of the dataset was used for training and the remaining time for testing. The experiments revealed that the prediction accuracy, for both models, was limited and highly affected by the weather. Moreover, if the training data greatly differed from testing and future weather data, then a model that produced sufficient power load consumption predictions for a specific building could not be applied successfully to a different building. In addition, if major changes regarding occupancy, building structure, consumer behavior, or the installed appliances/equipment occurred in the specific building, the same model would have decreased accuracy. According to the authors' findings, both proposed models performed better than a three-layer MLP model in commercial building energy load forecasting, but worse over a one-year forecasting period regarding the aggregated load of the residential buildings, with the MLP model performing even better as the number of residential buildings increased. As a final remark, the researchers concluded that there was a lot of potential in the use of deep RNN models in energy load forecasting over a medium- to long-term time horizon. It is worth mentioning that, besides the consumption history data, the authors considered several other variables (day of the week, month, time of the day, use frequency, etc.) and weather conditions acquired from the Mesowest web portal [73].

In [115], Y. Pang et al. in 2019 proposed the use of the Generative Adversarial Network (GAN) method to overcome the limited historical consumption data available for most buildings for training short-term load forecasting models. The researchers introduced the GAN-BE model, an LSTM unit-based RNN (LSTM-RNN) deep learning architecture, and experimented with different variations of it, with or without an attention layer. Data collected from four different types of buildings were used for the experiments: an office building, a hotel, a mall, and a comprehensive building.
The different variations of the proposed model were compared to four LSTM variations and evaluated by MAPE, RMSE, and Dynamic Time Warping (DTW) metrics. The proposed model, with and without the attention layer, outperformed the other models, displaying better accuracy and robustness.

In [116], Khan et al. in 2020 developed a hybrid CNN with an LSTM autoencoder architecture (CNN with LSTM-AE) that consisted of ten layers, for short-term load forecasting in residential and commercial buildings. The load forecasting accuracy of the proposed model was compared (by MAPE, RMSE, MSC, and MAE metrics) to other DL schemes (CNN, LSTM, CNN-LSTM, LSTM-AE). Two datasets were used in this research: (1) one from the UCI repository [53] and (2) a custom dataset regarding a Korean commercial building from a single sensor, instead of the four used in the UCI dataset, sampled in a 15-minute window, with a total of 960,000 records. For this experiment, the first 75% of the dataset (three years) was used for training and the remaining 25% (one year) for testing. All models were tested, on both datasets, at hourly and daily resolution. The authors extracted several conclusions from their research. When they tested the above DL models, fed on the UCI dataset, at hourly data resolution, they discovered that some cross combinations among them produced better results than each one of them individually. The latter inspired them to develop the proposed model, which outperformed all the above tested DL models. They also experimented using the same dataset at daily data resolution, and the proposed model again achieved the best forecasting accuracy. In the next step of their research, they tested their model using their own dataset at hourly and daily data resolution. Their model produced less accurate results than the LSTM and LSTM-AE models at hourly data resolution, but outperformed all other models at daily data resolution. The general conclusion of their research was that the proposed hybrid model performed better during the experiments, especially at daily data resolution, compared to other DL and more traditional building load forecasting methods.

A kCNN-LSTM deep learning framework was proposed in [117]. The proposed model combined k-means clustering for analyzing energy consumption patterns, CNNs for feature extraction, and an LSTM-NN to deal with long-term dependencies. The method was tested with real-time energy data of a four-story academic building, containing more than 30 electrical-related features. The performance of the model was assessed in terms of MAE, MSE, MAPE, and RMSE for the considered year, weekdays, and weekends. The authors observed that the proposed model provided accurate energy demand forecasting, attributed to its ability to learn the spatiotemporal dependencies in the energy consumption data. The kCNN-LSTM was compared to k-means variants of state-of-the-art energy demand forecast models, revealing better performance in terms of computational time and forecasting accuracy.
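The pattern-analysis step can be illustrated with a short scikit-learn sketch: k-means clustering of daily load profiles, whose cluster labels can then accompany the raw windows fed to the downstream CNN-LSTM. The data and number of clusters are illustrative, not taken from [117].

```python
# Cluster daily 24-hour load profiles into consumption-pattern groups.
import numpy as np
from sklearn.cluster import KMeans

daily_profiles = np.random.rand(365, 24)     # one year of daily profiles (stand-in data)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(daily_profiles)
# `labels` tags each day with a consumption pattern that can be passed to the
# forecasting network alongside the raw consumption window.
```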
In the same year, Lei et al. [118] developed an energy consumption prediction model based on rough set theory and a deep belief NN (DBN). The data used were collected from 100 civil public buildings (office, commercial, tourist, science, education, etc.) for rough set reduction and from a laboratory building to train and test the DL model. The public building data covered five months of data collection with a total of 20 inputs. The laboratory building data referred to fewer than 20 energy consumption inputs, obtained over approximately a year, including building consumption and meteorological data. Short-term and medium-term predictions were included. Prediction results, in terms of MAPE and RMSPE, were compared to those of a back-propagation NN, an Elman NN, and a fuzzy NN, revealing higher accuracy in all cases. The authors concluded that rough set theory was able to eliminate unnecessary factors affecting building energy consumption. The DBN with a reduced number of inputs resulted in improved prediction accuracy.

In [119], Khan et al. introduced a hybrid model, DB-Net, by combining a dilated CNN (DCNN) with a bidirectional LSTM (BiLSTM). The proposed method used a moving average filter for noise reduction and handled missing values via the substitution method. Two energy consumption datasets were used: the IHEPC dataset [53], consisting of four years of energy data (three years for training and one year for testing), and the Korean dataset of the Advanced Institutes of Convergence Technology (AICT) [120] for commercial buildings, consisting of three years of energy data (two years for training and one year for testing). The proposed DB-Net model was evaluated using MAE, MSE, RMSE, and MAPE error metrics and was compared to various ML and DL models. The proposed model outperformed the referenced approaches, forecasting multistep power consumption, including hourly, daily, weekly, and monthly output, with higher accuracy. However, the method was limited by the fixed-size input data and the use of invariant time-series data in a supervised sense. The authors suggested applying several alternative methods to boost the performance of the model, more challenging datasets, and more dynamic learning approaches as their future work.

Wang et al. [121] proposed a DCNN based on ResNet for hour-ahead building load forecasting. The main contribution of their work was the design of a branch that integrated the temperature per hour into the forecasting branch. The learning capability of the model was enhanced by an innovative feature fusion. The building genome project dataset was adopted [122], including load and weather conditions of nonresidential buildings; the focus was on two laboratories and an office. The performance of five DL models was considered for comparison purposes. Comparison results for single-step and 24-step building load forecasting revealed that the proposed DCNN could provide more accurate forecasting results, higher computational efficiency, and stronger generalization for different buildings.
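The branch-and-fuse idea can be pictured with a schematic Keras sketch; this is an assumption of how such a two-input design could look, not the authors' ResNet-based implementation, and the input shapes are illustrative.

```python
# Two-branch model: one branch encodes the recent load window, a second branch
# injects the hour-aligned temperature, and both are fused before regression.
from tensorflow.keras import layers, models

load_in = layers.Input(shape=(24, 1), name="load_window")
temp_in = layers.Input(shape=(1,), name="temperature_next_hour")

x = layers.Conv1D(32, 3, padding="same", activation="relu")(load_in)
x = layers.Conv1D(32, 3, padding="same", activation="relu")(x)
x = layers.GlobalAveragePooling1D()(x)

fused = layers.Concatenate()([x, temp_in])       # feature fusion of the two branches
out = layers.Dense(1)(fused)                     # hour-ahead load

model = models.Model([load_in, temp_in], out)
model.compile(optimizer="adam", loss="mse")
```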
In January 2022, Jogunola et al. [123] introduced an architecture, named CBLSTM-AE, combining a CNN and an autoencoder (AE) with a bidirectional LSTM (BLSTM). The effectiveness of the proposed architecture was tested with the well-known UCI dataset, IHEPC [53], and the Q-Energy [124] platform dataset was used to further evaluate the generalization ability of the proposed framework. From the Q-Energy dataset, a private part was used, including two small-to-medium enterprises (SMEs), a hospital, a university, and residences. The time resolution of both datasets was converted to 24 hours for short-term consumption prediction. The IHEPC data were further used for comparison of the proposed method with state-of-the-art frameworks. The proposed model achieved lower MSE, RMSE, and MAE and improved computational time, compared to the other models: LSTM, GRU, BLSTM, attention LSTM, CNN-LSTM, and the electric energy consumption prediction model based on CNN and Bi-LSTM (EECP-CBL). Results demonstrated good generalization ability and robustness, providing an effective prediction tool over various datasets.

In February 2022, the most recent research on energy consumption forecasting covered in this review was presented by Sujan Reddy et al. in [125]. The authors proposed a stacking ensemble model for short-term load consumption forecasting. ML and DL models (RF, LSTM, DNN, evolutionary trees (EvTree)) were used as base models. Their prediction results were combined using Gradient Boosting (GBM) and Extreme Gradient Boosting (XGB). Experimental observations on the combinations revealed two different ensemble models with optimal forecasting abilities. The proposed models were tested on a standard dataset [126], available upon request, containing approximately 500,000 load consumption values at periodic intervals over 9 years. Experimental results pointed out the XGB ensemble model as the optimal one, resulting in reduced training time and higher accuracy compared to the state of the art (EvTree, RF, LSTM, NN, ARMA, ARIMA, the ensemble model of [126], the feed-forward NN (FNN-H20) of [127], and the DNN-smoothing of [127]). Five regression measures were used: MRE, R-squared, MAE, RMSE, and SMAPE. A 39% reduction in RMSE was reported.
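A compact scikit-learn sketch of the stacking idea follows; the base models shown (random forest and gradient boosting) are stand-ins for the paper's RF/LSTM/DNN/EvTree base set, and the data are synthetic.

```python
# Stacking: base-model predictions become features for a boosting meta-learner.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import train_test_split

X, y = np.random.rand(1000, 10), np.random.rand(1000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

base_models = [RandomForestRegressor(n_estimators=100, random_state=0),
               GradientBoostingRegressor(random_state=0)]
for m in base_models:
    m.fit(X_tr, y_tr)

# In practice out-of-fold base predictions would be used here to limit leakage;
# this in-sample version keeps the sketch short. GBM is the meta-learner below,
# with XGBoost evaluated in the same role in the paper.
meta_tr = np.column_stack([m.predict(X_tr) for m in base_models])
meta_te = np.column_stack([m.predict(X_te) for m in base_models])
meta = GradientBoostingRegressor(random_state=0).fit(meta_tr, y_tr)
y_hat = meta.predict(meta_te)
```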
The main characteristics of the DL-based approaches of this section are summarized in Table 3.

Table 3: Characteristics of DL methods for the case of multiple types of buildings load forecasting.

| Paper ref. | Pub. year | Deep learning model | Time frame | Building type | Better results than | Advantages & disadvantages | Dataset |
|---|---|---|---|---|---|---|---|
| [112] | 2018 | PDRNN | Short-term | Residential, small and medium enterprises | ARIMA, RNN, SVR, DRNN | Prone to overfitting due to more parameters and fewer data | [113] |
| [114] | 2018 | LSTM 1, LSTM 2 | Long-term | Commercial building; various combinations of max 30 residential buildings | MLP | Missing-data imputation scheme; decrease of accuracy if weather changes, for other building structures, or when data is aggregated | [72, 73] |
| [115] | 2019 | GAN-BE (LSTM-RNN based) | Short-term | Office building, hotel, mall, comprehensive building | LSTM variations | Able to capture distinct load characteristics and choose accurate input variables | Custom |
| [116] | 2020 | CNN with LSTM-AE | Short-, medium-term | Residential and commercial buildings | CNN, LSTM, CNN-LSTM, LSTM-AE | Outlier detection and data normalization; spatial feature extraction for better accuracy | IHEPC [53] & custom |
| [117] | 2021 | kCNN-LSTM | Long-term | Academic building | ARIMA, DBN, MLP, LSTM, CNN, CNN-LSTM | Able to learn spatiotemporal dependencies in the energy consumption data | Custom |
| [118] | 2021 | DBN | Short-, medium-term | 100 civil public buildings; laboratory building | Back-propagation NN, Elman NN, fuzzy NN | Requires a large amount of training data; uses uncalibrated data and does not need feature extraction | Custom |
| [119] | 2021 | DB-Net | Short-, long-term | Residential and commercial buildings | SVR, CNN-LSTM, CNN-BiLSTM, DCNN-LSTM, DCNN-BiLSTM | Ability for multistep forecasting; noise reduction and handling of missing values; small inference time; suitable for real-time applications; limited by the fixed-size input data | IHEPC [53], [120] |
| [121] | 2021 | RCNN | Short-term | Two laboratories and an office | GRU, ResNet, LSTM, GCNN | Increased depth of the model; enhanced ability to learn nonlinear relations; able to integrate information of external factors; fast convergence | [122] |
| [123] | 2022 Jan. | CBLSTM-AE | Short-term | Commercial and residential buildings | LSTM, GRU, BLSTM, attention LSTM, CNN-LSTM, EECP-CBL | Generalizes well to varying data, building types, locations, weather, and load distributions | IHEPC [53] & private Q-Energy [124] data |
| [125] | 2022 Feb. | Ensemble with GBM, ensemble with XGB | Short-term | Various buildings | EvTree, RF, LSTM, NN, ARMA, ARIMA, ensemble of [126], FNN-H20 [127], DNN-smoothing [127] | Reduced training time; can be applied to any stationary time-series data | Custom |

## 5. Datasets

The dataset is the key element of all deep learning methods. In order to train a model to understand and produce useful results, the dataset should be selected carefully. The user has to weigh the options of choosing certain features of each dataset, in accordance with the result that the model produces. In the problem of building load forecasting, we encounter in the existing literature a finite number of datasets being used by researchers, mostly acquired from the building under investigation. Data collection is labor-intensive and presupposes a metering infrastructure installed in the buildings for effective energy consumption monitoring. Moreover, historical data covering several years are usually necessary. In most research papers, the datasets comprise consumption history data (thousands of samples covering a time period of over a year), focusing on major power-consuming devices/appliances (kitchen, water heater, HVAC, etc.) and different load profiles. In some research papers, the authors considered the weather conditions, but the required data were not part of the same dataset as the consumption history data, and the weather data had to be acquired from different sources. Some experimentation, driven mostly by the results of each methodology, led the researchers to add weather conditions and cast the time-series data into categories such as weekday, weekend, and hour of the day, achieving in that way more promising results. Global and local climate change as well as urban overheating can seriously affect the energy consumption of urban buildings, creating weather datasets that are not reliable over the years. So far, research has focused on testing and experimenting with different deep learning models, often on the same dataset, in order to better understand and compare which model provides better results on it. It should be noted here that approximately 48.2% of the papers experimenting on residential and multiple building load forecasting in this work use the same dataset. Towards this end, research efforts are focusing on load forecasting based on limited input variables, which would additionally lead to less computationally complex models appropriate for real-time applications. An interesting observation regarding the datasets is that they can be used effectively only for the building they were acquired from; any effort to adapt them to a different building will not produce the desired results. This limitation in generalization is a major drawback that needs to be addressed in future research in the field. Reliable forecasting models for varying data, building types, locations, weather, and load distributions need to be developed. The lack of detailed datasets for numerous buildings could be addressed by the rapid growth of the Internet of Things (IoT) and the growing capability of the research community to make use of and better comprehend Big Data. The evolution of the home/building and the grid to "smart home/building" and "smart grid" by applying a number of sensors and actuators (IoT) will provide researchers with a vast amount of data (volume), rich in features (variety), and in almost real time (velocity), better described as Big Data.

## 6. Discussion
Building load forecasting is an emerging area of building performance simulation (BPS), combining technical complexity with major significance for a variety of stakeholders, since it supports future operational and energy efficiency improvements in existing buildings [128]. Deep learning models have entered the load forecasting field in recent years due to their ability to deal with big data and lead to high forecasting accuracies. Reviewing the relevant literature regarding building load forecasting with deep learning methods, interesting findings became apparent. To date, most DL models have been applied to residential buildings (47.5%). Residential buildings account for almost 70% of total energy consumption [129]. The increase in population and floor area per person in urban cities has subsequently resulted in an increase in residential energy consumption. The latter motivated the research community to investigate further the energy load forecasting of residential buildings, so as to account for the spent energy and propose energy conservation measures and future green policies. Furthermore, most DL models were applied to a short-term forecasting horizon (55%), e.g., a day or an hour ahead. Short-term forecasting may lead to more accurate results, since a longer forecasting horizon would significantly increase the possibility of alterations of the input data, not known beforehand and able to severely impact the forecasting accuracy. The most popular architecture in the literature was found to be the LSTM model. LSTM models are able to provide a great number of parameters, e.g., learning rates and input and output biases; thus, they do not require fine adjustments. Although the results of DL models appear promising, many challenges need to be addressed, mainly related to data availability and improvements of the DL models.

The human factor is one of the most defining factors that add to the difficulty of the building load forecasting problem. On the small scale of a building, with a number of offices or flats, or even on the scale of a single household, human behavior can challenge even the most efficient DL load forecasting methods. It is a problem that several researchers pointed out in their work and tried to handle by aggregating the load of several homes together before proceeding to forecast. The larger the scale of the studied structure or the number of people working/living in it, the smaller the impact on forecasting, as individual human behavior averages into a more general behavior that is easier to predict.

It is also important to point out that the earliest DL models utilized in building load forecasting were plainer than later ones; as the research progressed, more complex schemes, DL model combinations, and hybrid models appeared and produced more efficient and accurate results. In general, guidelines for developing and testing a DL model are missing from the literature. For example, trial and error was applied in many cases for tuning the hyperparameters, resulting in methodologies that cannot be reproduced easily. Moreover, the models must balance high accuracy against training time and model complexity.
The more computationally complex the model, the more accurate it is reported to be in most of the referenced cases; however, this leads to an increase in training time, making the models inappropriate for real-time applications.

In general, building energy models need to be improved so as to represent in more detail the actual performance of the building. The solution lies in model calibration techniques, which calibrate several inputs to the existing building simulation programs. Calibration could significantly improve the performance of the energy models; however, simulation accuracy is determined by multiple parameters, referring to measured building energy data inserted as calibration inputs. The collection of detailed data may require extensive time and costs. Therefore, another challenge that the researchers had to face, as already mentioned, is the lack of detailed datasets. Additionally, the absence of publicly available datasets obstructs the reproduction of results and comparative studies. In a great number of research papers, in order to explore the impact of different features and to enhance the prediction accuracy by producing more robust and generalized models, researchers had to combine different datasets or process the existing data in several different ways. Once the lack of datasets is addressed properly, the greater challenge in this field will be the development of a DL model, or the combination/utilization of existing DL models, in a way that can be applied to several different types of buildings (office, residential, academic, etc.), use detailed real-time data, proceed automatically to self-adjustments, and produce accurate and applicable results for the energy industry towards efficient energy management.

## 7. Conclusion

The application of deep learning methods to the forecasting of the electrical load of buildings is a subject that first appeared in 2016 and since then has continued to demonstrate an upward trend in researchers' interest. The latter trend is probably due to the promising results of relevant research work, compared to alternative existing methods. Several useful conclusions emerged from this literature review regarding the up-to-date engagement of the scientific community with the current subject. The research revealed a higher interest in residential building load forecasting, covering 47.5% of the referenced literature, mainly towards short-term forecasting, in 55% of the papers. The latter was attributed to the lack of available public datasets for experimentation on different building types, since it was found that in 48.2% of the related literature, the same historical data regarding residential building load consumption was used. Despite the several challenges encountered, researchers proved to be resourceful and resilient in their work, and proposed or utilized several new or pre-existing methods to address most of the issues confronted along the way. The advancement of technology and the decrease in the price of hardware equipment, necessary for applying DL methodologies to the management of vast amounts of data, also contributed to the expansion of DL method applications. To conclude, considering the up-to-date published research, DL models produce accurate and promising results regarding building load forecasting, outperforming almost all the other traditional forecasting methods such as physics-based and statistical models.
Most of the researchers concluded that further testing of their models, with different datasets and more features, would yield more accurate results. The latter can be addressed by the Internet of Things (IoT) and smart sensors embedded in the grid, upgrading it to "smart" and paving the way for future research work.

---

*Source: 1008491-2022-06-17.xml*
2022
# Exposure to Ambient Air Pollutant PM10 in the Second Trimester of Pregnancy Is Associated with Preterm Birth: A Birth-Based Health Information Cohort Study

**Authors:** Pengying Xiao; Lijun Wang; Yingjie Yao; Yu Chang; Jianghou Hou
**Journal:** BioMed Research International (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1008538

---

## Abstract

Objectives. We evaluated the effects of exposure to high concentrations of particulate matter (PM10) on preterm birth (PTB) and identified a critical concentration of PM10 that could lead to PTB via a birth-based health information cohort study. Methods. We conducted a birth-based cohort study consisting of nonanomalous singleton births at 22-42 weeks. PTB was defined as babies born alive before 37 weeks of pregnancy. Pregnancy period exposure averages were estimated for PM10 based on the China National Environmental Monitoring Centre (CNEMC). Pregnant women who lived within 50 km of the monitor station were recruited into this study. Logistic regression analyses were performed to determine the association between PTB and exposure to PM10 at different pregnancy periods with adjustment for confounding factors. Results. The relative frequency of PTB was 8.7% in the study cohort of 5,291 singleton live births. A total of 1137 women had a high level of PM10 exposure (≥60 μg/m3) in the second trimester of pregnancy. The average concentrations of PM10 in the first, second, and third trimesters of pregnancy and throughout pregnancy were 53.8 μg/m3, 54.2 μg/m3, 55.6 μg/m3, and 54.3 μg/m3, respectively. The generalized additive model (GAM) analysis showed that there was a nonlinear correlation between PM10 and PTB in the second trimester of pregnancy (P<0.001). The adjusted odds ratio between PTB and low-concentration PM10 exposure (PM10 < 60 μg/m3) in the second trimester of pregnancy was 1.01 (95% CI 0.95-1.05). However, high PM10 exposure (PM10 ≥ 60 μg/m3) in the second trimester of pregnancy had an increased PTB risk even after adjustment for coexisting risk factors, with an adjusted odds ratio of 1.78 (95% CI 1.69-1.87), and the incidence of PTB increased with an increase in PM10 exposure. Conclusions. Our research discovered that exposure to high levels of PM10 increases the risk of PTB and that the second trimester is the most vulnerable gestational period to ambient air pollution exposure. PM10 concentrations of more than 60 μg/m3 are detrimental to pregnant women in their second trimester. This study has implications for health informatics-oriented healthcare decision support systems.

---

## Body

## 1. Introduction

The primary cause of newborn illness and death is preterm birth (PTB) [1]. PTB is expected to occur at a rate ranging from 5% to 13% in industrialized nations [2]. Additionally, PTB has been shown to increase life-long morbidities, such as cardiovascular disease, diabetes, and some types of cancer [3]. Although several risk factors, such as maternal age, alcohol use, smoking, hypertension, diabetes, and infection during pregnancy, are thought to be related to the risk of preterm delivery [4], these variables may not account for all causes of PTB. Numerous studies have shown that environmental variables, such as air pollution, may play a significant role in the risk of PTB.

Environmental pollutants have an increasingly significant impact on human health, especially ambient particulate matter (PM) pollution. Ambient PM pollution has become one of the most important public health risks.
The term "ambient PM pollution" refers to a diverse array of airborne particles ranging in size from a few hundredths of a micrometer to visible particles as large as 100 μm. Prolonged exposure to ambient PM may result in heart and lung illnesses. The majority of research has been on PM with aerodynamic dimensions less than 10 μm (PM10) or less than 2.5 μm (PM2.5), which may impair placental development, disrupt normal gestational processes, and cause PTB [5].

Some studies have reported on the association between PTB and elevated ambient PM levels [6–9]. However, the threshold PM10 level for PTB risk has not been confirmed. China, as a developing country, has a serious problem of environmental PM pollution with its continuous industrial and social development. In 2021, the average PM10 concentration in China was 54 μg/m3. It is necessary to investigate the relationship between environmental PM pollution and PTB in the country. Clinical studies have found that ultrasonic measurement of the cervical length, measurements of amniotic fluid cytokine and chemokine levels, and the sense of coherence 13-item version (SOC-13) scale score in the second trimester of pregnancy can effectively screen women with an increased risk of PTB, which indicates that the second trimester of pregnancy is a sensitive period closely related to the occurrence of PTB [10–12]. Therefore, our study focused on the second trimester of pregnancy to investigate the correlation between PM10 and PTB.

Given the discrepancies in findings on the association between ambient PM pollution and PTB risk and the scarcity of research on high PM10 levels, it is critical to explain the link between PM10 exposure and PTB risk in China by performing large-scale population studies. We performed a birth cohort study in Kunming, China, adjusting for significant confounders, to examine the connection between PM10 and the risk of PTB and to establish a risk threshold for PM10 concentration exposure.

## 2. Methods

### 2.1. Participant Profiles

A birth cohort study was performed on births occurring between January 1, 2016, and December 31, 2017, utilizing the Kunming Maternal and Child Health Hospital's database. Pregnant women who presented to the hospital for delivery of singleton newborns between 22 and 42 weeks of gestation without any significant congenital defects, who were not suffering from a mental disorder, and who were 18 years or older were eligible for this study. The Medical Ethics Committee of Kunming Maternal and Child Health Hospital in China authorized all research protocols. To prevent fixed cohort bias, the study population included all babies conceived between January 1, 2016, and December 31, 2017. The study period began 22 weeks before the start of the research and ended 42 weeks before the conclusion of the study.

The estimated date of conception and resultant gestational age (in days) were calculated using the first day of the mother's last menstrual cycle. The primary exclusion criteria were multiple gestation pregnancies, the absence of critical information (e.g., parity, delivery date, and last menstrual cycle), gestational age of less than 22 weeks or more than 42 weeks, numerous, repeated maternal visits, and any congenital abnormalities (Figure 1). After eliminating women who fulfilled the exclusion criteria, a total of 11,514 pregnancies that satisfied the inclusion criteria were originally recruited, and 5,291 pregnancies were included in the analyses.

Figure 1 Flow diagram of the study population.
### 2.2. Exposure Assessment

Data on PM10 concentrations were obtained from the China National Environmental Monitoring Centre (CNEMC) (http://www.cnemc.cn/). The home and work addresses of participants were within 50 kilometers of the nearest monitoring sites. The 24-hour average PM10 concentration was measured for the period from January 2016 to December 2017 in Kunming by the CNEMC. The daily exposure to PM10 was adjusted according to the monitoring week to obtain the annual average of PM10 at the monitoring site. The exposure window was defined as the period of the second trimester (14-26 weeks) [13].

### 2.3. Preterm Birth

PTB was defined as less than 37 completed weeks of gestational age [14]. The gestational age was determined using the starting day of the last menstrual period (LMP). During early pregnancy follow-up visits, obstetricians noted women's LMP time (no later than 12 weeks after conception). Each woman was questioned again about the time of LMP at the postpartum follow-up appointment (no later than six weeks following birth), and gestational age was computed using these two records. PTB was classified according to the gestational age as moderate or late PTB (32–37 completed weeks), very PTB (28–32 completed weeks), and extremely PTB (<28 completed weeks) [15].

### 2.4. Covariates

Variables or potential confounding effects that had biological importance for PTB were included as adjustments [16, 17]. We adjusted for maternal age, parity (0, 1, 2, ≥3), preeclampsia (yes/no), history of cesarean section (yes/no), maternal anemia (yes/no), maternal obesity (yes/no), and diabetes (yes/no) from the baseline data of the birth cohort. Conception season (spring: March-May; summer: June-August; fall: September-November; winter: December-February) and maternal smoking during pregnancy (yes/no) were included from the early gestation follow-up data. We also adjusted for the mode of delivery (vaginal delivery/cesarean section) and the baby's sex (male/female) from the postpartum follow-up data. The year of conception was also adjusted to eliminate the long-term effects of pollution levels on birth outcomes.

### 2.5. Statistical Analysis

To characterize the demographic, medical, pregnancy outcome, and PM10 concentration features, descriptive statistics were used. The association between trimester-specific and total pregnancy PM10 exposure and PTB was estimated using a generalized additive model (GAM), adjusting for confounding factors such as maternal age, parity, preeclampsia, season of conception, history of cesarean section, maternal anemia, maternal obesity, and diabetes. We further used a two-stage linear regression model to capture the potential nonlinear effect of PM10 concentration on PTB and explored the turning point of PM10 concentration that had a significant positive correlation with PTB through an "exploratory" analysis. We additionally performed stratified analyses of variables, and interaction terms with PM10 concentration (<60 or ≥60 μg/m3) were used to evaluate whether the effect modifications were statistically significant or not.

Furthermore, we utilized a sensitivity analysis in the main model to check the robustness of the estimated associations. Univariate analysis was performed to evaluate the variables considered possible moderators of PTB, and the statistically significant confounding factors were identified as adjustment factors. Analyses were performed using the statistical packages R and EmpowerStats (R).
Results were reported as the odds ratios (OR) and 95% confidence intervals (CI) for the association between PM10 exposure during pregnancy and risk of PTB. P values < 0.05 were considered statistically significant.
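The study reports using R and EmpowerStats. Purely as an illustration, a Python sketch of the kind of two-segment (piecewise) logistic model described above, with a candidate turning point at 60 μg/m3, could look as follows; the synthetic data, variable names, and reduced adjustment set are illustrative assumptions, not the study's actual analysis.

```python
# Piecewise logistic regression with a knot at 60 ug/m3 (illustrative sketch).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "ptb": np.random.binomial(1, 0.09, 5000),        # 1 = preterm birth (synthetic)
    "pm10": np.random.normal(54, 10, 5000),          # second-trimester mean PM10
    "maternal_age": np.random.normal(30, 5, 5000),
})
knot = 60.0
df["pm10_low"] = np.minimum(df["pm10"], knot)         # slope below the turning point
df["pm10_high"] = np.maximum(df["pm10"] - knot, 0)    # additional slope above it

model = smf.logit("ptb ~ pm10_low + pm10_high + maternal_age", data=df).fit()
odds_ratios = np.exp(model.params)                    # per-unit OR for each segment
print(odds_ratios)
```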
## 3. Results
The study population included 5,291 singleton live births: 462 (8.7%) were preterm and 4,829 were term births. Among the PTBs, 409 were moderate or late PTBs and 53 were very PTBs (VPTBs) or extremely PTBs (ExPTBs). The mean concentrations of PM10 exposure over the first, second, and third trimesters of pregnancy and the entire pregnancy were 53.8 μg/m3, 54.2 μg/m3, 55.6 μg/m3, and 54.3 μg/m3, respectively. Furthermore, of the 5,291 infants included in our study, 1137 had a high level of PM10 (≥60 μg/m3) in the second trimester of pregnancy (Table 1). Univariate analysis was performed to identify factors associated with PTB. Factors with significant associations included parity, year and season of conception, cesarean, and preeclampsia (Table S1).

Table 1 Maternal and fetal characteristics in the birth cohort.

| Characteristic | Data |
| --- | --- |
| Maternal | |
| Age, mean±SD | 29.8±4.6 |
| Gestational age (wk), mean±SD | 38.8±1.7 |
| Parity, no. (%): 1 | 2857 (54.3) |
| Parity, no. (%): 2 | 2250 (42.8) |
| Parity, no. (%): ≥3 | 150 (2.9) |
| Year of conception, no. (%): 2015 | 1686 (31.9) |
| Year of conception, no. (%): 2016 | 3605 (68.1) |
| Season of conception, no. (%): Spring | 1265 (23.9) |
| Season of conception, no. (%): Summer | 1043 (19.7) |
| Season of conception, no. (%): Autumn | 1613 (30.5) |
| Season of conception, no. (%): Winter | 1370 (25.9) |
| Mode of delivery, no. (%): Vaginal | 5063 (95.7) |
| Mode of delivery, no. (%): Cesarean | 228 (4.3) |
| Preeclampsia, no. (%) | 173 (3.3) |
| Diabetes, no. (%) | 642 (12.1) |
| Maternal obesity, no. (%) | 140 (2.6) |
| Maternal anemia, no. (%) | 2147 (40.6) |
| History of cesarean section, no. (%) | 1009 (19.1) |
| Infant | |
| Birth weight (g), mean±SD | 3009.7±366.4 |
| Sex of infant, no. (%): Male | 2495 (47.1) |
| Sex of infant, no. (%): Female | 2296 (43.4) |
| Sex of infant, no. (%): Missing | 500 (9.5) |
| Term birth, no. (%) | 4829 (91.3) |
| PTB, no. (%) | 462 (8.7) |
| Moderate or late preterm (≥224, <259 days), no. (%) | 409 (7.7) |
| VPTB, no. (%) | 52 (1.0) |
| ExPTB, no. (%) | 1 (0) |
| Mean PM10 concentration (μg/m3), mean±SD: First trimester | 53.8±7.9 |
| Mean PM10 concentration (μg/m3), mean±SD: Second trimester | 54.2±9.6 |
| Mean PM10 concentration (μg/m3), mean±SD: Third trimester | 55.6±11.1 |
| Mean PM10 concentration (μg/m3), mean±SD: Entire pregnancy | 54.3±3.7 |
| PM10 exposure during the second trimester, no. (%): <60 μg/m3 | 4154 (78.5) |
| PM10 exposure during the second trimester, no. (%): ≥60 μg/m3 | 1137 (21.5) |

PM10: particulate matter with aerodynamic diameters ≤10 μm; PTB: preterm birth; VPTB: very preterm birth; ExPTB: extremely preterm birth. Dichotomous variables are presented as percent of total for each characteristic.

In order to explore the relationship between PM10 exposure during pregnancy and gestational age or PTB, a GAM was used (Figure 2). With adjustment for season of conception, parity, maternal age, preeclampsia, history of cesarean section, maternal anemia, maternal obesity, and diabetes, a nonlinear association was found between PM10 exposure and gestational age (P<0.001), and a consistent association was found between PM10 exposure and PTB (P<0.001) in the second trimester of pregnancy. When we examined the relationships according to the PM10 exposure level, we found that exposure to a higher PM10 concentration (≥60 μg/m3) during the second trimester of pregnancy was clearly associated with an elevated risk of PTB.

Figure 2 Associations between air pollutant PM10 and risk of PTB or gestational age in the second trimester of pregnancy. (a) A nonlinear association between PM10 exposure during the second trimester and gestational age was found (P<0.001) in a generalized additive model (GAM). (b) A consistent association between PM10 exposure during the second trimester and PTB was found (P<0.001) in a GAM. The solid red line represents the smooth curve fit between variables. The blue bands represent the 95% confidence interval of the fit. All models were adjusted for season of conception, parity, maternal age, preeclampsia, history of cesarean section, maternal anemia, maternal obesity, and diabetes.

Table 2 presents the crude and adjusted OR (with 95% CI) of gestational age or PTB associated with PM10 exposure in the second trimester of pregnancy. In the crude analysis, we observed a consistent relationship between PM10 exposure (<60 μg/m3 or ≥60 μg/m3) and gestational age. Exposure to high PM10 levels (≥60 μg/m3) in the second trimester of pregnancy was significantly associated with an increased risk of PTB, with an OR of 1.71 (95% CI: 1.63, 1.78). However, no significant association between exposure to PM10 levels <60 μg/m3 in the second trimester of pregnancy and PTB was observed, with an OR of 1.02 (95% CI: 1.00, 1.04). In the adjusted models, a similar association was found between gestational age and PM10 exposure. We also found that exposure to high PM10 levels (≥60 μg/m3) in the second trimester of pregnancy was still significantly associated with an increased risk of PTB, with an OR of 1.78 (95% CI: 1.69, 1.87), and we found that the risk of PTB was increased by 78% for each 1 μg/m3 increase in PM10 exposure in the second trimester of pregnancy. There was no significant association between exposure to PM10 levels <60 μg/m3 in the second trimester of pregnancy and PTB, with an OR of 1.01 (95% CI: 0.95, 1.05).

Table 2 Crude and adjusted estimates (β for gestational age in weeks; OR for PTB) associated with PM10 exposure in the second trimester of pregnancy.
| Outcome | Crude OR/β (95% CI) | P value | Model I OR/β (95% CI) | P value | Model II OR/β (95% CI) | P value |
| --- | --- | --- | --- | --- | --- | --- |
| Gestational age (wk): PM10 <60 (μg/m3) | 0.02 (0.01, 0.02) | <0.001 | 0.02 (0.01, 0.03) | <0.001 | 0.02 (0.01, 0.03) | <0.001 |
| Gestational age (wk): PM10 ≥60 (μg/m3) | -0.47 (-0.48, -0.46) | <0.001 | -0.48 (-0.50, -0.47) | <0.001 | -0.48 (-0.49, -0.47) | <0.001 |
| PTB: PM10 <60 (μg/m3) | 1.02 (1.00, 1.04) | 0.129 | 1.00 (0.95, 1.06) | 0.883 | 1.01 (0.95, 1.05) | 0.992 |
| PTB: PM10 ≥60 (μg/m3) | 1.71 (1.63, 1.78) | <0.001 | 1.75 (1.67, 1.83) | <0.001 | 1.78 (1.69, 1.87) | <0.001 |

OR: odds ratio; CI: confidence interval. Model I adjusted for season of conception and maternal age. Model II adjusted for season of conception, parity, maternal age, preeclampsia, history of cesarean section, maternal anemia, maternal obesity, and diabetes.

Furthermore, we conducted a stratified analysis by grouping confounding variables, such as season of conception, parity, maternal age, preeclampsia, history of cesarean section, maternal anemia, maternal obesity, and diabetes. Within these strata, exposure to high PM10 levels (≥60 μg/m3) in the second trimester of pregnancy remained significantly associated with an increased risk of PTB, whereas there was no significant correlation between exposure to PM10 levels <60 μg/m3 in the second trimester of pregnancy and PTB. The results indicated the robustness of the association between exposure to high PM10 levels and PTB (Table 3).

Table 3 Logistic regression of factors associated with PTB in the second trimester of pregnancy.

| Subgroup | PM10 <60 μg/m3, OR (95% CI) | P value | PM10 ≥60 μg/m3, OR (95% CI) | P value |
| --- | --- | --- | --- | --- |
| Maternal age: <25 | 1.02 (0.82, 1.28) | 0.852 | 1.88 (1.61, 2.19) | <0.001 |
| Maternal age: 25-29 | 1.03 (0.94, 1.13) | 0.574 | 1.75 (1.62, 1.88) | <0.001 |
| Maternal age: 30-34 | 0.99 (0.90, 1.07) | 0.733 | 1.77 (1.61, 1.95) | <0.001 |
| Maternal age: ≥35 | 0.97 (0.87, 1.09) | 0.628 | 1.86 (1.61, 2.14) | <0.001 |
| Sex of infant: Male | 0.97 (0.90, 1.05) | 0.485 | 1.87 (1.71, 2.03) | <0.001 |
| Sex of infant: Female | 1.01 (0.93, 1.09) | 0.868 | 1.72 (1.61, 1.85) | <0.001 |
| Sex of infant: Missing | 1.19 (0.85, 1.67) | 0.315 | 1.81 (1.49, 2.19) | <0.001 |
| Parity: 1 | 0.99 (0.92, 1.07) | 0.847 | 1.78 (1.66, 1.91) | <0.001 |
| Parity: 2 | 1.02 (0.94, 1.11) | 0.589 | 1.78 (1.65, 1.93) | <0.001 |
| Parity: ≥3 | 0.85 (0.67, 1.07) | 0.173 | 1.90 (1.39, 2.59) | <0.001 |
| Preeclampsia: No | 0.99 (0.94, 1.05) | 0.850 | 1.79 (1.70, 1.89) | <0.001 |
| Preeclampsia: Yes | 1.48 (0.62, 3.52) | 0.379 | 1.61 (1.38, 1.88) | <0.001 |
| Diabetes: No | 1.01 (0.96, 1.08) | 0.648 | 1.81 (1.71, 1.92) | <0.001 |
| Diabetes: Yes | 0.95 (0.85, 1.06) | 0.359 | 1.63 (1.45, 1.83) | <0.001 |
| Maternal obesity: No | 1.00 (0.95, 1.05) | 0.9464 | 1.76 (1.68, 1.85) | <0.001 |
| Maternal obesity: Yes | — | | — | |
| Maternal anemia: No | 1.00 (0.93, 1.07) | 0.964 | 1.72 (1.62, 1.83) | <0.001 |
| Maternal anemia: Yes | 1.00 (0.92, 1.08) | 0.956 | 1.87 (1.72, 2.04) | <0.001 |
| Mode of delivery: Vaginal | 0.97 (0.40, 2.36) | 0.941 | — | |
| Mode of delivery: Cesarean | 1.03 (0.96, 1.11) | 0.397 | 1.80 (1.69, 1.91) | <0.001 |
| History of cesarean section: No | 0.99 (0.94, 1.05) | 0.805 | 1.81 (1.71, 1.92) | <0.001 |
| History of cesarean section: Yes | 1.03 (0.91, 1.16) | 0.627 | 1.67 (1.51, 1.85) | <0.001 |
| Year of conception: 2015 | 1.07 (0.80, 1.43) | 0.6656 | — | |
| Year of conception: 2016 | 1.04 (0.99, 1.09) | 0.0909 | 8.04 (6.15, 10.52) | <0.001 |
| Season of conception: Spring | 0.93 (0.81, 1.07) | 0.2986 | 2.57 (1.95, 3.40) | <0.001 |
| Season of conception: Summer | 1.06 (0.90, 1.24) | 0.5187 | — | |
| Season of conception: Autumn | 1.01 (0.92, 1.11) | 0.8591 | 1.24 (0.73, 2.09) | 0.4271 |
| Season of conception: Winter | 2.01 (1.28, 3.15) | 0.0023 | 1.59 (1.50, 1.68) | <0.001 |

Odds ratio estimates for covariates are adjusted for the other factors listed in the first column of the table as well as season of conception, parity, maternal age, preeclampsia, history of cesarean section, maternal anemia, maternal obesity, and diabetes. The odds ratio estimates for PM10 exposure <60 μg/m3 or ≥60 μg/m3 are from separate models with adjustment for the same covariates as listed above.
## 4. Discussion
Preterm birth is a significant public health issue.
It is not only the leading cause of newborn death [18], but it also has significant long-term consequences, including asthma, metabolic abnormalities, and disability [19]. Our investigation established a link between PM in the air and unfavorable birth outcomes. Exposure to high PM10 levels (≥60 μg/m3) during the second trimester of pregnancy was strongly associated with an elevated risk of PTB in our study population. On the other hand, exposure to PM10 levels <60 μg/m3 during the second trimester of pregnancy was not related to an increased risk of PTB. As a result, we determined that PM10 levels above 60 μg/m3 considerably increased the risk of PTB. Additionally, we observed similar associations between PM10 exposure and PTB across subgroups defined by possible confounders.

Previous studies have reported various results for the relationship between ambient PM10 and risk of PTB. A prospective birth cohort study in Wuhan, China, reported an approximately 2% increase (OR=1.02; 95% CI: 1.02, 1.03) in PTB per 5 μg/m3 increase in PM10 during pregnancy [20]. A study performed in Australia observed a 15% (OR=1.15; 95% CI: 1.06, 1.25) elevated risk for PTB per 4.5 μg/m3 increase in PM10 during the first trimester [21]. A study performed in Uruguay reported a 10% (OR=1.10; 95% CI: 1.03, 1.19) increase in PTB per 10 μg/m3 increase in PM10 during the third trimester [22]. A Korean study observed a 7% (OR=1.07; 95% CI: 1.01, 1.14) increase in PTB per 16.53 μg/m3 increase in PM10 during the first or third trimester [23]. These studies reported that PM10 exposure during pregnancy was associated with PTB, with ORs ranging from 1.01 to 1.15, which was confirmed in our study. In addition, we found a nonlinear relationship between PM10 exposure and risk of PTB during pregnancy, and an increased risk of PTB, with an adjusted OR of 1.78 (95% CI: 1.69-1.87, P<0.001), was observed for PM10 ≥60 μg/m3.

The threshold for the PM10 level that causes adverse birth outcomes is not clearly defined. Pregnant women in our study lived in areas with a high PM10 pollution level (>50 μg/m3), which is much higher than the limit value stated by WHO (PM10 <40 μg/m3). Using the exposure-response curve, we found that the threshold for the PM10 level was 60 μg/m3. In addition, we identified a significant association between PM10 exposure above the threshold and PTB risk. However, when PM10 exposure was below the threshold, no increase in the risk for PTB was identified. Therefore, it is recommended that the PM10 level should remain below 60 μg/m3, which may be safe with respect to PTB risk.

Although many studies have reported on the relation between the risk of PTB and the window of susceptibility to PM10 exposure during pregnancy, the conclusions are still controversial. Some studies have suggested that exposure to high levels of PM10 during the first and/or third trimester of pregnancy had a greater impact on PTB than exposure over the second trimester [24]. However, other reports have observed that the effect of PM10 exposure during the second trimester of pregnancy on PTB was more significant [25]. In our study, we found a significant correlation between PTB and PM10 exposure in the second trimester of pregnancy.

The biological mechanism of PTB caused by airborne PM is still unclear. Some studies have found that immune cells in maternal and umbilical cord blood of pregnant women exposed to PM10 presented the characteristics of inflammation [26].
Particulate matter may affect the overall health of pregnant women by inducing airway inflammation and oxidative stress. Cytokines and peroxides produced in the course of immune inflammation may also have adverse effects on fetal growth [27]. It can be assumed that systemic oxidative stress and the inflammatory response may be one of the mechanisms underlying the risk of PTB in pregnant women exposed to PM [28]. To further explore these findings, additional research covering more regions and larger population cohorts is needed.

Our study also had several limitations. First, the limited number of monitors (seven monitors in this study) in the population study area might have affected the accuracy of exposure estimation. Although more than 90% of women lived within 50 km of a monitor, exposure misclassification was still possible for residents living far away from monitors. However, this type of misclassification should be distributed equally across the study groups. Second, air pollution exposure was estimated using data from government monitors, as in most epidemiology studies, but these data might be inconsistent with the actual level of personal exposure because of differences in the indoor/outdoor activity environment. However, such exposure assessment errors would generally underestimate the risk of PTB associated with air pollution [29]. Finally, the study population was recruited from only one Chinese city, which weakens the generalizability of the results to other cities or other countries. In addition, because of the limitations in obtaining hospital data, we have currently completed only 2 years of data analysis; we will continue to collect 5 or 10 years of data for analysis in the future.

In conclusion, our study suggests that women exposed to high levels of PM10 (≥60 μg/m3) over the course of pregnancy are at an increased risk of PTB, and the risks across the different exposure time windows are consistent. This study identifies a threshold for PM10 exposure below which exposure may be safe with respect to PTB risk, which can support policy-makers in designing air pollution policies in China.

--- *Source: 1008538-2022-06-22.xml*
# Solving Traveling Salesman Problems Based on Artificial Cooperative Search Algorithm

**Authors:** Guangjun Liu; Xiaoping Xu; Feng Wang; Yangli Tang
**Journal:** Computational Intelligence and Neuroscience (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1008617

---

## Abstract

The traveling salesman problem is a typical NP-hard problem and a typical combinatorial optimization problem. Therefore, an improved artificial cooperative search algorithm is proposed to solve the traveling salesman problem. Starting from the basic artificial cooperative search algorithm, firstly, the sigmoid function is used to construct the scale factor to enhance the global search ability of the algorithm; secondly, in the mutation stage, the DE/rand/1 mutation strategy of the differential evolution algorithm is added to apply a secondary mutation to the current population, so as to improve the calculation accuracy of the algorithm and the diversity of the population. Then, in the later development stage of the algorithm, the quasi-reverse learning strategy is introduced to further improve the quality of the solution. Finally, several instances from the traveling salesman problem library (TSPLIB) are solved using the improved artificial cooperative search algorithm and compared with related algorithms. The results show that the proposed algorithm is better than the comparison algorithms in solving the traveling salesman problem and has good robustness.

---

## Body

## 1. Introduction

The traveling salesman problem (TSP) is not only a basic circuit problem but also a typical NP-hard problem and a typical combinatorial optimization problem. It is one of the most famous problems in the field of mathematics [1, 2]. It was first proposed by Menger in 1959. Since it was proposed, it has attracted great attention from scholars and managers in operations research, logistics science, applied mathematics, computer application, circle theory and network analysis, combinatorial mathematics, and other disciplines and has become a research hotspot in the fields of operations research and combinatorial optimization [3]. In recent years, many scholars have studied the TSP [4–9], and more scholars have extended the traveling salesman problem [10–12]. At present, the traveling salesman problem arises widely in practical settings such as the Internet environment, road traffic, and logistics transportation [13]. Therefore, research on the TSP has important theoretical value and practical significance.

In the past 30 years, intelligent optimization algorithms have been favored by scholars because of their few parameters, simple structure, and easy implementation, such as the genetic algorithm (GA) [14–17], the differential evolution algorithm (DE) [18], invasive weed optimization (IWO) [19], and particle swarm optimization (PSO) [20]. The artificial cooperative search algorithm was first proposed by Pinar Civicioglu in 2013 to solve numerical optimization problems [21]. The algorithm simulates the interaction and cooperation process between two superorganisms with a predator-prey relationship in the same natural habitat. In nature, the amount of food that can be found in an area is very sensitive to climate change. Therefore, many species in nature migrate in search of more productive breeding areas. The algorithm includes the processes of predator selection, prey selection, mutation, and crossover [22–25].
Firstly, the predator population positions are randomly generated and the predator position memory is set; then the prey population positions are randomly generated and reordered, and biological interactions occur during the mutation phase. Finally, the algorithm enters the crossover stage and updates the biological interaction positions through the active individuals in the predator population. Compared with other optimization algorithms, the artificial cooperative search algorithm has the advantages of fewer control parameters and strong robustness, and it adopts different mutation and crossover strategies. At present, the algorithm has been used to solve scheduling problems, design problems, and other practical problems [26, 27]. To some extent, these methods do solve some practical problems, but there are still some defects such as slow convergence speed and low accuracy. Therefore, it is necessary to improve the artificial cooperative search algorithm to enhance its performance [28–30].

Aiming at the disadvantages of the basic artificial cooperative search algorithm (ACS), namely slow convergence, low accuracy, and a tendency to fall into local optima, this paper proposes a reverse artificial cooperative search algorithm based on the sigmoid function (SQACS); that is, after constructing the scale factor with the sigmoid function, the DE/rand/1 mutation strategy of the differential evolution algorithm is added in the mutation stage, and the quasi-reverse learning strategy is introduced in the later development stage of the algorithm. In the numerical simulation, the SQACS is used to solve several instances from TSPLIB. The results show that the presented algorithm is feasible.

The remainder of this paper is organized in the following manner. Section 2 describes the TSP model. In Section 3, the basic and improved ACS algorithms are introduced in detail. Solving the TSP with the SQACS is described in Section 4. Section 5 covers the simulations that have been conducted, while Section 6 presents our conclusion.

## 2. TSP Model

In general, the TSP refers to a traveling salesman who wants to visit n cities: starting from one city, he must pass through every other city exactly once and then return to the departure city, and the total distance traveled is required to be the shortest [13]. In the language of graph theory, it is described as follows. In a weighted complete undirected graph, it is necessary to find a Hamiltonian cycle with the smallest weight. That is, let $G=(V,E)$, where $V=\{1,2,\cdots,n\}$ represents the set of vertices and $E$ represents the set of edges, and each edge $e=(i,j)\in E$ has a non-negative weight $m_e$. It is necessary to find the Hamiltonian cycle $C$ of $G$ such that the total weight $M(C)=\sum_{e\in E(C)} m_e$ of $C$ is smallest. If $d_{ij}$ is used to represent the distance between city $i$ and city $j$ ($d_{ij}\ge 0$, $i,j\in V$) and

$$x_{ij}=\begin{cases}1, & \text{edge } (i,j) \text{ is on the optimal path}\\ 0, & \text{otherwise,}\end{cases}$$

then the mathematical model of the TSP is as follows:

(1) $\min Z=\sum_{i\in V}\sum_{j\in V} d_{ij}x_{ij}$,

(2) s.t. $\sum_{j\neq i} x_{ij}=1,\ i\in V$,

(3) $\sum_{i\neq j} x_{ij}=1,\ j\in V$,

(4) $\sum_{i,j\in S} x_{ij}\le |S|-1,\ S\subset V$,

(5) $x_{ij}\in\{0,1\},\ i,j\in V$,

where $|S|$ represents the number of vertices in the set $S$. The first two constraints, (2) and (3), indicate that there is exactly one inbound and one outbound edge for each vertex, and the third constraint, (4), ensures that no sub-loops (subtours) are generated.

## 3. Artificial Cooperative Search Algorithm

### 3.1. Basic Artificial Cooperative Search Algorithm

The basic artificial cooperative search (ACS) algorithm is a global search algorithm based on two populations and is used to solve numerical optimization problems [21].
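As a concrete companion to the model in (1)–(5), the short Python sketch below scores a candidate tour against the objective in (1); encoding a solution as a visiting order fixes the $x_{ij}$ implicitly, so the constraints are satisfied by construction. The city coordinates and helper names here are made up for illustration and are not the instances or encoding used later in this paper.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10
cities = rng.random((n, 2)) * 100          # toy planar city coordinates
# Distance matrix d_ij of equation (1): Euclidean distance between every pair of cities.
d = np.linalg.norm(cities[:, None, :] - cities[None, :, :], axis=-1)

def tour_length(tour, d):
    """Total weight of the Hamiltonian cycle encoded by the visiting order `tour`.

    Setting x_ij = 1 exactly for consecutive cities of the tour gives each vertex one
    inbound and one outbound edge and a single loop, so (2), (3), and (4) hold.
    """
    return sum(d[tour[k], tour[(k + 1) % len(tour)]] for k in range(len(tour)))

tour = rng.permutation(n)                   # a random candidate solution
print(tour, tour_length(tour, d))
```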
Generally, ACS includes the following steps: population initialization, predator selection, prey selection, mutation, crossover, and update selection.

#### 3.1.1. Population Initialization

ACS contains two superorganisms, α and β, each of which contains a number of artificial sub-superorganisms equal to the population size (N). In each sub-superorganism, the number of individuals is equal to the dimension (D) of the problem. The α and β superorganisms are used to determine the artificial predator and prey sub-superorganisms. The initial values of the sub-superorganisms of α and β are defined by the following:

(6) $\alpha_{i,j}=low_j+R(0,1)\times(up_j-low_j)$,

(7) $\beta_{i,j}=low_j+R(0,1)\times(up_j-low_j)$,

where $i=1,2,\cdots,N$, $N$ is the population size, $j=1,2,\cdots,D$, $D$ is the dimension of the optimization problem, $\alpha_{i,j}$ and $\beta_{i,j}$ are the components of the $i$-th sub-superorganism in the $j$-th dimension, $up_j$ and $low_j$ are the upper and lower limits of the $j$-th dimension of the search interval, respectively, and $R(0,1)$ is a random number uniformly distributed on [0, 1].

#### 3.1.2. Predator Selection

At this stage of ACS, the cooperative relationship between the two artificial superorganisms is defined. In each iteration of ACS, according to an “if-then-else” rule, the artificial predator sub-superorganism is randomly defined from the two artificial superorganisms (α and β), and the artificial predator is selected through (8). At this stage of ACS, in order to help explore the search space of the problem and promote the utilization of high-quality solutions, a memory process is developed. In order to provide this memory process, during coevolution, artificial predators will follow artificial prey for a period of time to explore more fertile feeding areas.

(8) $predator=\begin{cases}\alpha,\ key=1, & r_1<r_2\\ \beta,\ key=2, & \text{otherwise,}\end{cases}$

where $r_1$ and $r_2$ are uniformly distributed random numbers on the [0, 1] interval, $predator$ represents the predator, and $key$ represents the memory that tracks the origin of the predator in each iteration; this memory is used to improve the performance of the algorithm.

#### 3.1.3. Prey Selection

Using the same rule as for selecting artificial predators, the artificial prey is selected from the two artificial superorganisms (α and β). In ACS, the order of the artificial prey is shuffled by a random permutation function, which is used to simulate the behavior of superorganisms living in nature. The artificial prey is selected by (9), and the selected prey is used to define the search direction of ACS in each iteration.

(9) $prey=\begin{cases}\alpha, & r_1<r_2\\ \beta, & \text{otherwise,}\end{cases}$

where $r_1$ and $r_2$ are uniformly distributed random numbers in the [0, 1] interval and $prey$ represents the prey.

#### 3.1.4. Mutation

Using the mutation process defined in equation (10), the biological interaction position between the artificial predator and prey sub-superorganisms is simulated. The algorithm embeds a walk process (a random walk function) in the mutation process to simulate the foraging behavior of natural superorganisms. In order to promote the exploration of the problem search space and the development of more effective solutions, the mutation matrix is generated by using the experience obtained by the artificial predator sub-superorganism in previous iterations.

(10) $X_i^{iter+1}=predator_i^{iter}+R\times(prey_i^{iter}-predator_i^{iter})$, with $R=\begin{cases}4\times a\times(b-c), & r_1<r_2\\ \Gamma(4\times rand,\,1), & \text{otherwise,}\end{cases}$

where $R$ is the scale factor that controls the speed of biological interaction,
$iter$ is the current number of iterations, $i\in\{1,2,\cdots,N\}$, $a$, $b$, $c$, $rand$, $r_1$, and $r_2$ are random numbers uniformly distributed on the [0, 1] interval, and $\Gamma$ denotes the gamma distribution with shape parameter $4\times rand$ and scale parameter 1.

#### 3.1.5. Crossover

As defined in equation (11), the active individuals in the artificial predator sub-superorganism are determined by a binary integer matrix M. The initial value of M is an N×D matrix whose elements are all 1. In ACS, the individuals that can find new biological interaction sites and can participate in migration at any time are called active individuals. The degree of cooperation between individuals in the migration process is determined by the control parameter P, which limits the number of active individuals produced by each artificial sub-superorganism. The parameter thus controls the number of individuals involved in the crossover process; that is, it determines the probability of biological interaction in the crossover process. The crossover operator of ACS is given by

(11) $X_{i,j}^{iter+1}=\begin{cases}predator_{i,j}^{iter}, & M_{i,j}>0\\ X_{i,j}^{iter}, & \text{otherwise,}\end{cases}$ with $Condition_{i,j}=\begin{cases}1, & r_1>P\times r_2\\ 0, & \text{otherwise}\end{cases}$ and $M_{i,j}=\begin{cases}M_{i,j}, & r_3>P\times r_4\\ M_{i,j}\times Condition_{i,j}, & \text{otherwise,}\end{cases}$

where $i\in\{1,2,\cdots,N\}$ and $j\in\{1,2,\cdots,D\}$. $predator_{i,j}$ represents the component of the $i$-th predator in the $j$-th dimension, and $M_{i,j}$ represents the component of the $i$-th active individual of the predator in the $j$-th dimension. $r_1$, $r_2$, $r_3$, and $r_4$ represent uniformly distributed random numbers in the [0, 1] interval, and $P$ represents the probability of biological interaction. Different experiments with different P values in the [0.05, 0.15] interval show that ACS is not sensitive to the initial value of this control parameter.

#### 3.1.6. Update Selection

The memory $key$ set in the predator selection stage is used to update the α and β superorganisms, so as to better select predators and prey at the beginning of the next iteration and to strengthen the global search performance. The specific operation is shown in (12) and (13).

(12) $\alpha_i^{iter+1}=\begin{cases}predator_i^{iter}, & key=1\\ \alpha_i^{iter}, & \text{otherwise,}\end{cases}$

(13) $\beta_i^{iter+1}=\begin{cases}predator_i^{iter}, & key=2\\ \beta_i^{iter}, & \text{otherwise,}\end{cases}$

where $i\in\{1,2,\cdots,N\}$, $predator_i$ represents the $i$-th predator, and $iter$ represents the current number of iterations.

### 3.2. Improved Artificial Cooperative Search Algorithm

Because ACS is not yet mature and perfect in theory and practice, and aiming at its shortcomings such as slow convergence speed, low accuracy, and a tendency to fall into local optima, a reverse artificial cooperative search algorithm based on the sigmoid function (SQACS) is proposed. The specific improvement scheme is as follows.

#### 3.2.1. Constructing the Scale Factor R with the Sigmoid Function

In ACS, the scale factor R controlling the speed of biological interaction is randomly generated, which often makes the algorithm fall into local optima and is not conducive to the global search of the algorithm. In order to solve this problem, the following sigmoid function is introduced:

(14) $y=\dfrac{1}{1+e^{-x}}$.

The sigmoid function is continuous, differentiable, bounded, and strictly monotonic, and it is a kind of activation function [31]. In ACS, according to the mechanism of the biological interaction position, the search needs to approach the optimal position quickly at the beginning of the algorithm; when it nears the optimal position, the search speed of the algorithm should be reduced.
### 3.2. Improved Artificial Cooperative Search Algorithm

Because ACS is not yet fully mature in theory and practice and suffers from shortcomings such as slow convergence, low accuracy, and a tendency to fall into local optima, a reverse artificial cooperative search algorithm based on the sigmoid function (SQACS) is proposed. The specific improvements are as follows.

#### 3.2.1. Constructing Scale Factor R with Sigmoid Function

In ACS, the scale factor $R$ controlling the speed of biological interaction is generated randomly, which often makes the algorithm fall into local optima and is not conducive to global search. To address this problem, the following sigmoid function is introduced:

$$y = \frac{1}{1 + e^{-x}}. \tag{14}$$

The sigmoid function is continuous, differentiable, bounded, and strictly monotonic, and it is a kind of activation function [31]. From the mechanism of the biological interaction position in ACS, it is known that at the beginning of the algorithm the search needs to approach the optimal position quickly, and once it is near the optimal position, the search speed needs to be reduced. Using the sigmoid function, equation (15) transforms the scale factor $R$, which originally controlled the speed of biological interaction randomly, into a quantity that changes with the number of iterations and is mapped into [0, 1], so that $R$ gradually decreases within [0, 1] and the optimal solution can be located more accurately. The scale factor $R$ constructed with the sigmoid function is given in equation (15), and its curve is shown in Figure 1:

$$R_{iter} = \frac{1}{1 + e^{\,2\ln 100 \,\times\, iter/G_{max} \;-\; \ln 100 \,\times\, \left(G_{max} - iter + 1\right)/G_{max}}}, \tag{15}$$

where $G_{max}$ is the maximum number of iterations, $iter$ is the current number of iterations, and $R_{iter}$ is the scale factor at the $iter$-th iteration.

Figure 1: The curve of the scale factor ($R$).

#### 3.2.2. Quadratic Mutation Strategy

The DE/rand/1 mutation strategy of DE is applied as a second mutation to the population generated in the mutation stage of ACS [18]. Research has found that Gaussian, random, linear, or chaotic changes of the parameters in DE can effectively prevent premature convergence. Therefore, after the DE/rand/1 mutation strategy is added to ACS, a new mutated population is generated before the crossover step, so that the algorithm can avoid falling into local optima and improve its accuracy. The quadratic mutation formula is

$$X_{i,j}^{iter+1} = X_{r_1,j}^{iter} + sf \times \left(X_{r_2,j}^{iter} - X_{r_3,j}^{iter}\right), \tag{16}$$

where $i \in \{1, 2, \cdots, N\}$, $j \in \{1, 2, \cdots, D\}$, $iter$ is the current iteration number, $r_1, r_2, r_3$ are random integers in $\{1, 2, \cdots, N\}$ with $r_1 \neq r_2 \neq r_3 \neq i$, and the mutation factor $sf$ is a control parameter that scales the difference of two of the three vectors and adds the scaled difference to the third. To avoid search stagnation, $sf$ usually takes a value in the range [0.1, 1].
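The two modifications above can be illustrated numerically. The sketch below is not the authors' code: `scale_factor_sigmoid` follows equation (15) as reconstructed here (the grouping of terms in the exponent is our reading of the source), and `de_rand_1` applies the DE/rand/1 secondary mutation of equation (16) with an assumed `sf = 0.5`.

```python
import numpy as np

rng = np.random.default_rng(0)

def scale_factor_sigmoid(it, g_max):
    """Eq. (15), as reconstructed: an iteration-dependent scale factor in (0, 1)
    that starts near 1 and decays towards 0 as `it` approaches `g_max`."""
    z = 2 * np.log(100) * it / g_max - np.log(100) * (g_max - it + 1) / g_max
    return 1.0 / (1.0 + np.exp(z))

def de_rand_1(X, sf=0.5):
    """Eq. (16): DE/rand/1 secondary mutation applied to every individual of X."""
    N = len(X)
    V = np.empty_like(X)
    for i in range(N):
        # three mutually distinct indices, all different from i
        r1, r2, r3 = rng.choice([k for k in range(N) if k != i], size=3, replace=False)
        V[i] = X[r1] + sf * (X[r2] - X[r3])
    return V
```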
#### 3.2.3. Quasi-Reverse Learning Strategy

In the later (exploitation) stage of the algorithm, a better biological interaction position should be found between the populations. Because the position changes randomly, the search often fails to examine a small local region around the optimum closely. To overcome this shortcoming, a quasi-reverse learning strategy is introduced to generate quasi-reverse populations and increase population diversity, so that organisms can search the interaction positions in neighboring regions in detail and avoid skipping over the optimal solution; a greedy selection between the current population and the quasi-reverse population then locates the optimal solution more effectively [32–35]. The detailed process is as follows:

(i) Assume that $X = (x_1, x_2, \cdots, x_n)$ is an $n$-dimensional solution with $x_1, x_2, \cdots, x_n \in \mathbb{R}$ and $x_i \in [l_i, u_i]$, $i \in \{1, 2, \cdots, n\}$. Then the reverse solution $OX = (\breve{x}_1, \breve{x}_2, \cdots, \breve{x}_n)$ is defined as

$$\breve{x}_i = l_i + u_i - x_i. \tag{17}$$

(ii) On the basis of the reverse solution, the quasi-reverse solution $QOX = (\breve{x}_1^{q}, \breve{x}_2^{q}, \cdots, \breve{x}_n^{q})$ is defined as

$$\breve{x}_i^{q} = rand\!\left(\frac{l_i + u_i}{2},\, \breve{x}_i\right). \tag{18}$$

The choice between the quasi-reverse solution and the current solution is then

$$X = \begin{cases} QOX, & f(QOX) < f(X) \\ X, & \text{otherwise.} \end{cases} \tag{19}$$

To sum up, the flowchart of the proposed SQACS is shown in Figure 2.

Figure 2: Flowchart of SQACS.
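A minimal sketch of equations (17)–(19) follows. Interpreting `rand((l+u)/2, x̆)` as a uniform draw between the interval midpoint and the reverse solution, and treating `f` as an objective to be minimized, are assumptions made for this illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def quasi_reverse_select(X, low, up, f):
    """Eqs. (17)-(19): greedy choice between each solution and its quasi-reverse counterpart."""
    OX = low + up - X                               # Eq. (17): reverse population
    mid = (low + up) / 2.0
    lo, hi = np.minimum(mid, OX), np.maximum(mid, OX)
    QOX = lo + rng.random(X.shape) * (hi - lo)      # Eq. (18): uniform between midpoint and OX
    fX = np.apply_along_axis(f, 1, X)
    fQ = np.apply_along_axis(f, 1, QOX)
    return np.where((fQ < fX)[:, None], QOX, X)     # Eq. (19): keep the better of the two
```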
## 4. Solving TSP by SQACS

Taking the shortest path (i.e., equation (1)) as the objective function, SQACS is used to solve the TSP. To map the biological organisms onto the TSP solution space, each biological interaction position $X_i = (x_{i1}, x_{i2}, \cdots, x_{in})$ is defined in this paper as the sequence of city numbers visited. For example, the interaction position $X_i = (1, 3, 2, 4, 5, 6)$ means that the traveler first visits city 1, then visits cities 3, 2, 4, 5, and 6 in turn, and finally returns to the departure city (city 1); the corresponding objective value is the length of this route. For the TSP, the shorter an individual's route is, the greater its fitness should be, so the fitness function $f(x_i) = 1/Z(x_i)$ is selected, where $i = 1, 2, \cdots, n$, $n$ is the number of cities to visit, and the lower and upper limits of the variables are 1 and $n$, respectively. A sketch of this encoding and fitness computation is given after the step list below. The specific steps of SQACS for solving the TSP are as follows:

Step 1. Population initialization: encode the TSP route with city numbers and randomly generate visiting orders of the $n$ cities.

Step 2. Calculate the fitness value of each individual in the population.

Step 3. Randomly select the predator and prey populations, and then randomly rearrange the positions of the prey population.

Step 4. Calculate the scale factor $R$ of the biological interaction speed.

Step 5. Determine the active individuals $M$ of the predator population by the binary integer mapping.

Step 6. Mutation: calculate the biological interaction positions $X$, i.e., the travelers' visiting routes.

Step 7. Crossover: if the active-individual mapping is greater than 0, update the route to the predator position; otherwise, keep the original position unchanged.

Step 8. Reselect the predator and prey populations.

Step 9. Judge whether the termination condition is met. If so, stop the update and output the optimal position and optimal function value, i.e., the shortest route and shortest path length of the TSP. Otherwise, return to Step 2.
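As a sketch of the encoding and fitness described above, the following Python fragment computes the closed-tour length of equation (1) and the fitness $f(x) = 1/Z(x)$ for a 0-indexed city permutation. Decoding a real-valued position into a permutation by sorting (random-key style) is an assumption added here for illustration; the paper only states that positions are sequences of city numbers.

```python
import numpy as np

rng = np.random.default_rng(0)

def tour_length(tour, dist):
    """Eq. (1): total length of the closed route, including the return to the start city."""
    idx = np.asarray(tour)
    return dist[idx, np.roll(idx, -1)].sum()

def fitness(tour, dist):
    """Section 4: shorter routes get larger fitness, f(x) = 1 / Z(x)."""
    return 1.0 / tour_length(tour, dist)

def decode(position):
    """Map a real-valued interaction position to a visiting order (random-key decoding,
    an illustrative choice rather than the paper's stated encoding)."""
    return np.argsort(position)

# Toy usage with 5 random cities (illustrative only).
cities = rng.random((5, 2))
dist = np.linalg.norm(cities[:, None, :] - cities[None, :, :], axis=-1)
tour = decode(rng.random(5))
print(tour, tour_length(tour, dist), fitness(tour, dist))
```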
## 5. Numerical Simulation

To verify the performance of the proposed SQACS, it is compared with GA [14], DE [18], IWO [19], PSO [20], ACS [21], IACS1 [27], and IACS2 [36] on four TSP instances of different sizes from the TSPLIB standard database: Oliver30, Att48, Eil51, and Eil76. In the simulation, to compare the results under conditions that are as equal as possible, the maximum number of evaluations of each algorithm is 2000 and the initial population size is 20. The other parameter settings of the algorithms involved follow the corresponding references.

After 30 independent runs of SQACS and the above algorithms, the optimal value, average value, and average computation time of the results are shown in Table 1. Comparing the optimal and average values of each algorithm, SQACS obtains the best results for both the minimum and the average, and the gap between its optimal and average values is among the smallest, indicating that SQACS has the strongest stability. In terms of computation time, SQACS performs better than or comparably to the other algorithms on the four datasets. It can be seen that SQACS has good feasibility and robustness for solving the TSP.

Table 1: Comparison of experimental results of eight algorithms.

| Example | Evaluation criterion | GA | DE | IWO | PSO | ACS | IACS1 | IACS2 | SQACS |
|---|---|---|---|---|---|---|---|---|---|
| Oliver30 | Optimal value | 427.37 | 423.11 | 431.34 | 428.01 | 453.27 | 452.12 | 434.67 | 420.00 |
| Oliver30 | Average value | 432.33 | 435.94 | 449.98 | 437.83 | 475.23 | 468.21 | 440.62 | 421.31 |
| Oliver30 | Average time/s | 28.36 | 28.28 | 29.67 | 27.26 | 27.41 | 27.85 | 28.26 | 26.17 |
| Att48 | Optimal value | 33942.47 | 33793.06 | 33596.81 | 33642.34 | 34663.41 | 34527.23 | 33742.84 | 33516.02 |
| Att48 | Average value | 40374.28 | 34183.58 | 33637.86 | 33785.12 | 35721.39 | 35123.31 | 34081.82 | 33583.17 |
| Att48 | Average time/s | 49.36 | 49.42 | 50.21 | 49.23 | 50.26 | 50.85 | 51.97 | 49.02 |
| Eil51 | Optimal value | 473.56 | 436.81 | 471.36 | 432.99 | 484.05 | 478.67 | 442.43 | 426.00 |
| Eil51 | Average value | 481.52 | 451.08 | 482.90 | 448.60 | 496.55 | 480.89 | 449.76 | 427.71 |
| Eil51 | Average time/s | 55.54 | 55.35 | 58.36 | 54.65 | 57.25 | 57.78 | 58.02 | 55.27 |
| Eil76 | Optimal value | 568.47 | 547.13 | 562.20 | 541.91 | 572.93 | 568.37 | 543.00 | 538.00 |
| Eil76 | Average value | 584.01 | 583.11 | 578.37 | 550.68 | 586.33 | 581.81 | 552.45 | 543.79 |
| Eil76 | Average time/s | 85.49 | 85.28 | 86.23 | 85.54 | 86.23 | 86.54 | 87.15 | 85.16 |

Figure 3 shows the optimal path diagrams obtained by SQACS for the four instances. As can be seen from Figure 3, except for the path intersection in the Att48 dataset, the other three routes form completely closed loops whose paths do not cross, so the obtained routes are feasible.
The optimal routes obtained are as follows:

- Oliver30: 6⟶5⟶30⟶23⟶22⟶16⟶17⟶12⟶13⟶4⟶3⟶9⟶11⟶7⟶8⟶25⟶26⟶29⟶28⟶27⟶24⟶15⟶14⟶10⟶21⟶20⟶19⟶18⟶2⟶1⟶6
- Att48: 2⟶29⟶34⟶41⟶16⟶22⟶3⟶40⟶9⟶1⟶8⟶38⟶31⟶44⟶18⟶7⟶28⟶36⟶30⟶6⟶37⟶19⟶27⟶17⟶43⟶20⟶33⟶46⟶15⟶12⟶11⟶23⟶14⟶25⟶13⟶47⟶24⟶39⟶32⟶48⟶5⟶42⟶10⟶24⟶45⟶35⟶26⟶4⟶2
- Eil51: 40⟶19⟶42⟶44⟶37⟶15⟶45⟶33⟶39⟶10⟶30⟶34⟶50⟶9⟶49⟶38⟶11⟶5⟶46⟶51⟶27⟶32⟶1⟶22⟶2⟶16⟶21⟶29⟶20⟶35⟶36⟶3⟶28⟶31⟶26⟶8⟶48⟶6⟶23⟶7⟶43⟶24⟶14⟶25⟶18⟶47⟶12⟶17⟶4⟶13⟶41⟶40
- Eil76: 9⟶39⟶72⟶58⟶10⟶38⟶65⟶56⟶11⟶53⟶14⟶59⟶19⟶54⟶13⟶27⟶52⟶34⟶46⟶8⟶35⟶7⟶26⟶67⟶76⟶75⟶4⟶45⟶29⟶5⟶15⟶57⟶37⟶20⟶70⟶60⟶74⟶36⟶69⟶21⟶47⟶48⟶30⟶2⟶68⟶6⟶51⟶17⟶12⟶40⟶32⟶44⟶3⟶16⟶63⟶33⟶73⟶62⟶28⟶74⟶61⟶22⟶1⟶43⟶41⟶42⟶64⟶56⟶23⟶49⟶24⟶18⟶50⟶25⟶55⟶31⟶9

Figure 3: The optimal path diagrams of the four examples obtained by SQACS. (a) Oliver30. (b) Att48. (c) Eil51. (d) Eil76.

To further verify the effectiveness of SQACS, the algorithms in [5–9] are selected for comparison on the Oliver30, Att48, Eil51, and Eil76 instances. The comparison results are shown in Table 2. From the data in Table 2, it can be found that, apart from Att48, the SQACS results on the other three datasets are at least as good as those reported in the literature and reach the optimal values recorded in the TSPLIB database. This verifies the effectiveness of SQACS in solving the TSP.

Table 2: Comparison of SQACS results with the literature.

| TSP test set | TSPLIB optimal solution | SQACS optimal solution | Reference [5] | Reference [6] | Reference [7] | Reference [8] | Reference [9] |
|---|---|---|---|---|---|---|---|
| Oliver30 | 420.00 | 420.00 | — | 420.00 | 420.00 | 423.74 | 423.74 |
| Att48 | 33503.00 | 33516.00 | 36441.00 | — | 33522.00 | — | — |
| Eil51 | 426.00 | 426.00 | 479.00 | 428.87 | 428.00 | 814.53 | 426.00 |
| Eil76 | 538.00 | 538.00 | — | 544.37 | 547.00 | — | 538.00 |

## 6. Conclusion

To better solve the traveling salesman problem, this paper proposes a reverse artificial cooperative search algorithm based on the sigmoid function. The scale factor is constructed with the sigmoid function to improve the global search ability of the algorithm; in the mutation stage, the DE/rand/1 strategy of the differential evolution algorithm is added to apply a secondary mutation to the current population, so that the algorithm can avoid falling into local optima and improve its accuracy; and in the later stage of the search, the quasi-reverse learning strategy is introduced to locate the optimal solution more effectively. Finally, the proposed algorithm is applied to the traveling salesman problem, and the results show that it is effective for solving this problem.

---
*Source: 1008617-2022-04-12.xml*
--- ## Abstract The traveling salesman problem is a typical NP hard problem and a typical combinatorial optimization problem. Therefore, an improved artificial cooperative search algorithm is proposed to solve the traveling salesman problem. For the basic artificial collaborative search algorithm, firstly, the sigmoid function is used to construct the scale factor to enhance the global search ability of the algorithm; secondly, in the mutation stage, the DE/rand/1 mutation strategy of differential evolution algorithm is added to carry out secondary mutation to the current population, so as to improve the calculation accuracy of the algorithm and the diversity of the population. Then, in the later stage of the algorithm development, the quasi-reverse learning strategy is introduced to further improve the quality of the solution. Finally, several examples of traveling salesman problem library (TSPLIB) are solved using the improved artificial cooperative search algorithm and compared with the related algorithms. The results show that the proposed algorithm is better than the comparison algorithm in solving the travel salesman problem and has good robustness. --- ## Body ## 1. Introduction Traveling salesman problem (TSP) is not only a basic circuit problem but also a typical NP hard problem and a typical combinatorial optimization problem. It is one of the most famous problems in the field of mathematics [1, 2]. It was first proposed by Menger in 1959. After it was proposed, it has attracted great attention of scholars and managers in operations research, logistics science, applied mathematics, computer application, circle theory and network analysis, combinatorial mathematics, and other disciplines and has become a research hotspot in the field of operations research and combinatorial optimization [3]. In recent years, many scholars have studied the TSP [4–9], and more scholars have expanded the traveling salesman problem [10–12]. At present, the traveling salesman problem is widely used in various practical problems such as Internet environment, road traffic, and logistics transportation [13]. Therefore, the research on TSP has important theoretical value and practical significance.In the past 30 years, intelligent optimization algorithms have been favored by scholars because of their few parameters, simple structure, and easy implementation such as genetic algorithm (GA) [14–17], differential evolution algorithm (DE) [18], invasive weed optimization (IWO) [19], and particle swarm optimization (PSO) [20]. Artificial cooperative search algorithm was firstly proposed by Pinar Civicioglu in 2013 to solve numerical optimization problems [21]. The algorithm was proposed to simulate the interaction and cooperation process between two superorganisms with predator-prey relationship in the same natural habitat. In nature, the amount of food that can be found in an area is very sensitive to climate change. Therefore, many species in nature will migrate to find and migrate to higher yield breeding areas. It includes the processes of predator selection, prey selection, mutation, and crossover [22–25]. Firstly, the predator population location is randomly generated, the predator location memory is set, and then the prey population location is randomly generated to reorder the prey location, where biological interactions occur during the variation phase. Finally, enter the crossover stage and update the biological interaction position through the active individuals in the predator population. 
Compared with other optimization algorithms, artificial cooperative search algorithm has the advantages of less control parameters and strong robustness and adopts different mutation and crossover strategies. At present, the algorithm has been used to solve scheduling problems, design problems, and other practical problems [26, 27]. To some extent, these methods do solve some practical problems, but there are still some defects such as slow convergence speed and low accuracy. Therefore, it is necessary to improve artificial cooperative search algorithm to improve the performance of the algorithm [28–30].Aiming at the disadvantages of slow convergence speed, low accuracy, and easy to fall into local optimization of the basic artificial cooperative search algorithm (ACS), this paper proposes a reverse artificial cooperative search algorithm based on sigmoid function (SQACS), that is, after constructing the scale factor by the sigmoid function, the DE/rand/1 mutation strategy of differential evolution algorithm is added in the mutation stage, and the quasi-reverse learning strategy is introduced in the later development stage of the algorithm. In the numerical simulation, the SQACS is used to solve several examples in TSPLIB. The results show that the presented algorithm is feasible.The remainder of this paper is organized in the following manner. Section2 describes the TSP model. In Section 3, the basic and improved ACS algorithms are introduced in detail. Solving TSP by the SQACS is described in Section 4. Section 5 covers simulations that have been conducted, while Section 6 presents our conclusion. ## 2. TSP Model In general, TSP specifically refers to a traveling salesman who wants to visitn cities, starting from a city, must pass through all the cities only once, and then return to the departure city, requiring the traveling agent to travel the shortest total distance [13]. It is described in graph theory language as follows. In a weighted completely undirected graph, it is necessary to find a Hamilton cycle with the smallest weight. That is, let G=V,E, V=1,2,⋯,n represent the set of vertices and E represent the set of edges, and each edge e=i,j∈E has a non-negative weight me. Now it is necessary to find the Hamilton cycle C of G so that the total weight MC=∑ECme of C is the smallest. If dij is used to represent the distance between city i and city j, dij≥0,i,j∈v, xij=1,edgei,jis on the optimal path0,otherwise, then the mathematical model of TSP is as follows:(1)minZ=∑∑dijxij,(2)s.t.∑j≠1xij=1i∈V,(3)∑i≠jxij=1j∈V,(4)∑i,j∈Sxij=S−1S⊆V,(5)xij∈0,1i,j∈V,where S represents the number of vertices in the set S. The first two constraints shown in (2) and (3) indicate that there is only one inbound and one outbound edge for each vertex, and the third constraint (4) indicates that no sub-loops will be generated. ## 3. Artificial Cooperative Search Algorithm ### 3.1. Basic Artificial Cooperative Search Algorithm Basic artificial cooperative search (ACS) algorithm is a global search algorithm based on two populations, which is used to solve numerical optimization problems [21]. Generally, ACS includes the following population initialization, predator selection, prey selection, mutation, crossover, update selection, and so on. #### 3.1.1. Population Initialization ACS contains two superorganisms:α and β, in which α and β contain artificial sub-superorganisms equal to the population size (N). In the relevant sub-superorganisms, the number of individuals is equal to the dimension (D) of the problem. 
α and β ultrasound organisms are used to detect artificial predators and prey sub-superorganisms. The initial values of the sub-superorganism of α and β are defined by the following:(6)αi,j=lowj+R0,1×upj−lowj,(7)βi,j=lowj+R0,1×upj−lowj,where i=1,2,⋯,N, N is the population size, j=1,2,⋯,D, D is the dimension of the optimization problem, αi,j and βi,j are the components of the i-th sub-superorganism in the j-th dimension, upj and lowj are the upper and lower limits of the j-th dimension search interval, respectively, and R0,1 is a random number uniformly distributed on [0, 1]. #### 3.1.2. Predator Selection At this stage of ACS, the cooperative relationship between two artificial superorganisms is defined. In each iteration of ACS, according to the “if then else” rule, the artificial predator sub-superorganism is randomly defined from two artificial superorganisms (α and β), and the artificial predator is selected through (8). At this stage of ACS, in order to help explore the search space of the problem and promote the utilization of high-quality solutions, a memory process is developed. In order to provide this memory process, during coevolution, artificial predators will follow artificial prey for a period of time to explore more fertile eating areas.(8)predator=α,key=1.r1<r2β,key=2.otherwise,where r1 and r2 are uniformly distributed random numbers on the [0, 1] interval, predator represents the predator, key represents the memory that tracks the origin of the predator in each iteration, and its memory is used to improve the performance of the algorithm. #### 3.1.3. Prey Selection Using the same rules as selecting artificial predators, artificial prey is selected through two artificial superorganisms (α and β). In ACS, the hierarchical sequence of artificial prey is replaced by random transformation function, which is used to simulate the behavior of superorganisms living in nature. The artificial prey is selected by (9), and the selected prey is used to define the search direction of ACS in each iteration.(9)prey=α,r1<r2β,otherwise,where r1 and r2 are uniformly distributed random numbers in the [0, 1] interval and prey represents prey. #### 3.1.4. Mutation Using the mutation process defined in equation (10), the biological interaction position between artificial predator and prey sub-superorganism is simulated. The algorithm embeds a walk process (random walk function) in the mutation process to simulate the foraging behavior of natural superorganisms. In order to promote the exploration of the problem search space and the development of more effective solutions, the variation matrix is generated by using some experience obtained by the artificial predator sub-superorganism in the previous iteration.(10)Xiiter+1=predatoriiter+R×preyiiter−predatoriiter,R=4×a×b−cr1<r2Γ4×rand,1,otherwise,where in order to control the scale factor of biological interaction speed, it is calculated from (13). iter is the current number of iterations, i∈1,2,⋯,N, a, b, c, rand, r1 and r2 are random numbers uniformly distributed on the [0,1] interval, and Γ is the gamma distribution with shape parameter 4×rand and scale parameter 1. #### 3.1.5. Crossover As defined in equation (11), the active individuals in the artificial predator sub-superorganism are determined by a binary integer matrix M. The initial value of M is a matrix whose elements in row N and column D are all 1. 
In ACS, those individuals who can only find new biological interaction sites and can participate in migration at any time are called active individuals. The degree of cooperation between individuals in the migration process is determined by the control parameter P, which limits the number of active individuals produced by each artificial sub-superorganism. Then, the parameter controls the number of individuals involved in the crossover process, that is, it determines the probability of biological interaction in the crossover process. The crossover operator of ACS is given by(11)Xi,jiter+1=predatori,jiter,Mi,j>0Xi,jiter,otherwise,Conditioni,j=1,r1>P×r20,otherwise,Mi,j=Mi,j,r3>P×r4Mi,j×Conditioni,j,otherwise,where i∈1,2,⋯,N, j∈1,2,⋯,D. predatori,j represents the component of the i-th predator in the j-th dimension, and Mi,j represents the component of the i-th active individual of the predator in the j-th dimension. r1, r2, r3, and r4 represent uniformly distributed random numbers in the [0, 1] interval, and P represents the probability of biological interaction. Different experiments with different P values in the [0.05, 0.15] interval show that ACS is not sensitive to the initial value of its control parameters. #### 3.1.6. Update Selection The memorykey set in the predator selection stage updates the α and β superorganisms, so as to better select predators and prey at the beginning of the next iteration, so as to strengthen the global search performance. The specific operation is shown in (12) and (13).(12)αiiter+1=predatoriiter,key=1αiiter,otherwise,(13)βiiter+1=predatoriiter,key=2βiiter,otherwise,where i∈1,2,⋯,N, predatori represents the i-th predator, and iter represents the current number of iterations. ### 3.2. Improved Artificial Cooperative Search Algorithm Because ACS is not mature and perfect in theory and practice, aiming at its shortcomings such as slow convergence speed, low accuracy, and easy to fall into local optimization, a reverse artificial cooperative search algorithm based on sigmoid function (SQACS) is proposed. The specific improvement scheme is as follows. #### 3.2.1. Constructing Scale Factor R with Sigmoid Function In ACS, the scale factorR controlling the speed of biological interaction is randomly generated, which often makes the algorithm fall into local optimization, which is not conducive to the global search of the algorithm. In order to solve this problem, the following sigmoid function is introduced:(14)y=11+e−x.The sigmoid function is continuous, derivable, bounded, and strictly monotonic, and it is a kind of excitation function [31]. In ACS, according to the mechanism of the biological interaction position, it is known that at the beginning of the algorithm, it needs to quickly approach the optimal position. When it reaches the optimal position, it is necessary to reduce the search speed of the algorithm. Through the sigmoid function and constructing (15), the scale factor R that randomly controls the speed of biological interaction is transformed into a quantity that changes with the number of iterations and is mapped to the range of [0, 1], so that the scale factor R in [0, 1] gradually decreases in the range, so as to find the optimal solution more accurately. 
In this way, the scale factor R constructed using the sigmoid function is as in equation (15), and its curve is shown in Figure 1.(15)Riter=11+e2ln100×iter/Gmax−ln100×Gmax−iter+1Gmax,where Gmax is the maximum number of iterations, iter is the current number of iterations, and Riter is the scale factor at the iter-th iteration.Figure 1 The curve of scale factor (R). #### 3.2.2. Quadratic Mutation Strategy The DE/rand/1 mutation strategy of the DE is added to the second mutation of the population generated in the mutation stage of the ACS [18]. Research has found that the Gaussian, random, linear, or chaotic changes of the parameters in the DE can effectively prevent premature convergence. Therefore, after the DE/rand/1 mutation strategy of the DE is added to the ACS, a new mutation population is generated, and the next crossover behavior is performed. Thereby, the algorithm can avoid falling into the local optimum and improve the calculation accuracy. The quadratic mutation formula is(16)Xi,jiter+1=Xr1,jiter+sf×Xr2,jiter−Xr3,jiter,where i∈1,2,⋯,N, j∈1,2,⋯,D, iter is the current iteration number, random integers r1,r2,r3∈N, and r1≠r2≠r3≠i. The variation factor sf is a control parameter that scales any two of the three vectors and adds the scaled difference to the third vector. In order to avoid search stagnation, the variation factor sf usually takes a value in the range of [0.1, 1]. #### 3.2.3. Quasi-Reverse Learning Strategy In the later development stage of the algorithm, a better biological interaction position should be found between the populations. Because the position is changing and this change is random, it often prevents it from searching for the optimal solution in a small local area. In order to overcome the above shortcomings, a pseudo-reverse learning strategy is introduced to generate pseudo-reverse populations to increase the diversity of the populations, so that organisms can conduct detailed search for interaction positions in neighboring communities to avoid skipping the optimal solution, and then greedy selection from the current population and quasi-reverse population can effectively find the optimal solution [32–35]. The detailed process is given below:(i) Assuming thatX=x1,x2,⋯,xn is a n-dimensional solution, x1,x2,⋯,xn∈R and xi∈li,ui, i∈1,2,⋯,n. Then, the reverse solution OX=x⌣1,x⌣2,⋯,x⌣n can be defined as(17)x⌣i=li+ui−xi.(ii) On the basis of the reverse solution, the quasi-reverse solutionQOX=x⌣1q,x⌣2q,⋯,x⌣nq can be defined as(18)x⌣iq=randli+ui2,x⌣i.In this way, the choice of the quasi-inverse solution and the current solution is(19)X=x⌣iq,fx⌣iq<fxixi,otherwise.To sum up, the flowchart of the proposed SQACS is shown in Figure2.Figure 2 Flowchart of SQACS. ## 3.1. Basic Artificial Cooperative Search Algorithm Basic artificial cooperative search (ACS) algorithm is a global search algorithm based on two populations, which is used to solve numerical optimization problems [21]. Generally, ACS includes the following population initialization, predator selection, prey selection, mutation, crossover, update selection, and so on. ### 3.1.1. Population Initialization ACS contains two superorganisms:α and β, in which α and β contain artificial sub-superorganisms equal to the population size (N). In the relevant sub-superorganisms, the number of individuals is equal to the dimension (D) of the problem. α and β ultrasound organisms are used to detect artificial predators and prey sub-superorganisms. 
The initial values of the sub-superorganism of α and β are defined by the following:(6)αi,j=lowj+R0,1×upj−lowj,(7)βi,j=lowj+R0,1×upj−lowj,where i=1,2,⋯,N, N is the population size, j=1,2,⋯,D, D is the dimension of the optimization problem, αi,j and βi,j are the components of the i-th sub-superorganism in the j-th dimension, upj and lowj are the upper and lower limits of the j-th dimension search interval, respectively, and R0,1 is a random number uniformly distributed on [0, 1]. ### 3.1.2. Predator Selection At this stage of ACS, the cooperative relationship between two artificial superorganisms is defined. In each iteration of ACS, according to the “if then else” rule, the artificial predator sub-superorganism is randomly defined from two artificial superorganisms (α and β), and the artificial predator is selected through (8). At this stage of ACS, in order to help explore the search space of the problem and promote the utilization of high-quality solutions, a memory process is developed. In order to provide this memory process, during coevolution, artificial predators will follow artificial prey for a period of time to explore more fertile eating areas.(8)predator=α,key=1.r1<r2β,key=2.otherwise,where r1 and r2 are uniformly distributed random numbers on the [0, 1] interval, predator represents the predator, key represents the memory that tracks the origin of the predator in each iteration, and its memory is used to improve the performance of the algorithm. ### 3.1.3. Prey Selection Using the same rules as selecting artificial predators, artificial prey is selected through two artificial superorganisms (α and β). In ACS, the hierarchical sequence of artificial prey is replaced by random transformation function, which is used to simulate the behavior of superorganisms living in nature. The artificial prey is selected by (9), and the selected prey is used to define the search direction of ACS in each iteration.(9)prey=α,r1<r2β,otherwise,where r1 and r2 are uniformly distributed random numbers in the [0, 1] interval and prey represents prey. ### 3.1.4. Mutation Using the mutation process defined in equation (10), the biological interaction position between artificial predator and prey sub-superorganism is simulated. The algorithm embeds a walk process (random walk function) in the mutation process to simulate the foraging behavior of natural superorganisms. In order to promote the exploration of the problem search space and the development of more effective solutions, the variation matrix is generated by using some experience obtained by the artificial predator sub-superorganism in the previous iteration.(10)Xiiter+1=predatoriiter+R×preyiiter−predatoriiter,R=4×a×b−cr1<r2Γ4×rand,1,otherwise,where in order to control the scale factor of biological interaction speed, it is calculated from (13). iter is the current number of iterations, i∈1,2,⋯,N, a, b, c, rand, r1 and r2 are random numbers uniformly distributed on the [0,1] interval, and Γ is the gamma distribution with shape parameter 4×rand and scale parameter 1. ### 3.1.5. Crossover As defined in equation (11), the active individuals in the artificial predator sub-superorganism are determined by a binary integer matrix M. The initial value of M is a matrix whose elements in row N and column D are all 1. In ACS, those individuals who can only find new biological interaction sites and can participate in migration at any time are called active individuals. 
The degree of cooperation between individuals in the migration process is determined by the control parameter P, which limits the number of active individuals produced by each artificial sub-superorganism. Then, the parameter controls the number of individuals involved in the crossover process, that is, it determines the probability of biological interaction in the crossover process. The crossover operator of ACS is given by(11)Xi,jiter+1=predatori,jiter,Mi,j>0Xi,jiter,otherwise,Conditioni,j=1,r1>P×r20,otherwise,Mi,j=Mi,j,r3>P×r4Mi,j×Conditioni,j,otherwise,where i∈1,2,⋯,N, j∈1,2,⋯,D. predatori,j represents the component of the i-th predator in the j-th dimension, and Mi,j represents the component of the i-th active individual of the predator in the j-th dimension. r1, r2, r3, and r4 represent uniformly distributed random numbers in the [0, 1] interval, and P represents the probability of biological interaction. Different experiments with different P values in the [0.05, 0.15] interval show that ACS is not sensitive to the initial value of its control parameters. ### 3.1.6. Update Selection The memorykey set in the predator selection stage updates the α and β superorganisms, so as to better select predators and prey at the beginning of the next iteration, so as to strengthen the global search performance. The specific operation is shown in (12) and (13).(12)αiiter+1=predatoriiter,key=1αiiter,otherwise,(13)βiiter+1=predatoriiter,key=2βiiter,otherwise,where i∈1,2,⋯,N, predatori represents the i-th predator, and iter represents the current number of iterations. ## 3.1.1. Population Initialization ACS contains two superorganisms:α and β, in which α and β contain artificial sub-superorganisms equal to the population size (N). In the relevant sub-superorganisms, the number of individuals is equal to the dimension (D) of the problem. α and β ultrasound organisms are used to detect artificial predators and prey sub-superorganisms. The initial values of the sub-superorganism of α and β are defined by the following:(6)αi,j=lowj+R0,1×upj−lowj,(7)βi,j=lowj+R0,1×upj−lowj,where i=1,2,⋯,N, N is the population size, j=1,2,⋯,D, D is the dimension of the optimization problem, αi,j and βi,j are the components of the i-th sub-superorganism in the j-th dimension, upj and lowj are the upper and lower limits of the j-th dimension search interval, respectively, and R0,1 is a random number uniformly distributed on [0, 1]. ## 3.1.2. Predator Selection At this stage of ACS, the cooperative relationship between two artificial superorganisms is defined. In each iteration of ACS, according to the “if then else” rule, the artificial predator sub-superorganism is randomly defined from two artificial superorganisms (α and β), and the artificial predator is selected through (8). At this stage of ACS, in order to help explore the search space of the problem and promote the utilization of high-quality solutions, a memory process is developed. In order to provide this memory process, during coevolution, artificial predators will follow artificial prey for a period of time to explore more fertile eating areas.(8)predator=α,key=1.r1<r2β,key=2.otherwise,where r1 and r2 are uniformly distributed random numbers on the [0, 1] interval, predator represents the predator, key represents the memory that tracks the origin of the predator in each iteration, and its memory is used to improve the performance of the algorithm. ## 3.1.3. 
Prey Selection Using the same rules as selecting artificial predators, artificial prey is selected through two artificial superorganisms (α and β). In ACS, the hierarchical sequence of artificial prey is replaced by random transformation function, which is used to simulate the behavior of superorganisms living in nature. The artificial prey is selected by (9), and the selected prey is used to define the search direction of ACS in each iteration.(9)prey=α,r1<r2β,otherwise,where r1 and r2 are uniformly distributed random numbers in the [0, 1] interval and prey represents prey. ## 3.1.4. Mutation Using the mutation process defined in equation (10), the biological interaction position between artificial predator and prey sub-superorganism is simulated. The algorithm embeds a walk process (random walk function) in the mutation process to simulate the foraging behavior of natural superorganisms. In order to promote the exploration of the problem search space and the development of more effective solutions, the variation matrix is generated by using some experience obtained by the artificial predator sub-superorganism in the previous iteration.(10)Xiiter+1=predatoriiter+R×preyiiter−predatoriiter,R=4×a×b−cr1<r2Γ4×rand,1,otherwise,where in order to control the scale factor of biological interaction speed, it is calculated from (13). iter is the current number of iterations, i∈1,2,⋯,N, a, b, c, rand, r1 and r2 are random numbers uniformly distributed on the [0,1] interval, and Γ is the gamma distribution with shape parameter 4×rand and scale parameter 1. ## 3.1.5. Crossover As defined in equation (11), the active individuals in the artificial predator sub-superorganism are determined by a binary integer matrix M. The initial value of M is a matrix whose elements in row N and column D are all 1. In ACS, those individuals who can only find new biological interaction sites and can participate in migration at any time are called active individuals. The degree of cooperation between individuals in the migration process is determined by the control parameter P, which limits the number of active individuals produced by each artificial sub-superorganism. Then, the parameter controls the number of individuals involved in the crossover process, that is, it determines the probability of biological interaction in the crossover process. The crossover operator of ACS is given by(11)Xi,jiter+1=predatori,jiter,Mi,j>0Xi,jiter,otherwise,Conditioni,j=1,r1>P×r20,otherwise,Mi,j=Mi,j,r3>P×r4Mi,j×Conditioni,j,otherwise,where i∈1,2,⋯,N, j∈1,2,⋯,D. predatori,j represents the component of the i-th predator in the j-th dimension, and Mi,j represents the component of the i-th active individual of the predator in the j-th dimension. r1, r2, r3, and r4 represent uniformly distributed random numbers in the [0, 1] interval, and P represents the probability of biological interaction. Different experiments with different P values in the [0.05, 0.15] interval show that ACS is not sensitive to the initial value of its control parameters. ## 3.1.6. Update Selection The memorykey set in the predator selection stage updates the α and β superorganisms, so as to better select predators and prey at the beginning of the next iteration, so as to strengthen the global search performance. The specific operation is shown in (12) and (13).(12)αiiter+1=predatoriiter,key=1αiiter,otherwise,(13)βiiter+1=predatoriiter,key=2βiiter,otherwise,where i∈1,2,⋯,N, predatori represents the i-th predator, and iter represents the current number of iterations. 
## 3.2. Improved Artificial Cooperative Search Algorithm

Because ACS is not yet mature in theory and practice, and to address its shortcomings of slow convergence, low accuracy, and a tendency to fall into local optima, a reverse artificial cooperative search algorithm based on the sigmoid function (SQACS) is proposed. The specific improvements are as follows.

### 3.2.1. Constructing Scale Factor R with the Sigmoid Function

In ACS, the scale factor $R$ controlling the speed of biological interaction is generated randomly, which often makes the algorithm fall into local optima and is not conducive to global search. To solve this problem, the following sigmoid function is introduced:

(14)
$$y=\frac{1}{1+e^{-x}}.$$

The sigmoid function is continuous, differentiable, bounded, and strictly monotonic, and it is a kind of activation function [31]. According to the mechanism of the biological interaction position in ACS, the algorithm needs to approach the optimal position quickly at the beginning of the run and then reduce its search speed once it is near the optimal position. Using the sigmoid function to construct (15), the scale factor $R$ that randomly controls the speed of biological interaction is transformed into a quantity that changes with the number of iterations and is mapped into [0, 1], gradually decreasing within this range so that the optimal solution can be located more accurately. The scale factor $R$ constructed with the sigmoid function is given in (15), and its curve is shown in Figure 1.

(15)
$$R^{iter}=\frac{1}{1+e^{\,2\ln 100\times iter/G_{max}\;-\;\ln 100\times\frac{G_{max}-iter+1}{G_{max}}}},$$

where $G_{max}$ is the maximum number of iterations, $iter$ is the current number of iterations, and $R^{iter}$ is the scale factor at the $iter$-th iteration.

Figure 1: The curve of the scale factor (R).

### 3.2.2. Quadratic Mutation Strategy

The DE/rand/1 mutation strategy of differential evolution (DE) is applied as a second mutation to the population generated in the mutation stage of ACS [18]. Research has found that Gaussian, random, linear, or chaotic changes of the parameters in DE can effectively prevent premature convergence. Therefore, after the DE/rand/1 mutation strategy is added to ACS, a new mutated population is generated and passed to the subsequent crossover step, so that the algorithm can avoid falling into local optima and improve its accuracy. The quadratic mutation formula is

(16)
$$X_{i,j}^{iter+1}=X_{r_1,j}^{iter}+sf\times\left(X_{r_2,j}^{iter}-X_{r_3,j}^{iter}\right),$$

where $i\in\{1,2,\cdots,N\}$, $j\in\{1,2,\cdots,D\}$, $iter$ is the current iteration number, and $r_1,r_2,r_3$ are random integers in $\{1,2,\cdots,N\}$ with $r_1\neq r_2\neq r_3\neq i$. The mutation factor $sf$ is a control parameter that scales the difference of two of the three vectors and adds the scaled difference to the third vector. To avoid search stagnation, $sf$ usually takes a value in the range [0.1, 1].

### 3.2.3. Quasi-Reverse Learning Strategy

In the later exploitation stage of the algorithm, a better biological interaction position should be found among the populations. Because this position changes randomly, the algorithm is often prevented from searching for the optimal solution within a small local region.
To overcome this shortcoming, a quasi-reverse learning strategy is introduced to generate quasi-reverse populations and increase population diversity, so that organisms can search the interaction positions in neighboring communities in detail and avoid skipping over the optimal solution; greedy selection between the current population and the quasi-reverse population can then locate the optimal solution effectively [32–35]. The detailed process is given below:

(i) Assume that $X=(x_1,x_2,\cdots,x_n)$ is an $n$-dimensional solution with $x_1,x_2,\cdots,x_n\in\mathbb{R}$ and $x_i\in[l_i,u_i]$, $i\in\{1,2,\cdots,n\}$. Then, the reverse solution $OX=(\breve{x}_1,\breve{x}_2,\cdots,\breve{x}_n)$ can be defined as

(17)
$$\breve{x}_i=l_i+u_i-x_i.$$

(ii) On the basis of the reverse solution, the quasi-reverse solution $QOX=(\breve{x}_1^{q},\breve{x}_2^{q},\cdots,\breve{x}_n^{q})$ can be defined as

(18)
$$\breve{x}_i^{q}=\mathrm{rand}\!\left(\frac{l_i+u_i}{2},\,\breve{x}_i\right).$$

The choice between the quasi-reverse solution and the current solution is then

(19)
$$X=\begin{cases}\breve{x}_i^{q}, & f(\breve{x}_i^{q})<f(x_i)\\ x_i, & \text{otherwise.}\end{cases}$$

To sum up, the flowchart of the proposed SQACS is shown in Figure 2.

Figure 2: Flowchart of SQACS.
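As an illustration of the three improvements, the sketch below implements one possible reading of the sigmoid scale factor in (15), the DE/rand/1 quadratic mutation of (16), and the quasi-reverse learning step of (17)–(19) for a real-valued population; the grouping of terms in the exponent of (15) is our reconstruction of the source formula, and the function names and array conventions are illustrative only.

```python
import numpy as np

def sigmoid_scale_factor(it, g_max):
    """Scale factor R of Eq. (15): decreases smoothly from about 1 toward 0 as the
    iteration counter grows (the term grouping is our reading of the source)."""
    expo = 2 * np.log(100) * it / g_max - np.log(100) * (g_max - it + 1) / g_max
    return 1.0 / (1.0 + np.exp(expo))

def de_rand_1(X, sf=0.5):
    """Quadratic mutation of Eq. (16): DE/rand/1 applied to the whole population."""
    N, _ = X.shape
    V = np.empty_like(X)
    for i in range(N):
        r1, r2, r3 = np.random.choice([k for k in range(N) if k != i], 3, replace=False)
        V[i] = X[r1] + sf * (X[r2] - X[r3])
    return V

def quasi_reverse_selection(X, objective, low, up):
    """Quasi-reverse learning of Eqs. (17)-(19): build the quasi-reverse population
    and keep each quasi-reverse individual only if it improves the objective value."""
    OX = low + up - X                                  # Eq. (17): reverse solution
    mid = (low + up) / 2.0
    QOX = mid + np.random.rand(*X.shape) * (OX - mid)  # Eq. (18): quasi-reverse solution
    better = objective(QOX) < objective(X)             # Eq. (19): greedy selection
    return np.where(better[:, None], QOX, X)
```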
## 4. Solving TSP by SQACS

Taking the shortest path (i.e., equation (1)) as the objective function, SQACS is used to solve the TSP. To map the biological organisms onto the TSP solution space, each biological interaction position $X_i=(x_{i1},x_{i2},\cdots,x_{in})$ is defined in this paper as the sequence in which the city numbers are visited. For example, the interaction position $X_i=(1,3,2,4,5,6)$ means that the traveler first visits city 1, then visits cities 3, 2, 4, 5, and 6 in turn, and finally returns to the departure city, city 1; the corresponding objective function value is the path length of this TSP tour. For the TSP, the shorter an individual's tour, the greater its fitness, so the fitness function $f(x_i)=1/Z(x_i)$ is selected, where $Z(x_i)$ is the tour length, $i=1,2,\cdots,n$, $n$ is the number of cities to visit, and the lower and upper limits of the variables are 1 and $n$, respectively. The specific steps of SQACS for solving the TSP are as follows:

Step 1. Population initialization: encode the TSP path with city numbers and randomly generate the visiting order of the n cities.
Step 2. Calculate the fitness value of each individual in the population.
Step 3. Randomly select the predator and prey populations, and then randomly rearrange the positions of the prey population.
Step 4. Calculate the scale factor R of the biological interaction velocity.
Step 5. Determine the active individuals M in the predator population by binary integer mapping.
Step 6. Mutation: calculate the biological interaction position X, i.e., the traveler's visiting route.
Step 7. Crossover: if the active-individual mapping is greater than 0, update the path to the predator position; otherwise, keep the original position unchanged.
Step 8. Reselect the predator and prey populations.
Step 9. Judge whether the termination condition is met. If so, stop the algorithm and output the optimal position and optimal function value, that is, the shortest route and shortest path length of the TSP. Otherwise, return to Step 2.
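As a small illustration of the encoding and fitness just described, the sketch below evaluates the tour length Z(x) and the fitness f(x) = 1/Z(x) for a permutation of city indices; the city coordinates are made-up toy values, not data from the experiments.

```python
import numpy as np

def tour_length(route, dist):
    """Length Z(x) of the closed tour encoded by `route`, a permutation of city
    indices, given the n x n distance matrix `dist`."""
    n = len(route)
    return sum(dist[route[k], route[(k + 1) % n]] for k in range(n))

def fitness(route, dist):
    """Fitness f(x) = 1 / Z(x): shorter tours receive larger fitness values."""
    return 1.0 / tour_length(route, dist)

# Toy example of the encoding described above: the traveler starts at city 1,
# visits cities 3, 2, 4, 5, 6 in turn and returns to city 1 (0-based indices here).
cities = np.array([[0, 0], [1, 0], [1, 1], [2, 1], [2, 2], [0, 2]], dtype=float)
dist = np.linalg.norm(cities[:, None, :] - cities[None, :, :], axis=-1)
route = np.array([0, 2, 1, 3, 4, 5])  # corresponds to the tour 1-3-2-4-5-6-1
print(tour_length(route, dist), fitness(route, dist))
```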
## 5. Numerical Simulation

To verify the performance of the proposed SQACS, it is compared with GA [14], DE [18], IWO [19], PSO [20], ACS [21], IACS1 [27], and IACS2 [36] on four TSP instances of different scales from the TSPLIB standard database: Oliver30, Att48, Eil51, and Eil76. To compare the results under the same conditions as far as possible, the maximum number of evaluations of each algorithm is set to 2000 and the initial population size to 20. The other parameter settings of the algorithms involved are as given in the corresponding references.

After 30 independent runs of SQACS and the above algorithms, the optimal value, average value, and average computation time of the results are shown in Table 1. Comparing the optimal and average values obtained by each algorithm, SQACS achieves the best minimum and average values, and the gap between its optimal and average values is the smallest, indicating that SQACS has the strongest stability. In terms of computation time, SQACS is the fastest or among the fastest of the compared algorithms on the four datasets. It can be seen that SQACS has good feasibility and robustness in solving the TSP.

Table 1: Comparison of the experimental results of the eight algorithms.

| Example | Evaluation criterion | GA | DE | IWO | PSO | ACS | IACS1 | IACS2 | SQACS |
|---|---|---|---|---|---|---|---|---|---|
| Oliver30 | Optimal value | 427.37 | 423.11 | 431.34 | 428.01 | 453.27 | 452.12 | 434.67 | 420.00 |
| | Average value | 432.33 | 435.94 | 449.98 | 437.83 | 475.23 | 468.21 | 440.62 | 421.31 |
| | Average time/s | 28.36 | 28.28 | 29.67 | 27.26 | 27.41 | 27.85 | 28.26 | 26.17 |
| Att48 | Optimal value | 33942.47 | 33793.06 | 33596.81 | 33642.34 | 34663.41 | 34527.23 | 33742.84 | 33516.02 |
| | Average value | 40374.28 | 34183.58 | 33637.86 | 33785.12 | 35721.39 | 35123.31 | 34081.82 | 33583.17 |
| | Average time/s | 49.36 | 49.42 | 50.21 | 49.23 | 50.26 | 50.85 | 51.97 | 49.02 |
| Eil51 | Optimal value | 473.56 | 436.81 | 471.36 | 432.99 | 484.05 | 478.67 | 442.43 | 426.00 |
| | Average value | 481.52 | 451.08 | 482.90 | 448.60 | 496.55 | 480.89 | 449.76 | 427.71 |
| | Average time/s | 55.54 | 55.35 | 58.36 | 54.65 | 57.25 | 57.78 | 58.02 | 55.27 |
| Eil76 | Optimal value | 568.47 | 547.13 | 562.20 | 541.91 | 572.93 | 568.37 | 543.00 | 538.00 |
| | Average value | 584.01 | 583.11 | 578.37 | 550.68 | 586.33 | 581.81 | 552.45 | 543.79 |
| | Average time/s | 85.49 | 85.28 | 86.23 | 85.54 | 86.23 | 86.54 | 87.15 | 85.16 |

Figure 3 shows the optimal path diagrams of the four example instances solved by SQACS. As can be seen from Figure 3, except for the path intersection in the Att48 dataset, the other three routes form completely closed loops whose paths do not cross, so the obtained paths are feasible. The optimal routes obtained are as follows:

- Oliver30: 6⟶5⟶30⟶23⟶22⟶16⟶17⟶12⟶13⟶4⟶3⟶9⟶11⟶7⟶8⟶25⟶26⟶29⟶28⟶27⟶24⟶15⟶14⟶10⟶21⟶20⟶19⟶18⟶2⟶1⟶6
- Att48: 2⟶29⟶34⟶41⟶16⟶22⟶3⟶40⟶9⟶1⟶8⟶38⟶31⟶44⟶18⟶7⟶28⟶36⟶30⟶6⟶37⟶19⟶27⟶17⟶43⟶20⟶33⟶46⟶15⟶12⟶11⟶23⟶14⟶25⟶13⟶47⟶24⟶39⟶32⟶48⟶5⟶42⟶10⟶24⟶45⟶35⟶26⟶4⟶2
- Eil51: 40⟶19⟶42⟶44⟶37⟶15⟶45⟶33⟶39⟶10⟶30⟶34⟶50⟶9⟶49⟶38⟶11⟶5⟶46⟶51⟶27⟶32⟶1⟶22⟶2⟶16⟶21⟶29⟶20⟶35⟶36⟶3⟶28⟶31⟶26⟶8⟶48⟶6⟶23⟶7⟶43⟶24⟶14⟶25⟶18⟶47⟶12⟶17⟶4⟶13⟶41⟶40
- Eil76: 9⟶39⟶72⟶58⟶10⟶38⟶65⟶56⟶11⟶53⟶14⟶59⟶19⟶54⟶13⟶27⟶52⟶34⟶46⟶8⟶35⟶7⟶26⟶67⟶76⟶75⟶4⟶45⟶29⟶5⟶15⟶57⟶37⟶20⟶70⟶60⟶74⟶36⟶69⟶21⟶47⟶48⟶30⟶2⟶68⟶6⟶51⟶17⟶12⟶40⟶32⟶44⟶3⟶16⟶63⟶33⟶73⟶62⟶28⟶74⟶61⟶22⟶1⟶43⟶41⟶42⟶64⟶56⟶23⟶49⟶24⟶18⟶50⟶25⟶55⟶31⟶9

Figure 3: The optimal path diagrams of the four examples obtained by SQACS. (a) Oliver30. (b) Att48. (c) Eil51. (d) Eil76.

To further verify the effectiveness of SQACS, the algorithms in [5–9] are also selected to compare the solution results on Oliver30, Att48, Eil51, and Eil76. The comparison results are shown in Table 2.
By comparing the data in Table 2, it can be seen that, except on Att48, SQACS reaches the TSPLIB optimal value on the other three datasets and matches or improves on the solutions reported in the literature, which verifies the effectiveness of SQACS in solving the TSP.

Table 2: Comparison of SQACS calculation results with the literature.

| TSP test set | TSPLIB optimal solution | SQACS optimal solution | Reference [5] optimal solution | Reference [6] optimal solution | Reference [7] optimal solution | Reference [8] optimal solution | Reference [9] optimal solution |
|---|---|---|---|---|---|---|---|
| Oliver30 | 420.00 | 420.00 | — | 420.00 | 420.00 | 423.74 | 423.74 |
| Att48 | 33503.00 | 33516.00 | 36441.00 | — | 33522.00 | — | — |
| Eil51 | 426.00 | 426.00 | 479.00 | 428.87 | 428.00 | 814.53 | 426.00 |
| Eil76 | 538.00 | 538.00 | — | 544.37 | 547.00 | — | 538.00 |

## 6. Conclusion

To better solve the traveling salesman problem, this paper proposes a reverse artificial cooperative search algorithm based on the sigmoid function. The scale factor is constructed with the sigmoid function to improve the global search ability of the algorithm. In the mutation stage, the DE/rand/1 mutation strategy of the differential evolution algorithm is added to apply a second mutation to the current population, so that the algorithm can avoid falling into local optima and improve its accuracy. In the later exploitation stage of the algorithm, the quasi-reverse learning strategy is introduced to find the optimal solution more effectively. Finally, the proposed algorithm is used to solve the traveling salesman problem, and the results show that it is effective for this problem.

---

*Source: 1008617-2022-04-12.xml*
2022
# Predicting the Spread of Vessels in Initial Stage Cervical Cancer through Radiomics Strategy Based on Deep Learning Approach

**Authors:** Piyush Kumar Pareek; Prasath Alais Surendhar S; Ram Prasad; Govindaraj Ramkumar; Ekta Dixit; R. Subbiah; Saleh H. Salmen; Hesham S. Almoallim; S. S. Priya; S. Arockia Jayadhas

**Journal:** Advances in Materials Science and Engineering (2022)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2022/1008652

---

## Abstract

Novel methods and materials are used in healthcare applications for finding cancer in various parts of the human body. To select the most suitable therapy plan for individuals with locally advanced cervical cancer, robust metrics are required to estimate the disease at an early phase. The goal of this research is to increase the effectiveness of cervical cancer detection by using deep learning-based radiomics assessment of magnetic resonance imaging (MRI). From March 2016 to November 2019, 125 patients with early-stage cervical cancer provided 980 dynamic X1 contrast-enhanced (X1DCE) and 850 X2-weighted imaging (X2WI) MRI images for training and testing. A convolutional neural network model was used to estimate cervical cancer state based on the specified characteristics. X1DCE exhibited higher discriminative ability than X2WI MRI in terms of prediction, as calculated by confusion matrix assessment and the receiver operating characteristic (ROC) curve approach. A mean maximum area under the curve of 0.95 was found using an attentive ensemble learning method that included both MRI sequences (sensitivity = 0.94, specificity = 0.94, and accuracy = 0.96). When compared with conventional radiomic approaches, the results show that a variety of deep learning-based radiomics models might be created to help radiologists anticipate vascular invasion in patients with cervical cancer before surgery. The radiomics technique has proven to be an effective tool for estimating cervical cancer in its early stages and can help patients choose the most suitable therapy and support medical judgments.

---

## Body

## 1. Introduction

Exposure to various chemical and physical agents is a typical environmental problem that contributes to cancer mortality. Cervical cancer is the second most common cancer in women, with more than half a million patients identified every year throughout the globe. Since most instances occur in developing nations and present at a late, incurable phase, more than 350,000 people lose their lives as a result of the illness. Every year, around 38,000 patients are reported in Europe, with more than two-thirds anticipated to be treated successfully and survive. Survival rates differ by nation, depending on treatment centers and, more significantly, on whether a screening program has been developed. Such programs detect the initial phase of the illness and reduce the possibility of invasive carcinoma by treating premalignant cervical intraepithelial neoplasia before it grows into a truly aggressive and dangerous tumor [1]. Cervical cancer is one of the most prevalent forms of cancer that may affect a woman's reproductive system, and it represents a significant danger to a woman's life and health. Staging cervical cancer before treatment is required for identifying the available medical treatment choices and for making treatment predictions.
Tumors that have invaded the parametrium of the cervical canal may only be treated with radiochemotherapy, whereas malignancies of the cervical canal that do not affect the parametrium can be healed surgically. The presence of parametrial enlargement in cervical cancer is connected to an increased risk of recurrence and worse survival following therapy [2].Penetration of the women cervix infects the cervix deeper tissues. The cervical cancer has the possibility to spread to other functions of the body, including the liver, rectum, bladder, lungs, and genitals. Normal cervical cells grow, reproduce at a certain rate, and then die, causing changes in their DNA. Furthermore, unfavorable cell mutations are exposed, leading to cells violating their control and refusing to die, resulting in the formation of differentiated cells. Abnormal discomfort after intercourse, vaginal bleeding after intercourse, and vaginal discharge after menopause are the most well-known indications of cervical cancer. Figure1 depicts the various symptoms of cervical cancer. The most widespread risk variables are early sexual activity, multiple sexual partners, compromised immune system, sexually transmitted diseases, and exposure to smoking and miscarriage vaccines [3]. Figure 2 depicts the various risk factors for cervical cancer rising to women. Consequently, accurate diagnosis of cervical cancer with parametrial invasion is important in clinical practice. Conventional magnetic resonance imaging (MR) imaging and gynecological examination are commonly used to evaluate parameter amplification. Traditional imaging characteristics such as full-thickness disruption of normal cervical stroma in T2-weighted images and nodular lesions extending to neighboring parameters were previously thought to be parameter invasion; however, image processing is a standard function. In medical care, an objective and measurable approach to measuring criterion penetration is required [4].Figure 1 Basic symptoms of cervical cancer.Figure 2 Various risk factors for cervical cancer.In 2012, there were approximately 530,000 newly diagnosed cervical cancers and 270,000 deaths worldwide. Undeveloped nations confront the second most common malignancy and the third major cause of cancer death among women. The highest percentages are in Melanesia, Sub-Saharan Africa, Latin America, and the Caribbean. New Zealand, West Asia, and North America have the lowest. Cervical cancer kills more than 92 percent of women in developed countries: 28,000 in Latin America, 60,000 in Africa and 150,000 in Asia. India, the world's second most populous country, accounts for 26% of cervical cancer deaths (70,000 deaths). Cervical cancer is the leading cause of death in women in eastern, central, Melanesia, and southern Africa. Cervical cancer rates vary widely across the country due to variations in access to surveillance, which enables early detection and removal of lesions and the incidence of human papilloma virus (HPV) disease. The prevalence of HPV infection (of all forms) varies explicitly, from 16% in Latin America, 21% in Africa, 5% in North America and the Caribbean to 9% in Asia [5].As per a WHO report, cervical cancer is most likely the reason of cancer in women in developing countries. Although clinical centers, thousands of extra cases were reported in the United States in 2016, compared to much greater than 20K in 2014. The dataset of cervical cancer provides almost 800 data specimens, 32 features, and 4 targets from the 2016–17 reporting period. 
Overall traits, tobacco activities, and previous health histories are all important aspects. The abundance of screening and diagnostic procedures, each of which may produce such a diverse set of outcomes, contributes to the complexity of the data. As a consequence of this, determining how the woman's element will behave and selecting the most appropriate screening strategy are both crucial challenges. As a direct consequence of this, the procedure of identifying the most suitable principal channel constitutes the primary obstacle in the endeavor of measuring a woman's exposure to risk factors. A lot of academics have looked at data on cervical cancer that was compiled from a variety of different sources. The major risk factors for the spread of cervical cancer include improper menstrual hygiene, having children at a young age, smoking, and a lack of preventative measures for mouth cancer. The tumor phase, initial weight mass, and histological grading are all variables that affect the prediction. Therapy is made up of four phases of illness as established by the International Federation of Gynecology and Obstetrics (FIGO) scoring scheme. Surgery or radiation therapy is utilized to handle patients with stage IIA or less. Initial stage cancer patients may need a radical hysterectomy, radiation treatment, or sometimes both. People diagnosed with stage IIB or higher, on the other hand, receive only radiation therapy. Stage IIA disease without stage parametrial involvement and stage IIB disease, in which parametrial involvement are the main difference in stages. Figure3 represents the different stages of cervical cancer. Although lymph node metastasis is not comprised in the formal FIGO scoring system, they are an important prognostic factor. The TNM staging approach to cervical cancer involves nodal status. Unilateral and bilateral parameters are other prognosis variables for the occurrence of pelvic wall disease [6].Figure 3 Different stages of cervical cancer.The size of the tumor at prognosis, the size of the high-risk clinical objective during bronchial therapy, and the duration of therapeutic effects are linked to the potential of local democracy. Furthermore, in the epoch of image-guided responsive therapy, it is essential to check the outcome evaluation, especially for those at significant risk of local recurrence, and to intensify medication (or) radio sensitizing Agent who is applicants for clinical trials. On the other hand, identifying individuals at low risk of local recurrence may be clinically important. Clinical imaging is essential in the primary assessment and condition of victims and in the treatment of treatment options. Because of its great resolution, functional imaging capabilities, and excellent soft-tissue contrast magnetic resonance imaging (MRI) is the gold typical for the pre-treatment evaluation of gynecologic malignant T-status [7]. Radiation therapy and concurrent chemotherapy with cisplatin-based chemotherapy is a standard therapy for women with metastatic cervical cancer, as per the NCCN recommendations; the 5-year life expectancy can approach 60–80 percent. If first-line CCRT failed, though, the longer CCRT therapy time will unavoidably delay the start of other possibly beneficial therapies. Furthermore, CCRT has several adverse effects. Additional pelvic irradiated can induce myelosuppression because it damages the bones, which comprise more than 50% of the body's proliferative functional bone marrow mass. 
Platinum-based CCRT can worsen myelosuppression, although it is less effective if therapy is initiated or interrupted. As a result, predicting CCRT responsiveness before therapy may help to determine whether CCRT should be used as first-line therapy. Furthermore, by identifying individuals who are most susceptible to CCRT, responder prognosis can lead to individualized therapy [8]. Figure 4 describes the treatment option available for cervical cancer.Figure 4 Treatment option available for cervical cancer.The Ministry of Health of Bangladesh has launched the 5-year national approach for family welfare cervical cancer control and prevention Initiative, which will run from 2016 to 2021. The WHO considers invasive cervical cancer to be the fourth most widespread and second leading cause of cancer among Bangladeshi women aged 20 to 50 years. Each year, approximately 12,000 new cases are identified, and the severity of the condition exceeds 6,000. In Bangladesh, nearly 4.4 percent of the general population has a higher risk of developing cervical HPV16/18 infections at any given time, and HPVs 16 and 18 are responsible for 80.3 percent of invasive cervical malignancies [9]. Figure 5 depicts the mortality rate of cervical cancer during the year from 1990 to 2020. Targeted treatment requires imaging. Radiomics collects huge volumes of information from high-performance clinical pictures to extract attributes for unbiased, measured investigation of diseased biological activities [10]. Radiomics have been intensively explored in tumor detection, differentiated diagnostic tests, prognostic assessment, and therapy outcome prediction in recent decades. This approach has been used to estimate LNM in breast cancer, bladder cancer, biliary tract cancer, and colorectal cancer before surgery [11]. Some research has looked at the effectiveness of radiomics in calculating LNM in cervical cancer. The radiomics properties of positron emission tomography (PET) were linked to the expression of VEGF in cervical cancer in a prior study. This observations suggest that a radiomics system based on health images may be used to predict VEGF expression [12].Figure 5 Mortality rate for cervical cancer from 1990 to 2020.Radiomics is a novel approach for obtaining high-throughput data from normal clinical pictures. The radiomics nomogram was used to identify LNM status by collecting measurable characteristics from CT images connected to colorectal, bladder, esophageal, breast, lung adenocarcinoma, and thyroid. It functioned well [13]. Radiomics is a rapidly expanding field of science that uses image collections of high-dimensional characteristics taken from routinely obtained cross-sectional images to produce data that semantic assessment would otherwise lose. Radiomics records the cystic and necrotic patches within the tumor volume that are typical of tumoral heterogeneity, as well as behavior that characterizes aggression and therefore results. Radiomics is an area of research that uses mathematical modeling to extract qualitative information from clinical images in order to create prediction methods that may be used to estimate treatment prognostic and survivability, with preliminary results reflecting a wide range of medical results [14].In comparison to nonmedical databases, healthcare sets of data feature more attributes and partial information. This is critical to establish the essential and useful properties for quantified approach building by enhancing type. 
Although deep learning approaches are stronger in forecasting and effective tweaking, they have been frequently utilized in cancer research. An investigation found that long-term HPV infection is the main reason for cervical cancer [15]. Machine learning is a technique that leverages previously established diagnosing characteristics as factors, such as morphological or textures, and needs factors pre-selected by people to do categorization jobs. Deep learning, on the other hand, retrieves whatever the system defines as essential factors directly from the training phase, avoiding the preconceptions that come with past human analysis. This will eventually offer physicians with methods that will aid in the proper detection of cervical cancer [16]. In this study, deep learning-based VGG19network is used for the early detection of cervical cancer. ## 2. Related Works Imaging-based tumor size predicts cervical cancer radiation response before, during, and after treatment. Various imaging-based tumor size measurement approaches and time have not been examined. To compare the diagnostic usefulness of orthogonal diameter-based elliptical tumor volume measurement vs. contoured tracing evaluation 3D tumor volumetric. 60 patients (stages IB2-IVB/recurrent) with advanced cervical cancer underwent continuous MRI exams throughout early RT, mid-RT, and follow-up. In the computer workstation, the measurement based on ROI was calculated by monitoring the whole tumor area on each MR piece. Three orthogonal diameters (a1, a2, a3) were determined on image hard copies to calculate the volume as an ellipse for the diameter-based “elliptical volume”. These results were compared between the two measurement techniques, and the series tumor sizes and regression rates calculated with each approach were linked to local management, disease, and survival time. The average duration of treatment was 5 years. A mid-treatment MRI scan using 3D ROI volumetry is the best approach and time point for estimating tumor size to predict tumor regression rate. Both the diameter-based technique and the ROI measurement had to predict accuracy equivalent to pre-RT tumor size, especially in patients with small and large RT tumors. Tumor size prior to RT was determined by any approach and, on the other hand, showed a significantly lower prognostic value for intermediate-sized tumors that accounted for most patients. The largest result of predicting local control and disease survival rates is the tumor regression rate gotten during mid-RT, which can only be recognized by 3D ROI measurement. Slow ROI-based regression rates were predicted within the difficult intermediate pre-RT group to classify all treatment complications. The results were not predicted by the mid-RT regression rate depend on the base diameter measurement. Of all the measurement methods, the initial-RT and post-RT measurements were the least informative. The first findings show that a simple diameter-based estimate obtained from film hardcopy can be used to determine the initial tumor size for the treatment performance prognosis in cervical cancer. When the initial tumor size is intermediate, ROI measurement and additional MRI are required during the RT to objectively measure the tumor regression rates for clinical outcome evaluation. The baseline diameter-based approach of this study is not optimal for evaluating the response throughout treatment [17].Cells usually split and expand to make additional cells only if the body requires them. 
Whenever new cells are not required, this ordered procedure keeps the process going. Those cells may produce a cancer progression, which is a mass of excess tissues. Effective data processing methods are used to diagnose whether the cervix is normal or cancerous. With the help of powerful data processing methods, normal cervical or cancer cervical prognosis is calculated in this study. Predictive data processing is important, especially in the medical field. The regression and classification tree system, the K-learning with Random Forest Tree algorithm to predict a usual or cancerous cervix are all based on this principle. Information was analyzed from NCBI and utilized from a data collection with 500 samples and 61 parameters. The results were shown in the format of predictive trees. As previously indicated, a sample of 100 data with 61 biopsies characteristics was chosen. Depending on the biopsies results, an awareness program is implemented, and a questionnaire is undertaken to track women's changes over this time. A personalized interviewing program was done amongst rural women in diverse locations to obtain data effectively. Patients were screened for cervical cancer in collaboration with JIPMER Hospital. The findings of the biopsy tests were statistically analyzed and submitted to MATLAB for algorithmic verification. To determine the results obtained, 100 test datasets and 60 training packages are divided and presented in different heads. To find the best cervical cancer prognosis, the researchers compared the effectiveness of several methods using measures such as sensitivity, specificity and accuracy. The regression approach was initially used to make predictions. Normal cervical or cancers cervical is two possible side effects of CART binary tree. The GINI index is a separation criterion used to determine the different types of cervical information. After the RFT confirmed the optimal accuracy, a new logic was adopted, namely “mixtures of two techniques.” This is the supervised machine learning group approach. To create the best predictive effect, whitening is utilized as a pre-process in the k-mean cluster. With the CART TREE result, the findings represent 83.87 percent accuracy. To increase forecast efficiency, Random Forest Tree (RFT) is used. To achieve 93.54 percent forecast efficiency using the MATLAB code. Because the K-Means method is useful for estimating datasets, the RFT - K- i.e. learning tree output achieves high accuracy. This approach is unique in that it combines RFT with the K-means method, resulting in a greater accuracy result. This algorithm has been ineffective for the exact prediction of cervical cancer [18].Prevention methods are cheaper than medical treatment in practically all countries. The primary diagnosis of any disease improves the chance of effective treatment for patients rather than the disease diagnosed late in its course. Cervical cancer is caused by a variety of factors, including aging and the use of hormonal contraceptives. Cervical cancer can be diagnosed early, which increases healing and reduces mortality. The study use machine learning methods to improve a method that can precisely and sensitively diagnose cervical cancer. The categorization approach was constructed using the cervical cancer potential risk database from the University of California at Irvine (UCI) using a polling approach that included three classification techniques such as decision tree, random forest and logistic regression. 
To solve the difficulties of asymmetric learning and to decrease the variables that do not disturb the accuracy of the sample, the Minority Surface Model (SMOTE) integrated with the Primary Component Analysis (PCA) approach was used. The excessive fitting problem was avoided using the 10-fold cross-verification approach. The database contains 32 risk factors and four targeted variables (Hinselmann, Cytology, Schiller, and Biopsy). The study found that combining voting classifiers with SMOTE and PCA approaches improves the sensitivity, accuracy, and area of prediction models under the ROC for each of the four target variables. For all target variables, the SMOTE-voting approach increased accuracy, sensitivity and PPA ratios from 0.9 percent to 5.1 percent, 39.2 percent to 46.9 percent, and 2 percent to 29 percent. Furthermore, the PCA technique improved sample performance by eliminating computational processing time. Lastly, after comparing the findings to those of multiple prior research, study exposed that these models were more efficient in identifying cervical cancer based on key assessment criteria. In this method, the correct prediction of the original feature is difficult [19].Another study looked at and proposed an effective and enhanced cervical cancer forecasting model. Previous monitoring and detection methods/tests were complex, time-consuming, and clinical/pathological. Machine learning predicts and diagnoses cervical cancer. For measuring performance in illness diagnosis, an integrative technique of Adaptive Boosting and Genetic Algorithm is applied. To reduce the amount of features, a genetic algorithm is utilized as feature selection. It minimizes both the computing price and the number of components required for diagnosis. Adaptive Boosting is a technique for improving classifier performance. For illness identification, the Support Vector Machine (SVM) and Decision Tree are recommended. For cervical cancer detection, 32 variables are utilized. The set of variables is decreased using a genetic approach, and adaptive boosting is recommended for additional improving performance. For the radial bias function of support vector machine, decision tree, and SVM linear, the improvement in accuracy was 94 percent, the sensitivity was 97 percent–98 percent, the specificity was 93 percent–94 percent, and the accuracy was 93 percent–95 percent. A combined method of adaptive promotion and genetic mechanism is suggested. It requires more time for processing and exact prediction is difficult for the high-noise image [20].Cervical cancer is the most leading cause of mortality, especially in developing countries, although it may be efficiently managed if identified earlier. The goal of this work was to create effective machine-learning-based classification models for early-stage cervical cancer detection utilizing clinical studies. The study used a Kaggle data repository cervical cancer databases that had four different types of aspects including cytology, Hinselmann, biopsy, and Schiller. Those class characteristics were used to divide the database into four groups. This dataset was subjected to three feature modification methods such as sine function, log and Z-score. The performance comparison of many supervised machine learning methods was evaluated. For the biopsies and cytology data, the Random Tree (RT) method performed best, while Random Forest (RF) and Instance-Based K-nearest neighbor (IBk) performed best for Schiller and Hinselmann correspondingly. 
The logarithmic transformation approach to the biopsy dataset worked best, while the sine function worked best for cytology. The Hinselmann database performed well on both logarithm and sine functions, while the Schiller database performed well with the Z-score. Multiple feature selection techniques (FST) approaches have been used for modified datasets to identify and prioritize related risk variables. The findings of this study show that clinical evidence, tuning and relevant computer structure, classification, and machine learning approaches can be effective and accurate. Diagnose cervical cancer in its early stages. This method is inefficient and difficult to predict the exact value [21].Health care providers are now confronting a significant problem in recognizing cervical cancer before it progresses fast. To access the risk variables for predicting cervical cancer by using machine learning classification algorithms. Effective variation of the eight most categorical algorithms for diagnosing cervical cancer using various excellent features selected from the database. Machine learning classifiers such as Decision Tree, Multilayer Perceptron (MLP), K-Near Neighbor and Random Forest, Logistic Recursion, Gradient Boosting, Adabost, and SVC are help to identify the early detection of cervical cancer. To prevent values from disappearing in the database, several procedures are used. A mixture of selecting features approaches including SelectBest, Random Forest, and Chi-square was used to select several excellent properties. Recall, accuracy and f1-score properties are utilized to evaluate the effectiveness of classifiers. MLP outperforms other classification techniques in the range of best-selected features. At database segmentation rates, most classification techniques claim to have the greatest accuracy in the first 25 characteristics. The ratio of correctly classified examples to each sample is shown, and all of the findings are analyzed. Medical practitioners can carry out cervical cancer prediction in an effective manner by using the recommended method. This method has a cumulative loss function, making it difficult to predict cancer [22].To examine whether strain elastography imaging can be used to diagnose and predict treatment outcomes in patients receiving simultaneous chemo-radiotherapy (CCRT) for locally advanced cervical cancer. In a 2015–2016 feasibility assessment, 47 individuals with advanced localized cervical cancer were registered. All patients had CCRT and filtered elastography before, one week, two weeks, and immediately after therapy. MRI was used to evaluate treatment response during diagnosis and following CCRT. Depending on the MRI findings the outcome of treatment can be classified as a full response, partial response, chronic disease or progressive disease. Clinical results have been compared with the rate of strain of normal parametrial tissue and cervical tumor. Of the 47 patients who completed all four exams, 36 were evaluated: 25 were categorized as CR, 11 as PR and 0 as SD/PD. The CR group (F = 87) and the PR group (F = 38) had significantly different strain ratios at different time periods. The CR and PR sets had considerably different strain rates (F = 7.2). At 1 week of treatment, the strain rates in the CR and PR collections varied considerably (p 0.05). 
Week 1 and 2, and post-treatment (all p 0.001) showed a significant decrease in the CR group, whereas week 2 and after treatment (both p 0.05) showed a significant decrease in the PR group, but not at week 1 during CCRT. A prospective combination study was performed to estimate cancer response in women who getting CCRT for cervical cancer. The work demonstrates the ability of strain elastography imaging to monitor and predict tumor response developed by CCRT [23]. ## 3. Materials and Methods ### 3.1. Data Collection Samples of 1500 diagnostic MRI images were collected from an average of 150 female patients aged 55 years and above in online databases of international collaborations on cancer reporting over a period of 30 to 65 years between March 2016 and November 2019, with 600 naval invasions and 900 non-naval invasions. In each case, two MRI methods were captured such as X1DCE MRI, which focused on anatomical features and efficiently measured blood flow in vivo; and X2 weight imaging (X2WI) MRI, which stronger the contrast of the soft tissues. Patients with X1DCE and X2WI MRI assessments prior to surgical treatment; Surgical extraction cases with pathological verification utilized as the typical gold for distinguishing non-invasive and vessel invasion properties of cancer; And all women over 20 years of age were included. Individuals with a history of preoperative treatment, women with no X1DCE or X2WI MRI data, women with no histopathological effects, patients receiving congenital therapies, and very young patients with cases of other cervical diseases or tumors were excluded. A radiologist with 10 years of expertise used 4.0-T scanning with sensitive coded abdominal scrolls of 8-channel arrays to perform preliminary MRI examinations. Before screening these individuals were told to drinking some water to fill their bladder, rest taken for 30 minutes, and bring their respiration under control. Clinical records were reviewed to collect patient data such as patient age, menstrual status, international gynecology and obstetrics stage and tumor type, LN and lymph vascular space invasion histological findings after surgery. Table 1 shows the patients' various characteristics for training and testing phase.Table 1 A selected patient training and testing phase attributes. Patients characteristicsTraining phaseN = 150p-valueTesting phaseN = 55p-valuep∗-value+ive lymphovascular invasion-ive lymphovascular invasion+ive lymphovascular invasion-ive lymphovascular invasionPatients age/year0.600.530.98Average age55565360Age ranges27–5527–6030–5535–65Stages0.62<0.00020.45Early stage IB20 (50.2)40 (52.6)15 (30.4)35 (70.2)Late stage IB15 (42.5)48 (50.1)18 (40.2)20 (40.2)Stage IIB8 (18.5)12 (13.2)12 (52.6)8 (10.2)MRI lymph node status<.0020.0020.70Positive20 (7.9)30 (40.2)10 (55.2)50 (92.1)Negative150 (95.7)52 (68.2)12 (60.8)15 (20.5)Menstrual status0.5420.4420.89Postmenopausal15 (40.3)55 (56.2)6 (30.2)30 (52.7)Premenopausal28 (65.2)48 (50.2)15 (80.5)35 (56.8)Maximum cancer diameter0.0020.0080.55≤5 cm25 (60.2)80 (88.5)8 (52.8)42 (68.5)>5 cm20 (45.6)18 (17.06)10 (54.8)9 (15.9)Lymphovascular invasion<.002.001<.002Positive88 (35.9)46 (59.8)18 (22.6)15 (35.9)Negative170 (59.8)35 (40.8)97 (89.0)28 (70.2) ### 3.2. Data Image Preprocessing Each T1DCE and T2WI image was examined using the ITK-SNAP program by MRI radiologists with 10 and 12 years of experience. 
The ROI for each patient was created at an average size of 30 × 40 pixels per image and included the tumor areas and borders of the cervical cancer located in the cervix. The ROI patch from each MRI image was automatically generated, resized to 256 × 256, and then fed into the deep learning networks. Data augmentation was utilized to train the convolutional neural network models and balance the datasets using the ImageDataGenerator of the Keras module in Python 3.9. Every image was first rescaled and cropped, then shifted up and down and from left to right, and then randomly rotated by 6 degrees around the midpoint. It was thought that pixels beyond the ROI could carry important information for discrimination, because cervical tumor cells travel to neighboring healthy tissues in patients with vascular invasion. To compare ROIs, the array of pixels from the minimum bounding rectangle (MBR) was stretched at different positions (top, bottom, left, and right). The produced images were augmented using the same technique before being passed to the network's input layer. Figure 6 depicts the process flow for cervical cancer prediction.

Figure 6: Step-by-step procedure for cervical cancer prediction.

### 3.3. Convolutional Neural Models for Classification

Unlike standard radiomic techniques, convolutional neural networks (CNNs) take the images themselves as direct inputs instead of handcrafted feature representations, and the network performs self-sufficient extraction and refinement of high-level features and variables. MRI-scanned regions from cervical cancer patients were used as inputs for the end-to-end convolutional network models in this study. The output layer of each model was constructed with two neurons to predict the probability of the case being with or without vessel invasion. Several CNN architectures, including VGGNet, GoogLeNet, Residual Network, and DenseNet, have been used to analyze various radiomic processes. More detailed explanations may be found in the source articles for each CNN model.

To adapt the networks to this work, the first fully connected components in every neural network were exchanged for three additional fully connected layers with 700, 500, and 5 neurons, respectively.
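As a rough illustration of this adaptation, the following Keras sketch builds a two-class AdaptedVGG19-style classifier with the replaced fully connected head and averages the outputs of two such networks. It is only a sketch under stated assumptions: the input shape, the use of untrained rather than pretrained weights, and the placement of the two-neuron softmax after the three added dense layers are our assumptions, and the SE/CBAM attention blocks described in the next paragraph are omitted.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_adapted_vgg19(input_shape=(256, 256, 3)):
    """Sketch of an AdaptedVGG19-style classifier: a VGG19 convolutional base with
    the fully connected head replaced by three dense layers (700, 500, 5 neurons)
    and a two-neuron softmax output for vessel invasion vs. no vessel invasion."""
    base = tf.keras.applications.VGG19(include_top=False, weights=None,  # pretraining is an assumption left out here
                                       input_shape=input_shape)
    x = layers.Flatten()(base.output)
    x = layers.Dense(700, activation="relu")(x)
    x = layers.Dense(500, activation="relu")(x)
    x = layers.Dense(5, activation="relu")(x)
    out = layers.Dense(2, activation="softmax")(x)
    model = models.Model(base.input, out)
    # Training settings follow the description in the next paragraph
    # (Adam optimizer, cross-entropy loss, learning rate 0.0002).
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=2e-4),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model

def ensemble_probability(d1, d2):
    """Ensemble decision D = (d1 + d2) / 2 over the X1DCE- and X2WI-trained networks."""
    return (d1 + d2) / 2.0
```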
The amount of disorder cases properly identified is represented by True Negative (tn) and True Positive (tp). The quantities of disorder cases incorrectly identified were labeled as False Positive (fp) and False Negative (fn).(1)AccuracyAc=tn+tpfn+tn+fp+tp,SensitivitySn=tpfn+tp,SpecificitySp=tnfp+tn.This method's capacity to distinguish between non-vessel invasion and vessel invasion events is reflected in its efficiency. To refer specificity to a model's ability to appropriately distinguish non-vessel invasion. Sensitivity refers to the model's ability to properly distinguish vessel incursion. The median receiver operational characteristics assessment and the area under the ROC curves were also used to assess these approaches. A confusing matrix was created using the Scikit-Learn module to evaluate the classification performance of the suggested approaches. The gradient-weighted glass activation mapping approach was used to create the heatmaps. Algorithm 1 shows the deep learning-based radiomics strategy for cervical cancer prediction.Algorithm 1: Deep learning-based Radiomics strategy for cervical cancer prediction. Input: Test MRI images from datasetsOutput: Prediction of the cervical cancer (normal cell (or) abnormal cell)Initialize the number of specimen (Ns), tumor length (Lt), Image processing (Ip)While (not satisfied the termination condition)fori ranges (0, Ns)Randomly selected the specimen N1, N2, N3…….Ns then perform the operationforj ranges (0, Lt)If rand (0, 1) < rand(0, Lt) = =jPerform the image processing operationelseDo not perform the image processing operationend ifGet the new image (Nsn)end forend forfori in range (0, Ns)If tumor volume (Nsn) > tumor volume (Ns)Update cancer state (normal/abnormal)elseNot update cancer Nsend ifend forend while ## 3.1. Data Collection Samples of 1500 diagnostic MRI images were collected from an average of 150 female patients aged 55 years and above in online databases of international collaborations on cancer reporting over a period of 30 to 65 years between March 2016 and November 2019, with 600 naval invasions and 900 non-naval invasions. In each case, two MRI methods were captured such as X1DCE MRI, which focused on anatomical features and efficiently measured blood flow in vivo; and X2 weight imaging (X2WI) MRI, which stronger the contrast of the soft tissues. Patients with X1DCE and X2WI MRI assessments prior to surgical treatment; Surgical extraction cases with pathological verification utilized as the typical gold for distinguishing non-invasive and vessel invasion properties of cancer; And all women over 20 years of age were included. Individuals with a history of preoperative treatment, women with no X1DCE or X2WI MRI data, women with no histopathological effects, patients receiving congenital therapies, and very young patients with cases of other cervical diseases or tumors were excluded. A radiologist with 10 years of expertise used 4.0-T scanning with sensitive coded abdominal scrolls of 8-channel arrays to perform preliminary MRI examinations. Before screening these individuals were told to drinking some water to fill their bladder, rest taken for 30 minutes, and bring their respiration under control. Clinical records were reviewed to collect patient data such as patient age, menstrual status, international gynecology and obstetrics stage and tumor type, LN and lymph vascular space invasion histological findings after surgery. 
## 4. Results and Discussion

### 4.1. Performance Classification in Various Configurations

Recent studies have found that combining data from multiple imaging methods increases discriminative performance compared with using individual methods. Since the X1DCE and X2WI MRI datasets both provide rich and varied signal intensity within the tumor, convolutional neural network-based radiomic algorithms were developed for both in this work. The effectiveness of each approach for vessel-invasion discrimination is shown in Table 2.
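As background for the combined configuration evaluated below, the following is a minimal sketch of the decision-level fusion D = (d1 + d2)/2 described in Section 3.3, scored with Scikit-Learn; the per-patient probabilities and labels are hypothetical stand-ins rather than the study's actual outputs.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def fuse_probabilities(d1, d2):
    """Decision-level fusion from Section 3.3: D = (d1 + d2) / 2."""
    return (np.asarray(d1) + np.asarray(d2)) / 2.0

# Hypothetical per-patient vessel-invasion probabilities from the two
# modality-specific AdaptedVGG19 networks (not values from the study).
d1_x1dce = np.array([0.92, 0.15, 0.71, 0.30, 0.64])   # X1DCE branch
d2_x2wi  = np.array([0.81, 0.25, 0.55, 0.42, 0.70])   # X2WI branch
y_true   = np.array([1,    0,    1,    0,    1])

fused = fuse_probabilities(d1_x1dce, d2_x2wi)
print("ensemble AUC:", roc_auc_score(y_true, fused))
```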
The findings for each model are presented, including the average AUC, sensitivity, accuracy, and specificity. As seen in Table 2, X1DCE consistently outperforms X2WI, indicating that the X1DCE data are more informative than the X2WI data. Furthermore, the sensitivity obtained with X2WI was lower than that obtained with X1DCE in every configuration, showing higher error rates for X2WI. The primary advantage of X1DCE MRI is its ability to estimate blood flow in vivo by depicting vascular density and permeability and estimating the volume transfer constant, which depends on the permeability of the tumor vasculature; all of this provides more discriminative information about vessel invasion in cervical cancer. X2WI provides anatomical information by imaging soft tissues at high resolution to reveal tumor morphological characteristics. These findings remain difficult to interpret, however, because there are no specific indicators for assessing vascular infiltration in cervical cancer on preoperative MRI. Compared with ResNet50-v2, Inception-v3, and DenseNet, the AdaptedVGG networks produced higher accuracy and AUC values on both the X1DCE and X2WI data. Given the small ROIs extracted from the MRI images, the relatively simple topology of the AdaptedVGG network may help limit overfitting compared with the more sophisticated CNN architectures. The deeper designs have been reported to allow greater accuracy in clinical image diagnosis, which contradicts the present findings. This can be related to model size: AdaptedResNet50 and AdaptedVGG19 have 80,402,590 and 84,922,700 trainable parameters, respectively. AdaptedResNet50 performed worse than AdaptedVGG19, which may be due to greater overfitting. There is a trade-off between dataset size and model capacity: when the dataset is large enough to train a large model effectively, a deeper structure is more likely to perform better, whereas overfitting becomes a problem when the dataset is too small to sustain training. The small number of images in this investigation may therefore have limited the effectiveness of the larger models. An AUC of 0.880 was achieved by combining the results of the X1DCE and X2WI data using the deep ensemble learning approach. A previous radiomics nomogram built on handcrafted features from X1DCE MRI to predict vascular invasion status reached a pretreatment AUC of 0.95, and a logistic regression model built on handcrafted X2WI MRI features reached an AUC of 0.710. The maximum AUC in this experiment reached 0.911 with the recommended ensemble technique, which combines both MRI methods. This is in line with previous research focused on strengthening the discriminative capability of networks and improving classification performance. The SE component used in this study can reweight the most informative feature channels, while the CBAM component integrates channel attributes with spatial information. These findings suggest that attentive ensemble methods may be useful for predicting vascular invasion in cervical cancer. Combining the ROC curves, which relate the false-positive and true-positive rates, gives a more complete picture. For the X1DCE MRI, as shown in the performance evaluation in Figure 9, the curves of the X1DCE and X2WI models are always above those of the other structures, and the X1DCE and X2WI models worked better than AdaptedVGG16.
This demonstrates that CNN models of different depths learn features at different levels, and that deeper networks performed somewhat better than shallower ones. With respect to specificity, AdaptedResNet50-v2 clearly surpassed AdaptedInception-v3 and AdaptedDenseNet121, whereas the AdaptedInception-v3 curves are generally lower than the others. Similar findings were observed for predictive performance on X2WI MRI, with the exception of AdaptedResNet50-v2. The ROC curve of the ensemble learning (EL) model combining the X1DCE and X2WI data reaches an optimal average AUC of 0.95. Figures 10 and 11 depict the evaluation of the different techniques in terms of accuracy and sensitivity.

Figure 9: Performance evaluation of the various techniques using receiver operating characteristic curves.
Figure 10: Accuracy of the different approaches.
Figure 11: Sensitivity of the different approaches.

### 4.2. The Peri-Tumor Area's Effect on Estimating Vessel Invasion

Tumor cells can spread to the pelvic region, where they can enter the blood or lymphatic vessels and reach other tissues, leading to invasion and metastasis of cervical cancer. To test the discriminative value of peri-tumor pixels for the vascular-invasion status of cervical cancer, a set of input patches was created by extending the minimum bounding rectangle (MBR) of the ROI by 10 to 60 pixels in all directions on the X1DCE MRI images. Three patches with different expansion margins around the ROI's MBR were created for each MRI image and used as training examples for the EL model. Figure 12 shows the confusion matrix separating non-vessel invasion from vessel invasion obtained with the X1DCE MRI data and the EL model.

Figure 12: Confusion matrix assessment for vessel and non-vessel invasion.

Compared with the AUC values achieved by the same model trained on patches expanded by 10 and 60 pixels from the MBR of the original ROIs, the EL model trained on patches enlarged by 30 pixels reached the largest AUC. Table 3 summarizes the performance of the EL approach with the different patch sizes. These findings indicate that the peri-tumor region plays a significant part in the final classification of vascular invasion in cervical cancer. A possible explanation is the following: the rapid growth of microvessels inside the tumor before vessel invasion causes the tumor microvasculature to extend into neighboring tissues and produce subtle morphological changes there. Radiologists can perceive such changes in the surrounding tissue only through regional-scale features assessed by visual inspection, whereas the present convolutional neural network algorithms can detect pixel-level properties and capture certain connections between morphological and pathological characteristics.

Table 2: Performance evaluation of the various CNN models (Ac = accuracy; Sn = sensitivity; Sp = specificity).
| Method | ROC AUC | Ac | Sn | Sp |
| --- | --- | --- | --- | --- |
| AdaptiveVGG19/X1DCE | 0.79 | 0.75 | 0.65 | 0.78 |
| AdaptiveVGG16/X1DCE | 0.82 | 0.68 | 0.74 | 0.75 |
| AdaptiveInceptionV3/X1DCE | 0.78 | 0.81 | 0.85 | 0.65 |
| AdaptiveVGG16/X2WI | 0.85 | 0.68 | 0.88 | 0.60 |
| AdaptiveResNet50-V2/X1DCE | 0.89 | 0.78 | 0.68 | 0.87 |
| AdaptiveInceptionV3/X2WI | 0.92 | 0.69 | 0.66 | 0.88 |
| AdaptiveResNet50-V2/X2WI | 0.88 | 0.80 | 0.70 | 0.92 |
| AdaptiveDenseNet121/X2WI | 0.87 | 0.88 | 0.72 | 0.77 |
| AdaptiveDenseNet121/X1DCE | 0.92 | 0.77 | 0.87 | 0.84 |
| AdaptiveVGG19-SE/X2WI | 0.89 | 0.85 | 0.79 | 0.82 |
| AdaptiveVGG19-SE/X1DCE | 0.94 | 0.89 | 0.88 | 0.88 |
| AdaptiveVGG19-CBAM/X2WI | 0.85 | 0.91 | 0.82 | 0.89 |
| AdaptiveVGG19-CBAM/X1DCE | 0.90 | 0.87 | 0.79 | 0.88 |
| Proposed A-EL/X1DCE + X2WI | 0.95 | 0.96 | 0.91 | 0.94 |

By combining the predictive outcomes from the X1DCE and X2WI MRI datasets, the recommended ensemble structure improved the predictive performance. The technique was inspired by the fact that radiologists make diagnostic decisions based on a thorough examination of several imaging methods. These findings point to attentive ensemble methods as a potential way of integrating multiparametric MRI data into diagnostic and therapeutic applications. Furthermore, CNN-based radiomic systems are adaptable: new classification tasks can be addressed using a pre-trained framework with previously learned weights and other parameters, which is convenient and useful in practice. This is another important reason the investigators used CNN networks for this task (Table 3).

Table 3: Performance evaluation for the different pixel expansions.

| Pixel expansion | AUC | Ac | Sn | Sp |
| --- | --- | --- | --- | --- |
| 10 pixels | 0.78 | 0.77 | 0.56 | 0.89 |
| 30 pixels | 0.90 | 0.89 | 0.92 | 0.97 |
| 60 pixels | 0.88 | 0.65 | 0.97 | 0.78 |
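To illustrate the peri-tumor patch construction evaluated in Table 3, the following is a minimal sketch of expanding an ROI's minimum bounding rectangle by a fixed pixel margin and cropping the enlarged patch from a slice; the array sizes, the `expand_mbr` helper, and the example ROI are hypothetical, not the study's actual preprocessing code.

```python
import numpy as np

def expand_mbr(bbox, margin, height, width):
    """Expand an ROI's minimum bounding rectangle by `margin` pixels on all
    sides, clipped to the image borders. bbox = (row0, col0, row1, col1)."""
    r0, c0, r1, c1 = bbox
    return (max(r0 - margin, 0), max(c0 - margin, 0),
            min(r1 + margin, height), min(c1 + margin, width))

def crop_patch(image, bbox):
    """Crop the expanded rectangle from a 2-D MRI slice."""
    r0, c0, r1, c1 = bbox
    return image[r0:r1, c0:c1]

# Hypothetical 512 x 512 slice with a roughly 30 x 40 pixel ROI, as in Section 3.2.
slice_2d = np.random.rand(512, 512).astype(np.float32)
roi_bbox = (240, 250, 270, 290)

for margin in (10, 30, 60):                      # the three expansions of Table 3
    patch = crop_patch(slice_2d, expand_mbr(roi_bbox, margin, *slice_2d.shape))
    print(f"margin {margin:>2}px -> patch shape {patch.shape}")
```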
## 5. Conclusion

Utilizing multi-parametric MRI data, this research presents deep learning-based radiomic approaches that can differentiate between non-vessel invasion and vessel invasion in cervical cancer.
Specifically, the research focuses on vessel invasion. These findings provide evidence that convolutional neural network-based radiomics techniques can accurately forecast vascular invasion in early-stage cervical cancer. In addition, these methods do not require time-consuming manual operations such as segmentation, feature construction, or feature selection. By utilizing attentive ensemble learning, a high level of prediction accuracy was achieved. This method holds considerable promise for supporting future clinical applications.

---
*Source: 1008652-2022-09-28.xml*
# Predicting the Spread of Vessels in Initial Stage Cervical Cancer through Radiomics Strategy Based on Deep Learning Approach

**Authors:** Piyush Kumar Pareek; Prasath Alais Surendhar S; Ram Prasad; Govindaraj Ramkumar; Ekta Dixit; R. Subbiah; Saleh H. Salmen; Hesham S. Almoallim; S. S. Priya; S. Arockia Jayadhas

**Journal:** Advances in Materials Science and Engineering (2022)

**Category:** Engineering & Technology

**Publisher:** Hindawi

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2022/1008652
--- ## Abstract Novel methods and materials are used in healthcare applications for finding cancer in various parts of the human system. To select the most suitable therapy plan for individuals with domestically progressed cervical cancer, robustness metrics are required to estimate their early phase. The goal of the research is to increase the effectiveness of cervical cancer patients' detection by using deep learning-based radiomics assessment of magnetic resonance imaging (MRI). From March 2016 to November 2019, 125 patients with early-stage cervical cancer provided 980 dynamic X1 contrast-enhanced (X1DCE) and 850 X2 weighted imaging (X2WI) MRI images for training and testing. A convolutional neural network model was used to estimate cervical cancer state based on the specified characteristics. The X1DCE exhibited high discriminative outcomes than X2WI MRI in terms of prediction ability, as calculated by the confusion matrix assessment and receiver operating characteristic (ROC) curve approach. The mean maximum region under the curve of 0.95 was found using an attentive ensemble learning method that included both MRI sequencing (Sensitivity = 0.94, Specificity = 0.94, and accuracy = 0.96). Whenever compared with conventional radiomic approaches, the results show that a variety of radiomics based on deep learning might be created to help radiologists anticipate vascular invasion in patients with cervical cancer before surgery. Based on radiomics technique, it has proven to be an effective tool for estimating cervical cancer in its early stages. It would help people choose the best therapy method for them and make medical judgments. --- ## Body ## 1. Introduction Exposure to various chemical and physical agents is a typical environmental problem that contributes to cancer mortality. The cervical cancer is the second most common disease in women, with more than half of the million patients are identified every year throughout the globe. Since most instances happen in developing nations and present at an early incurable phase, more than 350,000 people will lost their life as a result of their illness. Every year, around 38,000 patients are reported in Europe, with more than two-thirds anticipated to be treated and live. Survival rates will differ by nation, dependent on treatment centers and, more significantly, whether a screened program has been developed. By reducing the possibility of invasive carcinoma by treating pre-malignant cervical intraepithelial neoplasia before it grows into a really aggressive and dangerous tumor, these approaches will discover the initial phase of illness [1]. Cervical cancer is one of the most prevalent forms of cancer that may affect a woman's reproductive system, and it represents a significant danger to a woman's life and health. The stages of cervical cancer that occur before treatments are required for identifying the available medical treatment choices and for making medical treatment predictions. Tumors that have invaded the parametrium of the cervical canal may only be treated with radiochemotherapy, whereas malignancies of the cervical canal that do not affect the parametrium can be healed surgically. The presence of parametrial enlargement in cervical cancer is connected to an increased risk of recurrence and worse survival following therapy [2].Penetration of the women cervix infects the cervix deeper tissues. 
The cervical cancer has the possibility to spread to other functions of the body, including the liver, rectum, bladder, lungs, and genitals. Normal cervical cells grow, reproduce at a certain rate, and then die, causing changes in their DNA. Furthermore, unfavorable cell mutations are exposed, leading to cells violating their control and refusing to die, resulting in the formation of differentiated cells. Abnormal discomfort after intercourse, vaginal bleeding after intercourse, and vaginal discharge after menopause are the most well-known indications of cervical cancer. Figure1 depicts the various symptoms of cervical cancer. The most widespread risk variables are early sexual activity, multiple sexual partners, compromised immune system, sexually transmitted diseases, and exposure to smoking and miscarriage vaccines [3]. Figure 2 depicts the various risk factors for cervical cancer rising to women. Consequently, accurate diagnosis of cervical cancer with parametrial invasion is important in clinical practice. Conventional magnetic resonance imaging (MR) imaging and gynecological examination are commonly used to evaluate parameter amplification. Traditional imaging characteristics such as full-thickness disruption of normal cervical stroma in T2-weighted images and nodular lesions extending to neighboring parameters were previously thought to be parameter invasion; however, image processing is a standard function. In medical care, an objective and measurable approach to measuring criterion penetration is required [4].Figure 1 Basic symptoms of cervical cancer.Figure 2 Various risk factors for cervical cancer.In 2012, there were approximately 530,000 newly diagnosed cervical cancers and 270,000 deaths worldwide. Undeveloped nations confront the second most common malignancy and the third major cause of cancer death among women. The highest percentages are in Melanesia, Sub-Saharan Africa, Latin America, and the Caribbean. New Zealand, West Asia, and North America have the lowest. Cervical cancer kills more than 92 percent of women in developed countries: 28,000 in Latin America, 60,000 in Africa and 150,000 in Asia. India, the world's second most populous country, accounts for 26% of cervical cancer deaths (70,000 deaths). Cervical cancer is the leading cause of death in women in eastern, central, Melanesia, and southern Africa. Cervical cancer rates vary widely across the country due to variations in access to surveillance, which enables early detection and removal of lesions and the incidence of human papilloma virus (HPV) disease. The prevalence of HPV infection (of all forms) varies explicitly, from 16% in Latin America, 21% in Africa, 5% in North America and the Caribbean to 9% in Asia [5].As per a WHO report, cervical cancer is most likely the reason of cancer in women in developing countries. Although clinical centers, thousands of extra cases were reported in the United States in 2016, compared to much greater than 20K in 2014. The dataset of cervical cancer provides almost 800 data specimens, 32 features, and 4 targets from the 2016–17 reporting period. Overall traits, tobacco activities, and previous health histories are all important aspects. The abundance of screening and diagnostic procedures, each of which may produce such a diverse set of outcomes, contributes to the complexity of the data. As a consequence of this, determining how the woman's element will behave and selecting the most appropriate screening strategy are both crucial challenges. 
As a direct consequence of this, the procedure of identifying the most suitable principal channel constitutes the primary obstacle in the endeavor of measuring a woman's exposure to risk factors. A lot of academics have looked at data on cervical cancer that was compiled from a variety of different sources. The major risk factors for the spread of cervical cancer include improper menstrual hygiene, having children at a young age, smoking, and a lack of preventative measures for mouth cancer. The tumor phase, initial weight mass, and histological grading are all variables that affect the prediction. Therapy is made up of four phases of illness as established by the International Federation of Gynecology and Obstetrics (FIGO) scoring scheme. Surgery or radiation therapy is utilized to handle patients with stage IIA or less. Initial stage cancer patients may need a radical hysterectomy, radiation treatment, or sometimes both. People diagnosed with stage IIB or higher, on the other hand, receive only radiation therapy. Stage IIA disease without stage parametrial involvement and stage IIB disease, in which parametrial involvement are the main difference in stages. Figure3 represents the different stages of cervical cancer. Although lymph node metastasis is not comprised in the formal FIGO scoring system, they are an important prognostic factor. The TNM staging approach to cervical cancer involves nodal status. Unilateral and bilateral parameters are other prognosis variables for the occurrence of pelvic wall disease [6].Figure 3 Different stages of cervical cancer.The size of the tumor at prognosis, the size of the high-risk clinical objective during bronchial therapy, and the duration of therapeutic effects are linked to the potential of local democracy. Furthermore, in the epoch of image-guided responsive therapy, it is essential to check the outcome evaluation, especially for those at significant risk of local recurrence, and to intensify medication (or) radio sensitizing Agent who is applicants for clinical trials. On the other hand, identifying individuals at low risk of local recurrence may be clinically important. Clinical imaging is essential in the primary assessment and condition of victims and in the treatment of treatment options. Because of its great resolution, functional imaging capabilities, and excellent soft-tissue contrast magnetic resonance imaging (MRI) is the gold typical for the pre-treatment evaluation of gynecologic malignant T-status [7]. Radiation therapy and concurrent chemotherapy with cisplatin-based chemotherapy is a standard therapy for women with metastatic cervical cancer, as per the NCCN recommendations; the 5-year life expectancy can approach 60–80 percent. If first-line CCRT failed, though, the longer CCRT therapy time will unavoidably delay the start of other possibly beneficial therapies. Furthermore, CCRT has several adverse effects. Additional pelvic irradiated can induce myelosuppression because it damages the bones, which comprise more than 50% of the body's proliferative functional bone marrow mass. Platinum-based CCRT can worsen myelosuppression, although it is less effective if therapy is initiated or interrupted. As a result, predicting CCRT responsiveness before therapy may help to determine whether CCRT should be used as first-line therapy. Furthermore, by identifying individuals who are most susceptible to CCRT, responder prognosis can lead to individualized therapy [8]. 
Figure 4 describes the treatment option available for cervical cancer.Figure 4 Treatment option available for cervical cancer.The Ministry of Health of Bangladesh has launched the 5-year national approach for family welfare cervical cancer control and prevention Initiative, which will run from 2016 to 2021. The WHO considers invasive cervical cancer to be the fourth most widespread and second leading cause of cancer among Bangladeshi women aged 20 to 50 years. Each year, approximately 12,000 new cases are identified, and the severity of the condition exceeds 6,000. In Bangladesh, nearly 4.4 percent of the general population has a higher risk of developing cervical HPV16/18 infections at any given time, and HPVs 16 and 18 are responsible for 80.3 percent of invasive cervical malignancies [9]. Figure 5 depicts the mortality rate of cervical cancer during the year from 1990 to 2020. Targeted treatment requires imaging. Radiomics collects huge volumes of information from high-performance clinical pictures to extract attributes for unbiased, measured investigation of diseased biological activities [10]. Radiomics have been intensively explored in tumor detection, differentiated diagnostic tests, prognostic assessment, and therapy outcome prediction in recent decades. This approach has been used to estimate LNM in breast cancer, bladder cancer, biliary tract cancer, and colorectal cancer before surgery [11]. Some research has looked at the effectiveness of radiomics in calculating LNM in cervical cancer. The radiomics properties of positron emission tomography (PET) were linked to the expression of VEGF in cervical cancer in a prior study. This observations suggest that a radiomics system based on health images may be used to predict VEGF expression [12].Figure 5 Mortality rate for cervical cancer from 1990 to 2020.Radiomics is a novel approach for obtaining high-throughput data from normal clinical pictures. The radiomics nomogram was used to identify LNM status by collecting measurable characteristics from CT images connected to colorectal, bladder, esophageal, breast, lung adenocarcinoma, and thyroid. It functioned well [13]. Radiomics is a rapidly expanding field of science that uses image collections of high-dimensional characteristics taken from routinely obtained cross-sectional images to produce data that semantic assessment would otherwise lose. Radiomics records the cystic and necrotic patches within the tumor volume that are typical of tumoral heterogeneity, as well as behavior that characterizes aggression and therefore results. Radiomics is an area of research that uses mathematical modeling to extract qualitative information from clinical images in order to create prediction methods that may be used to estimate treatment prognostic and survivability, with preliminary results reflecting a wide range of medical results [14].In comparison to nonmedical databases, healthcare sets of data feature more attributes and partial information. This is critical to establish the essential and useful properties for quantified approach building by enhancing type. Although deep learning approaches are stronger in forecasting and effective tweaking, they have been frequently utilized in cancer research. An investigation found that long-term HPV infection is the main reason for cervical cancer [15]. 
Machine learning is a technique that leverages previously established diagnosing characteristics as factors, such as morphological or textures, and needs factors pre-selected by people to do categorization jobs. Deep learning, on the other hand, retrieves whatever the system defines as essential factors directly from the training phase, avoiding the preconceptions that come with past human analysis. This will eventually offer physicians with methods that will aid in the proper detection of cervical cancer [16]. In this study, deep learning-based VGG19network is used for the early detection of cervical cancer. ## 2. Related Works Imaging-based tumor size predicts cervical cancer radiation response before, during, and after treatment. Various imaging-based tumor size measurement approaches and time have not been examined. To compare the diagnostic usefulness of orthogonal diameter-based elliptical tumor volume measurement vs. contoured tracing evaluation 3D tumor volumetric. 60 patients (stages IB2-IVB/recurrent) with advanced cervical cancer underwent continuous MRI exams throughout early RT, mid-RT, and follow-up. In the computer workstation, the measurement based on ROI was calculated by monitoring the whole tumor area on each MR piece. Three orthogonal diameters (a1, a2, a3) were determined on image hard copies to calculate the volume as an ellipse for the diameter-based “elliptical volume”. These results were compared between the two measurement techniques, and the series tumor sizes and regression rates calculated with each approach were linked to local management, disease, and survival time. The average duration of treatment was 5 years. A mid-treatment MRI scan using 3D ROI volumetry is the best approach and time point for estimating tumor size to predict tumor regression rate. Both the diameter-based technique and the ROI measurement had to predict accuracy equivalent to pre-RT tumor size, especially in patients with small and large RT tumors. Tumor size prior to RT was determined by any approach and, on the other hand, showed a significantly lower prognostic value for intermediate-sized tumors that accounted for most patients. The largest result of predicting local control and disease survival rates is the tumor regression rate gotten during mid-RT, which can only be recognized by 3D ROI measurement. Slow ROI-based regression rates were predicted within the difficult intermediate pre-RT group to classify all treatment complications. The results were not predicted by the mid-RT regression rate depend on the base diameter measurement. Of all the measurement methods, the initial-RT and post-RT measurements were the least informative. The first findings show that a simple diameter-based estimate obtained from film hardcopy can be used to determine the initial tumor size for the treatment performance prognosis in cervical cancer. When the initial tumor size is intermediate, ROI measurement and additional MRI are required during the RT to objectively measure the tumor regression rates for clinical outcome evaluation. The baseline diameter-based approach of this study is not optimal for evaluating the response throughout treatment [17].Cells usually split and expand to make additional cells only if the body requires them. Whenever new cells are not required, this ordered procedure keeps the process going. Those cells may produce a cancer progression, which is a mass of excess tissues. Effective data processing methods are used to diagnose whether the cervix is normal or cancerous. 
With the help of powerful data processing methods, normal cervical or cancer cervical prognosis is calculated in this study. Predictive data processing is important, especially in the medical field. The regression and classification tree system, the K-learning with Random Forest Tree algorithm to predict a usual or cancerous cervix are all based on this principle. Information was analyzed from NCBI and utilized from a data collection with 500 samples and 61 parameters. The results were shown in the format of predictive trees. As previously indicated, a sample of 100 data with 61 biopsies characteristics was chosen. Depending on the biopsies results, an awareness program is implemented, and a questionnaire is undertaken to track women's changes over this time. A personalized interviewing program was done amongst rural women in diverse locations to obtain data effectively. Patients were screened for cervical cancer in collaboration with JIPMER Hospital. The findings of the biopsy tests were statistically analyzed and submitted to MATLAB for algorithmic verification. To determine the results obtained, 100 test datasets and 60 training packages are divided and presented in different heads. To find the best cervical cancer prognosis, the researchers compared the effectiveness of several methods using measures such as sensitivity, specificity and accuracy. The regression approach was initially used to make predictions. Normal cervical or cancers cervical is two possible side effects of CART binary tree. The GINI index is a separation criterion used to determine the different types of cervical information. After the RFT confirmed the optimal accuracy, a new logic was adopted, namely “mixtures of two techniques.” This is the supervised machine learning group approach. To create the best predictive effect, whitening is utilized as a pre-process in the k-mean cluster. With the CART TREE result, the findings represent 83.87 percent accuracy. To increase forecast efficiency, Random Forest Tree (RFT) is used. To achieve 93.54 percent forecast efficiency using the MATLAB code. Because the K-Means method is useful for estimating datasets, the RFT - K- i.e. learning tree output achieves high accuracy. This approach is unique in that it combines RFT with the K-means method, resulting in a greater accuracy result. This algorithm has been ineffective for the exact prediction of cervical cancer [18].Prevention methods are cheaper than medical treatment in practically all countries. The primary diagnosis of any disease improves the chance of effective treatment for patients rather than the disease diagnosed late in its course. Cervical cancer is caused by a variety of factors, including aging and the use of hormonal contraceptives. Cervical cancer can be diagnosed early, which increases healing and reduces mortality. The study use machine learning methods to improve a method that can precisely and sensitively diagnose cervical cancer. The categorization approach was constructed using the cervical cancer potential risk database from the University of California at Irvine (UCI) using a polling approach that included three classification techniques such as decision tree, random forest and logistic regression. To solve the difficulties of asymmetric learning and to decrease the variables that do not disturb the accuracy of the sample, the Minority Surface Model (SMOTE) integrated with the Primary Component Analysis (PCA) approach was used. 
The excessive fitting problem was avoided using the 10-fold cross-verification approach. The database contains 32 risk factors and four targeted variables (Hinselmann, Cytology, Schiller, and Biopsy). The study found that combining voting classifiers with SMOTE and PCA approaches improves the sensitivity, accuracy, and area of prediction models under the ROC for each of the four target variables. For all target variables, the SMOTE-voting approach increased accuracy, sensitivity and PPA ratios from 0.9 percent to 5.1 percent, 39.2 percent to 46.9 percent, and 2 percent to 29 percent. Furthermore, the PCA technique improved sample performance by eliminating computational processing time. Lastly, after comparing the findings to those of multiple prior research, study exposed that these models were more efficient in identifying cervical cancer based on key assessment criteria. In this method, the correct prediction of the original feature is difficult [19].Another study looked at and proposed an effective and enhanced cervical cancer forecasting model. Previous monitoring and detection methods/tests were complex, time-consuming, and clinical/pathological. Machine learning predicts and diagnoses cervical cancer. For measuring performance in illness diagnosis, an integrative technique of Adaptive Boosting and Genetic Algorithm is applied. To reduce the amount of features, a genetic algorithm is utilized as feature selection. It minimizes both the computing price and the number of components required for diagnosis. Adaptive Boosting is a technique for improving classifier performance. For illness identification, the Support Vector Machine (SVM) and Decision Tree are recommended. For cervical cancer detection, 32 variables are utilized. The set of variables is decreased using a genetic approach, and adaptive boosting is recommended for additional improving performance. For the radial bias function of support vector machine, decision tree, and SVM linear, the improvement in accuracy was 94 percent, the sensitivity was 97 percent–98 percent, the specificity was 93 percent–94 percent, and the accuracy was 93 percent–95 percent. A combined method of adaptive promotion and genetic mechanism is suggested. It requires more time for processing and exact prediction is difficult for the high-noise image [20].Cervical cancer is the most leading cause of mortality, especially in developing countries, although it may be efficiently managed if identified earlier. The goal of this work was to create effective machine-learning-based classification models for early-stage cervical cancer detection utilizing clinical studies. The study used a Kaggle data repository cervical cancer databases that had four different types of aspects including cytology, Hinselmann, biopsy, and Schiller. Those class characteristics were used to divide the database into four groups. This dataset was subjected to three feature modification methods such as sine function, log and Z-score. The performance comparison of many supervised machine learning methods was evaluated. For the biopsies and cytology data, the Random Tree (RT) method performed best, while Random Forest (RF) and Instance-Based K-nearest neighbor (IBk) performed best for Schiller and Hinselmann correspondingly. The logarithmic transformation approach to the biopsy dataset worked best, while the sine function worked best for cytology. The Hinselmann database performed well on both logarithm and sine functions, while the Schiller database performed well with the Z-score. 
Multiple feature selection techniques (FST) approaches have been used for modified datasets to identify and prioritize related risk variables. The findings of this study show that clinical evidence, tuning and relevant computer structure, classification, and machine learning approaches can be effective and accurate. Diagnose cervical cancer in its early stages. This method is inefficient and difficult to predict the exact value [21].Health care providers are now confronting a significant problem in recognizing cervical cancer before it progresses fast. To access the risk variables for predicting cervical cancer by using machine learning classification algorithms. Effective variation of the eight most categorical algorithms for diagnosing cervical cancer using various excellent features selected from the database. Machine learning classifiers such as Decision Tree, Multilayer Perceptron (MLP), K-Near Neighbor and Random Forest, Logistic Recursion, Gradient Boosting, Adabost, and SVC are help to identify the early detection of cervical cancer. To prevent values from disappearing in the database, several procedures are used. A mixture of selecting features approaches including SelectBest, Random Forest, and Chi-square was used to select several excellent properties. Recall, accuracy and f1-score properties are utilized to evaluate the effectiveness of classifiers. MLP outperforms other classification techniques in the range of best-selected features. At database segmentation rates, most classification techniques claim to have the greatest accuracy in the first 25 characteristics. The ratio of correctly classified examples to each sample is shown, and all of the findings are analyzed. Medical practitioners can carry out cervical cancer prediction in an effective manner by using the recommended method. This method has a cumulative loss function, making it difficult to predict cancer [22].To examine whether strain elastography imaging can be used to diagnose and predict treatment outcomes in patients receiving simultaneous chemo-radiotherapy (CCRT) for locally advanced cervical cancer. In a 2015–2016 feasibility assessment, 47 individuals with advanced localized cervical cancer were registered. All patients had CCRT and filtered elastography before, one week, two weeks, and immediately after therapy. MRI was used to evaluate treatment response during diagnosis and following CCRT. Depending on the MRI findings the outcome of treatment can be classified as a full response, partial response, chronic disease or progressive disease. Clinical results have been compared with the rate of strain of normal parametrial tissue and cervical tumor. Of the 47 patients who completed all four exams, 36 were evaluated: 25 were categorized as CR, 11 as PR and 0 as SD/PD. The CR group (F = 87) and the PR group (F = 38) had significantly different strain ratios at different time periods. The CR and PR sets had considerably different strain rates (F = 7.2). At 1 week of treatment, the strain rates in the CR and PR collections varied considerably (p 0.05). Week 1 and 2, and post-treatment (all p 0.001) showed a significant decrease in the CR group, whereas week 2 and after treatment (both p 0.05) showed a significant decrease in the PR group, but not at week 1 during CCRT. A prospective combination study was performed to estimate cancer response in women who getting CCRT for cervical cancer. The work demonstrates the ability of strain elastography imaging to monitor and predict tumor response developed by CCRT [23]. 
## 3. Materials and Methods

### 3.1. Data Collection

Samples of 1500 diagnostic MRI images were collected from an average of 150 female patients aged 55 years and above, drawn from online databases of international collaborations on cancer reporting, covering a span of 30 to 65 years, between March 2016 and November 2019, with 600 vessel-invasion and 900 non-vessel-invasion cases. For each case, two MRI sequences were captured: X1DCE MRI, which captures anatomical features and efficiently measures blood flow in vivo, and X2-weighted imaging (X2WI) MRI, which provides stronger soft-tissue contrast. The inclusion criteria were: patients with X1DCE and X2WI MRI assessments prior to surgical treatment; surgical resection cases with pathological verification, used as the gold standard for distinguishing non-vessel-invasion and vessel-invasion properties of the cancer; and women over 20 years of age. Individuals with a history of preoperative treatment, women without X1DCE or X2WI MRI data, women without histopathological results, patients receiving other concurrent therapies, and very young patients or those with other cervical diseases or tumors were excluded. A radiologist with 10 years of experience performed the preliminary MRI examinations on a 4.0-T scanner with an 8-channel sensitivity-encoded abdominal coil array. Before scanning, the individuals were told to drink some water to fill their bladder, rest for 30 minutes, and bring their breathing under control. Clinical records were reviewed to collect patient data such as age, menstrual status, International Federation of Gynecology and Obstetrics stage, tumor type, and postoperative histological findings on lymph node (LN) and lymphovascular space invasion. Table 1 shows the patients' characteristics for the training and testing phases.

Table 1: Attributes of the selected patients in the training and testing phases (values are n (%) unless noted otherwise; +LVI / −LVI = positive / negative lymphovascular invasion).

| Patient characteristics | Training (N = 150), +LVI | Training, −LVI | p-value | Testing (N = 55), +LVI | Testing, −LVI | p-value | p*-value |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Patient age (years) | | | 0.60 | | | 0.53 | 0.98 |
| Average age | 55 | 56 | | 53 | 60 | | |
| Age range | 27–55 | 27–60 | | 30–55 | 35–65 | | |
| Stage | | | 0.62 | | | <0.0002 | 0.45 |
| Early stage IB | 20 (50.2) | 40 (52.6) | | 15 (30.4) | 35 (70.2) | | |
| Late stage IB | 15 (42.5) | 48 (50.1) | | 18 (40.2) | 20 (40.2) | | |
| Stage IIB | 8 (18.5) | 12 (13.2) | | 12 (52.6) | 8 (10.2) | | |
| MRI lymph node status | | | <.002 | | | 0.002 | 0.70 |
| Positive | 20 (7.9) | 30 (40.2) | | 10 (55.2) | 50 (92.1) | | |
| Negative | 150 (95.7) | 52 (68.2) | | 12 (60.8) | 15 (20.5) | | |
| Menstrual status | | | 0.542 | | | 0.442 | 0.89 |
| Postmenopausal | 15 (40.3) | 55 (56.2) | | 6 (30.2) | 30 (52.7) | | |
| Premenopausal | 28 (65.2) | 48 (50.2) | | 15 (80.5) | 35 (56.8) | | |
| Maximum cancer diameter | | | 0.002 | | | 0.008 | 0.55 |
| ≤5 cm | 25 (60.2) | 80 (88.5) | | 8 (52.8) | 42 (68.5) | | |
| >5 cm | 20 (45.6) | 18 (17.06) | | 10 (54.8) | 9 (15.9) | | |
| Lymphovascular invasion | | | <.002 | | | .001 | <.002 |
| Positive | 88 (35.9) | 46 (59.8) | | 18 (22.6) | 15 (35.9) | | |
| Negative | 170 (59.8) | 35 (40.8) | | 97 (89.0) | 28 (70.2) | | |

### 3.2. Data Image Preprocessing

Each X1DCE and X2WI image was examined using the ITK-SNAP program by MRI radiologists with 10 and 12 years of experience. The ROI for each patient averaged roughly 30 × 40 pixels per image and included the tumor area and the borders of the cervical cancer within the cervix. The ROI patch from each MRI image was automatically generated, resized to 256 × 256, and then fed into the deep learning networks. Data augmentation was used to train the convolutional neural network models and to balance the datasets, using the image data generator of the Keras module in Python 3.9.
Every image was first rescaled and cropped, then shifted up and down and from left to right, and finally rotated arbitrarily by 6 degrees around the midpoint. Pixels beyond the ROI were thought to carry important discriminative information, because cervical tumor cells travel to neighboring healthy tissues in patients with vascular invasion. To compare ROIs, the pixel array of the minimum bounding rectangle (MBR) was expanded in different directions (top, bottom, left, and right). The produced images were augmented using the same technique before being passed to the network's input layer. Figure 6 depicts the process flow for cervical cancer prediction.

Figure 6: Step-by-step procedure for cervical cancer prediction.

### 3.3. Convolutional Neural Models for Classification

Unlike standard radiomic techniques, CNNs (convolutional networks) take the images themselves as direct inputs rather than handcrafted feature representations. The algorithm autonomously extracts and refines high-level features and variables. MRI-scanned regions from cervical cancer patients were used as inputs to the end-to-end convolutional network models in this study. The output layer of each model comprised two neurons to predict the probability of vessel invasion versus no vessel invasion. Several CNN architectures, including VGGNet, GoogLeNet, Residual Network, and DenseNet, have been used for various radiomic tasks; more detailed explanations may be found in the source articles for each CNN model. To adapt the networks to this task, the original fully connected components in each network were replaced during the experiments with three new fully connected layers of 700, 500, and 5 neurons, respectively. The Adam optimizer and a cross-entropy loss were used to train all networks with a learning rate of 0.0002. Within the convolutional blocks of the AdaptedVGG19 networks, a squeeze-and-excitation (SE) mechanism and Convolutional Block Attention Module (CBAM) components were added to create AdaptedVGG19-SE and AdaptedVGG19-CBAM, respectively. The integrated decisions of the two separate attention-based AdaptedVGG19 networks were used to develop a deep ensemble learning approach and an attentive ensemble learning approach, respectively. The output probability of vessel invasion versus non-vessel invasion was calculated as D = (d1 + d2)/2, where d1 and d2 are the predicted probabilities of the two AdaptedVGG19 networks using X1DCE and X2WI. Figures 7 and 8 show the architecture of the suggested VGG19 approaches and its inner processing structure.

Figure 7: Schematic diagram of the suggested adaptive VGG19 approach.
Figure 8: Inner structure process flow diagram for the adaptive VGG19.
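The following is a minimal Keras sketch, under stated assumptions, of the two steps just described: an image data generator with small shifts and a 6-degree rotation (Section 3.2), and a VGG19 backbone whose top is replaced by fully connected layers of 700, 500, and 5 units followed by a two-neuron softmax output, trained with Adam at a learning rate of 0.0002 (Section 3.3). The layer sizes, rotation range, and learning rate follow the text; the input shape, shift fractions, pooling choice, and weight initialization are illustrative assumptions rather than the authors' exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation roughly matching Section 3.2: small vertical/horizontal shifts
# and an arbitrary rotation of up to 6 degrees around the image centre.
augmenter = ImageDataGenerator(
    rescale=1.0 / 255,
    width_shift_range=0.05,      # assumed shift fraction (not stated in the paper)
    height_shift_range=0.05,
    rotation_range=6,
)

def build_adapted_vgg19(input_shape=(256, 256, 3), n_classes=2):
    """VGG19 backbone with the replacement head described in Section 3.3."""
    backbone = tf.keras.applications.VGG19(
        include_top=False, weights=None, input_shape=input_shape
    )
    x = layers.Flatten()(backbone.output)
    x = layers.Dense(700, activation="relu")(x)   # replacement FC layers: 700, 500, 5
    x = layers.Dense(500, activation="relu")(x)
    x = layers.Dense(5, activation="relu")(x)
    out = layers.Dense(n_classes, activation="softmax")(x)  # two-neuron output
    model = models.Model(backbone.input, out, name="AdaptedVGG19")
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=2e-4),
        loss="categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

model_x1dce = build_adapted_vgg19()   # one such network per MRI sequence
model_x2wi = build_adapted_vgg19()
# model_x1dce.fit(augmenter.flow(x1dce_train, y_train), epochs=...)  # hypothetical training call
```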
### 3.4. Mechanism for Validity

The effectiveness of the radiomic algorithms was evaluated using 10-fold cross-validation, and accuracy, sensitivity, and specificity were calculated as shown below. The number of correctly identified cases is represented by true negatives (tn) and true positives (tp), while incorrectly identified cases are labeled false positives (fp) and false negatives (fn):

$$\text{Accuracy } (A_c) = \frac{tn + tp}{fn + tn + fp + tp}, \qquad \text{Sensitivity } (S_n) = \frac{tp}{fn + tp}, \qquad \text{Specificity } (S_p) = \frac{tn}{fp + tn}. \tag{1}$$

Accuracy reflects the method's capacity to distinguish vessel-invasion from non-vessel-invasion cases. Specificity refers to a model's ability to correctly identify non-vessel invasion, and sensitivity refers to its ability to correctly identify vessel invasion. Receiver operating characteristic (ROC) analysis and the mean area under the ROC curve (AUC) were also used to assess these approaches. A confusion matrix was created using the Scikit-Learn module to evaluate the classification performance of the suggested approaches, and gradient-weighted class activation mapping (Grad-CAM) was used to create the heatmaps. Algorithm 1 shows the deep learning-based radiomics strategy for cervical cancer prediction.

Algorithm 1: Deep learning-based radiomics strategy for cervical cancer prediction.
Input: test MRI images from the datasets
Output: prediction of cervical cancer status (normal cell or abnormal cell)
Initialize the number of specimens (Ns), tumor length (Lt), and image processing (Ip)
While the termination condition is not satisfied
  for i in range (0, Ns)
    randomly select a specimen from N1, N2, N3, ..., Ns and perform the operation
    for j in range (0, Lt)
      if rand(0, 1) < rand(0, Lt) == j
        perform the image-processing operation
      else
        do not perform the image-processing operation
      end if
      get the new image (Nsn)
    end for
  end for
  for i in range (0, Ns)
    if tumor volume (Nsn) > tumor volume (Ns)
      update the cancer state (normal/abnormal)
    else
      do not update the cancer state Ns
    end if
  end for
end while
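Since the text mentions gradient-weighted class activation mapping for the heatmaps without further detail, the snippet below is a generic Grad-CAM sketch for a Keras CNN such as the adapted VGG19; the layer name `block5_conv4`, the class index, and the dummy input are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
import tensorflow as tf

def grad_cam(model, image, last_conv_layer="block5_conv4", class_index=1):
    """Return a [0, 1] heatmap showing where `model` looks for `class_index`."""
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(last_conv_layer).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_maps, preds = grad_model(image[np.newaxis, ...])
        score = preds[:, class_index]                 # e.g. probability of vessel invasion
    grads = tape.gradient(score, conv_maps)           # gradients w.r.t. the feature maps
    weights = tf.reduce_mean(grads, axis=(1, 2))      # channel-wise importance weights
    cam = tf.reduce_sum(conv_maps * weights[:, None, None, :], axis=-1)
    cam = tf.nn.relu(cam)[0]
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()

# Hypothetical usage with the adapted VGG19 sketched earlier and a dummy patch:
# heatmap = grad_cam(model_x1dce, np.random.rand(256, 256, 3).astype("float32"))
```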
## 3.1. Data Collection

Samples of 1500 diagnostic MRI images were collected from 150 female patients (average age 55 years, ranging from roughly 30 to 65 years) through online databases of international collaborations on cancer reporting, between March 2016 and November 2019, with 600 vessel-invasion and 900 non-vessel-invasion images. For each case, two MRI sequences were captured: X1DCE MRI, which focuses on anatomical features and efficiently measures blood flow in vivo, and X2-weighted imaging (X2WI) MRI, which strengthens the contrast of the soft tissues. Patients were included if they had X1DCE and X2WI MRI assessments prior to surgical treatment, had surgical resection with pathological verification (used as the gold standard for distinguishing the non-vessel-invasion and vessel-invasion properties of the cancer), and were women over 20 years of age. Individuals with a history of preoperative treatment, women with no X1DCE or X2WI MRI data, women with no histopathological findings, patients receiving other therapies, and very young patients or cases with other cervical diseases or tumors were excluded. A radiologist with 10 years of expertise performed the preliminary MRI examinations using 4.0-T scanning with an 8-channel sensitivity-encoded abdominal array coil. Before screening, the individuals were asked to drink some water to fill their bladder, to rest for 30 minutes, and to bring their respiration under control. Clinical records were reviewed to collect patient data such as age, menstrual status, International Federation of Gynecology and Obstetrics stage, tumor type, and postoperative histological findings for lymph node (LN) and lymphovascular space invasion. Table 1 shows the patients' characteristics for the training and testing phases.

Table 1: Selected patient attributes for the training and testing phases. LVI = lymphovascular invasion; values in parentheses are percentages.

| Patient characteristics | Training (N = 150), (+) LVI | Training, (−) LVI | p value | Testing (N = 55), (+) LVI | Testing, (−) LVI | p value | p* value |
|---|---|---|---|---|---|---|---|
| Patient age/year | | | 0.60 | | | 0.53 | 0.98 |
| Average age | 55 | 56 | | 53 | 60 | | |
| Age range | 27–55 | 27–60 | | 30–55 | 35–65 | | |
| Stage | | | 0.62 | | | <0.0002 | 0.45 |
| Early stage IB | 20 (50.2) | 40 (52.6) | | 15 (30.4) | 35 (70.2) | | |
| Late stage IB | 15 (42.5) | 48 (50.1) | | 18 (40.2) | 20 (40.2) | | |
| Stage IIB | 8 (18.5) | 12 (13.2) | | 12 (52.6) | 8 (10.2) | | |
| MRI lymph node status | | | <0.002 | | | 0.002 | 0.70 |
| Positive | 20 (7.9) | 30 (40.2) | | 10 (55.2) | 50 (92.1) | | |
| Negative | 150 (95.7) | 52 (68.2) | | 12 (60.8) | 15 (20.5) | | |
| Menstrual status | | | 0.542 | | | 0.442 | 0.89 |
| Postmenopausal | 15 (40.3) | 55 (56.2) | | 6 (30.2) | 30 (52.7) | | |
| Premenopausal | 28 (65.2) | 48 (50.2) | | 15 (80.5) | 35 (56.8) | | |
| Maximum cancer diameter | | | 0.002 | | | 0.008 | 0.55 |
| ≤5 cm | 25 (60.2) | 80 (88.5) | | 8 (52.8) | 42 (68.5) | | |
| >5 cm | 20 (45.6) | 18 (17.06) | | 10 (54.8) | 9 (15.9) | | |
| Lymphovascular invasion | | | <0.002 | | | 0.001 | <0.002 |
| Positive | 88 (35.9) | 46 (59.8) | | 18 (22.6) | 15 (35.9) | | |
| Negative | 170 (59.8) | 35 (40.8) | | 97 (89.0) | 28 (70.2) | | |

## 3.2. Data Image Preprocessing

Each T1DCE and T2WI image was examined using the ITK-SNAP program by MRI radiologists with 10 and 12 years of experience. The ROI per patient was created at an average size of 30 × 40 pixels per image and included tumor areas and borders of cervical cancer located in the cervix. The ROI patch from each MRI image was automatically generated, resized to 256 × 256, and then fed into the deep learning networks. Data augmentation was utilized to train the convolutional neural network models and balance the datasets, using the image data generator of the Keras module in Python 3.9.
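The augmentation step mentioned above can be illustrated with the Keras ImageDataGenerator. The parameter values below are assumptions chosen to loosely mirror the shifts, flips, and small rotations described earlier; they are not the authors' exact configuration.

```python
# Minimal sketch (illustrative settings): augmenting 256 x 256 ROI patches with
# small shifts, horizontal flips, and rotations of up to about 6 degrees.
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rotation_range=6,         # random rotation around the midpoint, up to 6 degrees
    width_shift_range=0.1,    # shift left/right by up to 10% of the width
    height_shift_range=0.1,   # shift up/down by up to 10% of the height
    horizontal_flip=True,
    fill_mode="nearest",
)

# Hypothetical batch of ROI patches: (batch, height, width, channels).
roi_patches = np.random.rand(8, 256, 256, 3).astype("float32")
labels = np.array([0, 1, 0, 1, 1, 0, 0, 1])

# flow() yields augmented batches that can be passed to model.fit().
batches = augmenter.flow(roi_patches, labels, batch_size=4)
x_batch, y_batch = next(batches)
print(x_batch.shape, y_batch.shape)
```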
## 4. Results and Discussion

### 4.1. Performance Classification in Various Configurations

Current studies have found that combining data from multiple methods increases discriminatory performance compared with using individual methods. Since both the X1DCE and X2WI MRI datasets provide rich and varied signal intensity within the cancer, convolutional neural network-based radiomic algorithms were developed in this work. The effectiveness of the approaches for vessel invasion discrimination is shown in Table 2. Findings for each model are presented, including the average AUC value, sensitivity, accuracy, and specificity.
As seen in Table 2, X1DCE consistently outperforms X2WI, indicating that the X1DCE database is more valuable than the X2WI database. Furthermore, the sensitivity obtained with X2WI in every scenario was lower than that obtained with X1DCE, showing higher error rates for X2WI. The primary advantage of X1DCE MRI is its ability to effectively estimate blood flow in vivo by displaying blood vessel density and permeability and by estimating the volume transfer constant, which depends on the permeability of the cancer vasculature; all of this can give more discriminating data on the prognosis of cervical cancer vessel invasion. X2WI provides anatomical data by screening soft tissues at high resolution to reveal tumor morphological characteristics. Furthermore, the current findings are difficult to interpret because there are no specific indicators for judging vascular infiltration in cervical cancer on preoperative MRI.

Compared with ResNet-v2, Inception-v3, and DenseNet, the AdaptedVGG network generated improved accuracy and AUC values for the X1DCE and X2WI databases. Given the small ROIs collected from the MRI images, the relatively simple topology of the AdaptedVGG network can be useful in minimizing overfitting compared with the more sophisticated structures of other CNN models. The other designs had been reported to allow greater accuracy in diagnosing clinical images, which contradicts these findings. This can be explained by the model sizes: AdaptedResNet50 and AdaptedVGG19 have 80,402,590 and 84,922,700 trainable parameters, respectively. AdaptedResNet performed worse than AdaptedVGG19, which may be due to its greater complexity. In terms of dataset size and modeling capacity, there has to be a compromise: when the dataset is large enough to effectively train a large model, the structure is more likely to perform better, whereas overfitting becomes a problem when the dataset is too small to sustain training. The small number of images in our investigation may have had an impact on the effectiveness of the models.

The optimum AUC of 0.880 was achieved by combining the results of the X1DCE and X2WI databases using the deep ensemble learning approach. In earlier work, handcrafted features used to predict vascular invasion status from X1DCE MRI produced a radiomics nomogram approach with an AUC of 0.95, and, similarly, handcrafted features captured from X2WI MRI were used to develop a logistic regression model with an AUC of 0.710. The maximum AUC in this test reached 0.911 using the recommended ensemble technique, which combines both MRI methods. This is in line with previous research that focuses on distinguishing the capabilities of the network and enhancing classification ability. The SE component used in this study can increase the weight of the attributes of the most essential channels, while integrated channel attributes and spatial information are central to the CBAM components. These findings suggest that attentive ensemble methods may be useful in predicting vascular invasion in cervical cancer.

If the ROC curves are combined with the false positive and true positive ratios, a more complete picture can be obtained. For X1DCE MRI, as shown in the performance evaluation of Figure 9, the curves of the X1DCE and X2WI models are consistently higher than those of the other structures, and X1DCE and X2WI worked better than AdaptedVGG16. This demonstrates that CNN models of various depths can learn features at various levels and that networks with more layers performed somewhat better than those with fewer layers. Considering specificity, AdaptedResNet50-v2 surpassed AdaptedInception-v3 and AdaptedDenseNet121, whereas the AdaptedInception-v3 curves are generally lower than the others. Similar findings were made for X2WI MRI in predictive performance, with the exception of AdaptedResNet50-v2. The ROC curve of the EL model combining X1DCE and X2WI data shows an optimal average AUC of 0.95. Figures 10 and 11 depict the evaluation of the different techniques for predicted accuracy and sensitivity.

Figure 9: Performance evaluation of the various techniques using the receiver operating characteristic curve.

Figure 10: Accuracy prediction of the different approaches.

Figure 11: Sensitivity prediction of the different approaches.
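The ROC and AUC comparisons discussed above can be reproduced for any of the models from its predicted probabilities. The following scikit-learn sketch uses hypothetical arrays and is meant only to show the computation, not the authors' evaluation script.

```python
# Minimal sketch: ROC curve and AUC for one model's predicted probabilities
# (label 1 = vessel invasion). The arrays are hypothetical placeholders.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0, 1, 0])
y_prob = np.array([0.2, 0.4, 0.8, 0.7, 0.3, 0.1, 0.9, 0.5, 0.6, 0.2])

fpr, tpr, thresholds = roc_curve(y_true, y_prob)  # false/true positive rates
auc = roc_auc_score(y_true, y_prob)               # area under the ROC curve
print(f"AUC = {auc:.3f}")

# For the two-branch ensemble, average the branch probabilities first,
# e.g. y_prob = (y_prob_x1dce + y_prob_x2wi) / 2, and score them the same way.
```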
### 4.2. The Peri-Tumor Area's Effect on Estimating Vessel Invasion

Tumor cells can spread to the pelvic area, where they can enter the blood or lymphatic vessels and other tissues, leading to invasion and metastasis of cervical cancer. To test the discriminatory ability of peri-tumor pixels for the vascular invasion properties of cervical cancer, a set of image patches was created by extending the ROI's MBR by 10 to 60 pixels in all directions on the X1DCE MRI images. Three patches with different pixel expansions of the ROI's MBR were created for a single MRI image, and they were used as training examples for the EL models. With the X1DCE MRI data and the EL model, Figure 12 represents the confusion matrix separating non-vessel invasion and vessel invasion.

Figure 12: Confusion matrix assessment for vessel and non-vessel invasion.

Compared with the AUC values achieved by the same models trained on patches expanded by 10 and 60 pixels from the MBR of the original ROIs, the EL model trained on patches enlarged by 30 pixels from the ROI reached the largest AUC. Table 3 summarizes the performance of the EL approach with the different patch expansions. These findings indicate that, in cervical cancer, the peri-tumor region plays a significant part in the ultimate classification of vascular invasion. The effect was thought to be explained by the following: the rapid growth of microvasculature inside the tumor before vessel invasion causes the tumor microvessels to expand into neighboring tissues and cause small morphological changes in these tissues. Radiologists can perceive patterns in surrounding tissues only from regional-scale features determined by visual inspection, whereas current convolutional neural network algorithms can detect pixel-level properties and certain connections between morphological and pathological characteristics.
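The peri-tumor patch construction described above amounts to expanding the ROI's minimum bounding rectangle by a fixed number of pixels on every side before cropping. The sketch below is illustrative only; the function and variable names are assumptions, not the authors' code.

```python
# Minimal sketch: expand an ROI's minimum bounding rectangle (MBR) by `margin`
# pixels on all sides (clipped to the image), then crop the peri-tumor patch.
import numpy as np

def crop_expanded_mbr(image, mbr, margin):
    """image: 2D array; mbr: (top, bottom, left, right) of the ROI; margin: pixels."""
    top, bottom, left, right = mbr
    h, w = image.shape
    top = max(0, top - margin)
    bottom = min(h, bottom + margin)
    left = max(0, left - margin)
    right = min(w, right + margin)
    return image[top:bottom, left:right]

# Hypothetical slice and ROI box; the margins 10, 30, and 60 mirror the expansions tested above.
mri_slice = np.random.rand(512, 512)
roi_mbr = (200, 240, 180, 230)
patches = {m: crop_expanded_mbr(mri_slice, roi_mbr, m) for m in (10, 30, 60)}
print({m: p.shape for m, p in patches.items()})
```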
Table 2: Performance evaluation of the various CNN models [Ac = accuracy; Sn = sensitivity; Sp = specificity].

| Method | ROC curve (AUC) | Ac | Sn | Sp |
|---|---|---|---|---|
| AdaptiveVGG19/X1DCE | 0.79 | 0.75 | 0.65 | 0.78 |
| AdaptiveVGG16/X1DCE | 0.82 | 0.68 | 0.74 | 0.75 |
| AdaptiveInceptionV3/X1DCE | 0.78 | 0.81 | 0.85 | 0.65 |
| AdaptiveVGG16/X2WI | 0.85 | 0.68 | 0.88 | 0.60 |
| AdaptiveResNet50-V2/X1DCE | 0.89 | 0.78 | 0.68 | 0.87 |
| AdaptiveInceptionV3/X2WI | 0.92 | 0.69 | 0.66 | 0.88 |
| AdaptiveResNet50-V2/X2WI | 0.88 | 0.80 | 0.70 | 0.92 |
| AdaptiveDenseNet121/X2WI | 0.87 | 0.88 | 0.72 | 0.77 |
| AdaptiveDenseNet121/X1DCE | 0.92 | 0.77 | 0.87 | 0.84 |
| AdaptiveVGG19-SE/X2WI | 0.89 | 0.85 | 0.79 | 0.82 |
| AdaptiveVGG19-SE/X1DCE | 0.94 | 0.89 | 0.88 | 0.88 |
| AdaptiveVGG19-CBAM/X2WI | 0.85 | 0.91 | 0.82 | 0.89 |
| AdaptiveVGG19-CBAM/X1DCE | 0.90 | 0.87 | 0.79 | 0.88 |
| Proposed A-EL/X1DCE + X2WI | 0.95 | 0.96 | 0.91 | 0.94 |

By combining predictive outcomes from both the X1DCE and X2WI MRI datasets, the recommended ensemble structure improved the predictive performance. This technique was inspired by the fact that radiologists make diagnostic decisions based on a thorough examination of several imaging methods. These findings indicate that attentive ensemble methods are a potential approach for integrating multiparametric MRI databases into diagnostic and therapeutic applications. Furthermore, CNN-based radiomic systems are modifiable: new classification tasks can be addressed using a pre-trained framework with previously learned dependencies, weights, and other parameters, which is convenient and useful for facilitating the work. This is another important reason why the investigators used CNN networks to complete this task (Table 3).

Table 3: Performance evaluation with different pixel expansions.

| Pixel expansion | AUC | Ac | Sn | Sp |
|---|---|---|---|---|
| Pixel range 10 | 0.78 | 0.77 | 0.56 | 0.89 |
| Pixel range 30 | 0.90 | 0.89 | 0.92 | 0.97 |
| Pixel range 60 | 0.88 | 0.65 | 0.97 | 0.78 |
## 5. Conclusion

Utilizing multi-parametric MRI data, this research presents in-depth radiomic approaches that may differentiate between non-vessel invasion and vessel invasion in cervical cancer.
Specifically, the research focuses on vessel invasion. These findings provide evidence that convolutional neural network-based radiomics techniques are able to accurately forecast vascular invasion in early-stage cervical cancer. In addition, these methods do not call for time-consuming human operations such as manual segmentation, feature construction, or feature selection. By utilizing an attentive ensemble learning method, we were able to achieve a high level of prediction accuracy. This method possesses significant potential and a great deal of promise for use in supporting future clinical applications.

--- *Source: 1008652-2022-09-28.xml*
2022
# Conferring the Midas Touch on Integrative Taxonomy: A Nanogold-Oligonucleotide Conjugate-Based Quick Species Identification Tool **Authors:** Rahul Kumar; Ajay Kumar Sharma **Journal:** International Journal of Ecology (2022) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2022/1009066 --- ## Abstract Nanogold or functionalized gold nanoparticles (GNPs) have myriad applications in medical sciences. GNPs are widely used in the area of nanodiagnostics and nanotherapeutics. Applications of GNPs in taxonomic studies have not been studied vis-à-vis its extensive medical applications. GNPs have great potential in the area of integrative taxonomy. We have realized that GNPs can be used to visually detect animal species based on molecular signatures. In this regard, we have synthesized gold nanoparticles (<20 nm) and have developed a method based on interactions between thiolated DNA oligonucleotides and small-sized GNPs, interactions between DNA oligonucleotides and target DNA molecules, and self-aggregating properties of small-sized GNPs under high salt concentrations leading to a visible change in colour. Exploiting these intermolecular and interparticle interactions under aqueous conditions, in the present work, we have demonstrated the application of our procedure by using a DNA oligonucleotide probe designed against a portion of the mitochondrial genome of the codling mothCydia pomonella. This method is accurate, quick, and easy to use once devised and can be used as an additional tool along with DNA barcoding. This tool can be used for distinguishing cryptic species, identification of morphovariants of known species, diet analysis, and identification of pest species in quarantine facilities without any need of performing repetitive DNA sequencing. We suggest that designing and selecting a highly specific DNA probe is crucial in increasing the specificity of the procedure. Present work may be considered as an effort to introduce nanotechnology as a new discipline to the extensive field of integrative taxonomy with which disciplines like palaeontology, embryology, anatomy, ethology, ecology, biochemistry, and molecular biology are already associated for a long time. --- ## Body ## 1. Introduction Species identification is central to the area of taxonomy. Nowadays, it has become a trend to identify and study a species using both morphological as well as molecular data. Especially when describing an insect species, mitochondrial DNA-based approaches are quite popular. Mitochondrial DNA-based DNA barcoding is one of the most preferred molecular tools among modern insect taxonomists. The design of the pair of universal primers against the mitochondrial cytochrome oxidase-I (mtCO-I) gene has revolutionized the field of taxonomy [1, 2]. Phylogenetic analyses based on the sequences of both mitochondrial as well as nuclear genes provide a better resolution in tracing inter and intraspecific similarities and differences [3]. As a common practice in DNA barcoding, a stretch of mtCO-I is amplified using universal primers followed by sequencing of the amplicon and sequence analysis postsequencing [2]. Amplifying and sequencing the DNA of every specimen is not possible. Morphologically similar-looking specimens may not always belong to the same species, but it sounds like a redundant, nonfeasible, and time-consuming task to amplify and sequence the DNA of all specimens (when the number of specimens is very high) belonging to the same species. 
To tackle such situations, we have developed a methodology that can quickly detect a species based on its molecular signature. This tool would help to reduce the need for repetitive sequencing and can be employed to authenticate barcodes in resource-limited setups. Our method utilizes functionalized gold nanoparticles (GNPs) and their unique properties. There is a tsunami of literature dealing with the application of gold nanoparticles in different areas of biological sciences, but we could not find even a single study dealing with the application of GNPs in taxonomic studies of higher animals or even higher plants. GNPs have huge applications in both nanodiagnostics and nanotherapeutics [4, 5]. Nanodiagnostic tools based on GNPs include plasmon resonance biosensors, dot-immunoassay, immune chromatography, and different homophase methods [4]. For the present work, we have used, with some modifications, one of the homophase methods which involves interaction between thiolated ssDNA (small single-stranded DNA molecules) and small-sized functionalized GNPs, the interaction between thiolated ssDNA-GNP complexes and target DNA molecules, and colour change in the solution as a result of aggregation of the particles under conditions of high ionic strength [6–8]. Since the publication of the genomic sequence of Drosophila melanogaster in 2000, which was the first insect genome to be sequenced, a large number of different insect genomes have been sequenced and studied in detail [9, 10]. Instead of the availability of a huge amount of insect genomic data in the public domain, being the most diverse taxa with the largest number of species in the entire animal kingdom, the genomic information of a large number of insect species is still not available. The size of the nuclear genome is far greater than the size of the mitochondrial genome. It makes the mitochondrial genome be sequenced in less time with fewer budgets and easier to analyze. In recent years, more insect mitochondrial genomes have been sequenced and studied in comparison to their nuclear genomes. The mitochondrial genome is the most extensively studied genomic system in insects [11]. Therefore, in the present study, we have selected a short stretch of the mitochondrial genome of an insect for designing a unique oligonucleotide to be used as a part of our probe. This method was found to be accurate, quick, and easy to use once devised and can be used as an additional tool along with DNA barcoding. ## 2. Materials and Methods ### 2.1. Materials Gold (III) chloride trihydrate (HAuCl4.3H2O), trisodium citrate dihydrate (Na3C6H5O7.2H2O), sodium borohydride (NaBH4), magnesium sulphate (MgSO4), ethidium bromide, and agarose were purchased from Sigma Aldrich. All buffers were manually prepared using chemicals of analytical grade. The GeneRular 1 kb DNA ladder was purchased from Thermo Scientific. Micropipette tips, centrifuge tubes, and PCR tubes were purchased from Tarsons. ### 2.2. Species Selection The codling mothCydia pomonella was used for our studies. The oligonucleotide probe and primers were designed against a stretch of its mitochondrial DNA. The cotton bollworm moth Helicoverpa armigera has been used as the control. Complete mitochondrial genome sequences of both of these economically important moths are publicly available and are well characterized too. Therefore, we preferred these species for our studies. ### 2.3. 
Analysis of Mitochondrial Genome Complete mitochondrial genome sequences ofCydia pomonella were retrieved from the NCBI (National Center for Biotechnology Information) Genome database in FASTA (stands for “FAST-All”) format. Graphical circular and linear maps of the mitochondrial genome were prepared from this sequence using OGDRAW (Organellar Genome DRAW) and MITOS (MITOchondrial genome annotation Server) to demarcate the position of genes and direction of open reading frames (Figure 1(a)) [12, 13]. The same strategy was followed for the control, Helicoverpa armigera (Supplementary Figure 1).Figure 1 Designing and characterization of oligonucleotide probe. (a) Circular and linearized maps of mitochondrial genome of codling mothCydia pomonella. (b) Designing of oligonucleotide probe and PCR primers. (c) Gel showing PCR product at 1.3 kb in case of mitochondrial DNA of Cydia pomonella (CA mtDNA). Helicoverpa armigera mitochondrial DNA (HA mtDNA) was used as negative control. (a)(b)(c) ### 2.4. Mitochondrial DNA Extraction First, the mitochondria were isolated from larval tissue ofCydia pomonella and Helicoverpa armigera using a previously described organelle isolation protocol [14]. In each case, only one caterpillar was used for mitochondrial isolation. Isolated mitochondria of each species were then used to isolate mitochondrial DNA [15]. It was isolated using the DNeasy 96 Blood and Tissue Kit by Qiagen. For each species, mitochondrial DNA isolation was performed multiple times, and all samples were pooled together and vacuum dried to remove excess water for better concentration. Isolated mitochondrial DNA was quantified using a NanoDrop 2000 Spectrophotometer by Thermo Scientific. ### 2.5. Designing of Oligonucleotide Probe Multiple primer pairs were generated from the complete mitochondrial genome sequence ofCydia pomonella using NCBI Primer-BLAST (Basic Local Alignment Search Tool). Out of these primers, the most unique oligonucletide sequence was selected by running NCBI BLAST using each primer sequence. The sequence which was considered the “most unique” sequence exhibits the least cross-species sequence similarity within the order Lepidoptera in BLAST results. This sequence was found to be absent in the mitochondrial genome of Helicoverpa armigera, which was used as the control. This oligonucleotide was labeled with a thiol group (-SH) at the 5′ end to enable its conjugation with GNPs to be used as a probe (Supplementary Figure 2). The 5′ thiol-modified oligonucleotide was synthesized on a 25 nM scale according to standard procedure and supplied in lyophilized form by Eurofins. ### 2.6. Characterization of Oligonucleotide Probe For characterization of oligonucleotide probe, PCR was performed. The oligonucleotide probe itself was used as a forward primer along with a reverse primer to amplify a stretch of 1332 bases of the mitochondrial genome sequence ofCydia pomonella (Figure 1(b)). The reverse primer was selected after analyzing its properties using the IDT OligoAnalyzer with respect to the forward primer [16] (Supplementary Figure 2). Primers were synthesized at a 25 nM scale according to standard procedure and were supplied in lyophilized form by Eurofins. PCR was performed using the mitochondrial DNA of Cydia pomonella as a template. Mitochondrial DNA of Helicoverpa armigera was used as the control. PCR products were run on a 0.8% agarose gel containing ethidium bromide for visualization under UV light using the gel documentation system. ### 2.7. 
Synthesis of GNPs Gold nanoparticles were synthesized using a two-step chemical reduction method (reduction followed by stabilization) [17]. Briefly, 10 ml of 1 mM of gold (III) chloride trihydrate (HAuCl4.3H2O) solution was taken in a conical flask wrapped with silver foil and kept for stirring on a magnetic stirrer. To this solution, 400 μl of a 500 μg/ml solution of ice-chilled sodium borohydride (NaBH4) was added drop-wise and left for 30 seconds. Then, 200 μl of a 5% solution of trisodium citrate dihydrate (Na3C6H5O7.2H2O) was added and left for another 30 seconds. Citrate-capped GNPs were formed. In this two-step method, the reduction was achieved by the addition of NaBH4, and stabilization was carried out by Na3C6H5O7.2H2O (Figure 2(a)).Figure 2 Synthesis and characterization of GNPs. (a) Two-step chemical reduction method of GNP synthesis. In this two-step method, reduction was achieved by addition of sodium borohydride and stabilization was carried out by trisodium citrate dehydrate. (b) Characterization of GNPs for its colour using visual observations and absorbance using the UV-visible spectrophotometer. (c) Characterization of GNPs for its size distribution using Zetasizer and shape using the transmission electron microscope (TEM). (a)(b)(c) ### 2.8. Characterization of GNPs Size distribution analysis of a tenfold diluted freshly prepared sample of GNPs was done by dynamic light scattering (DLS) using a Zetasizer (Malvern Instruments, Malvern, UK) equipped with the 5 mW helium/neon laser. For morphological characterization, one drop of the same sample was poured on 300-mesh carbon-coated copper grids and dried at room temperature before loading into the transmission electron microscope (TEM) for imaging, which was done using a high-resolution TEM (TECNAI, T20G2, TEM, FEI, Inc., Hillsborough, OR, USA) operated at 200 kV. The absorbance of the same sample was determined using an Evolution 220 UV-visible spectrophotometer (Thermo Scientific). The colour of the sample was also recorded. The molar concentration of GNPs was also calculated using the absorbance of the sample at 450 nm, determined as above and the value of the extinction coefficient of GNPs at 450 nm, for specific particle size, as previously reported [18]. This calculation provides an average estimate of the molar concentration of GNPs. ### 2.9. Preparation of GNP-Oligonucleotide Conjugate Conjugation of GNP-oligonucleotide was performed using a method modified from previous studies [8, 19, 20] (Figure 3(a)). Two sets of conjugation reaction mixtures were prepared. One set of reaction mixtures had a 1 μM oligonucleotide probe in 1 ml of GNPs solution and the other set had a 0.5 μM oligonucleotide probe in 1 ml of GNPs solution. Two reaction mixtures were prepared for the comparison of sensitivity in detection among different concentrations of conjugates. Both reaction mixtures were kept inside the orbital shaker and incubated overnight at 50°C. To each reaction mixture, phosphate buffer, SDS, and NaCl solution were added to obtain a final concentration of 10 mM (pH 7.4), 0.01% (weight/volume), and 0.1 M, respectively, and they were kept in an orbital shaker for incubation at 50°C for 48 hours. After incubation, both reaction mixtures were centrifuged at 15,000 rpm for 30 min at 4°C followed by washing with washing buffer twice. The washing buffer is 100 mM phosphate buffer saline (PBS) (with 0.01% SDS and 100 mM NaCl). 
The GNP-oligonucleotide conjugate is finally resuspended in the same washing buffer and stored at 4°C in the dark.

Figure 3: Preparation and characterization of GNP-oligonucleotide conjugate. (a) Schematic representation of conjugation of GNPs and thiolated oligonucleotide probe. (b) Characterization of GNP-oligonucleotide conjugate using the UV-visible spectrophotometer. Red shift in absorbance peak indicates conjugation success.

### 2.10. Characterization of GNP-Oligonucleotide Conjugate

The absorbance of the GNP-oligonucleotide conjugate sample was determined using an Evolution 220 UV-visible spectrophotometer (Thermo Scientific) and compared with the absorbance of unconjugated GNPs. A similar approach for characterization has also been used by other workers [20, 21].

### 2.11. Hybridization of GNP-Oligonucleotide Conjugate with Mitochondrial DNA

Hybridization and optimization of biomolecules were performed based on previous studies with some modifications [8, 22]. A hybridization reaction mixture was prepared by mixing 20 μl of 50 mM of Cydia pomonella mitochondrial DNA and 20 μl of GNP-oligonucleotide conjugate in a PCR tube. The hybridization reaction mixture was incubated for 5 minutes by placing it on a thermoshaker preheated at 95°C, followed by incubation for a further 5 minutes at 63°C for hybridization. After hybridization, 6 μl of the above mixture is aliquoted into 6 different PCR tubes. Afterwards, for optimization of salt concentration, 6 different concentrations of magnesium sulphate (MgSO4), namely, 3 mM, 15 mM, 30 mM, 60 mM, 80 mM, and 100 mM, were added to each of the abovementioned reaction mixtures, respectively. Milli-Q water was used as a negative control in place of Cydia pomonella mitochondrial DNA in the hybridization mixture. A change in red or pink indicates a positive result, while a blue colour change indicates a negative result, which can be observed visually. Assessing the sensitivity and specificity of the GNP-oligonucleotide conjugates is also a part of optimization. For assessing the sensitivity of the GNP-oligonucleotide conjugate, 6 different concentrations of Cydia pomonella mitochondrial DNA were prepared using serial dilution, namely, 5 ng/μl, 10 ng/μl, 20 ng/μl, 30 ng/μl, 40 ng/μl, and 50 ng/μl. Each DNA concentration was used for hybridization with GNP-oligonucleotide conjugate in separate PCR tubes. After hybridization, an optimized concentration of MgSO4 was added and the colour of the solution was recorded. In the negative control, Milli-Q water was added in place of DNA. Similarly, to assess the specificity of the GNP-oligonucleotide conjugate, 50 ng/μl of mitochondrial DNA of Cydia pomonella and Helicoverpa armigera was hybridized with GNP-oligonucleotide conjugate in separate PCR tubes. After hybridization, an optimized concentration of MgSO4 was added and the colour of the solution was recorded.
## 3. Results

### 3.1. Quantification of Mitochondrial DNA

Nanodrop measurements provide a value of 21 ng/μl for Cydia pomonella mitochondrial DNA and 18 ng/μl for Helicoverpa armigera mitochondrial DNA.

### 3.2. Characterization of Oligonucleotide Probe

The gel image shows successful amplification of the targeted stretch of DNA after PCR.
A bright DNA band is visible in the case of the PCR product where the Cydia pomonella mitochondrial DNA template was used. There is no amplification in the case of the Helicoverpa armigera mitochondrial DNA template, which was used as a negative control (Figure 1(c)).

### 3.3. Characterization of GNPs

The size distribution of GNPs was found to be around 13 nm as revealed by Zetasizer measurements (Figure 2(c)). TEM imaging shows that these particles are spherical in shape (Figure 2(c)). The UV-visible spectrophotometric data show the peak of the curve at 524 nm, which is the value of maximum absorption by the particles (Figure 2(b)). The colour of GNPs was found to be red in solution (Figure 2(b)). Furthermore, the absorbance of the sample at 450 nm was found to be 1.85, and the extinction coefficient of spherical GNPs at 450 nm for a 13 nm particle size was noted as 1.39 × 10⁸ M⁻¹ cm⁻¹ from a previously published report [18]. Using these values, the molar concentration of GNPs was calculated as 13.3 nM. This value is an average estimate, not an absolute quantity. Therefore, for conjugation experiments, GNPs were taken by volume (ml) in place of molar concentration.
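As a quick numerical check of the estimate above (and of the calculation described in Section 2.8), the Beer–Lambert relation c = A450 / (ε450 · l) reproduces the reported value; the 1 cm path length is an assumption, and the snippet below is purely illustrative.

```python
# Minimal sketch: GNP molar concentration from absorbance at 450 nm using the
# Beer-Lambert law, c = A / (epsilon * l). A 1 cm cuvette path length is assumed.
A450 = 1.85                 # measured absorbance at 450 nm
epsilon450 = 1.39e8         # extinction coefficient for ~13 nm spherical GNPs, M^-1 cm^-1 [18]
path_length_cm = 1.0        # assumed path length

concentration_M = A450 / (epsilon450 * path_length_cm)
print(f"{concentration_M * 1e9:.1f} nM")  # ~13.3 nM, matching the reported estimate
```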
### 3.4. Characterization of GNP-Oligonucleotide Conjugate

UV-visible spectrophotometric measurements show a shift in peak from 524 nm in the case of unconjugated GNPs to 539.50 nm in the case of the GNP-oligonucleotide conjugate. This red shift confirms the process of conjugation (Figure 3(b)).

### 3.5. Optimization of MgSO4 Concentration

Hybridization of the GNP-oligonucleotide conjugate (prepared using 0.5 μM oligonucleotide probe) with Cydia pomonella mitochondrial DNA was followed by the addition of 6 different concentrations of MgSO4 as mentioned above. In the negative control, Milli-Q water was used in place of Cydia pomonella mitochondrial DNA. Using 30 mM, 60 mM, 80 mM, and 100 mM MgSO4, Cydia pomonella can be clearly distinguished from the negative control (Figure 4(a)). 100 mM was selected as the optimal concentration of MgSO4.

Figure 4: Hybridization of GNP-oligonucleotide conjugate with mitochondrial DNA. (a) Optimization of MgSO4 concentration. (b) Sensitivity assessment of the GNP-oligonucleotide conjugate prepared using 1 μM oligonucleotide probe. (c) Sensitivity assessment of the GNP-oligonucleotide conjugate prepared using 0.5 μM oligonucleotide probe. (d) Assessment of GNP-oligonucleotide conjugate specificity. (e) Graphical representation of the process of hybridization between GNP-oligonucleotide conjugate and mitochondrial DNA, and its outcome. NC, negative control without mtDNA; PS, sample containing Cydia pomonella mtDNA; CP, Cydia pomonella; HA, Helicoverpa armigera; (+), positive result; (−), negative result.

### 3.6. Assessment of GNP-Oligonucleotide Conjugate Sensitivity

Among the different concentrations of Cydia pomonella mitochondrial DNA used for hybridization with the GNP-oligonucleotide conjugate, 20 ng/μl, 30 ng/μl, 40 ng/μl, and 50 ng/μl remained red after addition of MgSO4, whereas the solutions with lower concentrations of DNA and the negative control with Milli-Q water in place of DNA turned blue. Therefore, the detection limit was found to be 20 ng/μl for Cydia pomonella mitochondrial DNA. Also, both GNP-oligonucleotide conjugates, the one prepared using a 1 μM oligonucleotide probe and the other using a 0.5 μM oligonucleotide probe, show similar results with different concentrations of target DNA. Both of these conjugates are equally sensitive for target detection (Figures 4(b) and 4(c)). Therefore, we selected the GNP-oligonucleotide conjugate prepared using a 0.5 μM oligonucleotide probe for assessment of the specificity of this procedure.

### 3.7. Assessment of GNP-Oligonucleotide Conjugate Specificity

The specificity of the GNP-oligonucleotide conjugate was evaluated by hybridizing the conjugates with mitochondrial DNA of Cydia pomonella and Helicoverpa armigera individually. Mitochondrial DNA of Cydia pomonella displays successful hybridization after addition of MgSO4, as the solution remains red. The solution containing mitochondrial DNA of Helicoverpa armigera turns blue after addition of MgSO4, indicating no hybridization (Figure 4(d)).
## 4. Discussion

Every method has advantages and disadvantages. Although DNA barcoding is considered the gold standard, it is time-consuming and expensive, and applying it to every similar-looking specimen of an already known species is a redundant exercise. A simple and inexpensive tool such as the one developed here allows quick detection of a species just by visually observing the colour of the solution, in place of repeated DNA sequencing procedures and without expensive, difficult-to-handle instruments (Figure 4(e)). Once prepared, the GNP-oligonucleotide conjugate is stable at room temperature for almost a month and can be stored at 4°C for longer periods and multiple uses, which makes the method cost-effective, as shown for other previously published thiolated ssDNA-GNP complex-based methods [23]. The method can further be used to authenticate DNA barcodes by providing additional evidence of a species' molecular identity in less time. The present method has great potential in a variety of applications associated with species identification. It can be used to distinguish cryptic species, which appear morphologically similar but are genetically different. Conversely, some species exhibit polymorphism: morphovariants of a species are morphologically different but share the same genetic identity, and such individuals can also be identified with this method. The method can be employed for the rapid identification of pest species in quarantine facilities and for identifying the animal and plant species consumed by other animals as food (diet analysis). The method is sensitive and specific.
The present method is a simple demonstration of a procedure in its preliminary stage, which could be made more robust and error-proof by a few modifications discussed in the following paragraphs.

The specificity of this method depends strongly on the uniqueness of the oligonucleotide probe sequence. We used mitochondrial DNA for designing the oligonucleotide probe because the nuclear genome sequences of most insects are not available and DNA barcoding of insects also relies on the mtCO-I sequence. However, designing an exclusively unique oligonucleotide probe from the mitochondrial genome is not always possible because of its small size, the absence of introns, and the conservation of mitochondrial genes across taxa. In addition, cross-binding of a probe designed against mitochondrial DNA with nuclear DNA cannot be ruled out when the nuclear genome sequence is not available; this is why we selectively extracted mitochondrial DNA (excluding nuclear DNA). The nuclear genomes of eukaryotes are very large and differ considerably across as well as within species. Using bioinformatics tools, it is possible to design an oligonucleotide probe whose sequence is exclusive to a particular eukaryotic species by scanning its whole genome (both nuclear and mitochondrial). Such a unique probe can then be conjugated with GNPs and used as described in our method, making the method highly specific with a negligible chance of detection failure or false detection. Similarly, the sensitivity of the method can be further enhanced by using an ultrapure GNP-oligonucleotide conjugate. The conjugate solution prepared here was purified (washed to remove unbound particles) by multiple rounds of high-speed centrifugation, but the presence of trace amounts of unbound GNPs, free oligonucleotides, or both cannot be ruled out. To obtain an ultrapure conjugate solution free of unbound GNPs and free oligonucleotides, advanced chromatographic methods can be used after the centrifugation step; these methods can also estimate the respective percentages of GNP-oligonucleotide conjugate, unbound GNPs, and free oligonucleotides in solution [8].
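Such a uniqueness check can be prototyped very simply before committing to a full BLAST search. The sketch below only looks for exact occurrences of a candidate probe (or its reverse complement) in another species' genome sequence; the file names are hypothetical, and a real workflow would rely on BLAST or a comparable aligner, as described above.

```python
# Illustrative screen for probe cross-reactivity against another genome.
# "candidate_probe.txt" and "other_species_genome.fasta" are hypothetical inputs.

def read_fasta(path):
    """Concatenate all sequence lines of a (single- or multi-record) FASTA file."""
    seq = []
    with open(path) as handle:
        for line in handle:
            if not line.startswith(">"):
                seq.append(line.strip().upper())
    return "".join(seq)

def reverse_complement(seq):
    complement = {"A": "T", "T": "A", "G": "C", "C": "G", "N": "N"}
    return "".join(complement.get(base, "N") for base in reversed(seq))

def cross_reacts(probe, genome):
    """True if the probe or its reverse complement occurs exactly in the genome."""
    probe = probe.upper()
    return probe in genome or reverse_complement(probe) in genome

if __name__ == "__main__":
    probe = open("candidate_probe.txt").read().strip()
    genome = read_fasta("other_species_genome.fasta")
    print("Potential cross-reactivity" if cross_reacts(probe, genome) else "No exact match found")
```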
Taxonomy is at the core of understanding biodiversity. Like other scientific disciplines, taxonomy has progressed from a traditional morphology-based approach to a modern multisource approach. The modern approach does not lessen the importance of the traditional morphology-based approach but rather strengthens it. The multisource approach draws on information from morphology, behaviour, mitochondrial DNA, nuclear DNA, ecology, enzymes, chemistry, reproductive compatibility, cytogenetics, life history, and whole-genome scans. Such a multisource approach is the backbone of integrative taxonomy, a synthesis of different traditional and modern approaches. Integrative taxonomy reduces the chances of misidentification and other taxonomic errors and has made identification easier, more efficient, and more reliable. Palaeontology, embryology, anatomy, ethology, ecology, biochemistry, and molecular biology are the major fields with significant applications in integrative taxonomy [24, 25]. Despite the considerable development of integrative taxonomy, the application of nanotechnology in this area, unlike in other popular disciplines, has not yet been realized.

In biological sciences, nanotechnology is emerging as a powerful tool with huge potential, and its applications in different fields of biology are already being explored. Nanodiagnostics (including nanodetection, nanoimaging, and nanoanalytics) and nanotherapeutics, the subareas of nanomedicine, are currently the areas of biology in which nanotechnology is most actively applied [26]. The focus is on regenerative medicine, cancer diagnosis and treatment, neuromorphic engineering, tissue engineering, development of biosurfactants, biomedical nanosensors, enhancing the bioavailability and bioactivity of drugs, pathogen detection, stem cell biology, and molecular imaging [27–32]. Nonmedical applications of nanotechnology include pesticide, herbicide, and fertilizer nanoformulations; pest and agrochemical nanosensors; nanodevices for genetic engineering, crop improvement, and animal breeding; increasing the shelf life of harvested crops; and the creation of biomimetic materials [33–36]. Despite these wide-ranging applications, the use of nanotechnology in integrative taxonomy has not yet been realized. We believe that nanotechnology has great potential in this field as well. This new area of discourse may be called "nanotaxonomy" and can add new dimensions to modern taxonomic studies if explored systematically.

## 5. Conclusion

In the present study, we report a novel method for the detection of insect species based on their molecular signature, using a GNP-oligonucleotide conjugate that can distinguish one species from another by a simple change in the colour of the solution observable by the naked eye. The use of the mitochondrial genome sequence for probe design is the distinctive feature of the strategy. The method can save the time and money spent on repetitive barcoding experiments on apparently similar-looking specimens, and it has high sensitivity (detection limit = 20 ng/μl) and specificity. The specificity can be further enhanced by designing a species-exclusive probe with negligible cross-species similarity, using whole-genome scanning assisted by advanced bioinformatic tools. The present work may be considered a small step towards bridging the existing gap between integrative taxonomy and nanotechnology.

---
*Source: 1009066-2022-08-23.xml*
1009066-2022-08-23_1009066-2022-08-23.md
45,021
Conferring the Midas Touch on Integrative Taxonomy: A Nanogold-Oligonucleotide Conjugate-Based Quick Species Identification Tool
Rahul Kumar; Ajay Kumar Sharma
International Journal of Ecology (2022)
Earth and Environmental Sciences
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2022/1009066
1009066-2022-08-23.xml
2022
# Influence of Pathogen Type on Neonatal Sepsis Biomarkers

**Authors:** Lyudmila Akhmaltdinova; Svetlana Kolesnichenko; Alyona Lavrinenko; Irina Kadyrova; Olga Avdienko; Lyudmila Panibratec
**Journal:** International Journal of Inflammation (2021)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2021/1009231

---

## Abstract

Understanding immunoregulation in newborns can help to determine the pathophysiology of neonatal sepsis and will contribute to improving its diagnosis, prognosis, and treatment; understanding the hyperinflammation or hypoinflammation associated with sepsis in newborns remains an urgent and unmet medical need. This study included infants up to 4 days old. The criterion for "sepsis" was a positive blood culture. C-reactive protein demonstrates a strong dependence on the pathogen etiology: its diagnostic odds ratio in Gram-positive bacteremia was 2.7 with a sensitivity of 45%, whereas in Gram-negative bacteremia these values were 15.0 and 81.8%, respectively. A neutrophil-lymphocyte ratio above 1 and thrombocytopenia below 50 × 10⁹ cells/L generally do not depend on the type of pathogen and have a specificity of 95%; however, the sensitivity of these markers is low. nCD64 demonstrated good analytical performance and discriminated equally well in both Gram (+) and Gram (−) cultures, with a sensitivity of 87.5–89% and a specificity of 65%. The study of HLA-DR and programmed cell death protein found that the activation-deactivation processes in systemic infection differ in their points of application depending on the type of pathogen: Gram-positive infections showed various ways of activating monocytes (by reducing suppressive signals) and lymphocytes (an increase in activation signals), whereas Gram-negative pathogens were most commonly involved in suppressing monocytic activation. Thus, differences in the bacteremia model can partially explain the high variability of immunologic markers in neonatal sepsis.

---

## Body

## 1. Introduction

Sepsis of newborns is one of the most important issues in pediatrics and the third leading cause of death in the neonatal period. Mortality rates range from 13% to 70% [1]. In Kazakhstan, the mortality rate from sepsis among children under one year of age increased to 4.3 in 2019, and in the Karaganda region it was 8.68 per 1000 live births [2]. However, little progress has been made in the treatment of neonatal sepsis in the last three decades. Early diagnosis is crucial for the prevention of negative outcomes, yet an urgent and unsatisfied medical need for the diagnosis of sepsis-related hyperinflammation in newborns remains.

Traditionally, sepsis has been categorized as a manifestation of hyperinflammatory syndrome, but recent evidence has shown that the pathogenesis of inflammation in systemic infection is more complex. A growing body of evidence supports the role of immunosuppression in sepsis [3, 4], but its role in neonatal sepsis remains to be elucidated. The unique physiological characteristics of organs and systems in the first days of life, especially the unique state of the immune system at the moment of birth, complicate the understanding of what is normal and what is pathological in the immune regulation of newborns.

Programmed cell death-1/programmed death-ligand 1 (PD-1/PD-L1) is one of the key models in the development of sepsis-mediated immunosuppression, but its role in newborns is still poorly described [5, 6].
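For reference, the sensitivity, specificity, and diagnostic odds ratio (DOR) values quoted in the abstract all derive from a 2 × 2 confusion table. A minimal sketch of the relationship, using made-up counts rather than the study data:

```python
# Relationship between a 2x2 confusion table and the diagnostic metrics quoted
# in the abstract (sensitivity, specificity, diagnostic odds ratio).
# The counts below are illustrative only and are not the study data.
def diagnostic_metrics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    dor = (tp * tn) / (fp * fn)  # equals (sens / (1 - sens)) / ((1 - spec) / spec)
    return sensitivity, specificity, dor

sens, spec, dor = diagnostic_metrics(tp=9, fp=5, fn=2, tn=15)  # hypothetical counts
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}, DOR = {dor:.1f}")
```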
The study of immune suppression in neonates may be instrumental in better defining the immune pathophysiology of neonatal sepsis. This could also aid in the identification of unique biomarkers that may have clinical relevance for immunomonitoring, predicting outcomes, or even targeted therapeutic agents. In neonatal sepsis, the situation is complicated by the many factors affecting outcome and prognosis: wide variability in the degree of maturity at birth and dependence on weight and gestational age, which change with every day of calendar age [7]. The etiology of the pathogen probably plays an important role in immunoregulation. Our research studies the role of causative agents in the proinflammatory and immunosuppressive signals of sepsis at different links of the immune response and attempts to use these signals as biomarkers. ## 2. Materials and Methods ### 2.1. Patient Characteristics This prospective controlled trial enrolled infants (up to 4 days old) in the Intensive Care Units of the Regional Perinatal Center of Karaganda. Research permission was granted by the Karaganda Medical University Bioethics Committee (No. 19 of 05.08.2019). Informed written consent was obtained from a parent prior to study enrolment. The criterion for determining a case of “sepsis” was a positive blood culture. The control group consisted of children who received treatment in the intensive care unit with negative blood cultures and unconfirmed infectious complications at the time of discharge. Exclusion criteria were as follows: (1) patients born to HIV-positive mothers; (2) patients receiving therapy with high doses of glucocorticosteroids; (3) primary immunodeficiency state; (4) blood loss; (5) severe malformations; (6) acute hemolytic disease of the newborn; (7) refusal of the patient’s parents or legal representative to participate in the study. ### 2.2. Bacteriological Research Methods The analysis was carried out using the BD BACTEC™ FX (Peds Plus Medium) system. After the appearance of signs of growth, the broth was inoculated on a blood agar plate. Microorganisms were identified by time-of-flight mass spectrometry (Microflex-LT, Bruker Daltonics). The causative agents of sepsis were divided into 2 groups according to the type of cell wall: (1) Gram-positive (Gram (+)): Staphylococcus haemolyticus, Staphylococcus aureus, Staphylococcus epidermidis, Streptococcus agalactiae, Enterococcus faecalis, and Enterococcus faecium; (2) Gram-negative (Gram (−)): Klebsiella pneumoniae, Escherichia coli, and Enterobacter cloacae. The etiological structure of neonatal sepsis was previously described [8]. ### 2.3. Immunological Research Methods Blood cell counting was conducted using a Mindray hematology analyzer. Blood samples were fixed with a no-wash fixation and lysis technology using OptiLyse C, no-wash lysing solution, according to the manufacturer’s instructions. Surface staining for activation markers was performed with αCD4, αCD8, αCD14, αCD24, αCD64, and αCD279 (Becton Dickinson). The immunological parameters were studied with flow cytometry (Partec CyFlow Space). An unstained sample was used as a negative control. Compensation settings were made using built-in software (FlowMax).
The research was carried out with standardized gain settings for the entire research period. The leukocyte population was identified according to the expected size and granularity on a forward and side scatter plot (FSC/SSC), and CD24+ (neutrophils), CD14+ (monocytes), CD4+ (T-helper lymphocytes), and CD8+ (T-cytotoxic lymphocytes) were gated and defined by their characteristic phenotypes. αCD279 was used as the PD-1 marker. The CD64 index was then defined as the ratio of the mean fluorescence intensity (MFI) of CD64+ neutrophils to the MFI of CD64 on lymphocytes (internal negative control) according to the gating strategy. The neutrophil-lymphocyte ratio (NLR) was defined as the ratio of the percentage of neutrophils to that of lymphocytes, and the platelet-lymphocyte ratio (PLR) as the ratio of platelets to lymphocytes (both expressed as 10⁹ cells/L); a computational sketch of these indices is given after this section. The analysis of C-reactive protein (CRP) was carried out by the hospital laboratory. The CRP analysis also included samples for which the immunological examination could not be performed because of insufficient biomaterial or a clot; overall, they corresponded to the groups given in Table 1.

Table 1. Patient information.

| Parameter | Control | Gram-positive bacteremia | Gram-negative bacteremia | P value |
| --- | --- | --- | --- | --- |
| Birth weight (g), Me (Q1; Q3) | 2175 (1708; 2771) | 2060 (1335; 2925) | 2575 (1440; 2951) | >0.05 |
| Gestational age (week), Me (Q1; Q3) | 34 (33; 37) | 33 (29.5; 37) | 33.5 (31.5; 36.25) | >0.05 |
| Cesarean section | 11/20 (55%) | 11/16 (68%) | 5/9 (55%) | >0.05 |
| n (for CRP) | 26 | 20 | 11 | — |
| n (for biomarker) | 20 | 16 | 10 | — |

P value: Kruskal–Wallis test comparing the 3 groups.

### 2.4. Statistical Analysis Statistical analysis was carried out in R (compareGroups and rstatix packages) and Statistica using the nonparametric Kruskal–Wallis test (nonparametric one-way ANOVA). For repeated pairwise comparisons, the Mann–Whitney test with Holm’s correction (R) was used. Intergroup comparisons with p values are presented in tables, and the pairwise group comparisons with individual p values are discussed in the text. Categorical data were compared using the chi-square test. The parameters for the cutoffs were chosen empirically and according to literature data.
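The indices defined in Section 2.3 are simple ratios; the short Python sketch below shows how they would be computed from raw cytometry and hematology values. It is illustrative only: the function names and the example inputs (loosely echoing the Gram-positive medians in Tables 3 and 4, with a hypothetical lymphocyte CD64 MFI) are assumptions and are not taken from the study.

```python
# Illustrative computation of the indices defined in Section 2.3.
# All example inputs are hypothetical, not study data.

def cd64_index(mfi_cd64_neutrophils: float, mfi_cd64_lymphocytes: float) -> float:
    """CD64 index: MFI of CD64+ neutrophils divided by the MFI of CD64
    on lymphocytes (the internal negative control)."""
    return mfi_cd64_neutrophils / mfi_cd64_lymphocytes

def nlr(neutrophils_percent: float, lymphocytes_percent: float) -> float:
    """Neutrophil-lymphocyte ratio (percentages of leukocytes)."""
    return neutrophils_percent / lymphocytes_percent

def plr(platelets: float, lymphocytes: float) -> float:
    """Platelet-lymphocyte ratio (both counts in 10^9 cells/L)."""
    return platelets / lymphocytes

if __name__ == "__main__":
    print(round(cd64_index(15.3, 1.4), 2))  # ~10.93, near the Gram (+) median index of 11.05
    print(round(nlr(47.1, 40.0), 2))        # ~1.18, above the NLR > 1 cutoff
    print(round(plr(116.0, 5.0), 2))        # 23.2, near the Gram (+) PLR median of 25.9
```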
## 3. Results CRP is a basic indicator for assessing the activity of the inflammatory process.
We present the data on its content in the study groups (Table 2).

Table 2. CRP data in the study groups.

| Parameter | Control | Gram-positive bacteremia | Gram-negative bacteremia | P value |
| --- | --- | --- | --- | --- |
| CRP (mg/l), Me (Q1; Q3) | 0.6 (0.0; 4.5) | 5.3 (0.0; 6.0) | 6.0 (5.6; 12.0) | **0.02** |
| CRP >5 mg/l (count/all) | 6/26 (23%) | 9/20 (45%) | 9/11 (82%) | **0.004** |
| Diagnostic odds ratio, DOR (±95% CI) | — | 2.7 (0.76; 9.6) | 15.0 (2.52; 89.2) | — |
| Sensitivity, % | — | 45 | 81.8 | — |
| Specificity, % | — | 76.9 | 76.9 | — |
| Positive predictive value (PPV), % | — | 60 | 60 | — |
| Negative predictive value (NPV), % | — | 64.5 | 90.9 | — |

Statistically significant p values are in bold. P value: Kruskal–Wallis test comparing the 3 groups.

The data on the content of leukocytes and subpopulations are given in Table 3.

Table 3. The main indicators of leukocytes and platelets.

| Parameter | Control | Gram-positive bacteremia | Gram-negative bacteremia | P value |
| --- | --- | --- | --- | --- |
| Leukocytes (10⁹ cells/l), Me (Q1; Q3) | 18.3 (14.4; 20.7) | 13.3 (12.3; 15.1) | 15.3 (8.8; 25.2) | >0.05 |
| Leukopenia less than 5 × 10⁹ cells/l | 1/20 (5%) | 1/16 (6.25%) | 1/10 (10%) | >0.05 |
| Leukocytosis more than 20 × 10⁹ cells/l | 7/20 (35%) | 3/16 (18.75%) | 3/10 (30%) | >0.05 |
| Lymphocytes (%), Me (Q1; Q3) | 52.0 (44.9; 68.0) | 40.0 (32.3; 51.1) | 49.3 (35.6; 54.1) | **0.03** |
| Neutrophils (%), Me (Q1; Q3) | 29.0 (19.2; 40.8) | 47.1 (37.5; 57.1) | 39.5 (34.4; 43.2) | **0.003** |
| Lymphocytes (10⁹ cells/l), Me (Q1; Q3) | 10.0 (7.2; 13.1) | 5.0 (3.5; 6.7) | 6.4 (3.1; 14.3) | **0.02** |
| Neutrophils (10⁹ cells/l), Me (Q1; Q3) | 4.55 (3.1; 6.3) | 6.0 (4.0; 8.8) | 4.9 (2.43; 10.0) | >0.05 |
| NLR, Me (Q1; Q3) | 0.56 (0.27; 0.80) | 1.39 (0.72; 1.52) | 0.82 (0.63; 1.3) | **0.006** |
| NLR >1 (count/all) | 1/20 (5.0%) | 9/16 (56.3%) | 4/10 (40%) | **0.004** |
| DOR (±95% CI) | — | 24.4 (2.6; 229) | 12.6 (1.17; 136) | — |
| Sensitivity, % | — | 52.6 | 40 | — |
| Specificity, % | — | 95 | 95 | — |
| Positive predictive value (PPV), % | — | 90.0 | 80 | — |
| Negative predictive value (NPV), % | — | 73.0 | 76 | — |
| Platelets (10⁹ cells/l), Me (Q1; Q3) | 151.0 (131.0; 167.0) | 116.0 (50.0; 176.0) | 154 (47.0; 176.0) | >0.05 |
| PLR, Me (Q1; Q3) | 16.62 (10.1; 19.8) | 25.9 (12.4; 39.1) | 12.4 (10.1; 12.56) | >0.05 |
| Platelets less than 50 × 10⁹ cells/l (count/all) | 1/20 (5%) | 6/16 (37.5%) | 4/10 (40.0%) | **0.031** |
| DOR (±95% CI) | — | 11.4 (1.2; 108) | 12.6 (1.1; 136) | — |
| Sensitivity, % | — | 37.5 | 40 | — |
| Specificity, % | — | 95 | 95 | — |
| Positive predictive value (PPV), % | — | 85.7 | 80 | — |
| Negative predictive value (NPV), % | — | 62.5 | 76 | — |

Statistically significant p values are in bold. P value: Kruskal–Wallis test comparing the 3 groups.

Taken together, these data demonstrate that sepsis caused by Gram-positive bacteria is a weaker stimulus for the production of CRP and that the data for this group are more variable, although the median values do not differ. Significance of the difference from the control is reached only in the case of Gram-negative bacteremia (control vs. Gram (−), p=0.04). This assumption was confirmed when the diagnostic significance was assessed. There was neither predominant leukocytosis nor leukopenia in either sepsis group (Table 3). The absolute and relative changes in subpopulations expected during an infectious process were noted, with the difference in median values more pronounced between the Gram-positive bacteremia group and the control group (lymphocytes (%), p=0.03; lymphocytes (absolute count), p=0.013; neutrophils (%, absolute count), p=0.003); there was no difference between the Gram-negative and control groups or between the Gram-negative and Gram-positive bacteremia groups. The median NLR shows the best distinguishing ability of this indicator for Gram-positive sepsis (Gram (+) vs. control, p=0.004; Gram (−) vs. control, p=0.2). As a diagnostic tool with a cutoff of more than 1, its diagnostic power is higher with Gram-positive bacteremia (DOR 24.4 vs. 12.6). However, it is statistically applicable in both groups of sepsis (control vs. Gram (+), p=0.002; control vs. Gram (−), p=0.03).
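The operating characteristics in Table 2 follow directly from the reported counts (CRP > 5 mg/l in 6/26 controls, 9/20 Gram-positive, and 9/11 Gram-negative cases). The Python sketch below reproduces them; it is a generic illustration of the 2×2 calculation, not code used by the authors.

```python
# Sensitivity, specificity, PPV, NPV, and diagnostic odds ratio (DOR)
# from a 2x2 table of test-positive counts in cases vs. controls.

def diagnostics(tp: int, fn: int, fp: int, tn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "dor": (tp * tn) / (fp * fn),
    }

# CRP > 5 mg/l as the test; 6/26 positives among controls, so tn = 20 and fp = 6.
print(diagnostics(tp=9, fn=11, fp=6, tn=20))
# Gram-positive vs. control: sensitivity 0.45, specificity 0.77, PPV 0.60, NPV 0.65, DOR ~2.7

print(diagnostics(tp=9, fn=2, fp=6, tn=20))
# Gram-negative vs. control: sensitivity 0.82, specificity 0.77, PPV 0.60, NPV 0.91, DOR 15.0
```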
The platelet count is one of the important clinical and laboratory indicators of neonatal sepsis. Fluctuations in the median platelet count were observed, especially in the PLR index, but they did not reach statistical significance. Moreover, thrombocytopenia below 50 × 10⁹ cells/L is an indicator of the septic process in newborns in both sepsis groups (control vs. Gram (+), p=0.002; control vs. Gram (−), p=0.03). Activation markers and classic markers of sepsis are given in Table 4.

Table 4. Activation markers.

| Parameter | Control | Gram-positive bacteremia | Gram-negative bacteremia | P value |
| --- | --- | --- | --- | --- |
| MFI nCD24, Me (Q1; Q3) | 9.9 (7.48; 14.1) | 7.16 (4.44; 15.12) | 8.02 (7.48; 20.9) | >0.05 |
| MFI nCD64, Me (Q1; Q3) | 8.37 (5.0; 13.0) | 15.3 (10.8; 24.7) | 15.5 (12.5; 17.8) | **0.003** |
| MFI nCD64 >10 (count/all) | 7/20 (35%) | 12/16 (75%) | 9/10 (90%) | **0.004** |
| DOR (±95% CI) | — | 5.5 (1.3; 23.9) | 16.7 (1.7; 160) | — |
| Sensitivity, % | — | 75 | 90 | — |
| Specificity, % | — | 65 | 65 | — |
| Positive predictive value (PPV), % | — | 63.2 | 56.3 | — |
| Negative predictive value (NPV), % | — | 76.5 | 92.9 | — |
| Index CD64, Me (Q1; Q3) | 3.07 (1.94; 6.3) | 11.05 (7.7; 19.45) | 7.2 (6.7; 10.4) | **0.002** |
| Index CD64 >4 (count/all) | 7/20 (35%) | 14/16 (88%) | 9/10 (90%) | **<0.001** |
| DOR | — | 13.0 (2.2; 74.3) | 16.7 (1.7; 160) | — |
| Sensitivity, % | — | 87.5 | 90 | — |
| Specificity, % | — | 65 | 65 | — |
| Positive predictive value (PPV), % | — | 66.6 | 56.3 | — |
| Negative predictive value (NPV), % | — | 86.6 | 92.86 | — |
| HLA-DR+ Mon, Me (Q1; Q3) | 95 (79.0; 98.0) | 88.0 (80; 98) | 88 (75.0; 90.0) | >0.05 |
| MFI monHLA-DR, Me (Q1; Q3) | 3.59 (2.38; 7.5) | 5.2 (2.59; 6.7) | 2.1 (1.63; 2.9) | **0.015** |
| HLA-DR+ lymph, Me (Q1; Q3) | 5.7 (4.5; 8.6) | 5.16 (3.0; 6.6) | 9.82 (3.72; 16.94) | >0.05 |
| MFI lymphHLA-DR, Me (Q1; Q3) | 9.7 (8.2; 23.0) | 16.1 (13.7; 17.7) | 11.6 (6.7; 12.5) | **0.008** |
| CD25+CD4+, Me (Q1; Q3) | 16.0 (6.96; 14.75) | 25.5 (10.0; 55.00) | 17.0 (12.25; 38.25) | >0.05 |

Statistically significant p values are in bold. P value: Kruskal–Wallis test comparing the 3 groups.

We found no difference in CD24 expression among the groups. The expression of αCD64 as MFI clearly and unambiguously changes in newborns with sepsis in comparison to the control group. The CD64 index showed a significant difference in both groups with positive cultures (MFI: control vs. Gram (+), p=0.01; control vs. Gram (−), p=0.02; CD64 index: control vs. Gram (+), p=0.001; control vs. Gram (−), p=0.04; Gram (+) vs. Gram (−), p>0.05). When using a cutoff (index above 4 or MFI above 10), good resolution is achievable in both cases (MFI: control vs. Gram (+), p=0.02, and control vs. Gram (−), p=0.006; CD64 index: control vs. Gram (+), p=0.002, and control vs. Gram (−), p=0.006). Taken together, these data demonstrate satisfactory operating characteristics of the test, which probably do not depend on the type of pathogen. Furthermore, there were no changes in the proportions of HLA-DR+ monocytes and lymphocytes, but multidirectional changes in their expression were observed; in both cases, the difference was significant not only with the control but also with the other sepsis group. Whereas expression on monocytes decreases with a Gram-negative pathogen (Gram (−) vs. control, p=0.03, and Gram (−) vs. Gram (+), p=0.03), on lymphocytes it increases with a Gram-positive pathogen (Gram (+) vs. control, p=0.034, and Gram (+) vs. Gram (−), p=0.02). The CD4+CD25+ T cell level did not change significantly in any of the groups. PD-1 changes on lymphocytes and monocytes are given in Table 5.
Table 5. PD-1 on lymphocytes and monocytes.

| Parameter | Control | Gram-positive bacteremia | Gram-negative bacteremia | P value |
| --- | --- | --- | --- | --- |
| PD-1+ Mon | 37.2 (33.0; 43.9) | 37.5 (33.0; 40.0) | 44 (38.0; 45.5) | 0.15 |
| PD-1+ Mon MFI | 25.7 (13.4; 33.8) | 15.4 (14.7; 18.0) | 31 (20.0; 39.8) | **0.008** |
| PD-1+ CD4 | 30 (21.3; 36.5) | 34.3 (33.0; 42.0) | 13.5 (6.5; 33.0) | **0.0035** |
| PD-1+ CD4 MFI | 13.7 (9.3; 18.3) | 18.0 (12.6; 26.6) | 13.0 (12.0; 14.0) | 0.3 |
| PD-1+ CD8 | 41.5 (40.0; 44.5) | 42.0 (41.5; 45.5) | 37.0 (34.0; 39.0) | **0.003** |
| PD-1+ CD8 MFI | 16.2 (12.0; 17.8) | 15.2 (14.9; 28.0) | 12.2 (10.0; 13.6) | 0.208 |

Statistically significant p values are in bold. P value: Kruskal–Wallis test comparing the 3 groups.

Our findings in the study of the PD-1 receptor were partly unexpected. We found that PD-1 expression on monocytes and lymphocytes differs fundamentally depending on the type of pathogen. Thus, for PD-1 on monocytes, MFI decreases in the Gram (+) pathogen group, and the difference between the two sepsis groups (but not with the control) is significant (p=0.007). In contrast, the number of PD-1+ lymphocytes decreases with a Gram-negative infection, but not with a Gram-positive one. The difference between the two sepsis groups was again significant (control vs. Gram (−), p=0.06; Gram (+) vs. Gram (−), p=0.01). ## 4. Discussion Neonatal sepsis remains an important pediatric problem. The clinical manifestations of the inflammatory syndrome in newborns can be subtle or variable. The normal range of laboratory markers depends on gestational or postnatal age and fluctuates in response to coexisting noninfectious processes. Although a positive culture is the gold standard, the small blood volumes obtainable from infants and the low circulating pathogen concentrations reduce the sensitivity of the method. Even when compared to children, newborns exhibit a unique immune response to systemic infection, making diagnosis and prognosis difficult. Thus, neonatal-specific clinical trials are needed to improve survival and long-term outcomes for these populations. A better understanding of the pathophysiology of the interaction between the infant’s immune system and a pathogen will open up new opportunities to improve outcomes. CRP is a basic diagnostic test and is most commonly used to diagnose conditions associated with hyperinflammation. Studies evaluating the role of CRP in the diagnosis of early onset neonatal sepsis (EOS) report varying sensitivity and specificity, from 29% to 100% and 60% to 100%, respectively [3]. Most authors report a low clinical benefit in the case of EOS [9, 10]. There are many conditions in which there is a false increase in CRP levels. In this study, using a cutoff of 5 mg/l, we showed that such a large spread of data is probably associated with the pathogen. With Gram-negative bacteremia, the CRP level showed a sensitivity almost twice as high, 81.8% versus 45% with Gram-positive bacteremia; the DOR was 15.0 versus 2.7, the NPV was 90.9% versus 64.5%, and the PPV was 60% in both groups. Standard blood test parameters were used for the diagnosis of EOS, but without much success. According to some authors, leukopenia has shown low sensitivity (29%) but high specificity (91%) for the diagnosis of neonatal sepsis [11]. Sharma et al. [7] claimed that values under 5000/mm³ for WBCs have a high specificity (91%) regarding sepsis diagnosis. According to Philip A.G., neutropenia was more predictive of neonatal sepsis than neutrophilia [12].
In this study, the numbers of neutrophils and lymphocytes did change in response to systemic infection; however, their discriminant ability was poor, and they are unlikely to be clinically useful on their own. There have already been studies showing that the NLR has better diagnostic capabilities than CRP, including in premature and low birth weight babies [13–15]. However, there remains the issue of the cutoff, which differs noticeably between authors and should change dynamically with each calendar day. Our data suggest that the NLR, which showed a specificity of 95% and DORs of 24.4 (Gram-positive) and 12.6 (Gram-negative), is subject to some influence of the etiology of sepsis. We found the best diagnostic capabilities of this parameter in Gram-positive infection. Thrombocytopenia is one of the markers of neonatal sepsis in children associated with negative outcomes [16]. However, not all authors agree that this indicator is useful in diagnosis and prognosis [17, 18]. In this study, thrombocytopenia was closely associated with sepsis, and although the sensitivity did not exceed 40%, the specificity was 95%. The PLR is an index that did not show any significance. Markers of neutrophil and monocyte activation have long been used as markers of sepsis. nCD64 has been the most promising marker for neonatal sepsis, and interest in it is still growing. A meta-analysis investigating the use of nCD64 as a biomarker of neonatal sepsis that included 17 studies with 3,478 participants revealed only a modest pooled sensitivity of 0.77 (95% CI 0.74–0.79), specificity of 0.74 (95% CI 0.72–0.75), and AUC of 0.87 [19–21]. This study demonstrated nCD64 as a universal marker, approximately equally reflecting both Gram-negative and Gram-positive bacteremia, without difference in absolute or relative terms, with a maximum sensitivity of 90% and a specificity of 65%. HLA-DR expression is one of the earliest and most popular cytometric markers of sepsis, but it is more commonly used in late neonatal sepsis and much more successfully in adults. It reflects the anergy of the immune system as a consequence of systemic inflammation and is associated with negative outcomes [22, 23]. There were no significant differences in the numbers of HLA-DR+ monocytes and lymphocytes; however, the expression of HLA-DR on monocytes was significantly reduced in the group with Gram-negative bacteremia. On the other hand, the opposite tendency was registered on lymphocytes: with Gram-positive systemic infection, the expression was higher than in both the control group and the group with Gram-negative sepsis. PD-1 is an immune checkpoint molecule that plays an important role in downregulating the immune system’s proinflammatory activity. The new paradigm of sepsis suggests that immunosuppression is part of the immunopathology in systemic infection, and PD-1 activation plays a key role. In adult patients, most authors show that PD-1 increases with severe systemic infection [4]. But its role in the immune response of newborns is not well understood, and there are only scattered studies. Zasada et al. [5] confirmed the generally accepted change in PD-1 expression in late neonatal sepsis: an increase in this marker of immune exhaustion on monocytes with a severe course and a negative outcome. Unexpectedly, there was no increase in PD-1 in our study groups. Whereas the amount of PD-1 on monocytes decreased during Gram-positive systemic infection, the number of PD-1+ lymphocytes (CD4+ and CD8+ T cells) decreased during Gram-negative infection.
This can be explained by the specifics of the early neonatal period, in which the development of tolerance to environmental antigens and the inflammatory response compete with each other, and the outcome depends on the ratio of these processes. Moreover, it is notable that critically ill and shock patients were not singled out, and most of the studied children experienced sepsis. ## 5. Conclusions This study demonstrated that the difference in the pattern of bacteremia may partially explain the wide variability of immunological markers in neonatal sepsis. Although the basic popular markers of innate immunity (NLR, thrombocytopenia, and CD64) can be equally applied in early neonatal sepsis of various etiologies with satisfactory operating characteristics, CRP is largely stimulated by Gram-negative pathogens. In addition, some combined shifts in activation and suppression markers specific to different types of pathogen were revealed: monocytes are probably more sensitive to the tolerogenic effect of endotoxin, with decreased expression of the antigen-presenting receptor, whereas lymphocytes receive either an increase in the activating signal (Gram-positive bacteria) or a decrease in the suppressive signal (Gram-negative pathogen), and, in any case, the proinflammatory response predominates over immunosuppression. Quantifying the expression of various markers by MFI has not yet exhausted its potential; however, such studies require methodological support and opportunities for standardization. --- *Source: 1009231-2021-11-19.xml*
2021
# Dynamic Data Scheduling of a Flexible Industrial Job Shop Based on Digital Twin Technology **Authors:** Juan Li; Xianghong Tian; Jing Liu **Journal:** Discrete Dynamics in Nature and Society (2022) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2022/1009507 --- ## Abstract Aiming at the problems of premature convergence in existing workshop dynamic data scheduling methods and the resulting decline in product output, a flexible industrial job shop dynamic data scheduling method based on digital twin technology is proposed. First, digital twin technology is introduced, providing a design and theoretical basis for the simulation inspection of a flexible industrial job shop, and the all-factor digital information fusion model of the flexible industrial workshop is built to comprehensively control the all-factor digital information of the workshop. A CGA algorithm is proposed by introducing the cloud model. The algorithm is used to solve the model, and chaotic particle swarm optimization is used to maintain particle diversity, completing the dynamic data scheduling of the flexible industrial job shop. The experimental results show that the designed method can complete coordinated scheduling among multiple production lines in the least amount of time. --- ## Body ## 1. Introduction The job shop dynamic data scheduling problem is a typical combinatorial optimization problem and one of the key problems to be considered in flexible industrial production management technology [1]. How to find an optimal scheduling scheme that meets the constraints is the basis and key to improving the production efficiency of a flexible industry. As an extension of the classical job shop scheduling problem (JSP), the flexible job shop dynamic data scheduling problem not only needs to sequence the operations on each flexible industrial job shop machine [2] but also needs to select an appropriate machine for each operation before sequencing, which clearly increases the difficulty of solving flexible job shop scheduling [3]. At present, there are many chemical enterprises with complex systems, uneven levels of information construction, and inconsistent data standards. As a typical industry, flexible industrial workshops are facing new situations such as large price fluctuations, high gas supply pressure, and the continuous emergence of new energy sources. At the same time, international market competition is becoming increasingly fierce, and environmental control is becoming increasingly strict [4, 5]. In the production data acquisition systems of flexible industrial workshops, the poor quality of manually collected data and the lack of dynamic data have made data communication and sharing the bottleneck restricting their development. Nowadays, with the rapid development of science and technology [6, 7], the flexible industry also faces diversified demand and rising technological requirements. In addition, the dynamic data scheduling of a flexible job shop often involves multiple conflicting optimization objectives [8], and it is difficult to choose between them. Therefore, scholars have conducted a great deal of research on this topic. Reference [9] proposed multiobjective job shop scheduling using a multipopulation genetic algorithm. The job shop scheduling problem is a challenging scheduling and optimization problem in the field of industry and engineering. It is related to the work efficiency and operation cost of the factory.
The completion time of all jobs is the most common optimization goal in the existing work. A multiobjective job shop scheduling approach is proposed for the first time. The job shop scheduling considers five objectives, which makes the model more practical in terms of reflecting the various needs of the factory. To optimize these five objectives at the same time, a new genetic algorithm method based on a multipopulation and multiobjective framework is proposed. First, five groups are used to optimize the five objectives, respectively. Second, to avoid each group only focusing on its corresponding single goal, a file-sharing technology is proposed to store the elite solutions collected from the five groups so that the group can obtain optimization information about other goals from the files. Third, an archive update strategy is proposed to further improve the quality of the solution in the archive. Test cases from widely used test sets are used to evaluate performance. Reference [10] proposed a new method to solve the energy-efficient flexible job shop scheduling problem. Using the improved unit-specific event time representation, a new mathematical formula for energy-efficient and energy-saving flexible job shop scheduling is proposed, and then the flexible job shop is described using the state task network. Compared with the existing models with the same or better solutions, the model can save 13.5% of the calculation time. In addition, for large-scale examples that cannot be solved by the existing models, this method can generate feasible solutions. Although the above methods have made some progress, they cannot meet the requirements of diversified market demand. In the face of multiple production lines in the multiuser, small batch, and personalized customization mode, problems include the complex calculation process, low real-time efficiency, narrow application range, and unsuitability for wide promotion and use. Therefore, a new dynamic data scheduling method of flexible industrial job shop based on digital twin technology is proposed. In this method, first, the digital twin technology is proposed, and at the same time, the full element digital information fusion model of the flexible industrial workshop is constructed to comprehensively control the full element digital information of the flexible industrial workshop. Then the cloud model is introduced to propose a CGA algorithm, which is used to solve the model, and the chaotic particle swarm optimization algorithm is used to maintain the particle diversity, so as to complete the dynamic data scheduling of flexible industrial job shop. The innovation of the design method is to select the digital twin technology, which provides the design basis and theoretical basis for the simulation inspection of the flexible industrial workshop. The dynamic data scheduling of the flexible industrial workshop is completed with the objective of minimizing the completion time and maximizing the utilization of the production line. Experimental results show that this method has ideal scheduling efficiency and scheduling ability and can spend the least time to complete the coordinated scheduling among multiple production lines. ## 2. Dynamic Data Scheduling of the Flexible Industrial Job Shop Based on Digital Twin Technology ### 2.1. Digital Twin Technology The rise of Internet technologies such as cloud computing has led to the rapid development of the industry, thus promoting the development of the intelligent industry [11, 12]. 
The interaction and integration of industrial physical structures and information networks have attracted increasing attention. Digital twin technology fuses the physical model with operation and maintenance data into a single information body, producing multidimensional, multisource simulation data, so that the physical object can be reproduced in the virtual information space. Through this information interaction, product R&D and design, production services, and other aspects can be monitored and analyzed to reduce production costs and improve product competitiveness. Digital twin technology provides the design and theoretical basis for the simulation inspection of the flexible industrial workshop [13]; however, safety hazards in the workshop can occur from time to time. Digital twin technology is therefore used to establish the dynamic data scheduling model of the virtual flexible industrial workshop; to access the main control, auxiliary control, security, and other equipment detection points in the flexible industrial workshop; and to collect various types of status information in real time. Inspection personnel no longer need to go to the site of the flexible industrial workshop to master its operation status; the twin provides reliable support for "continuous equipment inspection," gives timely warnings of abnormal equipment in the workshop, and prolongs the service life of the equipment. Combined with the current workshop operation, the operation principle of the digital twin line is given, as shown in Figure 1.

Figure 1: Schematic diagram of the digital twin operation.

When the workshop equipment is running, the service system controls the physical production line to carry out the actual production activities according to the production plan; at the same time, the twin production line maps the production operations in real time according to the real-time data of the entity; the results of analysis and calculation can then be fed back into the service system for alarm, control optimization, and prediction analysis of the production process. The flexible industrial operation model is constructed according to digital twin technology [14]. The mapping between the digital space and the physical space of the industrial equipment is divided into three parts: equipment, environment, and system. The workshop equipment layer maps the actions, spatial positions, and working states of robots, AGVs, processing equipment, and other equipment on the production line in real time to complete the processing process of each station.

### 2.2. The All-Factor Digital Information Fusion Model of the Flexible Industrial Workshop

To realize the digital information management of all elements and states in the flexible industrial workshop and the real-scene construction of digital twin equipment in the workshop, first, the feature classification and adaptive scheduling model of all-factor digital information in a flexible industrial job shop is constructed.
The multidimensional and panoramic power grid virtual real fusion analysis method is adopted, combined with the quantitative regression statistical analysis method, to realize the digital twin data fusion and regression analysis of the real scene space of the flexible industrial workshop, and the digital twin application construction realizes the integration of the "physical flexible industrial workshop" and the "virtual flexible industrial workshop" [15]. Digital technology is used to perceive and understand the digital information of all elements and states of the flexible industrial workshop, to optimize the characteristics of all elements and states of a real flexible industrial workshop, to build the digital information integration model of all elements and states of the flexible industrial workshop in combination with the four-tier architecture design, and to realize the information management of the industrial workshop in combination with three-dimensional visual management technology. The total element digital information fusion model of the flexible industrial workshop based on digital twin technology is obtained, and the structure is shown in Figure 2.

Figure 2: The all-factor digital information fusion model of the flexible industrial workshop.

According to the all-factor digital information fusion model of the flexible industrial workshop based on digital twin technology shown in Figure 2, the fusion data are analyzed through threshold judgment. The information fusion layer is video image fusion. In the all-factor digital information fusion of the flexible industrial workshop, the data exchange process is realized through the data flow. Through the multidimensional sensing device, the multidimensional information, environmental geographic information, and weather and time state information are extracted to realize the functions of digital twin technology control, equipment monitoring, abnormal alarm, and life prediction, and to determine the all-factor digital information fusion model of the flexible industrial workshop.

### 2.3. Comprehensive Control of All Digital Information Elements in the Flexible Industrial Workshop

Based on the above analysis, to realize the comprehensive control of all-factor digital information in flexible industrial workshops, digital twin technology is introduced. The key to digital twin technology is that the mathematical model can widely access the information of the physical production line and then drive the management and control of the research objects. The digital twin technology in the study carries out multidimensional management and control of all-factor digital information of the flexible industrial workshop. Its management and control structure is shown in Figure 3.

Figure 3: Multidimensional control of digital twin technology with the all-factor digital information of the flexible industrial workshop.

In the application of digital twin technology to all-factor digital information in flexible industrial job shops, priority scheduling is first used to realize the priority fusion sorting of all-factor digital information mining in flexible industrial job shops, which is recorded as follows:

$$GV = \left(gv_1, gv_2, \ldots, gv_n\right), \tag{1}$$

where $gv_1, gv_2, \ldots, gv_n$ all represent priority fusion sorting sequence values. According to the detection and statistical analysis results of the all-factor digital information of the flexible industrial workshop, the output fuzziness of all-factor digital mining is obtained to complete the comprehensive control of the workshop digital information.
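To make the priority fusion sorting of formula (1) concrete, the short Python sketch below orders a handful of hypothetical all-factor workshop records (robot, AGV, and environment readings) by a priority field to obtain the sequence GV. The record structure, field names, and priority values are illustrative assumptions and are not specified in the paper.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass(order=True)
class WorkshopRecord:
    """One all-factor data item collected from the workshop (fields are illustrative)."""
    priority: int                       # smaller value = handled earlier in the fusion order
    source: str = field(compare=False)  # e.g. "robot", "AGV", "environment"
    payload: dict = field(compare=False, default_factory=dict)

def priority_fusion_sort(records: List[WorkshopRecord]) -> List[WorkshopRecord]:
    """Return the sequence GV = (gv_1, ..., gv_n) of formula (1):
    the collected records ordered by their fusion priority."""
    return sorted(records)

if __name__ == "__main__":
    stream = [
        WorkshopRecord(priority=3, source="environment", payload={"temp_C": 24.1}),
        WorkshopRecord(priority=1, source="robot", payload={"state": "fault"}),
        WorkshopRecord(priority=2, source="AGV", payload={"position": (12, 7)}),
    ]
    for rank, rec in enumerate(priority_fusion_sort(stream), 1):
        print(rank, rec.source, rec.payload)
```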
## 3. Realizing the Dynamic Data Scheduling of Flexible Industrial Job Shops

### 3.1. Construction of the Buffer Area Layout Model of the Flexible Industrial Workshop

The layout of the buffer area of the flexible industrial workshop is analyzed and studied mainly from the workshop conditions to obtain the best local optimization scheme for the buffer area, so as to ensure the maximum improvement in production and benefits. It is assumed that the operation area required by the workshop facilities is determined through layout data analysis, and the operation area of each zone needs to be calculated in detail. The process of determining the working area of the workshop facilities is shown in Figure 4.

Figure 4: Flow chart for determining the area of the workshop facility operation buffer area.

The workshop facility job buffer plays an essential role in the manufacturing system. In addition to fulfilling the storage function of the layout model, it also needs to support different operations of the layout model, mainly receiving and storage. This paper analyzes the operating characteristics of the facility operation buffer area in a flexible industrial workshop. Generally, the buffer area is divided into several different functional areas, mainly including the following:

(1) Production area
(2) Sorting area
(3) Distribution area
(4) Waiting area

Through the above analysis, the layout optimization model of the workshop facility job buffer area is described as follows. Multiple work units with known dimensions are placed in the plane of the known workshop facility work buffer area, mainly to make the layout of each work unit more reasonable. In addition, to facilitate handling, it is necessary to reserve a certain activity space and aisle width for the staff of the flexible industrial workshop. At the same time, some constraints need to be considered in the layout of the buffer area. The process of optimizing the layout of the facility operation buffer area in a flexible industrial workshop mainly includes the following steps:

Step 1: Preparation of raw materials. In the process of buffer layout optimization, it is necessary to determine five basic elements such as product, output, and handling path. At the same time, the functional areas are divided on the basis of the operation units, and the area and shape of the best operation area are obtained by means of decomposition or combination.

Step 2: Dynamic data scheduling and relationship analysis between operating units. Material handling and loading and unloading in flexible industrial workshops are the main causes of operation costs, so layout optimization plays an essential role in the process of dynamic data scheduling. Through this analysis, the relationship between dynamic data scheduling and the various operation units is established.

Step 3: Calculate the floor area of each unit. Analyze different factors such as equipment and personnel, obtain the floor area of the operation unit through the operation area calculation formula, and ensure that the calculated area matches the actual available area.

Step 4: Draw the correlation diagram of the unit operation areas. Calculate the load of the actual area of each operation unit with the corresponding position correlation diagram.

Step 5: Revise.
Combined with the actual limiting conditions, the area of the correlation map is adjusted in real time, and several feasible schemes are formulated at the same time.

Step 6: Scheme evaluation and selection. For each feasible scheme, it is necessary to evaluate the professional technology, cost, and other aspects, modify the scheme through comparison and analysis, and obtain the final layout scheme.

Based on the above operations, the layout of the workshop facility job buffer can be described as follows:

(1) Objective function: the minimum material handling moment $\min D$ of the flexible industrial workshop and the maximum adjacency correlation degree $\max D$ of the different areas in the buffer are taken as the objective functions. The specific expressions are shown in the following formula:

$$\min D = \sum_{i=1}^{n}\sum_{j=1}^{n} d_{ij}e_{ij}, \qquad \max D = W_q - \sum_{i=1}^{n}\sum_{j=1}^{n} f_{ij}h_{ij}, \tag{2}$$

where $n$ represents the total number of operating units; $d_{ij}$ and $e_{ij}$, respectively, represent the average dynamic data scheduling amount and the total dynamic data scheduling amount between operation units $i$ and $j$; $W_q$ represents an arbitrary constant; and $f_{ij}$ and $h_{ij}$ represent the material handling speed and time between $i$ and $j$, respectively.

(2) The constraints are

$$A_i - A_j + C_{ij} \ge \frac{x_i + x_j}{2} + Bx_{ij}, \quad i = 1,2,\ldots,n-1,\; j = i+1,\ldots,n,$$
$$B_i - B_j + W_q\left(1 - C_{ij}\right) \ge \frac{y_i + y_j}{2} + Bx_{ij}, \quad i = 1,2,\ldots,n-1,\; j = i+1,\ldots,n, \tag{3}$$

where $C_{ij}$ represents the correlation function between activities; $x_i$ and $x_j$ refer to activities $i$ and $j$, respectively; $y_i$ and $y_j$ represent the length and width of the activities, respectively; and $Bx_{ij}$ represents the handling function between each pair of activities.

The buffer area layout model of the flexible industrial workshop is then given by the following formula:

$$F_{mx} = \frac{\min D + \max D}{\left(\dfrac{x_i + x_j}{2} + Bx_{ij}\right)\left(\dfrac{y_i + y_j}{2} + Bx_{ij}\right)}. \tag{4}$$

### 3.2. Model Solution Based on the CGA Algorithm

The cloud model is mainly characterized by its expected value, entropy, and super entropy. The traditional genetic algorithm relies on empirically specified or fixed crossover and mutation probabilities. The basic operating principle is that when the average fitness of the population is greater than that of an individual, better individuals are reported as the fitness value increases, and better individuals are thereby formed. According to the characteristics of the normal cloud model, these disadvantages of the genetic algorithm for dynamic data scheduling of the flexible industrial job shop can be effectively mitigated. The calculation process of the CGA (cloud-model genetic algorithm) is as follows:

$$P_{abc} = \begin{cases} o_1 \exp\!\left(-\dfrac{(f - E_x)^2}{2E_n^2}\right), & f > 0, \\ o_3, & f < 0, \end{cases} \tag{5}$$

$$P_{mnl} = \begin{cases} o_2 \exp\!\left(-\dfrac{(f - E_x)^2}{2E_n^2}\right), & f \ge 0, \\ o_4, & f < 0, \end{cases} \tag{6}$$

where $P_{abc}$ and $P_{mnl}$ represent the overall and local optimal solutions, respectively; $o_1$, $o_2$, $o_3$, and $o_4$ represent the control parameters; $f$ indicates the fitness value; and $E_x$ and $E_n$ represent the expected value and entropy of the cloud model obtained from the fitness values of the variant individuals. The detailed operation steps of the CGA algorithm are given in Figure 5.

Figure 5: Operation flow chart of the CGA algorithm.

The CGA algorithm is used to solve the layout model of the facility job buffer area in a flexible industrial job shop.
The detailed operation steps are as follows:

Step 1: Initialize the population.
Step 2: Calculate the fitness value of each individual.
Step 3: Select, copy, and migrate: (1) copy the best individual to the next generation; (2) replicate the elite population; (3) eliminate the worst individuals and replace them with randomly generated foreign individuals.
Step 4: Cross and mutate all elite groups: (1) the membership degree is generated randomly according to the uniform distribution; (2) the fitness value is determined by both parents; (3) the search scope of the different variables is determined.
Step 5: Perform the mutation operation: (1) determine the number of original individuals; (2) determine the variable search range.
Step 6: Judge whether the algorithm meets the termination conditions. If so, stop the operation, output the optimal solution, and obtain the layout optimization scheme of the facility job buffer area in the flexible industrial job shop; otherwise, return to Step 2. The model solution based on the CGA algorithm is thus completed.

### 3.3. Realize the Dynamic Data Scheduling of the Flexible Industrial Job Shop

#### 3.3.1. The Flexible Industrial Job Shop Dynamic Data Scheduling Problem

The flexible industrial job shop dynamic data scheduling problem determines the processing order of workpieces on each workshop machine through a certain optimization strategy so as to minimize the dynamic data scheduling time of the flexible industrial job shop. The corresponding known conditions are described as follows:

(1) The workpiece set to be processed is $K = \{k_1, k_2, \ldots, k_n\}$, where $k_n$ is the $n$th workpiece to be processed.
(2) The set of machines capable of processing is $M = \{m_1, m_2, \ldots, m_m\}$, where $m_m$ is the $m$th machine.
(3) The operation sets of the workpieces are $J = (J_1, J_2, \ldots, J_n)^{T}$ with $J_i = (j_{i1}, j_{i2}, \ldots, j_{ik}, \ldots, j_{im})$, where $j_{ik}$ denotes the $k$th operation of the $i$th workpiece.

The dynamic data scheduling problem of a flexible industrial job shop must meet the following two constraints:

(1) The operations of each workpiece must be processed in order, and the processing priority of each workpiece is the same.
(2) A process will not be interrupted by another process during processing.

#### 3.3.2. Dynamic Data Scheduling Solution of the Flexible Industrial Job Shop Based on the Chaotic Particle Swarm Optimization Algorithm

Particle coding is the first problem to be solved in the dynamic data scheduling of a flexible industrial job shop; the particles are then decoded to obtain the scheduling scheme. For a 3 × 3 flexible industrial job shop dynamic data scheduling problem, each particle is composed of 3 × 3 = 9 bits, with each workpiece index appearing three times, and its particle code is set as follows: [1, 1, 1, 2, 2, 2, 3, 3, 3]. The processing sequence of each workpiece is shown in Figure 6.

Figure 6: Dynamic data scheduling scheme of the 3 × 3 flexible industrial job shop.

As shown in Figure 6, according to the completion time objective of the dynamic data scheduling optimization of the flexible industrial workshop, the particle fitness function of the chaotic particle swarm optimization algorithm is designed as follows:

$$F_{asf} = \frac{100 \times E_{opt}}{T_i \times J_m}, \tag{7}$$

where $E_{opt}$ represents the fitness function coefficient, $T_i$ represents all operation times of the machine, and $J_m$ represents the completion time after decoding the $m$th particle. When particle swarm aggregation is severe, the optimal particle changes little or not at all over many iterations.
The specific chaotic operations are explained as follows. The fitness variance $\sigma^2$ of the particle swarm is calculated as shown in the following formula:

$$\sigma^2 = \sum_{i=1}^{n} \left(\frac{F_i - F_{avg}}{F}\right)^2, \tag{8}$$

where $F_{avg}$ represents the average fitness value of the current particle swarm, $F_i$ represents the fitness value of the $i$th particle, and $F$ represents the total number of particles in the swarm. If $\sigma^2 < 1$, the particle swarm exhibits serious aggregation and premature convergence. A portion of the particles is then perturbed chaotically until the termination conditions are met, and the optimal solution is output to complete the dynamic data scheduling of a flexible industrial job shop based on digital twin technology.
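The Python sketch below illustrates the scheduling elements just described: the operation-based particle code [1, 1, 1, 2, 2, 2, 3, 3, 3] is decoded into a makespan, a fitness in the spirit of formula (7) rewards short completion times, the variance of formula (8) flags premature convergence, and a logistic-map perturbation stands in for the chaotic operation. The 3 × 3 processing times, the convergence threshold, and the swap-based perturbation are illustrative assumptions, not the authors' implementation.

```python
import random

# Illustrative 3 x 3 instance: PROC[j][k] = (machine, time) for operation k of job j.
PROC = [
    [(0, 3), (1, 2), (2, 2)],   # workpiece 1
    [(1, 2), (2, 4), (0, 3)],   # workpiece 2
    [(2, 3), (0, 2), (1, 4)],   # workpiece 3
]

def decode_makespan(particle):
    """Decode an operation-based code such as [1,1,1,2,2,2,3,3,3]:
    the k-th occurrence of job j means operation k of job j. Returns the makespan."""
    next_op = [0] * len(PROC)
    job_ready = [0] * len(PROC)
    machine_ready = [0] * 3
    for job in particle:
        j = job - 1
        m, t = PROC[j][next_op[j]]
        start = max(job_ready[j], machine_ready[m])
        job_ready[j] = machine_ready[m] = start + t
        next_op[j] += 1
    return max(job_ready)

def fitness(particle, e_opt=1.0):
    # In the spirit of formula (7): larger fitness for shorter completion time.
    return 100.0 * e_opt / decode_makespan(particle)

def fitness_variance(fits):
    # Formula (8): sigma^2 = sum_i ((F_i - F_avg) / F)^2, with F the swarm size.
    f_avg = sum(fits) / len(fits)
    n = len(fits)
    return sum(((f - f_avg) / n) ** 2 for f in fits)

def chaotic_perturb(particle, steps=5):
    """Logistic-map-driven swap perturbation applied when convergence is premature."""
    x, p = random.random(), particle[:]
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)          # logistic map in its chaotic regime
        i = int(x * len(p)) % len(p)
        j = (i + 1) % len(p)
        p[i], p[j] = p[j], p[i]
    return p

if __name__ == "__main__":
    swarm = [random.sample([1, 1, 1, 2, 2, 2, 3, 3, 3], 9) for _ in range(10)]
    fits = [fitness(p) for p in swarm]
    if fitness_variance(fits) < 1.0:     # premature-convergence test of formula (8)
        swarm = [chaotic_perturb(p) for p in swarm]
    best = max(swarm, key=fitness)
    print("best code:", best, "makespan:", decode_makespan(best))
```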
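Likewise, before turning to the experiments, the buffer-area layout objective of Section 3.1 can be sketched numerically. The Python fragment below evaluates min D and max D from formula (2) and combines them into the layout score of formula (4); the example matrices, the constant W_q, and the grouping used for formula (4) are illustrative assumptions made for this sketch.

```python
def min_handling_moment(d, e):
    """min D of formula (2): total handling moment, sum_i sum_j d_ij * e_ij."""
    n = len(d)
    return sum(d[i][j] * e[i][j] for i in range(n) for j in range(n))

def max_adjacency(f, h, w_q):
    """max D of formula (2): W_q minus the accumulated handling speed-time products."""
    n = len(f)
    return w_q - sum(f[i][j] * h[i][j] for i in range(n) for j in range(n))

def layout_score(min_d, max_d, x_i, x_j, y_i, y_j, bx_ij):
    """F_mx of formula (4), read here as the two objectives normalized by the
    activity-footprint terms (an assumed grouping)."""
    width = (x_i + x_j) / 2.0 + bx_ij
    length = (y_i + y_j) / 2.0 + bx_ij
    return (min_d + max_d) / (width * length)

if __name__ == "__main__":
    # Two operating units with illustrative scheduling amounts, speeds, and times.
    d = [[0, 4], [4, 0]]      # average dynamic data scheduling amount d_ij
    e = [[0, 2], [2, 0]]      # total dynamic data scheduling amount e_ij
    f = [[0, 1.5], [1.5, 0]]  # material handling speed f_ij
    h = [[0, 3], [3, 0]]      # material handling time h_ij
    w_q = 20.0                # arbitrary constant W_q
    min_d = min_handling_moment(d, e)
    max_d = max_adjacency(f, h, w_q)
    print("min D =", min_d, "max D =", max_d)
    print("F_mx  =", round(layout_score(min_d, max_d, 6, 4, 5, 3, 1.0), 3))
```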
## 4. Experimental Analysis

To verify the effect and feasibility of flexible industrial job shop dynamic data scheduling based on digital twin technology, experiments were carried out to compare our approach with the methods in references [9, 10]. The dynamic data and parameters of the flexible industrial workshop refer to a large machinery production workshop in the old industrial base in Northeast China. C++ and OpenCV 2.2 were used to build the simulation experiment environment. The experimental parameters were set as shown in Table 1.

Table 1: Experimental parameters.

| Project | Parameter |
|---|---|
| Online monitoring network | GSM network/GPRS network |
| System resource | 16 MB+ |
| Standard | 802.11b/802.15.1/802.15.4 |
| Network size | 1/32/7 |
| Bandwidth (kb/s) | 64–128+ |
| Protocol used | ZigBee protocol |
| Transmission distance (m) | 1,000+ |
| Frame length | 30 ms |
| Slot length | 2.25 ms |
| Duplex mode | FDD/TDD |
| Carrier bandwidth | 15 MHz |
| Multiple access mode | Direct spread |
| Uplink modulation mode | BPSK |
| Downlink modulation mode | QPSK |

In the large-scale machinery production workshop, there are a large number of production lines, a large scheduling scale, and very strict requirements for product output. In this paper, 10 production lines are selected. On the premise that the numbers of workpieces and machines differ, the method of this paper and the methods of references [9, 10] are used to schedule and adjust the production lines. The results are shown in Table 2.

Table 2: Comparison of the resource allocation results of the three scheduling algorithms.

| Production line | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| Number of workpieces | 20 | 30 | 20 | 25 | 30 |
| Number of machines | 20 | 15 | 20 | 20 | 20 |
| C** | 920 | 1,155 | 935 | 1,036 | 1,208 |
| Proposed method, C* | 918 | 1,158 | 940 | 1,035 | 1,210 |
| Reference [9] method, C* | 928 | 1,165 | 955 | 1,064 | 1,231 |
| Reference [10] method, C* | 901 | 1,267 | 980 | 1,143 | 1,341 |

Note: C** in Table 2 represents the Pareto optimal solution, which refers to the optimal resource scheduling result in an ideal state, and C* represents the optimal solution obtained after 50 iterations of each method.

It can be seen from Table 2 that among the three methods, the resource allocation result of this method after scheduling is closest to the Pareto optimal solution, which proves that the method can reasonably schedule tasks according to the number of workpieces and machines, keep the machines running continuously, avoid outages caused by unreasonable resource scheduling, and save production time. Experiments were also carried out to assess the balance of the multiproduction-line coordinated scheduling of the three methods, that is, which algorithm can avoid an uneven division of labor with some lines idle and others overloaded. The evaluation indexes are the maximum completion time and the production line load. The experimental results are shown in Figure 7, where the solid line represents the maximum completion time and the dotted line represents the load of the production line.

Figure 7: Comparison of the maximum completion time weights of the multiproduction-line scheduling.

As can be seen from Figure 7, the maximum completion time peak appears in the method of reference [9], and the production line load peak appears in the method of reference [10]. The maximum completion time and production line load curves of these two algorithms fluctuate greatly and are generally high.
The maximum completion time and production line load curves of our method change gently. This shows that the method achieves a certain balance in the coordinated scheduling of multiple production lines, can ensure a balanced division of labor across the production lines, and does not leave some lines idle while others are working. Combining the two evaluation indexes of maximum completion time and production line load, the income standard of the algorithms can be further evaluated; that is, the final income obtained by the machining enterprise by ensuring the coordinated scheduling of the production lines. The calculation formula is as follows:

$$C_{MT}(a_i) = \min_{a_j \in B_s} \frac{M_{a_j}}{M_{a_i}} \times \min_{a_j \in B_s} \frac{C_{a_j}}{C_{a_i}}, \tag{9}$$

where $C_{MT}$ represents the comprehensive income; $B_s = \{b_1, b_2, b_3\}$ corresponds to the methods of this paper, reference [9], and reference [10], respectively; $M_a$ represents the maximum completion time of method $a$; and $C_a$ represents its production line load. We used formula (9) to calculate the comprehensive income of the three methods, and the results are shown in Figure 8.

Figure 8: Comparison of the comprehensive income of the three methods.

It can be seen from Figure 8 that the comprehensive income of our method is the highest and most stable, while the other two methods show large fluctuations. This indicates that the coordinated scheduling of multiple production lines by our method can maximize the production benefits of flexible industrial operations and yields ideal scheduling efficiency and scheduling ability. To sum up, the dynamic data scheduling method of the flexible industrial job shop based on digital twin technology has good scheduling performance.

## 5. Conclusions and Prospects

The innovation of the flexible industrial job shop dynamic data scheduling method based on digital twin technology is that it selects digital twin technology to complete the flexible industrial job shop dynamic data scheduling with the goals of minimizing the completion time and maximizing the production line utilization. In the flexible industrial workshop, the dynamic data among multiple production lines has always been one of the research focuses. The following conclusions were obtained through the research:

(1) Because of the great complexity of the problem, this paper introduces a cloud computing mode to reduce the calculation difficulty and solves the problem of coordinated scheduling among multiple production lines from the perspective of case-based reasoning.

(2) The research shows that the proposed method can complete the coordinated scheduling among multiple production lines in the least amount of time.

Owing to the limited time and energy, the proposed method still has shortcomings. Follow-up research will focus on the following aspects:

(1) In the actual layout process, each working area of the facility operation buffer area of the flexible industrial workshop can take various shapes. In later research, this changeable operation problem needs to be considered and solved by relevant algorithms.

(2) The conditional constraints of export location and production location can subsequently be added to the all-factor digital information fusion model of the flexible industrial workshop to comprehensively analyze the impact of different constraints on the optimization results.

(3) Because the service lives of different products are completely different and the types of products are becoming increasingly complex, the layout optimization of the facility job buffer in a flexible industrial job shop could be studied further in the future.
--- *Source: 1009507-2022-08-03.xml*
--- ## Abstract Aiming at the problems of premature convergence of existing workshop dynamic data scheduling methods and the decline in product output, a flexible industrial job shop dynamic data scheduling method based on digital twin technology is proposed. First, digital twin technology is proposed, which provides a design and theoretical basis for the simulation tour of a flexible industrial job shop, building the all-factor digital information fusion model of a flexible industrial workshop to comprehensively control the all-factor digital information of the workshops. A CGA algorithm is proposed by introducing the cloud model. The algorithm is used to solve the model, and the chaotic particle swarm optimization algorithm is used to maintain the particle diversity to complete the dynamic data scheduling of a flexible industrial job shop. The experimental results show that the designed method can complete the coordinated scheduling among multiple production lines in the least amount of time. --- ## Body ## 1. Introduction The job shop dynamic data scheduling problem is a typical combinatorial optimization problem and one of the key problems to be considered in flexible industrial production management technology [1]. How to find an optimal scheduling scheme that meets the constraints is the basis and key to improving the production efficiency of a flexible industry. As an extension of classical JSP (Java server page), the flexible job shop dynamic data scheduling problem not only needs to sort the processes on each flexible industrial job shop machine [2] but also needs to select the appropriate machine for each process before sorting, which clearly increases the difficulty of solving flexible job shop scheduling [3]. At present, there are many chemical enterprises with complex systems, uneven information construction levels, and inconsistent data standards. As a typical industry, flexible industrial workshops are facing new situations such as large price fluctuations, high gas supply pressure, and the continuous emergence of new energy. At the same time, the international market competition is becoming increasingly fierce, and environmental control is becoming increasingly strict [4, 5]. For the production data acquisition system of flexible industrial workshops, due to the poor quality of manually collected data and the lack of dynamic data, data communication and sharing have become the bottleneck restricting its development. Nowadays, with the rapid development of social science and technology [6, 7], the flexible industry is also facing the situation of diversified demand and technology improvement. In addition, the dynamic data scheduling of a flexible job shop often involves multiple conflicting optimization objectives [8], and it is difficult to choose between them. Therefore, scholars have conducted a great deal of research on this aspect.Reference [9] proposed multiobjective job shop scheduling using a multipopulation genetic algorithm. The job shop scheduling problem is a challenging scheduling and optimization problem in the field of industry and engineering. It is related to the work efficiency and operation cost of the factory. The completion time of all jobs is the most common optimization goal in the existing work. A multiobjective job shop scheduling approach is proposed for the first time. The job shop scheduling considers five objectives, which makes the model more practical in terms of reflecting the various needs of the factory. 
To optimize these five objectives at the same time, a new genetic algorithm method based on a multipopulation and multiobjective framework is proposed. First, five groups are used to optimize the five objectives, respectively. Second, to avoid each group only focusing on its corresponding single goal, a file-sharing technology is proposed to store the elite solutions collected from the five groups so that the group can obtain optimization information about other goals from the files. Third, an archive update strategy is proposed to further improve the quality of the solution in the archive. Test cases from widely used test sets are used to evaluate performance. Reference [10] proposed a new method to solve the energy-efficient flexible job shop scheduling problem. Using the improved unit-specific event time representation, a new mathematical formula for energy-efficient and energy-saving flexible job shop scheduling is proposed, and then the flexible job shop is described using the state task network. Compared with the existing models with the same or better solutions, the model can save 13.5% of the calculation time. In addition, for large-scale examples that cannot be solved by the existing models, this method can generate feasible solutions. Although the above methods have made some progress, they cannot meet the requirements of diversified market demand. In the face of multiple production lines in the multiuser, small batch, and personalized customization mode, problems include the complex calculation process, low real-time efficiency, narrow application range, and unsuitability for wide promotion and use. Therefore, a new dynamic data scheduling method of flexible industrial job shop based on digital twin technology is proposed. In this method, first, the digital twin technology is proposed, and at the same time, the full element digital information fusion model of the flexible industrial workshop is constructed to comprehensively control the full element digital information of the flexible industrial workshop. Then the cloud model is introduced to propose a CGA algorithm, which is used to solve the model, and the chaotic particle swarm optimization algorithm is used to maintain the particle diversity, so as to complete the dynamic data scheduling of flexible industrial job shop. The innovation of the design method is to select the digital twin technology, which provides the design basis and theoretical basis for the simulation inspection of the flexible industrial workshop. The dynamic data scheduling of the flexible industrial workshop is completed with the objective of minimizing the completion time and maximizing the utilization of the production line. Experimental results show that this method has ideal scheduling efficiency and scheduling ability and can spend the least time to complete the coordinated scheduling among multiple production lines. ## 2. Dynamic Data Scheduling of the Flexible Industrial Job Shop Based on Digital Twin Technology ### 2.1. Digital Twin Technology The rise of Internet technologies such as cloud computing has led to the rapid development of the industry, thus promoting the development of the intelligent industry [11, 12]. The interaction and integration of industrial physical structures and information networks have attracted increasing attention. Digital twin technology integrates the physical model, operation, and maintenance data into an information body to obtain multidimensional and multiinformation simulation data into the information body. 
The physical object can be reproduced in the information virtual space. Through the information interaction, the product R&D and design, production services, and other aspects can be monitored and analyzed to reduce production costs and improve product competitiveness.The digital twin technology provides the design basis and theoretical basis for the simulation inspection of the flexible industrial workshop [13]; however, safety hazards in the workshop can occur from time to time. The digital twin technology is used to establish the dynamic data scheduling model of the virtual flexible industrial workshop; access the main control, auxiliary control, security, and other equipment detection points in the flexible industrial workshop; and collect various types of status information in real time. It is not necessary for the transportation inspection personnel to go to the site of the flexible industrial operation workshop in order to master the operation status of the flexible industrial operation workshop, provide reliable support for the workshop to realize “continuous equipment inspection,” provide timely warnings of abnormal equipment in the workshop, and prolong the service life of the equipment. Combined with the current workshop operation, the operation principle of the digital twin line is given, as shown in Figure 1.Figure 1 Schematic diagram of the digital twin operation.When the workshop equipment is running, the service system controls the physical production line to carry out the actual production activities according to the production plan; at the same time, the twin production line maps the production operations in real time according to the real-time data of the entity; the results of analysis and calculation can be fed back into the service system in the future for alarm, control optimization, and prediction analysis of the production process. The flexible industrial operation model is constructed according to the digital twin technology [14]. The mapping between the digital space and the physical space of the industrial equipment is divided into three parts: equipment, environment, and system. The workshop equipment maps the actions, spatial positions, and working states of robots, AGVs, processing equipment, and other equipment on the production line in real time to complete the processing process of each station. ### 2.2. The Factor Digital Information Fusion Model of the Flexible Industrial Workshop To realize the digital information management of all elements and states in the flexible industrial workshop and to realize the real scene construction of digital twin equipment in the flexible industrial workshop, first, the feature classification and adaptive scheduling model of all-factor digital information in a flexible industrial job shop is constructed. 
The multidimensional and panoramic power grid virtual-real fusion analysis method is adopted, combined with the quantitative regression statistical analysis method, to realize the digital twin data fusion and regression analysis of the real scene space of the flexible industrial workshop, and the digital twin application construction realizes the integration of the “physical flexible industrial workshop” and the “virtual flexible industrial workshop” [15]. Digital technology is used to perceive and understand the digital information of all elements and states of the flexible industrial workshop, to optimize the characteristics of all elements and states of a real flexible industrial workshop, to build the digital information integration model of all elements and states of the flexible industrial workshop in combination with the four-tier architecture design, and to realize the information management of the industrial workshop in combination with three-dimensional visual management technology. The all-element digital information fusion model of the flexible industrial workshop based on digital twin technology is thus obtained, and its structure is shown in Figure 2.

Figure 2: The all-factor digital information fusion model of the flexible industrial workshop.

According to the all-factor digital information fusion model of the flexible industrial workshop based on digital twin technology shown in Figure 2, the fused data are analyzed through threshold judgment, and the information fusion layer performs video image fusion. In the all-factor digital information fusion of the flexible industrial workshop, the data exchange process is realized through the data flow. Through the multidimensional sensing devices, the multidimensional information, environmental geographic information, and weather and time state information are extracted to realize the functions of digital twin control, equipment monitoring, abnormal alarm, and life prediction and to determine the all-factor digital information fusion model of the flexible industrial workshop.

### 2.3. Comprehensive Control of All Digital Information Elements in the Flexible Industrial Workshop

Based on the above analysis, digital twin technology is introduced to realize the comprehensive control of all-factor digital information in flexible industrial workshops. The key to digital twin technology is that the mathematical model can widely access the information of the physical production line and then drive the management and control of the research objects. In this study, digital twin technology carries out multidimensional management and control of the all-factor digital information of the flexible industrial workshop; its management and control structure is shown in Figure 3.

Figure 3: Multidimensional control of digital twin technology with the all-factor digital information of the flexible industrial workshop.

When digital twin technology is applied to the all-factor digital information of flexible industrial job shops, priority scheduling is first used to realize the priority fusion sorting of all-factor digital information mining, which is recorded as

$$GV = \{ gv_1, gv_2, \ldots, gv_n \}, \tag{1}$$

where $gv_1, gv_2, \ldots, gv_n$ represent the priority fusion sorting sequence values. According to the detection and statistical analysis results of the all-factor digital information of the flexible industrial workshop, the output fuzziness of all-factor digital mining is obtained to complete the comprehensive control of the workshop digital information.
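For illustration, the sketch below shows how the priority fusion sorting of equation (1) could be realized in practice. It is a minimal Python sketch; the record fields and priority scores are hypothetical rather than taken from the workshop system described here.

```python
from dataclasses import dataclass

@dataclass
class InfoItem:
    """One all-factor digital information record from the workshop (hypothetical fields)."""
    source: str      # acquisition channel, e.g. PLC, video, environment feed
    priority: float  # fused priority score gv_i
    payload: dict

def priority_fusion_sort(items):
    """Return the priority fusion sorting sequence GV = {gv_1, ..., gv_n} of equation (1):
    records ordered by descending fused priority so high-priority data are handled first."""
    return sorted(items, key=lambda item: item.priority, reverse=True)

# Usage: three hypothetical records from different acquisition channels.
records = [
    InfoItem("video", 0.42, {"frame": 118}),
    InfoItem("plc", 0.91, {"spindle_rpm": 1450}),
    InfoItem("environment", 0.65, {"temp_c": 27.3}),
]
gv = priority_fusion_sort(records)
print([r.source for r in gv])  # ['plc', 'environment', 'video']
```

How the priority scores themselves are fused from the individual sensing channels is not specified in the text; the sketch only shows the sorting step of equation (1).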
## 3. Realizing the Dynamic Data Scheduling of Flexible Industrial Job Shops

### 3.1. Construction of the Buffer Area Layout Model of the Flexible Industrial Workshop

The layout of the buffer area of the flexible industrial workshop is analyzed mainly from the workshop conditions to obtain the best local optimization scheme for the buffer area and to ensure the greatest improvement in production and benefits. It is assumed that the operation area required by the workshop facilities is determined through layout data analysis, and the operation area of each zone needs to be calculated in detail. The process of determining the working area of the workshop facilities is shown in Figure 4.

Figure 4: Flow chart for determining the area of the workshop facility operation buffer area.

The workshop facility job buffer plays an essential role in the manufacturing system. In addition to fulfilling the storage function of the layout model, it also needs to support different operations of the layout model, mainly including receiving and storage. This paper analyzes the operation characteristics of the facility operation buffer area in a flexible industrial workshop. Generally, the buffer area is divided into several functional areas, mainly including (1) the production area, (2) the sorting area, (3) the distribution area, and (4) the waiting area.

Through the above analysis, the layout optimization model of the workshop facility job buffer area is described as follows. Multiple work units with known dimensions are placed in the plane of the known workshop facility work buffer area, mainly to make the layout of each work unit more reasonable. In addition, to facilitate handling, a certain activity space and aisle width must be reserved for the staff of the flexible industrial workshop. At the same time, some constraints need to be considered in the layout of the buffer area. Optimizing the layout of the facility operation buffer area in a flexible industrial workshop mainly involves the following steps:

Step 1: Preparation of raw materials. In buffer layout optimization, it is necessary to determine five basic elements such as product, output, and handling path. At the same time, the functional areas are divided on the basis of the operation units, and the area and shape of the best operation area are obtained by decomposition or combination.

Step 2: Dynamic data scheduling and relationship analysis between operating units. Material handling and loading and unloading in flexible industrial workshops are the main causes of operating costs, so layout optimization plays an essential role in dynamic data scheduling. Through this analysis, the relationship between dynamic data scheduling and the various operation units must be analyzed.

Step 3: Calculate the floor area of each unit. Analyze factors such as equipment and personnel, obtain the floor area of each operation unit through the operation area calculation formula, and ensure that the calculated area matches the actually available area.

Step 4: Draw the correlation diagram of the operation unit areas. Calculate the load of the actual area of each operation unit against the corresponding position correlation diagram.

Step 5: Revise.
Combined with the actual constraints, the areas in the correlation diagram are adjusted in real time, and several feasible schemes are formulated at the same time.

Step 6: Scheme evaluation and selection. For each feasible scheme, it is necessary to evaluate technology, cost, and other aspects, modify the scheme through comparison and analysis, and obtain the final layout scheme.

Based on the above operations, the layout of the workshop facility job buffer area can be described as follows:

(1) Objective function: the minimum material handling moment $\min D$ of the flexible industrial workshop and the maximum adjacency correlation degree $\max D$ between the different functional areas of the buffer zone are taken as the objective functions, expressed as

$$\min D = \sum_{i=1}^{n}\sum_{j=1}^{n} d_{ij} e_{ij}, \qquad \max D = W_q - \sum_{i=1}^{n}\sum_{j=1}^{n} f_{ij} h_{ij}, \tag{2}$$

where $n$ represents the total number of operating units; $d_{ij}$ and $e_{ij}$ represent, respectively, the average and the total dynamic data scheduling amount between operation units $i$ and $j$; $W_q$ is an arbitrary constant; and $f_{ij}$ and $h_{ij}$ represent the material handling speed and time between $i$ and $j$, respectively.

(2) The constraints are

$$A_i - A_j + C_{ij} \ge \frac{x_i + x_j}{2} + Bx_{ij}, \qquad B_i - B_j + W_q\left(1 - C_{ij}\right) \ge \frac{y_i + y_j}{2} + Bx_{ij}, \qquad i = 1, \ldots, n-1,\; j = i+1, \ldots, n, \tag{3}$$

where $C_{ij}$ represents the correlation function between activities; $x_i$ and $x_j$ represent the $i$-th and $j$-th activities, respectively; $y_i$ and $y_j$ represent the length and width of the activity, respectively; and $Bx_{ij}$ represents the handling function between the activities.

The buffer area layout model of the flexible industrial workshop is then given by

$$F_{mx} = \frac{\min D + \max D}{\left(\dfrac{x_i + x_j}{2} + Bx_{ij}\right)\left(\dfrac{y_i + y_j}{2} + Bx_{ij}\right)}. \tag{4}$$

### 3.2. Model Solution Based on the CGA Algorithm

The cloud model is mainly characterized by its expected value, entropy, and super entropy. The traditional genetic algorithm has to use empirically specified or fixed crossover and mutation probabilities. Its operating principle is that when the average fitness of the population is greater than that of an individual, the better individuals need to be retained as the fitness value increases; in this way, better individuals are gradually formed. According to the characteristics of the normal cloud model, these disadvantages of the genetic algorithm for dynamic data scheduling of the flexible industrial job shop can be effectively remedied. The calculation process of the CGA (cloud genetic algorithm) is as follows:

$$P_{abc} = \begin{cases} o_1\, e^{-\left(f - E_x\right)^{2}/\left(2 E_n^{2}\right)}, & f > 0, \\ o_3, & f < 0, \end{cases} \tag{5}$$

$$P_{mnl} = \begin{cases} o_2\, e^{-\left(f - E_x\right)^{2}/\left(2 E_n^{2}\right)}, & f \ge 0, \\ o_4, & f < 0, \end{cases} \tag{6}$$

where $P_{abc}$ and $P_{mnl}$ represent the overall and local optimal solutions, respectively; $o_1$, $o_2$, $o_3$, and $o_4$ are control parameters; $f$ denotes the fitness value; and $E_x$ and $E_n$ represent the fitness values of the variant individuals. The detailed operation flow of the CGA algorithm is given in Figure 5.

Figure 5: Operation flow chart of the CGA algorithm.

The CGA algorithm is used to solve the layout model of the facility job buffer area in the flexible industrial job shop.
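Before the step-by-step procedure, the following minimal sketch shows how the cloud-model probabilities of equations (5) and (6) might be evaluated. It assumes the usual normal-cloud membership form (a negative squared exponent), and the control parameters $o_1$–$o_4$ and the values of $E_x$ and $E_n$ used below are hypothetical.

```python
import math

def cloud_probability(f, Ex, En, o_pos, o_neg):
    """Normal-cloud adaptive probability as in equations (5)/(6): for a non-negative
    fitness value the probability follows a normal-cloud membership curve around Ex
    with entropy En; otherwise a fixed control parameter is used. o_pos/o_neg play the
    roles of o1/o3 (equation (5)) or o2/o4 (equation (6))."""
    if f >= 0:
        return o_pos * math.exp(-((f - Ex) ** 2) / (2 * En ** 2))
    return o_neg

# Hypothetical values: Ex and En would in practice be derived from the current
# population's fitness statistics, as described for the cloud model.
Ex, En = 0.0, 0.35
p_abc = cloud_probability(f=0.12, Ex=Ex, En=En, o_pos=0.9, o_neg=0.6)    # equation (5)
p_mnl = cloud_probability(f=0.12, Ex=Ex, En=En, o_pos=0.1, o_neg=0.05)   # equation (6)
print(round(p_abc, 3), round(p_mnl, 3))
```
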
The detailed operation steps are as follows:

Step 1: Initialize the population.

Step 2: Calculate the fitness value of each individual.

Step 3: Select, copy, and migrate: (1) copy the best individual to the next generation; (2) replicate the elite population; (3) eliminate the worst individuals and replace them with randomly generated foreign individuals.

Step 4: Cross and mutate all elite groups: (1) the membership degree is randomly generated according to a uniform distribution; (2) the fitness value is determined by both parents; (3) determine the search range of the different variables.

Step 5: Perform the mutation operation: (1) determine the number of original individuals; (2) determine the variable search range.

Step 6: Judge whether the algorithm meets the termination conditions. If so, stop, output the optimal solution, and obtain the layout optimization scheme of the facility job buffer area in the flexible industrial job shop; otherwise, return to Step 2. This completes the model solution based on the CGA algorithm.

### 3.3. Realize the Dynamic Data Scheduling of the Flexible Industrial Job Shop

#### 3.3.1. The Flexible Industrial Job Shop Dynamic Data Scheduling Problem

The flexible industrial job shop dynamic data scheduling problem determines the processing order of workpieces on each workshop machine through an optimization strategy so as to minimize the dynamic data scheduling time of the flexible industrial job shop. The known conditions are as follows:

(1) The set of workpieces to be processed is $K = \{k_1, k_2, \ldots, k_n\}$, where $k_n$ is the $n$-th workpiece to be processed.

(2) The set of machines capable of processing is $M = \{m_1, m_2, \ldots, m_m\}$, where $m_m$ is the $m$-th machine.

(3) The operation set is $J = (J_1, J_2, \ldots, J_n)^{T}$ with $J_i = \{j_{i1}, j_{i2}, \ldots, j_{ik}, \ldots, j_{im}\}$, where $j_{ik}$ denotes the $k$-th operation of the $i$-th workpiece (so $j_{i1}$ is its first operation).

The dynamic data scheduling problem of a flexible industrial job shop must satisfy two constraints: (1) the operations of each workpiece must be processed in their prescribed order, and all workpieces have the same processing priority; (2) once started, an operation cannot be interrupted by another operation.

#### 3.3.2. Dynamic Data Scheduling Solution of the Flexible Industrial Job Shop Based on the Chaotic Particle Swarm Optimization Algorithm

Particle coding is the first problem to be solved in the dynamic data scheduling of a flexible industrial job shop; the particles are then decoded to obtain the optimal scheduling scheme. For a 3 × 3 flexible industrial job shop dynamic data scheduling problem, each particle consists of 3 × 3 = 9 positions, with each workpiece index appearing three times (once per operation); a particle code is, for example, [1, 1, 1, 2, 2, 2, 3, 3, 3]. The processing sequence of each workpiece is shown in Figure 6.

Figure 6: Dynamic data scheduling scheme of the 3 × 3 flexible industrial job shop.

As shown in Figure 6, according to the completion-time objective of the dynamic data scheduling optimization of the flexible industrial workshop, the particle fitness function of the chaotic particle swarm optimization algorithm is designed as

$$F_{asf} = \frac{100 \times E_{opt}}{T_i \times J_m}, \tag{7}$$

where $E_{opt}$ is the fitness function coefficient, $T_i$ is the total operation time on the machines, and $J_m$ is the completion time obtained after decoding the $m$-th particle. When particle swarm aggregation becomes severe, the optimal particle changes little or not at all over many iterations.
The specific chaotic operations are as follows. The fitness variance $\sigma^2$ of the particle swarm is calculated as

$$\sigma^2 = \sum_{i=1}^{n}\left(\frac{F_i - F_{avg}}{F}\right)^{2}, \tag{8}$$

where $F_{avg}$ represents the average fitness value of the current particle swarm, $F_i$ represents the fitness value of the $i$-th particle, and $F$ represents the total number of particles in the swarm. If $\sigma^2 < 1$, the particle swarm shows serious aggregation and premature convergence, and some of the particles are chaotically perturbed. This is repeated until the termination conditions are met and the optimal solution is output, completing the dynamic data scheduling of the flexible industrial job shop based on digital twin technology.
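As an illustration of Section 3.3.2, the sketch below combines the fitness function of equation (7), the fitness-variance test of equation (8), and a logistic-map perturbation as one possible form of the chaotic operation. The decoding step that turns a particle into a makespan is abstracted away, and all numerical values are hypothetical.

```python
import random

def fitness(E_opt, T_total, makespan):
    """Particle fitness of equation (7): F = 100 * E_opt / (T_total * makespan).
    Shorter makespans give larger fitness; T_total and the makespan come from the
    (abstracted) schedule decoder."""
    return 100.0 * E_opt / (T_total * makespan)

def fitness_variance(fits):
    """Swarm fitness variance of equation (8): sigma^2 = sum(((F_i - F_avg) / F)^2),
    with F the swarm size used as the normalisation factor."""
    F = len(fits)
    F_avg = sum(fits) / F
    return sum(((f - F_avg) / F) ** 2 for f in fits)

def chaotic_perturb(particle, mu=4.0):
    """One possible chaotic operation: a logistic-map sequence re-ranks the positions of
    a permutation-style particle, scattering prematurely converged particles while
    keeping each workpiece index the same number of times."""
    x = random.random()
    keys = []
    for _ in particle:
        x = mu * x * (1.0 - x)      # logistic map in the chaotic regime
        keys.append(x)
    order = sorted(range(len(particle)), key=lambda i: keys[i])
    return [particle[i] for i in order]

# Hypothetical 3 x 3 instance: each workpiece index appears once per operation.
swarm = [[1, 1, 1, 2, 2, 2, 3, 3, 3], [1, 2, 3, 1, 2, 3, 1, 2, 3]]
makespans = [42.0, 40.0]            # assumed outputs of decoding each particle
fits = [fitness(E_opt=1.0, T_total=90.0, makespan=m) for m in makespans]
if fitness_variance(fits) < 1.0:    # premature-convergence test used in the text
    swarm = [chaotic_perturb(p) for p in swarm]
print(fits, swarm[0])
```
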
## 4. Experimental Analysis

To verify the effectiveness and feasibility of flexible industrial job shop dynamic data scheduling based on digital twin technology, experiments were carried out comparing our approach with the methods in references [9, 10]. The dynamic data and parameters of the flexible industrial workshop were taken from a large machinery production workshop in the old industrial base in Northeast China. C++ and OpenCV 2.2 were used to build the simulation experiment environment. The experimental parameters are listed in Table 1.

Table 1: Experimental parameters.

| Project | Parameter |
| --- | --- |
| Online monitoring network | GSM network/GPRS network |
| System resource | 16 MB+ |
| Standard | 802.11b/802.15.1/802.15.4 |
| Network size | 1/32/7 |
| Bandwidth (kb/s) | 64–128+ |
| Transmission distance (m) | 1,000+ |
| Protocol | ZigBee |
| Frame length | 30 ms |
| Slot length | 2.25 ms |
| Duplex mode | FDD/TDD |
| Carrier bandwidth | 15 MHz |
| Multiple access mode | Direct spread |
| Uplink modulation mode | BPSK |
| Downlink modulation mode | QPSK |

The large-scale machinery production workshop has a large number of production lines, a large scheduling scale, and very strict requirements on product output. In this paper, 10 production lines are selected. With different numbers of workpieces and machines, the method of this paper and the methods of references [9] and [10] are used to schedule and adjust the production lines. The results are shown in Table 2.

Table 2: Comparison of the resource allocation results of the three scheduling algorithms.

| Production line | 1 | 2 | 3 | 4 | 5 |
| --- | --- | --- | --- | --- | --- |
| Number of workpieces | 20 | 30 | 20 | 25 | 30 |
| Number of machines | 20 | 15 | 20 | 20 | 20 |
| C** | 920 | 1,155 | 935 | 1,036 | 1,208 |
| Paper method, C* | 918 | 1,158 | 940 | 1,035 | 1,210 |
| Reference [9] method, C* | 928 | 1,165 | 955 | 1,064 | 1,231 |
| Reference [10] method, C* | 901 | 1,267 | 980 | 1,143 | 1,341 |

Note: C** in Table 2 represents the Pareto optimal solution, i.e., the optimal resource scheduling result in an ideal state, and C* represents the optimal solution obtained after 50 iterations of each method.

Table 2 shows that, among the three methods, the resource allocation result of our method after scheduling is closest to the Pareto optimal solution, which proves that the method can reasonably schedule tasks according to the number of workpieces and machines, keep the machines running, avoid outages caused by unreasonable resource scheduling, and save production time. Experiments were also carried out to verify the balance of the multi-production-line coordinated scheduling of the three methods, that is, to verify which algorithm can avoid an uneven division of labor in which some lines are idle while others are overloaded. The evaluation indexes are the maximum completion time and the production line load. The experimental results are shown in Figure 7, in which the solid lines represent the maximum completion time and the dotted lines represent the production line load.

Figure 7: Comparison of maximum completion time weights of the multi-production-line scheduling.

As can be seen from Figure 7, the peak maximum completion time appears in the method of reference [9], and the peak production line load appears in the method of reference [10]. The maximum completion time and production line load curves of these two algorithms fluctuate greatly and are generally high.
The maximum completion time and production line load curves of our method change gently. This shows that our method achieves a certain balance in the coordinated scheduling of multiple production lines, ensures a balanced division of labor across the production lines, and does not allow some lines to sit idle while others are overloaded. Combining the two evaluation indexes of maximum completion time and production line load, the income standard of the algorithm can be further evaluated, that is, the final income obtained by the machining enterprise by ensuring coordinated scheduling of the production lines. The calculation formula is

$$C_{MT}(a_i) = \min_{a_j \in B_s} \frac{M(a_j)}{M(a_i)} \times \min_{a_j \in B_s} \frac{C(a_j)}{C(a_i)}, \tag{9}$$

where $C_{MT}(a_i)$ represents the comprehensive income of method $a_i$; $B_s = \{b_1, b_2, b_3\}$ corresponds to the method in this paper, the method of reference [9], and the method of reference [10], respectively; $M$ represents the maximum completion time; and $C$ represents the production line load. We used formula (9) to calculate the comprehensive income of the three methods, and the results are shown in Figure 8.

Figure 8: Comparison of the comprehensive income of the three methods.

It can be seen from Figure 8 that the comprehensive income of our method is the highest and most stable, while the other two methods fluctuate greatly. This shows that the coordinated scheduling of multiple production lines by our method can maximize the production benefits of flexible industrial operations and yields good scheduling efficiency and scheduling ability. To sum up, the dynamic data scheduling method of the flexible industrial job shop based on digital twin technology has good scheduling performance.

## 5. Conclusions and Prospects

The innovation of the flexible industrial job shop dynamic data scheduling method based on digital twin technology is that it adopts digital twin technology to complete the dynamic data scheduling of the flexible industrial job shop with the goals of minimizing completion time and maximizing production line utilization. In the flexible industrial workshop, the dynamic data between multiple production lines has always been one of the research focuses. The following conclusions were obtained through this research:

(1) Because of the great complexity of the problem, this paper introduces a cloud computing mode to reduce the calculation difficulty and solves the problem of coordinated scheduling among multiple production lines from the perspective of case-based reasoning.

(2) The research shows that the proposed method can complete the coordinated scheduling among multiple production lines in the least amount of time.

The proposed method still has shortcomings, and follow-up research will focus on the following aspects:

(1) In the actual layout process, each working area of the facility operation buffer area of the flexible industrial workshop can take various shapes. In later research, this variable-shape problem needs to be considered and solved with appropriate algorithms.

(2) Constraints on export location and production location can subsequently be added to the all-factor digital information fusion model of the flexible industrial workshop to comprehensively analyze the impact of different constraints on the optimization results.

(3) Because the service lives of different products differ and product types are becoming increasingly complex, the layout optimization of the facility job buffer in a flexible industrial job shop could be studied further in the future.
--- *Source: 1009507-2022-08-03.xml*
2022
# Network Pharmacology-Based Systematic Analysis of Molecular Mechanisms ofGeranium wilfordii Maxim for HSV-2 Infection **Authors:** Hao Zhang; Ming-Huang Gao; Yang Chen; Tao Liu **Journal:** Evidence-Based Complementary and Alternative Medicine (2021) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2021/1009551 --- ## Abstract Background. Being a traditional Chinese medicine, Geranium wilfordii Maxim (GWM) is used for the treatment of various infectious diseases, and its main active ingredients are the polyphenolic substances such as polyphenols quercetin, corilagin, and geraniin. Previous studies have demonstrated the anti-HSV-1 viral activity of these three main ingredients. Through employing a network pharmacological method, the authors of the present research intend to probe the mechanism of GWM for the therapeutic treatment of HSV-2 infection. Methods. The bioactive substances and related targets of GWM were obtained from the TCMSP database. Gene expression discrepancy for HSV-2 infection was obtained from dataset GSE18527. Crossover genes between disease target genes and GWM target genes were gained via Circos package. Distinctively displayed genes (DDGs) during HSV-2 infection were uploaded to the Metascape database with GWM target genes for further analysis. The tissue-specific distribution of the genes was obtained by uploading the genes to the PaGenBase database. Ingredient-gene-pathway (IGP) networks were constructed using Cytoscape software. Molecular docking investigations were carried out utilizing AutoDock Vina software. Results. Nine actively involved components were retrieved from the TCMSP database. After taking the intersection among 153 drug target genes and 83 DDGs, 7 crossover genes were screened. Gene enrichment analysis showed that GWM treatment of HSV-2 infection mainly involves cytokine signaling in the immune system, response to virus, epithelial cell differentiation, and type II interferon signaling (IFNG). One hub, three core objectives, and two critical paths were filtered out from the built network. Geraniin showed strong binding activity with HSV-2 gD protein and STING protein in molecular docking. Conclusions. This network pharmacological study provides a fundamental molecular mechanistic exploration of GWM for the treatment of HSV-2 infection. --- ## Body ## 1. Introduction Genital herpes is a common sexually transmitted infection (STI) caused by herpes simplex virus type 2 (HSV-2) and represents a major health problem globally [1].HSV-2 frequently modulates the cytokine milieu of the microenvironment in favor of HIV-1 spread [2]. The available antiviral agents used in HSV-2 infections are those that are clinically approved for the general treatment of HSV-2 infections, such as acyclovir and famciclovir. Indeed, previous studies of HSV-2 infection indicated that the use of single nucleoside analogues is inadequate for effective control of virus replication, as the administered nucleoside analogues often exert significant selection pressure on the virus, leading to the rapid generation of escape mutants. While current therapies based on nucleoside analogues suppress viral replication and reduce progression of HSV-2 infection, treatment is lifelong and viral cure is extremely rare [3]. Therefore, to further optimize treatment, new effective drugs are highly warranted.GWM is a traditional Chinese medicine, and it contains geraniin, quercetin, corilagin, and so on [4]. 
Previous studies have demonstrated the anti-HSV-1 viral activity of these three main ingredients [5–7]. One study revealed a promising role of geraniin as an antiviral agent against HSV-2 infection with no apparent toxicity [8]. However, the mechanisms by which GWM inhibits HSV-2 infection remain unclear.Traditional Chinese medicine (TCM) has the characteristics of multitarget, multistep, and multilevel synergism [9]. Recently, network pharmacology becomes an important bioinformatics tool for identifying the mechanism of action of TCM [10]. In the present study, the network pharmacology approach was performed to further investigate the active ingredients and the underlying mechanism of GWM for the treatment of HSV-2. A flow diagram summarizing the different procedures of this study is shown in Figure 1.Figure 1 Workflow chart ofGeranium wilfordii Maxim in the treatment of genital herpes based on network pharmacology. ## 2. Materials and Methods ### 2.1. Bioactive Chemical Substance and Objective Genes of GWM Bioactive components and action targets of GWM were screened in the TCMSP website (old.tcmsp-e.com/tcmsp.php) [11]. Oral bioavailability (OB) ≥30% and drug-like properties (DL) ≥0.18 were the filtering criteria [12]. The active components were filtered using pharmacokinetic absorption, distribution, metabolism, and excretion guidelines (ADME) filter. Since the polyphenolic substance corilagin and geraniin are the main active ingredients of GWM, they are also listed. The active ingredient target genes were normalized in the UniProt database (uniprot.org). The structures of the active compounds were acquired at PubChem website (pubchem.ncbi.nlm.nih.gov). ### 2.2. DDGs in HSV-2 Infection The GSE18527 dataset in the GEO database (ncbi.nlm.nih.gov/geo) was created by Peng T et al. The dataset had 19 samples, consisting of 3 cases of pretreatment healthy skin, 4 cases of pretreatment diseased skin, 6 cases of diseased skin in the healing group, and 6 cases of healthy skin in the healing group. The screening criteria for distinctively displayed genes wereP<0.05 and logFC >4. The DDGs were established by the intersection of the two datasets group: pretreatment healthy skin and pretreatment lesioned skin group and posttreatment diseased skin and pretreatment lesioned skin group. ### 2.3. Intersect Target Genes Crossover genes between DDGs and GWM target genes were obtained using Circos software. ### 2.4. Gene Pathway and Functional Enrichment Analysis DDGs during HSV-2 infection and GWM target genes were uploaded to the Metascape database (metascape.org) for further analysis of relevant genes and functional enrichment. ### 2.5. Histospecific Gene Enrichment Analysis The distribution of genes was further analyzed after uploading DDGs during HSV-2 infection and GWM target genes to the PaGenBase database (bioinf.xmu.edu.cn/PaGenBase). ### 2.6. Enrichment Analysis of Transcription Factor Targets To assess the potential regulatory patterns of the most enriched and conserved transcripts, the DDGs during HSV-2 infection and GWM target genes were submitted to transcription factor (TF) enrichment analysis by using the TRANSFAC Predicted Transcription Factor Targets dataset (https://maayanlab.cloud/Harmonizome/dataset/TRANSFAC + Predicted + Transcription + Factor + Targets). The obtained TFs were sorted according to their average enrichment scores. The top 20 TFs of both sets of mRNAs were further evaluated to determine their coregulatory network. ### 2.7. 
Ingredient-Gene-Pathway (IGP) Network The IGP network was created by importing five intersecting active ingredients, seven intersecting genes, and the leading 17 KEGG pathways into Cytoscape software. The topological parameters such as degree centrality (DC), betweenness centrality (BC), and closeness centrality (CC) were utilized to evaluate the centrality features of nodes in IGP networks. ### 2.8. Molecular Docking Studies The HSV-2 gD protein, STING protein, and drug structures were downloaded from the PDB (pdb.org) and PubChem websites. Drug and HSV-2 gD protein and STING protein docking investigations were conducted in AutoDock Vina (version 1.1.2) [13]. The visualization of the docking performance was performed by PyMOL v.2.3 software, and the docking effect was evaluated using the affinity value (AV).
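To make the screening and intersection steps of Sections 2.2 and 2.3 concrete, the following is a minimal sketch in Python/pandas. The input file names and column names are hypothetical (the study used GEO dataset GSE18527 and the Circos package, neither of which is reproduced here), and taking the absolute value of logFC is an assumption consistent with the reported up- and downregulated genes.

```python
import pandas as pd

# Hypothetical inputs: per-gene differential-expression statistics exported for the two
# GSE18527 contrasts described in Section 2.2 (file and column names are assumptions).
pre_lesion_vs_healthy = pd.read_csv("gse18527_pre_lesion_vs_healthy.csv")   # gene, logFC, pvalue
post_vs_pre_lesion    = pd.read_csv("gse18527_post_vs_pre_lesion.csv")

def screen_ddgs(df, p_cut=0.05, lfc_cut=4.0):
    """Apply the screening thresholds stated in the methods (P < 0.05, |logFC| > 4)."""
    hits = df[(df["pvalue"] < p_cut) & (df["logFC"].abs() > lfc_cut)]
    return set(hits["gene"])

# DDGs are taken as the intersection of the two contrasts, as described in Section 2.2.
ddgs = screen_ddgs(pre_lesion_vs_healthy) & screen_ddgs(post_vs_pre_lesion)

# Crossover genes with the GWM target genes (Section 2.3); the gene-list file is hypothetical.
gwm_targets = set(pd.read_csv("gwm_target_genes.csv")["gene"])
crossover = sorted(ddgs & gwm_targets)
print(len(ddgs), len(crossover), crossover[:7])
```
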
## 3. Results

### 3.1. Bioactive Components and Drug Targets of GWM

Nine GWM bioactive chemicals were obtained from the TCMSP database: ellagic acid, sitosterol, kaempferol, furosin, ethyl brevifolincarboxylate, luteolin, quercetin, dehydrogeraniin, and corilagin. We downloaded the two-dimensional structures of the chemicals from the PubChem website (Table 1). 309 drug targets were acquired from the TCMSP website and converted to target genes in the UniProt website; after deleting repetitions, 153 target genes were retained as drug-targeting genes.

Table 1: Active ingredients and ADME parameters of Geranium wilfordii Maxim (GWM). (2D structures omitted; see PubChem.)

| No. | Molecule ID | Molecule name | Chemical formula | MW | OB (%) | DL |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | MOL001002 | Ellagic acid | C14H6O8 | 302.2 | 43.06 | 0.43 |
| 2 | MOL000359 | Sitosterol | C29H50O | 414.79 | 36.91 | 0.75 |
| 3 | MOL000422 | Kaempferol | C15H10O6 | 286.25 | 41.88 | 0.24 |
| 4 | MOL005067 | Furosin | C27H22O19 | 650.49 | 40.53 | 0.29 |
| 5 | MOL005073 | Ethyl brevifolin carboxylate | C15H12O8 | 320.27 | 30.86 | 0.33 |
| 6 | MOL000006 | Luteolin | C15H10O6 | 286.25 | 36.16 | 0.25 |
| 7 | MOL000098 | Quercetin | C15H10O7 | 302.25 | 46.43 | 0.28 |
| 8 | MOL005064 | Dehydrogeraniin | C41H28O28 | 968.68 | 59.57 | 0.01 |
| 9 | MOL005079 | Corilagin | C27H22O18 | 634.49 | 3.01 | 0.44 |

ADME: absorption, distribution, metabolism, and excretion.

### 3.2. DDGs in HSV-2 Infection

Finally, 40 upregulated genes and 43 downregulated genes were selected. The Venn diagram and heat map of the DDGs are shown in Figure 2.

Figure 2: (a) Volcano map of the differential genes between the pretreatment healthy skin and the pretreatment lesioned skin group. (b) Volcano map of the differential genes between the posttreatment lesioned skin and the pretreatment lesioned skin group. (c) Venn diagram of the intersection of the above two groups of differential genes. Up represents upregulated genes, and down represents downregulated genes.

### 3.3. Intersect Target Genes

Seven crossover genes (CXCL10, CXCL11, CXCL8, IL-6, IL-1β, MMP1, and SELE) were filtered from the GWM target genes and HSV-2 DDGs via the Circos package, as shown in Figure 3.

Figure 3: Red represents the 153 gene targets of Geranium wilfordii Maxim; blue represents the 83 genes significantly upregulated during HSV-2 infection; orange represents the intersection of the two groups of genes.

### 3.4. Gene Pathway and Function Enrichment Analysis

The enrichment analyses of the GWM target genes and the DDGs during HSV-2 infection were jointly clustered into 26 enrichment items, and the most significant ones included cytokine signaling in the immune system, response to virus, epithelial cell differentiation, and type II interferon signaling (IFNG), as shown in Figure 4.
These analyses suggested that GWM may treat HSV-2 infection by modulating cytokine signaling in the immune system, the process of cell state or activity changes due to viral stimulation, and the process by which unspecialized cells acquire specialized features of epithelial cells, the binding of IFNG to its receptor, and the subsequent phosphorylation cascade reaction involving the JAK and STAT protein families.Figure 4 Gene pathway and functional enrichment analysis. (a) Enrichment analysis of 153 genes that are targets for the action of GWM. (b) Enrichment analysis of 83 genes that are significantly altered during HSV-2 infection. ### 3.5. Histospecific Gene Enrichment Analysis PaGenBase database tissue-specific enrichment profiling indicated that the target genes of GWM were mainly concentrated in lung and smooth muscle tissues; cell-specific was brain cell. The DDGs of HSV-2 were mainly enriched in skin tissue, followed by lung and smooth muscle tissues. Cell-specific was NHEK (normal human epidermal keratinocytes), as shown in Figure5.Figure 5 Summary of histospecific gene enrichment analysis in PaGenBase. ### 3.6. Enrichment Analysis in Transcription Factor Targets Enrichment analysis of GWM target genes and HSV-2 DDGs transcription factors focused on STTTCRNTTT IRF Q6, ISRE 01, IRF1 01, IRF7 01, NFKAPPAB 01, and STAT 01, as shown in Figure6. This suggested that GWM may regulate the ability of target genes during the progression of HSV-2 infection through changes in the activity or expression of the above transcription factors, thereby controlling HSV-2 infection.Figure 6 Summary of enrichment analysis in transcription factor targets. ### 3.7. Ingredient-Gene-Pathway (IGP) Network The IGP network consists of 29 nodes (5 active ingredients, 7 intersecting genes, and 17 pathways) and 73 edges. In this network, we found that all 7 intersecting genes were related to quercetin, as shown in Figure7. It is inferred that quercetin could be the pivotal effective component of GWM for the treatment of HSV-2 infection. According to the topological analysis, IL-6, IL-1β, and CXCL8 are the pivotal genes. Toll-like receptor signaling pathway and cytokine-cytokine receptor interaction pathway are the key pathways in the IGP network. Among them, IL-6, IL-1β, and CXCL10 are associated with the cytosolic DNA-sensing pathway. Since IFNβ exerts a crucial function in the inhibition of HSV-2, HSV-2 has evolved multiple means to inhibit IFNβ expression to produce immune escape [14–16], and the cGAS-STING pathway is a key mechanism for IFNβ production [17]; it is hypothesized that GWM may act through IL-6, IL-1β, and CXCL10 in the cGAS-STING pathway as a key link in controlling HSV-2 infection.Figure 7 Ingredient-gene-pathway networks. Fuchsia triangles, aqua ellipses, and lime octagons stand for pathways of GWM, intersecting with target genes and active components, respectively. The bigger the shape of the graph, the larger the degree value of the node and the higher the role in the network. ### 3.8. Molecular Docking Studies Nectin-1 is a cell adhesion protein, and binding of Nectin-1 protein by HSV-2 gD protein is necessary for HSV-2 to enter infected cells [18]. Epigallocatechin gallate (EGCG) has been shown to bind directly to HSV-2 gD protein to exert its anti-HSV-2 infection effect [19]. 
We docked quercetin, corilagin, geraniin, and EGCG to the HSV-2 gD protein. The binding activities of the three GWM components were all superior to that of EGCG, with geraniin showing the strongest binding (−17.44 kcal/mol), as shown in Table 2. cGAS acts as the primary intracellular double-stranded DNA (dsDNA) sensor: it recognizes intracellular dsDNA and generates the second messenger cGMP-AMP (cGAMP), which is then sensed by the downstream adaptor STING (stimulator of interferon genes), leading to IFNβ production [20]. HSV-2 has evolved multiple strategies to counteract this pathway, inhibiting IFNβ production and evading host immunity [21]. Mangostin is an agonist of the STING pathway [22]. When quercetin, corilagin, geraniin, and mangostin were docked to the STING protein, the binding activities of the three GWM components were again all superior to that of mangostin, with geraniin showing the strongest binding (−11.71 kcal/mol), as shown in Table 2. These data suggest that quercetin, corilagin, and geraniin may affect the pathogenic process of HSV-2 by binding to the HSV-2 gD protein (Figure 8) and thereby interfering with its binding to the Nectin-1 receptor. Whether quercetin, corilagin, and geraniin act directly on the cGAS-STING pathway has yet to be experimentally verified. The STING docking results are shown in Figure 9.

Table 2: Docking scores of active ingredients of GWM with potential targets.

| Target | PDB ID | Compound | Affinity (kcal/mol) |
| --- | --- | --- | --- |
| HSV-2 gD | 4MYV | Quercetin | −7.92 |
| HSV-2 gD | 4MYV | Corilagin | −13.08 |
| HSV-2 gD | 4MYV | Geraniin | −17.44 |
| HSV-2 gD | 4MYV | EGCG | −6.88 |
| STING | 6NT5 | Quercetin | −7.1 |
| STING | 6NT5 | Corilagin | −11.65 |
| STING | 6NT5 | Geraniin | −11.71 |
| STING | 6NT5 | Mangostin | −4.52 |

Figure 8: Molecular docking diagrams (3D) of quercetin, corilagin, geraniin, and EGCG bound to the HSV-2 gD protein. (a) Quercetin-gD. (b) Corilagin-gD. (c) Geraniin-gD. (d) EGCG-gD.

Figure 9: Molecular docking diagrams (3D) of quercetin, corilagin, geraniin, and mangostin bound to the STING protein. (a) Quercetin-STING. (b) Corilagin-STING. (c) Geraniin-STING. (d) Mangostin-STING.
## 4. Discussion

Five effective chemicals, namely quercetin, corilagin, kaempferol, luteolin, and ellagic acid, were evaluated in the IGP network. Among them, quercetin exhibited the largest node value, indicating that it acts as a major element in the network; the sketch below illustrates how such node-centrality values are computed. In a meta-analysis, quercetin-type flavonols were noted to have antiviral activity and to significantly reduce the mortality of infected animals [23].
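The topological screening described in Section 2.7 can be reproduced with any graph library; the minimal sketch below uses networkx in place of Cytoscape. The edge list is a small illustrative subset of ingredient-gene and gene-pathway links, not the full 29-node, 73-edge IGP network.

```python
# Minimal sketch of the topological screening of Section 2.7, using networkx in place
# of Cytoscape. The edge list is a small illustrative subset of ingredient-gene and
# gene-pathway links, not the full 29-node, 73-edge IGP network.
import networkx as nx

edges = [
    # (active ingredient, intersecting gene)
    ("quercetin", "IL-6"), ("quercetin", "IL-1B"), ("quercetin", "CXCL8"),
    ("quercetin", "CXCL10"), ("luteolin", "IL-6"), ("kaempferol", "IL-6"),
    # (intersecting gene, pathway)
    ("IL-6", "Toll-like receptor signaling pathway"),
    ("IL-1B", "Toll-like receptor signaling pathway"),
    ("CXCL8", "Cytokine-cytokine receptor interaction"),
    ("CXCL10", "Cytosolic DNA-sensing pathway"),
]
G = nx.Graph(edges)

# Degree (DC), betweenness (BC), and closeness (CC) centrality for every node;
# in the full IGP network the highest-ranking gene nodes are the "pivotal" genes.
dc = nx.degree_centrality(G)
bc = nx.betweenness_centrality(G)
cc = nx.closeness_centrality(G)
for node in sorted(G, key=dc.get, reverse=True):
    print(f"{node:45s} DC={dc[node]:.2f}  BC={bc[node]:.2f}  CC={cc[node]:.2f}")
```

Nodes that bridge many ingredients and pathways receive the highest centrality scores, which is the sense in which genes such as IL-6 are described as pivotal.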
Three central genes were screened in the IGP network, namely IL-6, IL-1β, and CXCL8, all of which were linked to quercetin. We hypothesized that GWM may exert antiviral effects by regulating these targets during HSV-2 infection. IL-6 is produced when pathogen-associated molecular patterns (PAMPs) stimulate cells such as endothelial cells, smooth muscle cells, and immune cells, and it exerts a wide range of tissue effects [24]. IL-6 can protect mice from HSV-2-induced mortality [25]. Estradiol-treated mice exhibited earlier recruitment and a larger proportion of Th1 and Th17 effector cells in the vagina and better protection after HSV-2 infection compared with placebo-treated controls, and Th17 responses were abolished in IL-1β-knockout APC-T cells, suggesting that IL-1β is a crucial element in the induction of Th17 cells in the reproductive tract [26]. CXCL8, CXCL9, and CXCL10 were found to be expressed at high levels in both HSV-1 and HSV-2 CNS infections in one study [27].

We identified two crucial signaling pathways in the IGP network: the toll-like receptor (TLR) signaling pathway and the cytokine-cytokine receptor interaction pathway. The TLR9 pathway specifically recognizes unmethylated CpG motifs in dsDNA (CpG DNA) [28]. Studies have confirmed that pattern recognition receptor (PRR) ligands such as lipoproteins, CpG DNA, and cyclic dinucleotides can greatly limit HSV-2 replication, and TLR9 silencing also affects IL-6 secretion when cells are stimulated by HSV-2 or viral DNA [29]. According to the annotation in the KEGG database (genome.jp/kegg), the cytokine-cytokine receptor interaction pathway mainly involves the interaction between HSV-2 glycoproteins and chemokines such as CCL26, CCL28, CCL22, CCL25, CXCL9, CXCL10, CXCL11, and CXCL13.

These signaling pathways are therefore closely associated with HSV-2 infection, and GWM may contribute to the treatment of HSV-2 infection by modulating them. The constituents of GWM are highly complex, and because not all of them are included in the database, some constituents and their targets of action could have been overlooked.

## 5. Conclusions

The "multicomponent, multitarget, and multipathway" nature of GWM against HSV-2 infection was demonstrated in this study. Using a network pharmacology approach, we identified quercetin, acting on the targets IL-6, IL-1β, and CXCL10 through key signaling pathways (the toll-like receptor signaling pathway and the cytokine-cytokine receptor interaction pathway), as a key component in controlling HSV-2 infection. This work offers ideas for future research on the molecular mechanisms of GWM in the treatment of HSV-2 infection; the hypotheses generated here remain to be verified experimentally.

---

*Source: 1009551-2021-11-03.xml*
1009551-2021-11-03_1009551-2021-11-03.md
30,549
Network Pharmacology-Based Systematic Analysis of Molecular Mechanisms ofGeranium wilfordii Maxim for HSV-2 Infection
Hao Zhang; Ming-Huang Gao; Yang Chen; Tao Liu
Evidence-Based Complementary and Alternative Medicine (2021)
Medical & Health Sciences
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2021/1009551
1009551-2021-11-03.xml
2021
# Fluid Geochemistry within the North China Craton: Spatial Variation and Genesis

**Authors:** Lu Chang; Li Ying; Chen Zhi; Liu Zhaofei; Zhao Yuanxin; Hu Le
**Journal:** Geofluids (2021)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2021/1009766

---

## Abstract

The North China Craton (NCC) is a typical representative of a destroyed ancient craton. Numerous studies have shown that extensive destruction of the NCC occurred in the east, whereas the western part was only partially modified. The Bohai Bay Basin lies at the center of the destruction area in the eastern NCC. Chemical analyses were conducted on 122 hot spring samples taken from the eastern NCC and the Ordos Basin. The δ2H and δ18O in water, δ13C in CO2, and 3He/4He and 4He/20Ne ratios in gases were analyzed in combination with chemical analyses of water in the central and eastern NCC. The results showed an obvious spatial variation in the chemical and isotopic compositions of the geofluids in the NCC. The average temperature of spring water in the Trans-North China Block (TNCB) and the Bohai Bay Basin was 80.74°C, far exceeding that of the Ordos Basin (38.43°C). The average δD values in the Eastern Block (EB) and the TNCB were −79.22‰ and −84.13‰, respectively. The He isotope ratios (R/Ra) in the eastern region (TNCB and EB) ranged from 0.01 to 2.52, and the mantle contribution to He ranged from 0 to 31.38%. δ13C ranged from −20.7‰ to −6.4‰, indicating an organic origin. The chemical compositions of the gases in the EB showed that N2 originated mainly from the atmosphere. The EB showed the gas characteristics of a typical subduction zone, whereas the TNCB was found to have relatively small mantle sources. The reservoir temperatures in the Ordos Basin and the eastern NCC (EB and TNCB) calculated with the K-Mg geothermometer were 38.43°C and 80.74°C, respectively. This study demonstrated a clear spatial variation in the chemical and isotopic compositions of the geofluids in the NCC, suggesting the presence of geofluids from magmatic reservoirs in the middle-lower crust and indicating that active faults played an important role in transporting mantle-derived components upwards.

---

## Body

## 1. Introduction

A craton is characterized by a thick lithospheric mantle, a cold geotherm, low density, and high viscosity, characteristics that protect it from destruction by later geological processes [1]. Cratons are an important geological unit on the surface of the Earth and cover ~50% of the area of the continental crust [2]. The North China Craton (NCC) is an ancient craton that has attracted much attention because its lithosphere shows signs of severe disturbance or reactivation in some regions, resulting in significant losses or modifications of the mantle root (e.g., [3–9]). Application of the S-wave receiver function showed that the eastern NCC has experienced extensive damage and that thinning of the lithosphere in the east and central parts of the NCC has exceeded that in the west by 60 to 100 km (Chen et al., 2009). Mineral inclusions in diamonds originating from the kimberlites of Shandong and Liaoning provinces, China, indicate the existence of a 200 km thick lithosphere 470 Ma ago [10, 11]. In addition, mantle-derived inclusions in Cenozoic basalts indicate a lithospheric thickness of 80–120 km [12].
A geothermal evolution model indicated that the lithosphere experienced two thinning episodes, during the Cretaceous and the Paleogene. The lithosphere thinned from the early Mesozoic to the early Cretaceous, reducing in thickness from 150 km to 51 km, after which it thickened to ~80 km. It thinned again during the middle and late Paleogene, reducing in thickness to 48 km, consistent with the present thickness beneath the Bohai Bay fault depression, before thickening once more to 78 km [13]. A thermal simulation model indicated a mantle heat flow of 24–44 mW·m−2 in the eastern NCC, exceeding the 21.2–24.5 mW·m−2 of the western NCC [14].

The study of geothermal fluids can improve understanding of the geological significance of geothermal genesis, reservoir temperature, heating mechanism, circulation depth, and recharge source [15–18] and can also serve as a basis for exploring mantle-derived material and the shallow response to the geodynamic processes of plate subduction. The isotopic composition of gas is widely used to study deep structural changes and transport mechanisms in cratons (e.g., [19–24]). Zhang et al. [25] analyzed the He and C isotope ratios of hot spring gases in the TNCB within the NCC; by combining these data with P-wave velocities and time-averaged fault slip rates, they concluded that mantle volatiles are generated in the upwelling asthenosphere and then rise through faults and fractures whose permeabilities are controlled by slip rates. In a study comparing the geochemical characteristics of helium and CO2 in cratonic and rift basins in China, Dai et al. [26] determined that gas samples collected from the cratonic basins have lower CO2 contents and R/Ra ratios than those from the rift basins, and that gas samples from the rift basins show a larger range of variation in δ13C of CO2, implying stronger tectonic activity. Xu et al. [27] determined that the helium found in fluids collected from the crust in Liaodong (EB) is derived from the mantle and that active faults play an important role in transferring mantle-derived components to the surface in nonvolcanic regions.

However, detailed studies of mantle-derived components within the NCC remain limited [27–29]. Therefore, a study of the spatial differences in fluid chemistry and isotopic signatures of subduction zones and their genesis is significant. The present study collected samples of geothermal gases and water from the Ordos Basin, the TNCB, and the EB. The chemical compositions of the water and gas samples were measured, and the δD and δ18O compositions of the water samples were determined. The gases collected from geothermal wells were analyzed for 3He/4He and 4He/20Ne ratios and for δ13C of CO2. The study aimed to identify the spatial variations in the chemical and isotopic compositions of geothermal wells in the NCC and to analyze the sources and genesis of these characteristics. The reservoir temperature and the contribution of mantle-derived helium were determined. The results confirmed the presence of mantle-derived components in the EB and TNCB gases attributable to magmatism and active faults.

## 2. Geological Settings

The NCC consists of two main crustal blocks, namely, the Western Block (WB) and the Eastern Block (EB).
These two crustal blocks are joined together by the Trans-North China Orogen (Figure 1(a)) [30]. The present study collected gas samples from the TNCB and the EB of the NCC (Figure 1), whereas geothermal water samples were collected from across the NCC (Figure 2).

Figure 1: Distribution of gas samples collected from the North China Craton (NCC): (a) the subzones of the NCC, modified after Zhao et al. [31]; (b) the geological map of the Eastern Block (EB) and the Trans-North China Block (TNCB) showing faults and topography.

Figure 2: Interpolation map of K-Mg geothermometer temperatures in the North China Craton (NCC); the numbers and names of the sampling sites are listed in Tables 1 and 2. The interpolation uses Inverse Distance Weighting (IDW).

The Bohai Bay Basin in the EB of the NCC is an important basin in China owing to its geothermal resources, oil, and gas. The basin lies adjacent to the Jiaoliao fault-uplift area in the east, the Liaohe Depression in the north, the Jiyang Depression in the south, and the Huanghua Depression in the west [32, 33]. The Bohai Bay Basin is a Meso-Cenozoic rift basin superposed on Carboniferous-Permian coal-bearing basins and lying on a basement of Mesoproterozoic, upper Proterozoic, and Paleozoic cratonic rocks. The basin has experienced the Indosinian, Yanshanian, and Himalayan movements and is characterized by active tectonic movements, numerous faults, and strong volcanic activity [34–36]. The study area shows a complex structure, well-developed active faults, and frequent earthquakes. The main direction of stress in the study area is northeast-east, resulting in deep fault cuts [37]. Abundant low- to medium-temperature pore-type geothermal resources and fractured bedrock occur in the region. Geothermal reservoirs are mainly found in the upper and lower Tertiary sandstone reservoirs and particularly in Paleozoic and Proterozoic carbonate reservoirs [15, 38].

The basin-range tectonic zone of Shanxi Province falls within the North China orogenic belt. The BRPWB is characterized by northeast-east to southwest-west active normal faults with a dextral strike-slip component owing to its location at the north end of the S-shaped rift system [25]. The region has experienced many tectonic events and has a distinctive topography of alternating basins and mountains, as well as a geological background of extensional faulting along the mountains and within the basins. The counterclockwise movement of the EB and the WB of the NCC since the Paleogene has resulted in the formation of the basin-ridge structure [39] in the Trans-North China Orogen. Cenozoic basalts containing abundant mantle xenoliths were found in outcrops of the Yangyuan and Datong subbasins in the EB [40, 41]. The age of geothermal water exposed at fault junctions in the Yanhuai Basin was determined to be 30 ka, the reservoir temperature was determined to be ~100°C, and mantle-derived helium was detected in the fluids [42]. These findings suggest that intensive magmatic activity and recharge of mantle-derived material may have occurred in the region.

The Ordos Basin, in central-northern China, is an intracratonic depression basin. This typical inland basin represents one of the most tectonically stable areas in China and has an area of over 250,000 km2 [26]. The basin is underlain by the Archean granites and lower Proterozoic greenschists of the North China block.
The southwestern region of the basin contains Paleozoic-Cenozoic sedimentary rocks more than 8 km thick [43]. Six secondary structures occur in the basin [26]: (1) the Yishan slope, (2) the Tianhuan depression, (3) the Yimeng uplift, (4) the Weibei uplift, (5) the Jin-West flexural fold zone, and (6) the fault-fold zone along the western margin. The basin has undergone multiple tectonic movements while retaining a stable internal structure, with overall uplift playing a key role [44]. The Fuping block collided with the Western Block between 1.90 and 1.85 Ga and subducted westward to form the North China orogenic belt, accompanied by many magmatic events [45, 46]. The sedimentary sources are complex, and the basin fill thickened after the Middle Proterozoic [47].

## 3. Methods

A total of 123 geothermal water samples were collected from July to August 2016, consisting of 46 samples from the TNCB and EB and 77 samples from the Ordos Basin. Figures 1 and 2 show the sampling locations. Samples were collected in HDPE bottles that had been cleaned by ultrasonic cleaning and by soaking in ultrapure water for 24 h and then dried. Samples for the analysis of chemical compositions and isotopes (H and O) were collected in 250 mL and 2 mL HDPE bottles, respectively. All samples were filtered on-site three times through a 0.45 μm membrane filter. Prior to sampling, each bottle was rinsed three times with water from the sample source, and once a sample was collected, the bottle was sealed with parafilm. Samples for the analysis of major cations were acidified with ultrapurified HNO3 (1 mol L−1) to a pH below 2; filtered, unacidified samples were used for anion analysis. Tables 1 and 2 show the results of the water chemistry and isotopic analyses.

Table 1: Sampling locations, water temperatures, isotopes (H and O), and reservoir temperatures in the Eastern Block and the Trans-North China Block of the North China Craton.
Well no.Longitude (°E)Latitude (°N)Na+ (mg/L)K+ (mg/L)Mg2+ (mg/L)Ca2+ (mg/L)HCO3− (mg/L)Cl− (mg/L)SO42− (mg/L)IB (%)TK‐Mg (°C)T (°C)δ18O (‰)δD (‰)Well depth (m)Guanghegu117.1738.89674.16.87.440.1261.3600.0773.5-3.960.951.0-9.0-71.7—Liyuantou117.1939.03642.65.72.826.9303.6532.7670.9-3.767.856.7-9.2-72.3—Wanjia117.3338.83585.757.07.133.1449.6663.8334.0-3.5115.862.9-8.7-70.8490Longda117.5938.97554.15.02.213.0564.9434.5256.9-2.267.650.5-8.9-71.21800Quanshuiwan117.3339.29306.12.30.95.8626.4119.815.1-1.060.351.6-9.2-72.51500Yongchuan117.3539.25406.177.111.433.9399.7390.1310.8-2.1117.781.2-8.8-71.53000Xinli117.7939.28144.51.10.26.1314.921.733.6-1.857.749.3-9.6-72.21000Xiawucun117.7839.27149.41.20.35.7330.523.432.50.058.053.0-9.5-72.0—Luqiancun117.7839.29110.61.30.15.0284.413.610.4-1.171.644.9-9.6-71.8—Zunhua 1117.7640.21211.46.40.522.6126.832.6364.0-1.191.046.1-10.3-74.0—Zunhua 2117.7640.21198.76.20.317.899.933.4333.0-0.597.941.4-10.3-73.7—Xiyuan118.0639.34179.20.90.811.0292.131.5130.10.041.640.2-9.6-71.2—Jidong 1118.4539.28275.53.91.424.5215.275.9396.6-1.167.046.4-9.4-71.5—Jidong 2118.3439.18609.76.83.866.8222.9560.6739.6-3.968.350.5-8.3-70.81500Caofeidian118.6839.16321.72.20.517.2522.3122.0143.6-2.366.350.4-9.2-70.7—Xiangyunwan118.9839.18630.34.11.34.01437.3216.10.0-1.969.356.3-9.0-70.81800Changsheng118.1739.82555.74.30.87.4822.4471.70.0-2.776.662.9-8.0-69.41600Baodiwang 4117.3439.55198.850.66.938.8384.392.9163.2-0.2112.757.5-9.7-73.53000Lizigu117.3439.52220.256.37.234.8384.3124.4177.9-0.8115.292.6-9.1-72.13000Dijing117.3639.54201.753.37.734.1384.392.3168.1-0.4112.682.2-8.7-71.2—Jixian117.4240.04115.317.617.822.2426.614.532.2-0.872.527.5-9.8-69.3—Xinzhou 1112.6338.54209.65.10.415.688.4198.3136.6-0.989.846.3-10.5-77.3—Xinzhou 2112.6238.54210.77.10.112.992.2195.4124.3-0.4111.256.9-10.4-77.1—Xinzhou 3112.7038.49256.66.81.8155.976.9150.4769.4-3.177.146.0-10.7-80.3—Dingxiangtou112.8338.59424.49.40.5346.611.5471.51510.1-5.9103.050.5-11.3-85.4—Yangouxiang112.7938.95429.06.80.2100.223.0510.9595.8-4.0108.540.3-11.6-87.0—Hunyuan113.9439.41260.39.40.215.661.3141.1326.8-1.5114.463.8-11.3-88.2—Yanggao 1113.8240.42109.95.22.418.1165.264.156.20.667.539.6-10.6-79.7172Yanggao 2113.8240.42114.64.22.514.4153.763.662.20.962.139.7-10.7-81.6203Yanggao 3113.8240.4187.14.63.419.3149.938.244.11.260.545.9-11.0-81.6—Tianzhen 1114.0440.44233.27.56.824.5368.9103.2147.3-0.263.943.8-9.9-80.5504Tianzhen 2114.0440.43271.89.45.525.0380.5145.2178.8-0.871.742.8-9.8-81.5100Yuxian114.4439.8046.53.640.572.5361.226.581.10.231.111.4-10.0-74.5—Sanmafang114.5940.21326.08.922.752.1330.5269.4340.3-1.854.639.4-11.8-88.5180Yangyuan114.5940.21328.49.122.749.2322.8272.2343.9-1.955.139.1-11.8-88.6—Huailai 1115.5440.34292.910.20.220.449.884.4529.5-2.3121.675.0-11.6-89.3—Huailai 2115.5440.34283.19.40.423.373.086.7499.4-1.9104.347.2-11.6-88.0—Huailai 3115.5340.34262.87.30.417.2111.487.1396.7-1.198.447.2-10.7-83.2288Huailai 4115.5340.34256.78.20.217.672.873.1433.9-2.4109.766.0-11.7-88.8500Baimiaocun115.4040.66179.93.82.012.6134.534.2238.4-0.162.339.6-11.5-86.1—Dongwaikou116.0840.96232.86.90.216.661.367.1366.8-1.5109.956.0-11.5-87.6—Chicheng115.7440.90203.08.20.832.4115.329.0403.5-1.791.557.8-12.0-89.2—Shengshi115.9940.4688.413.715.147.4299.846.251.30.968.451.4-11.5-84.3100Songshan115.8240.51143.62.80.29.657.335.6177.2-2.481.934.9-11.9-86.7205Wuliying115.9340.48103.78.011.236.2338.216.468.1-0.559.829.3-11.8-86.0553Jinyu115.9740.4885.513.015.246.8299.844.448.40.867.242.8-11.5-84.2—Average276.812.15.136.4283.8172.2283.078.4— represent natural hot 
springs and self-flowing wells or well depth being not available.Table 2 Sampling locations, water temperatures, reservoir temperatures, and chemical compositions in the Western block and the Trans-North China Block of the North China Craton. Well no.Longitude (°E)Latitude (°N)Na+ (mg/L)K+ (mg/L)Mg2+ (mg/L)Ca2+ (mg/L)HCO3- (mg/L)Cl- (mg/L)SO42- (mg/L)IB (%)TK‐Mg (°C)T (°C)Shangwangtai106.9634.56189.15.31.437.8169.034.0399.0-1.973.728.5Qianchuan107.1534.6434.77.222.671.9637.011.221.12.150.214.8Shuigouzhen106.9934.748.11.015.560.4452.03.413.12.1—17.1Shenjiazui106.8934.8522.43.320.3100.0634.012.950.50.735.814.8Chaijiawa106.7734.945.31.323.555.8518.02.47.61.4—17.9Shilitan106.4035.4317.01.420.071.4601.09.437.2-1.8—10.2Dayuanzi106.3935.4653.22.450.098.3645.021.2257.2-1.0—11.4Fujianchang106.6835.5320.41.721.458.8455.05.734.63.3—12.2Liuhu106.6735.55134.25.944.261.8660.077.9223.30.339.518.5Beishan 1106.7035.5684.81.225.933.9619.015.944.62.3—13.7Beishan 2106.6835.56138.51.250.936.1809.023.4183.41.1—12.9Baiyunsi 2106.2435.6029.61.534.766.3508.03.7151.1-0.3—12.1Baiyunsi 1106.2535.6141.33.039.566.5445.06.2212.90.127.810.9Baiyunsi 3106.2535.6131.31.734.752.1436.03.3129.01.3—17.3Dongshanpo106.2835.62262.61.90.72.41219.023.44.5-0.759.410.2Anguo 1106.5735.62148.86.045.064.5713.077.7235.00.539.622.3Anguo 2106.5735.62170.16.943.961.1688.097.2269.7-0.242.624.4Longde106.1335.6268.81.837.260.3746.018.9101.1-0.1—13.8Hongjunquan106.1835.6752.52.036.279.9545.03.5228.5-0.7—10.2Heshangpu106.2335.6822.42.233.558.2519.06.7111.0-1.2—11.7Beilianchi106.1835.7425.51.831.144.7475.02.666.81.6—20.0Lianchisi106.1835.7423.71.324.538.8420.02.450.41.3—16.7Fuxiya106.1735.754.31.81.231.6199.01.24.10.052.019.0Hongtai105.7935.76731.85.8199.4142.7689.0256.61754.92.525.313.0Wangminjing105.7435.801020.99.23.218.3606.0357.61313.21.777.511.9Pengyang106.6335.85134.14.552.762.1684.078.3240.70.333.018.2Xiangyang106.4035.95618.92.10.70.01131.031.71002.4-1.860.322.4Choushuihe 3106.0636.016861.925.2122.7264.31006.05057.411260.7-4.558.717.1Xiaokou 2106.0836.0119817.0116.128.2138.12580.07759.830847.2-0.3116.417.6Xiaokou 1106.0836.0224462.5156.590.7220.11508.011018.937317.9-0.4108.18.8Chaigou 2105.8836.07135.12.755.6108.5717.022.4484.7-1.6—9.0Chaigou105.8936.08108.52.453.272.4632.013.8297.81.4—12.9Heiyanquan105.8836.0894.52.546.682.9731.014.9280.3-1.1—10.4Choushuihe 1106.0436.1350.73.063.0101.0857.06.6264.3-1.0—17.1Choushuihe 2106.1736.148210.027.1144.9340.01095.06007.213942.9-4.858.517.8Hongyang105.6436.26300.33.592.7499.1112.0152.02580.0-6.3—11.0Zhengqi105.9636.45901.45.6204.3279.7145.01918.41420.9-4.7—12.4Xiaoshanquan105.6036.5015.74.332.659.8523.06.869.11.336.613.1Shuangjing106.2536.593598.0184.191.0318.81581.0—9313.4-3.0112.725.1Ganyanchi105.2336.67622.16.8107.011.01524.0735.1846.6-8.434.016.2Yaoxian105.1737.41510.310.8117.3133.1435.0660.31064.6-3.741.817.7Shuitaocun106.3137.46452.44.5146.1123.2478.0695.8900.4-3.0—13.9Nitanjing105.2037.46602.37.39.248.229.0629.6474.41.460.119.4Nitanquan105.2037.46867.25.435.628.5608.0404.71516.4-4.039.915.8Daquan106.3437.97105.41.216.114.4248.072.890.92.1—17.6Miaoshan 1105.8538.03273.95.545.058.6540.0267.6337.6-0.838.115.4Miaoshan 2105.8638.03258.35.543.556.7561.0246.2317.8-1.038.417.3Hongshitou 1105.6738.76116.22.741.968.0334.0174.2186.9-0.125.712.6Hongshitou 2105.6838.77160.75.034.158.1425.0148.2238.6-0.638.920.9Dashuigou106.1538.8916.22.014.268.4346.013.359.92.029.712.8Longquansi106.2838.9653.62.819.553.4358.052.8105.5-0.932.916.4Jianquan106.4839.08101.16.3123.0117.7662.080.3650.6-1.031.118.4Shitanjing 
2106.3139.18127.27.449.5205.2349.092.9857.6-4.142.823.9Shitanjing 1106.3139.1891.34.338.5150.7337.076.2534.2-2.634.913.5Diyan106.9339.6449.91.924.785.2305.069.0152.61.6—16.8Subeigou106.9539.6783.13.729.768.9343.080.2155.81.534.315.4Qianligou 1106.9939.86122.14.239.177.6405.0107.5248.50.834.512.5Qianligou 2106.9839.86130.13.939.472.1354.0122.1238.71.432.919.1Dahuabei109.4140.739.72.29.568.0426.09.337.1-0.335.512.9Hongqicun109.3340.7315.41.916.680.6514.013.634.71.027.211.4Xishanzui108.7340.7480.04.850.7153.7351.0134.5270.90.334.414.6Aguimiao106.4240.7450.03.527.562.3379.048.2109.50.734.015.1Zhaoer106.5440.8864.617.235.1122.8465.092.9241.4-0.464.219.2Chendexi106.5340.8871.629.837.4126.8454.0125.4248.3-0.276.216.7Shimen106.5640.8875.513.442.899.9443.081.6303.4-1.156.625.8Chahangou106.5740.8828.71.77.345.9277.016.643.70.833.017.2Buerdong106.5740.8968.916.237.4118.7443.092.3306.7-0.862.127.8Shaotoushan109.1441.1269.52.659.161.5589.056.2165.61.9—17.3Dongsheng 1107.0441.1340.314.035.984.8463.052.5151.0-0.159.413.3Dongsheng 2107.0341.1443.815.367.756.3730.032.4112.51.154.414.8Xiliushu107.9341.2842.42.215.658.8401.019.173.10.130.814.7Hulusitai107.7941.2882.93.316.364.2528.046.9101.60.437.914.5Yangguangcun108.2941.2921.43.711.864.1380.012.632.00.843.314.4Xiremiao 2108.6641.5451.02.128.586.4658.040.178.80.7—23.7Xiremiao 1108.6641.5450.22.627.272.5553.038.478.50.928.718.6Xiremiao 3108.6641.5444.41.628.369.8572.035.775.00.0—16.5Hailiutu108.5141.5954.74.822.467.4402.039.469.71.642.217.2Average965.711.145.392.0589.3505.01645.947.4— represents spring water not suitable for a cation temperature scale.The water temperatures of springs were measured using a thermometer with an accuracy of 0.1°C. Water chemistry analyses were performed in the Key Laboratory of Earthquake Prediction, Institute of Earthquake Science, China Earthquake Administration, using a DionexICS-900 ion chromatograph with an ion detection limit 0.1 mg L−1. Calibration for the analysis was achieved using standard samples from the National Institute of Metrology, China. A mixed solution of NaHCO3 and Na2CO3 was used as the anion eluent, whereas a methane sulfonic acid solution was used as the cationic eluent. The titration method with an error less than 5% was used for analyzing CO32− and HCO3−, phenolphthalein and methyl orange were used as indicators, and the test error of the concentration of HCl was 0.08 mol L−1. Oxygen and hydrogen isotope analyses were performed in the Water Isotope and Water-Rock Interaction Laboratory at the Institute of Geology and Geophysics, Chinese Academy of Sciences, using a laser absorption water isotope spectrometer analyzer (L1102-I, Picarro) which used wavelength scanning optical cavity ring-down spectroscopy (WS-CRDS) technology. Analysis of δ18O and δD used the Vienna Standard Mean Ocean Water (V-SMOW) as the standard. The analytical precision of δ18O and δD measurements was ±0.1‰ and ±0.5‰, respectively [37].The quality of the constant elements of hot spring and geothermal water was assessed using theib value [48], with the range of results within ±10%: (1)ib%=∑cations−∑anions0.5×∑cations+∑anions×100.A total of 26 gas samples were collected from TNCB and EB. The gas samples were collected using the water displacement method.The present study used 500 mL AR glass containers, with the glass of the soda lime type containing a high portion of alkali and alkaline earth oxides with a very low permeability for helium [49]. The glass containers were initially immersed in corresponding geothermal water. 
The bottles were filled with spring water, following which funnels allowed displacement of the water with gas. After the gas reached two-thirds of the volume of the bottle, each bottle was forcefully sealed using a solid trapezoidal rubber plug and adhesive plaster [50]. Samples were analyzed within 14 days after collection to avoid the leakage of volatiles.

The chemical compositions of the gas samples were analyzed using a Finnigan MAT-271 mass spectrometer with a precision of ±0.1% at the Key Laboratory of Petroleum Resources Research, Institute of Geology and Geophysics, Chinese Academy of Sciences. The helium and neon isotopes of the gases were detected using a MM5400 mass spectrometer at the Institute of Geology and Geophysics, Chinese Academy of Sciences. The carbon isotopes were measured with the MAT-253 gas isotope mass spectrometer of the Beijing Research Institute of Uranium Geology, with a precision of ±0.1‰ [51].

Rc/Ra is the air-corrected 3He/4He ratio, calculated using

(2) Rc/Ra = (R/Ra × X − 1) / (X − 1),

(3) X = [(4He/20Ne)measured / (4He/20Ne)air] × (βNe/βHe).

β is the Bunsen solubility coefficient, which represents the volume of gas absorbed per volume of water at the measured temperature when the partial pressure of the gas is 1 atm [52]; assuming a recharge temperature of 15°C, βNe/βHe = 1.21. HeM is the mantle contribution to the total helium content, obtained from

(4) Rc/Ra = (R/Ra)crust × (1 − HeM) + (R/Ra)mantle × HeM.

According to [53], (R/Ra)mantle = 8; according to [54], (R/Ra)crust = 0.02.

## 4. Results

The average temperatures of spring water in the Bohai Bay Basin, the TNCB, and the Ordos Basin were 55.0°C, 46.1°C, and 19.4°C, respectively. The geothermal water in the Ordos Basin comprised natural hot springs or artesian wells, and there are no data for the depth of the wells. The gas components in the geothermal water were mainly N2, O2, Ar, CH4, and CO2. N2 was the predominant gas component in all samples. The concentrations of heavier hydrocarbons, H2S, SO2, and H2 fell below their respective detection limits. The concentrations of N2 and O2 in the nitrogen-rich hot springs ranged over 69.42–98.52% and 0.07–18.57%, respectively. The concentration of Ar ranged between 0.92 and 1.47%, similar to that of air. The concentration of CO2 ranged between 0.01 and 7.91%, the upper end of which is roughly two orders of magnitude higher than that in air (0.03%). The concentration of CH4 was low, ranging from 0 to 16.13%; it was below 1% in all samples except those of the Baodi, Lizigu, and Dijing springs.

The gas isotope data showed that the 4He/20Ne ratios ranged from 3.11 to 1,647, far exceeding the atmospheric value of 0.318. The R/Ra values of 3He/4He ranged from 0.01 to 2.52, representing typical mixtures of crust and mantle sources or radiogenic gas. The δ13C values of the samples ranged from −20.7‰ to −6.4‰.

## 5. Discussion

### 5.1. Helium and Neon Isotopes

Noble gases within the crust originate from three main sources [55]: (1) the atmosphere, introduced into the crust through groundwater recharge; (2) the mantle, from regions of magmatic activity; and (3) gases produced in the crust as a result of radioactive decay processes. The helium and carbon isotope ratios are sensitive tracers of gas sources due to the differing isotope ratios of the atmosphere, crust, and mantle [25]. Helium is mainly derived from the atmosphere, crust, and mantle. The mixture of crust and mantle sources can be determined from the R/Ra ratio. The 3He/4He of atmospheric helium is 1.4 × 10−6, known as Ra [56].
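To make the crust-mantle mixing arithmetic concrete, the short Python sketch below applies Equations (2)–(4) from the Methods to one tabulated measurement. The helper names and the worked example are illustrative only; the endmember values (R/Ra)mantle = 8, (R/Ra)crust = 0.02, and βNe/βHe = 1.21 are those quoted above, and the sample values are taken from Table 3.

```python
# Sketch of the air correction (Eqs. (2)-(3)) and the crust-mantle mixing
# estimate of Eq. (4), using the endmember values quoted in the text.
# Helper names and the example input are illustrative, not the paper's code.

AIR_HE_NE = 0.318            # atmospheric 4He/20Ne ratio quoted in the Results
BETA_NE_OVER_BETA_HE = 1.21  # Bunsen solubility ratio at an assumed 15 degC recharge
R_RA_MANTLE = 8.0            # (R/Ra) of the mantle endmember [53]
R_RA_CRUST = 0.02            # (R/Ra) of the crustal endmember [54]


def air_corrected_r_ra(r_ra_measured: float, he_ne_measured: float) -> float:
    """Eqs. (2)-(3): remove the atmospheric helium component."""
    x = (he_ne_measured / AIR_HE_NE) * BETA_NE_OVER_BETA_HE
    return (r_ra_measured * x - 1.0) / (x - 1.0)


def mantle_helium_fraction(rc_ra: float) -> float:
    """Eq. (4) solved for HeM: Rc/Ra = (R/Ra)crust*(1-HeM) + (R/Ra)mantle*HeM."""
    return (rc_ra - R_RA_CRUST) / (R_RA_MANTLE - R_RA_CRUST)


if __name__ == "__main__":
    # Example: sample G14 (Jinyu), R/Ra = 2.52 and 4He/20Ne = 96.07 (Table 3).
    rc_ra = air_corrected_r_ra(2.52, 96.07)
    he_m = mantle_helium_fraction(rc_ra)
    # Prints Rc/Ra close to 2.52 and a mantle contribution of about 31%,
    # in line with the HeM value tabulated for G14.
    print(f"Rc/Ra = {rc_ra:.2f}, mantle helium contribution = {he_m:.1%}")
```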
Previous studies on the R/Ra ratio of crustal helium provided slightly different but consistent values. For example, Poreda et al. [57] obtained a crustal R/Ra of 0.013–0.021, whereas Mamyrin and Tolstikhin [56] found that the crustal Ra was generally <0.05. The R/Ra of the mantle source generally exceeds 5; i.e., the R/Ra of 3He/4He is 7.9 [58], whereas White [59] obtained an R/Ra of 8.8±2.5 for the upper mantle and 5–50 for the lower mantle. The R/Ra of the WZ13-1-1 natural gas well in the rift basin of the East China Sea is 8.8, which is the maximum Ra of any sedimentary basin in China [60, 61].Table3 shows the different ratios of 3He/4He. The G1, G2, G3, G4, and G8 sampling points were characterized by helium isotopes from the crust with a R/Ra<0.1. These sampling points were in the TNCB and close to the Ordos Basin. Other sampling points located in EB and TNCB showed different degrees of mantle-derived helium supply (Figure 3), consistent with the results of Zhang et al. [25].Table 3 Chemical and isotopic compositions of hot spring gases in the Eastern block and the Trans-North China Block of the North China Craton. No.Sampling siteLongitude (°E)Latitude (°N)N2 (%)O2 (%)Ar (%)CO2 (%)CH4 (%)4He (%)4He/20NeR/RaRc/RaHeM (%)δ13CPDB (‰)Tectonic unitG1Qicun112.6238.5496.440.601.430.030.031.47319.360.040.040.24-14.3TNCBG2Duncun112.7038.4995.720.851.380.13—1.92734.050.030.030.12-18.2TNCBG3Tangtou112.8338.5992.122.721.470.070.053.58971.090.030.030.12-17.5TNCBG4Yangou112.7938.9591.253.311.410.010.033.991647.600.070.070.62-18.6TNCBG5Hunyuan113.9439.4196.791.191.450.010.100.46188.650.620.627.51-17.5TNCBG6Shengshui114.0440.4489.906.651.301.84—0.31149.341.021.0212.53-15.0TNCBG7Tianzhen114.0440.4394.452.841.311.00—0.41219.280.990.9912.16-14.5TNCBG8Yangyuan114.5940.2196.101.311.250.980.020.34185.520.010.010-15.4TNCBG9Houhaoyao115.5440.3488.339.351.310.010.910.09202.850.960.9611.78-20.7TNCBG10Dongwaikou116.0840.96N.A.N.A.N.A.N.A.N.A.N.A.135.540.490.495.88-16.1TNCBG11Tangquan 1115.7440.90N.A.N.A.N.A.N.A.N.A.N.A.85.420.420.424.99-16.1TNCBG12Shengshiyuan115.9940.46N.A.N.A.N.A.N.A.N.A.N.A.63.152.422.4330.15-12.7TNCBG13Wuliying115.9340.48N.A.N.A.N.A.N.A.N.A.N.A.28.832.032.0425.31-14.2TNCBG14Jinyu115.9740.48N.A.N.A.N.A.N.A.N.A.N.A.96.072.522.5231.38-13.1TNCBG15Liyuantou117.1939.03N.A.N.A.N.A.N.A.N.A.N.A.155.300.270.273.12-19.8EBG16Xiawucun117.7839.27N.A.N.A.N.A.N.A.N.A.N.A.14.700.190.181.95-20.6EBG17Luqiancun117.7839.2996.651.901.300.140.02—3.110.160.080.78-20.7EBG18Zunhua117.7640.2198.520.071.280.050.010.0841.810.420.424.97-17.2EBG19Jidong118.3439.18N.A.N.A.N.A.N.A.N.A.N.A.8.120.340.323.73-18.0EBG20Liuzan118.6839.16N.A.N.A.N.A.N.A.N.A.N.A.211.950.350.354.13-17.4EBG21Changsheng118.1739.82N.A.N.A.N.A.N.A.N.A.N.A.538.730.700.708.52-17.2EBG22Baodi117.3439.5569.425.301.097.9116.13—44.500.490.495.85-9.3EBG23Lizigu117.3439.5274.5518.571.052.193.60—17.800.470.465.54-9.8EBG24Dijing117.3639.5473.8817.200.925.552.45—85.630.480.485.74-6.4EBG25Jingdong116.9439.9695.781.601.290.720.540.0782.530.160.161.72-14.4EBG26Huashuiwan116.5040.1897.190.131.211.740.03—20.650.540.536.44-11.20EB“—” stands for contributions below 0.1%. N.A.: not analyzed.δ13C is the value of CO2 in the analyzed sample.Figure 3 Conceptual model of He isotopes in the North China Craton (NCC). (a) Comparison between the thermal and seismic lithospheric bases for the NCC and the3He/4He (Ra) of the Eastern Block (EB) and the Trans-North China Block (TNCB) [70]. 
(b) The mantle convection regime for the destruction and modification of the lithosphere beneath the NCC, as modified after Zhu et al. [7]. Faults according to Chen et al. [71].

The mantle contribution to the helium isotopes in the hot spring gases ranged from 0 to 31.38%. The mantle contributions of sampling locations G1, G2, G3, and G8 were below 0.5%. Sample location G8 was close to the Ordos Basin and represented a typical crustal source gas, with no mantle contribution (R/Ra = 0.01). The sampling points G1, G2, and G3 were in the Qicun geothermal field in Shanxi Province (Figure 1(b)). The Qicun geothermal field is in the Zhoushan fault-fold zone of Hongtaoshan-I in the Shanxi block of the North China Plate and is categorized as part of the plate-interior thermal system. The heat source is vertical heat transfer through the crust, and the circulation of meteoric water occurs through deep faults [62, 63]. The mantle contributions of sampling locations G6, G7, and G9 exceeded 10%, while those of sampling locations G12, G13, and G14 were 30.15%, 25.31%, and 31.38%, respectively. G9 is in the Houhaoyao thermal field, characterized by developed fractures and rock rupture, thus providing access and space for the upwelling of mantle-derived material [64]. The sampling locations G12, G13, and G14 are in the Yanhuai Basin (Figure 1(b)). Regional geophysical data showed the presence of typical crustal faults, magmatic activities, complex structural patterns composed of shallow faults, and a mantle transition zone in the lower part [65]. The hot spring gases distributed in the central NCC indicated the presence of mantle sources in the TNCB, which may be related to crustal thinning in the region. A low-velocity belt of 130 km was noted at the junction between the northern North China Basin and the Yanshan orogenic belt. This belt experienced multiple periods of tectonic activity, thereby providing a favorable channel for gas upwelling [66, 67]. The TNCB is a typical continent-to-continent collision belt that was formed by the collision between the EB and the WB at ~1.85 Ga. The rock units in the region experienced intense multistage deformation accompanied by large-scale overthrusting and ductile shear [68]. Tectonic activity, crustal thinning, and deeply cutting active faults provide good conditions for mantle gas emission. The high 3He/4He ratios (0.10–2.52 Ra) are associated with the injection of a magmatic reservoir beneath the faults in the TNCB and EB, and subduction of the Pacific Plate also results in higher activity in the TNCB and EB [69].

Typical crustal helium isotope signatures in the hot spring gases of sampling sites G1, G2, G3, G4, and G8 were distributed in the western EB, outside of the boundaries of the Ordos Basin. The remaining 21 hot spring sampling sites showed a mantle-derived gas supply, similar to the spatial scope of the destruction caused by Pacific Plate subduction to the east of the NCC (Figure 3(b)). This assertion was further verified by Dai et al. [26], who found that gases in the Ordos Basin were characterized by low CO2 contents and low R/Ra values (<0.1), typical of craton basins [72, 73].

The spatial differentiation of gas isotope sources is not accidental. Many fault basins were formed in the eastern NCC during the Paleogene. This was accompanied by prolonged basalt eruptions and active magmatism. These events facilitated the formation of oil in the eastern NCC [69].
In addition, a low-velocity zone was observed at a depth of 70 km in the eastern NCC [66], along with an obvious lithosphere-asthenosphere boundary (LAB) [74]. These observations confirm the presence of deep fluid activity in the eastern NCC. The EB and the TNCB contain greater quantities of mantle-derived gases and more direct channels. The thickness of the lithosphere of the western Ordos Basin (80–180 km) exceeds that of the EB and the TNCB, and an area of thinning is evident to the east of the basin [75], characterized by less magmatic activity and a stable structure [7, 9]. The eastern NCC shows a higher heat flow compared to the western part [76], which can be related to magmatic activity during the late Mesozoic and thinning of the continental lithosphere.

### 5.2. Gas Composition

The present study analyzed 16 hot spring gas samples in the EB and TNCB (Table 3). All samples were rich in N2 (69.42–98.52%). The N2 content of the gas samples was characteristic of a medium-low temperature hydrothermal system, such as the peninsular craton in the heat field of central and western India, indicating low deep equilibrium temperatures [77]. The accumulation of N2 may be due to thermal decomposition of organic matter in sedimentary and metamorphic rocks [78].

The N2/Ar ratios in air, air-saturated water (ASW), and groundwater were 84, 38, and 50, respectively [80–82]. The results of the present study showed a strong correlation between N2 and Ar (Figure 4). Plotting the results of the gas sample analysis showed two separate distributions. Gas samples were distributed along the He-air trend line with N2/Ar ratios approaching the atmospheric value of 84, suggesting that N2 originated from the atmosphere and that the aquifers were recharged with meteoric water containing dissolved air. Atmospheric precipitation is the main source of Ar in geothermal gas [32]. Therefore, both the Ar and the N2 in the hot spring gases originated from the atmosphere.

Figure 4 Relative N2-He-Ar abundances in free gas. ASW represents air-saturated water [79]. Classification of subduction-derived gases after Fischer et al. [80]. Numbers and names of sampling sites are listed in Table 3.

The N2/He ratios of 11 samples showed regional variation of sources. As shown in Figure 2, the sampling locations G1, G2, G3, and G4 showed crust-dominated gases, consistent with the helium-neon isotope results (Figure 5). The mantle contribution of sampling locations G5, G6, G7, and G8 was significantly increased. These gas sampling points were in the central TNCB. Simultaneously, the sampling locations G9, G11, and G15 contained typical subduction zone gases. These sampling locations were in the EB, which experienced the highest degree of destruction of the NCC. Arc-type gases are characterized by a high N2 content, N2/Ar ratios > 200, and N2/He ratios > 1,000; mantle-derived gases are characterized by a low N2 content and N2/He ratios < 200 [80] (a brief screening sketch using these thresholds follows the Figure 5 caption below). Sample location G15 is in the Bohai Bay Basin in the Huanghua Depression and Cangxian uplift area. This area has conditions that facilitate mantle degassing. The mantle contributions of the sampling points gradually increased from west to east (Figure 3(b)), and gas characteristics of the subduction zone were observed in the EB (Figure 4).

Figure 5 Plot of R/Ra vs. 4He/20Ne in the Trans-North China Block (TNCB) and Eastern Block (EB). Endmembers in the plot are (R/Ra)air = 1, (4He/20Ne)air = 0.254, (R/Ra)mantle = 8, (4He/20Ne)mantle = 1000, (R/Ra)crust = 0.02, and (4He/20Ne)crust = 1000 [91]. Numbers and names of sampling sites are listed in Table 3.
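As a rough illustration of these classification thresholds, the hypothetical Python helper below reports the N2/Ar and N2/He ratios of a free-gas sample and notes which of the criteria quoted above it meets. The helper is only a sketch and is not part of the study's workflow; note that a low N2/He alone does not distinguish mantle helium from radiogenic crustal helium, which is why the isotope ratios of Section 5.1 are also needed.

```python
# Screening of a free-gas sample against the N2/Ar and N2/He thresholds quoted
# from Fischer et al. [80]; the function and its labels are illustrative only.

def ratio_notes(n2_pct: float, he_pct: float, ar_pct: float):
    """Return (N2/Ar, N2/He, notes) for bulk gas contents given in volume %."""
    n2_ar = n2_pct / ar_pct
    n2_he = n2_pct / he_pct if he_pct > 0 else float("inf")
    notes = []
    if n2_ar > 200 and n2_he > 1000:
        notes.append("meets the arc-type (subduction-related) criteria")
    if n2_he < 200:
        notes.append("N2/He < 200: helium-rich (mantle or radiogenic crustal He)")
    if abs(n2_ar - 84) / 84 < 0.3:
        notes.append("N2/Ar close to the atmospheric value of ~84")
    return n2_ar, n2_he, notes


if __name__ == "__main__":
    # Example: G1 (Qicun), N2 = 96.44%, 4He = 1.47%, Ar = 1.43% (Table 3).
    # G1 is helium-rich with a near-atmospheric N2/Ar; its crustal (radiogenic)
    # character only becomes clear from its low R/Ra in Table 3.
    print(ratio_notes(96.44, 1.47, 1.43))
```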
### 5.3. Carbon Isotopes of CO2

CO2 in hot spring gas is generated by organic or inorganic processes. The formation of CO2 through organic processes involves the decomposition of organic matter and bacterial activity. The formation of CO2 through inorganic processes involves magmatic activity in the mantle, thermal decomposition, and the dissolution of carbonate rocks [83]. The δ13C value is an effective criterion for determining the source of carbon dioxide and methane [84]. δ13C in CO2 from degassing of the upper mantle ranges from −8‰ to −4‰ [85, 86]. δ13CCO2 values of magmatic origin range from −9.1‰ to 2‰ [87, 88]. δ13CCO2 in sedimentary basins is generally controlled by the generation of organic hydrocarbons through thermal decomposition and ranges from −25‰ to −15‰ [89]. δ13CCO2 of organic origin is generally lower than −10‰, while δ13CCO2 of inorganic origin typically ranges from −8‰ to 3‰ [90].

Figure 6 shows the mixing of different sources. The δ13CCO2 values showed a trend indicative of the depositional genesis of carbonates (dashed box). Carbonate rocks account for 80% of the volume of the sedimentary strata in the region and provide a material source for carbon [37]. Only the δ13CCO2 of sampling locations G22, G23, and G24 exceeded −10‰. These sampling locations were in the central North China Plain (EB). Meanwhile, the CO2 concentration of the spring waters was relatively low, ranging from 0.01 to 7.91%. The CO2 concentrations of only sampling locations G22, G23, and G24 exceeded 2% (Table 3). Most organic carbon was observed in the EB, and the δ13CCO2 value of sedimentary origin in the EB was similar to that in the Ordos Basin (Figure 6).

Figure 6 Plot of 3He/4He (R/Ra) vs. δ13CCO2. The endmember compositions are sedimentary organic carbon (S, δ13CCO2 = −25‰ to −19‰, 3He/4He (R/Ra) = 0.01; green arrow), mantle carbon (M, δ13CCO2 = −6‰ to −2‰, 3He/4He (R/Ra) = 8; orange arrow), and limestones (L, δ13CCO2 = 0‰, 3He/4He (R/Ra) = 0.01; blue arrow) [91]; numbers and names of sampling sites are listed in Table 3.

The CO2 concentrations in the samples observed in the present study differed from those of gas wells in the Ordos Basin recorded by Dai et al. [26], which ranged from 0.02 to 8.87% with an average of 1.86%. The maximum and average CO2 concentrations in the Bohai Bay Basin far exceeded those in the Ordos Basin, with an average value of >10%. This observation could be attributed to the dissolution of CO2 in water being promoted by the high temperature of the geothermal water, leaving behind only a small quantity of inorganic carbon.

Moreover, 3He/4He (R/Ra) showed that gas in the sample sites near the EB was of mantle origin. In contrast, the gas component in the sample sites near the Ordos Basin was of sedimentary origin (Figure 6).
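A minimal sketch of how a measured δ13CCO2 value can be screened against the source ranges quoted above is given below. Because the ranges overlap, a single value can be consistent with more than one origin, so the hypothetical helper returns every matching interpretation rather than a single label; it is illustrative only and not the study's actual procedure.

```python
# Screening of a δ13C(CO2) value against the source ranges quoted in the text.
# The helper and its labels are illustrative only.

D13C_RANGES = {
    "upper-mantle degassing": (-8.0, -4.0),                   # [85, 86]
    "magmatic CO2": (-9.1, 2.0),                              # [87, 88]
    "thermal breakdown of organic matter": (-25.0, -15.0),    # [89]
    "generic organic origin (< -10 permil)": (float("-inf"), -10.0),  # [90]
    "generic inorganic origin": (-8.0, 3.0),                  # [90]
}


def candidate_sources(d13c_permil: float) -> list:
    """Return every source range that the measured value falls into."""
    return [name for name, (lo, hi) in D13C_RANGES.items() if lo <= d13c_permil <= hi]


if __name__ == "__main__":
    # Example: sample G24 (Dijing), δ13C = -6.4 permil (Table 3); the result is
    # consistent with the mantle/magmatic (inorganic) interpretations only.
    print(candidate_sources(-6.4))
```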
The analysis of hydrogen and oxygen isotope ratios of meteoric water samples at different latitudes globally has shown that they all follow a linear relationship called the global meteoric water line (GMWL): δD = 8δ18O + 10 [93].

The average values of δD and δ18O at the 46 sampling points in the study area were −78.46‰ and −10.27‰, respectively, and ranged from −77.26 to −69.27‰ and from −10.45 to −8.04‰, respectively. As shown in Figure 7, when plotting δD against δ18O, most of the hydrogen and oxygen isotope compositions were distributed near the LMWL, indicating a meteoric water source. The results indicated an oxygen shift, with a maximum shift of 1.39 (G21).

Figure 7 Hydrogen and oxygen stable isotopic composition of geothermal water in the Eastern Block (EB) and the Trans-North China Block (TNCB) of the North China Craton (NCC). GMWL stands for global meteoric water line: δD = 8δ18O + 10 [93]; LMWL stands for local meteoric water line, here the meteoric water line of the monsoon region of eastern China: δD = 7.46δ18O + 0.9 [106]. Numbers and names of sampling sites are listed in Table 1.

Rocks are rich in oxygen (>40%) and poor in hydrogen (less than 1%) [94]. Therefore, reactions between water and rocks can result in an oxygen shift in the water, whereas δD remains largely stable. For example, the water-rock interaction between atmospheric precipitation and carbonate-bearing media can enhance the δ18O value of the water [95]. A high temperature can promote interactions between water and rock, thereby intensifying the exchange of oxygen isotopes between geothermal water and the oxygen-enriched surrounding rock. The δ18O and δD of Tianzhen 1 in the TNCB were −9.8‰ and −81.5‰, whereas those of Tianzhen 2 were −9.9‰ and −80.5‰, respectively. These results indicate that the exchange of oxygen isotopes may occur in the presence of oxygen-rich rocks in the Quaternary sand-gravel aquifer of the Yanggao-Tianzhen Basin [25]. The δ18O values span a wide range (−10.3 to −8.0‰), and it is noteworthy that the samples all plot to the right of the GMWL, with relatively uniform δD values. Considering an average isotopic gradient of precipitation for China, the H and O isotopes suggest that the differences in δ18O probably arise because the geothermal water is recharged from different areas, the Taihangshan Range and the Yan Shan Range northwest of Tianjin [96]. Meanwhile, δ13CCO2 in the EB is of carbonate sedimentary origin (Figure 6), thereby facilitating oxygen isotope exchange in water-rock reactions. The corresponding δD and δ18O values of magmatic water are −20 ± 10‰ and 10 ± 2‰, respectively [97, 98]. In addition, the δD and δ18O of the EB exceeded those of the WB (Figure 7), possibly due to magmatism.

There were obvious spatial differences in δD between the EB and the TNCB. The average δD values of the EB and TNCB were −79.2‰ and −84.1‰, respectively, significantly lower than those in the area of destruction in the NCC. The δD value can be used to derive the groundwater recharge depth in the crust, as it decreases with depth [99]. The reservoir temperatures and δD values indicated that the EB has deeper groundwater circulation. Moreover, the contribution of deep mantle heat flow in the eastern NCC exceeds that in the western NCC [14]. This result may be related to active underground magmatic activity in the eastern NCC, the thinning of the crust, and the higher intensity of seismic activity.
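One plausible way to quantify the oxygen shift discussed above is to compare a sample's measured δ18O with the δ18O that a meteoric water line predicts for its δD; the sketch below does this for both the GMWL and the eastern-China LMWL quoted in the Figure 7 caption. The helper name is illustrative, and it is an assumption that the reported maximum shift of 1.39 for G21 was computed in exactly this way.

```python
# Oxygen-shift estimate relative to a meteoric water line: for a measured
# (δ18O, δD) pair, predict δ18O from δD using the line and take the difference.
# Line coefficients follow the text: GMWL δD = 8·δ18O + 10, LMWL δD = 7.46·δ18O + 0.9.

def oxygen_shift(d18o: float, dd: float, slope: float, intercept: float) -> float:
    d18o_on_line = (dd - intercept) / slope   # δ18O the line predicts for this δD
    return d18o - d18o_on_line                # positive = plots to the right of the line


if __name__ == "__main__":
    lines = {"GMWL": (8.0, 10.0), "LMWL": (7.46, 0.9)}
    # Example: Changsheng (G21 area), δ18O = -8.0 permil, δD = -69.4 permil (Table 1).
    # The LMWL value comes out near 1.4, close to the ~1.39 maximum shift reported
    # in the text (small differences reflect rounding of the tabulated values).
    for name, (slope, intercept) in lines.items():
        print(name, round(oxygen_shift(-8.0, -69.4, slope, intercept), 2))
```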
### 5.5. Reservoir Temperature in the NCC

Shallow geothermal data are of great significance for describing the thermal state and revealing geodynamic processes of the continental lithosphere [100–102]. Since water-rock reactions are related to temperature, geochemical thermometers have been widely used to estimate the final temperature of water-rock equilibrium in a reservoir [103]. The Na-K-Mg ternary diagram is usually used to determine the degree of water-rock equilibrium [104]. It is widely used in studies of hydrothermal systems and provides a basis for chemical geothermometers [105].

The results of the present study showed that the geothermal water in the Ordos Basin was largely local equilibrium water, whereas most of the geothermal water in the EB was immature (Figure 8). The present study used the cation temperature scale to calculate the reservoir temperature. The prerequisites for using this temperature scale are as follows: the water should be in equilibrium, the temperature should be greater than 25°C, and the calculated reservoir temperature should exceed the measured temperature of the hot spring. Although some spring waters in the study area do not satisfy these conditions, the differences in calculated reservoir temperature between the EB and the WB are still of reference value. The K-Mg thermometric scale [104] can indicate medium and low reservoir temperatures:

(5) t (°C) = 4410 / (14.0 − log(K2/Mg)) − 273.15,

where K and Mg are the potassium and magnesium concentrations (a short worked example is given at the end of this subsection).

Figure 8 Na-K-Mg triangular diagram of geothermal waters of the North China Craton (NCC) (base map according to [104]).

The results of the K-Mg thermometric scale showed that the geothermal reservoir temperature of the WB ranged from 25.3 to 116.4°C, with an average of 38.4°C, whereas the geothermal reservoir temperature in the EB and TNCB ranged from 31.1 to 112.6°C, with an average of 80.7°C (Figure 2; Tables 1 and 2). The present study used Inverse Distance Weighting (IDW) to analyze the spatial distribution of reservoir temperature. The IDW interpolation method is widely used in digital elevation models (DEMs), meteorological and hydrological analysis, and other fields due to its simplicity and convenient calculation [107].

The temperatures of the hot springs in the WB ranged from 8.8 to 28.5°C, with an average of 16.1°C, whereas those in the EB and TNCB ranged from 11.4 to 92.6°C, with an average of 50.2°C (Tables 1 and 2).

Such regional differences are not accidental. The Ordos Basin has a thermal state in which the crustal contribution to heat flow exceeds the mantle contribution, with a mantle heat flow of 21.2–24.5 mW·m−2 [108]. The EB has a thermal state in which the mantle contribution exceeds the crustal contribution, with a mantle heat flow ranging between 30 and 140 mW·m−2 and averaging 61.9 ± 14.8 mW·m−2 (He et al., 2011). The heat flow in the WB is related to its thicker lithosphere composed of continental blocks. In addition, the Cenozoic tectonic-thermal activity in the WB is weaker than that in the EB [70]. This can be attributed to the subduction of the Pacific Plate beneath the EB area, resulting in a thinner lithosphere and higher volcanic activity.

The inconsistency between the measured temperatures and reservoir temperatures of the hot springs can be partially attributed to the difference in heat flow (Figure 9) between the EB and the WB. Meanwhile, geothermal water circulation in the eastern NCC occurs at a greater depth. Crustal circulation at a greater depth is also a source of heat.
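A minimal sketch of Equation (5), checked against one tabulated value, is given below; the function name is illustrative, and the mg/L units follow the concentrations reported in Tables 1 and 2.

```python
# K-Mg geothermometer of Eq. (5): t(degC) = 4410 / (14.0 - log10(K^2 / Mg)) - 273.15,
# with K and Mg in mg/L as reported in Tables 1 and 2.
import math


def k_mg_temperature(k_mg_l: float, mg_mg_l: float) -> float:
    """Return the K-Mg reservoir temperature in degC."""
    return 4410.0 / (14.0 - math.log10(k_mg_l ** 2 / mg_mg_l)) - 273.15


if __name__ == "__main__":
    # Example: Guanghegu (Table 1), K+ = 6.8 mg/L, Mg2+ = 7.4 mg/L.
    # Prints about 60.8 degC, compared with the tabulated TK-Mg of 60.9 degC.
    print(round(k_mg_temperature(6.8, 7.4), 1))
```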
Figure 9 Contour map of heat flow in the North China Craton (NCC). Data from the China Heat Flow Database (CHFDB, chfdb.xyz).

## 6. Conclusions

The TNCB and EB of the NCC provide paths for the emission of gas due to the presence of active faults. Continuous upwelling of mantle-derived gas occurs in the asthenosphere beneath the crust. The present study conducted a chemical analysis of the hot springs and gas in the craton basin as well as an isotope analysis in the EB and the TNCB, whereas a chemical analysis of hot springs was conducted in the WB. From the results of the present study, the following conclusions could be made:(1) The compositions of helium-neon isotopes showed that mantle sources contributed to isotope compositions to different degrees in both the Bohai Bay Basin (EB) and the basin-ridge tectonic area (TNCB). There was a small mantle contribution of helium in Xinzhou near the Taihang Mountain. Moreover, the R/Ra of helium in most natural gas components of the Ordos Basin indicated crustal sources(2) Abundant N2 was noted in all hot springs in the eastern region, and the contribution of N2 from mantle sources increased from west to east. Typical subduction zone gases were noted in the eastern region, which remained exposed in the Huailai Basin. Ar and N2 in the study area may be of the same origin.
The results of δ13CCO2 showed that the CO2 in the gas of the EB is of organic origin, whereas that in the high-temperature geothermal areas hosted in some carbonate sedimentary layers is of inorganic origin. (3) The results of the hydrogen and oxygen isotope analysis showed that the geothermal water in the east of the craton is meteoric water. δD increased gradually from west to east, as did the depth of groundwater circulation. The reservoir and water temperatures calculated with the cation temperature scale showed that temperatures in the EB far exceeded those in the WB.

Hydrochemistry, gas composition, and isotope analyses of hot springs in the NCC area can provide favorable evidence for the development and utilization of geothermal fields. The results of the present study showed obvious spatial differences in these attributes, which could be related to tectonic conditions, magmatic activities, and active faults. Mantle-derived helium was found in the hot spring gas of the TNCB, consistent with the extent of Pacific Plate subduction.

---

*Source: 1009766-2021-10-25.xml*
# Fluid Geochemistry within the North China Craton: Spatial Variation and Genesis

**Authors:** Lu Chang; Li Ying; Chen Zhi; Liu Zhaofei; Zhao Yuanxin; Hu Le

**Journal:** Geofluids (2021)

**Publisher:** Hindawi

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2021/1009766
--- ## Abstract The North China Craton (NCC) is a typical representative of the ancient destruction craton. Numerous studies have shown that extensive destruction of the NCC occurred in the east, whereas the western part was only partially modified. The Bohai Bay Basin is in the center of the destruction area in the eastern NCC. Chemical analyses were conducted on 122 hot spring samples taken from the eastern NCC and the Ordos Basin. Theδ2H and δ18O in water, δ13C in CO2, and 3He/4He and 4He/20Ne ratios in gases were analyzed in combination with chemical analyses of water in the central and eastern NCC. The results showed an obvious spatial variation in chemical and isotopic compositions of the geofluids in the NCC. The average temperature of spring water in the Trans-North China Block (TNCB) and the Bohai Bay Basin was 80.74°C, far exceeding that of the Ordos Basin of 38.43°C. The average δD in the Eastern Block (EB) and the TNCB were −79.22‰ and −84.13‰, respectively. The He isotope values in the eastern region (TNCB and EB) ranged from 0.01 to 2.52, and the rate of contribution of the mantle to He ranged from 0 to 31.38%. δ13C ranged from −20.7 to −6.4‰ which indicated an organic origin. The chemical compositions of the gases in the EB showed that N2 originated mainly from the atmosphere. The EB showed characteristics of a typical gas subduction zone, whereas the TNCB was found to have relatively small mantle sources. The reservoir temperatures in the Ordos Basin and the eastern NCC (EB and TNCB) calculated by the K-Mg temperature scale were 38.43°C and 80.74°C, respectively. This study demonstrated clear spatial variation in the chemical and isotopic compositions of the geofluids in the NCC, suggesting the presence of geofluids from the magmatic reservoir in the middle-lower crust and that active faults played an important role in the transport of mantle-derived components from the mantle upwards. --- ## Body ## 1. Introduction A craton is characterized by a thick lithospheric mantle, cold geotherm, low density, and high viscosity, with these characteristics providing protection from destruction by later geological processes [1]. Cratons are an important geological unit on the surface of the Earth and cover ~50% of the area of the continental crust [2]. The North China Craton (NCC) is an ancient craton that has attracted much attention due to its lithosphere showing signs of severe disturbance or reactivation in some regions, resulting in significant losses or modifications of the mantle root (e.g., [3–9]).The application of theS-wave receiver function showed that the eastern NCC has experienced extensive damage and that thinning of the lithosphere in the east and central parts of the NCC has exceeded that in the west by 60 to 100 km (Chen et al., 2009). Mineral inclusions in diamonds originating from the kimberlite in the provinces of Shandong and Liaoning, China, have indicated the existence of a 200 km thick lithosphere 470 Ma ago [10, 11]. In addition, the mantle-derived inclusions in Cenozoic basalts indicate a lithospheric thickness of 80–120 km [12]. Application of a geothermal evolution model found that the lithosphere experienced two thinning periods during the Cretaceous and Paleogene. The lithosphere experienced thinning between the early Mesozoic to early Cretaceous, reducing in thickness from 150 km to 51 km, following which it thickened to ~80 km. 
The lithosphere experienced thinning again during the middle and late Paleogene, reducing in thickness to 48 km, consistent with the present thickness of the Bohai Bay fault depression. Subsequently, lithosphere thickening occurred once again, with the crust increasing in thickness to 78 km [13]. The results of a thermal simulation model indicated the presence of a mantle heat flow of 24–44 mW·m−2 in the eastern NCC, which exceeded that of 21.2–24.5 mW·m−2 in the western NCC [14].

The study of geothermal fluids can increase the understanding of the geological significance of geothermal genesis, reservoir temperature, heating mechanism, circulation depth, and supply source [15–18] and can also act as a basis for exploring mantle-derived material and the shallow response resulting from the geodynamic process of plate subduction. The isotopic composition of gas is widely used to study deep structural changes and the transport mechanism in cratons (e.g., [19–24]).

Zhang et al. [25] analyzed the He and C isotope ratios of hot spring gases in the TNCB within the NCC. By analyzing the P-wave velocity and time-averaged fault slip rate, they concluded that mantle volatiles are generated in the upwelling asthenosphere, following which they rise through faults and fractures in which permeabilities are controlled by slip rates. In a study comparing the geochemical characteristics of helium and CO2 in the cratonic and rift basins in China, Dai et al. [26] determined that gas samples collected from the cratonic basins have lower CO2 contents and R/Ra ratios than those from the rift basins. Gas samples in the rift basins have been shown to have a larger range of variation in δ13CCO2, implying the presence of stronger tectonic activities. Xu et al. [27] determined that helium found in fluids collected from the crust in Liaodong (EB) is derived from the mantle and that active faults play an important role in transferring mantle-derived components to the surface in nonvolcanic regions.

However, there have been limited detailed studies of the mantle-derived components associated with the NCC [27–29]. Therefore, a study on the spatial differences in fluid chemistry and isotopic signatures of subduction zones and their genesis would be significant.

The present study collected samples of geothermal gases and water from the Ordos Basin, the TNCB, and the EB. The chemical compositions of the water and gas samples were measured. In addition, the δD and δ18O isotope compositions of the water samples were determined. The gases collected from geothermal wells were analyzed for 3He/4He and 4He/20Ne ratios and δ13C of CO2. The present study was aimed at identifying the spatial variations in the chemical and isotopic compositions of geothermal wells in the NCC and at analyzing the sources and genesis of these chemical characteristics. The thermal reservoir temperature and the contribution of mantle-derived helium were determined. The results of the present study confirmed the presence of mantle-derived components in the EB and TNCB gases due to magmatism and active faults.

## 2. Geological Settings

The NCC consists of two main crustal blocks, namely, the Western Block (WB) and the Eastern Block (EB). These two crustal blocks are stitched together by the Trans-North China Orogen (Figure 1(a)) [30].
The present study collected gas samples from the TNCB and the EB of the NCC (Figure 1), whereas geothermal water samples were collected from across the NCC (Figure 2).

Figure 1 Distribution of gas samples collected from the North China Craton (NCC): (a) the subzones of the NCC, modified after Zhao et al. [31]; (b) the geological map of the Eastern Block (EB) and the Trans-North China Block (TNCB) showing faults and topography.

Figure 2 Interpolation diagram of the K-Mg ionic temperature scale in the North China Craton (NCC); numbers and names of sampling sites are listed in Tables 1 and 2. The interpolation method uses Inverse Distance Weighting (IDW).

The Bohai Bay Basin in the EB of the NCC is an important basin in China due to the presence of geothermal resources, oil, and gas. The basin lies adjacent to the Jiaoliao fault-uplift area in the east, the Liaohe Depression in the north, the Jiyang Depression in the south, and the Huanghua Depression in the west [32, 33]. The Bohai Bay Basin is a rift basin superposed by coal-bearing basins of the Meso-Cenozoic Carboniferous-Permian system lying on the basement of the Mesoproterozoic, upper Proterozoic, and Paleozoic cratons. The basin has experienced the Indochina, Yanshan, and Himalayan movements and is characterized by active tectonic movements, numerous faults, and strong volcanic activities [34–36]. The study area shows a complex structure, the development of active faults, and frequent earthquakes. The main direction of stress in the study area is northeast-east, resulting in deep fault cuts [37]. Abundant low/medium-temperature pore-type geothermal resources and fractured bedrock occur in the region. Geothermal reservoirs are mainly found in the upper and lower Tertiary sandstone reservoirs and particularly in Paleozoic and Proterozoic carbonate reservoirs [15, 38].

The basin-range tectonic zone of Shanxi Province falls in the North China orogenic belt. The BRPWB is characterized by northeast-east to southwest-west active normal faults with a dextral strike-slip component due to its location on the north end of the S-shaped rift system [25]. The region has experienced many tectonic events and has a unique topography of alternating basins and mountains, as well as the geological background of extensional faulting along the mountains and within the basins. The counterclockwise movement of the EB and the WB in the NCC since the Paleogene has resulted in the formation of the basin-ridge structure [39] in the Trans-North China Orogen. Cenozoic basalts containing abundant mantle xenoliths were found in the outcrops of the Yangyuan and Datong subbasins in the EB [40, 41]. The age of geothermal water exposed at the junction of faults in the Yanhuai Basin was determined to be 30 ka, whereas the temperature of the reservoir was determined to be ~100°C. Mantle-derived helium was detected in the fluids [42]. These findings suggest that intensive magma activity and mantle-derived material recharge may have occurred in the region.

The Ordos Basin is in central-northern China and is associated with an inner-craton depression basin. This typical inland basin is one of the most tectonically stable areas in China and has an area of over 250,000 km2 [26]. The basin is underlain by the Archean granite and lower Proterozoic greenschist basement of the North China block. The southwest region of the basin contains Paleozoic-Cenozoic sedimentary rocks with a thickness of >8 km [43].
Six secondary structures occur in the basin [26]: (1) the Yishan slope, (2) the Tianhuan depression, (3) the Yimeng uplift, (4) the Weibei uplift, (5) the Jin-West flexural fold zone, and (6) the fault fold zone along the western margin. The basin has undergone multiple tectonic movements with a stable internal structure; however, the overall uplift has played a key role [44]. The Fuping block collided with the Western Block between 1.90 and 1.85 Ga and subducted westward to form the North China orogenic belt, accompanied by many magmatic events [45, 46]. The sedimentary source is complex and experienced thickening in the basin after the Middle Proterozoic [47].

## 3. Methods

A total of 123 geothermal water samples were collected from July to August 2016, consisting of 46 samples from the TNCB and EB and 77 samples from the Ordos Basin. Figures 1 and 2 show the sampling locations. Samples were collected in HDPE bottles that had been sterilized by soaking in ultrapure water for 24 h, cleaned ultrasonically, and dried. Samples for the analysis of chemical compositions and isotopes (H and O) were collected in 250 mL and 2 mL HDPE bottles, respectively. All samples were filtered on-site three times through a 0.45 μm membrane filter. Prior to sampling, the sample bottle was rinsed three times using water from the sample source. Once the sample was collected, the sample bottle was sealed with parafilm. Samples for the analysis of major cations were acidified with ultrapurified HNO3 (1 mol L−1) to adjust the pH to below 2. Filtered, unacidified samples were used for anion analysis. Tables 1 and 2 show the results of the water chemistry and isotopic analyses, respectively.

Table 1 Sampling locations, water temperatures, isotopes (H and O), and reservoir temperatures in the Eastern Block and the Trans-North China Block of the North China Craton.
Well no.Longitude (°E)Latitude (°N)Na+ (mg/L)K+ (mg/L)Mg2+ (mg/L)Ca2+ (mg/L)HCO3− (mg/L)Cl− (mg/L)SO42− (mg/L)IB (%)TK‐Mg (°C)T (°C)δ18O (‰)δD (‰)Well depth (m)Guanghegu117.1738.89674.16.87.440.1261.3600.0773.5-3.960.951.0-9.0-71.7—Liyuantou117.1939.03642.65.72.826.9303.6532.7670.9-3.767.856.7-9.2-72.3—Wanjia117.3338.83585.757.07.133.1449.6663.8334.0-3.5115.862.9-8.7-70.8490Longda117.5938.97554.15.02.213.0564.9434.5256.9-2.267.650.5-8.9-71.21800Quanshuiwan117.3339.29306.12.30.95.8626.4119.815.1-1.060.351.6-9.2-72.51500Yongchuan117.3539.25406.177.111.433.9399.7390.1310.8-2.1117.781.2-8.8-71.53000Xinli117.7939.28144.51.10.26.1314.921.733.6-1.857.749.3-9.6-72.21000Xiawucun117.7839.27149.41.20.35.7330.523.432.50.058.053.0-9.5-72.0—Luqiancun117.7839.29110.61.30.15.0284.413.610.4-1.171.644.9-9.6-71.8—Zunhua 1117.7640.21211.46.40.522.6126.832.6364.0-1.191.046.1-10.3-74.0—Zunhua 2117.7640.21198.76.20.317.899.933.4333.0-0.597.941.4-10.3-73.7—Xiyuan118.0639.34179.20.90.811.0292.131.5130.10.041.640.2-9.6-71.2—Jidong 1118.4539.28275.53.91.424.5215.275.9396.6-1.167.046.4-9.4-71.5—Jidong 2118.3439.18609.76.83.866.8222.9560.6739.6-3.968.350.5-8.3-70.81500Caofeidian118.6839.16321.72.20.517.2522.3122.0143.6-2.366.350.4-9.2-70.7—Xiangyunwan118.9839.18630.34.11.34.01437.3216.10.0-1.969.356.3-9.0-70.81800Changsheng118.1739.82555.74.30.87.4822.4471.70.0-2.776.662.9-8.0-69.41600Baodiwang 4117.3439.55198.850.66.938.8384.392.9163.2-0.2112.757.5-9.7-73.53000Lizigu117.3439.52220.256.37.234.8384.3124.4177.9-0.8115.292.6-9.1-72.13000Dijing117.3639.54201.753.37.734.1384.392.3168.1-0.4112.682.2-8.7-71.2—Jixian117.4240.04115.317.617.822.2426.614.532.2-0.872.527.5-9.8-69.3—Xinzhou 1112.6338.54209.65.10.415.688.4198.3136.6-0.989.846.3-10.5-77.3—Xinzhou 2112.6238.54210.77.10.112.992.2195.4124.3-0.4111.256.9-10.4-77.1—Xinzhou 3112.7038.49256.66.81.8155.976.9150.4769.4-3.177.146.0-10.7-80.3—Dingxiangtou112.8338.59424.49.40.5346.611.5471.51510.1-5.9103.050.5-11.3-85.4—Yangouxiang112.7938.95429.06.80.2100.223.0510.9595.8-4.0108.540.3-11.6-87.0—Hunyuan113.9439.41260.39.40.215.661.3141.1326.8-1.5114.463.8-11.3-88.2—Yanggao 1113.8240.42109.95.22.418.1165.264.156.20.667.539.6-10.6-79.7172Yanggao 2113.8240.42114.64.22.514.4153.763.662.20.962.139.7-10.7-81.6203Yanggao 3113.8240.4187.14.63.419.3149.938.244.11.260.545.9-11.0-81.6—Tianzhen 1114.0440.44233.27.56.824.5368.9103.2147.3-0.263.943.8-9.9-80.5504Tianzhen 2114.0440.43271.89.45.525.0380.5145.2178.8-0.871.742.8-9.8-81.5100Yuxian114.4439.8046.53.640.572.5361.226.581.10.231.111.4-10.0-74.5—Sanmafang114.5940.21326.08.922.752.1330.5269.4340.3-1.854.639.4-11.8-88.5180Yangyuan114.5940.21328.49.122.749.2322.8272.2343.9-1.955.139.1-11.8-88.6—Huailai 1115.5440.34292.910.20.220.449.884.4529.5-2.3121.675.0-11.6-89.3—Huailai 2115.5440.34283.19.40.423.373.086.7499.4-1.9104.347.2-11.6-88.0—Huailai 3115.5340.34262.87.30.417.2111.487.1396.7-1.198.447.2-10.7-83.2288Huailai 4115.5340.34256.78.20.217.672.873.1433.9-2.4109.766.0-11.7-88.8500Baimiaocun115.4040.66179.93.82.012.6134.534.2238.4-0.162.339.6-11.5-86.1—Dongwaikou116.0840.96232.86.90.216.661.367.1366.8-1.5109.956.0-11.5-87.6—Chicheng115.7440.90203.08.20.832.4115.329.0403.5-1.791.557.8-12.0-89.2—Shengshi115.9940.4688.413.715.147.4299.846.251.30.968.451.4-11.5-84.3100Songshan115.8240.51143.62.80.29.657.335.6177.2-2.481.934.9-11.9-86.7205Wuliying115.9340.48103.78.011.236.2338.216.468.1-0.559.829.3-11.8-86.0553Jinyu115.9740.4885.513.015.246.8299.844.448.40.867.242.8-11.5-84.2—Average276.812.15.136.4283.8172.2283.078.4— represent natural hot 
springs and self-flowing wells or well depth being not available.Table 2 Sampling locations, water temperatures, reservoir temperatures, and chemical compositions in the Western block and the Trans-North China Block of the North China Craton. Well no.Longitude (°E)Latitude (°N)Na+ (mg/L)K+ (mg/L)Mg2+ (mg/L)Ca2+ (mg/L)HCO3- (mg/L)Cl- (mg/L)SO42- (mg/L)IB (%)TK‐Mg (°C)T (°C)Shangwangtai106.9634.56189.15.31.437.8169.034.0399.0-1.973.728.5Qianchuan107.1534.6434.77.222.671.9637.011.221.12.150.214.8Shuigouzhen106.9934.748.11.015.560.4452.03.413.12.1—17.1Shenjiazui106.8934.8522.43.320.3100.0634.012.950.50.735.814.8Chaijiawa106.7734.945.31.323.555.8518.02.47.61.4—17.9Shilitan106.4035.4317.01.420.071.4601.09.437.2-1.8—10.2Dayuanzi106.3935.4653.22.450.098.3645.021.2257.2-1.0—11.4Fujianchang106.6835.5320.41.721.458.8455.05.734.63.3—12.2Liuhu106.6735.55134.25.944.261.8660.077.9223.30.339.518.5Beishan 1106.7035.5684.81.225.933.9619.015.944.62.3—13.7Beishan 2106.6835.56138.51.250.936.1809.023.4183.41.1—12.9Baiyunsi 2106.2435.6029.61.534.766.3508.03.7151.1-0.3—12.1Baiyunsi 1106.2535.6141.33.039.566.5445.06.2212.90.127.810.9Baiyunsi 3106.2535.6131.31.734.752.1436.03.3129.01.3—17.3Dongshanpo106.2835.62262.61.90.72.41219.023.44.5-0.759.410.2Anguo 1106.5735.62148.86.045.064.5713.077.7235.00.539.622.3Anguo 2106.5735.62170.16.943.961.1688.097.2269.7-0.242.624.4Longde106.1335.6268.81.837.260.3746.018.9101.1-0.1—13.8Hongjunquan106.1835.6752.52.036.279.9545.03.5228.5-0.7—10.2Heshangpu106.2335.6822.42.233.558.2519.06.7111.0-1.2—11.7Beilianchi106.1835.7425.51.831.144.7475.02.666.81.6—20.0Lianchisi106.1835.7423.71.324.538.8420.02.450.41.3—16.7Fuxiya106.1735.754.31.81.231.6199.01.24.10.052.019.0Hongtai105.7935.76731.85.8199.4142.7689.0256.61754.92.525.313.0Wangminjing105.7435.801020.99.23.218.3606.0357.61313.21.777.511.9Pengyang106.6335.85134.14.552.762.1684.078.3240.70.333.018.2Xiangyang106.4035.95618.92.10.70.01131.031.71002.4-1.860.322.4Choushuihe 3106.0636.016861.925.2122.7264.31006.05057.411260.7-4.558.717.1Xiaokou 2106.0836.0119817.0116.128.2138.12580.07759.830847.2-0.3116.417.6Xiaokou 1106.0836.0224462.5156.590.7220.11508.011018.937317.9-0.4108.18.8Chaigou 2105.8836.07135.12.755.6108.5717.022.4484.7-1.6—9.0Chaigou105.8936.08108.52.453.272.4632.013.8297.81.4—12.9Heiyanquan105.8836.0894.52.546.682.9731.014.9280.3-1.1—10.4Choushuihe 1106.0436.1350.73.063.0101.0857.06.6264.3-1.0—17.1Choushuihe 2106.1736.148210.027.1144.9340.01095.06007.213942.9-4.858.517.8Hongyang105.6436.26300.33.592.7499.1112.0152.02580.0-6.3—11.0Zhengqi105.9636.45901.45.6204.3279.7145.01918.41420.9-4.7—12.4Xiaoshanquan105.6036.5015.74.332.659.8523.06.869.11.336.613.1Shuangjing106.2536.593598.0184.191.0318.81581.0—9313.4-3.0112.725.1Ganyanchi105.2336.67622.16.8107.011.01524.0735.1846.6-8.434.016.2Yaoxian105.1737.41510.310.8117.3133.1435.0660.31064.6-3.741.817.7Shuitaocun106.3137.46452.44.5146.1123.2478.0695.8900.4-3.0—13.9Nitanjing105.2037.46602.37.39.248.229.0629.6474.41.460.119.4Nitanquan105.2037.46867.25.435.628.5608.0404.71516.4-4.039.915.8Daquan106.3437.97105.41.216.114.4248.072.890.92.1—17.6Miaoshan 1105.8538.03273.95.545.058.6540.0267.6337.6-0.838.115.4Miaoshan 2105.8638.03258.35.543.556.7561.0246.2317.8-1.038.417.3Hongshitou 1105.6738.76116.22.741.968.0334.0174.2186.9-0.125.712.6Hongshitou 2105.6838.77160.75.034.158.1425.0148.2238.6-0.638.920.9Dashuigou106.1538.8916.22.014.268.4346.013.359.92.029.712.8Longquansi106.2838.9653.62.819.553.4358.052.8105.5-0.932.916.4Jianquan106.4839.08101.16.3123.0117.7662.080.3650.6-1.031.118.4Shitanjing 
2106.3139.18127.27.449.5205.2349.092.9857.6-4.142.823.9Shitanjing 1106.3139.1891.34.338.5150.7337.076.2534.2-2.634.913.5Diyan106.9339.6449.91.924.785.2305.069.0152.61.6—16.8Subeigou106.9539.6783.13.729.768.9343.080.2155.81.534.315.4Qianligou 1106.9939.86122.14.239.177.6405.0107.5248.50.834.512.5Qianligou 2106.9839.86130.13.939.472.1354.0122.1238.71.432.919.1Dahuabei109.4140.739.72.29.568.0426.09.337.1-0.335.512.9Hongqicun109.3340.7315.41.916.680.6514.013.634.71.027.211.4Xishanzui108.7340.7480.04.850.7153.7351.0134.5270.90.334.414.6Aguimiao106.4240.7450.03.527.562.3379.048.2109.50.734.015.1Zhaoer106.5440.8864.617.235.1122.8465.092.9241.4-0.464.219.2Chendexi106.5340.8871.629.837.4126.8454.0125.4248.3-0.276.216.7Shimen106.5640.8875.513.442.899.9443.081.6303.4-1.156.625.8Chahangou106.5740.8828.71.77.345.9277.016.643.70.833.017.2Buerdong106.5740.8968.916.237.4118.7443.092.3306.7-0.862.127.8Shaotoushan109.1441.1269.52.659.161.5589.056.2165.61.9—17.3Dongsheng 1107.0441.1340.314.035.984.8463.052.5151.0-0.159.413.3Dongsheng 2107.0341.1443.815.367.756.3730.032.4112.51.154.414.8Xiliushu107.9341.2842.42.215.658.8401.019.173.10.130.814.7Hulusitai107.7941.2882.93.316.364.2528.046.9101.60.437.914.5Yangguangcun108.2941.2921.43.711.864.1380.012.632.00.843.314.4Xiremiao 2108.6641.5451.02.128.586.4658.040.178.80.7—23.7Xiremiao 1108.6641.5450.22.627.272.5553.038.478.50.928.718.6Xiremiao 3108.6641.5444.41.628.369.8572.035.775.00.0—16.5Hailiutu108.5141.5954.74.822.467.4402.039.469.71.642.217.2Average965.711.145.392.0589.3505.01645.947.4— represents spring water not suitable for a cation temperature scale.The water temperatures of springs were measured using a thermometer with an accuracy of 0.1°C. Water chemistry analyses were performed in the Key Laboratory of Earthquake Prediction, Institute of Earthquake Science, China Earthquake Administration, using a DionexICS-900 ion chromatograph with an ion detection limit 0.1 mg L−1. Calibration for the analysis was achieved using standard samples from the National Institute of Metrology, China. A mixed solution of NaHCO3 and Na2CO3 was used as the anion eluent, whereas a methane sulfonic acid solution was used as the cationic eluent. The titration method with an error less than 5% was used for analyzing CO32− and HCO3−, phenolphthalein and methyl orange were used as indicators, and the test error of the concentration of HCl was 0.08 mol L−1. Oxygen and hydrogen isotope analyses were performed in the Water Isotope and Water-Rock Interaction Laboratory at the Institute of Geology and Geophysics, Chinese Academy of Sciences, using a laser absorption water isotope spectrometer analyzer (L1102-I, Picarro) which used wavelength scanning optical cavity ring-down spectroscopy (WS-CRDS) technology. Analysis of δ18O and δD used the Vienna Standard Mean Ocean Water (V-SMOW) as the standard. The analytical precision of δ18O and δD measurements was ±0.1‰ and ±0.5‰, respectively [37].The quality of the constant elements of hot spring and geothermal water was assessed using theib value [48], with the range of results within ±10%: (1)ib%=∑cations−∑anions0.5×∑cations+∑anions×100.A total of 26 gas samples were collected from TNCB and EB. The gas samples were collected using the water displacement method.The present study used 500 mL AR glass containers, with the glass of the soda lime type containing a high portion of alkali and alkaline earth oxides with a very low permeability for helium [49]. The glass containers were initially immersed in corresponding geothermal water. 
The bottles were filled with spring water, following which funnels allowed displacement of the water with gas. After the gas reached two-thirds of the volume of the bottle, each bottle was forcefully sealed using a solid trapezoidal rubber plug and adhesive plaster [50]. Samples were analyzed within 14 days after collection to avoid the leakage of volatiles.

The chemical compositions of the gas samples were analyzed using a Finnigan MAT-271 mass spectrometer with a precision of ±0.1% at the Key Laboratory of Petroleum Resources Research, Institute of Geology and Geophysics, Chinese Academy of Sciences. The helium and neon isotopes of the gases were detected using a MM5400 mass spectrometer at the Institute of Geology and Geophysics, Chinese Academy of Sciences. The carbon isotopes were measured with the MAT-253 gas isotope mass spectrometer of the Beijing Research Institute of Uranium Geology, with a precision of ±0.1‰ [51].

Rc/Ra is the air-corrected 3He/4He ratio calculated using

(2) Rc/Ra = (R/Ra × X − 1)/(X − 1)

(3) X = (4He/20Ne)measured/(4He/20Ne)air × (βNe/βHe)

where β is the Bunsen solubility coefficient, which represents the volume of gas absorbed per volume of water at the measured temperature when the partial pressure of the gas is 1 atm [52]; assuming a recharge temperature of 15°C, βNe/βHe = 1.21. HeM is the mantle helium contribution to the total helium content, obtained from

(4) Rc/Ra = (R/Ra)crust × (1 − HeM) + (R/Ra)mantle × HeM

According to [53], (R/Ra)mantle = 8; according to [54], (R/Ra)crust = 0.02.

## 4. Results

The average temperatures of spring water in the Bohai Bay Basin, the TNCB, and the Ordos Basin were 55.0°C, 46.1°C, and 19.4°C, respectively. The geothermal water in the Ordos Basin comprised natural hot springs or artesian wells, and there are no data for the depth of the wells. The gas components in the geothermal water were mainly N2, O2, Ar, CH4, and CO2. N2 was the predominant gas component in all samples. The concentrations of heavier hydrocarbons, H2S, SO2, and H2 fell below their respective detection limits. The concentrations of N2 and O2 in the nitrogen-rich hot springs ranged from 69.42% to 98.52% and from 0.07% to 18.57%, respectively. The concentration of Ar ranged between 0.92 and 1.47%, similar to that of air. The concentration of CO2 ranged between 0.01 and 7.91%, up to several orders of magnitude higher than that in air (0.03%). The concentration of CH4 was low, ranging from 0 to 16.13%, with that in all samples below 1% except for those of the Baodi, Lizigu, and Dijing springs.

The gas isotope data showed that the 4He/20Ne ratios ranged from 3.11 to 1,647, far exceeding that of air (0.318). The R/Ra values of 3He/4He ranged from 0.01 to 2.52, representing typical mixtures of crust and mantle sources or radiogenic gas. The δ13C values of the samples ranged from −20.7‰ to −6.3‰.

## 5. Discussion

### 5.1. Helium and Neon Isotopes

Noble gases within the crust originate from three main sources [55]: (1) the atmosphere, introduced into the crust through groundwater recharge; (2) the mantle, from regions of magmatic activity; and (3) gases produced in the crust by radioactive decay processes. The helium and carbon isotope ratios are sensitive tracers of gas sources due to the differing gas isotope ratios among the atmosphere, crust, and mantle [25]. Helium is mainly derived from the atmosphere, crust, and mantle. The mixture of crust and mantle sources can be determined by the R/Ra ratio. The 3He/4He of atmospheric helium is 1.4×10−6, known as Ra [56].
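The two-step correction can be made concrete with a minimal sketch (an illustration, not part of the original analysis): Equations (2)–(3) strip the atmospheric component from the measured ratio, and Equation (4) is then solved for the mantle fraction HeM using the crust and mantle endmembers quoted in the text ((4He/20Ne)air = 0.318, βNe/βHe = 1.21, (R/Ra)crust = 0.02, (R/Ra)mantle = 8). The sample values below are illustrative, chosen to be of the same order as those in Table 3.

```python
# Sketch of the air correction (Eqs. 2-3) and mantle-helium fraction (Eq. 4).
# Constants are the endmember values quoted in the text; the sample is illustrative.

HE_NE_AIR = 0.318            # (4He/20Ne) ratio of air
BETA_NE_OVER_BETA_HE = 1.21  # Bunsen solubility ratio at a 15 degC recharge temperature
R_RA_CRUST = 0.02            # crustal endmember, (R/Ra)crust
R_RA_MANTLE = 8.0            # mantle endmember, (R/Ra)mantle


def air_corrected_r_ra(r_ra: float, he_ne_measured: float) -> float:
    """Eqs. (2)-(3): remove the atmospheric component from a measured R/Ra."""
    x = (he_ne_measured / HE_NE_AIR) * BETA_NE_OVER_BETA_HE
    return (r_ra * x - 1.0) / (x - 1.0)


def mantle_helium_fraction(rc_ra: float) -> float:
    """Eq. (4): two-endmember crust-mantle mixing solved for the mantle fraction HeM."""
    return (rc_ra - R_RA_CRUST) / (R_RA_MANTLE - R_RA_CRUST)


if __name__ == "__main__":
    r_ra, he_ne = 2.52, 96.0   # illustrative hot-spring gas sample
    rc_ra = air_corrected_r_ra(r_ra, he_ne)
    print(f"Rc/Ra = {rc_ra:.2f}, mantle He = {100 * mantle_helium_fraction(rc_ra):.1f}%")
```

Under these endmembers, an air-corrected ratio of about 2.5 Ra corresponds to a mantle helium contribution of roughly 31%, the scale of the largest contributions reported in Table 3.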
Previous studies on the R/Ra ratio of crustal helium provided slightly different but consistent values. For example, Poreda et al. [57] obtained a crustal R/Ra of 0.013–0.021, whereas Mamyrin and Tolstikhin [56] found that the crustal Ra was generally <0.05. The R/Ra of the mantle source generally exceeds 5; i.e., the R/Ra of 3He/4He is 7.9 [58], whereas White [59] obtained an R/Ra of 8.8±2.5 for the upper mantle and 5–50 for the lower mantle. The R/Ra of the WZ13-1-1 natural gas well in the rift basin of the East China Sea is 8.8, which is the maximum Ra of any sedimentary basin in China [60, 61].Table3 shows the different ratios of 3He/4He. The G1, G2, G3, G4, and G8 sampling points were characterized by helium isotopes from the crust with a R/Ra<0.1. These sampling points were in the TNCB and close to the Ordos Basin. Other sampling points located in EB and TNCB showed different degrees of mantle-derived helium supply (Figure 3), consistent with the results of Zhang et al. [25].Table 3 Chemical and isotopic compositions of hot spring gases in the Eastern block and the Trans-North China Block of the North China Craton. No.Sampling siteLongitude (°E)Latitude (°N)N2 (%)O2 (%)Ar (%)CO2 (%)CH4 (%)4He (%)4He/20NeR/RaRc/RaHeM (%)δ13CPDB (‰)Tectonic unitG1Qicun112.6238.5496.440.601.430.030.031.47319.360.040.040.24-14.3TNCBG2Duncun112.7038.4995.720.851.380.13—1.92734.050.030.030.12-18.2TNCBG3Tangtou112.8338.5992.122.721.470.070.053.58971.090.030.030.12-17.5TNCBG4Yangou112.7938.9591.253.311.410.010.033.991647.600.070.070.62-18.6TNCBG5Hunyuan113.9439.4196.791.191.450.010.100.46188.650.620.627.51-17.5TNCBG6Shengshui114.0440.4489.906.651.301.84—0.31149.341.021.0212.53-15.0TNCBG7Tianzhen114.0440.4394.452.841.311.00—0.41219.280.990.9912.16-14.5TNCBG8Yangyuan114.5940.2196.101.311.250.980.020.34185.520.010.010-15.4TNCBG9Houhaoyao115.5440.3488.339.351.310.010.910.09202.850.960.9611.78-20.7TNCBG10Dongwaikou116.0840.96N.A.N.A.N.A.N.A.N.A.N.A.135.540.490.495.88-16.1TNCBG11Tangquan 1115.7440.90N.A.N.A.N.A.N.A.N.A.N.A.85.420.420.424.99-16.1TNCBG12Shengshiyuan115.9940.46N.A.N.A.N.A.N.A.N.A.N.A.63.152.422.4330.15-12.7TNCBG13Wuliying115.9340.48N.A.N.A.N.A.N.A.N.A.N.A.28.832.032.0425.31-14.2TNCBG14Jinyu115.9740.48N.A.N.A.N.A.N.A.N.A.N.A.96.072.522.5231.38-13.1TNCBG15Liyuantou117.1939.03N.A.N.A.N.A.N.A.N.A.N.A.155.300.270.273.12-19.8EBG16Xiawucun117.7839.27N.A.N.A.N.A.N.A.N.A.N.A.14.700.190.181.95-20.6EBG17Luqiancun117.7839.2996.651.901.300.140.02—3.110.160.080.78-20.7EBG18Zunhua117.7640.2198.520.071.280.050.010.0841.810.420.424.97-17.2EBG19Jidong118.3439.18N.A.N.A.N.A.N.A.N.A.N.A.8.120.340.323.73-18.0EBG20Liuzan118.6839.16N.A.N.A.N.A.N.A.N.A.N.A.211.950.350.354.13-17.4EBG21Changsheng118.1739.82N.A.N.A.N.A.N.A.N.A.N.A.538.730.700.708.52-17.2EBG22Baodi117.3439.5569.425.301.097.9116.13—44.500.490.495.85-9.3EBG23Lizigu117.3439.5274.5518.571.052.193.60—17.800.470.465.54-9.8EBG24Dijing117.3639.5473.8817.200.925.552.45—85.630.480.485.74-6.4EBG25Jingdong116.9439.9695.781.601.290.720.540.0782.530.160.161.72-14.4EBG26Huashuiwan116.5040.1897.190.131.211.740.03—20.650.540.536.44-11.20EB“—” stands for contributions below 0.1%. N.A.: not analyzed.δ13C is the value of CO2 in the analyzed sample.Figure 3 Conceptual model of He isotopes in the North China Craton (NCC). (a) Comparison between the thermal and seismic lithospheric bases for the NCC and the3He/4He (Ra) of the Eastern Block (EB) and the Trans-North China Block (TNCB) [70]. 
(b) The mantle convection regime for the destruction and modification of the lithosphere beneath the NCC, as modified after Zhu et al. [7]. Faults according to Chen et al. [71].The mantle source contribution of helium isotopes in gases of the hot springs ranged from 0 to 31.38%. The mantle source contributions of sampling locations G1, G2, G3, and G8 were below 0.5%. Sample location G8 was close to the Ordos Basin and represented a typical crustal source gas, with no mantle source contribution (R/Ra=0.01). The sampling points G1, G2, and G3 s were in the Qicun geothermal field in Shanxi Province (Figure 1(b)). The Qicun geothermal field is in the Zhoushan fault-fold zone of Hongtaoshan-I in the Shanxi block of the North China Plate and is categorized as part of the plate interior thermal system. The heat source is vertical heat transfer through the crust, and the circulation of meteoric water occurs through deep faults [62, 63]. The mantle source contributions of sampling locations G6, G7, and G9 exceeded 10%, while those of sampling locations G12, G13, and G14 were 30.15%, 25.31%, and 31.38%, respectively. G9 is in the Houhaoyao thermal field, characterized by developed fractures and rock rupture, thus providing access and space for the upwelling of mantle-derived material [64]. The sampling locations G12, G13, and G14 are in the Yanhuai Basin (Figure 1(b)). The regional geophysical data showed the presence of typical crustal faults, magmatic activities, complex structural patterns composed of shallow faults, and a mantle transition zone in the lower part [65]. The hot spring gases distributed in the central NCC indicated the presence of mantle sources in the TNCB, which may be related to crustal thinning in the region. A low-speed belt of 130 km was noted at the junction between the northern North China Basin and the Yanshan orogenic belt. This belt experienced multiple periods of tectonic activities, thereby providing a favorable channel for gas upwelling [66, 67]. The TNCB is a typical continent-to-continent collision belt that was formed by the collision between the EB and the WB at ~1.85 billion years ago. The rock unit in the region experienced intense multistage deformations accompanied by large-scale overthrusting and ductile shear [68]. Tectonic activity, crustal thinning, and deep cut active faults provide good conditions for mantle gas emission. The high 3He/4He ratios (0.10–2.52 Ra) are associated with the injection of a magmatic reservoir beneath the fault in the TNCB and EB, and subduction of the Pacific Plate also results in higher activity in the TNCB and EB [69].Typical crustal sources of helium isotopes in hot spring gases for sampling sites G1, G2, G3, G4, and G8 were distributed in the western EB, outside of the boundaries of the Ordos Basin. The remaining 21 hot spring sampling sites showed a mantle-derived gas supply, similar to the spatial scope of the destruction of the Pacific Plate subduction to the east of the NCC (Figure3(b)). This assertion was further verified by Dai et al. [26] who found that helium isotopes of gas in the Ordos Basin were characterized by low CO2 and low R/Ra values (<0.1) typical of the craton basin [72, 73].The spatial differentiation of gas isotope sources is not accidental. Many fault basins were formed in the east NCC during the Paleogene. This was accompanied by long basalt eruptions and active magmatic activities. These events facilitated the formation of oil in the east NCC [69]. 
Besides, a low-velocity zone was observed at a depth of 70 km in the east NCC [66] along with an obvious lithosphere-asthenosphere boundary (LAB) [74]. These observations confirm the presence of deep fluid activity in the eastern NCC. The EB and the TNCB contain greater quantities of mantle-derived gases and more direct channels. The thickness of the lithosphere of the western Ordos Basin (80–180 km) exceeds that of the EB and the TNCB, and an area of thinning is evident to the east of the basin [75], characterized by less magmatic activity and a stable structure [7, 9]. The eastern NCC shows a higher heat flow compared to the western part [76], which can be related to magma activity during the late Mesozoic and thinning of the continental lithosphere. ### 5.2. Gas Composition The present study analyzed 16 hot spring gas samples in the EB and TNCB (Table3). All samples were rich in N2 (69.42–98.52%). The content of N2 of the gas samples was characteristic of that of a medium-low temperature hydrothermal system, such as the peninsula craton in the heat field of central and western India, indicating low deep equilibrium temperatures [77]. The accumulation of N2 may be due to thermal decomposition of organic matter in sedimentary and metamorphic rocks [78].The N2/Ar ratios in air, air-saturated water (ASW), and groundwater were 84, 38, and 50, respectively [80–82]. The results of the present study showed a strong correlation between N2 and Ar (Figure 4). Plotting the results of gas sample analysis showed two separated distributions. Gas samples were distributed along the He-air trend line with N2/Ar ratios approaching that of air of 84, suggesting that N2 originated from the atmosphere and that aquifers were recharged with meteoric water containing dissolved air. Atmospheric precipitation is the main source of Ar in geothermal gas [32]. Therefore, both Ar and N2 in hot spring gas originated from the atmosphere.Figure 4 Relative N2-He-Ar abundances in free gas. The ASW represents air-saturated water [79]. Classification of subduction-derived gases after Fischer et al. [80]. Numbers and names of sampling sites are listed in Table 3.The N2/He ratios of 11 samples showed regional variation of sources. As shown in Figure 2, the sampling locations G1, G2, G3, and G4 showed crustal predominated gases, consistent with the results of the helium-neon isotope (Figure 5). The mantle source contribution of sampling locations G5, G6, G7, and G8 was significantly increased. These gas sampling points were in the central TNCB. Simultaneously, the sampling locations G9, G11, and G15 contained typical subduction zone gases. These sampling locations were in the EB which experienced the highest degree of destruction of the NCC. Arc-type gases are characterized by a high N2 content, N2/Arratios>200, and N2/Heratios>1,000. Mantle-derived gases are characterized by a low N2 content and N2/Heratios<200 [80]. Sample location G15 is in the Bohai Bay Basin in the Huanghua Depression and Cangxian uplift area. This area has conditions that facilitate mantle degassing. The mantle source contributions of the sampling points gradually increased from west to east (Figure 3(b)), and the gas characteristics of the subduction zone were observed in the EB (Figure 4).Figure 5 Plot of R/Ra-4He/20Ne in the Trans-North China Block (TNCB) and Eastern Block (EB). Endmembers in the plot are R/Raair=1, 4He/20Neair=0.254, R/Ramantle=8, 4He/20Nemantle=1000, R/Racrust=0.02, and 4He/20Necrust=1000 [91]. 
Numbers and names of sampling sites are listed in Table 3. ### 5.3. Carbon Isotopes of CO2 CO2 in hot spring gas is generated by the organic or inorganic process. The formation of CO2 through organic processes involves the decomposition of organic matter and bacterial activities. Formation of CO2 through inorganic processes involves magmatic activities in the mantle, thermal decomposition, and the dissolution of carbonate rocks [83]. The δ13C value is an effective criterion to determine the source of carbon dioxide and methane [84]. δ13C in CO2 for degassing of the upper mantle ranges from −8‰ to −4‰ [85, 86]. The values of δ13CCO2 with a magmatic origin range from −9.1‰ to 2‰ [87, 88]. δ13CCO2 in sedimentary basins is generally regulated by the generation of organic hydrocarbon by thermal decomposition and ranges from −15‰ to −25‰ [89]. δ13CCO2 of organic origin is generally lower than −10‰, while δ13CCO2 of inorganic origin typically ranges from −8‰ to 3‰ [90].Figure6 shows the mixing of different sources. The δ13CCO2 isotope showed a trend indicative of depositional genesis of carbonates (dashed box). Carbonate rocks account for 80% of the volume of sedimentary strata in the region and provide a material source for carbon [37]. Only the δ13CCO2 of sampling locations G22, G23, and G24 exceeded −10‰. These sampling locations were in the central North China Plain (EB). Meanwhile, the CO2 concentration of spring water was relatively low, ranging from 0.01 to 7.91%. The CO2 concentrations of only sampling locations G22, G23, and G24 exceeded 2% (Table 3). Most organic carbon was observed in the EB, and the δ13CCO2 value of sedimentary origin in the EB was similar to that in the Ordos Basin (Figure 6).Figure 6 Plot of3He/4He (R/Ra) vs. δ13CCO2. The endmember compositions for sedimentary organic carbon (S, δ13CCO2=−25‰–−19‰, 3He/4HeR/Ra=0.01; green arrow), mantle carbon (M, δ13CCO2=−6‰–−2‰, 3He/4HeR/Ra=8; orange arrow), and limestones (L, δ13CCO2=0‰, 3He/4HeR/Ra=0.01; blue arrow) [91]; numbers and names of sampling sites are listed in Table 3.The CO2 concentrations in the samples observed in the present study were different from those of gas wells recorded by Dai et al. [26] in the Ordos Basin of 0.02–8.87% with an average of 1.86%. The maximum value and average CO2 concentrations in the Bohai Bay Basin far exceeded those in the Ordos Basin, with an average value of >10%. This observation could be attributed to the dissolution of CO2 in water being promoted by the high temperature of geothermal water, leaving behind only a small quantity of inorganic carbon.Moreover,3He/4He (R/Ra) showed that gas in the sample sites near the EB was of mantle origin. In contrast, the gas component in the sample sites near the Ordos Basin was of sedimentary origin (Figure 6). ### 5.4. Hydrogen and Oxygen Isotopes Stable isotopes of hydrogen and oxygen can be used to identify the geothermal water source, trace the circulation path, and analyze the geothermal reservoir environment [37, 92]. There are significant differences in the hydrogen and oxygen isotopes among geothermal water, groundwater, meteoric water, and mixed water. 
The analysis of hydrogen and oxygen isotope ratios of meteoric water samples at different latitudes globally has shown that they all follow a linear relationship called the global meteoric water line (GMWL): δD = 8δ18O + 10 [93].

The average values of δD and δ18O at the 46 sampling points in the study area were −78.46‰ and −10.27‰, respectively, and ranged from −77.26 to −69.27‰ and from −10.45 to −8.04‰, respectively. As shown in Figure 7, when plotting δD against δ18O, most of the hydrogen and oxygen isotopes were distributed near the LMWL, indicating a meteoric water source. The results indicated oxygen shifting, with a maximum shift of 1.39 (G21).

Figure 7 Hydrogen and oxygen stable isotopic composition of geothermal water in the Eastern Block (EB) and the Trans-North China Block (TNCB) of the North China Craton (NCC). GMWL stands for the global meteoric water line: δD = 8δ18O + 10 [93]; LMWL stands for the local meteoric water line, which is the meteoric water line of the monsoon region of Eastern China: δD = 7.46δ18O + 0.9 [106]. Numbers and names of sampling sites are listed in Table 1.

Rocks are rich in oxygen (>40%) and poor in hydrogen (less than 1%) [94]. Therefore, reactions between water and rocks can result in an oxygen shift in water, whereas δD remains largely stable. For example, the water-rock interaction between atmospheric precipitation and carbonate-bearing media can enhance the δ18O value of water [95]. A high temperature can promote interactions between water and rock, thereby intensifying the exchange of oxygen isotopes between geothermal water and oxygen-enriched surrounding rock. The δ18O and δD of Tianzhen 1 in the TNCB were −9.8‰ and −81.5‰, whereas those of Tianzhen 2 were −9.9‰ and −80.5‰, respectively. These results indicate that the exchange of oxygen isotopes may occur in the presence of oxygen-rich rocks in the Quaternary sand-gravel aquifer of the Yanggao-Tianzhen Basin [25]. The δ18O values span a wide range (−10.3 to −8.0‰), and it is noteworthy that the samples all plot to the right of the GMWL, with relatively uniform δD values. Considering an average isotopic gradient of precipitation for China, the H and O isotopes suggest that the differences in δ18O probably reflect recharge of the geothermal water from different areas, namely, the Taihangshan Range and the Yan Shan Range northwest of Tianjin [96]. Meanwhile, δ13CCO2 in the EB is of carbonate sedimentary origin (Figure 6), thereby facilitating oxygen isotope exchange in water-rock reactions. The corresponding δD and δ18O values of magmatic water are −20 ± 10‰ and 10 ± 2‰, respectively [97, 98]. In addition, the δD and δ18O of the EB exceeded those of the WB (Figure 7), possibly due to magmatism.

There were obvious spatial differences in δD between the EB and the TNCB. The average δD values of the EB and TNCB were −79.2‰ and −84.1‰, respectively, significantly lower than those in the area of destruction in the NCC. The δD value can be used to infer the groundwater recharge depth in the crust, as δD decreases with depth [99]. The results of the reservoir temperatures and δD indicated that the EB has deeper groundwater circulation. Moreover, the contribution of deep mantle heat flow in the eastern NCC exceeds that in the western NCC [14]. This result may be related to active underground magmatic activity in the eastern NCC, the thinning of the crust, and the higher intensity of seismic activity.
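A minimal sketch of how such an oxygen shift can be quantified is given below. The exact convention behind the reported maximum shift of 1.39 is not stated, so the sketch assumes the shift is the δ18O offset of a sample from the local meteoric water line (δD = 7.46δ18O + 0.9) at the sample's measured δD; the input values are purely illustrative.

```python
# Sketch of an oxygen-shift estimate relative to the local meteoric water line (LMWL),
# dD = 7.46 * d18O + 0.9 (monsoon region of eastern China, as quoted in the text).
# Assumed convention: the shift is the horizontal (d18O) offset from the LMWL at the
# sample's measured dD. Input values are illustrative, not taken from Table 1.

LMWL_SLOPE = 7.46
LMWL_INTERCEPT = 0.9


def oxygen_shift(d18o_sample: float, dd_sample: float) -> float:
    """Return the d18O offset (per mil) of a sample from the LMWL at its measured dD."""
    d18o_on_lmwl = (dd_sample - LMWL_INTERCEPT) / LMWL_SLOPE
    return d18o_sample - d18o_on_lmwl


print(round(oxygen_shift(-9.0, -72.0), 2))  # positive values plot to the right of the LMWL
```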
### 5.5. Reservoir Temperature in the NCC

Shallow geothermal data are of great significance for describing thermal states and revealing geodynamic processes of the continental lithosphere [100–102]. Since water-rock reactions are related to temperature, geochemical thermometers have been widely used to estimate the final temperature of the water-rock equilibrium in a reservoir [103]. The Na-K-Mg triangular diagram is usually used to determine the degree of water-rock reaction equilibrium [104]. It is widely used in the study of hydrothermal systems and can provide a basis for chemical geothermometers [105].

The results of the present study showed that the geothermal water in the Ordos Basin was largely local equilibrium water, whereas most of the geothermal water in the EB was immature (Figure 8). The present study used the cation temperature scale to calculate the reservoir temperature. The prerequisites for using the temperature scale are as follows: the water should be in equilibrium, its temperature should be greater than 25°C, and the calculated reservoir temperature should exceed the measured temperature of the hot spring. Although some spring water in the study area does not satisfy these criteria, there were differences in the calculated reservoir temperatures between the EB and the WB, which are of reference significance. The K-Mg thermometric scale [104] can indicate medium and low reservoir temperatures:

(5) t(°C) = 4410/(14.0 − log(K2/Mg)) − 273.15

Figure 8 Na-K-Mg triangular diagram of geothermal waters of the North China Craton (NCC) (base map according to [104]).

The results of the K-Mg thermometric scale showed that the geothermal reservoir temperature of the WB ranged from 25.3 to 116.4°C, with an average temperature of 38.4°C, whereas the geothermal reservoir temperature in the EB and TNCB ranged from 31.1 to 112.6°C, with an average of 80.7°C (Figure 2; Tables 1 and 2). The present study used Inverse Distance Weighting (IDW) to analyze the spatial distribution of reservoir temperature. The IDW interpolation method is widely used in digital elevation models (DEMs), meteorological and hydrological analysis, and other fields due to its simplicity and convenient calculation [107].

The temperatures of hot springs in the WB ranged from 8.8 to 28.5°C, with an average of 16.1°C, whereas those in the EB and TNCB ranged from 11.4 to 92.6°C, with an average of 50.2°C (Tables 1 and 2).

Such regional differences are not accidental. The Ordos Basin has a thermal state in which the crustal contribution to heat flow exceeds that of the mantle, with a mantle heat flow of 21.2–24.5 mW·m−2 [108]. The EB has a thermal state in which the mantle contribution exceeds that of the crust, with a mantle heat flow ranging between 30 and 140 mW·m−2 and an average of 61.9 ± 14.8 mW·m−2 (He et al., 2011). The heat flow in the WB is related to a thicker lithosphere composed of continental blocks. In addition, the Cenozoic tectonic-thermal activity in the WB is weaker than that in the EB [70]. This can be attributed to the subduction of the Pacific Plate beneath the EB area, resulting in a thinner lithosphere and higher volcanic activity.

The inconsistency between the measured temperatures and reservoir temperatures of the hot springs can be partially attributed to the difference in heat flow (Figure 9) between the EB and the WB. Meanwhile, geothermal water circulation in the eastern NCC occurs at a greater depth.
Crustal circulation at a greater depth is also a source of heat.

Figure 9 Contour map of heat flow in the North China Craton (NCC). Data from the China Heat Flow Database (CHFDB, chfdb.xyz).
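As a worked illustration of how the K-Mg scale in Equation (5) and the IDW interpolation behind Figure 2 can be applied, the sketch below computes a reservoir temperature from K and Mg concentrations and interpolates station values onto an arbitrary point; it assumes concentrations in mg/L and an inverse-distance power of 2, and all numbers are illustrative only.

```python
import math


def k_mg_temperature(k_mg_l: float, mg_mg_l: float) -> float:
    """Eq. (5): K-Mg geothermometer; K and Mg concentrations in mg/L, result in degC."""
    return 4410.0 / (14.0 - math.log10(k_mg_l ** 2 / mg_mg_l)) - 273.15


def idw(points, target, power: float = 2.0) -> float:
    """Inverse-distance-weighted estimate at `target` from (x, y, value) tuples."""
    numerator = denominator = 0.0
    for x, y, value in points:
        distance = math.hypot(x - target[0], y - target[1])
        if distance == 0.0:
            return value                      # target coincides with a sample point
        weight = 1.0 / distance ** power
        numerator += weight * value
        denominator += weight
    return numerator / denominator


# Illustrative values of the same order as those in Tables 1 and 2.
print(round(k_mg_temperature(6.8, 7.4), 1))    # reservoir temperature of a single sample, ~61 degC
stations = [(117.2, 38.9, 60.9), (117.6, 39.0, 67.6), (117.3, 39.3, 60.3)]
print(round(idw(stations, (117.4, 39.1)), 1))  # interpolated reservoir temperature at a grid node
```

Because Equation (5) increases monotonically with K2/Mg, the west-east contrast in the averages reported above (38.4°C versus 80.7°C) corresponds directly to the higher K2/Mg ratios of the EB and TNCB waters.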
No.Sampling siteLongitude (°E)Latitude (°N)N2 (%)O2 (%)Ar (%)CO2 (%)CH4 (%)4He (%)4He/20NeR/RaRc/RaHeM (%)δ13CPDB (‰)Tectonic unitG1Qicun112.6238.5496.440.601.430.030.031.47319.360.040.040.24-14.3TNCBG2Duncun112.7038.4995.720.851.380.13—1.92734.050.030.030.12-18.2TNCBG3Tangtou112.8338.5992.122.721.470.070.053.58971.090.030.030.12-17.5TNCBG4Yangou112.7938.9591.253.311.410.010.033.991647.600.070.070.62-18.6TNCBG5Hunyuan113.9439.4196.791.191.450.010.100.46188.650.620.627.51-17.5TNCBG6Shengshui114.0440.4489.906.651.301.84—0.31149.341.021.0212.53-15.0TNCBG7Tianzhen114.0440.4394.452.841.311.00—0.41219.280.990.9912.16-14.5TNCBG8Yangyuan114.5940.2196.101.311.250.980.020.34185.520.010.010-15.4TNCBG9Houhaoyao115.5440.3488.339.351.310.010.910.09202.850.960.9611.78-20.7TNCBG10Dongwaikou116.0840.96N.A.N.A.N.A.N.A.N.A.N.A.135.540.490.495.88-16.1TNCBG11Tangquan 1115.7440.90N.A.N.A.N.A.N.A.N.A.N.A.85.420.420.424.99-16.1TNCBG12Shengshiyuan115.9940.46N.A.N.A.N.A.N.A.N.A.N.A.63.152.422.4330.15-12.7TNCBG13Wuliying115.9340.48N.A.N.A.N.A.N.A.N.A.N.A.28.832.032.0425.31-14.2TNCBG14Jinyu115.9740.48N.A.N.A.N.A.N.A.N.A.N.A.96.072.522.5231.38-13.1TNCBG15Liyuantou117.1939.03N.A.N.A.N.A.N.A.N.A.N.A.155.300.270.273.12-19.8EBG16Xiawucun117.7839.27N.A.N.A.N.A.N.A.N.A.N.A.14.700.190.181.95-20.6EBG17Luqiancun117.7839.2996.651.901.300.140.02—3.110.160.080.78-20.7EBG18Zunhua117.7640.2198.520.071.280.050.010.0841.810.420.424.97-17.2EBG19Jidong118.3439.18N.A.N.A.N.A.N.A.N.A.N.A.8.120.340.323.73-18.0EBG20Liuzan118.6839.16N.A.N.A.N.A.N.A.N.A.N.A.211.950.350.354.13-17.4EBG21Changsheng118.1739.82N.A.N.A.N.A.N.A.N.A.N.A.538.730.700.708.52-17.2EBG22Baodi117.3439.5569.425.301.097.9116.13—44.500.490.495.85-9.3EBG23Lizigu117.3439.5274.5518.571.052.193.60—17.800.470.465.54-9.8EBG24Dijing117.3639.5473.8817.200.925.552.45—85.630.480.485.74-6.4EBG25Jingdong116.9439.9695.781.601.290.720.540.0782.530.160.161.72-14.4EBG26Huashuiwan116.5040.1897.190.131.211.740.03—20.650.540.536.44-11.20EB“—” stands for contributions below 0.1%. N.A.: not analyzed.δ13C is the value of CO2 in the analyzed sample.Figure 3 Conceptual model of He isotopes in the North China Craton (NCC). (a) Comparison between the thermal and seismic lithospheric bases for the NCC and the3He/4He (Ra) of the Eastern Block (EB) and the Trans-North China Block (TNCB) [70]. (b) The mantle convection regime for the destruction and modification of the lithosphere beneath the NCC, as modified after Zhu et al. [7]. Faults according to Chen et al. [71].The mantle source contribution of helium isotopes in gases of the hot springs ranged from 0 to 31.38%. The mantle source contributions of sampling locations G1, G2, G3, and G8 were below 0.5%. Sample location G8 was close to the Ordos Basin and represented a typical crustal source gas, with no mantle source contribution (R/Ra=0.01). The sampling points G1, G2, and G3 s were in the Qicun geothermal field in Shanxi Province (Figure 1(b)). The Qicun geothermal field is in the Zhoushan fault-fold zone of Hongtaoshan-I in the Shanxi block of the North China Plate and is categorized as part of the plate interior thermal system. The heat source is vertical heat transfer through the crust, and the circulation of meteoric water occurs through deep faults [62, 63]. The mantle source contributions of sampling locations G6, G7, and G9 exceeded 10%, while those of sampling locations G12, G13, and G14 were 30.15%, 25.31%, and 31.38%, respectively. 
G9 is in the Houhaoyao thermal field, characterized by developed fractures and rock rupture, thus providing access and space for the upwelling of mantle-derived material [64]. The sampling locations G12, G13, and G14 are in the Yanhuai Basin (Figure 1(b)). The regional geophysical data showed the presence of typical crustal faults, magmatic activities, complex structural patterns composed of shallow faults, and a mantle transition zone in the lower part [65]. The hot spring gases distributed in the central NCC indicated the presence of mantle sources in the TNCB, which may be related to crustal thinning in the region. A low-speed belt of 130 km was noted at the junction between the northern North China Basin and the Yanshan orogenic belt. This belt experienced multiple periods of tectonic activities, thereby providing a favorable channel for gas upwelling [66, 67]. The TNCB is a typical continent-to-continent collision belt that was formed by the collision between the EB and the WB at ~1.85 billion years ago. The rock unit in the region experienced intense multistage deformations accompanied by large-scale overthrusting and ductile shear [68]. Tectonic activity, crustal thinning, and deep cut active faults provide good conditions for mantle gas emission. The high 3He/4He ratios (0.10–2.52 Ra) are associated with the injection of a magmatic reservoir beneath the fault in the TNCB and EB, and subduction of the Pacific Plate also results in higher activity in the TNCB and EB [69].Typical crustal sources of helium isotopes in hot spring gases for sampling sites G1, G2, G3, G4, and G8 were distributed in the western EB, outside of the boundaries of the Ordos Basin. The remaining 21 hot spring sampling sites showed a mantle-derived gas supply, similar to the spatial scope of the destruction of the Pacific Plate subduction to the east of the NCC (Figure3(b)). This assertion was further verified by Dai et al. [26] who found that helium isotopes of gas in the Ordos Basin were characterized by low CO2 and low R/Ra values (<0.1) typical of the craton basin [72, 73].The spatial differentiation of gas isotope sources is not accidental. Many fault basins were formed in the east NCC during the Paleogene. This was accompanied by long basalt eruptions and active magmatic activities. These events facilitated the formation of oil in the east NCC [69]. Besides, a low-velocity zone was observed at a depth of 70 km in the east NCC [66] along with an obvious lithosphere-asthenosphere boundary (LAB) [74]. These observations confirm the presence of deep fluid activity in the eastern NCC. The EB and the TNCB contain greater quantities of mantle-derived gases and more direct channels. The thickness of the lithosphere of the western Ordos Basin (80–180 km) exceeds that of the EB and the TNCB, and an area of thinning is evident to the east of the basin [75], characterized by less magmatic activity and a stable structure [7, 9]. The eastern NCC shows a higher heat flow compared to the western part [76], which can be related to magma activity during the late Mesozoic and thinning of the continental lithosphere. ## 5.2. Gas Composition The present study analyzed 16 hot spring gas samples in the EB and TNCB (Table3). All samples were rich in N2 (69.42–98.52%). The content of N2 of the gas samples was characteristic of that of a medium-low temperature hydrothermal system, such as the peninsula craton in the heat field of central and western India, indicating low deep equilibrium temperatures [77]. 
The accumulation of N2 may be due to thermal decomposition of organic matter in sedimentary and metamorphic rocks [78].The N2/Ar ratios in air, air-saturated water (ASW), and groundwater were 84, 38, and 50, respectively [80–82]. The results of the present study showed a strong correlation between N2 and Ar (Figure 4). Plotting the results of gas sample analysis showed two separated distributions. Gas samples were distributed along the He-air trend line with N2/Ar ratios approaching that of air of 84, suggesting that N2 originated from the atmosphere and that aquifers were recharged with meteoric water containing dissolved air. Atmospheric precipitation is the main source of Ar in geothermal gas [32]. Therefore, both Ar and N2 in hot spring gas originated from the atmosphere.Figure 4 Relative N2-He-Ar abundances in free gas. The ASW represents air-saturated water [79]. Classification of subduction-derived gases after Fischer et al. [80]. Numbers and names of sampling sites are listed in Table 3.The N2/He ratios of 11 samples showed regional variation of sources. As shown in Figure 2, the sampling locations G1, G2, G3, and G4 showed crustal predominated gases, consistent with the results of the helium-neon isotope (Figure 5). The mantle source contribution of sampling locations G5, G6, G7, and G8 was significantly increased. These gas sampling points were in the central TNCB. Simultaneously, the sampling locations G9, G11, and G15 contained typical subduction zone gases. These sampling locations were in the EB which experienced the highest degree of destruction of the NCC. Arc-type gases are characterized by a high N2 content, N2/Arratios>200, and N2/Heratios>1,000. Mantle-derived gases are characterized by a low N2 content and N2/Heratios<200 [80]. Sample location G15 is in the Bohai Bay Basin in the Huanghua Depression and Cangxian uplift area. This area has conditions that facilitate mantle degassing. The mantle source contributions of the sampling points gradually increased from west to east (Figure 3(b)), and the gas characteristics of the subduction zone were observed in the EB (Figure 4).Figure 5 Plot of R/Ra-4He/20Ne in the Trans-North China Block (TNCB) and Eastern Block (EB). Endmembers in the plot are R/Raair=1, 4He/20Neair=0.254, R/Ramantle=8, 4He/20Nemantle=1000, R/Racrust=0.02, and 4He/20Necrust=1000 [91]. Numbers and names of sampling sites are listed in Table 3. ## 5.3. Carbon Isotopes of CO2 CO2 in hot spring gas is generated by the organic or inorganic process. The formation of CO2 through organic processes involves the decomposition of organic matter and bacterial activities. Formation of CO2 through inorganic processes involves magmatic activities in the mantle, thermal decomposition, and the dissolution of carbonate rocks [83]. The δ13C value is an effective criterion to determine the source of carbon dioxide and methane [84]. δ13C in CO2 for degassing of the upper mantle ranges from −8‰ to −4‰ [85, 86]. The values of δ13CCO2 with a magmatic origin range from −9.1‰ to 2‰ [87, 88]. δ13CCO2 in sedimentary basins is generally regulated by the generation of organic hydrocarbon by thermal decomposition and ranges from −15‰ to −25‰ [89]. δ13CCO2 of organic origin is generally lower than −10‰, while δ13CCO2 of inorganic origin typically ranges from −8‰ to 3‰ [90].Figure6 shows the mixing of different sources. The δ13CCO2 isotope showed a trend indicative of depositional genesis of carbonates (dashed box). 
Carbonate rocks account for 80% of the volume of sedimentary strata in the region and provide a material source for carbon [37]. Only the δ13CCO2 of sampling locations G22, G23, and G24 exceeded −10‰. These sampling locations were in the central North China Plain (EB). Meanwhile, the CO2 concentration of spring water was relatively low, ranging from 0.01 to 7.91%. The CO2 concentrations of only sampling locations G22, G23, and G24 exceeded 2% (Table 3). Most organic carbon was observed in the EB, and the δ13CCO2 value of sedimentary origin in the EB was similar to that in the Ordos Basin (Figure 6).Figure 6 Plot of3He/4He (R/Ra) vs. δ13CCO2. The endmember compositions for sedimentary organic carbon (S, δ13CCO2=−25‰–−19‰, 3He/4HeR/Ra=0.01; green arrow), mantle carbon (M, δ13CCO2=−6‰–−2‰, 3He/4HeR/Ra=8; orange arrow), and limestones (L, δ13CCO2=0‰, 3He/4HeR/Ra=0.01; blue arrow) [91]; numbers and names of sampling sites are listed in Table 3.The CO2 concentrations in the samples observed in the present study were different from those of gas wells recorded by Dai et al. [26] in the Ordos Basin of 0.02–8.87% with an average of 1.86%. The maximum value and average CO2 concentrations in the Bohai Bay Basin far exceeded those in the Ordos Basin, with an average value of >10%. This observation could be attributed to the dissolution of CO2 in water being promoted by the high temperature of geothermal water, leaving behind only a small quantity of inorganic carbon.Moreover,3He/4He (R/Ra) showed that gas in the sample sites near the EB was of mantle origin. In contrast, the gas component in the sample sites near the Ordos Basin was of sedimentary origin (Figure 6). ## 5.4. Hydrogen and Oxygen Isotopes Stable isotopes of hydrogen and oxygen can be used to identify the geothermal water source, trace the circulation path, and analyze the geothermal reservoir environment [37, 92]. There are significant differences in the hydrogen and oxygen isotopes among geothermal water, groundwater, meteoric water, and mixed water. The analysis of hydrogen and oxygen isotope ratios of meteoric water samples at different latitudes globally has shown that they all follow a linear relationship called the global meteoric water line (GMWL): δD=8δ18O+10 [93].The average values ofδD and δ18O at the 46 sampling points in the study area were −78.46‰ and −10.27‰, respectively, and ranged from −77.26 to −69.27‰ and from −10.45 to −8.04‰, respectively. As shown in Figure 7, when plotting δD against δ18O, most of the hydrogen and oxygen isotopes were distributed near the LMWL, indicating a meteoric water source. The results indicated oxygen shifting, with a maximum shift of 1.39 (G21).Figure 7 Hydrogen and oxygen stable isotopic composition of geothermal water in the Eastern Block (EB) and the Trans-North China Block (TNCB) of the North China Craton (NCC). GMWL stands for global meteoric water line:δD=8δ18O+10 [93]; LMWL stands for local meteoric water line, which is the meteorite water line of the monsoon region of Eastern China: δD=7.46δ18O+0.9 [106]. Numbers and names of sampling sites are listed in Table 1.Rocks are rich in oxygen (>40%) and poor in hydrogen (less than 1%) [94]. Therefore, the occurrence of reactions between water and rocks can result in an oxygen shift in water, whereas δD remains largely stable. For example, the water-rock interaction between atmospheric precipitation and media containing carbonate water can enhance the δ18O value of water [95]. 
A high temperature can promote interactions between water and rock, thereby intensifying the exchange of oxygen isotopes between geothermal water and oxygen-enriched surrounding rock. The δ18O and δD of Tianzhen 1 in the TNCB were −9.8‰ and −81.5‰, respectively, whereas those of Tianzhen 2 were −9.9‰ and −80.5‰. These results indicate that the exchange of oxygen isotopes may occur in the presence of oxygen-rich rocks in the Quaternary sand-gravel aquifer of the Yanggao-Tianzhen Basin [25]. The δ18O values span a wide range (−10.3 to −8.0‰), and it is noteworthy that the samples all plot to the right of the GMWL, with relatively uniform δD values. By considering an average isotopic gradient of precipitation for China, it can be concluded from the H and O isotopes that the difference in δ18O is probably due to the geothermal water being recharged from different areas, namely the Taihangshan Range and the Yanshan Range northwest of Tianjin [96]. Meanwhile, δ13CCO2 in the EB is of carbonate sedimentary origin (Figure 6), thereby facilitating oxygen isotope exchange in water-rock reactions. The corresponding δD and δ18O values of magmatic water are −20 ± 10‰ and 10 ± 2‰, respectively [97, 98]. In addition, the δD and δ18O of the EB exceeded those of the WB (Figure 7), possibly due to magmatism.

There were obvious spatial differences in δD between the EB and the TNCB. The average δD values of the EB and TNCB were −79.2‰ and −84.1‰, respectively; that is, δD in the TNCB was significantly lower than in the destroyed (eastern) part of the NCC. The δD value can be used to derive the groundwater circulation depth in the crust, since δD decreases with depth [99]. The results of reservoir temperature and δD indicated that the EB has deeper groundwater circulation. Moreover, the contribution of deep mantle heat flow in the eastern NCC exceeds that in the western NCC [14]. This result may be related to active underground magmatic activity in the eastern NCC, the thinning of the crust, and the higher intensity of seismic activity.

## 5.5. Reservoir Temperature in the NCC

Shallow geothermal data are of great significance for describing thermal states and revealing geodynamic processes of the continental lithosphere [100–102]. Since water-rock reactions are related to temperature, the geochemical thermometer has been widely used to estimate the final temperature of the water-rock balance in a reservoir [103]. The Na-K-Mg triangular diagram is usually used to determine the degree of water-rock reaction equilibrium [104]; it is widely applied to the study of hydrothermal systems and can provide a basis for chemical geothermometers [105].

The results of the present study showed that the geothermal water in the Ordos Basin was largely local equilibrium water, whereas most of the geothermal water in the EB was immature (Figure 8). The present study used the cation temperature scale to calculate the reservoir temperature. The prerequisites for using this temperature scale are as follows: the water should be in equilibrium, the temperature should be greater than 25°C, and the calculated reservoir temperature should exceed the measured temperature of the hot spring. Although some spring water in the study area does not satisfy these conditions, the differences in calculated reservoir temperature between the EB and the WB are still of reference significance.
The K-Mg thermometric scale [104] can indicate medium and low reservoir temperatures:

(5) $t\,(^{\circ}\mathrm{C}) = \dfrac{4410}{14.0 - \log\left(K^{2}/\mathrm{Mg}\right)} - 273.15$

Figure 8 Na-K-Mg triangular diagram of geothermal waters of the North China Craton (NCC) (base map according to [104]).

The results of the K-Mg thermometric scale showed that the geothermal reservoir temperature of the WB ranged from 25.3 to 116.4°C, with an average temperature of 38.4°C, whereas the geothermal reservoir temperature in the EB and TNCB ranged from 31.1 to 112.6°C, with an average of 80.7°C (Figure 2; Tables 1 and 2). The present study used Inverse Distance Weighting (IDW) to analyze the spatial distribution of reservoir temperature. The IDW interpolation method is widely used in digital elevation models (DEMs), meteorological and hydrological analysis, and other fields due to its simplicity and convenient calculation [107].

The temperatures of hot springs in the WB ranged from 8.8 to 28.5°C, with an average of 16.1°C, whereas those in the EB and TNCB ranged from 11.4 to 92.6°C, with an average of 50.2°C (Tables 1 and 2). Such regional differences are not accidental. The Ordos Basin has a thermal state in which the heat contribution of the crust exceeds that of the mantle, with a mantle heat flow of 21.2–24.5 mW·m−2 [108]. The EB has a thermal state in which the heat contribution of the mantle exceeds that of the crust, with a mantle heat flow ranging between 30 and 140 mW·m−2 and an average of 61.9 ± 14.8 mW·m−2 (He et al., 2011). The heat flow in the WB is related to a thicker lithosphere composed of continental blocks. In addition, the Cenozoic tectonic-thermal activity in the WB is weaker than that in the EB [70]. This can be attributed to the subduction of the Pacific Plate beneath the EB area, resulting in a thinner lithosphere and higher volcanic activity.

The inconsistency between measured temperatures and reservoir temperatures of hot springs can be partially attributed to the difference in heat flow (Figure 9) between the EB and the WB. Meanwhile, geothermal water circulation in the eastern NCC occurs at a greater depth. Crustal circulation at a greater depth is also a source of heat.

Figure 9 Contour map of heat flow in the North China Craton (NCC). Data from the China Heat Flow Database (CHFDB, chfdb.xyz).
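As a minimal illustration of the K-Mg geothermometer in equation (5), the sketch below computes a reservoir temperature from potassium and magnesium concentrations (assumed to be in mg/kg, as is conventional for this geothermometer); the sample values are hypothetical and are not measurements from this study:

```python
import math

def k_mg_temperature(k_mg_per_kg: float, mg_mg_per_kg: float) -> float:
    """Equation (5): t (degC) = 4410 / (14.0 - log10(K^2 / Mg)) - 273.15.

    K and Mg are concentrations (assumed to be in mg/kg); the return value is in degC.
    """
    ratio = (k_mg_per_kg ** 2) / mg_mg_per_kg
    return 4410.0 / (14.0 - math.log10(ratio)) - 273.15

# Hypothetical spring-water composition, not data from this study.
print(round(k_mg_temperature(k_mg_per_kg=15.0, mg_mg_per_kg=5.0), 1))  # ~84.0 degC
```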
## 6. Conclusions

The TNCB and EB of the NCC provide paths for the emission of gas due to the presence of active faults. Continuous upwelling of mantle-derived gas occurs in the asthenosphere beneath the crust. The present study conducted a chemical analysis of the hot springs and gas in the craton basin as well as an isotope analysis in the EB and the TNCB, whereas a chemical analysis of hot springs was conducted in the WB. From the results of the present study, the following conclusions could be made:

(1) The compositions of helium-neon isotopes showed that mantle sources contributed to isotope compositions to different degrees in both the Bohai Bay Basin (EB) and the basin-ridge tectonic area (TNCB). There was a small mantle contribution of helium in Xinzhou near the Taihang Mountain. Moreover, the R/Ra of helium in most natural gas components of the Ordos Basin indicated crustal sources.

(2) Abundant N2 was noted in all hot springs in the eastern region, and the contribution of N2 from mantle sources increased from west to east. Typical subduction zone gases were noted in the eastern region, which remained exposed in the Huailai Basin. Ar and N2 in the study area may be of the same origin. The results of δ13CCO2 showed that CO2 in the gas in the EB is of organic origin, whereas that in the high-temperature geothermal area in some carbonate sedimentary layers was of inorganic origin.

(3) The results of hydrogen and oxygen isotope analysis showed that the geothermal water in the east of the craton was meteoric water. δD increased gradually from west to east, whereas the depth of groundwater circulation increased. The temperatures of the thermal reservoir and water calculated by the ion temperature scale of the hot spring showed that the temperature of the EB far exceeded that in the WB.

Hydrochemistry, gas composition, and isotope analysis of hot springs in the NCC area can provide favorable evidence for the development and utilization of geothermal fields. The results of the present study showed obvious spatial differences in these attributes, which could be related to tectonic conditions, magmatic activities, and active faults. Mantle-derived helium was found in the hot spring gas of the TNCB, consistent with the extent of Pacific Plate subduction.

---
*Source: 1009766-2021-10-25.xml*
2021
# The Construction of Accurate Recommendation Model of Learning Resources of Knowledge Graph under Deep Learning

**Authors:** Xia Yang; Leping Tan
**Journal:** Scientific Programming (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1010122

---

## Abstract

With the rapid development of science and technology and the continuous progress of teaching, education is now flooded with rich learning resources. Massive learning resources provide learners with a good learning foundation. At the same time, it becomes more and more difficult for learners to quickly and precisely obtain the learning resources they want from so many options. Therefore, it is very important to accurately and quickly recommend learning resources to learners. During the last two decades, a large number of different types of recommendation systems have been adopted to present users with content of their choice, such as video, product, and educational content recommendation systems. The knowledge graph has been fully applied in this process. The application of deep learning in recommendation systems has further enhanced their performance. This article proposes an accurate learning resource recommendation model based on the knowledge graph under deep learning. We build a recommendation system based on deep learning that comprises a learner knowledge representation (KR) model and a learning resource KR model. Information such as the learner’s basic information and learning resource information is used by the recommendation engine to calculate the target learner’s scores based on the learner KR and the learning resource KR and to generate a recommendation list for the target learner. We use mean absolute error (MAE) as the evaluation indicator. The experimental results show that the proposed recommendation system achieves better results compared to traditional systems.

---

## Body

## 1. Introduction

In recent years, the rapid development of artificial intelligence technologies such as deep learning, knowledge graphs, and artificial neural networks has driven society from “Internet +” into a new era of “artificial intelligence +”. The “Education Information 2.0 Action Plan” emphasizes establishing and improving the sustainable development mechanism of educational information and building a networked, digital, intelligent, personalized, and lifelong education system [1]. The key to building a new education system lies in the development of personalized learning, and the realization of personalized learning is inseparable from the strong support of the adaptive learning system. The core components of the adaptive learning system include five parts: domain model, user model, adaptive model, adaptive engine, and presentation model [2]. The domain model is an important foundation and core element of building an adaptive learning system, and building a domain model with clear semantics, complete structure, and good scalability is an important challenge faced by adaptive learning systems. The artificial intelligence technology represented by the knowledge graph provides a technical guarantee for the construction of the educational domain model.
The “New Generation Artificial Intelligence [3] Development Plan” specifically pointed out that “focus on breakthroughs in core technologies of knowledge processing, in-depth search and visual interaction form a multi-source, multi-disciplinary and multi-data type cross-media knowledge map covering billions of entities.” [4]. Therefore, the use of knowledge construction, knowledge mining, knowledge reasoning, and other technologies to realize the knowledge extraction, expression, fusion, reasoning, and utilization of the education domain model is an important topic in the current education research. The recommendation system is excellent in solving the “information overload problem.” Traditional recommendation technologies, including collaborative filtering and content-based recommendations, have been applied in many fields, such as product recommendation, news recommendations, Amazon book recommendations, Netflix movie recommendations, and course recommendations [5, 6]. In the e-learning environment, learners have different attributes, such as learning motivation, learning level, learning style, and preference, and these learner characteristics will affect the learner’s learning. The general recommendation technology limits the performance of the online learning recommendation system due to cold start and sparsity issues [7–9]; that is, when there are new learning resources that have not been evaluated or new users who have not commented on any items, it will lead to the inability to make accurate recommendations. The data are huge, and the acquisition of these data will become very sparse [10]. Therefore, improving the accuracy of personalized learning resource recommendation and improving performance are currently an urgent problem to be solved. To solve the above problems, this paper mainly studies the recommendation algorithm based on knowledge representation and learning resources. To improve the customization and accuracy of recommendation, knowledge representation is used to integrate learners’ learning level, learning style, and learning preferences into the recommendation process, and the semantic relationship of knowledge is used to alleviate the cold start problem; the problem of sparsity is solved through collaborative filtering algorithm.The domain model construction refers to the process of using modeling technology to label and serialize domain knowledge. The education domain model construction refers to the process of using knowledge extraction, knowledge fusion, and other technologies to establish connections between subject knowledge and knowledge. The purpose is to serialize knowledge to better promote teaching and learning. The commonly used methods for model construction in the education domain are concept map, knowledge map, and knowledge graph. These three methods can perform knowledge representation and schemata, but these concepts are easy to confuse. The concept map was first proposed by Professor Novak to organize and characterize knowledge. It includes two parts, nodes and connections. Nodes are used to represent concepts, and connections are used to represent the relationship between concepts [6].The concept map includes four elements: proposition, concept, cross-connection, and hierarchical structure. The concept map can not only describe knowledge but also evaluate knowledge. 
Trowbridge used concept maps to evaluate university courses and analyzed the differences in students’ understanding of the same lesson; Ruiz-Primo used probability maps as an evaluation tool and found that students’ concept maps could be interpreted as representations of their understanding. The concept of the knowledge map was first proposed by Dansereau and Holley. The knowledge map can organize scattered knowledge into serialized knowledge and promote the knowledge construction of learners. The knowledge map represents the path of knowledge acquisition and the relationships of knowledge. It can not only characterize the knowledge system structure but also help learners accurately locate the required knowledge. The knowledge map is mainly used in the systematic construction of subject knowledge structure in the field of education. Kim believes that knowledge maps can establish knowledge relationships, help structure knowledge, and facilitate knowledge understanding; knowledge maps can help teachers reshape learning content and learning resources and effectively reflect the subject system, learning goals, and learning. At present, there are intelligent search engines and large-scale knowledge bases, such as the well-known Metaweb knowledge base, the Wikidata knowledge base of the Wikimedia Foundation, and the YAGO comprehensive knowledge base developed by the Max Planck Institute in Germany. In the field of education, Knewton in the United States used knowledge graphs to build an interdisciplinary knowledge system that includes concepts and their prerequisite relationships. Tsinghua University and Microsoft Research Institute developed the Open Academic Graph and the “Wisdom Learning Companion.”

The main contributions of this research study include the following. We build a recommendation system based on deep learning that comprises four parts: the learner knowledge representation (KR) model, the learning resource KR model, the CF recommendation engine, and the learning resource recommendation. These models include information such as the learner’s basic information, the learning resource information, the data preprocessing modules, and the recommendation engine, which calculates the target learner’s score based on the learner KR and the learning resource KR and generates a recommendation list for the target learner. We use mean absolute error (MAE) as the evaluation indicator to test the performance of our system and achieve a significantly lower MAE value as compared to the traditional CF algorithms.

The rest of the study is organized as follows. Section 2 presents some of the related work by other researchers, Section 3 discusses the methodology of our approach, Section 4 discusses the experiments and their results, and Section 5 is the conclusion of our work.

## 2. Related Work

Liu Hongjing and others believe that the knowledge map can reflect the knowledge structure of a certain subject and promote learners to generate relevance and networked learning thinking. The knowledge graph was originally based on the measurement of scientific knowledge, showing the structural relationship of knowledge through graphical representation, which belongs to the research category of scientometrics. The knowledge graph is a structured knowledge semantic network. The nodes in the graph represent entities, and the edges in the graph represent the semantic relationships between nodes.
Leo believes that the subject knowledge map, as a semantic network that establishes a connection between knowledge points and knowledge points, and knowledge points and teaching resources, can play an important role in the semantic association of learning materials, the construction of learner models, and the personalized recommendation of learning resources; Li Zhen and others believe that the educational knowledge graph is a kind of knowledge element as a node, according to its multidimensional semantic relations to be associated, at the knowledge level and the cognitive level to represent the subject domain knowledge and the cognitive state of learners, available knowledge organization and cognitive representation tools for knowledge navigation, cognitive diagnosis, resource aggregation, and route recommendation; and Yu Shengquan and others believe that the subject knowledge graph is based on the logical relationship of knowledge formed by the semantic relationship of knowledge, on this basis, superimposes teaching goals, teaching problems, and cognitive status, and then generates a cognitive map. In the general field, knowledge graphs are mainly used in large-scale knowledge bases [11, 12]. Zhou et al. [13] incorporated word-oriented and entity-oriented knowledge graphs to enhance data representation in conversational recommendation systems and tackle the issue of insufficient contextual information and the issue of the semantic gap between natural language expression and user preference. Shi et al. [14] proposed a knowledge graph-based learning path recommendation model to generate diverse learning paths and satisfy different learning needs. They build a multidimensional knowledge graph framework to store learning objects, proposed a number of semantic relationships between the learning objects, and built the learning path recommendation model. Zhou et al. [15] studied the various challenges the interactive recommender systems are facing, such as the large-scale data set requirement for the training of the model for effective recommendation. They suggested the use of knowledge graph for dealing with these issues and to use the prior knowledge learned from the knowledge graph to guide the candidate selection and propagate the users’ preferences over the knowledge graph.Based on the above analysis, this research conducted a comparative analysis of the three similar concepts of concept map, knowledge map, and knowledge graph from the five dimensions of conceptual connotation, component elements, knowledge scope, knowledge relationship, and application field and systematically sorted out the similarities of the three. The difference and the comparative analysis are shown in Table1. To sum up, compared with concept maps and knowledge maps, knowledge graphs can express a wider range of knowledge content and semantic relations, and the construction is more automated. However, through literature analysis, it is found that the current knowledge graph still has the following problems in terms of knowledge content representation, learner ability description, and construction methods.Table 1 Comparison of concept maps, knowledge maps, and knowledge graphs. 
| Dimension | Concept map | Knowledge map | Knowledge graph |
| --- | --- | --- | --- |
| Conceptual connotation | A tool for organizing and characterizing knowledge | A structured knowledge network | A semantic network that reveals the relationships between entities |
| Constituent elements | Nodes and connections | Nodes, the relationships between nodes, and visual representation | Entities, attributes, semantic relationships |
| Range of knowledge | Simple concepts | A clearly defined learning topic | A wide range of domain knowledge |
| Knowledge relations | Relatively simple, often only whole-part relationships | Relatively rich associations, including parent-child, predecessor, successor, inclusion, and other relationships | Semantic relationships, largely obtained through knowledge inference and mining |
| Application field | Knowledge representation, knowledge organization, knowledge evaluation | Knowledge organization, knowledge association, knowledge navigation, knowledge search | Intelligent search, in-depth question and answer, social networks, knowledge base construction, knowledge reasoning |

(1) In terms of knowledge content representation, the existing knowledge graph still focuses on the description of basic knowledge points. The knowledge content is scattered, and there is a lack of knowledge units in which subject keywords and other related words are combined according to the semantic relevance of the subject knowledge cluster; (2) in terms of learner ability description, most of the existing knowledge graphs describe the content of knowledge points, and there is a lack of further research on the relationships between knowledge points and the characterization of learners’ abilities; and (3) in terms of construction methods, most of the existing knowledge graphs are still constructed manually by domain experts and lack the help of machine learning and the automatic construction offered by artificial intelligence technologies such as deep learning and natural language processing.

## 3. Method

### 3.1. Accurate Recommendation Algorithm Based on Deep Learning

The attribute characteristics and learning resource characteristics of learners in the learning process are used as the basis for designing the learner KR and learning resource KR models. On the basis of emphasizing the learner’s independent learning, based on CF and KR, we build an accurate recommendation model of learning resources as shown in Figure 1. The model mainly contains 4 parts: the learner KR model, the learning resource KR model, the CF recommendation engine, and personalized learning resource recommendation. This article will explain in detail how the recommendation model works. (1) Learner KR model: this model mainly includes the learner’s basic information, preference information, and learner attribute characteristic information. The learner’s basic information can be obtained explicitly and implicitly; the learner KR model stores learner attribute characteristics, including learning level, learning motivation, learning style, and other information.
The learner KR model performs a personalized analysis of the learner’s personal data according to the learner’s preferences and attribute characteristics, while the CF recommendation engine uses the learner and the learning resource KR information to make score predictions for learning. (2) Learning object model: this part stores the information of learning resources, including text, image, animation, audio or video, and other formats. (3) Data preprocessing: in the data preprocessing component, the learner and learning resource data are prepared and preprocessed into the correct format that the recommendation engine can recognize. (4) Recommendation engine: once the data preprocessing is successful, the recommendation engine calculates the target learner’s scores based on the learner and learning resource KR, the similarity, and the score prediction for the target learner. Finally, the recommendation engine generates a personalized recommendation list for the target learner.

Figure 1 Accurate recommendation model of learning resources.

#### 3.1.1. Learners and Learning Resources

The KR model is the organization method of subject knowledge and teaching laws. Its essence is the symbolic form of knowledge, mainly for the convenience of computers to store and process knowledge. At present, common KR technologies include predicate logic representation, Web representation, production rules, semantic networks, and frame notation.

#### 3.1.2. Learner KR Model

Since personalized learning resource recommendation needs to understand learner information and provide learning resources according to learners’ learning goals, learning needs, teaching content, learning problems and environment, and preferences, it is necessary to establish a learner model. In the field of education, knowledge of learning resources, and concept retrieval, the KR-based learner model can feed back to learners a series of concepts that are closely related to their retrieval based on the connections between concepts, which helps learners discover new learning goals and areas of interest and supports personalized recommendation. The formal definition of the learner model UR is as follows: UR = (Ub, Uo, Un, Uh, Up, Uv), where the user’s personal information is Ub = (ID, Name, Age, Sex, Edu, Tel), including basic personal information such as user name, name, age, education background, gender, and phone number. The learner preference ontology is UO = (C, RC, RN, FC, A, I), where C represents the concepts of learner preference in the ontology; RC describes the classification relationships between concepts in the learner’s preference ontology; RN describes the other relationships in the learner’s preference ontology; FC represents the attributes and functions; and A means axioms. Un is learning style information; learning style is divided into 4 dimensions, active/contemplative, perceptual/intuitive, visual/verbal, and comprehensive/sequential; that is, the learner’s learning style is composed of combinations of these 4 dimensions. The different learning styles in this article are expressed as follows: Un = (active/contemplative, perceptual/intuitive, visual/verbal, comprehensive/sequential) = {1, 2, 3, 4}.
Uh is the learning level information; to obtain the learner’s learning level, a random set of 10 questions is used as a learning level test, and the test result is used to assign the learner’s level, where primary = 0–3, intermediate = 4–6, and advanced = 7–10; for different learning levels, Uh is represented as Uh = (primary, intermediate, advanced) = {1, 2, 3}. Up is user preference information, which describes the user’s preferences for interface, font, language, resource content type, resource media type, etc. Uv is access log information, which mainly records the time the learner logs into the learning system, the learning resources accessed, the time access to a learning resource starts, the time the access is completed, and the time the learner exits the system, which can help determine in which time period the learner’s learning efficiency is high. The learner KR model is shown in Figure 2. The construction of the learner KR model needs to include the learner’s personal data information, learning level, learning style information, and access log information, and the learner’s data are obtained through two methods, explicit and implicit. Moreover, learner profile information such as learner preferences, personal basic information (name, gender, age, etc.), learning style, learning level, and other attributes is stored in the learner KR model. Once the learner’s data are obtained, the constructed learner model is automatically updated to form personalized learner knowledge based on the learner’s preferences, learning style, and learning level. The CF recommendation engine makes score predictions based on the learner’s KR information and the learning resource KR information, and the learning objects with high prediction scores are recommended to the target learner, producing personalized recommendations. When a new learner enters the model, it will perform a semantic search based on the learner’s registration information, analyze the learner’s attribute characteristics, and match the learner to a learner KR model to alleviate the cold start problem.

Figure 2 KR model of learner.
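To make the formal definition above concrete, the sketch below represents the learner profile UR as plain Python data structures; the field names and sample values are illustrative assumptions rather than the paper’s implementation, and the preference ontology Uo is omitted for brevity:

```python
from dataclasses import dataclass, field
from typing import List, Dict

@dataclass
class LearnerKR:
    """Illustrative learner knowledge representation UR = (Ub, Un, Uh, Up, Uv)."""
    # Ub: basic information (ID, name, age, sex, education, phone)
    basic: Dict[str, str]
    # Un: learning style, one code per dimension
    # (active/contemplative, perceptual/intuitive, visual/verbal, comprehensive/sequential)
    learning_style: Dict[str, int]
    # Uh: learning level assigned from a 10-question test
    # (0-3 -> primary = 1, 4-6 -> intermediate = 2, 7-10 -> advanced = 3)
    learning_level: int
    # Up: preferences for interface, language, resource media type, etc.
    preferences: Dict[str, str] = field(default_factory=dict)
    # Uv: access log entries (resource id, start time, end time)
    access_log: List[Dict[str, str]] = field(default_factory=list)

def level_from_test(score: int) -> int:
    """Map a 10-question test score to Uh = {1, 2, 3}."""
    if score <= 3:
        return 1   # primary
    if score <= 6:
        return 2   # intermediate
    return 3       # advanced

# Hypothetical learner built from explicit registration data and a test score of 5.
learner = LearnerKR(
    basic={"ID": "u001", "Name": "Li", "Edu": "undergraduate"},
    learning_style={"active_contemplative": 1, "perceptual_intuitive": 2,
                    "visual_verbal": 3, "comprehensive_sequential": 4},
    learning_level=level_from_test(5),
)
```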
#### 3.1.3. Learning Resource KR Model

In the online education environment, the structure of learning resources is complex and diverse. Its manifestations include courseware, cases, literature, resource catalog indexes, online courses, test questions, test papers, homework, text, animation, forum Q&A, lesson plans, and media. To better build learning resources, promote data sharing between various types of learning resources at all levels, and improve the efficiency and accuracy of learning resource retrieval, and according to the characteristics of learning resource objects and the disorder of resource storage, the learning resources are divided into two categories, text materials and media materials. The text materials include courseware, cases, test papers, and textbook materials, while the media materials include graphics/image materials, animation materials, audio materials, and video materials. To be able to structure and describe learning resources, this article uses the metadata-based KR ontology construction method (metadata-based knowledge representation ontology building, referred to as the MOB method) to define domain knowledge. This method is normative and has features such as rationality and scalability.

Firstly, the construction of the learning resource KR model can realize the management and retrieval of learning resources based on knowledge points. When constructing the learning resource description ontology, the ontology of computing subject knowledge points is used for representation, and the knowledge point ontology is mapped through the associated knowledge point attributes. Secondly, a list of terms is extracted from metadata: media materials, video/audio, text, courseware, online courses, titles, resource locations, names, contact information, and association relationships; finally, the category levels are defined and established. Each class is defined according to the level of the classified resource. The learning object is the parent class (learning object), which includes subclasses such as courseware, text, and media data. The media data are text, video/audio, and graphics/pictures. Others are subcategories; the establishment of hierarchical relationships between categories mainly uses “part-of,” “kind-of,” “instance-of,” and “attribute-of.” The hierarchical structure diagram of the classes in the learning resource KR model is shown in Figure 3.

Figure 3 Model of learning resources.

Constructing the learning resource KR model uses the ontology description language OWL and related tools to express the relationships between concepts and entities among learners, between learners and learning resources, and between knowledge points in learning resources. All the learning resources of CF are classified according to the attribute characteristics of each knowledge point, and examples and quizzes are carried out in the learning practice classroom. These quizzes and examples are closely related to the learning goals. The CF recommendation engine uses the semantic relationship between the learner and the learning resource to perform score prediction and to compute the similarity of the target learner. Then, based on the data set provided by the online learning platform, the learner and learning resource KR models are constructed, and they are preprocessed together with the Web log data into CF82. The data structure course is taken as an example to apply the model, mainly from the following aspects: (1) the key knowledge points of the data structure course; (2) the content related to the knowledge points, such as learning objectives, learning focus, and difficulty; (3) the relationships between the knowledge points before and after; the learning of the course is often contextual, and before learning a certain knowledge point, you must first learn another knowledge point; (4) examples that explain the concepts, using worked examples or pre-class lead-ins to help understand the concepts. To be able to obtain the completeness of the resource more accurately, the class hierarchy is established using constraints and some special relationships between the classes; standardized extraction yields the hierarchical relationships between classes. Part of the class hierarchical structure constructed by taking the data structure course as an example is shown in Figure 4.

Figure 4 Hierarchy structure diagram of partial classes in “data structure.”
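As a rough sketch of the class hierarchy described above (and summarized in Figures 3 and 4), the snippet below encodes a few learning resource classes and their relations as plain Python tuples; the class names and the "part-of" example are illustrative and only partially reflect the figures:

```python
# Minimal illustration of the learning resource class hierarchy: each entry maps a
# class to its parent via a named relation ("kind-of", "part-of", "instance-of").
hierarchy = [
    ("Courseware",       "kind-of", "LearningObject"),
    ("Text",             "kind-of", "LearningObject"),
    ("MediaData",        "kind-of", "LearningObject"),
    ("Video/Audio",      "kind-of", "MediaData"),
    ("Graphics/Picture", "kind-of", "MediaData"),
    ("Chapter",          "part-of", "Courseware"),   # illustrative relation
]

def children_of(parent: str, relation: str = "kind-of"):
    """Return the direct subclasses of a class under a given relation."""
    return [child for child, rel, par in hierarchy if par == parent and rel == relation]

print(children_of("LearningObject"))  # ['Courseware', 'Text', 'MediaData']
print(children_of("MediaData"))       # ['Video/Audio', 'Graphics/Picture']
```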
## 4. Experiment and Result Analysis

The recommendation list of the CF recommendation system is generated in the following way. First, we find several nearest neighbors of the target learner, then extract the score data of the learning resources of these nearest neighbors, and finally predict the target learner’s scores of the learning resources based on these score data and generate a recommendation list. The data set is extremely important for the algorithm. The data set is defined as data = (U, I, R), where U = {u1, u2, u3, …, um} is the basic learner set, |U| = m; I = {i1, i2, i3, …, in} is the collection of learning resources, |I| = n; and the m × n-order matrix R is the learner’s scoring matrix for each learning resource, where the element rij indicates the score of the ith user in U on the jth learning resource in I. The key to CF recommendation is to accurately locate the nearest neighbors of the target learner, and the basis for determining the nearest neighbors is to calculate the similarity between learners. The following are the three commonly used methods for calculating this similarity, given in equations (1)–(3).

### 4.1. Pearson’s Correlation Similarity
The difference between the Pearson correlation similarity and the modified cosine similarity is that the Pearson denominator is computed over the items rated in common by the two users, as shown below:

(1) $S_{u,v}=\dfrac{\sum_{\alpha\in P_{uv}}\left(R_{u,\alpha}-\bar{R}_{u}\right)\left(R_{v,\alpha}-\bar{R}_{v}\right)}{\sqrt{\sum_{\alpha\in P_{uv}}\left(R_{u,\alpha}-\bar{R}_{u}\right)^{2}}\sqrt{\sum_{\alpha\in P_{uv}}\left(R_{v,\alpha}-\bar{R}_{v}\right)^{2}}}$

(2) $S_{u,v}=\dfrac{\mathbf{u}\cdot\mathbf{v}}{\lVert\mathbf{u}\rVert\,\lVert\mathbf{v}\rVert}$

The closer the value of $S_{u,v}$ is to 1, the higher the similarity between user u and user v.

### 4.2. Adjusted Cosine

The traditional cosine similarity does not consider the user’s rating preference; that is, some users tend to give high ratings and some tend to give low ratings. For example, suppose the physics and mathematics scores given by two students are 5 and 4, and 3 and 2, respectively. The traditional cosine similarity calculation may find that the similarity between the two users is low, but in fact the preferences of the two students are the same: compared with mathematics, both students prefer physics. The following equation is the modified cosine similarity of two users u and v:

(3) $S_{u,v}=\dfrac{\sum_{\alpha\in P_{uv}}\left(R_{u,\alpha}-\bar{R}_{u}\right)\left(R_{v,\alpha}-\bar{R}_{v}\right)}{\sqrt{\sum_{\alpha\in P_{u}}\left(R_{u,\alpha}-\bar{R}_{u}\right)^{2}}\sqrt{\sum_{\alpha\in P_{v}}\left(R_{v,\alpha}-\bar{R}_{v}\right)^{2}}}$

Among them, $P_{uv}$ represents the set of resource items scored in common, $P_u$ and $P_v$ represent the resource items scored by users u and v, respectively, $R_{u,\alpha}$ and $R_{v,\alpha}$ represent the scores of resource item α, and $\bar{R}_u$ and $\bar{R}_v$ represent the users’ average scores. This article also needs to consider the ontology domain knowledge and the users’ evaluations of learning resources; therefore, the modified cosine similarity is used in equation (4). The KR similarity $S_{i,j}$ of learning resource objects i and j is calculated as follows:

(4) $S_{i,j}=\dfrac{\sum_{l}\left(R_{l,i}-\bar{R}_{l}\right)\left(R_{l,j}-\bar{R}_{l}\right)}{\sqrt{\sum_{l}\left(R_{l,i}-\bar{R}_{l}\right)^{2}}\sqrt{\sum_{l}\left(R_{l,j}-\bar{R}_{l}\right)^{2}}}$

where $R_{l,i}$ refers to the score of learner l on learning resource object i, $R_{l,j}$ refers to the score of learner l on learning resource object j, and $\bar{R}_l$ represents the average of all the scores provided by learner l. Finally, the target learner’s prediction score for a learning resource object is calculated. The N most similar learning resource objects are obtained from equation (4). The calculation of the prediction score is shown in the following equation:

(5) $P_{l,i}=\dfrac{\sum_{t\in N}S_{i,t}\times R_{l,t}}{\sum_{t\in N}S_{i,t}}$

Among them, N is the collection of learning resource objects most similar to learning resource object i, and $R_{l,t}$ is the score of learning resource object t given by learner l.

The recommendation algorithm uses the CF recommendation engine to generate a top N recommendation list of learning resource objects based on the target learner’s predicted scores of learning resources, the learner’s knowledge representation, and the learning resource knowledge representation model. In this study, L represents the set of all learners, that is, L = {l1, l2, l3, …, lm}; I represents all possible recommended learning resources, that is, I = {i1, i2, i3, …, in}; k represents the learner and learning resource domain knowledge, that is, k = {k1, k2, k3, …, kq}; and R represents the learner’s scoring of learning resources, with the scoring range defined as R = {1, 2, 3, 4, 5}.
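As a minimal sketch of equations (4) and (5), the snippet below computes the adjusted-cosine similarity between two learning resource objects and a prediction score from the most similar rated objects; the rating data, function names, and the restriction to positively correlated neighbors are illustrative assumptions, not details from the paper’s platform:

```python
import math
from typing import Dict

# ratings[learner][resource] = score in {1, ..., 5}; a small hypothetical example.
ratings: Dict[str, Dict[str, float]] = {
    "l1": {"i1": 5, "i2": 4, "i3": 2},
    "l2": {"i1": 4, "i2": 5, "i3": 1},
    "l3": {"i1": 2, "i3": 5},
}

def item_similarity(i: str, j: str) -> float:
    """Equation (4): adjusted-cosine similarity S(i, j) over learners who rated both items."""
    common = [l for l, r in ratings.items() if i in r and j in r]
    num = den_i = den_j = 0.0
    for l in common:
        mean_l = sum(ratings[l].values()) / len(ratings[l])
        di, dj = ratings[l][i] - mean_l, ratings[l][j] - mean_l
        num += di * dj
        den_i += di ** 2
        den_j += dj ** 2
    if den_i == 0.0 or den_j == 0.0:
        return 0.0
    return num / (math.sqrt(den_i) * math.sqrt(den_j))

def predict_score(l: str, i: str, n: int = 2) -> float:
    """Equation (5): predicted score of learner l for item i from its n most similar rated items."""
    sims = [(item_similarity(i, t), t) for t in ratings[l] if t != i]
    # Keep only positively correlated neighbors so the weighted average stays meaningful
    # (an illustrative choice; equation (5) itself does not impose this).
    neighbors = sorted((s, t) for s, t in sims if s > 0)[-n:]
    num = sum(s * ratings[l][t] for s, t in neighbors)
    den = sum(s for s, _ in neighbors)
    return num / den if den else 0.0

print(round(predict_score("l3", "i2"), 2))  # predicted score of learner l3 for resource i2
```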
This article applies the model to the self-developed “Mobile Autonomous School” online learning platform. To meet the individual learning needs of learners, starting from the two aspects of learners and learning resources, the learner KR model and the learning resource KR model are proposed. The model analyzes the learning behavior of students on the platform and comprehensively builds a personalized resource recommendation model based on the learner’s learning level, learning style, and learning preferences. The model takes the learners’ characteristics and preferences in actual learning applications into account in order to introduce the most suitable learning resources. It integrates the basic information of learners, learners’ attributes, learners’ hobbies, and user ratings to make the recommendation of learning resources more systematic and comprehensive. Considering learning-style information enables learners to obtain recommended resources suited to their learning situation at the time; considering the learner’s attribute characteristics enables the system to recommend learning resources that match the learner’s learning level, academic background, and other characteristics; considering learner preference factors helps promote learners’ interest in the recommended resources and improves the continuity of learning behavior; considering learning-goal factors makes it easier for learners to learn efficiently and in a targeted manner; and considering evaluation factors keeps the recommended resources timely, so that learners can access the latest and most accurate learning resources. The personalized learning resource recommendation model combines the above attributes to make systematic, personalized recommendations, saving learners the time and energy of choosing resources and improving both the utilization rate of learning resources and learners’ academic performance.

The experimental data set contains 30,000 scoring records, generated from the evaluation and scoring of 650 learning resources by 200 learners within 2 months. The value range is 1–5 (1 = very irrelevant, 5 = very relevant), and 0 means that the learner has not made any evaluation. The experiment divides the collected data into two parts at a ratio of 1 : 4; one part is used as the training set and the other part is used as a test set to construct the recommendation model. The specific data are shown in Figure 5.

Figure 5. Data set.
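The record counts below follow the description above, but the randomly generated triples, the random seed, and the choice of which part becomes the training set are illustrative assumptions for a quick sketch of the 1 : 4 split.

```python
import random

random.seed(42)
# Stand-in (learner, resource, score) records: 200 learners, 650 resources,
# 30,000 ratings in 1..5 -- synthetic placeholders, not the paper's data.
records = [(f"l{random.randint(1, 200)}",
            f"i{random.randint(1, 650)}",
            random.randint(1, 5))
           for _ in range(30_000)]

random.shuffle(records)
cut = len(records) // 5                  # 1 : 4 split
train_set, test_set = records[:cut], records[cut:]
print(len(train_set), len(test_set))     # 6000 24000
```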
#### 4.2.1. Algorithm Evaluation Criteria

This study uses the mean absolute error (MAE) as the evaluation indicator to evaluate the accuracy of the proposed algorithm. MAE evaluates the performance of the recommendation system by how accurately it predicts the user’s scores [6]. MAE compares the user’s predicted score for an item with the user’s actual score for that item; the MAE for different neighborhood sizes under the algorithm proposed in this article is calculated as follows:

$$E=\frac{1}{m}\sum_{i=1}^{m}\left|P_{i}-R_{i}\right|\qquad(6)$$

Here, E is the mean absolute error, P_i is the user’s predicted score for the item, R_i is the user’s actual score for the item, and m is the number of predicted scores. From equation (6), the smaller the MAE value, the higher the accuracy of the algorithm.

The KR-CF algorithm is compared with the traditional recommendation algorithms cosine-CF and Pearson-CF, and the comparison results are shown in Figure 6. From Figure 6, the MAE value of the KR-CF algorithm is significantly lower for different numbers of neighbors. In the traditional algorithms, as the number of neighbors increases, the sparseness of the data decreases; that is, the recommendation accuracy of the algorithm increases as the MAE value decreases and finally stabilizes. When predicting scores, the KR-CF algorithm comprehensively considers the learners’ learning level, learning style, and learning preferences to improve the user similarity, which highlights the importance of the semantic relationships between learners for learning resource recommendation, improves the accuracy of identifying a learner’s nearest neighbors, and makes the recommendations more reasonable. The experiments show that the KR-CF algorithm has higher recommendation performance than the traditional CF algorithms.

Figure 6. MAE values of the three algorithms.
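A short sketch of the MAE of equation (6); the predicted and actual scores are placeholder values, not the reported experimental results.

```python
import numpy as np

def mae(predicted, actual):
    """Equation (6): mean absolute error between predicted and actual scores."""
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return float(np.mean(np.abs(predicted - actual)))

# Placeholder scores for a handful of test records.
print(mae([4.2, 3.1, 5.0, 2.4], [4, 3, 4, 2]))   # smaller MAE = higher accuracy
```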
The questionnaire mainly investigates whether the users’ habits in using the tool and the knowledge graph-based resource recommendation are acceptable, in order to determine whether the tool can improve learners’ learning efficiency, reduce blindness in online learning, and achieve the research purposes. The items on the learning effect of the recommendation tool are presented as a Likert scale and considered from three aspects: the use effect of the curriculum resource knowledge map, the effect of digital resource recommendation, and the effect of comprehensive use of the tool. Several questions are set under each aspect for users to answer. A total of 20 questionnaires were distributed, and all were collected. According to the questionnaire, the histogram of the use effect of the curriculum resource knowledge map roughly forms a normal distribution, and “agree” is generally the most frequent response in this part, which shows that users are satisfied with the use effect of the knowledge map, as shown in Figure 7.

Figure 7. Effect of the knowledge graph.

In the line chart of the recommendation effect of digital resources, the abscissa represents the proportion of each item and the ordinate represents the questions in the questionnaire. The experience of the digital resources is good, and the percentage of “agree” and above is very high, as shown in Figure 8.

Figure 8. Digital resource recommendation effect.

From the perspective of the comprehensive use of the tool, this part examines the users’ evaluation of the overall effect of the tool. The abscissa of the figure is the option, the ordinate represents the number of people, and each broken line represents a question in this part. It can be seen from the line chart that most users agree that the tool is easy to use and can improve learning efficiency and enhance learning interest, as shown in Figure 9.

Figure 9. Effect of using the tool.

Analyzing the three aspects of the questionnaire, the following results can be obtained:

(1) The interactive effect and user experience of the tool are good. 90% of the survey respondents agree that “the interactive effect of the knowledge graph is good,” and more than half of them agree very much. 80% of users think the drawing style is simple, beautiful, and clear, and all users think that the recommendation interface of this tool is simple and clear; among them, 40% of users agree with this view very much.

(2) Few of the survey respondents had heard of knowledge graphs before, but most people are very interested in them. Among the 20 learners, only 30% of the users were familiar with knowledge graphs, and the remaining 70% said that they had at most heard of the concept. Although most people are unfamiliar with knowledge graphs, 65% of users said they were “interested in knowledge graphs.”

(3) The knowledge graph can present the structure of the course knowledge points and locate the key knowledge points. 14 users, accounting for 70% of the total, agree with “accurate display of related learning resources and course system structure,” among which 4 people said they very much agree with this view. 85% of users agree that “the key knowledge points of the course can be clearly presented” with knowledge graphs.

(4) In the process of online learning, resource recommendation is very necessary, and users are willing to learn the recommended content. The questionnaire shows that all surveyed users “had some contact with recommendation before,” and 95% of users “like resource recommendation tools in the process of learning,” with 11 of them agreeing strongly with this view. 60% of users agree that “the tool can recommend in the order of the knowledge points in the course,” and 80% of users are “willing to learn the recommended resource content in the recommendation list.”

(5) The tool can save learning time and improve learning efficiency. The questionnaire shows that 65% of users believe that “it can save time for learning this course,” and 14 people agree that “digital resource recommendation reduces the time to find related resources.” Users who agree that “this learning tool can improve learning efficiency” account for 75% of the total, indicating that most users believe this tool can help improve learning efficiency.
## 5. Conclusion

This article studies the methods and applications of knowledge graphs in the recommendation of course digital resources. After extensive literature research on knowledge graphs and recommendation techniques, the research background and related theories of knowledge graphs are analyzed, and the recommendation model based on knowledge graphs is summarized. Through the design and development of the digital resource recommendation tool, the role and significance of the knowledge graph in the field of digital learning resource recommendation are verified. As learning resources become more and more abundant, the recommendation of digital resources is bound to become the trend of adult learning and lifelong learning, and knowledge graphs play the most important role in this process. We elaborate the design ideas and implementation process of the resource recommendation tool based on the knowledge graph. Three modules are designed, including the course digital resource uploading module, the course knowledge graph presentation module, and the related resource recommendation module, and we implement a tool for recommending course digital resources based on knowledge graphs. Through specific course experiments, the usability and learning effect of this tool for adult online learning are analyzed. The comparison shows that this tool can not only recommend relevant learning resources for learners, but its interactive knowledge-graph design can also stimulate learners’ enthusiasm for online learning, enhance their interest in learning, and play a particularly important role in the effectiveness of online learning. Experimental results show that the digital resource recommendation tool based on the knowledge graph meets the expected goals and design requirements, achieving a significantly lower MAE value than its competitors. It also lays the foundation for future research on resource recommendation based on knowledge graphs. The innovation of this research is mainly reflected in presenting digital curriculum resources in the form of a knowledge graph, which differs from the original linear presentation of knowledge points.

---
*Source: 1010122-2022-01-25.xml*
# The Construction of Accurate Recommendation Model of Learning Resources of Knowledge Graph under Deep Learning

**Authors:** Xia Yang; Leping Tan
**Journal:** Scientific Programming (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1010122
---

## Abstract

With the rapid development of science and technology and the continuous progress of teaching, education is now flooded with rich learning resources. Massive learning resources provide learners with a good learning foundation; at the same time, it becomes more and more difficult for learners to precisely and quickly obtain the learning resources they want from so many options. Therefore, it is very important to recommend learning resources to learners accurately and quickly. During the last two decades, a large number of different types of recommendation systems have been adopted that present users with contents of their choice, such as videos, products, and educational content recommendation systems. The knowledge graph has been fully applied in this process, and the application of deep learning in recommendation systems has further enhanced their performance. This article proposes a learning resource accurate recommendation model based on the knowledge graph under deep learning. We build a recommendation system based on deep learning that is comprised of a learner knowledge representation (KR) model and a learning resource KR model. Information such as the learner’s basic information and learning resource information is used by the recommendation engine to calculate the target learner’s scores based on the learner KR and the learning resource KR and to generate a recommendation list for the target learner. We use mean absolute error (MAE) as the evaluation indicator. The experimental results show that the proposed recommendation system achieves better results than the traditional systems.

---

## Body

## 1. Introduction

In recent years, the rapid development of artificial intelligence technologies such as deep learning, knowledge graphs, and artificial neural networks has driven society from “Internet +” into a new era of “artificial intelligence +”. The “Education Information 2.0 Action Plan” emphasizes establishing and improving the sustainable development mechanism of educational informatization and building a networked, digital, intelligent, personalized, and lifelong education system [1]. The key to building a new education system lies in the development of personalized learning, and the realization of personalized learning is inseparable from the strong support of the adaptive learning system. The core components of the adaptive learning system include five parts: the domain model, the user model, the adaptive model, the adaptive engine, and the presentation model [2]. The domain model is an important foundation and core element of building an adaptive learning system, and building a domain model with clear semantics, complete structure, and good scalability is an important challenge faced by adaptive learning systems. Artificial intelligence technology represented by the knowledge graph provides a technical guarantee for the construction of the educational domain model. The “New Generation Artificial Intelligence Development Plan” [3] specifically pointed out to “focus on breakthroughs in core technologies of knowledge processing, in-depth search, and visual interaction to form a multi-source, multi-disciplinary, and multi-data-type cross-media knowledge map covering billions of entities” [4].
Therefore, the use of knowledge construction, knowledge mining, knowledge reasoning, and other technologies to realize the extraction, expression, fusion, reasoning, and utilization of knowledge in the education domain model is an important topic in current education research. Recommendation systems are excellent at solving the “information overload” problem. Traditional recommendation technologies, including collaborative filtering and content-based recommendation, have been applied in many fields, such as product recommendation, news recommendation, Amazon book recommendation, Netflix movie recommendation, and course recommendation [5, 6]. In the e-learning environment, learners have different attributes, such as learning motivation, learning level, learning style, and preference, and these learner characteristics will affect the learner’s learning. Cold start and sparsity issues limit the performance of general recommendation technology in online learning recommendation systems [7–9]; that is, when there are new learning resources that have not been evaluated or new users who have not rated any items, accurate recommendations cannot be made. Moreover, the data are huge, and the collected ratings become very sparse [10]. Therefore, improving the accuracy and performance of personalized learning resource recommendation is currently an urgent problem to be solved. To solve the above problems, this paper mainly studies a recommendation algorithm based on knowledge representation and learning resources. To improve the customization and accuracy of recommendation, knowledge representation is used to integrate learners’ learning level, learning style, and learning preferences into the recommendation process; the semantic relationships of knowledge are used to alleviate the cold start problem; and the sparsity problem is addressed through the collaborative filtering algorithm.

Domain model construction refers to the process of using modeling technology to label and serialize domain knowledge. Education domain model construction refers to the process of using knowledge extraction, knowledge fusion, and other technologies to establish connections between pieces of subject knowledge, with the purpose of serializing knowledge to better promote teaching and learning. The commonly used methods for model construction in the education domain are the concept map, the knowledge map, and the knowledge graph. All three methods can represent and organize knowledge, but the concepts are easy to confuse. The concept map was first proposed by Professor Novak to organize and characterize knowledge. It includes two parts, nodes and connections: nodes are used to represent concepts, and connections are used to represent the relationships between concepts [6]. The concept map includes four elements: propositions, concepts, cross-connections, and hierarchical structure. The concept map can not only describe knowledge but also evaluate knowledge. Trowbridge used concept maps to evaluate university courses and analyzed the differences in students’ understanding of the same lesson; Ruiz-Primo used concept maps as an evaluation tool, interpreting a student’s concept map as a representation of his or her knowledge. The concept of the knowledge map was first proposed by Dansereau and Holley. The knowledge map can organize scattered knowledge into serialized knowledge and promote learners’ knowledge construction.
The knowledge map represents the path of knowledge acquisition and the relationships of knowledge. It can not only characterize the knowledge system structure but also help learners accurately locate the required knowledge. The knowledge map is mainly used in the systematic construction of subject knowledge structures in the field of education. Kim believes that knowledge maps can establish knowledge relationships, help structure knowledge, and facilitate knowledge understanding; knowledge maps can help teachers reshape learning content and learning resources and effectively reflect the subject system and learning goals. At present, knowledge graphs support intelligent search engines and large-scale knowledge bases, such as the well-known Metaweb knowledge base, the Wikidata knowledge base of the Wikimedia Foundation, and the YAGO comprehensive knowledge base developed by the Max Planck Institute in Germany. In the field of education, Knewton in the United States used knowledge graphs to build an interdisciplinary knowledge system that includes concepts and their prerequisite relationships, and Tsinghua University and Microsoft Research developed the Open Academic Graph and the “Wisdom Learning Companion.”

The main contributions of this research study include the following. We build a recommendation system based on deep learning that is comprised of four parts: the learner knowledge representation (KR) model, the learning resource KR model, the CF recommendation engine, and the learning resource recommendation. These models include information such as the learner’s basic information, the learning resource information, the data preprocessing modules, and the recommendation engine, which calculates the target learner’s scores based on the learner KR and the learning resource KR and generates a recommendation list for the target learner. We use mean absolute error (MAE) as the evaluation indicator to test the performance of our system and achieve a significantly lower MAE value than the traditional CF algorithms.

The rest of the study is organized as follows. Section 2 presents some of the related work by other researchers, Section 3 discusses the methodology of our approach, Section 4 discusses the experiments and their results, and Section 5 concludes our work.

## 2. Related Work

Liu Hongjing and others believe that the knowledge map can reflect the knowledge structure of a certain subject and promote learners to develop relational and networked learning thinking. The knowledge graph was originally based on the measurement of scientific knowledge, showing the structural relationships of knowledge through graphical representation, which belongs to the research category of scientometrics. The knowledge graph is a structured knowledge semantic network: the nodes in the graph represent entities, and the edges in the graph represent the semantic relationships between the nodes.
Leo believes that the subject knowledge map, as a semantic network that establishes connections between knowledge points and between knowledge points and teaching resources, can play an important role in the semantic association of learning materials, the construction of learner models, and the personalized recommendation of learning resources. Li Zhen and others believe that the educational knowledge graph takes knowledge elements as nodes and associates them according to their multidimensional semantic relations, representing subject domain knowledge and learners’ cognitive states at both the knowledge level and the cognitive level, and serving as a knowledge organization and cognitive representation tool for knowledge navigation, cognitive diagnosis, resource aggregation, and route recommendation. Yu Shengquan and others believe that the subject knowledge graph is based on the logical structure of knowledge formed by its semantic relationships and, on this basis, superimposes teaching goals, teaching problems, and cognitive status to generate a cognitive map. In the general field, knowledge graphs are mainly used in large-scale knowledge bases [11, 12]. Zhou et al. [13] incorporated word-oriented and entity-oriented knowledge graphs to enhance data representation in conversational recommendation systems and tackle the issues of insufficient contextual information and of the semantic gap between natural language expression and user preference. Shi et al. [14] proposed a knowledge graph-based learning path recommendation model to generate diverse learning paths and satisfy different learning needs. They built a multidimensional knowledge graph framework to store learning objects, proposed a number of semantic relationships between the learning objects, and built the learning path recommendation model. Zhou et al. [15] studied the various challenges that interactive recommender systems face, such as the large-scale data requirement for training an effective recommendation model. They suggested the use of the knowledge graph to deal with these issues and to use the prior knowledge learned from the knowledge graph to guide candidate selection and propagate the users’ preferences over the knowledge graph.

Based on the above analysis, this research conducted a comparative analysis of the three similar concepts of concept map, knowledge map, and knowledge graph from the five dimensions of conceptual connotation, constituent elements, knowledge scope, knowledge relationships, and application field and systematically sorted out their similarities and differences; the comparative analysis is shown in Table 1. To sum up, compared with concept maps and knowledge maps, knowledge graphs can express a wider range of knowledge content and semantic relations, and their construction is more automated. However, through literature analysis, it is found that current knowledge graphs still have the following problems in terms of knowledge content representation, learner ability description, and construction methods.

Table 1. Comparison of concept maps, knowledge maps, and knowledge graphs.
| Dimension | Concept map | Knowledge map | Knowledge graph |
| --- | --- | --- | --- |
| Conceptual connotation | A tool for organizing and characterizing knowledge | A structured knowledge network | A semantic network that reveals the relationships between entities |
| Constituent elements | Nodes and connections | Nodes, the relationships between nodes, and visual representation | Entities, attributes, semantic relationships |
| Range of knowledge | Simple concepts | A clear learning topic | A wide range of domain knowledge |
| Knowledge relations | Relatively simple, often only whole–part relationships | Relatively rich associations, including parent–child, predecessor, successor, inclusion, and other relationships | Semantic relationships, largely obtained through knowledge inference and mining |
| Application field | Knowledge representation, knowledge organization, knowledge evaluation | Knowledge organization, knowledge association, knowledge navigation, knowledge search | Intelligent search, in-depth question answering, social networks, knowledge base construction, knowledge reasoning |

(1) In terms of knowledge content representation, existing knowledge graphs still focus on the description of basic knowledge points; the knowledge content is scattered, and there is a lack of knowledge units that take subject keywords as the center and combine related words into subject knowledge clusters according to their semantic relevance. (2) In terms of learner ability description, most existing knowledge graphs describe the content of knowledge points, and there is a lack of further research on the relationships between knowledge points and the characterization of learners’ abilities. (3) In terms of construction methods, most existing knowledge graphs are still constructed manually by domain experts and lack the help of machine learning and of automatic construction with artificial intelligence technologies such as deep learning and natural language processing.

## 3. Method

### 3.1. Accurate Recommendation Algorithm Based on Deep Learning

The attribute characteristics of learners in the learning process and the characteristics of learning resources are used as the basis for designing the learner KR and learning resource KR models. On the basis of emphasizing learners’ independent learning, and based on CF and KR, we build an accurate recommendation model of learning resources as shown in Figure 1. The model mainly contains 4 parts: the learner KR model, the learning resource KR model, the CF recommendation engine, and the personalized learning resource recommendation. This section explains in detail how the recommendation model works; a small code sketch of how the parts fit together follows the list.

(1) Learner KR model: this model mainly includes the learner’s basic information, preference information, and learner attribute characteristics. The learner’s basic information can be obtained explicitly and implicitly; the learner KR model stores learner attribute characteristics, including learning level, learning motivation, learning style, and other information. The learner KR model performs a personalized analysis of the learner’s personal data according to the learner’s preferences and attribute characteristics, while the CF recommendation engine uses the learner and learning resource KR information to make score predictions for learning.

(2) Learning object model: this part stores the information of learning resources, including text, image, animation, audio or video, and other formats.

(3) Data preprocessing: the data preprocessing component prepares the learner and learning resource data and preprocesses them into the correct format that the recommendation engine can recognize.

(4) Recommendation engine: once the data preprocessing is successful, the recommendation engine calculates the target learner’s scores based on the learner and learning resource KR and their similarity and predicts the scores of the target learner. Finally, the recommendation engine generates a personalized recommendation list for the target learner.

Figure 1. Accurate recommendation model of learning resources.
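As a minimal, illustrative wiring of these four components (not the authors’ implementation), the sketch below uses plain Python functions and dictionary-shaped records; the field names and the placeholder scoring rule are assumptions, standing in for the KR models and the CF prediction detailed later in Section 3 and Section 4.

```python
def build_learner_kr(profile, logs):
    """Learner KR model: basic information plus attribute features and history."""
    return {"id": profile["id"], "level": profile["level"], "style": profile["style"],
            "preferences": set(profile["preferences"]),
            "history": {log["resource"] for log in logs}}

def build_resource_kr(resources):
    """Learning object model: one record per learning resource (text, media, ...)."""
    return {r["id"]: {"media": r["media"], "topics": set(r["topics"])} for r in resources}

def preprocess(raw_ratings):
    """Data preprocessing: normalise raw records into (learner, resource, score) triples."""
    return [(r["learner"], r["resource"], float(r["score"])) for r in raw_ratings]

def recommend(learner_kr, resource_kr, triples, top_n=3):
    """Recommendation engine: score unseen resources and return a top-N list.
    The topic-overlap-plus-mean-rating rule is only a placeholder for the
    KR-based CF prediction described in Section 4."""
    ratings_by_resource = {}
    for _, rid, score in triples:
        ratings_by_resource.setdefault(rid, []).append(score)
    scored = []
    for rid, info in resource_kr.items():
        if rid in learner_kr["history"]:
            continue
        overlap = len(info["topics"] & learner_kr["preferences"])
        rs = ratings_by_resource.get(rid, [])
        avg = sum(rs) / len(rs) if rs else 0.0
        scored.append((rid, overlap + avg))
    return sorted(scored, key=lambda x: x[1], reverse=True)[:top_n]

# Toy end-to-end run with made-up records.
learner = build_learner_kr({"id": "l1", "level": 2, "style": 3,
                            "preferences": ["trees", "sorting"]},
                           [{"resource": "i1"}])
resources = build_resource_kr([{"id": "i1", "media": "text", "topics": ["arrays"]},
                               {"id": "i2", "media": "video", "topics": ["trees"]},
                               {"id": "i3", "media": "text", "topics": ["graphs"]}])
triples = preprocess([{"learner": "l2", "resource": "i2", "score": 5},
                      {"learner": "l3", "resource": "i3", "score": 3}])
print(recommend(learner, resources, triples, top_n=2))
```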
#### 3.1.1. Learners and Learning Resources

The KR model is the organization method of subject knowledge and teaching laws. Its essence is the symbolic form of knowledge, mainly for the convenience of computers in storing and processing knowledge. At present, widely used KR technologies include predicate logic representation, Web-based representation, production rules, semantic networks, and frame notation.

#### 3.1.2. Learner KR Model

Since personalized learning resource recommendation needs to understand learner information and provide learning resources according to learners’ learning goals, learning needs, teaching content, learning problems and environment, and preferences, it is necessary to establish a learner model. In the fields of education, learning resource knowledge, and concept retrieval, the KR-based learner model can feed back to learners a series of concepts that are closely related to their retrieval based on the connections between concepts, which helps learners discover new learning goals, areas of interest, and learning interests and supports personalized recommendation. The formal definition of the learner model UR is as follows: UR = (Ub, Uo, Un, Uh, Up, Uv), where the user’s personal information is Ub = (ID, Name, Age, Sex, Edu, Tel), including basic personal information such as user name, name, age, education background, gender, and phone number. The learner preference ontology is Uo = (C, RC, RN, FC, A, I), where C represents the concepts of learner preference in the ontology, RC describes the classification relationships between concepts in the learner’s preference ontology, RN describes the nonclassification relationships in the learner’s preference ontology, FC represents the attributes and parity in the functions, and A represents axioms. Un is the learning style information; learning style is divided into 4 dimensions, active/contemplative, perceptual/intuitive, visual/verbal, and comprehensive/sequential, so that the learner’s learning style is composed of combinations of these 4 dimensions. The different learning styles in this article are expressed as follows: Un = (active/contemplative, perceptual/intuitive, visual/verbal, comprehensive/sequential) = {1, 2, 3, 4}.
Uh is the learning level information. To obtain a learner’s level, a random set of 10 questions is used as a learning level test, and the learner’s level is assigned according to the test result, where primary = 0–3, intermediate = 4–6, and advanced = 7–10; the different learning levels are expressed as Uh = (primary, intermediate, advanced) = {1, 2, 3}. Up is user preference information, which describes the user’s preference for interface, font, language, resource content type, resource media type, etc. Uv is access log information, which mainly records the time the learner logs into the learning system, the learning resources accessed, the times at which access to a learning resource starts and finishes, and the time the learner exits the system; this can help determine in which time period the learner’s efficiency is high. The learner KR model is shown in Figure 2. The construction of the learner KR model needs to include the learner’s personal data, learning level, learning style information, and access log information, and the learner’s data are obtained through two methods, explicit and implicit. The model stores learner profile information such as preferences, basic personal information (name, gender, age, etc.), learning style, learning level, and other attributes in the learner KR model. Once the learner’s data are obtained, the constructed learner model is automatically updated to form personalized learner knowledge based on the learner’s preferences, learning style, and learning level. The CF recommendation engine makes score predictions based on the learner KR information and the learning resource KR information, and learning objects with high predicted scores are recommended to the target learners; personalized recommendations are then made to the target learners. When a new learner enters the model, a semantic search is performed based on the learner’s registration information to analyze the learner’s attribute characteristics and match them to the learner KR model, which alleviates the cold start problem.

Figure 2. KR model of learner.
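As a concrete illustration of the learner model UR = (Ub, Uo, Un, Uh, Up, Uv) described above, the sketch below encodes its sub-structures as simple Python classes; the field names and sample values are illustrative assumptions rather than the authors’ implementation, although the style codes {1, 2, 3, 4} and the level codes {1, 2, 3} follow the text.

```python
from dataclasses import dataclass, field

@dataclass
class LearnerBasicInfo:          # Ub = (ID, Name, Age, Sex, Edu, Tel)
    learner_id: str
    name: str
    age: int
    sex: str
    edu: str
    tel: str

@dataclass
class LearnerModel:              # UR = (Ub, Uo, Un, Uh, Up, Uv)
    basic: LearnerBasicInfo                                    # Ub: personal information
    preference_ontology: dict = field(default_factory=dict)   # Uo: concepts and relations
    learning_style: int = 1                                    # Un: encoded style, 1-4
    learning_level: int = 1                                    # Uh: 1 primary, 2 intermediate, 3 advanced
    preferences: dict = field(default_factory=dict)            # Up: interface, media type, ...
    access_log: list = field(default_factory=list)             # Uv: (login, resource, start, end)

def level_from_test(correct_answers: int) -> int:
    """Map the 10-question placement test to Uh: 0-3 primary, 4-6 intermediate, 7-10 advanced."""
    return 1 if correct_answers <= 3 else 2 if correct_answers <= 6 else 3

# Example learner record built from placeholder registration data.
learner = LearnerModel(
    basic=LearnerBasicInfo("l001", "Alice", 29, "F", "Bachelor", "000-0000"),
    learning_style=3,
    learning_level=level_from_test(5),
    preferences={"media_type": "video", "language": "en"},
)
print(learner.learning_level)   # 2 (intermediate)
```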
#### 3.1.3. Learning Resource KR Model

In the online education environment, the structure of learning resources is complex and diverse. Its manifestations include courseware, cases, literature, resource catalog indexes, online courses, test questions, test papers, homework, text, animation, forum Q&A, lesson plans, and media. To better build learning resources, promote data sharing among all types and levels of learning resources, and improve the efficiency and accuracy of learning resource retrieval, and in view of the characteristics of learning resource objects and the disorder of resource storage, the learning resources are divided into two categories, text materials and media materials. The text materials include courseware, cases, test papers, and textbook materials, while the media materials include graphic/image materials, animation materials, audio materials, and video materials. To structure and describe learning resources, this article uses the metadata-based KR ontology construction method (metadata-based knowledge representation ontology building, referred to as the MOB method) to define domain knowledge. This method is normative and has features such as rationality and scalability.

Firstly, the construction of the learning resource KR model can realize the management and retrieval of learning resources based on knowledge points. When constructing the learning resource description ontology, the ontology of computing subject knowledge points is used for representation, and the knowledge point ontology is mapped through the associated knowledge point attributes. Secondly, a list of terms is extracted from the metadata: media materials, video/audio, text, courseware, online courses, titles, resource locations, names, contact information, and association relationships. Finally, the category levels are defined and established. Classes are defined according to the level of the classified resources. The learning object is the parent class, which includes subclasses such as courseware, text, and media data; the media data include text, video/audio, and graphics/pictures, and the others are subcategories. The establishment of hierarchical relationships between categories mainly uses “part-of,” “kind-of,” “instance-of,” and “attribute-of.” The hierarchical structure diagram of the classes in the learning resource KR model is shown in Figure 3.

Figure 3. Model of learning resources.

Constructing the learning resource KR model uses the ontology description language OWL and related tools to express the relationships between concepts and entities: between learners, between learners and learning resources, and between knowledge points within learning resources. All the learning resources used by the CF engine are classified according to the attribute characteristics of each knowledge point, and examples and quizzes are carried out in the learning practice classroom; these quizzes and examples are closely related to the learning goals. The CF recommendation engine uses the semantic relationships between learners and learning resources to compute learner similarity and predict scores for the target learner. Then, based on the data set provided by the online learning platform, the learner and learning resource KR models are constructed and preprocessed together with the Web log data for the CF engine. The data structure course is taken as an example for application, mainly from the following aspects: (1) the key knowledge points of the data structure course; (2) the content related to the knowledge points, such as learning objectives, learning focus, and difficulty; (3) the relationships between knowledge points, since the learning of a course is often sequential and a certain knowledge point may require learning another knowledge point first; and (4) examples that explain a concept, using worked examples or pre-class material to aid understanding. To capture the completeness of the resources more accurately, the class hierarchy is established using constraints and some special relationships between the classes, and the hierarchical relationships between classes are extracted in a standardized way. Part of the class hierarchy constructed by taking the data structure course as an example is shown in Figure 4.

Figure 4. Hierarchy structure diagram of partial classes in “data structure.”

## 3.1. Accurate Recommendation Algorithm Based on Deep Learning

The attribute characteristics and learning resource characteristics of learners in the learning process are used as the basis for designing learner KR and learning resource KR models.
On the basis of emphasizing learner’s independent learning, based on CF and KR, we build an accurate recommendation model of learning resources as shown in Figure1. The model mainly contains 4 parts: learner KR model, learning resource KR model, CF recommendation engine, and personalized learning resource recommendation. This article will explain in detail how the recommendation model works.(1) Learner KR model: this model mainly includes learner’s basic information, preference information, and learner attribute characteristic information. Learner’s basic information can be obtained explicitly and implicitly; learner KR model stores learner attribute characteristics, including learning level, learning motivation, learning style, and other information. The learner KR model performs a personalized analysis of the learner’s personal data according to the learner’s preferences and attribute characteristics, while the CF recommendation engine uses the learner and the learning resource KR information to make score predictions for learning.(2) Learning object model: this part stores the information of learning resources, including text, image, animation, audio or video, and other formats.(3) Data preprocessing: the data preprocessing component is put in the learner, and learning resource data are prepared and preprocessed into the correct format that the recommendation engine can recognize.(4) Recommendation engine: once the data preprocessing is successful, the recommendation engine will calculate the target learner score based on the learner and the learning resource KR, similarity, and the score prediction of the target learner. Finally, the recommendation engine generates a personalized recommendation list for the target learner.Figure 1 Accurate recommendation model of learning resources. ### 3.1.1. Learners and Learning Resources KR model is the organization method of subject knowledge and teaching laws. Its essence is the symbolic form of knowledge, mainly for the convenience of computers to store and process knowledge. At present, more KR technologies include predicate logic representation and Web representation, production rules, semantic network, and frame notation. ### 3.1.2. Learner KR Model Since personalized learning resource recommendation needs to understand learner information and provides learning resources according to learners’ learning goals, learning needs, teaching content, learning problems and environment, and preferences, it is necessary to establish a learner model. In the field of education, knowledge of learning resources, and concept retrieval, the KR-based learner model can feedback a series of concepts that are closely related to learning retrieval to learners based on the connections between concepts, which are conducive to the learners to discover new learning goals, areas of interest, and learning interests and make personalized recommendations. The formal definition of the learner model UR is as follows: UR = (Ub, Uo, Un, Uh, Up, Uv}, where the user’s personal information Ub = (ID, Name, Age, Sex, Edu, Tel), including basic personal information such as user name, name, age, education background, gender, and phone number. Learner KR prefers that the ontology is UO = (C, RC, RN, FC, A, I), where C represents the concept of learner preference in the ontology; RC describes the classification relationship between concepts in the learner’s preference ontology; and RN describes the learner’s preference ontology. FC represents the attribute and parity in the function. 
A means axiom; and Un is learning style information, and learning style is divided into 4 dimensions, active type/contemplative type, sentiment type/intuitive, visual/verbal, and comprehensive/sequential; that is, the learner’s learning style is composed of 4 permutations and combinations; the different learning styles in this article are expressed as follows: Un = (active/contemplative, perceptual/intuitive, visual/verbal, comprehensive/sequential) = {1, 2, 3, 4}. Uh is the learning level information; to obtain the learning level of the learner, a random set of 10 questions is set for learning level test, through the test situation to assign the learning level of learners, where primary = 0∼3, intermediate = 4∼6, and advanced = 7∼10; for different learning levels, Uh is used to represent Uh = (primary, intermediate, advanced) = {1, 2, 3}. Up is user preference information, which describes the user’s preference for interface, font, language, resource content type, resource media type, etc. Uv is access log information, which mainly records the time to log into the learning system, the learning resources accessed, the time to start accessing the learning resources, the time to complete the access, and the time to exit the system of learners, which can help them determine which time period is high in learning efficiency. Among them, the learner KR model is shown in Figure 2. The construction of the learner KR model needs to include the learner’s personal data information, learning level, learning style information, and access log information and obtain the learner’s data information through two methods, explicit and implicit. Moreover, it will learn learner profile information such as learner preferences, personal basic information (name, gender, age, etc.), learning style and learning level, and other attributes that are stored in the learner KR model. Once the learner’s data information is obtained, the constructed learner model will be automatically updated to form personalized learner knowledge based on the learner’s preferences, learning style, and learning level. The CF recommendation engine makes score predictions based on the learner’s KR information and learning resource KR information, and the prediction score is high. The learning objects are recommended to the target learners, and then, personalized recommendations are made to the target learners. When a new learner enters the model, it will perform a semantic search based on the learner’s registration information, analyze the learner’s attribute characteristics, and determine the learner and match the learner’s KR model to alleviate the cold start problem.Figure 2 KR model of learner. ### 3.1.3. Learning Resource KR Model In the online education environment, the structure of learning resources is complex and diverse. Its manifestations include courseware, cases, literature, resource catalog indexes, online courses, test questions, test papers, homework, text, animation, forum Q&A, lesson plans, and media. To better build learning resources, data sharing is promoted between various types of learning resources at all levels, and the efficiency and accuracy of learning resource retrieval are improved; according to the characteristics of learning resource objects and the disorder of resource storage, the learning resources are divided into two categories, text materials and media materials. 
The text materials include courseware, cases, test papers, and textbook materials, while the media materials include graphics/image materials, animation materials, audio materials, and video materials. To be able to structure and describe learning resources, this article uses the metadata-based KR ontology construction method (metadata-based knowledge representation ontology building, referred to as MOB method) to define domain knowledge. This method is normative, having features such as rationality and scalability.Firstly, the construction of the learning resource KR model can realize the management and retrieval of learning resources based on knowledge points. When constructing the learning resource description ontology, the ontology of computing subject knowledge points is used for representation, and the knowledge point ontology is mapped through the associated knowledge point attributes. Secondly, a list of terms is extracted from metadata: media materials, video/audio, text, courseware, online courses, titles, resource locations, names, contact information, and association relationships, and finally, category level is defined and established. The class is defined according to the level of the classified resource. The learning object is the parent class (learning object), which includes subclasses such as courseware, text, and media data. The media data are text, video/audio, and graphics/pictures. Others are subcategories; the establishment of hierarchical relationships between categories mainly uses “part-of,” “kind-of,” “instance-of,” and “attribute-of.” The hierarchical structure diagram of the classes in the learning resource KR model is shown in Figure3.Figure 3 Model of learning resources.Constructing the learning resource KR model uses the ontology description language OWL and tools to express the relationship between the concepts and entities between learners, between learners and learning resources, and between knowledge points in learning resources. All the learning resources of CF are classified according to the attribute characteristics of each knowledge point, and examples and quizzes are carried out in the learning practice classroom. These quizzes and examples are closely related to the learning goals. The CF recommendation engine uses the semantic relationship between the learner and the learning resource. The learner performs score prediction and predicts the similarity of the target learner. Then, based on the data set provided by the online learning platform, the learner and learning resource KR model are constructed, and they are preprocessed together with the Web log data into CF82. The data structure course is taken as an example to apply it, mainly from the following aspects: (1) the key knowledge points of the data structure course; (2) the content related to the knowledge points, such as learning objectives, learning focus, and difficulty; (3) the relationship between the knowledge points before and after. The learning of the course is often contextual. Before learning a certain knowledge point, you must first learn another knowledge point; (4) examples explain the concept and use some examples or import before class to explain the concept, to help understand the concept. To be able to obtain the completeness of the resource more accurately, the class hierarchy is established using constraints and some special relationships between the classes. Standardized extraction is the hierarchical relationship between classes. 
Part of the class hierarchical structure constructed by taking the data structure as an example is shown in Figure 4.

Figure 4: Hierarchy structure diagram of partial classes in “data structure.”
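Before turning to the experiments, the following minimal sketch illustrates one possible way to represent the learner KR model UR of Section 3.1.2 in code, including the mapping from the 10-question placement test to the learning level Uh. The class and field names are illustrative, not part of the proposed system.

```python
from dataclasses import dataclass, field

def learning_level(test_score: int) -> int:
    """Map the 10-question placement test score to Uh:
    primary = 1 (0-3), intermediate = 2 (4-6), advanced = 3 (7-10)."""
    if test_score <= 3:
        return 1
    return 2 if test_score <= 6 else 3

@dataclass
class LearnerKR:
    """Sketch of UR = (Ub, Uo, Un, Uh, Up, Uv)."""
    Ub: dict                                 # basic info: ID, name, age, sex, education, phone
    Uo: dict                                 # preference ontology concepts and relations
    Un: tuple                                # learning style over the 4 dimensions
    Uh: int                                  # learning level, 1-3
    Up: dict                                 # interface/font/language/resource-type preferences
    Uv: list = field(default_factory=list)   # access log entries

# Hypothetical learner record for illustration only
learner = LearnerKR(
    Ub={"ID": "u001", "Name": "Li", "Age": 23, "Sex": "F", "Edu": "BSc", "Tel": "-"},
    Uo={},
    Un=("active", "perceptual", "visual", "sequential"),
    Uh=learning_level(5),
    Up={"language": "zh", "media": "video"},
)
print(learner.Uh)  # 2 -> intermediate
```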
## 4. Experiment and Result Analysis

The recommendation list of the CF recommendation system is generated as follows. First, the nearest neighbors of the target learner are found; then, the score data of the learning resources rated by these nearest neighbors are extracted; finally, the target learner's scores for the learning resources are predicted from these score data and the recommendation list is generated. The data set is extremely important for the algorithm and is defined as $\text{data}=(U,I,R)$, where $U=\{u_{1},u_{2},u_{3},\ldots,u_{m}\}$ is the basic learner set with $|U|=m$; $I=\{i_{1},i_{2},i_{3},\ldots,i_{n}\}$ is the collection of learning resources with $|I|=n$; and the $m\times n$ matrix $R$ is the learners' scoring matrix for the learning resources, whose element $r_{ij}$ is the score of the $i$th user in $U$ on the $j$th learning resource in $I$. The key to CF recommendation is to accurately locate the nearest neighbors of the target learner, and the basis for determining the nearest neighbors is to calculate the similarity between the learners.
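To make the $\text{data}=(U,I,R)$ structure concrete, here is a minimal sketch of assembling the $m\times n$ scoring matrix from logged rating records; the record values and variable names are illustrative only.

```python
import numpy as np

# Build the m x n scoring matrix R from logged (learner, resource, score) records,
# where 0 marks "not rated". The example records are hypothetical.
records = [("u1", "i1", 5), ("u1", "i3", 3), ("u2", "i2", 4), ("u3", "i1", 2)]

learners = sorted({u for u, _, _ in records})    # U, |U| = m
resources = sorted({i for _, i, _ in records})   # I, |I| = n

u_index = {u: k for k, u in enumerate(learners)}
i_index = {i: k for k, i in enumerate(resources)}

R = np.zeros((len(learners), len(resources)))    # r_ij = 0 means unrated
for u, i, score in records:
    R[u_index[u], i_index[i]] = score

print(R)
```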
Three similarity measures are commonly used, as given in equations (1)–(3) below.

### 4.1. Pearson's Correlation Similarity

The difference between the Pearson correlation and the adjusted cosine similarity is that the Pearson denominator is computed only over the items rated in common by the two users, as shown in equations (1) and (2):

$$S_{u,v}=\frac{\sum_{\alpha\in P_{uv}}\left(R_{u,\alpha}-\bar{R}_{u}\right)\left(R_{v,\alpha}-\bar{R}_{v}\right)}{\sqrt{\sum_{\alpha\in P_{uv}}\left(R_{u,\alpha}-\bar{R}_{u}\right)^{2}}\sqrt{\sum_{\alpha\in P_{uv}}\left(R_{v,\alpha}-\bar{R}_{v}\right)^{2}}}, \quad (1)$$

$$S_{u,v}=\frac{\mathbf{u}\cdot\mathbf{v}}{\left\|\mathbf{u}\right\|\left\|\mathbf{v}\right\|}. \quad (2)$$

The closer the value of $S_{u,v}$ is to 1, the higher the similarity between user $u$ and user $v$.

### 4.2. Adjusted Cosine

The traditional cosine similarity does not account for the users' rating preferences; some users tend to give high ratings and others low ratings. For example, if the physics and mathematics scores of two students are 5 and 4, and 3 and 2, respectively, the traditional cosine similarity between the two users is low, yet their preferences are in fact the same: compared with mathematics, both students prefer physics. The modified cosine similarity of two users $u$ and $v$ is given by equation (3):

$$S_{u,v}=\frac{\sum_{\alpha\in P_{uv}}\left(R_{u,\alpha}-\bar{R}_{u}\right)\left(R_{v,\alpha}-\bar{R}_{v}\right)}{\sqrt{\sum_{\alpha\in P_{u}}\left(R_{u,\alpha}-\bar{R}_{u}\right)^{2}}\sqrt{\sum_{\alpha\in P_{v}}\left(R_{v,\alpha}-\bar{R}_{v}\right)^{2}}}. \quad (3)$$

Here, $P_{uv}$ is the set of resource items rated by both users; $P_{u}$ and $P_{v}$ are the sets of resource items rated by users $u$ and $v$, respectively; $R_{u,\alpha}$ and $R_{v,\alpha}$ are their scores for item $\alpha$; and $\bar{R}_{u}$ and $\bar{R}_{v}$ are their average scores. Since both the ontology domain knowledge and the users' evaluations of learning resources need to be considered, the modified cosine form is adopted in equation (4). The KR similarity $S_{i,j}$ of learning resource objects $i$ and $j$ is

$$S_{i,j}=\frac{\sum_{l}\left(R_{l,i}-\bar{R}_{l}\right)\left(R_{l,j}-\bar{R}_{l}\right)}{\sqrt{\sum_{l}\left(R_{l,i}-\bar{R}_{l}\right)^{2}}\sqrt{\sum_{l}\left(R_{l,j}-\bar{R}_{l}\right)^{2}}}, \quad (4)$$

where $R_{l,i}$ is the score of learner $l$ on learning resource object $i$, $R_{l,j}$ is the score of learner $l$ on learning resource object $j$, and $\bar{R}_{l}$ is the average of all scores given by learner $l$. Finally, the target learner's prediction score for a learning resource object is calculated. The $N$ most similar learning resource objects are obtained from equation (4), and the prediction score is computed as

$$P_{l,i}=\frac{\sum_{t\in N}S_{i,t}\,R_{l,t}}{\sum_{t\in N}S_{i,t}}, \quad (5)$$

where $N$ is the set of learning resource objects most similar to object $i$ and $R_{l,t}$ is the score of learner $l$ on learning resource object $t$.

The recommendation algorithm uses the CF recommendation engine to generate a top-$N$ recommendation list of learning resource objects based on the target learner's predicted scores for learning resources, the learner KR model, and the learning resource KR model. In this study, $L$ denotes the set of all learners, $L=\{l_{1},l_{2},l_{3},\ldots,l_{m}\}$; $I$ denotes all possible recommended learning resources, $I=\{i_{1},i_{2},i_{3},\ldots,i_{n}\}$; $k$ denotes the learner and learning resource domain knowledge, $k=\{k_{1},k_{2},k_{3},\ldots,k_{q}\}$; and $R$ denotes the learners' scoring of learning resources, with the scoring range defined as $R=\{1,2,3,4,5\}$.
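To make equations (4) and (5) concrete, the following is a minimal sketch that assumes a NumPy rating matrix $R$ (rows = learners, columns = resource objects, 0 = unrated), such as the one built in the earlier sketch. The function names and the neighborhood handling are illustrative, not the paper's implementation.

```python
import numpy as np

def kr_similarity(R, i, j):
    """Adjusted-cosine similarity of resource objects i and j (cf. equation (4))."""
    rated = (R[:, i] > 0) & (R[:, j] > 0)      # learners who rated both objects
    if not rated.any():
        return 0.0
    means = np.nanmean(np.where(R > 0, R, np.nan), axis=1)  # per-learner mean score
    di = R[rated, i] - means[rated]
    dj = R[rated, j] - means[rated]
    denom = np.sqrt((di ** 2).sum()) * np.sqrt((dj ** 2).sum())
    return float((di * dj).sum() / denom) if denom else 0.0

def predict_score(R, l, i, n_neighbors=5):
    """Predicted score of learner l for object i from the N most similar objects (cf. equation (5))."""
    rated_items = np.flatnonzero(R[l] > 0)
    sims = [(kr_similarity(R, i, t), t) for t in rated_items if t != i]
    top = sorted(sims, reverse=True)[:n_neighbors]
    num = sum(s * R[l, t] for s, t in top)
    den = sum(s for s, _ in top)
    return num / den if den else 0.0

# Tiny illustrative example
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)
print(predict_score(R, l=0, i=2, n_neighbors=1))  # 1.0 for this toy matrix
```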
Algorithm 1 generates the recommended list of top-$N$ learning resources and is summarized as follows.

Algorithm 1: Recommended list of top-$N$ learning resources.
- Inputs: the collection of learning resource objects $I=\{i_{1},i_{2},i_{3},\ldots,i_{n}\}$; the KR models $k=\{\text{learner},\text{learning resource}\}$; the learner score values $R=\{1,2,3,4,5\}$.
- Output: prediction scores and the top-$N$ learning resource recommendation list.
- Method:
  - Step 1: for each $i\in I$, $j\in I$, $n\in N$, do
  - Step 2: use equation (4) to calculate the similarity $S(i,j)$; end for each
  - Step 3: use equation (5) to calculate the prediction score $P_{l,i}$
  - Step 4: the top $N$ learning resource objects with the highest predicted scores form the top-$N$ recommendation list for the target learner $l_{t}$.

This article applies the model to the self-developed “Mobile Autonomous School” online learning platform. To meet the individual learning needs of learners, the learner KR model and the learning resource KR model are proposed, starting from the two aspects of learners and learning resources. The model analyzes the learning behavior of students on the platform and builds a personalized resource recommendation model based on the learner's learning level, learning style, and learning preferences, taking into account the learners' characteristics and preferences in actual learning applications in order to recommend the most suitable learning resources. It integrates the learners' basic information, attributes, hobbies, and user ratings, making the recommendation of learning resources more systematic and comprehensive. Considering learning-style information allows learners to obtain resources suited to their current learning situation; considering learner attribute characteristics allows the system to recommend resources that match the learner's learning level, academic background, and similar characteristics; considering learner preference factors helps promote learners' interest in the recommended resources and improves the continuity of learning behavior; considering learning-goal factors makes it easier for learners to study efficiently and in a targeted manner; and considering evaluation factors keeps the recommended resources up to date, so that learners can access the latest and most relevant learning resources. The personalized learning resource recommendation model combines the above attributes to make systematic, personalized recommendations, saving learners the time and effort of selecting resources and improving both the utilization rate of learning resources and learners' academic performance.

The experimental data set contains 30,000 scoring records, generated from the evaluation of 650 learning resources by 200 learners within 2 months. The value range is 1–5 (1 = very irrelevant, 5 = very relevant), and 0 means that the learner has not made any evaluation. The collected data are divided into two parts at a ratio of 1:4, one part used as the training set and the other as the test set for constructing the recommendation model. The specific data are shown in Figure 5.

Figure 5: Data set.

#### 4.2.1. Algorithm Evaluation Criteria

This study uses the mean absolute error (MAE) as the evaluation indicator for the accuracy of the proposed algorithm. MAE evaluates the performance of the recommendation system by how accurately it predicts the users' scores [6]. MAE measures the deviation between the predicted and the actual scores of the items; for different neighborhood sizes, it is calculated for the proposed algorithm as

$$E=\frac{1}{m}\sum_{i=1}^{m}\left|P_{i}-R_{i}\right|, \quad (6)$$

where $E$ is the mean absolute error, $P_{i}$ is the user's predicted score for the item, $R_{i}$ is the user's actual score for the item, and $m$ is the number of predicted scores. From equation (6), the smaller the value of MAE, the higher the accuracy of the algorithm.
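A minimal sketch of equation (6), assuming parallel lists of predicted and actual scores; the function name and example values are illustrative only.

```python
def mean_absolute_error(predicted, actual):
    """Equation (6): average absolute deviation between predicted and actual scores."""
    assert len(predicted) == len(actual) and predicted, "need matching, non-empty score lists"
    return sum(abs(p - r) for p, r in zip(predicted, actual)) / len(predicted)

# Example: three predictions against observed scores
print(mean_absolute_error([4.2, 3.1, 5.0], [4, 3, 4]))  # ~= 0.433
```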
The KR-CF algorithm is compared with the traditional recommendation algorithms cosine-CF and Pearson-CF, and the comparison results are shown in Figure 6. From Figure 6, the MAE value of the KR-CF algorithm is clearly lower for every number of neighbors. In the traditional algorithms, as the number of neighbors increases, the sparseness of the data decreases; that is, the recommendation accuracy of the algorithm increases as the MAE value decreases and finally stabilizes. When predicting scores, the KR-CF algorithm comprehensively considers the learners' learning level, learning style, and learning preferences to improve the user similarity. This highlights the importance of the semantic relationship between learners for the recommendation of learning resources, improves the accuracy of identifying a learner's nearest neighbors, and makes the recommendation more reasonable. The experiments show that the KR-CF algorithm achieves higher recommendation performance than the traditional CF algorithms.

Figure 6: MAE values of the three algorithms.

The questionnaire mainly investigates whether the usage habits of the tool and the knowledge graph-based resource recommendation are acceptable to users, in order to determine whether the tool can improve learning efficiency, reduce aimlessness in online learning, and achieve the research purposes. The questionnaire items on the learning effect of the recommendation tool are arranged on a Likert scale and cover three aspects: the use effect of the curriculum resource knowledge graph, the effect of digital resource recommendation, and the effect of comprehensive use of the tool, with several questions for users to answer under each aspect. A total of 20 questionnaires were distributed, and all were collected. According to the questionnaire, the histogram of the use effect of the curriculum resource knowledge graph approximates a normal distribution, and “agree” is generally the most frequent response in this part, which shows that users are well satisfied with the knowledge graph; this is shown in Figure 7.

Figure 7: Effect of knowledge graph.

In the line chart of the recommendation effect of digital resources, the abscissa represents the proportion of each item and the ordinate represents the questions in the questionnaire. The experience of the digital resources is good, and the percentage of responses at “agree” and above is very high, as shown in Figure 8.

Figure 8: Digital resource recommendation effect.

From the perspective of the comprehensive use of the tool, this part examines the users' evaluation of the overall effect of the tool. In the corresponding figure, the abscissa is the option, the ordinate represents the number of people, and each broken line represents one question in this part.
It can be seen from the line chart that most users agree that the tool is easy to use, can improve learning efficiency, and enhances interest in learning, as shown in Figure 9.

Figure 9: Effect of using the tool.

Analyzing the three aspects of the questionnaire, the following results can be obtained: (1) The interactive effect and user experience of the tool are good. 90% of the survey respondents agree that “the interactive effect of the knowledge graph is good,” and more than half of them agree strongly. 80% of users consider the drawing style simple, attractive, and clear, and all users think that the recommendation interface of this tool is simple and clear, with 40% of users agreeing strongly with this view. (2) Very few of the survey respondents had heard of knowledge graphs before, but most are very interested in them. Among the 20 learners, only 30% of the users had heard of knowledge graphs, and the remaining 70% said that they had not heard of the concept before. Although most people are unfamiliar with knowledge graphs, 65% of users said they were “interested in knowledge graphs.” (3) The knowledge graph can present the structure of the course knowledge points and locate the key knowledge points. 14 users, 70% of the total, agree with the “accurate display of related learning resources and course system structure,” among whom 4 said they agree strongly with this view. Using knowledge graphs, 85% of users agree that “the key knowledge points of the course can be clearly presented.” (4) In the process of online learning, resource recommendation is very necessary, and users are willing to learn the recommended content. The questionnaire shows that all surveyed users had come into contact with recommendation before, and 95% of users like resource tools in the process of recommended learning, 11 of them agreeing with this view. 60% of users agree that “the tool can recommend in the order of the knowledge points in the course,” and 80% of users are “willing to learn the recommended resource content in the recommendation list.” (5) The tool can save learning time and improve learning efficiency. The questionnaire shows that 65% of users believe that “it can save time for learning this course,” and 14 people agree that “digital resource recommendation reduces the time to find related resources.” The users who agree that “this learning tool can improve learning efficiency” account for 75% of the total, indicating that most users believe that this tool can help improve learning efficiency.
## 5. Conclusion

This article studies the methods and applications of knowledge graphs in the recommendation of course digital resources. After an extensive literature review of knowledge graphs and recommendation techniques, the research background and related theories of knowledge graphs are analyzed, and recommendation models based on knowledge graphs are summarized. Through the design and development of the digital resource recommendation tool, the role and significance of the knowledge graph in the field of digital learning resource recommendation are verified. As learning resources become more and more abundant, the recommendation of digital resources is bound to become a trend in adult and lifelong learning, and knowledge graphs play a central role in this process. We describe the design ideas and the functional implementation of the knowledge graph-based resource recommendation tool. Three modules are designed: the course digital resource uploading module, the course knowledge graph presentation module, and the related resource recommendation module. On this basis, a tool for recommending course digital resources based on knowledge graphs is implemented. Through experiments on specific courses, the usability and learning effect of this tool for adult online learning are analyzed. The comparison shows that the tool can not only recommend relevant learning resources for learners, but its interactive knowledge graph design can also stimulate learners' enthusiasm for online learning, enhance their interest in learning, and play a particularly important role in the effectiveness of online learning. The experimental results show that the knowledge graph-based digital resource recommendation tool meets the expected goals and design requirements, achieving a significantly lower MAE value than the competing algorithms. The work also lays a foundation for future research on knowledge graph-based resource recommendation. The main innovation of this research lies in presenting digital curriculum resources in the form of a knowledge graph, which differs from the original linear presentation of knowledge points.

---

*Source: 1010122-2022-01-25.xml*
2022
# Analysis of Interaction of Multiple Cracks Based on Tip Stress Field Using Extended Finite Element Method **Authors:** Yuxiao Wang; Akbar A. Javadi; Corrado Fidelibus **Journal:** Journal of Applied Mathematics (2022) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2022/1010174 --- ## Abstract A new method is presented to study the interaction of multiple cracks, especially for the areas near crack tips by using the extended finite element method. In order to track the cracks, a new geometric tracking technique is proposed to track enriched elements and nodes along the crack instead of using the narrow band level set method. This allows to accurately determine enriched elements and nodes and calculate enrichment values. A method is proposed for constructing a multicrack matrix, which involves numbering enriched nodes of multiple cracks and solving the global stiffness matrix. In this approach, the stress fields around multiple cracks can be studied. The interaction integral method is employed to study the crack propagation and its direction by calculating the stress intensify factor. The developed model has been coded in MATLAB environment and validated against analytical solutions. The application of the model in the crack interaction study is demonstrated through a number of examples. The results illustrate the influence of the interaction of multiple cracks as they approach each other. --- ## Body ## 1. Introduction Crack propagation can be efficiently simulated by resorting to the eXtended Finite Element Method (XFEM), saving a great deal of computational time and costs, given that no remeshing and refinement are performed [1–8]. Great advantages are gained over other methods such as the standard Finite Element Method (FEM) [9, 10], Element-Free Galerkin (EFG) method [11, 12], Boundary Element Method (BEM) [13–15], and Discrete Element Method (DEM) [16, 17]. With XFEM, by enriching the nodes along the crack, additional degrees of freedom are given to reproduce the jump of the variables across the discontinuities. XFEM is associated with the Partition of Unity Method (PUM) [18–21], and the shape functions for the enriched elements consist of standard shape functions, Heaviside step functions, and near-tip asymptotic functions [2].In the context of XFEM, as a crack grows, the elements at the crack tips are sectioned and new elements containing a crack tip are formed. In order to deal with extending cracks, the narrow band level set method [22, 23] is used to track the crack path and to search for the advancing crack tips. This method can be sometimes inaccurate, especially when a crack forms a kink angle (see Section 2.1), so there is the need to find alternatives in order to overcome the shortcomings of the narrow band level set method.Many researchers in the past decades studied the growth and mutual interaction of multiple cracks, mainly focusing on method developments and practical field applications [24–26]. With reference to the mutual interaction of two cracks, Lawler [27] and Tanaka et al. [28] experimentally defined a relation between the crack propagation rate and the J integral [29]. Carpinteri and Monetto [30] employed BEM to model the propagation of multiple cracks and implemented a global nonlinear stress-strain relationship associated with the geometry changes for the extension, intersection, and coalescence of the cracks. Budyn et al. 
[31] studied the growth of multiple cracks in brittle materials and presented a displacement equation for intersecting cracks. Daux et al. [32] proposed a theory for branched and intersecting cracks based on XFEM. Fageehi and Alshoaibi [33] studied nonplanar crack growth of multiple cracks by developing a code using FEM. Broumand [34] presented two methods, based on XFEM, to accurately detect multiple cracks of diverse size, shape, and orientation in 2D elastic bodies. Pham and Weijermars [35] used the linear superposition method to calculate stress tensor fields with multiple pressure-loaded fractures and further developed analytical solutions for domains with large numbers of internally pressurized fractures and boreholes in a homogeneous elastic medium. Santoro et al. [36] discussed the response of a beam under static loads with multiple cracks and proposed an approach to evaluate the response in the presence of multiple cracks.

Even though an abundant literature is available concerning the numerical analysis of multiple cracks, the application of XFEM in this context is very limited. In this paper, a new method is proposed, based on XFEM, for the analysis of the propagation and interaction of multiple cracks in two-dimensional domains. The enriched nodes of the cracks are tracked by a proposed Geometric Tracking Technique (GTT). The enrichments for the shape functions are ascribed to the enriched nodes of all fractures. Different rules for the Gauss points apply for the elements crossed by cracks and for those containing tips. An approach is presented to solve multiple cracks: each crack in the domain is treated as a unit and merged into the overall multiple-crack framework by combining all enriched stiffness matrices into one matrix. A node numbering rule is presented for multiple cracks. The interaction between cracks is based on domain forms of the Interaction Integral Method (IIM) [37–40]. By studying the distancing and intersection of the integral areas around tips, the crack behaviors can be predicted. The method is implemented in a MATLAB code. In what follows, the theoretical basis, the steps of the analysis, and the numerical implementation of the method are reported. In addition, numerical simulations for validation are illustrated to show the effectiveness and robustness of the method.

The paper is structured as follows: in Section 2, GTT is presented and an example is set up to verify its effectiveness with respect to the narrow band level set method; in Section 3, combining GTT and XFEM, the numerical results of two standard models are compared to corresponding analytical solutions; in Section 4, the simulation scheme for multiple cracks and the features of the XFEM solution are presented; finally, in Section 5, the application of the method to multiple crack interaction and crack propagation prediction is illustrated. Concluding remarks are reported in Section 6.

## 2. Geometric Tracking Technique

In order to track the cracks in a domain, GTT is introduced for the accurate search and location of crack nodes and elements. The proposed technique can be used with different types of elements according to the given mesh and geometry. It is implemented in several steps. The first step involves the selection of a rectangular area $\Omega_w$ enclosing all the crack segments (each segment assumed linear), by checking the coordinates of the crack extremities (see Figure 1).
The area $\Omega_w$ can be represented by the set $S$:

(1) $S = \{(x, y) \in \Omega_w : x_{s1} \le x \le x_{s2},\ f(x_{s1}) \le y \le f(x_{s2})\}$.

Figure 1 Sketch of a simulation domain with $\Omega_w$, $\Omega_{sub_i}$, and segments $i$ and $i+1$; the equation of the line of segment $i$ is $A_i x + B_i y + C_i = 0$.

Each crack segment $i$, enclosed by the subdomain $\Omega_{sub_i}$, can be represented by a linear equation, as follows:

(2) $A_i x + B_i y + C_i = 0$,

where $A_i$, $B_i$, and $C_i$ are the coefficients of the straight line of segment $i$ of the crack. Subdomain $\Omega_{sub_i}$ can be represented by the set $S_{sub_i}$:

(3) $S_{sub_i} = \{(x, y) \in \Omega_{sub_i} : x_{sub_m} \le x \le x_{sub_n},\ f(x_{sub_m}) \le y \le f(x_{sub_n})\}$.

The enriched elements crossed by a crack can be located by finding the intersection between the crack linear segments (Equation (2)) and the boundaries of the elements. Each element $e$ has a given area and can be represented by the following set:

(4) $\Gamma_e = \{(x, y) \in \Omega_e : x_{e1} \le x \le x_{e2},\ f(x_{e1}) \le f(x) \le f(x_{e2})\}$.

Substituting $x_{e1}$ and $x_{e2}$ into Equation (2) yields $y_{e1}$ and $y_{e2}$:

(5) $y_{e1} = \dfrac{-C_i - A_i x_{e1}}{B_i}, \qquad y_{e2} = \dfrac{-C_i - A_i x_{e2}}{B_i}$.

If $y_{e1}$, or $y_{e2}$, is in the range of $[f(x_{e1}), f(x_{e2})]$, the crack certainly intersects the element $e$; therefore, $e$ is an enriched element. Alternatively, by substituting $f(x_{e1})$ and $f(x_{e2})$ in the same equation, one obtains

(6) $x'_{e1} = \dfrac{-C_i - B_i f(x_{e1})}{A_i}, \qquad x'_{e2} = \dfrac{-C_i - B_i f(x_{e2})}{A_i}$.

If $x'_{e1}$, or $x'_{e2}$, is in the range of $[x_{e1}, x_{e2}]$, element $e$ is enriched.

An example follows about the use of the technique for rectangular elements. With reference to Figure 2, where crack segment 1 and the enclosing subdomain $\Omega_{sub_1}$ of Figure 1 are shown, subdomain $\Omega_{sub_1}$ can be represented by the set $S_1$:

(7) $S_1 = \{(x, y) \in \Omega_{sub_1} : x_1 \le x \le x_3,\ f(x_1) \le y \le f(x_3)\}$.

Figure 2 Searching for elements intersecting crack segment 1.

The equation of the straight line of crack segment 1, enclosed by subdomain $\Omega_{sub_1}$, is

(8) $A_1 x + B_1 y + C_1 = 0$.

The set $\Gamma_{e_A}$ of element A is

(9) $\Gamma_{e_A} = \{(x, y) \in \Omega_{e_A} : x_{11} \le x \le x_{22},\ f(x_{11}) \le f(x) \le f(x_{22})\}$,

where $\Omega_{e_A}$ is the domain of element A. By substituting the $x$ coordinates of this element in Equation (2), the $y$ coordinate $y_P$ of the intersection node P can be obtained; it is

(10) $y_P = \dfrac{-C_1 - A_1 x_{11}}{B_1}$.

Since the coordinates $(x_{11}, y_P)$ of node P are included in the set $\Gamma_{e_A}$, it is proven that crack segment 1 certainly goes through element A. Similarly, by substituting the $y$ coordinate of the other intersection node Q (still in Figure 2) in Equation (2), the relative $x$ coordinate is

(11) $x_Q = \dfrac{-C_1 - B_1 f(x_{22})}{A_1}$.

It can be shown that both $x_Q$ and $y_Q$ are inside the domain of element B, so element B is also crossed by crack segment 1; therefore, element B is an enriched element.

Note that GTT, in association with XFEM, could potentially be used also in Nonlinear Fracture Mechanics (NLFM) problems. In fact, the technique is apt to track crack paths following any geometric shape, irrespective of the cracking process (linear or nonlinear).

### 2.1. Comparison with the Narrow Band Level Set Method

In what follows, an example is given to explain the advantages of using GTT when dealing with cracks having large kink angles, compared to the narrow band level set method.

In the narrow band level set method, circular ranges around crack tips, supposedly containing all the enriched elements, are defined first [41]. However, within those circles, some of the elements may not intersect the crack, so a narrow band is introduced for further sifting the enriched elements. The distances between nodes and cracks can be calculated by using the nodal coordinates and the linear equations of the crack segments. Normally, the nodes within the range of the narrow band are regarded as target nodes and are saved in the set of the enriched nodes. The level set functions of the enriched nodes are used to ascertain if these nodes are located within the narrow band.
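By contrast, the GTT check of Equations (5) and (6) requires only intersecting each crack segment line with the element edges. The following is a minimal MATLAB sketch of that test; the function name, the element bounds, and the segment bounding box are illustrative assumptions, and the sketch also assumes the segment is not parallel to either axis ($A_i \neq 0$, $B_i \neq 0$):

```matlab
function enriched = gtt_element_test(A, B, C, xe, ye, xsub, ysub)
% GTT_ELEMENT_TEST  Sketch of the edge-intersection test of Equations (5)-(6).
%   A, B, C      coefficients of the crack segment line A*x + B*y + C = 0
%   xe, ye       element bounds, e.g. xe = [xe1 xe2], ye = [ye1 ye2]
%   xsub, ysub   bounding box of the crack segment (subdomain Omega_sub)
enriched = false;
for x = xe                              % Equation (5): vertical element edges
    y = (-C - A*x)/B;
    if y >= ye(1) && y <= ye(2) && in_box(x, y, xsub, ysub)
        enriched = true; return
    end
end
for y = ye                              % Equation (6): horizontal element edges
    x = (-C - B*y)/A;
    if x >= xe(1) && x <= xe(2) && in_box(x, y, xsub, ysub)
        enriched = true; return
    end
end
end

function ok = in_box(x, y, xsub, ysub)
% The intersection must also lie within the segment's own subdomain.
ok = x >= xsub(1) && x <= xsub(2) && y >= ysub(1) && y <= ysub(2);
end
```

Elements for which the test returns true are stored as enriched elements; all other elements remain standard.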
With reference to Figure 3, where domain A and domain B encompass all the enriched elements, the linear equations of the two segments of the crack (Equation (2)) have coefficients 0.05, -1, and 1.7 ($A_1$, $B_1$, and $C_1$, respectively) for segment 1 and 0.65, -1, and -1.122 ($A_2$, $B_2$, and $C_2$, respectively) for segment 2.

Figure 3 Sketch for the comparison of the proposed tracking technique and the narrow band method.

The distance $d_1$ between node P(2,3) and crack segment 1 is

(12) $d_1 = \dfrac{|0.05 \times 2 - 1 \times 3 + 1.7|}{\sqrt{0.05^2 + 1^2}} \approx 1.199$.

The intersection node N between the line $x = 6$ and crack segment 2 has coordinates (6, 2.778). As 2.778 is between 2 and 3 and node N is on the edge of element D, element D is an enriched element and node Q is an enriched node.

The distance $d_2$ between node Q(7,2) and crack segment 2 is

(13) $d_2 = \dfrac{|0.65 \times 7 - 1 \times 2 - 1.122|}{\sqrt{0.65^2 + 1^2}} \approx 1.199$.

According to the above calculations, node P and node Q have the same distance to crack segments 1 and 2, respectively. Since node Q is on the enriched element D, the distance from node Q to the crack should be equal to or less than the size of the narrow band. As node P has the same distance to the crack as node Q, the distance from the crack to node P is also equal to or less than the size of the narrow band. However, it is shown in Figure 3 that node P is not an enriched node. So, in this case, the application of the narrow band level set method produces an error.

Different from the narrow band level set method, the proposed GTT determines all the enriched elements that crack segments pass through by judging whether the element edges intersect with the crack segment. All the elements are examined in this quest with respect to each crack segment, so it is ensured that all the enriched elements are picked as target elements, no extra elements are included, and no enriched elements are ignored.

With reference to Figure 3, the intersection node M of the line $x = 2$ and crack segment 1 has the coordinates (2, 1.9). The $y$ coordinate of element C ranges from 2 to 3, so element C is not regarded as an enriched element; therefore, node P is not counted as an enriched node. However, the range of the $y$ coordinate of the left edge of element F is from 1 to 2, including the $y$ coordinate of node M. So element F can be regarded as an enriched element, and the relative nodes are enriched nodes.

Based on the above comparison, one may state that the proposed GTT is accurate and effective and constitutes a robust procedure for tracking the crack growth trajectories.

## 3. Numerical Simulation of Single Cracks

With reference to the single crack in a solid domain in Figure 2, according to GTT, the elements A, B, C, and D can be part of a set of Heaviside-enriched elements (crossed by the crack) and tip-enriched elements (containing a crack tip). This set contains element numbers and corresponding node numbers. In the next step, those elements are classified into two subsets. The elements including a crack tip are removed from the initial set (except the ones that contain crack tips bordering the edge of the domain) and then inserted in another set. The remaining elements of the initial set are then considered Heaviside-enriched elements. If an element includes both a Heaviside-enriched node and a tip-enriched node, it is denoted either as a tip-enriched element or as the last Heaviside-enriched element before a tip-enriched element, so these two types of elements are differentiated. The enriched nodes normally include three types: Heaviside-enriched nodes, tip-enriched nodes, and mix-enriched nodes.
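The numbers used in this comparison can be verified with a few lines of MATLAB; the following is only a check of the arithmetic above, using the segment coefficients just given:

```matlab
% Check of the Figure 3 comparison: Equations (12)-(13) and node N.
A1 = 0.05; B1 = -1; C1 =  1.7;     % crack segment 1
A2 = 0.65; B2 = -1; C2 = -1.122;   % crack segment 2

d1 = abs(A1*2 + B1*3 + C1)/sqrt(A1^2 + B1^2);   % node P(2,3) to segment 1
d2 = abs(A2*7 + B2*2 + C2)/sqrt(A2^2 + B2^2);   % node Q(7,2) to segment 2
yN = (-C2 - A2*6)/B2;                           % intersection N of x = 6 with segment 2

fprintf('d1 = %.3f, d2 = %.3f, N = (6, %.3f)\n', d1, d2, yN);
% Both distances come out close to 1.2, and yN = 2.778, consistent with the text.
```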
Different types of enriched nodes are dealt with separately, according to the location of the elements.

After rearranging the Heaviside-enriched elements and the tip-enriched elements into different sets, an enrichment $\varphi_j$ of the shape function $N_j$ can be adopted to represent the jump of displacements across the crack surfaces [42]:

(14) $\varphi_j(x, y) = H(x, y)$,

where $H(x, y)$ is the step function that, for example, in the case of a horizontal crack, is equal to 1 when $y$ is greater than 0, and equal to -1 when $y$ is less than 0.

For tip-enriched nodes, the enrichment $\varphi_l$ can be expressed as a tip branch function [2, 43]:

(15) $\varphi_l(r, \theta) = B_l(r, \theta) = \left\{ \sqrt{r}\sin\dfrac{\theta}{2},\ \sqrt{r}\sin\dfrac{\theta}{2}\sin\theta,\ \sqrt{r}\cos\dfrac{\theta}{2},\ \sqrt{r}\cos\dfrac{\theta}{2}\sin\theta \right\}$,

where $B_l$ is the tip branch function ($l = 1, 2, 3, 4$) and $r$ and $\theta$ are the local polar coordinates at the crack tip.

The approximation for the displacement $u$ can be expressed by using the enrichments of Equations (14) and (15):

(16) $u(x, y) = \sum_{i=1}^{n_s} N_i(x, y)\, u_i + \sum_{j=1}^{n_{cut}} N_j(x, y)\,[H(x, y) - H(x_j, y_j)]\, a_j + \sum_{k=1}^{n_{tip1}} N_k(x, y) \sum_{l=1}^{4} [B_l^1(x, y) - B_l^1(x_k, y_k)]\, b_k^l + \sum_{k=1}^{n_{tip2}} N_k(x, y) \sum_{l=1}^{4} [B_l^2(x, y) - B_l^2(x_k, y_k)]\, b_k^l$,

where $u_i$ are the standard degrees of freedom (dofs); $a_j$, $b_k^1$, and $b_k^2$ are additional dofs; $H(x, y)$ are the enrichment values of Heaviside-enriched nodes; $B_l^1$ are the enrichment values of nodes around crack tip 1; $B_l^2$ are the enrichment values of nodes around crack tip 2; $N_i$ are the standard shape functions; $n_s$ is the number of standard nodes; $n_{cut}$ is the number of Heaviside-enriched nodes; and $n_{tip1}$ and $n_{tip2}$ are the sets of tip-enriched nodes for the first and second crack tips, respectively.

In order to demonstrate the validity of GTT combined with XFEM, in what follows, the solutions of two standard examples are reported. The obtained values of the Stress Intensity Factor (SIF) are compared with those descending from analytical solutions. IIM is used to calculate the SIF values.

### 3.1. Edge Crack

The first example refers to an $a$-long single-edge crack (Figure 4(a)). It is horizontal and runs from the left vertical boundary of a rectangular plate of width ($w$) 100 m and height ($h$) 100 m, loaded by vertical tractions $\sigma$ of 3.67 MPa on the top and bottom edges. Four-node quadrilateral elements are used. In order to examine the sensitivity of the model results to the element size, different structured meshes, all having equal-size square elements, are tested, of 50×50, 100×100, and 200×200 elements. The elastic modulus ($E$) and Poisson's ratio ($\nu$) are 15 GPa and 0.25, respectively. Cracks having lengths 3.5, 5.5, 7.5, 9.5, and 11.5 m are considered. The shaded plot of the von Mises stress $\sigma_{vm}$ for the case with a 7.5 m long crack and 100×100 elements is shown in Figure 4(b). The comparison between the numerical solution and the analytical solution in terms of SIF values is reported in Table 1. An analytical solution for the Mode-I SIF $K_I$ is [44]

(17) $K_I = C\,\sigma\sqrt{\pi a}$,

where $\sigma$ is the dominant vertical tensile stress (equal to the vertical tractions applied in the example), $a$ is the crack length, and $C$ is a correction coefficient, equal to

(18) $C = 1.12 - 0.231\dfrac{a}{w} + 10.55\left(\dfrac{a}{w}\right)^2 - 21.71\left(\dfrac{a}{w}\right)^3 + 30.382\left(\dfrac{a}{w}\right)^4$.

Figure 4 Single edge crack: (a) schematic of a $w \times L$ plate with an $a$-long horizontal edge crack, loaded by vertical tractions $\sigma$; (b) shaded plot of the von Mises stress $\sigma_{vm}$ for a 100×100 m² plate mesh and $a = 7.5$ m.
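Equations (17) and (18) can be evaluated directly for the crack lengths considered here. The MATLAB sketch below does so; it only illustrates the formula, and its output is of the same order as, but not necessarily identical to, the analytical column of Table 1, whose rounding is not documented:

```matlab
% Analytical Mode-I SIF for the edge-crack example, Equations (17)-(18).
sigma = 3.67e6;                       % vertical traction, Pa
w     = 100;                          % plate width, m
a     = [3.5 5.5 7.5 9.5 11.5];       % crack lengths, m

r  = a/w;
C  = 1.12 - 0.231*r + 10.55*r.^2 - 21.71*r.^3 + 30.382*r.^4;   % Equation (18)
KI = C.*sigma.*sqrt(pi*a);                                     % Equation (17)

disp([a.' KI.'/1e7])                  % second column in units of 10^7 Pa*m^0.5
```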
Table 1 Comparison between the SIF values (unit: $10^7$ Pa·m$^{1/2}$) of the numerical solution with those of the analytical solution for a plate with an edge crack.

| Crack length (m) | Numerical SIF, 50×50 mesh | Numerical SIF, 100×100 mesh | Numerical SIF, 200×200 mesh | Analytical SIF |
|---|---|---|---|---|
| 3.5 | 1.3569 | 1.3525 | 1.3701 | 1.3630 |
| 5.5 | 1.7296 | 1.7185 | 1.7096 | 1.7086 |
| 7.5 | 2.0699 | 2.0054 | 2.0010 | 1.9950 |
| 9.5 | 2.3976 | 2.3712 | 2.3595 | 2.2450 |
| 11.5 | 2.7247 | 2.6933 | 2.6824 | 2.4700 |

### 3.2. Central Crack

A plate with a central crack is loaded by an isotropic tensile stress $\sigma$ of 3.67 MPa (Figure 5(a)). The discrete length values selected for the crack are 3, 7, 11, 15, and 19 m. In Figure 5(b), the shaded plot of the von Mises stress predicted by using XFEM for the 15 m long crack and the 100×100-element mesh is shown. The corresponding analytical solution for the Mode-I SIF $K_I$ is [45]

(19) $K_I = \sigma\sqrt{\pi a}$,

with $\sigma$ here the dominant isotropic stress (equal to the normal tractions applied in the example).

Figure 5 Central crack: (a) schematic of a $w \times L$ plate with an $a$-long horizontal central crack, loaded by normal tractions $\sigma$ on all the edges; (b) shaded plot of the von Mises stress $\sigma_{vm}$ for a 100×100 m² plate mesh and $a = 15$ m.

In Table 2, the comparison between the analytical solution and the numerical solution for different meshes is reported.

Table 2 Comparison between the SIF values (unit: $10^7$ Pa·m$^{1/2}$) of the numerical solution with those of the analytical solution for a plate with a central crack.

| Crack length (m) | Numerical SIF, 50×50 mesh | Numerical SIF, 100×100 mesh | Numerical SIF, 200×200 mesh | Analytical SIF |
|---|---|---|---|---|
| 3.0 | 1.0789 | 0.7598 | 0.7726 | 0.7965 |
| 7.0 | 1.2619 | 1.1865 | 1.1971 | 1.2166 |
| 11.0 | 1.5760 | 1.5013 | 1.5037 | 1.5251 |
| 15.0 | 1.8404 | 1.7778 | 1.7832 | 1.7810 |
| 19.0 | 2.0752 | 2.0371 | 2.0189 | 2.0049 |

Observing the results in Tables 1 and 2, one may notice that when the mesh size is 100×100, the numerical solution is very close to the analytical solution, so there is no point in further refining the mesh, given that enough accuracy is gained. One may also state that, with the proposed method, the crack growth propagation and direction for the two examples are effectively predicted.
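As a quick check of Equation (19), the analytical column of Table 2 is recovered if $a$ is taken as half of the tabulated crack length, the usual convention for a central crack of total length $2a$; this interpretation is an assumption of the sketch below, not an explicit statement of the paper:

```matlab
% Analytical Mode-I SIF for the central-crack example, Equation (19),
% assuming a = L/2 for a tabulated total crack length L.
sigma = 3.67e6;                  % applied traction, Pa
L     = [3 7 11 15 19];          % tabulated crack lengths, m
KI    = sigma*sqrt(pi*L/2);      % Equation (19) with a = L/2

disp(KI/1e7)                     % ~0.797 1.217 1.526 1.781 2.005 (cf. Table 2)
```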
## 4. Numerical Simulation of Multiple Cracks

For a multiple-crack problem, a specific numbering rule is applied (see Table 3). The nodes of a crack $k$ are reported in a column set from $2+10(k-1)$ to $11+10(k-1)$; column 1 contains the numbers of the standard nodes; $a$ is the number of total standard nodes; $m$ represents the number of the last enriched node in the last crack; $m+1$ and $m+2$ are the numbers of Heaviside-enriched nodes; and $m+3$, $m+4$, $m+5$, and $m+6$ are the numbers of the tip-enriched nodes.

Table 3 Numbering scheme for the node-numbering matrix for the $k$th crack. Columns: 1, …, $2+10(k-1)$, $3+10(k-1)$, $4+10(k-1)$, $5+10(k-1)$, …, $10+10(k-1)$, $11+10(k-1)$, …. Entries: 1 …; 2 …; 3 … $m+1$ e.v. … $m+3$ e.v. … $m+6$ e.v. …; 4 … $m+2$ e.v. …; $a-1$ …; $a$ …. (e.v.: enriched value.)

The governing equation for the whole domain can be written as

(20) $\mathbf{K}\mathbf{u} = \mathbf{P}$,

where $\mathbf{P}$ is the global load matrix, $\mathbf{u}$ is the global displacement matrix of Equation (16), and $\mathbf{K}$ is the global stiffness matrix, consisting of the standard stiffness matrix and the enriched stiffness matrices of the cracks; in formula,

(21) $\mathbf{K} = \begin{bmatrix} K_0 & K_{0,1} & K_{0,2} & \cdots & K_{0,n-2} & K_{0,n-1} & K_{0,n} \\ K_{1,0} & K_1 & 0 & \cdots & 0 & 0 & 0 \\ K_{2,0} & 0 & K_2 & \cdots & 0 & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ K_{n-2,0} & 0 & 0 & \cdots & K_{n-2} & 0 & 0 \\ K_{n-1,0} & 0 & 0 & \cdots & 0 & K_{n-1} & 0 \\ K_{n,0} & 0 & 0 & \cdots & 0 & 0 & K_n \end{bmatrix}$,

in which $K_0$ is the standard stiffness matrix; $K_1, K_2, \ldots, K_n$ are the enriched stiffness matrices of cracks 1 to $n$, with $n$ being the number of cracks; and $K_{0,n}$ is the cross-term between the standard elements and the enriched elements.
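A minimal MATLAB sketch of the block layout of Equation (21) follows. The block sizes are arbitrary and the blocks are filled with placeholder values, so the sketch only shows where $K_0$, the per-crack blocks $K_1, \ldots, K_n$, and the coupling blocks $K_{0,n}$ and $K_{n,0}$ sit in the global matrix; it does not evaluate the element integrals of Equation (22):

```matlab
% Sparsity-pattern sketch of the global stiffness matrix of Equation (21).
n_std = 8;                 % number of standard dofs (placeholder)
n_enr = [4 6];             % enriched dofs of crack 1 and crack 2 (placeholders)
K = zeros(n_std + sum(n_enr));

K(1:n_std, 1:n_std) = ones(n_std);           % K0 block
ofs = n_std;
for c = 1:numel(n_enr)
    idx = ofs + (1:n_enr(c));
    K(idx, idx)     = ones(n_enr(c));        % K_c: enriched block of crack c
    K(1:n_std, idx) = ones(n_std, n_enr(c)); % K_{0,c}: coupling block
    K(idx, 1:n_std) = ones(n_enr(c), n_std); % K_{c,0}: coupling block
    ofs = ofs + n_enr(c);
end
spy(K)   % shows the arrow-shaped block pattern of Equation (21)
```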
The matrices are as follows:

(22) $K_0 = \int_\Omega B_i^T D B_j \, d\Omega,\ i, j = u; \qquad K_n = \int_\Omega B_i^T D B_j \, d\Omega,\ i, j = a, b; \qquad K_{0,n} = \int_\Omega B_i^T D B_j \, d\Omega,\ i, j = u, a, b$,

where $D$ is the elasticity matrix and $B_u$, $B_a$, and $B_b$ are strain differential operator matrices, respectively defined as

(23) $B_u = \begin{bmatrix} \partial N_i/\partial x & 0 \\ 0 & \partial N_i/\partial y \\ \partial N_i/\partial y & \partial N_i/\partial x \end{bmatrix}$, $\quad B_a = \begin{bmatrix} \partial\big[N_i(H - H(x_i, y_i))\big]/\partial x & 0 \\ 0 & \partial\big[N_i(H - H(x_i, y_i))\big]/\partial y \\ \partial\big[N_i(H - H(x_i, y_i))\big]/\partial y & \partial\big[N_i(H - H(x_i, y_i))\big]/\partial x \end{bmatrix}$, $\quad B_b = \begin{bmatrix} B_b^1 & B_b^2 & B_b^3 & B_b^4 \end{bmatrix}$, with $B_b^l = \begin{bmatrix} \partial\big[N_i(B_l - B_l(x_i, y_i))\big]/\partial x & 0 \\ 0 & \partial\big[N_i(B_l - B_l(x_i, y_i))\big]/\partial y \\ \partial\big[N_i(B_l - B_l(x_i, y_i))\big]/\partial y & \partial\big[N_i(B_l - B_l(x_i, y_i))\big]/\partial x \end{bmatrix}$, $l = 1, \ldots, 4$,

with $B_1$, $B_2$, $B_3$, and $B_4$ the tip branch functions around the crack tips and $B_1(x_i, y_i)$, $B_2(x_i, y_i)$, $B_3(x_i, y_i)$, and $B_4(x_i, y_i)$ the enrichment values of the tip branch functions at the tip-enriched nodes.

A Delaunay triangulation is adopted herein to partition the enriched elements [46]. For the solutions of the integrals, the number of Gauss points is 9 for the standard elements, 3 for the triangular Heaviside-enriched elements, and 7 for the triangular tip-enriched elements.

## 5. Interaction of Multiple Cracks

The proposed method is used to analyze the interaction among multiple cracks through the following illustrative examples.

The stress field around a crack tip is influenced by other cracks in proximity, thus leading to stress-shadowing effects and redirection of the crack propagation [47]. The SIF is useful to check whether a crack propagates and to predict the propagation trajectory [48–51]. IIM is often used to calculate SIF values since it has a high degree of accuracy compared with other methods [52–55], although the involved process is quite complex; for the application, it is required to solve an energy integral, based on the J integral method [29], covering a zone around the crack tip. Two states are considered: the present state (1) and the auxiliary state (2). If $\Omega$ is the domain of integration (the zone around the tip), bounded by $\Gamma$, for a local reference system $(x_1, x_2)$ of the crack, with origin at the tip and $x_1$ aligned with it, the interaction integral $I$ can be written as [2]

(24) $I = \int_\Gamma \left[ W^{(1,2)}\delta_{1j} - \sigma_{ij}^{(1)}\dfrac{\partial u_i^{(2)}}{\partial x_1} - \sigma_{ij}^{(2)}\dfrac{\partial u_i^{(1)}}{\partial x_1} \right] n_j \, d\Gamma$,

where $\delta_{1j}$ is the Kronecker delta, $n$ is the vector normal to $\Gamma$, and $W$ is the interaction strain energy, equal to [1, 3]

(25) $W^{(1,2)} = \sigma_{ij}^{(1)}\varepsilon_{ij}^{(2)} = \sigma_{ij}^{(2)}\varepsilon_{ij}^{(1)}$.

The integral $I$ can also be expressed by using the values of the SIFs $K_I$ and $K_{II}$, respectively, for Mode-I and Mode-II, for both the present state and the auxiliary state, as follows:

(26) $I = \dfrac{2}{E}\left( K_I^{(1)} K_I^{(2)} + K_{II}^{(1)} K_{II}^{(2)} \right)$,

where $E$ is Young's modulus, and

(27) $K_I = \dfrac{E}{2}\, I^{(1,\,\text{Mode-I})}$,

(28) $K_{II} = \dfrac{E}{2}\, I^{(1,\,\text{Mode-II})}$.

Equation (24) can be transformed as follows:

(29) $I = \int_\Omega \left[ \sigma_{ij}^{(1)}\dfrac{\partial u_i^{(2)}}{\partial x_1} + \sigma_{ij}^{(2)}\dfrac{\partial u_i^{(1)}}{\partial x_1} - W^{(1,2)}\delta_{1j} \right] \dfrac{\partial q}{\partial x_j} \, d\Omega$,

where $q(x_1, x_2)$ is a weighting function. The solution of the integral in Equation (29) is relatively straightforward.

With reference to Figure 6(a), the radius $R$ of the zone is assumed equal to the square root of three times the element area [56]. The weighting function $q$ is as follows:

(30) $q(r) = \begin{cases} 0, & r \ge R, \\ 1, & r < R, \end{cases}$

where $r$ originates from the crack tip. When a portion of the zone is outside the domain, as in Figure 6(b), the weighting function $q(x)$ on the edge of the domain is assumed equal to zero [57].

Figure 6 Zone of integration around a tip for IIM: (a) zone inside the domain; (b) zone outside the domain.

In the following subsections, two examples of interaction are described, with two and five interacting cracks, respectively.

### 5.1. Two-Crack Example

In this example, two cracks, $h$ and $l$, are considered. The domain, loading, material, and mesh are the same as those used in Section 3.1.
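Before the individual cases are described, the following minimal MATLAB sketch shows how Equations (27) and (28) convert interaction integral values into SIFs, together with the integration-zone radius used for the weighting function $q$ of Equation (30). The interaction integral values here are placeholders; in the actual model they come from evaluating Equation (29) over the zone:

```matlab
% SIFs from interaction integral values, Equations (27)-(28), and zone radius.
E    = 15e9;               % Young's modulus used in the examples, Pa
A_el = 1.0;                % element area of the 100x100 mesh (1 m x 1 m elements)
R    = sqrt(3*A_el);       % zone radius: square root of three times the element area

I_modeI  = 2.5e-3;         % placeholder interaction integral, auxiliary Mode-I state
I_modeII = 4.0e-4;         % placeholder interaction integral, auxiliary Mode-II state
KI  = (E/2)*I_modeI;       % Equation (27)
KII = (E/2)*I_modeII;      % Equation (28)

fprintf('R = %.2f m, KI = %.3e Pa*m^0.5, KII = %.3e Pa*m^0.5\n', R, KI, KII);
```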
In a first case (case 1, Figure 7(a)), the cracks are parallel, both 5.5 m long and 25 m mutually distant. In a second case (case 2, Figure 7(b)), they are still parallel only 5 m distant. A third and a fourth case (case 3, case 4) with a slanted crack follow. For all the cases, four-node quadrilateral elements are employed. The vertical traction at the top and at the bottom boundaries is 3.67 MPa.Figure 7 Schematic of the two-crack example: (a) case 1; (b) case 2. (a)(b)In case 1, the stress fields around the crack tips do not significantly interact. In case 2, the two stress fields overlap with mutual interference, producing a shadowing effect and causing an alteration of the SIFs resulting in the redirection of the crack propagation; in fact, given the mesh size, the zones for IIM overlap (Figure7(b)) and Equation (29) for crack h becomes (31)Ih=∫Ωhσh,ij1∂uh,i2∂xh,1+σh,ij2∂uh,i1∂xh,1−Wh1,2δh,1j∂qh∂xh,jdΩh,and, for crack l, (32)Il=∫Qσl,ij1∂ul,i2∂xl,1+σl,ij2∂ul,i1∂xl,1−Wl1,2δl,1j∂ql∂xl,jdQl.The overlapped integral domainIlh is then (33)Ilh=∫Ωlhσlh,ij1∂ulh,i2∂xlh,1+σlh,ij2∂ulh,i1∂xlh,1−Wlh1,2δlh,1j∂qlh∂xlh,jdΩlh,where Ωlh is the intersection zone. For Equations (31), (32), and (33), subscripts h and l indicate crack h and crack l, respectively, based on Equation (24). The remaining equations follow the same notation rule, so the comprehensive interaction integral for crack h is equal to the superposition of Ih and Ilh: (34)Ihh=Ih+Ilh,and the comprehensive interaction integral for crack l is (35)Ill=Il+Ilh.Thus, by employing Equations (27) and (28), SIFs in Mode-I and Mode-II for the two cracks are (36)KI,l=E2Ill,Mode‐I,KII,l=E2Ill,Mode‐II,KI,h=E2Ihh,Mode‐I,KII,h=E2Ihh,Mode‐II.The results of the XFEM calculations in terms of the von Mises stress for two cases are shown in Figures8 and 9. It can be seen that the cracks propagate horizontally (along the alignment of the cracks) only in case 1, whereas in case 2, the propagation directions deviate due to the mutual interaction. The results are in good agreement with those reported in the literature [58].Figure 8 Results of the two-crack example, case 1: (a) mesh; (b) shaded plot of the von Mises stressσvm (lengths in m). (a)(b)Figure 9 Results of the two-crack example, case 2: (a) mesh; (b) shaded plot of the von Mises stressσvm (lengths in m). (a)(b)In Figures10(a) and 10(b), the schemes of cases 3 and 4 are reported, respectively. There is a 5.5 m long horizontal crack (the “minor” crack, h) and a 10 m long slanted crack (inclined 45° with respect to the x-axis) (the “major” crack, l). In case 3, the major crack is rather far from the minor crack (50 m from the tip of h to the center of l). In case 4, the two cracks are closer. It is assumed that the propagation of crack h is affected by the other crack when this one lies within the domain of the interaction integral of h. The major crack l can be seen as a border “arresting” the effect on the stress field of the minor crack h, so the actual interaction integral domain for crack h is equivalent to the entire circular area minus the shaded area (Figure 10(b)). The interaction integral for crack h is as Equation (31).Figure 10 Schematic of the two-crack example: (a) case 3; (b) case 4. (a)(b)The interaction integralIΔh of the shaded area Δh of case 4 is (domain ΩΔh) (37)IΔh=∫ΩΔhσΔh,ij1∂uΔh,i2∂xΔh,1+σΔh,ij2∂uΔh,i1∂xΔh,1−WΔh1,2δΔh,1j∂qΔh∂xΔh,jdΩΔh.Thus, the final interaction integral of crackh is equal to (38)Ih′=Ih−IΔh.The results for these two cases are shown in Figure11. 
It can be seen that in case 3, the minor crack propagates along the initial horizontal direction, while in case 4, when the interaction integral domain includes the other crack, the propagation trajectory deviates from the horizontal along the shortest pathway towards the major crack, in accordance with the results reported in the literature [3].

Figure 11 Shaded plots of the von Mises stress $\sigma_{vm}$ for the two-crack example: (a) case 3; (b) case 4 (lengths in m).

### 5.2. Multiple-Crack Example

In order to study the interaction of more complicated patterns with more than two cracks, two cases are considered in what follows. In case 1, there are five parallel edge cracks. Again, the domain, loading, material, and mesh are the same as in the previous example. The results in terms of von Mises stress are shown in Figure 12(a). It can be noticed that the cracks far from the horizontal middle line have a higher stress concentration and also show a larger deflection angle, while the cracks near to it have a smaller stress concentration and a mild deflection. Obviously, given the symmetry, the crack right along the middle line has no deflection. In front of the crack tips, the stress fields join to form a relatively large symmetric shadow area.

Figure 12 Shaded plots of the von Mises stress $\sigma_{vm}$ for the multiple-crack example: (a) 5 parallel edge cracks (case 1); (b) 5 randomly located cracks (case 2) (lengths in m).

In case 2, five slanted cracks are randomly distributed. Similar to case 1, the farther the crack is from the middle line, the stronger the stress concentration; some of the stress fields intersect each other to form a large shadow area (Figure 12(b)). However, the stress concentration area is smaller than that in case 1, owing to the weaker stress-shadowing effect, given the larger distances among the cracks. In fact, as the cracks propagate, they get closer to each other, and the shadowing effect starts to increase and becomes visible, as shown in Figure 13. The consequential effect is that the shadowing areas extend and intersect with the other ones, and the values of the stress concentration also increase.

Figure 13 Shaded plots of the von Mises stress $\sigma_{vm}$ for the multiple-crack example, case 2, for an increased propagation (lengths in m).

## 6. Conclusion

This paper presents the development of a method for the analysis of the propagation and interaction of multiple cracks. The method consists of the combination of a Geometric Tracking Technique (GTT) with an eXtended Finite Element Method (XFEM) formulation. The application of the method is illustrated by simulating three different examples: a single crack, two cracks, and multiple cracks. The results of the first two examples are in good agreement with the corresponding solutions available in the literature. The third example is proposed to simulate the interaction among multiple cracks. The Interaction Integral Method (IIM) is employed to derive the Stress Intensity Factors (SIFs), useful for the prediction of the magnitude and direction of the crack propagation. From the results, one may conclude that when the stress field around a crack tip is influenced by another crack in proximity, the propagation trajectory deviates from the crack alignment, thus producing a stress-shadowing effect. The closer the cracks are, the stronger this effect is. The patterns of the computed stress distributions reflect the interaction among the cracks well and as such are evidence of the good quality of the proposed method.

---

*Source: 1010174-2022-12-22.xml*
--- ## Abstract A new method is presented to study the interaction of multiple cracks, especially for the areas near crack tips by using the extended finite element method. In order to track the cracks, a new geometric tracking technique is proposed to track enriched elements and nodes along the crack instead of using the narrow band level set method. This allows to accurately determine enriched elements and nodes and calculate enrichment values. A method is proposed for constructing a multicrack matrix, which involves numbering enriched nodes of multiple cracks and solving the global stiffness matrix. In this approach, the stress fields around multiple cracks can be studied. The interaction integral method is employed to study the crack propagation and its direction by calculating the stress intensify factor. The developed model has been coded in MATLAB environment and validated against analytical solutions. The application of the model in the crack interaction study is demonstrated through a number of examples. The results illustrate the influence of the interaction of multiple cracks as they approach each other. --- ## Body ## 1. Introduction Crack propagation can be efficiently simulated by resorting to the eXtended Finite Element Method (XFEM), saving a great deal of computational time and costs, given that no remeshing and refinement are performed [1–8]. Great advantages are gained over other methods such as the standard Finite Element Method (FEM) [9, 10], Element-Free Galerkin (EFG) method [11, 12], Boundary Element Method (BEM) [13–15], and Discrete Element Method (DEM) [16, 17]. With XFEM, by enriching the nodes along the crack, additional degrees of freedom are given to reproduce the jump of the variables across the discontinuities. XFEM is associated with the Partition of Unity Method (PUM) [18–21], and the shape functions for the enriched elements consist of standard shape functions, Heaviside step functions, and near-tip asymptotic functions [2].In the context of XFEM, as a crack grows, the elements at the crack tips are sectioned and new elements containing a crack tip are formed. In order to deal with extending cracks, the narrow band level set method [22, 23] is used to track the crack path and to search for the advancing crack tips. This method can be sometimes inaccurate, especially when a crack forms a kink angle (see Section 2.1), so there is the need to find alternatives in order to overcome the shortcomings of the narrow band level set method.Many researchers in the past decades studied the growth and mutual interaction of multiple cracks, mainly focusing on method developments and practical field applications [24–26]. With reference to the mutual interaction of two cracks, Lawler [27] and Tanaka et al. [28] experimentally defined a relation between the crack propagation rate and the J integral [29]. Carpinteri and Monetto [30] employed BEM to model the propagation of multiple cracks and implemented a global nonlinear stress-strain relationship associated with the geometry changes for the extension, intersection, and coalescence of the cracks. Budyn et al. [31] studied the growth of multiple cracks in brittle materials and presented a displacement equation for intersecting cracks. Daux et al. [32] proposed a theory for branched and intersecting cracks based on XFEM. Fageehi and Alshoaibi [33] studied nonplanar crack growth of multiple cracks by developing a code using FEM. 
Broumand [34] presented two methods, based on XFEM, to accurately detect multiple cracks of diverse size, shape, and orientation in 2D elastic bodies. Pham and Weijermars [35] used the linear superposition method to calculate stress tensor fields with multiple pressure-loaded fractures and further developed analytical solutions for the domains with large numbers of internally pressurized fractures and boreholes in a homogeneous elastic medium. Santoro et al. [36] discussed the response from a beam under static loads with multiple cracks and proposed an approach to evaluate the response in the presence of multiple cracks.Even though an abundant literature is available concerning the numerical analysis of multiple cracks, the application of XFEM in this context is very limited. In this paper, a new method is proposed, based on XFEM, for the analysis of the propagation and interaction of multiple cracks in two-dimensional domains. The enriched nodes of the cracks are tracked by a proposed Geometric Tracking Technique (GTT). The enrichments for shape functions are ascribed to the enriched nodes of all fractures. Different rules for the Gauss points apply for the elements crossed by cracks and for those containing tips. An approach is presented to solve multiple cracks. In this approach, each crack in the domain is set as a unit prone to coalesce into the whole framework of the multiple cracks by combining all enriched stiffness matrices into one matrix. A node numbering rule is presented for multiple cracks. The interaction between cracks is based on domain forms of the Interaction Integral Method (IIM) [37–40]. By studying the distancing and intersection of integral areas around tips, the crack behaviors can be predicted. The method is implemented into a MATLAB code. In what follows, the theoretical basis, steps of the analysis, and numerical implementation of the method are reported. In addition, numerical simulations for validation are illustrated to show the effectiveness and robustness of the method.The paper is structured as follows: in Section2, GTT is presented and an example is set up to verify the effectiveness with respect to the narrow band level set method; in Section 3, combining GTT and XFEM, the numerical results of two standard models are compared to corresponding analytical solutions; in Section 4, the simulation scheme for multiple cracks and the features of the XFEM solution are also presented; finally, in Section 5, the application of the method to multiple crack interaction and crack propagation prediction is illustrated. Concluding remarks are reported in Section 6. ## 2. Geometric Tracking Technique In order to track the cracks in a domain, GTT is introduced for the accurate search and location of crack nodes and elements. The proposed technique can be used with different types of elements according to the given mesh and geometry. It is implemented in several steps. The first step involves of the selection of a rectangular areaΩw enclosing all the crack segments (each segment assumed linear), by checking the coordinates of the crack extremities (see Figure 1). 
The area Ωw can be represented by the set S: (1)S=x,y∈Ωwxs1≤x≤xs2,fxs1≤y≤fxs2.Figure 1 Sketch of a simulation domain withΩw, Ωsubi, and segments i and i+1; the equation of the line of segment i is Aix+Biy+Ci=0.Each crack segmenti, enclosed by the subdomain Ωsubi, can be represented by a linear equation, as follows: (2)Aix+Biy+Ci=0,whereAi, Bi, and Ci are the coefficients of the straight line of segment i of the crack.SubdomainΩsubi can be represented by the set Ssubi: (3)Ssubi=x,y∈Ωsubixsubm≤x≤xsubn,fxsubm≤y≤fxsubn.The enriched elements crossed by a crack can be located by finding the intersection between the crack linear segments (Equation (2)) and the boundaries of the elements. Each element e has a given area and can be represented by the following set: (4)Γe=x,y∈Ωexe1≤x≤xe2,fxe1≤fx≤fxe2.Substitutingxe1 and xe2 into Equation (2) yields ye1 and ye2: (5)ye1=−Ci−Aixe1Bi,ye2=−Ci−Aixe2Bi.Ifye1, or ye2, is in the range of fxe1,fxe2, the crack certainly intersects the element e; therefore, e is an enriched element. Alternatively, by substituting fxe1 and fxe2 in the same equation, one obtains (6)xe1′=−Ci−Aifxe1Bi,xe2′=−Ci−Aifxe1Bi.Ifxe1′, or xe2′, is in the range of xe1,xe2, element e is enriched.An example follows about the use of the technique for rectangular elements. With reference to Figure2, where crack segment 1 and enclosing subdomain Ωsub1 of Figure 1 are shown, subdomain Ωsub1 can be represented by the set S1: (7)S1=x,y∈Ωsub1x1≤x≤x3,fx1≤y≤fx3.Figure 2 Searching for elements intersecting crack segment 1.The equation of the straight line of crack segment 1, enclosed by subdomainΩsub1, is (8)A1x+B1y+C1=0.The setΓeA of element A is (9)ΓeA=x,y∈ΩeAx11≤x≤x22,fx11≤fx≤fx22,whereΩeA is the domain of element A. By substituting the x coordinates of this element in Equation (2), the coordinate y (P) of the intersection node P can be obtained; it is (10)yP=−C1−A1x11B1.Since the coordinatesx11,yP of node P are included by the set ΓeA, it is proven that the crack segment 1 certainly goes through element A. Similarly, by substituting the y coordinate of the other intersection node Q (still in Figure 2) in Equation (2), the relative x coordinate is (11)xQ=−C1−B1fx22A1.It can be shown that bothxQ and yQ are inside the domain of element B, so element B is also crossed by the crack segment 1; therefore, element B is an enriched element.Note that GTT, in association with XFEM, could be potentially used also in Nonlinear Fracture Mechanics (NLFM) problems. In fact, the technique is apt to track crack paths following any geometric shape, irrespective of the cracking process (linear or nonlinear). ### 2.1. Comparison with the Narrow Band Level Set Method In what follows, an example is given to explain the advantages in using GTT when dealing with cracks having large kink angles if compared to the narrow band level set method.In the narrow band level set method, circular ranges around crack tips, supposedly containing all the enriched elements, are defined first [41]. However, within those circles, some of the elements inside may not intersect the crack, so a narrow band is introduced for further sifting the enriched elements. The distances between nodes and cracks can be calculated by using the nodal coordinates and the linear equations of the crack segments. Normally, the nodes within the range of the narrow band are regarded as target nodes and are saved in the set of the enriched nodes. The level set functions of the enriched nodes are used to ascertain if these nodes are located within the narrow band. 
With reference to Figure 3, where domain A and domain B encompass all the enriched elements, the linear equations of the two segments of the crack (Equation (2)) have coefficients 0.05, -1, and 1.7 (A1, B1, and C1, respectively) and 0.65, -1, and -1.122 (A2, B2, and C2, respectively) for segment 1 and segment 2, respectively.Figure 3 Sketch for the comparison of the proposed tracking technique and the narrow band method.The distanced1 between node P2,3 and crack segment 1 is (12)d1=0.05×2−1×3+1.70.052+1=1.199.The intersection node N between the linex=6 and crack segment 2 has coordinates (6, 2.778). As 2.778 is between 2 and 3 and node N is on the edge of element D, the element D is an enriched element and node Q is an enriched node.The distanced2 between node Q7,2 and crack segment 2 is (13)d2=0.65×7−1×2−1.1220.652+1=1.199.According to the above calculations, node P and node Q have the same distance to crack segments 1 and 2, respectively. Since node Q is on the enriched element D, the distance from node Q to the crack should be equal to or less than the size of the narrow band. As node P has the same distance to the crack as node Q, the distance from the crack to node P is also equal to or less than the size of the narrow band. However, it is shown in Figure3 that node P is not an enriched node. So, in this case, the application of the narrow band level set method produces an error.Different from the narrow band level set method, the proposed GTT determines all the enriched elements that crack segments pass through by judging whether the element edges intersect with the crack segment. All the elements are examined for this quest with respect to each crack segment, so it is ensured that all the enriched elements are picked as target elements, no extra elements are included, and no enriched elements are ignored.With reference to Figure3, the intersection node M of line x=2 and crack segment 1 has the coordinates (2, 1.9). The y coordinate of element C ranges from 2 to 3, so element C is not regarded as an enriched element; therefore, node P is not be counted as an enriched node. However, the range of the y coordinate of the left edge of element F is from 1 to 2, including the y coordinate of node M. So element F can be regarded as an enriched element, and the relative nodes are enriched nodes.Based on the above comparison, one may state that the proposed GTT is accurate and effective and constitutes a robust procedure for tracking the crack growth trajectories. ## 2.1. Comparison with the Narrow Band Level Set Method In what follows, an example is given to explain the advantages in using GTT when dealing with cracks having large kink angles if compared to the narrow band level set method.In the narrow band level set method, circular ranges around crack tips, supposedly containing all the enriched elements, are defined first [41]. However, within those circles, some of the elements inside may not intersect the crack, so a narrow band is introduced for further sifting the enriched elements. The distances between nodes and cracks can be calculated by using the nodal coordinates and the linear equations of the crack segments. Normally, the nodes within the range of the narrow band are regarded as target nodes and are saved in the set of the enriched nodes. The level set functions of the enriched nodes are used to ascertain if these nodes are located within the narrow band. 
With reference to Figure 3, where domain A and domain B encompass all the enriched elements, the linear equations of the two segments of the crack (Equation (2)) have coefficients 0.05, -1, and 1.7 (A1, B1, and C1, respectively) and 0.65, -1, and -1.122 (A2, B2, and C2, respectively) for segment 1 and segment 2, respectively.Figure 3 Sketch for the comparison of the proposed tracking technique and the narrow band method.The distanced1 between node P2,3 and crack segment 1 is (12)d1=0.05×2−1×3+1.70.052+1=1.199.The intersection node N between the linex=6 and crack segment 2 has coordinates (6, 2.778). As 2.778 is between 2 and 3 and node N is on the edge of element D, the element D is an enriched element and node Q is an enriched node.The distanced2 between node Q7,2 and crack segment 2 is (13)d2=0.65×7−1×2−1.1220.652+1=1.199.According to the above calculations, node P and node Q have the same distance to crack segments 1 and 2, respectively. Since node Q is on the enriched element D, the distance from node Q to the crack should be equal to or less than the size of the narrow band. As node P has the same distance to the crack as node Q, the distance from the crack to node P is also equal to or less than the size of the narrow band. However, it is shown in Figure3 that node P is not an enriched node. So, in this case, the application of the narrow band level set method produces an error.Different from the narrow band level set method, the proposed GTT determines all the enriched elements that crack segments pass through by judging whether the element edges intersect with the crack segment. All the elements are examined for this quest with respect to each crack segment, so it is ensured that all the enriched elements are picked as target elements, no extra elements are included, and no enriched elements are ignored.With reference to Figure3, the intersection node M of line x=2 and crack segment 1 has the coordinates (2, 1.9). The y coordinate of element C ranges from 2 to 3, so element C is not regarded as an enriched element; therefore, node P is not be counted as an enriched node. However, the range of the y coordinate of the left edge of element F is from 1 to 2, including the y coordinate of node M. So element F can be regarded as an enriched element, and the relative nodes are enriched nodes.Based on the above comparison, one may state that the proposed GTT is accurate and effective and constitutes a robust procedure for tracking the crack growth trajectories. ## 3. Numerical Simulation of Single Cracks With reference to the single crack in a solid domain in Figure2, according to GTT, the elements A, B, C, and D can be part of a set of Heaviside-enriched elements (crack passing through) and tip-enriched elements (crack tips stay). This set contains element numbers and corresponding node numbers. In the next step, those elements are classified into two subsets. The elements including a crack tip are removed from the initial set (except the ones that contain crack tips bordering the edge of the domain) and then inserted in another set. The remaining elements of the initial set are then considered Heaviside-enriched elements. If an element includes both a Heaviside-enriched node and a tip-enriched node, it is denoted as a tip-enriched element or as the last Heaviside-enriched element before a tip-enriched element, so these two types of elements are differentiated. The enriched nodes normally include three types: Heaviside-enriched nodes, tip-enriched nodes, and mix-enriched nodes. 
Different types of enriched nodes are separately dealt with, according to the location of the elements. After rearranging the Heaviside-enriched elements and the tip-enriched elements into different sets, an enrichment φj of the shape function Nj can be adopted to represent the jump of displacements across the crack surfaces [42]:
$$ \varphi_j(x, y) = H(x, y), \tag{14} $$
where H(x, y) is the step function that, for example, in the case of a horizontal crack, is equal to 1 when y is greater than 0 and equal to -1 when y is less than 0. For tip-enriched nodes, the enrichment φl can be expressed as a tip branch function [2, 43]:
$$ \varphi_l(r, \theta) = B_l(r, \theta) = \left\{ \sqrt{r}\sin\frac{\theta}{2},\; \sqrt{r}\sin\frac{\theta}{2}\sin\theta,\; \sqrt{r}\cos\frac{\theta}{2},\; \sqrt{r}\cos\frac{\theta}{2}\sin\theta \right\}, \tag{15} $$
where B_l is the tip branch function (l = 1, 2, 3, 4) and r and θ are the local polar coordinates at the crack tip. The approximation for the displacement u can be expressed by using the enrichments of Equations (14) and (15):
$$ u(x, y) = \sum_{i=1}^{n_s} N_i(x, y)\, u_i + \sum_{j=1}^{n_{cut}} N_j(x, y)\left[H(x, y) - H(x_j, y_j)\right] a_j + \sum_{k=1}^{n_{tip1}} N_k(x, y) \sum_{l=1}^{4}\left[B_l^{1}(x, y) - B_l^{1}(x_k, y_k)\right] b_k^{l} + \sum_{k=1}^{n_{tip2}} N_k(x, y) \sum_{l=1}^{4}\left[B_l^{2}(x, y) - B_l^{2}(x_k, y_k)\right] b_k^{l}, \tag{16} $$
where u_i are the standard degrees of freedom (dofs); a_j, b_k^1, and b_k^2 are additional dofs; H(x, y) are the enrichment values of Heaviside-enriched nodes; B_l^1 are the enrichment values of nodes around crack tip 1; B_l^2 are the enrichment values of nodes around crack tip 2; N_i are the standard shape functions; n_s is the number of standard nodes; n_cut is the number of Heaviside-enriched nodes; and n_tip1 and n_tip2 are the sets of tip-enriched nodes for the first and second crack tips, respectively.

In order to demonstrate the validity of GTT combined with XFEM, in what follows, the solutions of two standard examples are reported. A comparison is made between the obtained values of the Stress Intensity Factor (SIF) and those descending from analytical solutions. IIM is used to calculate the SIF values.
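As a concrete illustration of the enrichments in Equations (14) and (15), the short sketch below (plain Python, not part of the paper) evaluates the step function and the four branch functions; the shifted forms H − H(x_j, y_j) and B_l − B_l(x_k, y_k) used in Equation (16) are then simple differences of these values.

```python
import math

def heaviside_enrichment(signed_distance):
    """Step function H of Equation (14): +1 on one side of the crack, -1 on the other."""
    return 1.0 if signed_distance > 0.0 else -1.0

def tip_branch_functions(r, theta):
    """The four crack-tip branch functions B_l(r, theta) of Equation (15), in the
    order listed in the text (r, theta: local polar coordinates at the crack tip)."""
    sr = math.sqrt(r)
    return (sr * math.sin(theta / 2.0),
            sr * math.sin(theta / 2.0) * math.sin(theta),
            sr * math.cos(theta / 2.0),
            sr * math.cos(theta / 2.0) * math.sin(theta))

# Sample evaluation directly ahead of the tip (theta = 0) at r = 0.25:
print(heaviside_enrichment(+0.3))        # 1.0
print(tip_branch_functions(0.25, 0.0))   # (0.0, 0.0, 0.5, 0.0)
```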
### 3.1. Edge Crack

The first example refers to an a-long single-edge crack (Figure 4(a)). It is horizontal and runs from the left vertical boundary of a rectangular plate of width (w) 100 m and height (h) 100 m, loaded by vertical tractions σ of 3.67 MPa on the top and bottom edges. Four-node quadrilateral elements are used. In order to examine the sensitivity of the model results to the element size, different structured meshes, all having equal-size square elements, are tested: 50×50, 100×100, and 200×200 elements. The elastic modulus (E) and Poisson's ratio (ν) are 15 GPa and 0.25, respectively. Cracks having lengths 3.5, 5.5, 7.5, 9.5, and 11.5 m are considered. The shaded plot of the von Mises stress σvm for the case with a 7.5 m long crack and 100×100 elements is shown in Figure 4(b). The comparison between the numerical solution and the analytical solution in terms of SIF values is reported in Table 1. An analytical solution for the Mode-I SIF K_I is [44]
$$ K_I = C \sigma \sqrt{\pi a}, \tag{17} $$
where σ is the dominant vertical tensile stress (equal to the vertical tractions applied in the example), a is the crack length, and C is a correction coefficient, equal to
$$ C = 1.12 - 0.231\left(\frac{a}{w}\right) + 10.55\left(\frac{a}{w}\right)^2 - 21.71\left(\frac{a}{w}\right)^3 + 30.382\left(\frac{a}{w}\right)^4. \tag{18} $$

Figure 4 Single edge crack: (a) schematic of a w × h plate with an a-long horizontal edge crack, loaded by vertical tractions σ; (b) shaded plot of the von Mises stress σvm for a 100×100 m² plate mesh and a = 7.5 m.

Table 1 Comparison between the SIF values (unit: 10^7 Pa·m^1/2) of the numerical solution and those of the analytical solution for a plate with an edge crack.

| Length (m) | Numerical, mesh 50×50 | Numerical, mesh 100×100 | Numerical, mesh 200×200 | Analytical |
| --- | --- | --- | --- | --- |
| 3.5 | 1.3569 | 1.3525 | 1.3701 | 1.3630 |
| 5.5 | 1.7296 | 1.7185 | 1.7096 | 1.7086 |
| 7.5 | 2.0699 | 2.0054 | 2.0010 | 1.9950 |
| 9.5 | 2.3976 | 2.3712 | 2.3595 | 2.2450 |
| 11.5 | 2.7247 | 2.6933 | 2.6824 | 2.4700 |

### 3.2. Central Crack

A plate with a central crack is loaded by an isotropic tensile stress σ of 3.67 MPa (Figure 5(a)). The discrete length values selected for the crack are 3, 7, 11, 15, and 19 m. In Figure 5(b), the shaded plot of the von Mises stress predicted by using XFEM for the 15 m long crack and the 100×100-element mesh is shown. The corresponding analytical solution for the Mode-I SIF K_I is [45]
$$ K_I = \sigma \sqrt{\pi a}, \tag{19} $$
with σ here the dominant isotropic stress (equal to the normal tractions applied in the example).

Figure 5 Central crack: (a) schematic of a w × h plate with an a-long horizontal central crack, loaded by normal tractions σ on all the edges; (b) shaded plot of the von Mises stress σvm for a 100×100 m² plate mesh and a = 15 m.

In Table 2, the comparison between the analytical solution and the numerical solution for different meshes is reported.

Table 2 Comparison between the SIF values (unit: 10^7 Pa·m^1/2) of the numerical solution and those of the analytical solution for a plate with a central crack.

| Length (m) | Numerical, mesh 50×50 | Numerical, mesh 100×100 | Numerical, mesh 200×200 | Analytical |
| --- | --- | --- | --- | --- |
| 3.0 | 1.0789 | 0.7598 | 0.7726 | 0.7965 |
| 7.0 | 1.2619 | 1.1865 | 1.1971 | 1.2166 |
| 11.0 | 1.5760 | 1.5013 | 1.5037 | 1.5251 |
| 15.0 | 1.8404 | 1.7778 | 1.7832 | 1.7810 |
| 19.0 | 2.0752 | 2.0371 | 2.0189 | 2.0049 |

Observing the results in Tables 1 and 2, one may notice that when the mesh is 100×100, the numerical solution is already very close to the analytical solution, so there is no point in further refining the mesh, given that enough accuracy is gained. One may also state that, with the proposed method, the crack growth propagation and direction for the two examples are effectively predicted.
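The analytical expressions in Equations (17)–(19) are easy to evaluate directly; the sketch below (plain Python, not from the paper) computes them for the geometries above, assuming that for the central crack a is the half-length, which matches the analytical column of Table 2 to within rounding.

```python
import math

SIGMA = 3.67e6  # applied traction, Pa
W = 100.0       # plate width, m

def k1_edge(a, w=W, sigma=SIGMA):
    """Mode-I SIF for the edge crack, Equations (17)-(18); a is the crack length."""
    x = a / w
    c = 1.12 - 0.231 * x + 10.55 * x**2 - 21.71 * x**3 + 30.382 * x**4
    return c * sigma * math.sqrt(math.pi * a)

def k1_central(length, sigma=SIGMA):
    """Mode-I SIF for the central crack, Equation (19), taking a as the half-length."""
    a = length / 2.0
    return sigma * math.sqrt(math.pi * a)

for length in (3.0, 7.0, 11.0, 15.0, 19.0):
    print(f"central crack, length {length:4.1f} m: {k1_central(length)/1e7:.4f} x 10^7 Pa*m^0.5")
# -> 0.7967, 1.2169, 1.5254, 1.7815, 2.0048 (compare the analytical column of Table 2)

print(f"edge crack, a = 7.5 m: {k1_edge(7.5)/1e7:.4f} x 10^7 Pa*m^0.5")
```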
## 4. Numerical Simulation of Multiple Cracks

For a multiple-crack problem, a specific numbering rule is applied (see Table 3). The nodes of crack k are reported in the columns from 2+10(k−1) to 11+10(k−1); column 1 contains the numbers of the standard nodes; a is the number of total standard nodes; m represents the number of the last enriched node of the previous crack; m+1 and m+2 are the numbers of the Heaviside-enriched nodes; and m+3, m+4, m+5, and m+6 are the numbers of the tip-enriched nodes.

Table 3 Numbering scheme for the node-numbering matrix for the kth crack. The first column lists the standard node numbers 1, 2, …, a; the columns from 2+10(k−1) to 11+10(k−1) store, for crack k, the Heaviside-enriched node numbers (m+1, m+2) and the tip-enriched node numbers (m+3, …, m+6), each followed by its enriched value (e.v.).

The governing equation for the whole domain can be written as
$$ K u = P, \tag{20} $$
where P is the global load matrix, u is the global displacement matrix of Equation (16), and K is the global stiffness matrix, consisting of the standard stiffness matrix and the enriched stiffness matrices of the cracks; in formula,
$$ K = \begin{bmatrix} K_0 & K_{0,1} & K_{0,2} & \cdots & K_{0,n-2} & K_{0,n-1} & K_{0,n} \\ K_{1,0} & K_1 & 0 & \cdots & 0 & 0 & 0 \\ K_{2,0} & 0 & K_2 & \cdots & 0 & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ K_{n-2,0} & 0 & 0 & \cdots & K_{n-2} & 0 & 0 \\ K_{n-1,0} & 0 & 0 & \cdots & 0 & K_{n-1} & 0 \\ K_{n,0} & 0 & 0 & \cdots & 0 & 0 & K_n \end{bmatrix}, \tag{21} $$
in which K_0 is the standard stiffness matrix; K_1, K_2, …, K_n are the enriched stiffness matrices of cracks 1 to n, with n being the number of cracks; and K_{0,n} is the cross-term between the standard elements and the enriched elements.

The matrices are as follows:
$$ K_0 = \int_\Omega B_i^T D B_j \, d\Omega, \quad i, j = u; \qquad K_n = \int_\Omega B_i^T D B_j \, d\Omega, \quad i, j = a, b; \qquad K_{0,n} = \int_\Omega B_i^T D B_j \, d\Omega, \quad i, j = u, a, b, \tag{22} $$
where D is the elasticity matrix and B_u, B_a, and B_b are the strain differential operator matrices, respectively defined as
$$ B_u = \begin{bmatrix} \dfrac{\partial N_i}{\partial x} & 0 \\ 0 & \dfrac{\partial N_i}{\partial y} \\ \dfrac{\partial N_i}{\partial y} & \dfrac{\partial N_i}{\partial x} \end{bmatrix}, \quad B_a = \begin{bmatrix} \dfrac{\partial N_i\left(H - H(x_i, y_i)\right)}{\partial x} & 0 \\ 0 & \dfrac{\partial N_i\left(H - H(x_i, y_i)\right)}{\partial y} \\ \dfrac{\partial N_i\left(H - H(x_i, y_i)\right)}{\partial y} & \dfrac{\partial N_i\left(H - H(x_i, y_i)\right)}{\partial x} \end{bmatrix}, \quad B_b = \begin{bmatrix} B_b^1 & B_b^2 & B_b^3 & B_b^4 \end{bmatrix}, \quad B_b^l = \begin{bmatrix} \dfrac{\partial N_i\left(B_l - B_l(x_i, y_i)\right)}{\partial x} & 0 \\ 0 & \dfrac{\partial N_i\left(B_l - B_l(x_i, y_i)\right)}{\partial y} \\ \dfrac{\partial N_i\left(B_l - B_l(x_i, y_i)\right)}{\partial y} & \dfrac{\partial N_i\left(B_l - B_l(x_i, y_i)\right)}{\partial x} \end{bmatrix}, \tag{23} $$
with B_1, B_2, B_3, and B_4 the tip branch functions around the crack tips and B_1(x_i, y_i), B_2(x_i, y_i), B_3(x_i, y_i), and B_4(x_i, y_i) the enrichment values of the tip branch functions at the tip-enriched nodes.

A Delaunay triangulation is adopted herein to partition the enriched elements [46]. For the solutions of the integrals, the number of Gauss points is 9 for the standard elements, 3 for the triangular Heaviside-enriched elements, and 7 for the triangular tip-enriched elements.
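The block layout in Equation (21) can be mirrored directly in code; the following sketch (plain NumPy, illustrative only, with toy matrices) assembles a global matrix with the standard block K0, one diagonal block per crack, and the standard–enriched coupling blocks, leaving the crack–crack off-diagonal blocks zero as in Equation (21). Symmetry of the coupling blocks is assumed here for brevity.

```python
import numpy as np

def assemble_block_stiffness(K0, crack_blocks, coupling_blocks):
    """Assemble the block structure of Equation (21): standard block K0, enriched
    blocks K_i on the diagonal, and coupling blocks K_{0,i} (K_{i,0} = K_{0,i}^T assumed)."""
    sizes = [K0.shape[0]] + [Ki.shape[0] for Ki in crack_blocks]
    K = np.zeros((sum(sizes), sum(sizes)))
    K[:sizes[0], :sizes[0]] = K0
    offset = sizes[0]
    for Ki, K0i in zip(crack_blocks, coupling_blocks):
        m = Ki.shape[0]
        K[offset:offset + m, offset:offset + m] = Ki    # K_i on the diagonal
        K[:sizes[0], offset:offset + m] = K0i           # K_{0,i}
        K[offset:offset + m, :sizes[0]] = K0i.T         # K_{i,0}
        offset += m
    return K

# Toy sizes only: 4 standard dofs and two cracks with 2 enriched dofs each.
K0 = np.eye(4); K1 = 2 * np.eye(2); K2 = 3 * np.eye(2)
K = assemble_block_stiffness(K0, [K1, K2], [np.zeros((4, 2)), np.zeros((4, 2))])
print(K.shape)   # (8, 8); the crack-crack off-diagonal blocks remain zero
```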
## 5. Interaction of Multiple Cracks

The proposed method is used to analyze the interaction among multiple cracks through the following illustrative examples. The stress field around a crack tip is influenced by other cracks in proximity, thus leading to stress-shadowing effects and redirection of the crack propagation [47]. The SIF is useful to check whether a crack propagates and to predict the propagation trajectory [48–51]. IIM is often used to calculate SIF values since it has a high degree of accuracy compared with other methods [52–55], although the involved process is quite complex; for the application, it is required to solve an energy integral, based on the J integral method [29], covering a zone around the crack tip. Two states are considered: the present state (1) and an auxiliary state (2). If Ω is the domain of integration (the zone around the tip), bounded by Γ, for a local reference system (x1, x2) of the crack, with origin at the tip and x1 aligned with it, the interaction integral I can be written as [2]
$$ I = \int_\Gamma \left[ W^{(1,2)} \delta_{1j} - \sigma_{ij}^{(1)} \frac{\partial u_i^{(2)}}{\partial x_1} - \sigma_{ij}^{(2)} \frac{\partial u_i^{(1)}}{\partial x_1} \right] n_j \, d\Gamma, \tag{24} $$
where δ_{1j} is the Kronecker delta, n is the vector normal to Γ, and W is the interaction strain energy, equal to [1, 3]
$$ W^{(1,2)} = \sigma_{ij}^{(1)} \varepsilon_{ij}^{(2)} = \sigma_{ij}^{(2)} \varepsilon_{ij}^{(1)}. \tag{25} $$
The integral I can also be expressed by using the SIF values K_I and K_II, respectively, for Mode-I and Mode-II, for both the present state and the auxiliary state, as follows:
$$ I = \frac{2}{E}\left( K_I^{(1)} K_I^{(2)} + K_{II}^{(1)} K_{II}^{(2)} \right), \tag{26} $$
where E is Young's modulus, and
$$ K_I = \frac{E}{2}\, I^{(1,\,\text{Mode-I})}, \tag{27} $$
$$ K_{II} = \frac{E}{2}\, I^{(1,\,\text{Mode-II})}. \tag{28} $$
Equation (24) can be transformed as follows:
$$ I = \int_\Omega \left[ \sigma_{ij}^{(1)} \frac{\partial u_i^{(2)}}{\partial x_1} + \sigma_{ij}^{(2)} \frac{\partial u_i^{(1)}}{\partial x_1} - W^{(1,2)} \delta_{1j} \right] \frac{\partial q}{\partial x_j} \, d\Omega, \tag{29} $$
where q(x1, x2) is a weighting function. The solution of the integral in Equation (29) is relatively straightforward. With reference to Figure 6(a), the radius R of the zone is assumed equal to the square root of three times the element area [56]. The weighting function q is as follows:
$$ q(r) = \begin{cases} 0, & r \ge R, \\ 1, & r < R, \end{cases} \tag{30} $$
where r originates from the crack tip. When a portion of the zone is outside the domain, as in Figure 6(b), the weighting function q(x) on the edge of the domain is assumed equal to zero [57].

Figure 6 Zone of integration around a tip for IIM: (a) zone inside the domain; (b) zone outside the domain.

In the following subsections, two examples of interaction are described, with two and five interacting cracks, respectively.
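The geometric ingredients of Equations (27)–(30) are simple to compute; a minimal sketch (plain Python, not from the paper) is given below, using the 1 m square elements of the 100×100 mesh as an example.

```python
import math

def integration_radius(element_area):
    """Radius R of the IIM integration zone: the square root of three times the element area."""
    return math.sqrt(3.0 * element_area)

def weight_q(r, R):
    """Weighting function q(r) of Equation (30): 1 inside the zone, 0 outside."""
    return 1.0 if r < R else 0.0

def sif_from_interaction_integrals(I_mode1, I_mode2, E):
    """Extract K_I and K_II from the interaction integrals obtained with pure
    Mode-I and pure Mode-II auxiliary states, Equations (27) and (28)."""
    return 0.5 * E * I_mode1, 0.5 * E * I_mode2

# Example: 1 m x 1 m elements, as in the 100x100 mesh of the 100 m plate.
R = integration_radius(1.0)
print(R, weight_q(0.5, R), weight_q(2.0, R))   # 1.732..., 1.0, 0.0
```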
### 5.1. Two-Crack Example

In this example, two cracks, h and l, are considered. The domain, loading, material, and mesh are the same as those used in Section 3.1. In a first case (case 1, Figure 7(a)), the cracks are parallel, both 5.5 m long and 25 m apart. In a second case (case 2, Figure 7(b)), they are still parallel but only 5 m apart. A third and a fourth case (case 3, case 4) with a slanted crack follow. For all the cases, four-node quadrilateral elements are employed. The vertical traction at the top and at the bottom boundaries is 3.67 MPa.

Figure 7 Schematic of the two-crack example: (a) case 1; (b) case 2.

In case 1, the stress fields around the crack tips do not significantly interact. In case 2, the two stress fields overlap with mutual interference, producing a shadowing effect and causing an alteration of the SIFs that results in the redirection of the crack propagation; in fact, given the mesh size, the zones for IIM overlap (Figure 7(b)) and Equation (29) for crack h becomes
$$ I_h = \int_{\Omega_h} \left[ \sigma_{h,ij}^{(1)} \frac{\partial u_{h,i}^{(2)}}{\partial x_{h,1}} + \sigma_{h,ij}^{(2)} \frac{\partial u_{h,i}^{(1)}}{\partial x_{h,1}} - W_h^{(1,2)} \delta_{h,1j} \right] \frac{\partial q_h}{\partial x_{h,j}} \, d\Omega_h, \tag{31} $$
and, for crack l,
$$ I_l = \int_{\Omega_l} \left[ \sigma_{l,ij}^{(1)} \frac{\partial u_{l,i}^{(2)}}{\partial x_{l,1}} + \sigma_{l,ij}^{(2)} \frac{\partial u_{l,i}^{(1)}}{\partial x_{l,1}} - W_l^{(1,2)} \delta_{l,1j} \right] \frac{\partial q_l}{\partial x_{l,j}} \, d\Omega_l. \tag{32} $$
The integral over the overlapped domain, I_lh, is then
$$ I_{lh} = \int_{\Omega_{lh}} \left[ \sigma_{lh,ij}^{(1)} \frac{\partial u_{lh,i}^{(2)}}{\partial x_{lh,1}} + \sigma_{lh,ij}^{(2)} \frac{\partial u_{lh,i}^{(1)}}{\partial x_{lh,1}} - W_{lh}^{(1,2)} \delta_{lh,1j} \right] \frac{\partial q_{lh}}{\partial x_{lh,j}} \, d\Omega_{lh}, \tag{33} $$
where Ω_lh is the intersection zone. For Equations (31), (32), and (33), the subscripts h and l indicate crack h and crack l, respectively, based on Equation (24). The remaining equations follow the same notation rule, so the comprehensive interaction integral for crack h is equal to the superposition of I_h and I_lh:
$$ I_{hh} = I_h + I_{lh}, \tag{34} $$
and the comprehensive interaction integral for crack l is
$$ I_{ll} = I_l + I_{lh}. \tag{35} $$
Thus, by employing Equations (27) and (28), the SIFs in Mode-I and Mode-II for the two cracks are
$$ K_{I,l} = \frac{E}{2} I_{ll}^{\text{Mode-I}}, \quad K_{II,l} = \frac{E}{2} I_{ll}^{\text{Mode-II}}, \quad K_{I,h} = \frac{E}{2} I_{hh}^{\text{Mode-I}}, \quad K_{II,h} = \frac{E}{2} I_{hh}^{\text{Mode-II}}. \tag{36} $$
The results of the XFEM calculations in terms of the von Mises stress for the two cases are shown in Figures 8 and 9. It can be seen that the cracks propagate horizontally (along the alignment of the cracks) only in case 1, whereas in case 2, the propagation directions deviate due to the mutual interaction. The results are in good agreement with those reported in the literature [58].

Figure 8 Results of the two-crack example, case 1: (a) mesh; (b) shaded plot of the von Mises stress σvm (lengths in m).

Figure 9 Results of the two-crack example, case 2: (a) mesh; (b) shaded plot of the von Mises stress σvm (lengths in m).

In Figures 10(a) and 10(b), the schemes of cases 3 and 4 are reported, respectively. There is a 5.5 m long horizontal crack (the "minor" crack, h) and a 10 m long slanted crack, inclined 45° with respect to the x-axis (the "major" crack, l). In case 3, the major crack is rather far from the minor crack (50 m from the tip of h to the center of l). In case 4, the two cracks are closer. It is assumed that the propagation of crack h is affected by the other crack when this one lies within the domain of the interaction integral of h. The major crack l can be seen as a border "arresting" the effect on the stress field of the minor crack h, so the actual interaction integral domain for crack h is equivalent to the entire circular area minus the shaded area (Figure 10(b)). The interaction integral for crack h is as in Equation (31).

Figure 10 Schematic of the two-crack example: (a) case 3; (b) case 4.

The interaction integral I_Δh of the shaded area Δh of case 4 is (over the domain Ω_Δh)
$$ I_{\Delta h} = \int_{\Omega_{\Delta h}} \left[ \sigma_{\Delta h,ij}^{(1)} \frac{\partial u_{\Delta h,i}^{(2)}}{\partial x_{\Delta h,1}} + \sigma_{\Delta h,ij}^{(2)} \frac{\partial u_{\Delta h,i}^{(1)}}{\partial x_{\Delta h,1}} - W_{\Delta h}^{(1,2)} \delta_{\Delta h,1j} \right] \frac{\partial q_{\Delta h}}{\partial x_{\Delta h,j}} \, d\Omega_{\Delta h}. \tag{37} $$
Thus, the final interaction integral of crack h is equal to
$$ I_h' = I_h - I_{\Delta h}. \tag{38} $$
The results for these two cases are shown in Figure 11. It can be seen that in case 3, the minor crack propagates along the initial horizontal direction, while in case 4, when the interaction integral domain includes the other crack, the propagation trajectory deviates from the horizontal along the shortest pathway towards the major crack, according to the results reported in the literature [3].

Figure 11 Shaded plots of the von Mises stress σvm for the two-crack example: (a) case 3; (b) case 4 (lengths in m).
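The bookkeeping of Equations (34), (35), and (38) amounts to adding or subtracting the shared contributions; a tiny sketch follows (plain Python, with purely illustrative numbers, not results from the paper).

```python
def comprehensive_interaction_integrals(I_h, I_l, I_lh):
    """Superposition used when the two IIM zones overlap (Equations (34) and (35)):
    the shared contribution I_lh is added to the individual integrals of cracks h and l."""
    return I_h + I_lh, I_l + I_lh

def arrested_interaction_integral(I_h, I_delta_h):
    """Case 4: the part of the zone shadowed by the major crack is removed
    from the integral of the minor crack (Equation (38))."""
    return I_h - I_delta_h

# Illustrative numbers only.
print(comprehensive_interaction_integrals(1.0e-4, 1.2e-4, 0.3e-4))  # (1.3e-04, 1.5e-04)
print(arrested_interaction_integral(1.0e-4, 0.2e-4))                # 8e-05
```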
### 5.2. Multiple-Crack Example

In order to study the interaction of more complicated patterns with more than two cracks, two cases are considered in what follows. In case 1, there are five parallel edge cracks. Again, the domain, loading, material, and mesh are the same as in the previous example. The results in terms of the von Mises stress are shown in Figure 12(a). It can be noticed that the cracks far from the horizontal middle line have a higher stress concentration and also show a larger deflection angle, while the cracks near to it have a smaller stress concentration and a mild deflection. Obviously, given the symmetry, the crack right along the middle line has no deflection. In front of the crack tips, the stress fields join to form a relatively large symmetric shadow area.

Figure 12 Shaded plots of the von Mises stress σvm for the multiple-crack example: (a) 5 parallel edge cracks (case 1); (b) 5 randomly located cracks (case 2) (lengths in m).

In case 2, five slanted cracks are randomly distributed. Similar to case 1, the farther a crack is from the middle line, the stronger its stress concentration is; some of the stress fields intersect each other to form a large shadow area (Figure 12(b)). However, the stress concentration area is smaller than that in case 1, because of the weaker stress-shadowing effect, given the larger distances among the cracks. In fact, as the cracks propagate, they get closer to each other, and the shadowing effect starts to increase and becomes visible, as shown in Figure 13. The consequential effect is that the shadowing areas extend and intersect with one another, and the values of the stress concentration also increase.

Figure 13 Shaded plots of the von Mises stress σvm for the multiple-crack example, case 2, for an increased propagation (lengths in m).
## 6. Conclusion

This paper presents the development of a method for the analysis of propagation and interaction of multiple cracks. The method consists of the combination of a Geometric Tracking Technique (GTT) with an eXtended Finite Element Method (XFEM) formulation. The application of the method is illustrated by simulating three different examples: a single crack, two cracks, and multiple cracks. The results of the first two examples are in good agreement with the corresponding solutions available in the literature. The third example is proposed to simulate the interaction among multiple cracks. The Interaction Integral Method (IIM) is employed to derive the Stress Intensity Factors (SIFs), useful for the prediction of the magnitude and direction of the crack propagation. From the results, one may derive that when the stress field around a crack tip is influenced by another crack in proximity, the propagation trajectory deviates from the crack alignment, thus producing a stress-shadowing effect. The closer the cracks are, the stronger this effect is. The patterns of the computed stress distributions well reflect the interaction among the cracks and as such are proofs of the good quality of the proposed method.

---

*Source: 1010174-2022-12-22.xml*
2022
# Heat Shock Protein 72 Expressing Stress in Sepsis: Unbridgeable Gap between Animal and Human Studies—A Hypothetical “Comparative” Study **Authors:** George Briassoulis; Efrossini Briassouli; Diana-Michaela Fitrolaki; Ioanna Plati; Kleovoulos Apostolou; Theonymfi Tavladaki; Anna-Maria Spanaki **Journal:** BioMed Research International (2014) **Publisher:** Hindawi Publishing Corporation **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2014/101023 --- ## Abstract Heat shock protein 72 (Hsp72) exhibits a protective role during times of increased risk of pathogenic challenge and/or tissue damage. The aim of the study was to ascertain Hsp72 protective effect differences between animal and human studies in sepsis using a hypothetical “comparative study” model. Forty-one in vivo (56.1%), in vitro (17.1%), or combined (26.8%) animal and 14 in vivo (2) or in vitro (12) human Hsp72 studies (P < 0.0001) were enrolled in the analysis. Of the 14 human studies, 50% showed a protective Hsp72 effect compared to 95.8% protection shown in septic animal studies (P < 0.0001). Only human studies reported Hsp72-associated mortality (21.4%) or infection (7.1%) or reported results (14.3%) to be nonprotective (P < 0.001). In animal models, any Hsp72 induction method tried increased intracellular Hsp72 (100%), compared to 57.1% of human studies (P < 0.02), reduced proinflammatory cytokines (28/29), and enhanced survival (18/18). Animal studies show a clear Hsp72 protective effect in sepsis. Human studies are inconclusive, showing either protection or a possible relation to mortality and infections. This might be due to the fact that using evermore purified target cell populations in animal models, a lot of clinical information regarding the net response that occurs in sepsis is missing. --- ## Body ## 1. Introduction Sepsis is an inflammation-induced syndrome resulting from a complex interaction between host and infectious agents.  It is considered severe when associated with acute organ dysfunction, which accounts for the main cause underlying sepsis-induced death. Despite increasing evidence in support of antioxidant [1], anti-inflammatory [2], or immune-enhancing [3] therapies in sepsis, recent studies failed to establish a correlation between antiseptic pathway-based therapies and improvement of sepsis [4] or septic shock [5] or among immune-competent patients [6].Rapid expression of the survival gene heat shock protein 72 (Hsp72) was shown to be critical for mounting cytoprotection against severe cellular stress, like elevated temperature [7]. Intracellular Hsps are upregulated in cells subjected to stressful stimuli, including inflammation and oxidative stress exerting a protective effect against hypoxia, excess oxygen radicals, endotoxin, infections, and fever [8]. Recent studies imply that different biological disease processes and/or simple interventions may interfere with high temperature stress, leading to different clinical outcome in patients with and without sepsis [9]. In septic patients, administration of antipyretics independently associated with 28-day mortality, without association of fever with mortality [9]. Importantly, fever control using external cooling was safe and decreased vasopressor requirements and early mortality in septic shock [10].Inducible Hsp72 is also found extracellularly where it exhibits a protective role by facilitating immunological responses during times of increased risk of pathogenic challenge and/or tissue damage [11]. 
Experimental data provide important insights into the anti-inflammatory mechanisms of stress protein protection and may lead to the development of a novel strategy for the treatment of infectious and inflammatory disorders [12]. However, although overexpression of stress proteins signals danger to inflammatory cells and aids in immune surveillance by transporting intracellular peptides to immune cells [13], it has also been linked to a deleterious role in some diseases [14]. In addition, serum Hsp72 levels were shown to be modulated according to the patient oxidant status, whereas increased serum Hsp72 was associated with mortality in sepsis [15].

The purpose of this basic research-related review in critical care is to document the available evidence on the role of Hsp72 in sepsis, reporting both the state of the art and the future research directions. It might be that potential therapeutic use of stress proteins in the prevention of common stress-related diseases involves achieving an optimal balance between the protective and immunogenic effects of these molecules [16]. In this review, we will attempt to classify experimental and clinical studies on Hsp72 in sepsis and to compare their results on inflammation, organ function, and outcome; we will also briefly discuss the mechanisms by which stress proteins might exert their protective or negative role in disease development and highlight the potential for clinical translation in this research field.

## 2. Materials and Methods

Human or animal in vivo or in vitro studies examining the beneficial effect of intra- or extracellular Hsp72 expression in sepsis were included in this study. The PRISMA [17] search method for identification of studies consisted of searches of the PubMed database (1992 to September 2012) and a manual review of reference lists using the search term "Hsp70 or 72." The search output was limited with a search filter for any of: sepsis; severe sepsis; bacterial lipopolysaccharide (LPS); endotoxin. References in selected studies were also examined. The title and abstract of all studies identified by the above search strategy were screened, and the full text of all potentially relevant studies published in English was obtained. The full text of any potentially relevant studies was assessed by five authors (DMF, EB, IP, AK, and TT). The same authors extracted data from the published studies.

### 2.1. Statistical Analysis

Proportions of methods used and of results findings were compared by the χ2 test. A two-sided alpha of 0.05 was used for statistical significance. The results were analyzed using SPSS software (version 20.0, SPSS, Chicago, IL, USA).

## 3. Results

Our search identified 411 PubMed titles and abstracts. After excluding duplicates, studies with no original data or with data insufficient to evaluate, and those whose outcome was ischemia/reperfusion injury or other conditions, 55 articles were finally included for analysis. The aim of this minireview was not to examine the quality of the studies, but to describe induction methods and to compare in vivo and in vitro methods and results regarding a potential protective role for Hsp72 in human and animal sepsis.

### 3.1.
Animals Forty-one in vivo (23, 56.1%), in vitro (7, 17.1%), or combined (11, 26.8%) animal studies fulfilling the research criteria regarding the role of Hsp72 in sepsis were enrolled in analysis (Tables1(a), 1(b), and 1(c)). In only 6 studies transgenic animals (4Hsp−/− (9.8%), 2 overexpressing the human Hspa12b gene (4.9%)) were used (14.6%), all in mice (P < 0.03). Hsp72 induction methods used in rats differed from those used in mice (P < 0.0001). Hsp72 induction was attempted most often using heat shock (rats 9, 37.5%; mice 2, 12.5%), glutamine (Gln) (rats 7, 29.2%; mice 4, 25%; sheep 1, 100%), or combined Gln with additional inducer (rats 1, 4.2%; mice 2, 12.6%). In 7 rats Hsp72 was induced through adenoviral vector Hsp72 (AdHSP) (3, 12.5% of studies in rats) or various recombinant Hsp72 (rHsp72) preparations (4, 16.7%) compared to 3 mice studies where AdHSP, bovine rHsp72 preconditioning, or overexpressed Hsp72 within the intestinal epithelium was used (6.2%). Hsp72 gene-transfected models (3, 18.8%) or cecal ligation and puncture (CLP) with LPS or injection of microorganisms (2, 12.5%) were used only in mice studies.Table 1 (a) Results of animal in vivo studies examining the preventive role of intra- or extracellular Hsp72 (Hsp70) expression and Hsp72 (Hsp70) expression in experimental sepsis or sepsis-related pathophysiology. (b) Results of animal in vitro studies examining the preventive role of intracellular Hsp72 (Hsp70) expression in experimental sepsis or sepsis-related pathophysiology. (c) Results of genetic animal studies examining the preventive role of intracellular Hsp72 (Hsp70) expression in experimental sepsis or sepsis-related pathophysiology. (a) In vivo Induction Organs studied Expression in cells/Hsp72 challenge Extracellular Hsp72 levels Inhibitors Functional Pathways Interleukins Organ damage Survival CLP sepsis rats [18, 19]LPS-treated rats [20–24] LPS-treated mice [25] Heat stress Lungs (4)Heart (1) Splenocytes (1) Rostral Ventrolateral medulla (1)Mitochondrial function (1)Brain (1) Induced (7) — Hsp70 inhibitors (KNK437 or pifithrin-m) abrogated the ability of the thermal treatment to enhance TNF-α (1) Alleviated hypotension, bradycardia, sympathetic vasomotor activity (1) EEG and epileptic spikes attenuated (1) Suppressed iNOS mRNA NF-κB activation, IB kinase activation, IB degradation (1) Prevented downregulation of Grp75, preserved cytochrome c oxidase (1) enhanced phosphorylation of IKK, IkB, NF-κB nuclear translocation, binding to the TNF-α promoter (1) Cytokines declined (2) HMGB1 inhibited (1) enhanced LPS-induced TNF-α production (1) Reduced (4) Prevented sepsis-associated encephalopathy (1) Enhanced (6) LPS-treated mice [26, 27] rats [28–30] sheep [31] CLP sepsis rats [32, 33]CPL sepsis mice [34, 35] Glutamine Heart (3)Lungs (3) Liver (2)Aorta (1)Kidneys (1)Brain (1)Blood (1)Multiple organs (1) Induced (7) Blood samples: increased Hsp72 only after coadministration of Gln and ciprofloxacin [26] Quercetin blocked Gln-mediated enhancement of Hsp and HSF-1-p expressions and survival benefit (2)LD75 dose of P. 
aeruginosa and ciprofloxacin in combinations (1) Prevented ARDS (2)arterial pressure, cardiac contractility restored in the Gln than in the LPS shock (2) Quercetin prevented Gln protection (1) No difference in hemodynamic parameters (1) Inhibited activation, translocation of NF-κB to the nucleus degradation of IKBalpha, phosphorylation of p38 MAPK, ERK, increased MKP-1 (1) lung HMGB-1 expression NF-κB DNA-binding activity suppressed (1) Reduced peroxide biosynthesis (1) Attenuated TNF-alpha (3), IL-6. IL-18, MDA, HMGB-1, apoptosis (1) increased IL-10 (1) Reduced (5) Enhanced (7) LPS-treated rats bovine or Ad70 virus or rHsp [36–40]CLP sepsis rats and tracheas AdHSP [32] Exogenous rHsp AdTrack or Ad70 virus 72 Liver (1)Peritoneal macrophages (1) MLE-12 cells (1)Myocardium (1)Lungs (1) Induced (4) — — Normalized hemostasis (2) hemodynamics (2)Biochemical parameters (1) Inhibited LPS-induced decrease NO expression in macrophages, normalized neutrophil apoptosis (1) inhibited IκBα degradation and NF-κB, p65 nuclear translocation (2) apoptotic cellular pathways caspases 3, 8, 9 (1) Modified myeloid cells response to LPS (1) prevented LPS-induced increase in TNF-α and IL-6 (2) Reduced ICAM-1, attenuated cardiac dysfunction (1) Attenuated cardiac dysfunction (1) reduced alveolar cell apoptosis (1) Enhanced (5) (b) In vitro Induction Organs studied Intracellular Hsp72 expression Inhibitors Pathways Interleukins Organ damage Survival Murine macrophage-like RAW 264.7 cells [12] Heat shocked Macrophages (1) Cells from HS overexpressed Hsp72 (1) — Inhibited phosphorylation of p38, JNK, ERK/MAPK, IκBα degradation, NF-κB p65 nuclear translocation (1) HS inhibited HMGB1-induced cytokines TNF-α and IL-1β (1) — Enhanced (1) CLP-treated murine peritoneal macrophage cell line RAW264.7 [41] Neonatal rat cardiomyocytes [42]IEC-18 rat intestinal epithelial cells [43] Glutamine Peritoneal macrophages (1)Cardiomyocytes (1) Intestinal epithelial cells (1) Increased Hsp70 expression (2) Gln protection mimicked by PUGNA, banished by alloxan (1) DFMO ornithine decarboxylase inhibitor (1) Reduced LDH, increased O-ClcNAc, HSF-1, transcription activity (1) increased HSF1 binding to HSE (1) In vitro TNF-α dose- time-Gln. 
Dependent In vivo lower intracellular TNF-α level Attenuated LPS-induced cardiomyocyte damage (1) Enhanced (3) LPS-treated rats [44] LPS stimulation-mouse macrophage-like cell line (RAW 264.7 cells) [45, 46] Transfected with Hsp70 plasmid or HS Myocardium (1)Macrophages (2) Hsp70 plasmid or HS induced Hsp70 (2) — iNOS mRNA completely abolished by HS-Hsp70–transfected cells (1)HS inhibited LPS-induced NF-κB and HSF-1 activity (1) increases both cellular SK1 mRNA and protein levels (1) — — Enhanced (2) CLP rats, murine lung epithelial-12 cells in culture [47] Murine macrophage-like RAW 264.7 cells [12] Exogenous Hsp72 Lungs (1)Macrophages (1) Overexpression of Hsp72 in RAW/Hsp72 cells (1)— — Limited nuclear translocation of NF-κB, phosphorylation of IkappaBalpha (2)Inhibition of the MAP kinases (p38, JNK, and ERK) (1) Inhibition of the NF-κB - HMGB1-induced release of TNF-α, IL-1β (1) Limited NF-κB activation (2) Enhanced (2) CLP-treated mice [48] Arsenite(Positive control) Lungs (1) Induced- Inhibitors blocked Hsp72 expression, (1) Anti-human Hsp72 (1) Pretreatment with neutralizing antibodies to Hsp72 diminished neutrophil killing (1) — Survivors highern of γ δT cells (1) Enhanced (1) (c) KO animals Induction Organs studied Intracellular Hsp72 expression Pathways Interleukins Organ damage Survival CLP sepsis Hsp70.1/3−/− KO mice [34] Glutamine Lungs (1) Hsp70.1/3−/− mice did not increase Hsp72 (1) Hsp70.1/3((−/−)) mice increased NF-κB binding/ activation (1) Increased TNF-α, IL-6 in KO (1) Increased lung injury in KO (1) Decreased in KO (1) CLP sepsis, injection of microorganisms Hsp70−/− KO mice [49] Imipenem/ cilastatin Gut (1)Lungs (1) Hsp70−/−mice did not increase Hsp72 (1) Increased apoptosis and inflammation Hsp70−/−increased TNF-α, IL-6, IL-10, IL-1b KO-increased gut epithelial apoptosis, pulmonary inflammation (1) Decreased in KO age dependent (1) LPS-treated mice Hsp−/− or overexpressed Hsp70 [50] LPS Intestinal epithelium (1) Pharmacologic Hsp70 upregulation Hsp70 reduced TLR4 signaling in enterocytes (1) Hsp70 reversed TLR4- cytokines, enterocyte apoptosis (1) Prevented and treated experimental NEC (1) — LPS-treated mice overexpressing the human Hspa12b gene [51] LPS Heart (1) Overexpression of HSPA12B Prevented decrement in the activation of PI3K/protein kinase B signaling in myocardium (1) Decreased the expression of VCAM-1/ICAM-1 (1) Decreased leucocyte infiltration in myocardium (1) Attenuated cardiac dysfunction (1) — n: number of studies; PBMC: peripheral blood mononuclear cells; LPS: bacterial lipopolysaccharide; CLP: cecal ligation and puncture; TNF-α: tumor necrosis factor-alpha; AdHSP: adenoviral vector Hsp72; Gln: glutamine; HS: heat stress; Hspgene: Hsp70 gene-transfected models; HSF1: HS factor 1; HSE: heat shock element; IKK: IκB kinase; IkB: IkappaBalpha.In more than half of the studies induction was attempted in a pretreatment mode (10, 62.5% for mice; 13, 54.2% for rats induction after LPS injection or CLP), followed by a concomitant mode in rats (6, 25%) or a posttreatment one in mice (4, 25%). The different time intervals used before or after experimental sepsis, most often 1-2 hours, did not differ among groups. Preventive effect was achieved by most induction methods used in mice or rats (39/41, 95.1%), irrespective of the challenge period or timing used (Figures1(a) and 1(b)). Two studies, one carried out in sheep and one in rats, were inconclusive. 
In all septic animal models, any Hsp72 induction method tried increased intracellular Hsp72 (41/41, 100%), reduced proinflammatory cytokines (28/29 studies involving cytokine measurements), organ damage (27/27), clinical deterioration (19/20), and enhanced survival (18/18).(a) Preventive effect was achieved by all induction methods used irrespective of the challenge period or (b) time lapse between the sepsis insult and the Hsp72 induction: LPS, bacterial lipopolysaccharide; CLP, caecal ligation and puncture; iHsp72, inducible heat shock protein 72; Pre, pre-treatment; Post, posttreatment; both, trials with pre- and postexperiments; Con, concomitant; AdHSP, adenoviral vector Hsp72; exogHsp, exogenous Hsp72 preparations; Gln, glutamine; +, additional challenge; HS, heat stress; Hspgene, Hsp72 gene-transfected models. (a) (b) ### 3.2. Patients Only 14 human in vivo (2) and in vitro (12) Hsp72 studies were identified (Tables2(a) and 2(b)): human peripheral blood mononuclear cells (hPBMC) 9 studies, 64.3%; polymorphonuclear leukocytes (hPMNL) 2 studies, 14.3%; lymphocytes (hPBLC) 1 study, 7.1%; in vivo (children or adults’ serum levels) 2 studies, 14.3%. Of those, hPBMC were used in only 2 studies with septic patients but in 6 with healthy volunteers. Heat stress (HS) or acclimation was used in 5 studies (35.7%), Gln administration in 2 in association with LPS (14.3%), recombinant human Hsp72 in 1 (7.1%), and either inhibitor or agonist in 1 (7.1%). In 4 studies no challenge or only LPS (28.6%) was used. In only 1 out of 6 (16.7%) studies in septic patients induction Hsp72 methods were attempted compared to 100% in the studies with healthy (7) or ARDS (1) patients (P < 0.006). Protection markers studied were apoptosis (3 studies, 21.3%), HS (2 studies, 14.3%), oxidative damage, hospital infections, hemodynamic instability, and ARDS (1 study each, 7.1%).Table 2 (a) Human in vivo studies relating intra- or extracellular Hsp72 (Hsp70) expression to outcome in sepsis. (b) Human in vitro studies relating intracellular Hsp72 (Hsp70) expression to outcome in sepsis. 
(a) In vivo Study population/material Expression in cells/Hsp72 challenge Extracellular Hsp72 levels Hsp72 is associated with Conclusion on the Hsp72 role in sepsis Patients with septic shock [15, 52] Children with septic shock (1), adults with severe sepsis (1) — Elevated in septic shock (1) nonsurvivors (1) pronounced oxidative damage (1) Septic shock-mortality (2) modulated according to oxidant status (1) Related to mortality (2) patient oxidant status (1) Healthy young men Gln-LPS [53] Crossover study: Hsp70 in PBMCs (1) Gln did not affect Hsp70 in PBMCs (1) — Gln did not affect LPS-WBC, TBF-α, IL-6, temperature heart rate alterations Not protective in experimental sepsis (1) (b) In vitro Study population/material Expression in cells/Hsp72 challenge Hsp70 is associated with Conclusion on the Hsp72 role in sepsis PBMCs-Hsp inhibitor-inducers [54, 55] PBMCs 24 hours after sepsis (1) sodium arsenite (inducer of Hsp) and quercetin (suppressor of Hsp) to regulate expression of Hsp70 in PMNLs (1) Hsp70 increased (1) prevented by quercetin (1) Enhanced TNF-α (1) increased oxidative activity, inhibited apoptosis (1) Inconclusive (1) may inhibit apoptosis (1) LPS-PBMC [56] LPS inducibility of Hsp70 expression in the PBMC Inhibits Hsp70 expression in PBMC (in septic patients more than in controls) Decreased resistance to infectious insults during severe sepsis May be related to infections Heat shock, PBMC [57–60] Heat stress Hsp70 in PBMC (2) or with LPS and training (1) or exercised in heat acclimation (1) Hsp70 increase (3) inhibited by monensin, methyl-beta-cyclodextrin, and methylamine, reduced in patients with ARDS (2) Hsp70 decreased in ARDS, recoverε δ over time (1) released from lysosomal lipid rafts (1) Reduced apoptosis, TNF-α, IL-1b, increase δ CD14/CD16 (1) Protective (3) not sufficient (1) Recombinant Hsp70-neutrophils, monocytes [39] Preconditioning of myeloid cells after LTA addition with rHsp70 (1) Effect of human recombinant Hsp70 isolated fromSpodoptera cells on neutrophil apoptosis and expression of CD11b/CD18 receptors and TNF on ROS production in neutrophils and monocytes Ameliorated reactive oxygen species, TNF-α, CD11b/CD18, did not normalize apoptosis (1) Protective (1) Glutamine-[61]-lymphocytes [62] Glutamine-PBMCs (1) or lymphocytes (1) After LPS-HS increased 3-fold Hsp70. A reduction of Gln led to a 40% lower Hsp70 level (2) Gln decreased TNF-α (1) Reduced Gln = reduced Hsp70 = impaired stress response (1) Protective (2)Intracellular Hsp72 was induced in 8 in vitro studies (57.1%, 6 in healthy, 2 in septic) and inhibited in 3 (21.4%, 2 in septic, 1 in ARDS patients). Of the 6 studies in septic patients, intracellular Hsp72 was increased in 2 (33%), inhibited in 2 (33%), and not measured in 2. With the exception of sodium arsenite, neither Gln nor HS were tested in these studies. Extracellular Hsp72, measured in 1 in vitro and in 2 in vivo studies, was shown to increase in sepsis, especially in septic shock or in those who died (14.3% of human studies).Increased intracellular Hsp72 was protective in half of the human studies (50%); regarding the 9 positive (HS, Gln, exogenous Hsp72) in vitro induction Hsp72 human studies 7 (77.8%) were protective (Figure2(a)) and 2 inconclusive (11.1%) or nonprotective (11.1%). Of the induction methods used, protection offered HS (4/5, 80%), glutamine (1/2, 50%), rHsp72 and sodium arsenite (1/1, 100% each) (Figure 2(b)). 
In contrast, of the 2 in vivo (serum Hsp72 measurements), 2 in vitro endotoxin induced (LPS or CLP), and 1 Hsp72 inhibitor human studies, none was shown to be associated with a better outcome (P < 0.02); 3 studies were associated with mortality (60%) and 1 with infection (20%) or were inconclusive (20%). Septic patients’ studies were positive for protection in only 1 out of 6 (16.7%) compared to 5 out of 7 (71.4%) in healthy and 100% in ARDS patients (P < 0.06).(a) Increased serum Hsp72 in septic patients was associated with mortality whereas human cell studies with Hsp72 induction were either inconclusive or protective or even partially associated with mortality and infection; (b) heat pretreatment and/or glutamine incubation and recombinant or Hsp72 agonists (sodium arsenite) partially protected human cells compared to the nonchallenged human cells or to those challenged with Hsp72 inhibitors (quercetin) or LPS alone (P < 0.04). Positive Hsp72 induction human in vitro studies were tried in healthy individuals or ARDS patients compared with 1 study in septic patients’ cells (P < 0.02) whereas negative human Hsp72 studies (LPS, quercetin) or neutral studies (no induction) were only examined in septic human cells: iHsp72, inducible heat shock protein 72; hPBMC, human peripheral blood mononuclear cells; hPMNL, human peripheral polymorphonuclear leukocytes; hPBMC, human peripheral blood lymphocytes; ARDS, acute respiratory distress syndrome; Gln, glutamine; HS, heat stress; LPS, bacterial lipopolysaccharide; rHsp72, recombinant Hsp72. (a) (b) ### 3.3. Human Compared to Animal Studies Out of a total of 55 enrolled studies, only 2 in vivo human studies (3.6%) have been reported on the role of Hsp72 in sepsis compared to 7 mice (12.7%) and 15 rat (27.3%) in vivo studies (P < 0.0001); in contrast 12 human (21.8%) studies have been reported in vitro compared to only 2 in rats (3.6%) and 5 in mice (9.1%); 4 mice (7.3%) and 7 rat (12.7%) combined in vitro-in vivo studies have also been reported. Of the 14 human studies, 50% showed a protective Hsp72 effect compared to 95.8% protection shown in animal studies (Figure 3(a)). When restricted to the septic patients’ studies, however, only 1 out of 6 (16.7%) demonstrated an Hsp72 protective effect compared to 95.8% protection shown in animal studies (P < 0.0001). In addition, only human studies reported Hsp72-associated mortality (21.4%) or infection (7.1%) or reported results (14.3%) to be nonprotective (P < 0.001).(a) Diagram showing summaries of conclusions regarding the Hsp72 protective effects in sepsis in human and animal studies (P < 0.008); (b) human Hsp72 induction methods showed inconsistent results compared to the unanimous Hsp72 protective results in experimental sepsis with any attempted induction method; selection of any induction method, however, did not affect results; (c) Hsp72 induction protective effect using various induction methods was not influenced by the in vitro, in vivo, or combined study method selected: iHsp72, inducible heat shock protein 72; AdHSP, adenoviral vector Hsp72; exogHsp, exogenous Hsp72 preparations; Gln, glutamine; +, additional challenge; HS, heat stress; rHsp72, recombinant Hsp72; Hspgene, Hsp72 gene-transfected models; both, in vitro and in vivo experiments. (a) (b) (c)Most of the human studies were prospective observational experimental controlled studies (57.1%) and only 1 randomized study (7.1%) compared to prospective controlled animal studies (100%,P < 0.0001). 
All other human studies were experimental controlled (14.3%) or noncontrolled (14.3%) studies. Induction methods used differed significantly (P < 0.02), increasing Hsp72 in 57.1% of the human studies as compared to 100% of the animal studies (P < 0.02). Only 6 (42.9%) human studies included septic patients, compared to 41 (100%, experimental sepsis) animal studies (P < 0.0001). Although they differed among Hsp72 study populations (P < 0.001) and selected methodologies (P < 0.02), the various induction methods used did not affect the protection offered by Hsp72 (Figures 3(b) and 3(c)).
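The proportion comparisons quoted above are standard χ2 tests on contingency tables; as an illustration (plain Python with SciPy, not the authors' SPSS analysis, and with counts reconstructed approximately from the reported percentages), the protective-effect rates in human versus animal studies can be compared as follows.

```python
from scipy.stats import chi2_contingency

# Approximate counts reconstructed from the reported percentages:
# 7/14 human studies vs. roughly 39/41 animal studies with a protective Hsp72 effect.
table = [[7, 7],    # human: protective, not protective
         [39, 2]]   # animal: protective, not protective

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```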
pulmonary inflammation (1) Decreased in KO, age dependent (1) LPS-treated mice Hsp−/− or overexpressed Hsp70 [50] LPS Intestinal epithelium (1) Pharmacologic Hsp70 upregulation Hsp70 reduced TLR4 signaling in enterocytes (1) Hsp70 reversed TLR4-induced cytokines, enterocyte apoptosis (1) Prevented and treated experimental NEC (1) — LPS-treated mice overexpressing the human Hspa12b gene [51] LPS Heart (1) Overexpression of HSPA12B Prevented decrement in the activation of PI3K/protein kinase B signaling in myocardium (1) Decreased the expression of VCAM-1/ICAM-1 (1) Decreased leucocyte infiltration in myocardium (1) Attenuated cardiac dysfunction (1) — n: number of studies; PBMC: peripheral blood mononuclear cells; LPS: bacterial lipopolysaccharide; CLP: cecal ligation and puncture; TNF-α: tumor necrosis factor-alpha; AdHSP: adenoviral vector Hsp72; Gln: glutamine; HS: heat stress; Hspgene: Hsp70 gene-transfected models; HSF1: HS factor 1; HSE: heat shock element; IKK: IκB kinase; IkB: IkappaBalpha. In more than half of the studies induction was attempted in a pretreatment mode (10, 62.5% for mice; 13, 54.2% for rats induction after LPS injection or CLP), followed by a concomitant mode in rats (6, 25%) or a posttreatment one in mice (4, 25%). The different time intervals used before or after experimental sepsis, most often 1-2 hours, did not differ among groups. Preventive effect was achieved by most induction methods used in mice or rats (39/41, 95.1%), irrespective of the challenge period or timing used (Figures 1(a) and 1(b)). Two studies, one carried out in sheep and one in rats, were inconclusive. In all septic animal models, any Hsp72 induction method tried increased intracellular Hsp72 (41/41, 100%), reduced proinflammatory cytokines (28/29 studies involving cytokine measurements), organ damage (27/27), clinical deterioration (19/20), and enhanced survival (18/18). Figure 1: (a) Preventive effect was achieved by all induction methods used irrespective of the challenge period or (b) time lapse between the sepsis insult and the Hsp72 induction: LPS, bacterial lipopolysaccharide; CLP, caecal ligation and puncture; iHsp72, inducible heat shock protein 72; Pre, pre-treatment; Post, posttreatment; both, trials with pre- and postexperiments; Con, concomitant; AdHSP, adenoviral vector Hsp72; exogHsp, exogenous Hsp72 preparations; Gln, glutamine; +, additional challenge; HS, heat stress; Hspgene, Hsp72 gene-transfected models. ## 3.2. Patients Only 14 human in vivo (2) and in vitro (12) Hsp72 studies were identified (Tables 2(a) and 2(b)): human peripheral blood mononuclear cells (hPBMC) 9 studies, 64.3%; polymorphonuclear leukocytes (hPMNL) 2 studies, 14.3%; lymphocytes (hPBLC) 1 study, 7.1%; in vivo (children or adults’ serum levels) 2 studies, 14.3%. Of those, hPBMC were used in only 2 studies with septic patients but in 6 with healthy volunteers. Heat stress (HS) or acclimation was used in 5 studies (35.7%), Gln administration in 2 in association with LPS (14.3%), recombinant human Hsp72 in 1 (7.1%), and either inhibitor or agonist in 1 (7.1%). In 4 studies no challenge or only LPS (28.6%) was used. In only 1 out of 6 (16.7%) studies in septic patients were Hsp72 induction methods attempted, compared to 100% in the studies with healthy (7) or ARDS (1) patients (P < 0.006).
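The group comparisons in this review are simple proportion contrasts that the authors report as χ² tests run in SPSS, so individual figures can be spot-checked with standard tools. The snippet below is a minimal illustrative sketch, not the authors' analysis: it rebuilds a 2×2 table for the contrast just reported (Hsp72 induction attempted in 1 of 6 septic-patient studies versus 8 of 8 studies in healthy volunteers or ARDS patients, P < 0.006) and runs both a χ² and a Fisher's exact test with scipy; the table layout and the choice of scipy are assumptions on my part.

```python
# Illustrative re-check of one reported contrast (not the authors' SPSS workflow):
# Hsp72 induction attempted in 1/6 septic-patient studies vs. 8/8 healthy/ARDS studies.
from scipy.stats import chi2_contingency, fisher_exact

#                  attempted  not attempted
table = [[1, 5],   # studies in septic patients
         [8, 0]]   # studies in healthy volunteers or ARDS patients

chi2, p_chi2, dof, expected = chi2_contingency(table)  # Yates-corrected chi-square for a 2x2 table
odds_ratio, p_fisher = fisher_exact(table)             # exact test, preferable with counts this small
print(f"chi-square p = {p_chi2:.4f}; Fisher exact p = {p_fisher:.4f}")
```

With expected cell counts this small, the exact test is the more defensible of the two, which is worth keeping in mind when reading the borderline P values quoted in this section.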
Protection markers studied were apoptosis (3 studies, 21.4%), HS (2 studies, 14.3%), oxidative damage, hospital infections, hemodynamic instability, and ARDS (1 study each, 7.1%). Table 2 (a) Human in vivo studies relating intra- or extracellular Hsp72 (Hsp70) expression to outcome in sepsis. (b) Human in vitro studies relating intracellular Hsp72 (Hsp70) expression to outcome in sepsis. (a) In vivo Study population/material Expression in cells/Hsp72 challenge Extracellular Hsp72 levels Hsp72 is associated with Conclusion on the Hsp72 role in sepsis Patients with septic shock [15, 52] Children with septic shock (1), adults with severe sepsis (1) — Elevated in septic shock (1) nonsurvivors (1) pronounced oxidative damage (1) Septic shock-mortality (2) modulated according to oxidant status (1) Related to mortality (2) patient oxidant status (1) Healthy young men Gln-LPS [53] Crossover study: Hsp70 in PBMCs (1) Gln did not affect Hsp70 in PBMCs (1) — Gln did not affect LPS-WBC, TNF-α, IL-6, temperature, heart rate alterations Not protective in experimental sepsis (1) (b) In vitro Study population/material Expression in cells/Hsp72 challenge Hsp70 is associated with Conclusion on the Hsp72 role in sepsis PBMCs-Hsp inhibitor-inducers [54, 55] PBMCs 24 hours after sepsis (1) sodium arsenite (inducer of Hsp) and quercetin (suppressor of Hsp) to regulate expression of Hsp70 in PMNLs (1) Hsp70 increased (1) prevented by quercetin (1) Enhanced TNF-α (1) increased oxidative activity, inhibited apoptosis (1) Inconclusive (1) may inhibit apoptosis (1) LPS-PBMC [56] LPS inducibility of Hsp70 expression in the PBMC Inhibits Hsp70 expression in PBMC (in septic patients more than in controls) Decreased resistance to infectious insults during severe sepsis May be related to infections Heat shock, PBMC [57–60] Heat stress Hsp70 in PBMC (2) or with LPS and training (1) or exercised in heat acclimation (1) Hsp70 increased (3), inhibited by monensin, methyl-beta-cyclodextrin, and methylamine, reduced in patients with ARDS (2) Hsp70 decreased in ARDS, recovered over time (1) released from lysosomal lipid rafts (1) Reduced apoptosis, TNF-α, IL-1b, increased CD14/CD16 (1) Protective (3) not sufficient (1) Recombinant Hsp70-neutrophils, monocytes [39] Preconditioning of myeloid cells after LTA addition with rHsp70 (1) Effect of human recombinant Hsp70 isolated from Spodoptera cells on neutrophil apoptosis and expression of CD11b/CD18 receptors and TNF, on ROS production in neutrophils and monocytes Ameliorated reactive oxygen species, TNF-α, CD11b/CD18, did not normalize apoptosis (1) Protective (1) Glutamine: PBMCs [61], lymphocytes [62] Glutamine-PBMCs (1) or lymphocytes (1) After LPS-HS increased 3-fold Hsp70. A reduction of Gln led to a 40% lower Hsp70 level (2) Gln decreased TNF-α (1) Reduced Gln = reduced Hsp70 = impaired stress response (1) Protective (2). Intracellular Hsp72 was induced in 8 in vitro studies (57.1%, 6 in healthy, 2 in septic) and inhibited in 3 (21.4%, 2 in septic, 1 in ARDS patients). Of the 6 studies in septic patients, intracellular Hsp72 was increased in 2 (33%), inhibited in 2 (33%), and not measured in 2. With the exception of sodium arsenite, neither Gln nor HS were tested in these studies.
Extracellular Hsp72, measured in 1 in vitro and in 2 in vivo studies, was shown to increase in sepsis, especially in septic shock or in those who died (14.3% of human studies). Increased intracellular Hsp72 was protective in half of the human studies (50%); regarding the 9 positive (HS, Gln, exogenous Hsp72) in vitro Hsp72 induction human studies, 7 (77.8%) were protective (Figure 2(a)) and 2 were inconclusive (11.1%) or nonprotective (11.1%). Of the induction methods used, protection was offered by HS (4/5, 80%), glutamine (1/2, 50%), and rHsp72 and sodium arsenite (1/1, 100% each) (Figure 2(b)). In contrast, of the 2 in vivo (serum Hsp72 measurements), 2 in vitro endotoxin induced (LPS or CLP), and 1 Hsp72 inhibitor human studies, none was shown to be associated with a better outcome (P < 0.02); 3 studies were associated with mortality (60%) and 1 with infection (20%) or were inconclusive (20%). Septic patients’ studies were positive for protection in only 1 out of 6 (16.7%) compared to 5 out of 7 (71.4%) in healthy and 100% in ARDS patients (P < 0.06). Figure 2: (a) Increased serum Hsp72 in septic patients was associated with mortality whereas human cell studies with Hsp72 induction were either inconclusive or protective or even partially associated with mortality and infection; (b) heat pretreatment and/or glutamine incubation and recombinant or Hsp72 agonists (sodium arsenite) partially protected human cells compared to the nonchallenged human cells or to those challenged with Hsp72 inhibitors (quercetin) or LPS alone (P < 0.04). Positive Hsp72 induction human in vitro studies were tried in healthy individuals or ARDS patients compared with 1 study in septic patients’ cells (P < 0.02) whereas negative human Hsp72 studies (LPS, quercetin) or neutral studies (no induction) were only examined in septic human cells: iHsp72, inducible heat shock protein 72; hPBMC, human peripheral blood mononuclear cells; hPMNL, human peripheral polymorphonuclear leukocytes; hPBLC, human peripheral blood lymphocytes; ARDS, acute respiratory distress syndrome; Gln, glutamine; HS, heat stress; LPS, bacterial lipopolysaccharide; rHsp72, recombinant Hsp72. ## 3.3. Human Compared to Animal Studies Out of a total of 55 enrolled studies, only 2 in vivo human studies (3.6%) have been reported on the role of Hsp72 in sepsis compared to 7 mice (12.7%) and 15 rat (27.3%) in vivo studies (P < 0.0001); in contrast, 12 human (21.8%) studies have been reported in vitro compared to only 2 in rats (3.6%) and 5 in mice (9.1%); 4 mice (7.3%) and 7 rat (12.7%) combined in vitro-in vivo studies have also been reported. Of the 14 human studies, 50% showed a protective Hsp72 effect compared to 95.8% protection shown in animal studies (Figure 3(a)). When restricted to the septic patients’ studies, however, only 1 out of 6 (16.7%) demonstrated an Hsp72 protective effect compared to 95.8% protection shown in animal studies (P < 0.0001).
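The same kind of spot-check can be applied, with the same caveats, to the headline human-versus-animal contrast. In the sketch below, 7/14 human studies are counted as protective and 39/41 is the preventive-effect count reported above for mice and rats; because the published 95.8% figure may rest on a slightly different denominator, this 2×2 table is a reconstruction for illustration only, not the authors' exact analysis.

```python
# Illustrative reconstruction of the human-vs.-animal contrast in protective findings.
# Counts (7/14 human, 39/41 animal) are taken from the text; the exact table behind
# the published P < 0.0001 is not restated in the article, so this is an approximation.
from scipy.stats import chi2_contingency, fisher_exact

#                 protective  not protective
table = [[7, 7],    # human Hsp72 studies
         [39, 2]]   # animal Hsp72 studies

chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)  # the small human arm again favours the exact test
print(f"chi-square p = {p_chi2:.2e}; Fisher exact p = {p_fisher:.2e}")
```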
In addition, only human studies reported Hsp72-associated mortality (21.4%) or infection (7.1%) or reported results (14.3%) to be nonprotective (P < 0.001). Figure 3: (a) Diagram showing summaries of conclusions regarding the Hsp72 protective effects in sepsis in human and animal studies (P < 0.008); (b) human Hsp72 induction methods showed inconsistent results compared to the unanimous Hsp72 protective results in experimental sepsis with any attempted induction method; selection of any induction method, however, did not affect results; (c) Hsp72 induction protective effect using various induction methods was not influenced by the in vitro, in vivo, or combined study method selected: iHsp72, inducible heat shock protein 72; AdHSP, adenoviral vector Hsp72; exogHsp, exogenous Hsp72 preparations; Gln, glutamine; +, additional challenge; HS, heat stress; rHsp72, recombinant Hsp72; Hspgene, Hsp72 gene-transfected models; both, in vitro and in vivo experiments. Most of the human studies were prospective observational experimental controlled studies (57.1%) and only 1 was a randomized study (7.1%), compared to prospective controlled animal studies (100%, P < 0.0001). All other human studies were experimental control (14.3%) or noncontrolled (14.3%) studies. Induction methods used differed significantly (P < 0.02), increasing Hsp72 in 57.1% of the human as compared to 100% of animal studies (P < 0.02). Only 6 (42.9%) human studies included septic patients compared to 41 (100% experimental sepsis) in animal studies (P < 0.0001). Although they differed among Hsp72 study populations (P < 0.001) and in the methodology selected (P < 0.02), the various induction methods used did not affect the protection offered by Hsp72 (Figures 3(b) and 3(c)). ## 4. Discussion Hsp70s are emerging as powerful dichotomous immune-modulatory molecules that can have stimulatory and inhibitory effects on immune responses [63]. In our hypothetical “comparative study” model, we found that the balance between Hsp72 promotion and control of inflammatory responses and sepsis outcome differed unpredictably between human and animal studies. Clinical studies were inconclusive, showing either a low probability of protection (16.7% among septic patients) or even a possible relation to mortality and infections. In contrast, almost all (94.7%) septic animal in vivo and in vitro studies showed a biochemical, biological, and clinical protective effect for Hsp72 in sepsis. This might be due to the fact that, by using ever more purified target cell populations to provide insight into the direct effects of molecules on cells, a lot of clinical information regarding the net response that occurs in vivo is missing [63]. ### 4.1. Stress Proteins Induction Sepsis, endotoxin tolerance, and heat shock all display downregulation of innate immunity, sharing a common immune suppressive effect, possibly through HS factor 1 (HSF1) mediated competitive inhibition of nuclear factor kappa-B (NF-κB) binding [45]. It has been shown that multiple chaperones or cochaperones, including Hsp72, tend to form a complex with HSF1 monomers [64]. Once a cell is exposed to stress, these chaperones and cochaperones bind to denatured and damaged proteins, thereby “releasing” the nonactive HSF1 monomers to subsequently undergo homotrimerization [65].
However, while homotrimerization is sufficient for DNA binding and nuclear translocation, the magnitude and duration of transcriptional activity are regulated by inducible phosphorylation of specific serine residues of HSF1 by several protein kinases (Erk1/2, glycogen synthase kinase, protein kinase C) [64]. Once inside the nucleus, HSF1 binds to a heat shock element (HSE) in the promoter of Hsp genes, which is defined by a tandem repeat of the pentamer nGAAn arranged in an alternating orientation either “head to head” (e.g., 5′-nGAAnnTTCn-3′) or “tail to tail” (e.g., 5′-nTTCnnGAAn-3′) [66], resulting in the upregulation of stress protein gene expression [67]. Thus, the intracellular accumulation of denatured or improperly folded proteins in response to stress is believed to be the universal signal resulting in the stress-induced gene expression of stress proteins [68, 69], which is proportional to the severity of the stress [70]. Besides the innate immune response, stress proteins also seem to activate the adaptive immune response [71]. Thus, they have the capacity to elicit a pathogen-specific immune response [72] and to mediate the induction of peptide-specific immunity, eliciting potent T cell responses against the chaperoned peptide [73]. ### 4.2. Experimental Hsp72 Studies Hsp72 is the most highly induced stress protein in cells and tissues undergoing the stress response [74] and is central to the cytoprotective properties in patients with a variety of critical illnesses [52] or injuries [75]. Cell cycle components, regulatory proteins, and proteins in the mitogenic signal cascade may be protected by the molecular chaperone Hsp72 during periods of stress, by impairing proteasomal degradation of IkappaBalpha (IκBα) [47]. In addition, binding of Hsp72 to the Ser/Thr protein kinase IRE1α enhances the IRE1α/X-box binding protein XBP1 signaling at the endoplasmic reticulum and inhibits endoplasmic reticulum stress-induced apoptosis [76]. Thus, increased expression of Hsp72 by gene transfer/transfection has been demonstrated to confer protection against in vitro toxicity secondary to lethal hyperthermia [77], endotoxin [78], nitric oxide [79], hyperoxia [80], lung inflammation and injury [81], and in vivo ischemia-reperfusion injury [82]. On the contrary, microinjection of anti-Hsp72 antibody into cells impaired their ability to achieve thermotolerance [83]. We showed that in septic animal models, all reported Hsp72 induction methods increased intracellular Hsp72; this was associated with reduced proinflammatory cytokines, decreased organ damage, clinical improvement, and enhanced survival. Analysis of the reviewed studies showed differing methodologies for approaching the biological and/or genetic implication of Hsp72 in the sepsis process. #### 4.2.1. Transgenic Animals When challenged with systemic endotoxin, HSF1-deficient [84] or Hsp72−/− mice [49] had increased apoptosis and mortality compared to wild-type (WT) mice. Hsp72 expression was also required for Gln’s protective effects on survival and tissue injury [34], an effect not seen in Hsp72−/− mice [85]. On the contrary, using transgenic mice overexpressing the human Hspa12b gene, Hsp72 attenuated the endotoxin-induced cardiac dysfunction and leucocyte infiltration into the myocardium [51]. #### 4.2.2.
Hsp72 Overexpression with Adenovirus Injection (AdHSP) Hsp72 overexpression with adenovirus injection prevented the LPS-induced increase in tumor necrosis factor-alpha (TNFα) and IL-6 levels associated with inhibited IκBα degradation [36] through NF-κB pathway [47]. Increases in levels of Hsp72 by gene transfection attenuated LPS- or TNFα-induced high mobility group box protein-1 (HMGB1) cytoplasmic translocation and release [12], decreased inducible NO synthase (iNOS) messenger RNA expression [45], and protected cells from programmed cell death [46]. Thus, AdHSP protected against sepsis-induced lung injury [86] by reducing nuclear caspase-3 [87], prevented alveolar type II cell proliferation [88], and improved short-term survival following CLP [89]. #### 4.2.3. Exogenous Hsp72 At the cellular level, Hsp72 preparations not only inhibited LPS-induced reactive oxygen species production and decreased NO expression in macrophages, but they also partially normalized the disturbed neutrophil apoptosis [37]. Prophylactic administration of exogenous human Hsp72 normalized inflammatory responses [38], limited host tissue damage [48], and reduced mortality rates [39]. Liposomal transfer of Hsp72 into the myocardium abolished LPS-induced contractile dysfunction [44], reduced mortality rates, and modified hemostasis and hemodynamics [40]. Intestinal Hsp72 overexpression reversed toll-like receptor (TLR)-4-induced cytokines and enterocyte apoptosis and prevented and treated experimental necrotizing enterocolitis [50]. Thus, mammalian Hsp72 appears to be an attractive target in therapeutic strategies designed to stimulate endogenous protective mechanisms against many deleterious consequences of septic shock by accelerating the functional recovery of susceptible organs in humans [40, 90]. #### 4.2.4. Glutamine Although Gln has little effect under basal conditions [43], endotoxin-treated animals given Gln exhibited dramatic increases in tissue Hsp72 expression [26], marked reduction of end-organ damage [28], attenuation of cytokine release [41] and peroxide biosynthesis, and improved vascular reactivity [29] associated with a significant decrease in mortality [91]. The molecular mechanism of Gln-induced Hsp72 expression appears to be mediated via enhancement of O-linked β-N-acetylglucosamine modification and subsequently to increased levels of endonuclear HSF1 expression [43] and HSF1 transcription activity [42].In a recent study, septic mice with Gln administration showed less severe damage to the kidneys and exhibited decreased HMGB1 and TLR4 in kidney tissues [35]. In Gln-treated rats, lung Hsp72 and HSF1-p expressions were enhanced [32, 92], lung HMGB1 expression and NF-κB DNA-binding activity were suppressed, and ARDS was attenuated and survival improved [33]. By inducing Hsp72, Gln attenuated LPS-induced cardiomyocyte damage [42] and left ventricular dysfunction [27] whereas Gln-treated sheep had a greater increase in myocardial Hsp72 immunoreactivity without aggravating the hyperdynamic circulation after endotoxemia [31]. In a rat brain model of endotoxemia, Gln upregulated the expression of Hsp72 and decreased the magnitude of apoptosis by inhibiting the translocation of NF-κB from the cytoplasm to the nucleus [30]. #### 4.2.5. 
Hyperthermic Heat Shock Subjected to a brief hyperthermic heat shock, Hsp72 conferred protection against sepsis-related circulatory fatality via inhibition of iNOS gene expression through prevention of NF-κB activation in cellular processes that included prevention of IκB kinase activation [25] and inhibition of IκBα degradation [20]. Also, Hsp72 induction by thermal pretreatment [21] attenuated proinflammatory cytokines [22] and improved survival in the LPS-induced systemic inflammation model, potentially involving Hsp-mediated inhibition of HMGB1 secretion [23]. A HS response induction of Hsp72 mRNA and protein expression in the lung has been shown to be associated with reduced lung injury [18], improved lung function [93], and survival [94].Heat shock pretreatment could also attenuate the electrocortical dysfunction in rats with LPS-induced septic response, suggesting that HS induced Hsp72 might potentially be used to prevent septic encephalopathy in sepsis [24]. Similarly, HS treatment led to Hsp72 overexpression and preserved the expression of the enzyme mitochondrial cytochrome c oxidase complex associated with the minimization of ultrastructural deformities during sepsis [19]. Interestingly, Gln increased DNA binding of HSF1 in HS cells but in its absence ornithine was able to rescue the heat-induced DNA binding of HSF1 [43]. ### 4.3. Human Studies Although the release of the Hsp72 in sepsis serves as a host impending danger signal to neighboring cells and might exert a cytoprotective function at low serum levels, it might also potentiate an already active host immune response leading to poor outcome once a certain critical threshold is attained. Such a sensitive balance could be an explanation of the surprising finding of this study, showing that only 16.7% of the 6 human septic studies demonstrated an Hsp72 protective effect compared to 95.8% protection shown in the 41 septic animal studies. In addition, by experimentally studying healthy individuals rather than patients in a real clinical setting, human studies mix up mild molecular reactions to stress with severe infectious systemic inflammatory response syndrome (SIRS), being thereby unconvincing and unable to verify results of experimentally controlled septic animal models. #### 4.3.1. Intracellular Hsp72: In Vitro Studies (Cell Models) Human in vitro studies, mainly examining intracellular Hsp72 expression in hPBMC or hPMNL in patients and healthy individuals by using HS, Gln, exogenous Hsp72, and Hsp72 inhibitors or agonists, are inconclusive [57]. Thus, although Gln infusion altered neither endotoxin-induced systemic inflammation nor early expression of Hsp72 in isolated PBMCs in healthy volunteers [53], inducibility of ex vivo Hsp72 was impaired in peripheral blood lymphocytes of patients with severe sepsis [95], possibly contributing to immune dysfunction of T and B lymphocyte responses in resisting infection in severe sepsis [56].Enhanced Hsp72 response in endurance-trained individuals, however, improved heat tolerance through both anti-inflammatory and antiapoptotic mechanisms [58]. Also, rHsp72 preconditioning ameliorated reactive oxygen species, TNFα, and CD11b/CD18 adhesion receptor expression after lipoteichoic acid addition [39]. Sepsis was shown to enhance expression of iHsp72 in PBMCs correlated to plasma TNFα concentrations [54] and in activated PMNLs, in which oxidative activity was increased and apoptosis was inhibited [55]. 
Similarly, using various Gln doses, proinflammatory cytokine release could directly be attenuated in PBMCs through enhancement of Hsp72 expression [61]. Overexpression of Hsp72 attenuated NF-κB activation and proinflammatory cytokine release [88, 96], inhibited LPS-mediated apoptosis, and protected lung epithelial cells [80] and pulmonary artery endothelial cells from oxidant-mediated [97] and inflammation-induced lung injury [59]. #### 4.3.2. Extracellular Hsp72: In Vivo Studies (Serum Hsp) Although PBMC Hsp72 expression was shown to be markedly decreased in critically ill septic patients [56], a significant increase in serum Hsp72 levels was reported in children with septic shock [52]. Extracellular Hsp72, reflected by increased serum levels, was also evident in children with acute lung injury [81] or following cardiopulmonary bypass [98]. Results of a recent adult study also indicated that increased serum Hsp72 is associated with mortality in sepsis [15]. Worse outcome associated with extracellular Hsp72 has also been reported in coronary artery disease [99], liver disease [90], sickle cell disease vasoocclusive crisis [100], and preeclampsia [101]. Heat shock proteins are markedly induced in response to a diverse range of cellular insults, being a reliable danger marker of cell stress [102]. Thus, extracellular Hsps act as a “danger signal,” activating immune-competent cells through LPS TLR4/CD14-dependent signaling [103]. According to the “danger hypothesis,” the release of stress proteins from severely stressed or damaged cells serves as a host impending danger signal to neighboring cells [104]. They are released in a nonspecific manner from dying, necrotic cells [105] or released from viable cells in a specific and inhibitable manner [106, 107]. Using viable cell counts and lactate dehydrogenase, the release of Hsp72 was shown not to be due to cellular damage [60]. Recent studies suggest that Hsp72 is actively released via an exosome-dependent nonclassical protein secretory pathway, possibly involving lysosomal lipid rafts [108]. Immune cell receptors capture Hsps released from necrotic cells or Hsp-containing exosomes [109], and receptor engagement by Hsp72 increases dendritic cell production of TNFα, IL-1b, IL-6, and chemokines [110]. The host innate immune response occurs through an NF-κB-dependent proinflammatory gene expression via TLR4 and TLR2 [111], similar to an LPS-mediated signal transduction [112]. ### 4.4. Factors Influencing Heat Shock Proteins’ Protective Role in Sepsis Recent work demonstrated that febrile-range temperatures achieved during sepsis and noninfectious SIRS correlated with detectable changes in stress gene expression in vivo (whole blood messenger RNA), thereby suggesting that fever can activate Hsp72 gene expression and modify innate immune responses [113]. Hsp72 serum levels may also be modulated according to the patient oxidant status [15] and prevent excessive gut apoptosis and inflammation in an age-dependent response to sepsis [49]. Importantly, Hsp72 inhibited LPS-induced NO release but only partially reduced the LPS-increased expression of iNOS mRNA and exhibited LPS-induced NF-κB DNA binding and LPS tolerance; in contrast, HS inhibited LPS-induced NF-κB and HSF1 activity whereas HSF1 inhibited NF-κB DNA binding [45]. A significant body of preexisting literature has hypothesized a relationship between Hsp72 expression and Gln’s protection in both in vitro and in vivo settings [32, 43, 62, 114, 115].
Pioneer studies showed that Gln supplementation could attenuate lethal heat and oxidant injury and increase Hsp72 expression in intestinal epithelial cells [116–118]. Compared, however, with whey protein supplementation in a randomized, double-blinded, comparative effectiveness trial, zinc, selenium, Gln, and intravenous metoclopramide conferred no advantage in the immune-competent population [6]. In addition, we recently showed that although apparently safe in animal models (pups), premature infants, and critically ill children, glutamine supplementation did not reduce mortality or late onset sepsis [119]. Methodological problems noted in the reviewed randomized experimental and clinical trials [119] should therefore be seriously considered in any future well-designed large blinded randomized controlled trial involving glutamine supplementation in severe sepsis. Drug interactions were also shown either to suppress Hsp72 protective effects, thereby exacerbating drug-induced side effects, or to induce Hsp72 beneficial effects by suppressing drug-induced exacerbations. Thus, it was recently shown that bleomycin-induced pulmonary fibrosis is mediated by suppression of pulmonary expression of Hsp72 whereas an inducer of Hsp72 expression, such as geranylgeranylacetone, could be therapeutically beneficial for the treatment of gefitinib-induced pulmonary fibrosis [120]. Finally, critically ill patients display variable physiologic responses when stressed; gene association studies have recently been employed to explain this variability. Genetic variants of Hsp72 have also been associated with the development of septic shock in patients [121, 122]. Thus, the specific absence of Hsp70.1/3 gene expression can lead to increased mortality after septic insult [85]. ### 4.5. Limitations of the Study The major problem that limits the comparability with human sepsis is the fact that in most cases of animal models, various forms of preconditioning were employed. This approach is nonspecific, and only a minority (about 10%) used genetically modified animals. Accordingly, important differences between cell and/or animal models versus clinical studies have been noted several times with various inflammatory pathways and have been written about extensively in the literature [123, 124]. To the best of our knowledge, however, such discrepancies have not been summarized in detail in the context of Hsp72 and sepsis; in our opinion, these findings might be helpful for cautiously interpreting experimental data in the critical care field.
## 5. Conclusions Heat shock proteins are molecular chaperokines that prevent the formation of nonspecific protein aggregates and exhibit sophisticated protection mechanisms. Experimental studies have repeatedly shown a strong molecular, biological, and clinical protective effect for Hsp72 in sepsis. In contrast, clinical studies are inconclusive, varying from a protective in vitro effect to an in vivo Hsp72-mortality association. Severity of disease, genetic variants, oxidant status, and interventions such as temperature control, immune-enhancing nutrition (glutamine), or drug effects may unpredictably influence the efficacy of Hsp72 protection in sepsis. Our “comparative” study data demonstrate that cell protection with exogenous Hsp72, Hsp72 genes, heat stress, or glutamine is associated with induction of Hsp72 and that new Hsp72-targeted pharmaconutrition may be an approach to activating the preconditioning response in sepsis in clinical practice. However, as this hypothetical study suggests, much more work is needed to clarify the cellular and molecular mechanisms by which Hsp72 signals “danger” and regulates immune function in response to sepsis. ## Conflict of Interests The authors declare that there is no conflict of interests regarding the publication of this paper. --- *Source: 101023-2014-01-12.xml*
101023-2014-01-12_101023-2014-01-12.md
88,917
Heat Shock Protein 72 Expressing Stress in Sepsis: Unbridgeable Gap between Animal and Human Studies—A Hypothetical “Comparative” Study
George Briassoulis; Efrossini Briassouli; Diana-Michaela Fitrolaki; Ioanna Plati; Kleovoulos Apostolou; Theonymfi Tavladaki; Anna-Maria Spanaki
BioMed Research International (2014)
Medical & Health Sciences
Hindawi Publishing Corporation
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2014/101023
101023-2014-01-12.xml
--- ## Abstract Heat shock protein 72 (Hsp72) exhibits a protective role during times of increased risk of pathogenic challenge and/or tissue damage. The aim of the study was to ascertain Hsp72 protective effect differences between animal and human studies in sepsis using a hypothetical “comparative study” model. Forty-one in vivo (56.1%), in vitro (17.1%), or combined (26.8%) animal and 14 in vivo (2) or in vitro (12) human Hsp72 studies (P < 0.0001) were enrolled in the analysis. Of the 14 human studies, 50% showed a protective Hsp72 effect compared to 95.8% protection shown in septic animal studies (P < 0.0001). Only human studies reported Hsp72-associated mortality (21.4%) or infection (7.1%) or reported results (14.3%) to be nonprotective (P < 0.001). In animal models, any Hsp72 induction method tried increased intracellular Hsp72 (100%), compared to 57.1% of human studies (P < 0.02), reduced proinflammatory cytokines (28/29), and enhanced survival (18/18). Animal studies show a clear Hsp72 protective effect in sepsis. Human studies are inconclusive, showing either protection or a possible relation to mortality and infections. This might be due to the fact that using evermore purified target cell populations in animal models, a lot of clinical information regarding the net response that occurs in sepsis is missing. --- ## Body ## 1. Introduction Sepsis is an inflammation-induced syndrome resulting from a complex interaction between host and infectious agents.  It is considered severe when associated with acute organ dysfunction, which accounts for the main cause underlying sepsis-induced death. Despite increasing evidence in support of antioxidant [1], anti-inflammatory [2], or immune-enhancing [3] therapies in sepsis, recent studies failed to establish a correlation between antiseptic pathway-based therapies and improvement of sepsis [4] or septic shock [5] or among immune-competent patients [6].Rapid expression of the survival gene heat shock protein 72 (Hsp72) was shown to be critical for mounting cytoprotection against severe cellular stress, like elevated temperature [7]. Intracellular Hsps are upregulated in cells subjected to stressful stimuli, including inflammation and oxidative stress exerting a protective effect against hypoxia, excess oxygen radicals, endotoxin, infections, and fever [8]. Recent studies imply that different biological disease processes and/or simple interventions may interfere with high temperature stress, leading to different clinical outcome in patients with and without sepsis [9]. In septic patients, administration of antipyretics independently associated with 28-day mortality, without association of fever with mortality [9]. Importantly, fever control using external cooling was safe and decreased vasopressor requirements and early mortality in septic shock [10].Inducible Hsp72 is also found extracellularly where it exhibits a protective role by facilitating immunological responses during times of increased risk of pathogenic challenge and/or tissue damage [11]. Experimental data provide important insights into the anti-inflammatory mechanisms of stress proteins protection and may lead to the development of a novel strategy for treatment of infectious and inflammatory disorders [12]. However, although overexpression of stress proteins signals danger to inflammatory cells and aids in immune surveillance by transporting intracellular peptides to immune cells [13], it has also been linked to a deleterious role in some diseases [14]. 
In addition, serum Hsp72 levels were shown to be modulated according to the patient's oxidant status, whereas increased serum Hsp72 was associated with mortality in sepsis [15]. The purpose of this basic research-related review in critical care is to document the available evidence on the role of Hsp72 in sepsis, reporting both the state of the art and future research directions. It may be that the potential therapeutic use of stress proteins in the prevention of common stress-related diseases involves achieving an optimal balance between the protective and immunogenic effects of these molecules [16]. In this review, we will attempt to classify experimental and clinical studies on Hsp72 in sepsis and to compare their results on inflammation, organ function, and outcome; we will also briefly discuss the mechanisms by which stress proteins might exert their protective or negative role in disease development and highlight the potential clinical translation in this research field. ## 2. Materials and Methods Human or animal in vivo or in vitro studies examining the beneficial effect of intra- or extracellular Hsp72 expression in sepsis were included in this study. The PRISMA [17] search method for identification of studies consisted of searches of the PubMed database (1992 to September 2012) and a manual review of reference lists using the search term “Hsp70 or 72” (an illustrative programmatic approximation of this query is sketched below). The search output was limited with a search filter for any of: sepsis; severe sepsis; bacterial lipopolysaccharide (LPS); endotoxin. References in selected studies were also examined. The title and abstract of all studies identified by the above search strategy were screened, and the full text of all potentially relevant studies published in English was obtained. The full text of any potentially relevant studies was assessed by five authors (DMF, EB, IP, AK, and TT). The same authors extracted data from the published studies. ### 2.1. Statistical Analysis Proportions of methods used and results findings were compared by the χ² test. A two-sided alpha of 0.05 was used for statistical significance. The results were analyzed using SPSS software (version 20.0, SPSS, Chicago, IL, USA). ## 3. Results Our search identified 411 PubMed titles and abstracts. After excluding duplicates, studies with no original data or data insufficient to evaluate, and those whose outcome was ischemia/reperfusion injury or others, 55 articles were finally included for analysis. The aim of this minireview was not to examine the quality of studies, but to describe induction methods and to compare in vivo and in vitro methods and results regarding a potential protective role for Hsp72 in human and animal sepsis. ### 3.1. Animals Forty-one in vivo (23, 56.1%), in vitro (7, 17.1%), or combined (11, 26.8%) animal studies fulfilling the research criteria regarding the role of Hsp72 in sepsis were enrolled in the analysis (Tables 1(a), 1(b), and 1(c)). In only 6 studies (14.6%) were transgenic animals used (4 Hsp−/− (9.8%), 2 overexpressing the human Hspa12b gene (4.9%)), all in mice (P < 0.03). Hsp72 induction methods used in rats differed from those used in mice (P < 0.0001).
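The following is a minimal sketch (not part of the original review) of how the PubMed search described in Section 2 could be approximated programmatically, assuming Biopython's `Bio.Entrez` interface; the query string, date limits, and contact address are illustrative stand-ins for the manual PRISMA search that was actually performed.

```python
# Illustrative approximation of the review's PubMed search ("Hsp70 or 72",
# filtered to sepsis-related terms, 1992 to September 2012) via NCBI E-utilities.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # hypothetical contact address required by NCBI

query = (
    '("Hsp70" OR "Hsp72") AND '
    '(sepsis OR "severe sepsis" OR lipopolysaccharide OR endotoxin)'
)

handle = Entrez.esearch(
    db="pubmed",
    term=query,
    datetype="pdat",
    mindate="1992",
    maxdate="2012/09",
    retmax=500,
)
record = Entrez.read(handle)
handle.close()

print("Records found:", record["Count"])      # total matching PubMed records
print("First PMIDs:", record["IdList"][:10])  # identifiers for title/abstract screening
```

Any records returned by such a query would still require the manual title/abstract screening and full-text assessment described above.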
Hsp72 induction was attempted most often using heat shock (rats 9, 37.5%; mice 2, 12.5%), glutamine (Gln) (rats 7, 29.2%; mice 4, 25%; sheep 1, 100%), or combined Gln with additional inducer (rats 1, 4.2%; mice 2, 12.6%). In 7 rats Hsp72 was induced through adenoviral vector Hsp72 (AdHSP) (3, 12.5% of studies in rats) or various recombinant Hsp72 (rHsp72) preparations (4, 16.7%) compared to 3 mice studies where AdHSP, bovine rHsp72 preconditioning, or overexpressed Hsp72 within the intestinal epithelium was used (6.2%). Hsp72 gene-transfected models (3, 18.8%) or cecal ligation and puncture (CLP) with LPS or injection of microorganisms (2, 12.5%) were used only in mice studies.Table 1 (a) Results of animal in vivo studies examining the preventive role of intra- or extracellular Hsp72 (Hsp70) expression and Hsp72 (Hsp70) expression in experimental sepsis or sepsis-related pathophysiology. (b) Results of animal in vitro studies examining the preventive role of intracellular Hsp72 (Hsp70) expression in experimental sepsis or sepsis-related pathophysiology. (c) Results of genetic animal studies examining the preventive role of intracellular Hsp72 (Hsp70) expression in experimental sepsis or sepsis-related pathophysiology. (a) In vivo Induction Organs studied Expression in cells/Hsp72 challenge Extracellular Hsp72 levels Inhibitors Functional Pathways Interleukins Organ damage Survival CLP sepsis rats [18, 19]LPS-treated rats [20–24] LPS-treated mice [25] Heat stress Lungs (4)Heart (1) Splenocytes (1) Rostral Ventrolateral medulla (1)Mitochondrial function (1)Brain (1) Induced (7) — Hsp70 inhibitors (KNK437 or pifithrin-m) abrogated the ability of the thermal treatment to enhance TNF-α (1) Alleviated hypotension, bradycardia, sympathetic vasomotor activity (1) EEG and epileptic spikes attenuated (1) Suppressed iNOS mRNA NF-κB activation, IB kinase activation, IB degradation (1) Prevented downregulation of Grp75, preserved cytochrome c oxidase (1) enhanced phosphorylation of IKK, IkB, NF-κB nuclear translocation, binding to the TNF-α promoter (1) Cytokines declined (2) HMGB1 inhibited (1) enhanced LPS-induced TNF-α production (1) Reduced (4) Prevented sepsis-associated encephalopathy (1) Enhanced (6) LPS-treated mice [26, 27] rats [28–30] sheep [31] CLP sepsis rats [32, 33]CPL sepsis mice [34, 35] Glutamine Heart (3)Lungs (3) Liver (2)Aorta (1)Kidneys (1)Brain (1)Blood (1)Multiple organs (1) Induced (7) Blood samples: increased Hsp72 only after coadministration of Gln and ciprofloxacin [26] Quercetin blocked Gln-mediated enhancement of Hsp and HSF-1-p expressions and survival benefit (2)LD75 dose of P. aeruginosa and ciprofloxacin in combinations (1) Prevented ARDS (2)arterial pressure, cardiac contractility restored in the Gln than in the LPS shock (2) Quercetin prevented Gln protection (1) No difference in hemodynamic parameters (1) Inhibited activation, translocation of NF-κB to the nucleus degradation of IKBalpha, phosphorylation of p38 MAPK, ERK, increased MKP-1 (1) lung HMGB-1 expression NF-κB DNA-binding activity suppressed (1) Reduced peroxide biosynthesis (1) Attenuated TNF-alpha (3), IL-6. 
IL-18, MDA, HMGB-1, apoptosis (1) increased IL-10 (1) Reduced (5) Enhanced (7) LPS-treated rats bovine or Ad70 virus or rHsp [36–40]CLP sepsis rats and tracheas AdHSP [32] Exogenous rHsp AdTrack or Ad70 virus 72 Liver (1)Peritoneal macrophages (1) MLE-12 cells (1)Myocardium (1)Lungs (1) Induced (4) — — Normalized hemostasis (2) hemodynamics (2)Biochemical parameters (1) Inhibited LPS-induced decrease NO expression in macrophages, normalized neutrophil apoptosis (1) inhibited IκBα degradation and NF-κB, p65 nuclear translocation (2) apoptotic cellular pathways caspases 3, 8, 9 (1) Modified myeloid cells response to LPS (1) prevented LPS-induced increase in TNF-α and IL-6 (2) Reduced ICAM-1, attenuated cardiac dysfunction (1) Attenuated cardiac dysfunction (1) reduced alveolar cell apoptosis (1) Enhanced (5) (b) In vitro Induction Organs studied Intracellular Hsp72 expression Inhibitors Pathways Interleukins Organ damage Survival Murine macrophage-like RAW 264.7 cells [12] Heat shocked Macrophages (1) Cells from HS overexpressed Hsp72 (1) — Inhibited phosphorylation of p38, JNK, ERK/MAPK, IκBα degradation, NF-κB p65 nuclear translocation (1) HS inhibited HMGB1-induced cytokines TNF-α and IL-1β (1) — Enhanced (1) CLP-treated murine peritoneal macrophage cell line RAW264.7 [41] Neonatal rat cardiomyocytes [42]IEC-18 rat intestinal epithelial cells [43] Glutamine Peritoneal macrophages (1)Cardiomyocytes (1) Intestinal epithelial cells (1) Increased Hsp70 expression (2) Gln protection mimicked by PUGNA, banished by alloxan (1) DFMO ornithine decarboxylase inhibitor (1) Reduced LDH, increased O-ClcNAc, HSF-1, transcription activity (1) increased HSF1 binding to HSE (1) In vitro TNF-α dose- time-Gln. Dependent In vivo lower intracellular TNF-α level Attenuated LPS-induced cardiomyocyte damage (1) Enhanced (3) LPS-treated rats [44] LPS stimulation-mouse macrophage-like cell line (RAW 264.7 cells) [45, 46] Transfected with Hsp70 plasmid or HS Myocardium (1)Macrophages (2) Hsp70 plasmid or HS induced Hsp70 (2) — iNOS mRNA completely abolished by HS-Hsp70–transfected cells (1)HS inhibited LPS-induced NF-κB and HSF-1 activity (1) increases both cellular SK1 mRNA and protein levels (1) — — Enhanced (2) CLP rats, murine lung epithelial-12 cells in culture [47] Murine macrophage-like RAW 264.7 cells [12] Exogenous Hsp72 Lungs (1)Macrophages (1) Overexpression of Hsp72 in RAW/Hsp72 cells (1)— — Limited nuclear translocation of NF-κB, phosphorylation of IkappaBalpha (2)Inhibition of the MAP kinases (p38, JNK, and ERK) (1) Inhibition of the NF-κB - HMGB1-induced release of TNF-α, IL-1β (1) Limited NF-κB activation (2) Enhanced (2) CLP-treated mice [48] Arsenite(Positive control) Lungs (1) Induced- Inhibitors blocked Hsp72 expression, (1) Anti-human Hsp72 (1) Pretreatment with neutralizing antibodies to Hsp72 diminished neutrophil killing (1) — Survivors highern of γ δT cells (1) Enhanced (1) (c) KO animals Induction Organs studied Intracellular Hsp72 expression Pathways Interleukins Organ damage Survival CLP sepsis Hsp70.1/3−/− KO mice [34] Glutamine Lungs (1) Hsp70.1/3−/− mice did not increase Hsp72 (1) Hsp70.1/3((−/−)) mice increased NF-κB binding/ activation (1) Increased TNF-α, IL-6 in KO (1) Increased lung injury in KO (1) Decreased in KO (1) CLP sepsis, injection of microorganisms Hsp70−/− KO mice [49] Imipenem/ cilastatin Gut (1)Lungs (1) Hsp70−/−mice did not increase Hsp72 (1) Increased apoptosis and inflammation Hsp70−/−increased TNF-α, IL-6, IL-10, IL-1b KO-increased gut epithelial apoptosis, 
pulmonary inflammation (1) Decreased in KO age dependent (1) LPS-treated mice Hsp−/− or overexpressed Hsp70 [50] LPS Intestinal epithelium (1) Pharmacologic Hsp70 upregulation Hsp70 reduced TLR4 signaling in enterocytes (1) Hsp70 reversed TLR4- cytokines, enterocyte apoptosis (1) Prevented and treated experimental NEC (1) — LPS-treated mice overexpressing the human Hspa12b gene [51] LPS Heart (1) Overexpression of HSPA12B Prevented decrement in the activation of PI3K/protein kinase B signaling in myocardium (1) Decreased the expression of VCAM-1/ICAM-1 (1) Decreased leucocyte infiltration in myocardium (1) Attenuated cardiac dysfunction (1) — n: number of studies; PBMC: peripheral blood mononuclear cells; LPS: bacterial lipopolysaccharide; CLP: cecal ligation and puncture; TNF-α: tumor necrosis factor-alpha; AdHSP: adenoviral vector Hsp72; Gln: glutamine; HS: heat stress; Hspgene: Hsp70 gene-transfected models; HSF1: HS factor 1; HSE: heat shock element; IKK: IκB kinase; IkB: IkappaBalpha.In more than half of the studies induction was attempted in a pretreatment mode (10, 62.5% for mice; 13, 54.2% for rats induction after LPS injection or CLP), followed by a concomitant mode in rats (6, 25%) or a posttreatment one in mice (4, 25%). The different time intervals used before or after experimental sepsis, most often 1-2 hours, did not differ among groups. Preventive effect was achieved by most induction methods used in mice or rats (39/41, 95.1%), irrespective of the challenge period or timing used (Figures1(a) and 1(b)). Two studies, one carried out in sheep and one in rats, were inconclusive. In all septic animal models, any Hsp72 induction method tried increased intracellular Hsp72 (41/41, 100%), reduced proinflammatory cytokines (28/29 studies involving cytokine measurements), organ damage (27/27), clinical deterioration (19/20), and enhanced survival (18/18).(a) Preventive effect was achieved by all induction methods used irrespective of the challenge period or (b) time lapse between the sepsis insult and the Hsp72 induction: LPS, bacterial lipopolysaccharide; CLP, caecal ligation and puncture; iHsp72, inducible heat shock protein 72; Pre, pre-treatment; Post, posttreatment; both, trials with pre- and postexperiments; Con, concomitant; AdHSP, adenoviral vector Hsp72; exogHsp, exogenous Hsp72 preparations; Gln, glutamine; +, additional challenge; HS, heat stress; Hspgene, Hsp72 gene-transfected models. (a) (b) ### 3.2. Patients Only 14 human in vivo (2) and in vitro (12) Hsp72 studies were identified (Tables2(a) and 2(b)): human peripheral blood mononuclear cells (hPBMC) 9 studies, 64.3%; polymorphonuclear leukocytes (hPMNL) 2 studies, 14.3%; lymphocytes (hPBLC) 1 study, 7.1%; in vivo (children or adults’ serum levels) 2 studies, 14.3%. Of those, hPBMC were used in only 2 studies with septic patients but in 6 with healthy volunteers. Heat stress (HS) or acclimation was used in 5 studies (35.7%), Gln administration in 2 in association with LPS (14.3%), recombinant human Hsp72 in 1 (7.1%), and either inhibitor or agonist in 1 (7.1%). In 4 studies no challenge or only LPS (28.6%) was used. In only 1 out of 6 (16.7%) studies in septic patients induction Hsp72 methods were attempted compared to 100% in the studies with healthy (7) or ARDS (1) patients (P < 0.006). 
Protection markers studied were apoptosis (3 studies, 21.3%), HS (2 studies, 14.3%), oxidative damage, hospital infections, hemodynamic instability, and ARDS (1 study each, 7.1%). Table 2 (a) Human in vivo studies relating intra- or extracellular Hsp72 (Hsp70) expression to outcome in sepsis. (b) Human in vitro studies relating intracellular Hsp72 (Hsp70) expression to outcome in sepsis. (a) In vivo Study population/material Expression in cells/Hsp72 challenge Extracellular Hsp72 levels Hsp72 is associated with Conclusion on the Hsp72 role in sepsis Patients with septic shock [15, 52] Children with septic shock (1), adults with severe sepsis (1) — Elevated in septic shock (1) nonsurvivors (1) pronounced oxidative damage (1) Septic shock-mortality (2) modulated according to oxidant status (1) Related to mortality (2) patient oxidant status (1) Healthy young men Gln-LPS [53] Crossover study: Hsp70 in PBMCs (1) Gln did not affect Hsp70 in PBMCs (1) — Gln did not affect LPS-WBC, TNF-α, IL-6, temperature, or heart rate alterations Not protective in experimental sepsis (1) (b) In vitro Study population/material Expression in cells/Hsp72 challenge Hsp70 is associated with Conclusion on the Hsp72 role in sepsis PBMCs-Hsp inhibitor-inducers [54, 55] PBMCs 24 hours after sepsis (1) sodium arsenite (inducer of Hsp) and quercetin (suppressor of Hsp) to regulate expression of Hsp70 in PMNLs (1) Hsp70 increased (1) prevented by quercetin (1) Enhanced TNF-α (1) increased oxidative activity, inhibited apoptosis (1) Inconclusive (1) may inhibit apoptosis (1) LPS-PBMC [56] LPS inducibility of Hsp70 expression in the PBMC Inhibits Hsp70 expression in PBMC (in septic patients more than in controls) Decreased resistance to infectious insults during severe sepsis May be related to infections Heat shock, PBMC [57–60] Heat stress Hsp70 in PBMC (2) or with LPS and training (1) or exercised in heat acclimation (1) Hsp70 increase (3) inhibited by monensin, methyl-beta-cyclodextrin, and methylamine, reduced in patients with ARDS (2) Hsp70 decreased in ARDS, recovered over time (1) released from lysosomal lipid rafts (1) Reduced apoptosis, TNF-α, IL-1b, increased CD14/CD16 (1) Protective (3) not sufficient (1) Recombinant Hsp70-neutrophils, monocytes [39] Preconditioning of myeloid cells after LTA addition with rHsp70 (1) Effect of human recombinant Hsp70 isolated from Spodoptera cells on neutrophil apoptosis and expression of CD11b/CD18 receptors and TNF on ROS production in neutrophils and monocytes Ameliorated reactive oxygen species, TNF-α, CD11b/CD18, did not normalize apoptosis (1) Protective (1) Glutamine-[61]-lymphocytes [62] Glutamine-PBMCs (1) or lymphocytes (1) After LPS-HS increased 3-fold Hsp70. A reduction of Gln led to a 40% lower Hsp70 level (2) Gln decreased TNF-α (1) Reduced Gln = reduced Hsp70 = impaired stress response (1) Protective (2). Intracellular Hsp72 was induced in 8 in vitro studies (57.1%, 6 in healthy, 2 in septic) and inhibited in 3 (21.4%, 2 in septic, 1 in ARDS patients). Of the 6 studies in septic patients, intracellular Hsp72 was increased in 2 (33%), inhibited in 2 (33%), and not measured in 2. With the exception of sodium arsenite, neither Gln nor HS was tested in these studies.
Extracellular Hsp72, measured in 1 in vitro and in 2 in vivo studies, was shown to increase in sepsis, especially in septic shock or in those who died (14.3% of human studies). Increased intracellular Hsp72 was protective in half of the human studies (50%); of the 9 positive (HS, Gln, exogenous Hsp72) in vitro Hsp72 induction human studies, 7 (77.8%) were protective (Figure 2(a)) and 2 were inconclusive (11.1%) or nonprotective (11.1%). Of the induction methods used, protection was offered by HS (4/5, 80%), glutamine (1/2, 50%), and rHsp72 and sodium arsenite (1/1, 100% each) (Figure 2(b)). In contrast, of the 2 in vivo (serum Hsp72 measurements), 2 in vitro endotoxin-induced (LPS or CLP), and 1 Hsp72 inhibitor human studies, none was shown to be associated with a better outcome (P < 0.02); 3 studies were associated with mortality (60%), 1 with infection (20%), and 1 was inconclusive (20%). Septic patients’ studies were positive for protection in only 1 out of 6 (16.7%) compared to 5 out of 7 (71.4%) in healthy and 100% in ARDS patients (P < 0.06). (a) Increased serum Hsp72 in septic patients was associated with mortality whereas human cell studies with Hsp72 induction were either inconclusive or protective or even partially associated with mortality and infection; (b) heat pretreatment and/or glutamine incubation and recombinant Hsp72 or Hsp72 agonists (sodium arsenite) partially protected human cells compared to the nonchallenged human cells or to those challenged with Hsp72 inhibitors (quercetin) or LPS alone (P < 0.04). Positive Hsp72 induction human in vitro studies were tried in healthy individuals or ARDS patients compared with 1 study in septic patients’ cells (P < 0.02) whereas negative human Hsp72 studies (LPS, quercetin) or neutral studies (no induction) were only examined in septic human cells: iHsp72, inducible heat shock protein 72; hPBMC, human peripheral blood mononuclear cells; hPMNL, human peripheral polymorphonuclear leukocytes; hPBLC, human peripheral blood lymphocytes; ARDS, acute respiratory distress syndrome; Gln, glutamine; HS, heat stress; LPS, bacterial lipopolysaccharide; rHsp72, recombinant Hsp72. ### 3.3. Human Compared to Animal Studies Out of a total of 55 enrolled studies, only 2 in vivo human studies (3.6%) have reported on the role of Hsp72 in sepsis compared to 7 mouse (12.7%) and 15 rat (27.3%) in vivo studies (P < 0.0001); in contrast, 12 human studies (21.8%) have been reported in vitro compared to only 2 in rats (3.6%) and 5 in mice (9.1%); 4 mouse (7.3%) and 7 rat (12.7%) combined in vitro-in vivo studies have also been reported. Of the 14 human studies, 50% showed a protective Hsp72 effect compared to 95.8% protection shown in animal studies (Figure 3(a)). When restricted to the septic patients’ studies, however, only 1 out of 6 (16.7%) demonstrated an Hsp72 protective effect compared to 95.8% protection shown in animal studies (P < 0.0001).
In addition, only human studies reported Hsp72-associated mortality (21.4%) or infection (7.1%) or reported results (14.3%) to be nonprotective (P < 0.001). (a) Diagram showing summaries of conclusions regarding the Hsp72 protective effects in sepsis in human and animal studies (P < 0.008); (b) human Hsp72 induction methods showed inconsistent results compared to the unanimous Hsp72 protective results in experimental sepsis with any attempted induction method; selection of any induction method, however, did not affect results; (c) the Hsp72 induction protective effect using various induction methods was not influenced by the in vitro, in vivo, or combined study method selected: iHsp72, inducible heat shock protein 72; AdHSP, adenoviral vector Hsp72; exogHsp, exogenous Hsp72 preparations; Gln, glutamine; +, additional challenge; HS, heat stress; rHsp72, recombinant Hsp72; Hspgene, Hsp72 gene-transfected models; both, in vitro and in vivo experiments. Most of the human studies were prospective observational experimental controlled studies (57.1%), and only 1 was a randomized study (7.1%), compared to prospective controlled animal studies (100%, P < 0.0001). All other human studies were experimental control (14.3%) or noncontrolled (14.3%) studies. Induction methods used differed significantly (P < 0.02), increasing Hsp72 in 57.1% of the human as compared to 100% of the animal studies (P < 0.02). Only 6 (42.9%) human studies included septic patients compared to 41 (100% experimental sepsis) in animal studies (P < 0.0001). Although they differed among Hsp72 study populations (P < 0.001) and the methodology selected (P < 0.02), the various induction methods used did not affect the protection offered by Hsp72 (Figures 3(b) and 3(c)).
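To make these proportion comparisons concrete, the sketch below applies a χ² test of the kind described in Section 2.1 to the headline human-versus-animal contrast, using counts reported above (7 of 14 human studies with a protective effect, Section 3.3; 39 of 41 animal studies with a preventive effect, Section 3.1). This is an illustration only: the exact contingency tables used by the authors are not reported, and SciPy applies Yates' continuity correction to 2 × 2 tables by default, so the P value may differ slightly from an uncorrected χ² result.

```python
# Illustrative chi-square comparison of two proportions (cf. Section 2.1):
# 7/14 human studies vs. 39/41 animal studies showing a protective Hsp72 effect.
from scipy.stats import chi2_contingency

table = [
    [7, 14 - 7],    # human studies: protective, not protective
    [39, 41 - 39],  # animal studies: preventive effect, no clear effect
]

chi2, p, dof, expected = chi2_contingency(table)  # Yates-corrected for 2x2 tables
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4g}")
# One expected cell count falls below 5, so Fisher's exact test would be a
# reasonable alternative for a table this sparse.
```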
## 4. Discussion Hsp70 proteins are emerging as powerful dichotomous immune-modulatory molecules that can have stimulatory and inhibitory effects on immune responses [63]. In our hypothetical “comparative study” model, we found that the balance between Hsp72 promotion and control of inflammatory responses and sepsis outcome differed unpredictably between human and animal studies. Clinical studies were inconclusive, showing either a low probability of protection (16.7% among septic patients) or even a possible relation to mortality and infections. In contrast, almost all (94.7%) septic animal in vivo and in vitro studies showed a biochemical, biological, and clinical protective effect for Hsp72 in sepsis. This might be because ever more purified target cell populations are used to provide insight into the direct effects of molecules on cells, so that much of the clinical information regarding the net response that occurs in vivo is missing [63]. ### 4.1. Stress Proteins Induction Sepsis, endotoxin tolerance, and heat shock all display downregulation of innate immunity, sharing a common immunosuppressive effect, possibly through HS factor 1 (HSF1)-mediated competitive inhibition of nuclear factor kappa-B (NF-κB) binding [45]. It has been shown that multiple chaperones or cochaperones, including Hsp72, tend to form a complex with HSF1 monomers [64]. Once a cell is exposed to stress, these chaperones and cochaperones bind to denatured and damaged proteins, thereby “releasing” the nonactive HSF1 monomers to subsequently undergo homotrimerization [65].
However, while homotrimerization is sufficient for DNA binding and nuclear translocation, the magnitude and duration of transcriptional activity are regulated by inducible phosphorylation of specific serine residues of HSF1 by several protein kinases (Erk1/2, glycogen synthase kinase, protein kinase C) [64]. Once inside the nucleus, HSF1 binds to a heat shock element (HSE) in the promoter of Hsp genes, which is defined by a tandem repeat of the pentamer nGAAn arranged in an alternating orientation either “head to head” (e.g., 5′-nGAAnnTTCn-3′) or “tail to tail” (e.g., 5′-nTTCnnGAAn-3′) [66], resulting in the upregulation of stress protein gene expression [67]. Thus, the intracellular accumulation of denatured or improperly folded proteins in response to stress is believed to be the universal signal resulting in the stress-induced gene expression of stress proteins [68, 69], which is proportional to the severity of the stress [70]. Besides the innate immune response, stress proteins also seem to activate the adaptive immune response [71]. Thus, they have the capacity to elicit a pathogen-specific immune response [72] and to mediate the induction of peptide-specific immunity, eliciting potent T cell responses against the chaperoned peptide [73]. ### 4.2. Experimental Hsp72 Studies Hsp72 is the most highly induced stress protein in cells and tissues undergoing the stress response [74] and is central to the cytoprotective properties observed in patients with a variety of critical illnesses [52] or injuries [75]. Cell cycle components, regulatory proteins, and proteins in the mitogenic signal cascade may be protected by the molecular chaperone Hsp72 during periods of stress, by impairing proteasomal degradation of IkappaBalpha (IκBα) [47]. In addition, binding of Hsp72 to the Ser/Thr protein kinase IRE1a enhances IRE1a/X-box binding protein XBP1 signaling at the endoplasmic reticulum and inhibits endoplasmic reticulum stress-induced apoptosis [76]. Thus, increased expression of Hsp72 by gene transfer/transfection has been demonstrated to confer protection against in vitro toxicity secondary to lethal hyperthermia [77], endotoxin [78], nitric oxide [79], hyperoxia [80], lung inflammation and injury [81], and in vivo ischemia-reperfusion injury [82]. Conversely, microinjection of anti-Hsp72 antibody into cells impaired their ability to achieve thermotolerance [83]. We showed that in septic animal models, all reported Hsp72 induction methods increased intracellular Hsp72; this was associated with reduced proinflammatory cytokines, decreased organ damage, clinical improvement, and enhanced survival. Analysis of the reviewed studies revealed differing methodologies for approaching the biological and/or genetic implication of Hsp72 in the sepsis process. #### 4.2.1. Transgenic Animals When challenged with systemic endotoxin, HSF1-deficient [84] or Hsp72−/− mice [49] had increased apoptosis and mortality compared to wild-type (WT) mice. Hsp72 expression was also required for Gln’s protective effects on survival and tissue injury [34], an effect not seen in Hsp72−/− mice [85]. In contrast, in transgenic mice overexpressing the human Hspa12b gene, Hsp72 attenuated the endotoxin-induced cardiac dysfunction and leucocyte infiltration into the myocardium [51]. #### 4.2.2.
Hsp72 Overexpression with Adenovirus Injection (AdHSP) Hsp72 overexpression with adenovirus injection prevented the LPS-induced increase in tumor necrosis factor-alpha (TNFα) and IL-6 levels associated with inhibited IκBα degradation [36] through NF-κB pathway [47]. Increases in levels of Hsp72 by gene transfection attenuated LPS- or TNFα-induced high mobility group box protein-1 (HMGB1) cytoplasmic translocation and release [12], decreased inducible NO synthase (iNOS) messenger RNA expression [45], and protected cells from programmed cell death [46]. Thus, AdHSP protected against sepsis-induced lung injury [86] by reducing nuclear caspase-3 [87], prevented alveolar type II cell proliferation [88], and improved short-term survival following CLP [89]. #### 4.2.3. Exogenous Hsp72 At the cellular level, Hsp72 preparations not only inhibited LPS-induced reactive oxygen species production and decreased NO expression in macrophages, but they also partially normalized the disturbed neutrophil apoptosis [37]. Prophylactic administration of exogenous human Hsp72 normalized inflammatory responses [38], limited host tissue damage [48], and reduced mortality rates [39]. Liposomal transfer of Hsp72 into the myocardium abolished LPS-induced contractile dysfunction [44], reduced mortality rates, and modified hemostasis and hemodynamics [40]. Intestinal Hsp72 overexpression reversed toll-like receptor (TLR)-4-induced cytokines and enterocyte apoptosis and prevented and treated experimental necrotizing enterocolitis [50]. Thus, mammalian Hsp72 appears to be an attractive target in therapeutic strategies designed to stimulate endogenous protective mechanisms against many deleterious consequences of septic shock by accelerating the functional recovery of susceptible organs in humans [40, 90]. #### 4.2.4. Glutamine Although Gln has little effect under basal conditions [43], endotoxin-treated animals given Gln exhibited dramatic increases in tissue Hsp72 expression [26], marked reduction of end-organ damage [28], attenuation of cytokine release [41] and peroxide biosynthesis, and improved vascular reactivity [29] associated with a significant decrease in mortality [91]. The molecular mechanism of Gln-induced Hsp72 expression appears to be mediated via enhancement of O-linked β-N-acetylglucosamine modification and subsequently to increased levels of endonuclear HSF1 expression [43] and HSF1 transcription activity [42].In a recent study, septic mice with Gln administration showed less severe damage to the kidneys and exhibited decreased HMGB1 and TLR4 in kidney tissues [35]. In Gln-treated rats, lung Hsp72 and HSF1-p expressions were enhanced [32, 92], lung HMGB1 expression and NF-κB DNA-binding activity were suppressed, and ARDS was attenuated and survival improved [33]. By inducing Hsp72, Gln attenuated LPS-induced cardiomyocyte damage [42] and left ventricular dysfunction [27] whereas Gln-treated sheep had a greater increase in myocardial Hsp72 immunoreactivity without aggravating the hyperdynamic circulation after endotoxemia [31]. In a rat brain model of endotoxemia, Gln upregulated the expression of Hsp72 and decreased the magnitude of apoptosis by inhibiting the translocation of NF-κB from the cytoplasm to the nucleus [30]. #### 4.2.5. 
Hyperthermic Heat Shock Subjected to a brief hyperthermic heat shock, Hsp72 conferred protection against sepsis-related circulatory fatality via inhibition of iNOS gene expression through prevention of NF-κB activation in cellular processes that included prevention of IκB kinase activation [25] and inhibition of IκBα degradation [20]. Also, Hsp72 induction by thermal pretreatment [21] attenuated proinflammatory cytokines [22] and improved survival in the LPS-induced systemic inflammation model, potentially involving Hsp-mediated inhibition of HMGB1 secretion [23]. A HS response induction of Hsp72 mRNA and protein expression in the lung has been shown to be associated with reduced lung injury [18], improved lung function [93], and survival [94].Heat shock pretreatment could also attenuate the electrocortical dysfunction in rats with LPS-induced septic response, suggesting that HS induced Hsp72 might potentially be used to prevent septic encephalopathy in sepsis [24]. Similarly, HS treatment led to Hsp72 overexpression and preserved the expression of the enzyme mitochondrial cytochrome c oxidase complex associated with the minimization of ultrastructural deformities during sepsis [19]. Interestingly, Gln increased DNA binding of HSF1 in HS cells but in its absence ornithine was able to rescue the heat-induced DNA binding of HSF1 [43]. ### 4.3. Human Studies Although the release of the Hsp72 in sepsis serves as a host impending danger signal to neighboring cells and might exert a cytoprotective function at low serum levels, it might also potentiate an already active host immune response leading to poor outcome once a certain critical threshold is attained. Such a sensitive balance could be an explanation of the surprising finding of this study, showing that only 16.7% of the 6 human septic studies demonstrated an Hsp72 protective effect compared to 95.8% protection shown in the 41 septic animal studies. In addition, by experimentally studying healthy individuals rather than patients in a real clinical setting, human studies mix up mild molecular reactions to stress with severe infectious systemic inflammatory response syndrome (SIRS), being thereby unconvincing and unable to verify results of experimentally controlled septic animal models. #### 4.3.1. Intracellular Hsp72: In Vitro Studies (Cell Models) Human in vitro studies, mainly examining intracellular Hsp72 expression in hPBMC or hPMNL in patients and healthy individuals by using HS, Gln, exogenous Hsp72, and Hsp72 inhibitors or agonists, are inconclusive [57]. Thus, although Gln infusion altered neither endotoxin-induced systemic inflammation nor early expression of Hsp72 in isolated PBMCs in healthy volunteers [53], inducibility of ex vivo Hsp72 was impaired in peripheral blood lymphocytes of patients with severe sepsis [95], possibly contributing to immune dysfunction of T and B lymphocyte responses in resisting infection in severe sepsis [56].Enhanced Hsp72 response in endurance-trained individuals, however, improved heat tolerance through both anti-inflammatory and antiapoptotic mechanisms [58]. Also, rHsp72 preconditioning ameliorated reactive oxygen species, TNFα, and CD11b/CD18 adhesion receptor expression after lipoteichoic acid addition [39]. Sepsis was shown to enhance expression of iHsp72 in PBMCs correlated to plasma TNFα concentrations [54] and in activated PMNLs, in which oxidative activity was increased and apoptosis was inhibited [55]. 
Similarly, using various Gln doses, proinflammatory cytokine release could directly be attenuated in PBMCs through enhancement of Hsp72 expression [61]. Overexpression of Hsp72 attenuated NF-κB activation and proinflammatory cytokine release [88, 96], inhibited LPS-mediated apoptosis, and protected lung epithelial cells [80] and pulmonary artery endothelial cells from oxidant-mediated [97] and inflammation-induced lung injury [59]. #### 4.3.2. Extracellular Hsp72: In Vivo Studies (Serum Hsp) Although PBMC Hsp72 expression was shown to be markedly decreased in critically ill septic patients [56], a significant increase in serum Hsp72 levels was reported in children with septic shock [52]. Extracellular Hsp72, reflected by increased serum levels, was also evident in children with acute lung injury [81] or following cardiopulmonary bypass [98]. Results of a recent adult study also indicated that increased serum Hsp72 is associated with mortality in sepsis [15]. Worse outcome associated with extracellular Hsp72 has also been reported in coronary artery disease [99], liver disease [90], sickle cell disease vasoocclusive crisis [100], and preeclampsia [101]. Heat shock proteins are markedly induced in response to a diverse range of cellular insults, being a reliable danger marker of cell stress [102]. Thus, extracellular Hsps act as a “danger signal,” activating immune-competent cells through LPS TLR4/CD14-dependent signaling [103]. According to the “danger hypothesis,” the release of stress proteins from severely stressed or damaged cells serves as a host impending danger signal to neighboring cells [104]. They are released in a nonspecific manner from dying, necrotic cells [105] or released from viable cells in a specific and inhibitable manner [106, 107]. Using viable cell counts and lactate dehydrogenase measurements, the release of Hsp72 was shown not to be due to cellular damage [60]. Recent studies suggest that Hsp72 is actively released via an exosome-dependent nonclassical protein secretory pathway, possibly involving lysosomal lipid rafts [108]. Immune cell receptors capture Hsps released from necrotic cells or Hsp-containing exosomes [109], and receptor engagement by Hsp72 increases dendritic cell production of TNFα, IL-1b, IL-6, and chemokines [110]. The host innate immune response occurs through NF-κB-dependent proinflammatory gene expression via TLR4 and TLR2 [111], similar to LPS-mediated signal transduction [112]. ### 4.4. Factors Influencing the Heat Shock Proteins’ Protective Role in Sepsis Recent work demonstrated that febrile-range temperatures achieved during sepsis and noninfectious SIRS correlated with detectable changes in stress gene expression in vivo (whole blood messenger RNA), thereby suggesting that fever can activate Hsp72 gene expression and modify innate immune responses [113]. Hsp72 serum levels may also be modulated according to the patient's oxidant status [15] and prevent excessive gut apoptosis and inflammation in an age-dependent response to sepsis [49]. Importantly, Hsp72 inhibited LPS-induced NO release but only partially reduced the LPS-increased expression of iNOS mRNA and exhibited LPS-induced NF-κB DNA binding and LPS tolerance; in contrast, HS inhibited LPS-induced NF-κB and HSF1 activity whereas HSF1 inhibited NF-κB DNA binding [45]. A significant body of preexisting literature has hypothesized a relationship between Hsp72 expression and Gln’s protection in both in vitro and in vivo settings [32, 43, 62, 114, 115].
Pioneering studies showed that Gln supplementation could attenuate lethal heat and oxidant injury and increase Hsp72 expression in intestinal epithelial cells [116–118]. Compared, however, with whey protein supplementation in a randomized, double-blinded, comparative effectiveness trial, zinc, selenium, Gln, and intravenous metoclopramide conferred no advantage in the immune-competent population [6]. In addition, we recently showed that although apparently safe in animal models (pups), premature infants, and critically ill children, glutamine supplementation did not reduce mortality or late onset sepsis [119]. Methodological problems noted in the reviewed randomized experimental and clinical trials [119] should therefore be seriously considered in any future well-designed large blinded randomized controlled trial involving glutamine supplementation in severe sepsis. Drug interactions were also shown either to suppress Hsp72's protective effects, thereby exacerbating drug-induced side effects, or to induce Hsp72's beneficial effects, thereby suppressing drug-induced exacerbations. Thus, it was recently shown that bleomycin-induced pulmonary fibrosis is mediated by suppression of pulmonary expression of Hsp72 whereas an inducer of Hsp72 expression, such as geranylgeranylacetone, could be therapeutically beneficial for the treatment of gefitinib-induced pulmonary fibrosis [120]. Finally, critically ill patients display variable physiologic responses when stressed; gene association studies have recently been employed to explain this variability. Genetic variants of Hsp72 have also been associated with the development of septic shock in patients [121, 122]. Thus, the specific absence of Hsp72.1/3 gene expression can lead to increased mortality after septic insult [85]. ### 4.5. Limitations of the Study The major problem that limits the comparability with human sepsis is the fact that in most cases of animal models, various forms of preconditioning were employed. This approach is nonspecific, and only a minority (about 10%) of studies used genetically modified animals. Accordingly, important differences between cell and/or animal models versus clinical studies have been noted several times with various inflammatory pathways and have been written about extensively in the literature [123, 124]. To the best of our knowledge, however, such discrepancies have not been summarized in detail in the context of Hsp72 and sepsis; in our opinion, these findings might be helpful for cautiously interpreting experimental data in the critical care field.
## 5. Conclusions

Heat shock proteins are molecular chaperokines that prevent the formation of nonspecific protein aggregates and exhibit sophisticated protection mechanisms. Experimental studies have repeatedly shown a strong molecular, biological, and clinical protective effect for Hsp72 in sepsis. Clinical studies, once again, are inconclusive, varying from a protective in vitro effect to an in vivo association between Hsp72 and mortality. Severity of disease-related factors, genetic variants, oxidant status, and interventions such as temperature control, nutritional (glutamine) immune enhancement, or drug effects may unpredictably influence the efficacy of Hsp72 protection in sepsis. Our “comparative” study data demonstrate that cell protection with exogenous Hsp72, Hsp72 genes, heat stress, or glutamine is associated with induction of Hsp72 and that new Hsp72-targeted pharmaconutrition may be an approach to activating the preconditioning response in sepsis in clinical practice. However, as this hypothesis-generating study suggests, much more work is needed to clarify the cellular and molecular mechanisms by which Hsp72 signals “danger” and regulates immune function in response to sepsis.

## Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

---

*Source: 101023-2014-01-12.xml*
2014
# Surgical Resection for Small Cell Lung Cancer: Pneumonectomy versus Lobectomy

**Authors:** Jiang Yuequan; Zhang Zhi; Xie Chenmin

**Journal:** ISRN Surgery (2012)

**Publisher:** International Scholarly Research Network

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.5402/2012/101024

---

## Abstract

Background. There are some patients with SCLC who are diagnosed in the operating room by cryosection, and surgeons have to perform surgical resection for these patients. The aim of this study was to compare the effectiveness of pneumonectomy with lobectomy for SCLC. Methods. A retrospective study was undertaken in 75 patients with SCLC who were diagnosed by cryosection during surgery; 31 of them underwent pneumonectomy and 44 underwent lobectomy. Local recurrence rate and survival rate according to surgical procedure and cancer stage were analyzed. Results. There was a significant difference in overall survival rate between the lobectomy and pneumonectomy groups (P=0.044). For patients with stage II SCLC, the overall survival rate after pneumonectomy was significantly better than after lobectomy (P=0.028). No significant difference in overall survival rate was found between the two surgical groups in patients with stage III SCLC (P=0.933). The local recurrence rate in the lobectomy group was significantly higher than in the pneumonectomy group (P=0.0017). Conclusions. SCLC was responsive to surgical therapy. When surgeons have to select an appropriate method of operation for patients with SCLC during surgery, pneumonectomy may be the right choice for these patients. Pneumonectomy can result in significantly better local control and a higher survival rate compared with lobectomy.

---

## Body

## 1. Background

According to World Health Organization (WHO) statistics, more than 1 million cases of lung cancer are diagnosed annually around the world. The incidence of small cell lung cancer (SCLC) is about 20–25% of all newly diagnosed lung cancers [1]. SCLC is considered distinct from other lung cancers because of its clinical and biologic characteristics. It exhibits aggressive behavior, with rapid growth and early spread. SCLC seems sensitive to both chemotherapy and radiotherapy, but the overall 5-year survival rate is still poor despite this sensitivity [2]. Although the efficacy of surgery for SCLC is controversial, surgical excision is still believed to be a curative treatment. In fact, some patients with SCLC are diagnosed in the operating room by cryosection, and for these patients the surgeon has to choose the proper surgical procedure. We found that some patients with SCLC who underwent pneumonectomy experienced long-term survival. We supposed that pneumonectomy might achieve complete resection and confer a survival advantage for these patients. This study reviewed the records of 75 patients with SCLC diagnosed by intraoperative cryosection and compared the therapeutic efficacy of pneumonectomy and lobectomy in patients with SCLC.

## 2. Methods

From January 1982 to December 2010, there were 85 patients who did not have a confirmed diagnosis of SCLC before resection and who underwent surgery at the Department of Thoracic Surgery of Chongqing Cancer Hospital & Institute. For 51 of the 85 patients (60%), histological or cytological diagnosis was not obtained preoperatively.
For the remaining 34 patients, the preoperative diagnosis was adenocarcinoma in 11 cases, bronchioloalveolar carcinoma in 11 cases, and squamous cell carcinoma in 12 cases. Two patients had an incomplete resection, and 1 patient had unresectable disease. Six patients had the pathologic subtype of combined histology tumor (mixtures of SCLC with non-SCLC components). One patient died of perioperative complications. Thus, 75 patients with SCLC were included in this study. This study was approved by the Ethics Committee of Chongqing Cancer Hospital & Institute, China.

There were 69 men and 6 women, with a median age of 56 years (range 41–71 years). The preoperative assessments included chest roentgenography, computed tomography of the chest, external ultrasonography of the abdomen, and bone scintigraphy. Magnetic resonance imaging of the brain was used in 51 patients. In this study, 61 patients underwent bronchoscopy and 23 patients underwent mediastinoscopy without a definite diagnosis of SCLC. Fifty-six patients underwent PET-CT (positron emission tomography-computed tomography). Because none of the patients in this study had a pathological diagnosis of SCLC preoperatively, induction chemotherapy was not performed.

All operations were performed with curative intent, and every patient underwent mediastinal lymph node resection. Pathologic staging was undertaken according to the 7th edition of the AJCC staging system for lung cancer. All patients were referred for consideration of adjuvant chemotherapy and prophylactic cranial irradiation (PCI). Postoperative chemotherapy was performed with the PE regimen, that is, etoposide and either cisplatin or carboplatin. Four to six cycles of chemotherapy were given if the patient’s condition after surgery allowed the treatment to be well tolerated. Eight patients were not treated with PCI, and 5 of these 8 patients were not treated with adjuvant chemotherapy.

### 2.1. Followup

Following hospital discharge, patients with SCLC were regularly monitored in the outpatient department at intervals of 1 month for the first year, 3 months for the next 2 years, and every 6 months thereafter. All patients in this study underwent a clinical evaluation that included chest radiography, external ultrasonography of the abdomen, computed tomography (CT) scans of the thorax, and bone emission computed tomography (ECT) scanning at least once every half year. Local recurrence was defined as recurrence that occurred within the ipsilateral hemithorax, including the mediastinum.

### 2.2. Statistical Analysis

Survival was defined as the interval between the date of surgery and the date of death or last followup. Survival rates were calculated using the Kaplan-Meier method, and the differences were compared using the log-rank test. Comparisons of continuous and dichotomous variables between groups were performed with the Student t-test and χ² test, respectively. All analyses were accomplished with the SPSS 13 statistical package.
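The survival and group comparisons described in Section 2.2 were run in SPSS 13, and individual-level data are not reported in the paper. As a rough illustration of the same workflow, the sketch below performs a Kaplan-Meier estimate, a log-rank comparison, a Student t-test, and an uncorrected χ² test in Python using the lifelines and SciPy libraries. The per-patient follow-up times and ages are hypothetical placeholders; only the 2 × 2 sex table is taken from Table 1, and assuming an uncorrected χ² was used, it comes out close to the reported P = 0.6782.

```python
# Illustrative re-implementation of the statistical methods described above.
# The per-patient durations, event flags, and ages are HYPOTHETICAL placeholders;
# the original analyses were performed in SPSS 13.
import numpy as np
from scipy import stats
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical follow-up data: months from surgery to death or last followup,
# with an event flag (1 = died, 0 = censored at last followup).
lob_months, lob_event = np.array([20, 14, 35, 9, 60, 22, 18]), np.array([1, 1, 1, 1, 0, 1, 1])
pneu_months, pneu_event = np.array([28, 40, 65, 19, 31, 55]), np.array([1, 1, 0, 1, 1, 0])

# Kaplan-Meier survival estimates for each surgical group.
kmf = KaplanMeierFitter()
kmf.fit(lob_months, event_observed=lob_event, label="lobectomy")
print("lobectomy median survival:", kmf.median_survival_time_)
kmf.fit(pneu_months, event_observed=pneu_event, label="pneumonectomy")
print("pneumonectomy median survival:", kmf.median_survival_time_)

# Log-rank test comparing the two survival curves.
lr = logrank_test(lob_months, pneu_months,
                  event_observed_A=lob_event, event_observed_B=pneu_event)
print("log-rank P =", lr.p_value)

# Student t-test for a continuous variable (hypothetical ages) and an
# uncorrected chi-square test for sex, using the counts from Table 1.
t_stat, t_p = stats.ttest_ind([58, 61, 55, 60, 63], [57, 59, 56, 60])
chi2, chi_p, dof, expected = stats.chi2_contingency([[40, 4], [29, 2]], correction=False)
print("t-test P =", t_p, "| sex chi-square P =", round(chi_p, 4))  # ~0.678
```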
## 3. Results

### 3.1. Surgery

Thirty-one patients underwent pneumonectomy (7 right and 24 left pneumonectomies). Forty-four patients underwent lobectomy (including 12 patients who underwent sleeve resection). The lobectomy procedures included 12 right upper lobectomies, 1 middle lobectomy, 2 right upper and middle lobectomies, 2 middle and right lower lobectomies, 9 right lower lobectomies, 10 left upper lobectomies, and 8 left lower lobectomies. The patients were divided into two groups: the pneumonectomy group (n=31) and the lobectomy group (n=44).

### 3.2. Characteristics of the Patients

The clinical and pathologic characteristics of the two groups of patients are presented in Table 1. There were no statistical differences between the two groups regarding age, sex, or adjuvant therapy. The pathologic stage in the lobectomy group was stage I in 3, stage II in 31, and stage III in 10 patients. The pathologic stage in the pneumonectomy group was stage II in 24 and stage III in 7 patients; there were no patients with stage I disease in the pneumonectomy group. Statistical analysis showed no significant difference in the distribution of pathologic stage between the two groups (P=0.249).

Table 1: Characteristics of the patients with lobectomy and pneumonectomy.

| Characteristic | Lobectomy (n=44) | Pneumonectomy (n=31) | P value |
| --- | --- | --- | --- |
| Age, mean ± SD (years) | 58.5 ± 18.4 | 57.9 ± 17.8 | 0.8609 |
| Gender: male | 40 | 29 | 0.6782 |
| Gender: female | 4 | 2 | |
| Adjuvant therapy: chemotherapy alone | 1 | 2 | 0.3631 |
| Adjuvant therapy: chemotherapy plus PCI | 43 | 29 | |
| Pathologic stage: I | 3 | 0 | |
| Pathologic stage: II | 31 | 24 | 0.3272 |
| Pathologic stage: III | 10 | 7 | |

### 3.3. Survival Rate and Local Recurrence of the Patients with SCLC

The median survival time and 5-year survival rate for the entire cohort were 22 months and 20.34%. They were 27 months and 26.2% for stage II and 18 months and 0.0% for stage III. The median survival time and 5-year survival rate of patients with SCLC were 20 months and 11.1% after lobectomy and 28 months and 24.0% after pneumonectomy (Table 2). There was a significant difference in overall survival rate between the two surgical groups (P=0.044; Figure 1).

Table 2: Comparison of local recurrence and survival rate.

| | Lobectomy | Pneumonectomy | P |
| --- | --- | --- | --- |
| All stages: median survival time (CI) | 20 (15.83–24.16), n=44 | 28 (21.52–34.48), n=31 | |
| All stages: 5-year survival rate | 11.1% | 24.0% | 0.044 |
| All stages: local recurrence rate | 59.1% (26/44) | 22.6% (7/31) | 0.0017 |
| Stage II: median survival time (CI) | 21 (17.03–24.97), n=31 | 30 (18.48–41.52), n=24 | |
| Stage II: 5-year survival rate | 16.7% | 31.6% | 0.028 |
| Stage II: local recurrence rate | 61.3% (19/31) | 20.8% (5/24) | 0.0027 |
| Stage III: median survival time (CI) | 16 (11.35–20.65), n=10 | 18 (5.16–30.83), n=7 | |
| Stage III: 5-year survival rate | 0 | 0 | 0.933 |
| Stage III: local recurrence rate | 60% (6/10) | 28.6% (2/7) | 0.3348 |

CI: 95% confidence interval.

Figure 1: Survival curves according to surgical procedure. The 5-year survival rate for patients with SCLC was 11.1% after lobectomy and 24.0% after pneumonectomy. There was a significant difference in overall survival rate between the two groups (P=0.044).

The median survival time and 5-year survival rate of patients with stage II SCLC were 22 months and 16.7% in the lobectomy group and 30 months and 31.6% in the pneumonectomy group.
For patients with stage II SCLC, the overall survival rate after pneumonectomy was significantly better than after lobectomy (P=0.028, Figure 2). For patients with stage III SCLC, the median survival time was 16 months in the lobectomy group and 18 months in the pneumonectomy group, respectively. No patient with stage III SCLC survived for more than 5 years in this study. No significant difference in overall survival rate was found between the lobectomy and pneumonectomy groups in patients with stage III SCLC (P=0.933, Figure 3).

Figure 2: Survival curves of patients with stage II SCLC according to surgical procedure. The 5-year survival rate of patients with stage II SCLC was 16.7% in the lobectomy group and 31.6% in the pneumonectomy group. For patients with stage II SCLC, the overall survival rate after pneumonectomy was significantly better than after lobectomy (P=0.028).

Figure 3: Survival curves of patients with stage III SCLC according to surgical procedure. No significant difference in overall survival rate was found between the lobectomy and pneumonectomy groups in patients with stage III SCLC.

The local recurrence rates of the two surgical procedures are compared in Table 2. The local recurrence rate was 59.1% (26/44) in the lobectomy group and 22.6% (7/31) in the pneumonectomy group; the difference between the two groups was statistically significant (P=0.0017). By stage, there was a statistically significant difference in local recurrence rate between the two surgical groups in stage II SCLC (P=0.002), but no significant difference was found between the two groups in stage III SCLC (P=0.3348).

In our study, the patients with sleeve resection were included in the lobectomy group, because there was no significant difference in overall survival rate between the patients who underwent standard lobectomy and those who underwent sleeve resection lobectomy (P=0.877, Figure 4).

Figure 4: Survival curves of patients according to sleeve resection lobectomy versus standard lobectomy. There was no significant difference in overall survival rate between the patients who underwent standard lobectomy and those who underwent sleeve resection lobectomy (P=0.877).
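As a quick sanity check on the local recurrence comparison, the snippet below rebuilds the 2 × 2 table from the counts in Table 2 (26/44 recurrences after lobectomy versus 7/31 after pneumonectomy). Assuming an uncorrected Pearson χ² test was used, it reproduces a P value of about 0.0017, matching the reported figure; this is only an illustrative check, not the authors' original SPSS analysis.

```python
# Re-deriving the overall local-recurrence comparison from Table 2.
# Assumes an uncorrected Pearson chi-square test, consistent with the reported P = 0.0017.
from scipy.stats import chi2_contingency

recurrence_table = [
    [26, 44 - 26],  # lobectomy: recurrence, no recurrence
    [7, 31 - 7],    # pneumonectomy: recurrence, no recurrence
]

chi2, p, dof, expected = chi2_contingency(recurrence_table, correction=False)
print(f"chi-square = {chi2:.2f}, df = {dof}, P = {p:.4f}")  # ~9.84, 1, 0.0017
```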
## 4. Discussion

The efficacy of surgery in SCLC is controversial. About 30 years ago, the British Medical Research Council performed a randomized trial of surgery versus radiotherapy for SCLC. The results showed that surgery and radiotherapy were equally ineffective in limited-stage SCLC [2–4]. This result has been widely cited as evidence that surgical treatment of SCLC is ineffective. However, proponents of surgery argue that there were some limitations in that randomized trial: CT scanning and mediastinoscopy were unavailable at that time, the patients recruited in that trial would not be considered suitable for surgery by current standards, complete resection was achieved in only 34 (48%) patients, and 37 (52%) patients underwent exploratory thoracotomy only.

With the advent of new diagnostic tools, such as spiral computed tomography and positron emission tomography, limited disease can be more readily identified and adequately staged preoperatively. Some clinicians believe that good results can be achieved in selected patients with complete resection [5, 6]. Moreover, with platinum agents, granulocyte colony-stimulating factor, and serotonin-antagonizing antiemetic agents becoming available, the chemotherapeutic regimens for SCLC have changed [7–10]. Recent studies reported that multimodality treatment involving surgery achieved a good prognosis in SCLC patients with limited-stage disease, thus suggesting the importance of surgery with curative intent [11, 12].

Although many studies have confirmed that multimodality treatment including surgery and chemotherapy might represent an effective form of treatment for limited SCLC, it is generally accepted that surgical resection is appropriate only for patients with stage I SCLC [13, 14]. However, the stage of SCLC is usually underestimated preoperatively [2, 15]. Lymph node metastasis is often underestimated, and occult mediastinal involvement might be missed even by mediastinoscopy. Eric Lim and colleagues reported 59 patients with stage I to III SCLC who underwent lung resection with nodal dissection and showed an excellent overall 5-year survival of 52%. Their surgical series suggests that good results can be achieved in selected patients with complete resection throughout the spectrum of UICC stage I to III [5].

Our study reviewed 75 patients who did not have a confirmed diagnosis of SCLC preoperatively. One reason is that neither CT nor bronchoscopy was available in our institute until 1989, and not every patient in this group received these assessments even after 1989, for economic reasons. Also, some of these patients were misdiagnosed as having NSCLC. Their postoperative pathologic stages included stage I in 3 cases, stage II in 55 cases, and stage III in 17 cases.
The median survival time and postoperative 5-year survival rate of patients with SCLC in our surgical series were 22 months and 20.34%. These figures are comparable with the results reported by Brock et al. [14] and are poorer than the results reported by Inoue et al. [15].

Comparing the median survival time and survival rate of the two surgical procedures, we found that the median survival time and 5-year survival rate after pneumonectomy were better than those after lobectomy (28 months and 24.0% versus 20 months and 11.1%), and statistical analysis showed a significant difference in overall survival rate between the two groups (P=0.044). Moreover, our study showed that for patients with stage II SCLC, the postoperative overall survival rate after pneumonectomy was significantly better than after lobectomy (P=0.028). For patients with stage III SCLC, there was no difference between the two surgical groups in overall survival rate (P=0.933).

We examined 31 cases of pneumonectomy for SCLC in this study. This was because we found that SCLC usually originates in the lung’s large central airways, invades the main bronchus, and fuses with metastatic hilar lymph nodes at presentation. The metastatic lymph nodes usually also involve the interlobular lymph nodes and the peribronchial lymph nodes of the neighboring lobe. Lobectomy often leaves these peribronchial lymph nodes in the neighboring lobe, and it may even be impossible to distinguish the primary tumor from lymph node metastases during the operation. Shepherd reported that local control remains a problem, with one-third of patients having recurrence only at the primary site; failure to achieve control at the primary site remains the single most important obstacle to cure in patients with limited SCLC [16]. We believe that pneumonectomy, rather than lobectomy, can achieve complete resection of the neoplasm in SCLC. In this study, the local recurrence rate of patients with SCLC was 59.1% (26/44) in the lobectomy group and 22.6% (7/31) in the pneumonectomy group, a significant difference between the two groups (P=0.0017). These results indicate that pneumonectomy can afford better curability than lobectomy.

## 5. Conclusion

Chemotherapy in combination with radiation therapy is the mainstay of treatment for SCLC, but there are still some patients with SCLC who are diagnosed intraoperatively. In this specific situation, the surgeon must select an appropriate method of operation for these patients. From our study, we conclude that pneumonectomy can achieve complete resection and reduce the local recurrence rate in patients with SCLC; it results in better local control than lobectomy, and the survival of patients with SCLC after pneumonectomy is better than after lobectomy.

Nevertheless, there are some limitations of this study that should be noted. The patients in the two surgical groups were not selected by randomization. In the pneumonectomy group, 70 years was the upper age limit, and patients undergoing pneumonectomy had good pulmonary and cardiac function; these factors may have influenced the survival rate. The long-term complications of the two surgical procedures were also not analyzed in this study. A mediastinoscope was not available until after 2000, so only 23 patients in this study underwent mediastinoscopy. In the past 15 years, we have seldom performed surgery for patients with SCLC, because we also consider that surgical intervention is not standard in the management of SCLC.
Thus, most of the cases in this study were treated more than 15 years ago.

---

*Source: 101024-2012-05-30.xml*
101024-2012-05-30_101024-2012-05-30.md
24,204
Surgical Resection for Small Cell Lung Cancer: Pneumonectomy versus Lobectomy
Jiang Yuequan; Zhang Zhi; Xie Chenmin
ISRN Surgery (2012)
Medical & Health Sciences
International Scholarly Research Network
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.5402/2012/101024
101024-2012-05-30.xml
--- ## Abstract Background. There are some patients with SCLC that are diagnosed in the operating room by cryosection and surgeons had to perform surgical resection for these patients. The aim of this study is to compare the effective of pneumonectomy with lobectomy for SCLC. Methods. A retrospective study was undertaken in 75 patients with SCLC that were diagnosed by cryosection during surgery. 31 of them underwent pneumonectomy, 44 underwent lobectomy. Local recurrence rate and survival rate according to surgical procedures and cancer stages were analyzed. Results. There was significant difference in the overall survival rate between lobectomy and pneumonectomy groups (P=0.044). For patients with stage II SCLC, the overall survival rate after pneumonectomy was significantly better than after lobectomy (P=0.028). No significant difference in overall survival rate was found between the two surgical groups in patients with stage III SCLC (P=0.933). The local recurrence rate in lobectomy group was significant higher that in pneumonectomy group (P=0.0017). Conclusions. SCLC was responsive to surgical therapy. When surgeons have to select an appropriate method of operation for patients with SCLC during surgery, pneumonectomy may be the right choice for these patients. Pneumonectomy can result in significantly better local control and higher survival rate compare with lobectomy. --- ## Body ## 1. Background According to World Health Organization (WHO) statistics, more than 1 million cases of lung cancer are diagnosed annually around the world. The incidence of small cell lung cancer (SCLC) was about 20–25% of all newly diagnosed lung cancers [1]. SCLC is considered distinct from other lung cancers, because of their clinical and biologic characteristics. It exhibits aggressive behavior, with rapid growth and early spread. SCLC seems sensitive to both chemotherapy and radiotherapy, but the overall 5-year survival rate is still poor despite the sensitivity [2]. Although the efficacy of surgery for SCLC is controversial, surgical excision is still believed a curative treatment. In fact, some patients with SCLC were diagnosed in the operating room by cryosection. For these patients the surgeon had to choose proper surgical procedure. We found some patients with SCLC who underwent pneumonectomy experienced long-term survival. We supposed that pneumonectomy might achieve complete resection and conferred a survival advantage for these patients. This study reviewed the records of 75 patients with SCLC diagnosed by intraoperative cryosection and compared the therapeutic efficacy of pneumonectomy and lobectomy on patients with SCLC. ## 2. Methods From January 1982 to December 2010, there were 85 patients did not that a confirmed diagnosis of SCLC before resection and underwent surgery at the Department of Thoracic Surgery of Chongqing Cancer Hospital & Institute. For 51 of the 85 patients (60%), histological or cytological diagnosis was not obtained preoperatively. For the remaining 34 patients, the preoperative diagnosis of adenocarcinoma was in 11 cases, bronchioloalveolar carcinoma in 11 cases, squamous cell carcinoma in 12 cases.2 patients had an incomplete resection, and 1 patient had unresectable disease. 6 patients had the pathologic subtype with combined histology tumor (mixtures of SCLC with non-SCLC components). 1 patient died of perioperative complications. Thus 75 patients with SCLC were in this study. 
This study was approved by the Ethics Committee of Chongqing Cancer Hospital & Institute, China.There were 69 men and 6 women, with the median age of 56 years (range 41–71 years). The preoperative assessments included chest roentgenography, computed tomography of the chest, external ultrasonography of the abdomen and bone scintigraphy. Magnetic resonance imaging of the brain was used in 51 patients. In this study, 61 patients underwent bronchoscopy and 23 patients underwent mediastinoscopy without definite diagnosis of SCLC. 56 patients get PET-CT (Positron emission tomography-computed tomography). Because all the patients in this study had no pathological diagnosis of SCLC preoperatively, induction chemotherapy was not performed.All operations were performed with curative intent and every patient underwent mediastinal lymph node resection. Pathologic staging was undertaken according to the 7th edition of the AJCC staging system of lung cancer. All these patients were referred for consideration of adjuvant chemotherapy and prophylactic cranial irradiation (PCI). The postoperative chemotherapy was performed with the PE regimen that is etoposide and either cisplatin or carboplatin. Four to six cycles of chemotherapy were performed if the patient’s condition after surgery was well tolerable against the treatment. 8 patients were not treated with PCI and 5 of the 8 patients were not treated with adjuvant chemotherapy. ### 2.1. Followup Following hospital discharge, patients with SCLC were regularly monitored in the outpatient department at intervals of 1 month for the first 1 year, 3 months for the next 2 years, and every 6 months thereafter. All patients in this study underwent a clinical evaluation that included chest radiography, external ultrasonography of the abdomen, computed tomography (CT) scans of the thorax, and bone emission computed tomography (ECT) scanning at least once half year. Local recurrence was defined as recurrence that occurred within the ipsilateral hemithorax including the mediastinum. ### 2.2. Statistical Analysis Survival was defined as the interval between date of surgery and date of death or last followup. Survival rates were calculated using the Kaplan-Meier method and the differences were compared using the log-rank test. Comparisons of continuous and dichotomous variables between groups were performed with the Studentt-test and χ2 tests, respectively. All analyses were accomplished with SPSS 13 statistical package. ## 2.1. Followup Following hospital discharge, patients with SCLC were regularly monitored in the outpatient department at intervals of 1 month for the first 1 year, 3 months for the next 2 years, and every 6 months thereafter. All patients in this study underwent a clinical evaluation that included chest radiography, external ultrasonography of the abdomen, computed tomography (CT) scans of the thorax, and bone emission computed tomography (ECT) scanning at least once half year. Local recurrence was defined as recurrence that occurred within the ipsilateral hemithorax including the mediastinum. ## 2.2. Statistical Analysis Survival was defined as the interval between date of surgery and date of death or last followup. Survival rates were calculated using the Kaplan-Meier method and the differences were compared using the log-rank test. Comparisons of continuous and dichotomous variables between groups were performed with the Studentt-test and χ2 tests, respectively. All analyses were accomplished with SPSS 13 statistical package. ## 3. 
## 3. Results ### 3.1. Surgery 31 patients underwent pneumonectomy (7 right and 24 left pneumonectomies). 44 patients underwent lobectomy (including 12 patients who underwent sleeve resection). The lobectomy procedures included 12 right upper lobectomies, 1 middle lobectomy, 2 right upper and middle lobectomies, 2 middle and right lower lobectomies, 9 right lower lobectomies, 10 left upper lobectomies, and 8 left lower lobectomies. The patients were divided into two groups: the pneumonectomy group (n=31) and the lobectomy group (n=44). ### 3.2. Characteristics of the Patients The clinical and pathologic characteristics of the two groups are presented in Table 1. There were no statistical differences between the two groups regarding age, sex, or adjuvant therapy. The pathologic stage of the lobectomy group was stage I in 3, stage II in 31, and stage III in 10 patients. The pathologic stage of the pneumonectomy group was stage II in 24 and stage III in 7 patients; there were no patients with stage I disease in the pneumonectomy group. Statistical analysis showed no significant difference in the distribution of pathologic stage between the two groups (P=0.249).

Table 1. Characteristics of the patients with lobectomy and pneumonectomy.

| Characteristic | Lobectomy (n=44) | Pneumonectomy (n=31) | P value |
| --- | --- | --- | --- |
| Age, mean ± SD (years) | 58.5 ± 18.4 | 57.9 ± 17.8 | 0.8609 |
| Gender, male | 40 | 29 | 0.6782 |
| Gender, female | 4 | 2 |  |
| Adjuvant therapy, chemotherapy alone | 1 | 2 | 0.3631 |
| Adjuvant therapy, chemotherapy plus PCI | 43 | 29 |  |
| Pathologic stage I | 3 | 0 |  |
| Pathologic stage II | 31 | 24 | 0.3272 |
| Pathologic stage III | 10 | 7 |  |

### 3.3. Survival Rate and Local Recurrence of the Patients with SCLC The median survival time and 5-year survival rate for the entire cohort were 22 months and 20.34%; they were 27 months and 26.2% for stage II and 18 months and 0.0% for stage III. The median survival time and 5-year survival rate of patients with SCLC were 20 months and 11.1% by lobectomy and 28 months and 24.0% by pneumonectomy (Table 2). There was a significant difference in the overall survival rate between the two surgical groups (P=0.044; Figure 1).

Table 2. Comparison of local recurrence and survival rates.

|  | Lobectomy | Pneumonectomy | P |
| --- | --- | --- | --- |
| All stages: median survival time (CI) | 20 (15.83–24.16), n=44 | 28 (21.52–34.48), n=31 |  |
| All stages: 5-year survival rate | 11.1% | 24.0% | 0.044 |
| All stages: local recurrence rate | 59.1% (26/44) | 22.6% (7/31) | 0.0017 |
| Stage II: median survival time (CI) | 21 (17.03–24.97), n=31 | 30 (18.48–41.52), n=24 |  |
| Stage II: 5-year survival rate | 16.7% | 31.6% | 0.028 |
| Stage II: local recurrence rate | 61.3% (19/31) | 20.8% (5/24) | 0.0027 |
| Stage III: median survival time (CI) | 16 (11.35–20.65), n=10 | 18 (5.16–30.83), n=7 |  |
| Stage III: 5-year survival rate | 0 | 0 | 0.933 |
| Stage III: local recurrence rate | 60% (6/10) | 28.6% (2/7) | 0.3348 |

CI: 95% confidence interval.

Figure 1 Survival curves according to surgical procedures. The 5-year survival rate for patients with SCLC was 16.1% by lobectomy and 24.0% by pneumonectomy. There was a significant difference in the overall survival rate between the two groups (P=0.044). The median survival time and 5-year survival rate of patients with stage II SCLC were 22 months and 16.7% in the lobectomy group and 30 months and 31.6% in the pneumonectomy group. For patients with stage II SCLC, the overall survival rate after pneumonectomy was significantly better than after lobectomy (P=0.028, Figure 2). For patients with stage III SCLC, the median survival time was 16 months in the lobectomy group and 18 months in the pneumonectomy group. No patient with stage III SCLC survived for more than 5 years in this study.
No significant difference in overall survival rate was found between the lobectomy and pneumonectomy groups in patients with stage III SCLC (P=0.933, Figure 3). Figure 2 Survival curves of patients with stage II SCLC according to surgical procedures. The 5-year survival rate of patients with stage II SCLC was 16.7% in the lobectomy group and 31.6% in the pneumonectomy group. For patients with stage II SCLC, the overall survival rate after pneumonectomy was significantly better than after lobectomy (P=0.028). Figure 3 Survival curves of patients with stage III SCLC according to surgical procedures. No significant difference in overall survival rate was found between the lobectomy and pneumonectomy groups in patients with stage III SCLC. The local recurrence rates of the two surgical procedures are compared in Table 2. The local recurrence rate was 59.1% (26/44) in the lobectomy group and 22.6% (7/31) in the pneumonectomy group; the difference between the two groups was statistically significant (P=0.0017). By stage, there was a statistically significant difference in local recurrence rate between the two surgical groups in stage II SCLC (P=0.002), but no significant difference was found between the two groups in stage III SCLC (P=0.3348). In our study, the patients with sleeve resection were included in the lobectomy group because there was no significant difference in overall survival rate between the patients who underwent standard lobectomy and those who underwent sleeve resection lobectomy (P=0.877, Figure 4). Figure 4 Survival curves of patients according to sleeve resection lobectomy versus standard lobectomy. There was no significant difference in overall survival rate between the patients who underwent standard lobectomy and those who underwent sleeve resection lobectomy (P=0.877).
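As a quick arithmetic check (not part of the original analysis), the local-recurrence comparison above is a 2×2 contingency problem, so the reported P value can be approximated directly from the counts in Table 2:

```python
# Chi-square test on the raw local-recurrence counts (26/44 vs. 7/31).
# Without the continuity correction, the result should come out close to the
# reported P = 0.0017; this is an illustrative check, not the authors' analysis.
from scipy.stats import chi2_contingency

counts = [[26, 44 - 26],   # lobectomy: recurrence, no recurrence
          [7, 31 - 7]]     # pneumonectomy: recurrence, no recurrence

chi2, p, dof, expected = chi2_contingency(counts, correction=False)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```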
## 4. Discussion The efficacy of surgery in SCLC is controversial. About 30 years ago, the British Medical Research Council performed a randomized trial of surgery versus radiotherapy for SCLC. The result showed that surgery and radiotherapy were equally ineffective in limited-stage SCLC [2–4]. This result has been widely cited as evidence that surgical treatment of SCLC is ineffective. But proponents of surgery argue that there were some limitations in that randomized trial: CT scanning and mediastinoscopy were unavailable at that time, the patients recruited in that trial would not currently be considered suitable for surgery, complete resection was achieved in only 34 (48%) patients, and 37 (52%) patients underwent exploratory thoracotomy only. With the advent of new diagnostic tools, such as spiral computed tomography and positron emission tomography, limited disease can be more readily identified and adequately staged preoperatively. Some clinicians believe that good results can be achieved in selected patients with complete resection [5, 6]. Moreover, with platinum agents, granulocyte colony-stimulating factor, and serotonin-antagonizing antiemetic agents becoming available, the chemotherapeutic regimens for SCLC have changed [7–10]. Recent studies reported that multimodality treatment involving surgery achieved a good prognosis in SCLC patients with limited-stage disease, thus suggesting the importance of surgery with curative intent [11, 12]. Although many studies have confirmed that multimodality treatment including surgery and chemotherapy might represent an effective form of treatment for limited SCLC, it was generally accepted that surgical resection was appropriate only for patients with stage I SCLC [13, 14]. However, the stage of SCLC is usually underestimated preoperatively [2, 15]. Lymph node metastasis is often underestimated, and occult mediastinal involvement might be missed even by mediastinoscopy. Lim and colleagues reported 59 patients with stage I to III SCLC who underwent lung resection with nodal dissection and showed an excellent overall 5-year survival of 52%. Their surgical series suggests that good results can be achieved in selected patients with complete resection throughout the spectrum of UICC stage I to III [5]. Our study reviewed 75 patients who did not have a confirmed diagnosis of SCLC preoperatively. The reason is that CT and bronchoscopy were not available at our institute until 1989, and even after 1989 not every patient in this group received these assessments, for economic reasons; in addition, some of these patients were misdiagnosed as having NSCLC. Their postoperative pathologic stages included stage I in 3 cases, stage II in 55 cases, and stage III in 17 cases. The median survival time and postoperative 5-year survival rate of patients with SCLC in our surgical series were 22 months and 20.34%. These figures are comparable with the results reported by Brock et al. [14] and poorer than those reported by Inoue et al. [15]. Comparing the median survival time and survival rate of the two surgical procedures, we found that the median survival time and 5-year survival rate after pneumonectomy were better than those after lobectomy (28 months and 24.0% versus 20 months and 11.1%), and statistical analysis showed a significant difference in overall survival rate between the two groups (P=0.044).
Moreover, our study showed that for patients with stage II SCLC, the postoperative overall survival rate after pneumonectomy was significantly better than after lobectomy (P=0.028). For patients with stage III SCLC, there was no difference between the two surgical groups in overall survival rate (P=0.933). We examined 31 cases of pneumonectomy for SCLC in this study because we found that SCLC usually originates in the lung's large central airways, invades the main bronchus, and fuses with metastatic hilar lymph nodes at presentation. The metastatic lymph nodes usually also involve the interlobular lymph nodes and the peribronchial lymph nodes of the neighboring lobe. Lobectomy often leaves these peribronchial lymph nodes in the neighboring lobe, and it may even be impossible to distinguish the primary tumor from lymph node metastasis during the operation. Shepherd reported that local control remains a problem, with one-third of patients having recurrence only at the primary site; failure to achieve control at the primary site remains the single most important obstacle to cure in patients with limited SCLC [16]. We believe that pneumonectomy, rather than lobectomy, can achieve complete resection of the neoplasm in SCLC. In this study, the local recurrence rate of patients with SCLC was 59.1% (26/44) in the lobectomy group and 22.6% (7/31) in the pneumonectomy group, and the difference between the two groups was significant (P=0.0017). These results indicate that pneumonectomy can afford better curability than lobectomy. ## 5. Conclusion Chemotherapy in combination with radiation therapy is the mainstay of treatment for SCLC, but there are still some patients with SCLC who are diagnosed intraoperatively. In this specific situation, the surgeon must select an appropriate method of operation to complete the operation for these patients. In our study, we can draw the conclusion that pneumonectomy can achieve complete resection and reduce the local recurrence rate in patients with SCLC; it can result in better local control than lobectomy, and the survival of patients with SCLC after pneumonectomy is better than after lobectomy. Nevertheless, some limitations of this study should be noted. The patients in the two surgical groups were not selected by randomization. In the pneumonectomy group, the upper age limit of patients was 70 years, and patients selected for pneumonectomy had healthy lung function and strong cardiac function; these factors may have influenced the survival rate. The long-term complications of the two surgical procedures were also not analyzed in this study. A mediastinoscopy instrument was not available until after 2000, and only 23 patients in this study received mediastinoscopy. In the past 15 years, we have seldom performed surgery on patients with SCLC, because surgical intervention in the management of SCLC is not considered standard; thus, most of the cases in this study date from more than 15 years ago. --- *Source: 101024-2012-05-30.xml*
2012
# Takotsubo Syndrome Associated with ST Elevation Myocardial Infarction **Authors:** Saad Ezad; Michael McGee; Andrew J. Boyle **Journal:** Case Reports in Cardiology (2019) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2019/1010243 --- ## Abstract Background. Takotsubo syndrome is a reversible heart failure syndrome which often presents with symptoms and ECG changes that mimic an acute myocardial infarction. Obstructive coronary artery disease has traditionally been seen as exclusion criteria for the diagnosis of takotsubo; however, recent reports have called this into question and suggest that the two conditions may coexist. Case Summary. We describe a case of an 83-year-old male presenting with chest pain consistent with acute myocardial infarction. The ECG demonstrated anterior ST elevation with bedside echocardiography showing apical wall motion abnormalities. Cardiac catheterisation found an occluded OM2 branch of the left circumflex artery with ventriculography confirming apical ballooning consistent with takotsubo and not in the vascular territory supplied by the occluded epicardial vessel. Repeat echocardiogram 6 weeks later confirmed resolution of the apical wall motion abnormalities consistent with a diagnosis of takotsubo. Discussion. This case demonstrates the finding of takotsubo syndrome in a male patient with acute myocardial infarction. Traditionally, this would preclude a diagnosis of takotsubo; however, following previous reports of takotsubo in association with coronary artery dissection and acute myocardial infarction in female patients, new diagnostic criteria have been proposed which allow the diagnosis of takotsubo in the presence of obstructive coronary artery disease. This case adds to the growing body of literature that suggests takotsubo can coexist with acute myocardial infarction; however, it remains to be elucidated if it is a consequence or cause of myocardial infarction. --- ## Body ## 1. Introduction First described in a case series of Japanese patients in 1991 [1], takotsubo syndrome (TTS) is a rapidly reversible heart failure syndrome most commonly seen in postmenopausal women following emotional or physical stress [2]. Various terms have been used to describe this condition including broken heart syndrome [3], takotsubo cardiomyopathy [4], and stress-induced cardiomyopathy [5]. TTS is now the preferred nomenclature as patients with takotsubo do not appear to have primary muscle pathology [6]. Presentation mimics acute myocardial infarction (AMI) with chest pain and dyspnoea often associated with ST segment elevation or T wave inversion. In a contemporary western population, an estimated 0.9% of patients admitted for primary PCI were diagnosed with TTS [7]. However, data from the International Takotsubo Registry has shown coronary artery disease coexists in 15.3% of patients with a diagnosis of TTS [8]. The underlying pathophysiology has yet to be fully understood, although a rapid elevation in circulating catecholamine levels in response to stress has traditionally been believed to be a central feature [9]. More recently, however, it has been demonstrated that plasma catecholamine levels are normal or only mild-moderately elevated in patients with TTS [10] and that local cardiac sympathetic hyperactivation results in myocardial stunning [11]. ## 2. 
Case Presentation An 83-year-old gentleman with a past medical history of diet-controlled diabetes mellitus type 2, gout, and hypertension presented to our institution with a 4-hour history of upper abdominal pain and lower chest tightness associated with dyspnoea, which was partially relieved by intravenous morphine and sublingual glyceryl trinitrate administered by ambulance paramedics. On arrival in the emergency department, a 12-lead ECG showed minimal anterior ST elevation (Figure1(a)); therefore, a bedside echocardiogram was performed. This demonstrated hypokinesis of the apical third of the anterior, inferior, and lateral walls. Given the borderline ECG changes and regional wall motion abnormalities on echo, the patient was taken for emergency cardiac catheterisation.Figure 1 (a) Admission ECG showing minimal anterior ST elevation and concave ST segments in leads II, III, and aVF. (b) ECG 48 hours after presentation demonstrating deep T wave inversion across precordial and limb leads with a prolonged QTc. (a) (b)Angiography revealed an occluded obtuse marginal 2 (OM2) branch of the circumflex artery (Figure2(c)) with minor disease in the other major epicardial arteries (Figures 2(a) and 2(b)). Flow was restored following passage of the guidewire, and thrombus was clearly identifiable in the vessel. The lesion was treated with one 2.5 mm × 15 mm drug-eluting stent resulting in TIMI III flow (Figure 2(d)).Figure 2 Coronary angiography on admission. (a) Minor disease in the left anterior descending artery. (b) Mild-moderate disease in the right coronary artery. (c) Occluded OM2 branch (arrow) of the left circumflex artery. (d) TIMI III flow post percutaneous coronary intervention (PCI) with drug-eluting stent (DES). Left ventriculogram in the right anterior oblique (RAO) projection during diastole (e) and systole (f) showing mid apical pattern of takotsubo syndrome with sparing of the apical tip. (a) (b) (c) (d) (e) (f)Ventriculogram done in the RAO projection revealed mid and apical hypokinesis and ballooning with preserved basal function (Figures2(e) and 2(f)). Ventriculogram from the LAO projection showed posterior wall hypokinesis more in keeping with the ischaemic territory affected by acute plaque rupture.A venous blood gas revealed haemoglobin of 145 g/L (ref 120-170 g/L), normal electrolytes, and blood glucose of 8.7 mmol/L (ref 3.5-7.7 mmol/L). The patient’s initial troponin I was 365 ng/L (ref <26 ng/L) and peaked at 17,180 ng/L the following day. His ECG evolved to show deep symmetrical T wave inversion across the anterolateral and limb leads, clearly more extensive than the distribution of the infarct artery (Figure1(b)) associated with the prolongation of the QT interval. Formal echocardiogram performed 6 hours following percutaneous coronary intervention (PCI) showed severe apical ballooning and hypokinesis extending to mid cavity with preservation of basal function, consistent with TTS. The posterolateral wall was also noted to be akinetic in keeping with a region of infarction. There was mild LV systolic dysfunction (EF 45%).The patient was commenced on perindopril and atorvastatin in addition to dual antiplatelet therapy with aspirin and clopidogrel. On further questioning, no acute emotional triggers in the patient’s life could be identified. On day 3 of the patient’s admission, troponin was down trending at 8907 ng/L. He was discharged 4 days after presentation, following an uncomplicated inpatient stay. 
Follow-up echocardiography performed 6 weeks after discharge demonstrated restoration of normal LV systolic function and resolution of the previously seen regional wall motion abnormalities (Figure3).Figure 3 Transthoracic echocardiogram on admission and 6-week follow-up. Apical 4 chamber window at end-diastole (a) and at end-systole (b) showing apical dilatation during acute presentation. (c) Apical 4 chamber window at end-diastole at 6-week follow-up. (d) Apical 4 chamber window at end-systole at 6-week follow-up revealing resolution of apical ballooning. (a) (b) (c) (d) ## 3. Discussion The current Mayo diagnostic criteria [12] require the presence of a transient regional wall motional abnormality, which extends beyond a single epicardial vascular distribution, and the absence of obstructive coronary artery disease or evidence of acute plaque rupture. However, several recent reports of TTS in association with coronary artery dissection [13] and acute myocardial infarction [14, 15] have led to newly proposed diagnostic criteria which state that TTS may exist as a comorbidity with a variety of illnesses including acute coronary syndromes [16].To our knowledge, this is a unique case describing the association of acute myocardial infarction (AMI) caused by acute plaque rupture and TTS in a male patient. The lack of an emotional trigger in this case suggests a possible causal relationship between the two conditions. The stress associated with AMI could conceivably have resulted in sympathetic activation causing TTS in a distribution not perfused by the occluded OM2 artery. Alternatively, postischaemic myocardial stunning has been proposed as a possible trigger factor [17]. Conversely, AMI may be triggered by TTS [18]. A recent report described a case of thromboembolism from a left ventricular thrombus as a result of TTS causing AMI [19]. Alternatively, pain has been described as a trigger for TTS [20], and the chest pain from AMI could potentially have triggered sympathetic activation.Frangieh et al. found that T wave inversion on presentation was twice as common in TTS (45%) compared with AMI (22%) [21]. The lack of T wave inversion on presentation in this case followed by the development of deep T wave inversion across the precordial leads could suggest that AMI was the initial pathology followed by TTS. Furthermore, the QTc interval increased from 390 msec on the presentation to 527 msec when the T wave inversion evolved. A prolonged QT interval has also been associated with TTS rather than AMI [21, 22]. A shorter time to peak troponin (<6 hours) and smaller troponin rise have been found to be predictive of TTS [22], in contrast to our case where a large troponin rise was seen which peaked 24 hours after admission.This case adds to the growing body of literature suggesting TTS and coronary artery disease may not be mutually exclusive as once thought; however, more work is required to identify the nature of the link and whether TTS is the cause or consequence of AMI in those patients in whom it coexists. --- *Source: 1010243-2019-05-16.xml*
2019
# Rapamycin and FTY720 Alleviate Atherosclerosis by Cross Talk of Macrophage Polarization and Autophagy **Authors:** Rui-zhen Sun; Ying Fan; Xiao Liang; Tian-tian Gong; Qi Wang; Hui Liu; Zhi-yan Shan; Lei Lei **Journal:** BioMed Research International (2018) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2018/1010248 --- ## Abstract Foam cell formation and macrophage polarization are involved in the pathologic development of atherosclerosis, one of the most important human diseases affecting large and medium artery walls. This study was designed to assess the effects of rapamycin and FTY720 (fingolimod) on macrophages and foam cells. Mouse peritoneal macrophages were collected and treated with rapamycin and FTY720 to study autophagy, polarization, and lipid accumulation. Next, foam cells were formed by oxidizing low-density lipoprotein to observe changes in lipid accumulation, autophagy, and polarization in rapamycin-treated or FTY720-treated foam cells. Lastly, foam cells that had been treated with rapamycin and FTY720 were evaluated for sphingosine 1-phosphate receptor (S1prs) expression. Autophagy microtubule-associated protein 1 light chain 3- (LC3-) II was increased, and classically activated macrophage phenotype markers interleukin- (IL-) 6, cyclooxygenase-2 (COX2), and inducible nitric oxide synthase (iNOS) were increased, whereas alternatively activated macrophage phenotype markers transforming growth factor- (TGF-)β, arginase 1 (Arg1), and mannose receptor C-type 1 (Mrc1) were decreased by rapamycin in peritoneal macrophages. LC3-II was also obviously enhanced, though polarization markers were unchanged in rapamycin-treated foam cells. Moreover, lipid accumulation was inhibited in rapamycin-treated macrophage cells but was unchanged in rapamycin-treated foam cells. For FTY720, LC3-II did not change, whereas TGF-β, Arg1 and Mrc1 were augmented, and IL-6 was suppressed in macrophages. However, LC3-II was increased, and TGF-β, ARG1 and MRC1 were strikingly augmented, whereas IL-6, COX2 and iNOS could be suppressed in foam cells. Furthermore, lipid accumulation was alleviated in FTY720-treated foam cells. Additionally, S1pr1 was markedly decreased in foam cells (P < .05); S1pr2, S1pr3, S1pr4 and S1pr5 were unchanged in rapamycin-treated foam cells. In FTY720-treated foam cells, S1pr3 and S1pr4 were decreased, and S1pr1, S1pr2 and S1pr5 were unchanged. Therefore, we deduced that rapamycin stimulated classically activated macrophages and supressed early atherosclerosis. Rapamycin may also stabilize artery plaques by preventing apoptosis and S1PR1 in advanced atherosclerosis. FTY720 allowed transformation of foam cells into alternatively activated macrophages through the autophagy pathway to alleviate advanced atherosclerosis. --- ## Body ## 1. Introduction Atherosclerosis, one of the most harmful human diseases of large and medium artery walls, leads to acute myocardial infarction and sudden death [1]. It has been demonstrated that atherosclerosis involves lipid accumulation and inflammatory infiltration [1], and that macrophages play a crucial role in pathogenesis. During the initial phase of atherosclerosis development, circulating monocytes migrate into the arterial wall via dysfunctional endothelial cells and then differentiate into macrophages [2–4]. Next, macrophages engulf oxidized low-density lipoprotein (ox-LDL) to digest and transport lipids out of the vascular wall [5]. 
When overloaded with lipid droplets, macrophages will transform into foam cells that initiate plaque formation inside the blood vessels [6]. This inflammatory process appears to be a hallmark of atherosclerosis [7–9]. Thus, decreasing macrophage foam cell formation would be an attractive strategy for reversing atherosclerosis.Macrophage phenotype emerges in response to the microenvironment in a process referred to as macrophage activation or polarization [10]. Macrophages are either classically activated (M1) or alternatively activated (M2). M1 macrophages are activated by treatment with interferon-γ or lipopolysaccharide, whereas M2 macrophages are activated by treatment with Th2 cytokines interleukin- (IL-) 4 or IL-13; the M2 phenotype switch can be enhanced by IL-10. Early in the innate immune response, M1 macrophages produce reactive oxygen species and proinflammatory cytokines and chemokines to drive inflammation; thus, they are referred to as “killer” macrophages. During the resolution phase of inflammation, M2 macrophages scavenge debris and assist in angiogenesis and wound healing; thus, they are referred to as “healer” macrophages [11]. During atherosclerosis development, there is differential polarization of macrophages that results in differences in the number and distribution of polarization macrophages within the plaque. M1 and M2 macrophages link to produce atherosclerotic plaques, and the M2 macrophages can resist foam cell transformation [2]. Thus, selective removal of macrophages or altering polarization status within the plaque may have a role in alleviating atherosclerosis.2-Amino-2-[2-(4-octylphenyl)ethyl]propane-1,3-diol hydrochloride (FTY720), also known as fingolimod, is an immune-modulating drug used to treat multiple sclerosis and multiple organ transplantation. It is both a synthetic sphingosine 1-phosphate (S1P) analogue and an S1P receptor modulator [12]. The drug may serve as a functional antagonist or agonist, depending on the S1P receptor subtype and target cell or tissue. S1P induces M2 phenotype polarization via IL-4 to protect against atherosclerosis development [13]. Some studies have shown that FTY720 reduces atherosclerosis by suppressing monocyte/macrophage migration to atherosclerotic lesions [14]. Short-term, low-dose oral FTY720 has shown great benefit in inhibiting early development of atherosclerosis via induction of regulatory T-cells and inhibition of effector T-cell response in apolipoprotein E-deficient mice fed a high-cholesterol diet [15]. Moreover, FTY720 treatment of low-density lipoprotein receptor- (LDLR-) deficient mice fed a cholesterol-rich diet activates M2 phenotype marker IL-4 in peritoneal macrophages to reduce atherosclerotic lesion formation in a dose-dependent manner. Concentrations of proinflammatory cytokines such as tumor necrosis factor-α, IL-6, and IL-12 are also reduced [12]. However, FTY720 failed to affect atherosclerosis in moderately hypercholesterolemic LDLR-/- mice [16]. Thus, some important questions remain regarding how FTY720 affects macrophage function and whether FTY720 plays a role in alleviating atherosclerosis through interaction with foam cells.Autophagy is an evolutionarily conserved, physiologic, self-protective process. Autophagy is classically considered a pathway that contributes to cellular homeostasis and adaptation to stress [17]. Dysfunctional autophagy is associated with some human diseases. 
A limited number of clinical studies have shown that autophagy is impaired in the advanced stages of atherosclerosis and that its deficiency induces lethal accumulation of cholesterol crystals and promotes atherosclerosis [18–20]. Stents eluting the mTOR inhibitor, everolimus, selectively clear macrophages by autophagy in rabbit model atherosclerotic plaques to promote stability without affecting smooth muscle cell viability [21]. Several examinations have also shown that mice with Atg5 (an essential autophagy gene) macrophage-specific deletion develop plaques characterized by increased apoptosis, oxidative stress, and plaque necrosis [22]. These results suggest that macrophage autophagy plays an essential but complicated role in the pathogenesis of atherosclerosis. Moreover, in vivo experiments have shown that rapamycin, an mTOR inhibitor, reduces macrophage death rate and delays plaque progression through autophagy upregulation [23]. In vitro experiments show that rapamycin not only reduces intracellular lipid droplet accumulation, but also inhibits cell apoptosis by clearing dysfunctional mitochondria and lowering intracellular reactive oxygen species levels during foam cell development [23]. In that study, atherosclerosis development was characterized by macrophage autophagy inhibition and changes to the distribution and rate of macrophage polarization [23]. Therefore, we deduced that selective promotion of macrophage autophagy may reduce M1 macrophages, increase M2 macrophages, and alter macrophage foam cells to stabilize vulnerable atherosclerotic plaques.Additionally, FTY720 can induce autophagy in some cancer cells [24–26]. However, little is known about whether FTY720 mediates macrophage autophagy and polarization in atherosclerosis. One previous study showed that FTY720 stimulated production of 27-hydroxycholesterol, an endogenous ligand of the liver X receptor, to cause liver X receptor-induced upregulation of ATP-binding cassette, subfamily A member 1 (ABCA1). It also conferred atheroprotective effects independent of sphingosine 1-phosphate receptor (S1PR) activation in human primary macrophages [27]. Furthermore, FTY720 inhibited inflammatory factors in endothelial and vascular smooth muscle cells through S1PR1 and S1PR3 and inhibited secretion of monocyte chemotactic protein 1 (MCP-1) by S1PR3 [14]. In vivo and in vitro experiments indicate that S1P mediates S1PR3 to recruit monocytes/macrophages and change smooth muscle cells to protect them from atherosclerosis [28]. Although FTY720 has a role in lipid metabolism and macrophage migration to reduce atherosclerosis, the drug may act as a functional antagonist or agonist, depending on S1P receptor subtype and cell or tissue target. Consequently, this study was designed to assess the roles of rapamycin and FTY720 on autophagy and polarization of macrophage cells and foam cells and to explore the S1PR macrophage foam cells to identify new approaches for reducing atherosclerosis. ## 2. Materials and Methods ### 2.1. Animal B6D2F1 mice, aged 8 to 10 weeks, were purchased from Vital River (Beijing, China). Animals were treated according to the Guide for the Care and Use of Laboratory Animals. All animal experiments were performed under the Code of Practice, Harbin Medicine University Ethics Committees. ### 2.2. Peritoneal Macrophages and Culture Mouse macrophages were isolated from the peritoneal cavity of B6D2F1 mice 3 days after treatment with 3% thioglycolate (T9032-500G, Sigma-Aldrich). 
Macrophages were cultured at 37°C in a 5% CO2 humidified incubator. Isolated macrophages were maintained in RPMI 1640 (22400089, Invitrogen) containing 10% fetal bovine serum (FBS, 04-001-1A, BI) and 50 μg/mL penicillin/streptomycin (15140-148, Invitrogen) for 24 h. Next, the medium was changed, and the cells for RNA or protein analysis were removed 24 h after treatment with FTY720 (SML0700, Sigma-Aldrich) or rapamycin (ab120224, Abcam). ### 2.3. Immunofluorescence Staining Cells were fixed in 4% paraformaldehyde for 20 min at 4°C. Next, cells were rinsed with 0.25% Triton X-100 in phosphate-buffered saline (PBS) and incubated with 70% alcohol for 5 min before being incubated with blocking buffer for 30 min at room temperature. For F4/80 staining, cells were incubated with F4/80 (565612, BD) overnight at 4°C and then counterstained with Hoechst 33342 (14533-100MG, Sigma-Aldrich). Lastly, cells were mounted on glass slides and examined using a Nikon microscope. ### 2.4. Western Blot Cells were lysed using RIPA buffer (Sigma-Aldrich) containing Halt Protease and Phosphatase Inhibitor Cocktail (Thermo Fisher, Philadelphia, PA) for 20 min on ice. Samples were incubated at 70°C for 15 min in NuPAGE sample buffer (Life Technologies), and proteins were separated on NuPAGE 4% to 12% Bis-Tris gels before transfer to PVDF (Life Technologies) for immunoblotting. Several primary antibodies were used for detection: anti-LC3 (14600-1-AP, Proteintech), anti-TGF-β (ab66043, Abcam), anti-IL-6 (21865-1-AP, Proteintech), anti-COX2 (12375-1-AP, Proteintech), anti-Arg1 (16001-1-AP, Proteintech), and anti-glyceraldehyde-3-phosphate dehydrogenase (GAPDH, KC-5G4, KANGCHEN). All secondary antibodies used for visualization were either goat anti-mouse or goat anti-rabbit and were purchased from Abcam. Blots were developed with the SuperSignal West Pico Chemiluminescent Substrate or SuperSignal West Femto Maximum Sensitivity Substrate Kit (Thermo Fisher) and visualized by the ImageQuant LAS 4000 biomolecular imager (GE Healthcare Life Sciences, Pittsburgh, PA). Densitometry analysis was completed with the help of ImageJ software, which allows for quantification of band intensity. A rectangle was placed on each band, and the band intensity and background intensity were analyzed. Quantification was determined by subtracting band intensity from background intensity. Protein expression was corrected with a loading control such as GAPDH by dividing the protein densitometry value. All western blot data is presented as protein densitometry/control protein densitometry. ### 2.5. Foam Cell Formation Copper ox-LDL (2 mg/mL) was purchased from Peking Union-Biology Co. Ltd (China). Mouse peritoneal macrophages were then seeded in a 12-well or 6-well cell culture (Corning) in RPMI 1640 media containing 10% fetal bovine serum (FBS) and allowed to adhere overnight. The next day, cells were treated with ox-LDL (150μg/mL) for 48 h. ### 2.6. Oil Red O Staining Oil Red O stock solution (0.5%) in 100% isopropanol was diluted to 60% (vol/vol) isopropanol using distilled water. The solution was then filtered to remove particulates. After incubation with ox-LDL, cells were rinsed twice with PBS and fixed with 4% paraformaldehyde in PBS for 15 min at room temperature. Next, cells were rinsed twice with PBS and stained with a filtered Oil Red O solution for 1 h at room temperature. They were rinsed with distilled water and mounted using aqueous mounting media. ### 2.7. 
Real-Time Polymerase Chain Reaction RNA samples from differentially treated macrophages were extracted using TRIzol Reagent (15596026, Invitrogen). cDNA was synthesized from mRNA using the High Capacity cDNA Reverse Transcription Kit (AT341-02, TransGen Biotech). Real-time PCR was performed using 1 μL of cDNA, 10 μL TransStart Top Green real-time PCR SuperMix (AQ131, TransGen Biotech), and gene-specific primers in a 20 μL reaction system on the CFX96 Real-Time System (Bio-Rad). Specific primers were obtained from a primer bank for mouse Arg1 (forward primer, 5′-CTCCAAGCCAAAGTCCTTAGAG; reverse primer, 5′-AGGAGCTGTCATTAGGGACATC), Mrc1 (forward primer, 5′-CTCTGTTCAGCTATTGGACGC; reverse primer, 5′-GGAATTTCTGGGATTCAGCTTC), iNOS (forward primer, 5′-GTTCTCAGCCCAACAATACAAGA; reverse primer, 5′-GTGGACGGGTCGATGTCAC), IL-6 (forward primer, 5′-CCAAGAGGTGAGTGCTTCCC; reverse primer, 5′-CTGTTGTTCAGACTCTCTCCCT), S1PR1 (forward primer, 5′-ATGGTGTCCACTAGCATCCC; reverse primer, 5′-CGATGTTCAACTTGCCTGTGTAG), S1PR2 (forward primer, 5′-ATGGGCGGCTTATACTCAGAG; reverse primer, 5′-GCGCAGCACAAGATGATGAT), S1PR3 (forward primer, 5′-TTTCATCGGCAACTTGGCTCT; reverse primer, 5′-GCTACGAACATACTGCCCTCC), S1PR4 (forward primer, 5′-GTCAGGGACTCGTACCTTCCA; reverse primer, 5′-GATGCAGCCATACACACGG), and S1PR5 (forward primer, GCTTTGGTTTGCGCGTGAG; reverse primer, 5′-GGCGTCCTAAGCAGTTCCAG). GAPDH was used as an internal control, and each sample was run in triplicate. The 2^-ΔΔCt method was used to analyze qPCR gene expression data. Significance was assessed using a two-tailed Student's t-test to compare the levels of differentially expressed genes between groups (P < .05). ### 2.8. Statistical Analysis Data were expressed as means ± standard deviations and were tested for normality with SPSS Statistics (P > .05). ANOVA was then performed on the data using SPSS Statistics. A value of P < .05 was considered statistically significant, and a value of P < .01 was considered highly significant.
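To make the relative-quantification step in Section 2.7 concrete, the following is a minimal sketch (not the authors' script) of the 2^-ΔΔCt calculation followed by the two-tailed t-test; the Ct values are synthetic placeholders used for illustration only.

```python
# 2^-ΔΔCt relative quantification with GAPDH as the internal control,
# then a two-tailed Student t-test on the ΔCt values between groups.
# Ct values are synthetic placeholders, NOT measured data.
import numpy as np
from scipy import stats

ct_target_treated = np.array([24.1, 24.3, 24.0])   # e.g., Arg1, treated, triplicate
ct_gapdh_treated  = np.array([17.2, 17.1, 17.3])
ct_target_control = np.array([25.6, 25.4, 25.7])   # e.g., Arg1, untreated control
ct_gapdh_control  = np.array([17.0, 17.2, 17.1])

# ΔCt: normalise each sample to its internal control (GAPDH)
dct_treated = ct_target_treated - ct_gapdh_treated
dct_control = ct_target_control - ct_gapdh_control

# ΔΔCt: compare treated samples with the mean ΔCt of the control group
ddct = dct_treated - dct_control.mean()

# Relative expression (fold change) via 2^-ΔΔCt
fold_change = 2.0 ** (-ddct)
print("mean fold change (treated vs control):", fold_change.mean())

# Two-tailed Student t-test on the ΔCt values between groups
t_stat, p_val = stats.ttest_ind(dct_treated, dct_control)
print("p =", p_val)
```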
## 3. Results ### 3.1. Rapamycin Induces M1 Polarization by Increasing Macrophage Autophagy Peritoneal macrophages were isolated as described previously. Immunofluorescence analysis showed a positivity rate of 87.6% for F4/80 (Figure 1(a)). To investigate the effect of autophagy on macrophages, expression of the autophagic markers LC3-I and LC3-II was analyzed. Cells were cultured with or without the autophagy activator rapamycin for 24 h. After treatment, cells were harvested, and proteins were collected for western blot analysis. As shown in Figure 1(b), rapamycin increased expression of LC3-II at different doses. Furthermore, real-time PCR showed that IL-6 and iNOS (M1 markers) were increased, whereas Arg1 and Mrc1 (M2 markers) were decreased (Figure 1(c)). Moreover, western blot analysis demonstrated that IL-6 and COX2 (M1 markers) were higher in rapamycin-treated than in untreated macrophages, whereas expression of TGF-β and ARG1 (M2 markers) was lower (Figures 1(d) and 1(e)). This finding suggested that autophagy could induce M1 polarization. Figure 1 Rapamycin promoted autophagy in peritoneal macrophages and activated the M1 phenotype. (a) Peritoneal macrophages subjected to immunofluorescence staining with the macrophage marker F4/80 antibody; (b) western blot showing LC3 expression in various rapamycin- (Rap-) treated groups, with protein levels relative to GAPDH shown as histograms; (c) real-time PCR showing expression of Arg1, Mrc1, IL-6, and iNOS in various rapamycin-treated groups; (d) western blot showing expression of IL-6, TGF-β, COX2, and ARG1 in various rapamycin- (Rap-) treated groups; (e) protein levels relative to GAPDH shown as histograms; ∗P < .05 and ∗∗P < .01 vs. untreated controls. Results are representative of 3 independent experiments; each experiment was repeated 3 times.
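For reference, the band quantification behind these histograms (the ImageJ densitometry procedure described in Section 2.4: background-corrected band intensity divided by the corresponding background-corrected GAPDH intensity) reduces to the arithmetic sketched below; the intensity values are placeholders, not measurements taken from the figures.

```python
# Sketch of the densitometry normalization used for the western blot
# histograms: background-corrected band intensity divided by the
# background-corrected GAPDH intensity from the same lane.
# Intensity values are illustrative placeholders, not data from this study.
def normalized_density(band, band_background, gapdh, gapdh_background):
    """Return band density relative to the GAPDH loading control."""
    corrected_band = band - band_background
    corrected_gapdh = gapdh - gapdh_background
    return corrected_band / corrected_gapdh

# Example: an LC3-II band in an untreated lane vs. a rapamycin-treated lane.
untreated = normalized_density(band=5200, band_background=800,
                               gapdh=9100, gapdh_background=850)
treated = normalized_density(band=11800, band_background=820,
                             gapdh=9050, gapdh_background=840)
print(f"LC3-II/GAPDH: untreated {untreated:.2f}, rapamycin-treated {treated:.2f}")
print(f"Fold change vs. untreated: {treated / untreated:.2f}")
```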
### 3.2. Rapamycin Does Not Affect Foam Cell Polarization Macrophage foam cells indicate advanced-stage atherosclerosis. To generate foam cells, macrophages were treated with ox-LDL for 48 h. Oil Red O staining showed that ox-LDL increased lipid accumulation and that this accumulation was decreased by rapamycin treatment (Figures 2(a) and 2(b)). To determine whether autophagy affected polarization of foam cells, rapamycin was used at various doses. First, western blot analysis demonstrated that the autophagy marker LC3-II was elevated in the presence of rapamycin. Next, real-time PCR showed that rapamycin had almost no effect on foam cell polarization (Figure 2(c)). Western blot analysis verified that TGF-β, ARG1, and COX2 expression were unchanged in rapamycin-treated foam cells; IL-6 increased slightly in foam cells treated with 1 μM rapamycin (Figures 2(d) and 2(e)). However, lipid accumulation did not differ between rapamycin-treated foam cells and ox-LDL-treated foam cells (Figures 2(a) and 2(b)). Thus, autophagy appeared to have no effect on foam cell polarization. Figure 2 Rapamycin had no effect on foam cell polarization. (a) Oil Red O staining demonstrated that rapamycin inhibited macrophage transformation to foam cells but did not affect established foam cells; (b) percentage of foam cells with high staining intensity from 3 experimental iterations: 6 views were analyzed from 2 randomly selected staining wells; ∗P < .05 and ∗∗P < .01 vs. ox-LDL; (c) real-time PCR showing Arg1, Mrc1, IL-6, and iNOS expression for various rapamycin-treated groups; (d) western blot showing expression of LC3, IL-6, COX2, TGF-β, and ARG1 in foam cells treated with various concentrations of rapamycin; (e) results representative of 3 independent experiments, each experiment repeated 3 times; protein levels relative to GAPDH are shown as histograms. ∗P < .05 and ∗∗P < .01 vs. untreated controls; MΦ, macrophage; bar, 100 μm. ### 3.3. FTY720 Reduces Lipid Accumulation by M2 Polarization of Foam Cells A previous study reported that M2 polarization by S1P was responsible for the antiatherogenic properties of high-density lipoproteins in vivo [13]. FTY720 is not only an S1P analogue but also an S1P receptor modulator with conflicting roles in lipid metabolism and macrophage migration in atherosclerotic disease. To test the effect of FTY720 on polarization, FTY720-treated macrophages and foam cells were used. Real-time PCR showed that high-dose FTY720 treatment of macrophages increased Arg1 and Mrc1, reduced IL-6, and did not change iNOS. In FTY720-treated foam cells, by contrast, Arg1 and Mrc1 were increased, whereas IL-6 and iNOS were both decreased (Figure 3(a)).
This finding suggests that FTY720 played a key role in M2 polarization to alleviate advanced atherosclerosis; autophagy may also promote M2 polarization. Figure 3 FTY720 reduces lipid accumulation by M2 polarization of foam cells. (a) Real-time PCR showing Arg1, Mrc1, IL-6, and iNOS expression in macrophages and foam cells treated with various concentrations of FTY720. (b) Western blot showing LC3, ARG1, COX2, IL-6, and TGF-β in macrophages and foam cells treated with various concentrations of FTY720. (c) Results are representative of 3 independent experiments, each experiment repeated 3 times; protein levels relative to GAPDH are shown as histograms; ∗P < .05 vs. untreated controls. (d) Oil Red O staining demonstrating that FTY720 reduced lipid accumulation in macrophage foam cells; bar, 100 μm. (e) Percentage of foam cells with high staining intensity from 3 experimental iterations: 6 views were analyzed from 2 randomly selected staining wells; ∗P < .05 and ∗∗P < .01 vs. ox-LDL. ### 3.4. S1PR Involvement in FTY720 and Rapamycin Regulation of Foam Cells Previous research shows that S1P receptors are involved in atherosclerosis, possibly through endoplasmic reticulum (ER) stress rather than PI3K/beclin1 or mTOR signaling [29]. In this study, S1PR expression was examined in rapamycin- and FTY720-treated foam cells. Real-time PCR showed that S1pr1 expression was lower in rapamycin-treated than in untreated foam cells (P < .05; Figure 4(a)), whereas FTY720 inhibited expression of S1pr3 and S1pr4 in foam cells (Figure 4(a)). Figure 4 S1PR involvement in FTY720- and rapamycin-regulated foam cells; (a) real-time PCR analysis of S1pr1-5 expression in foam cells; (b) western blot showing the apoptosis marker cleaved-CASPASE 3 and total CASPASE 3; (c) cleaved-CASPASE 3 levels relative to total CASPASE 3 shown as histograms. ∗P < .05 vs. untreated controls. Results shown are representative of 3 independent experiments; each experiment was repeated 3 times. Macrophage apoptosis in atherosclerotic plaques may increase plaque destabilization. Therefore, the next investigation aimed to determine whether FTY720-mediated M2 polarization caused macrophages to resist apoptosis. Expression of cleaved-CASPASE 3 was slightly decreased in foam cells treated with 5 μM FTY720 but was unchanged in rapamycin-treated foam cells (Figures 4(b) and 4(c)). These results support the antiapoptotic and atheroprotective effects of FTY720.
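As a final illustration, the Oil Red O quantification reported in the legends of Figures 2 and 3 (the percentage of strongly stained foam cells, averaged over 6 views taken from 2 randomly selected wells in each of 3 experimental iterations) reduces to simple counting and averaging; the cell counts in this sketch are placeholders rather than data from the study.

```python
# Sketch of the foam cell quantification described in the figure legends:
# percentage of Oil Red O high-intensity (foam) cells per view, averaged per
# iteration and reported as mean ± SD across 3 iterations.
# Counts are illustrative placeholders, not data from this study.
import numpy as np

# For each of 3 iterations: (foam cells, total cells) in each of 6 views,
# i.e., 3 views from each of 2 randomly selected wells.
iterations = [
    [(42, 120), (38, 110), (45, 130), (40, 115), (37, 105), (44, 125)],
    [(30, 118), (28, 109), (33, 127), (31, 114), (27, 102), (32, 121)],
    [(35, 122), (33, 111), (36, 129), (34, 116), (30, 107), (37, 124)],
]

per_iteration = [
    np.mean([100.0 * foam / total for foam, total in views])
    for views in iterations
]
mean_pct = np.mean(per_iteration)
sd_pct = np.std(per_iteration, ddof=1)  # sample SD across the 3 iterations
print(f"Foam cells with high Oil Red O intensity: {mean_pct:.1f}% ± {sd_pct:.1f}%")
```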
## 4. Discussion Atherosclerosis is characterized by persistent inflammation of the arterial wall.
Macrophages, the most abundant immune cells in atherosclerotic plaques, can be transformed into foam cells by oxidized low-density lipoprotein (ox-LDL), which accelerates atherosclerosis [1, 6–9]. Thus, macrophages and foam cells are crucial at different stages of atherosclerotic disease and, as such, are attractive targets for therapy. Moreover, macrophages are functionally complex, adopting multiple polarization states depending on the composition of their microenvironment. Previous reports show that defective phagocytic clearance by macrophages promotes plaque necrosis and that inhibition of autophagy, by silencing ATG5 or other autophagy mediators, impairs protective processes in atherosclerosis [22]. Furthermore, mTOR mediates cross talk between macrophage polarization and autophagy in atherosclerosis [30]. However, the polarization and autophagy of foam cells in the pathogenesis of atherosclerosis remain largely unexplained, and understanding the cross talk between autophagy and polarization regulation is crucial to understanding pathogenesis. In our studies, we found that macrophage and foam cell autophagy produce different effects. Rapamycin is a recognized autophagy activator through its inhibition of mTOR signaling in various cells. By measuring LC3-II expression, we first verified the increased autophagy induced by rapamycin in both macrophages and foam cells. Furthermore, our finding that rapamycin induced M1 polarization by increasing IL-6, iNOS, and COX2 expression while decreasing TGF-β, Arg1, and Mrc1 expression in macrophages is consistent with a previous report [31]. However, we found no effect on foam cell polarization. Moreover, Oil Red O staining showed decreased lipid accumulation in rapamycin-treated macrophages but not in rapamycin-treated foam cells. This finding suggests that M1 macrophages may inhibit foam cell formation. M1 macrophages have previously been reported to promote inflammation through direct targeting of the NF-κB pathway [32]. Atherosclerosis is a chronic immunoinflammatory disease [1]. Here, we found that inhibiting mTOR signaling, a key regulator in atherosclerosis, enhanced autophagy and promoted M1 polarization of macrophages. Thus, mTOR depletion stimulates M1 macrophages and suppresses early atherosclerosis. In this study, rapamycin also enhanced autophagy by inhibiting mTOR in foam cells, though it did not affect their polarization or alter their lipid load. In vivo results showed that rapamycin attenuated the macrophage death rate and delayed plaque progression through autophagy upregulation [23, 33, 34]. Wang and colleagues previously showed that cholesterol efflux was increased by autophagy in macrophage foam cells [35]. Hence, rapamycin-induced autophagy in foam cells may delay intracellular lipid accumulation and stabilize artery plaques through prevention of apoptosis and suppression of S1PR1 in advanced atherosclerosis. FTY720, a novel S1P analogue [12] and a potent immunosuppressive drug, acts as a sphingosine kinase (SphK) 1 antagonist, growth inhibitor, and apoptosis inducer in various human cancer cell lines. In this study, FTY720 did not affect autophagy in macrophages, but its suppression of the proinflammatory factor IL-6 in macrophages was consistent with a previous study [36]. In animal models, FTY720 reduced atherosclerosis by suppressing monocyte/macrophage migration, inducing a regulatory T-cell response, and inhibiting effector T-cell responses [14, 15].
Moreover, the M1 phenotype marker IL-6 was inhibited, whereas the M2 phenotype marker IL-4 was increased, in peritoneal macrophages from FTY720-treated LDLR-deficient mice fed a cholesterol-rich diet [12]. In the present study, FTY720 also inhibited the M1 phenotype markers IL-6, COX2, and iNOS, promoted the M2 markers TGF-β, Arg1, and Mrc1 in macrophage foam cells, and reduced lipid accumulation in these cells, as shown by Oil Red O staining. LC3-II was notably increased in FTY720-treated foam cells but not in FTY720-treated macrophages; FTY720 has been shown in several cancer studies to induce the autophagy pathway [24–26, 37–39]. Consequently, we deduced that FTY720 affects macrophage function and polarization through suppression of the M1 phenotype and activation of the M2 phenotype via the autophagy pathway, thereby alleviating advanced atherosclerosis. Additionally, the finding that FTY720 inhibited S1PR3 and S1PR4 in macrophage foam cells is similar to previous reports that FTY720 could suppress S1PR1 and S1PR3 to decrease inflammatory factors and alleviate atherosclerosis [14, 28]. It is possible that FTY720 suppresses S1PR3 and S1PR4 to reduce lipid accumulation. ## 5. Conclusion In summary, rapamycin induced autophagy, promoted the M1 phenotype, suppressed the M2 phenotype, and inhibited foam cell formation in peritoneal macrophages. Rapamycin also activated autophagy in peritoneal macrophage foam cells but did not affect their polarization or reduce their lipid accumulation. Thus, mTOR depletion in macrophages stimulated M1 polarization and suppressed early atherosclerosis. FTY720 promoted the M2 phenotype and suppressed the M1 phenotype in both peritoneal macrophages and foam cells, but it activated autophagy and alleviated lipid accumulation only in foam cells. FTY720 transformation of foam cells toward the M2 phenotype occurred through the autophagy pathway and alleviated advanced atherosclerosis. --- *Source: 1010248-2018-12-06.xml*
Moreover, the M1 phenotype marker IL-6 was inhibited, whereas the M2 phenotype marker IL-4 was increased in peritoneal macrophage cells from FTY720-treated LDLR-deficient mice fed a cholesterol-rich diet [12]. In the present study, FTY720 also inhibited M1 phenotype markers IL-6, COX2, and iNOS, promoted M2 markers TGF-β, Arg1, and Mrc1 in macrophage foam cells, and reduced lipid accumulation in macrophage foam cells, as shown by Oil Red O staining. LC3-II was notably increased in FTY720-treated foam cells compared with FTY720-treated macrophages, consistent with cancer studies showing that FTY720 can induce the autophagy pathway [24–26, 37–39]. Consequently, we deduced that FTY720 affected macrophage function and polarization through suppression of the M1 phenotype and activation of the M2 phenotype via the autophagy pathway to alleviate advanced atherosclerosis. Additionally, the finding that FTY720 inhibited S1PR3 and S1PR4 in macrophage foam cells was similar to previous reports that FTY720 could suppress S1PR1 and S1PR3 to decrease inflammatory factors and alleviate atherosclerosis [14, 28]. It is possible that FTY720 suppresses S1PR3 and S1PR4 to reduce lipid accumulation.

## 5. Conclusion

In summary, rapamycin induced autophagy, promoted the M1 phenotype, suppressed the M2 phenotype, and inhibited foam cell formation in peritoneal macrophages. Additionally, rapamycin activated autophagy in peritoneal macrophage foam cells but did not affect macrophage polarization or reduce lipid accumulation. Thus, mTOR depletion in macrophages stimulated the M1 phenotype and suppressed early atherosclerosis. FTY720 could promote the M2 phenotype and suppress the M1 phenotype in peritoneal macrophages and foam cells, but it only activated autophagy in peritoneal macrophage foam cells and alleviated lipid accumulation. FTY720 transformation of foam cells into the M2 phenotype occurred through the autophagy pathway and alleviated advanced atherosclerosis. --- *Source: 1010248-2018-12-06.xml*
2018
# Immunology and Cell Biology of Parasitic Diseases 2014 **Authors:** Luis I. Terrazas; Abhay R. Satoskar; Miriam Rodriguez-Sosa; Abraham Landa-Piedra **Journal:** BioMed Research International (2015) **Publisher:** Hindawi Publishing Corporation **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2015/101025 --- ## Body --- *Source: 101025-2015-05-20.xml*
2015
# Mitochondrial Dysfunction and Diabetic Nephropathy: Nontraditional Therapeutic Opportunities **Authors:** Ping Na Zhang; Meng Qi Zhou; Jing Guo; Hui Juan Zheng; Jingyi Tang; Chao Zhang; Yu Ning Liu; Wei Jing Liu; Yao Xian Wang **Journal:** Journal of Diabetes Research (2021) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2021/1010268 --- ## Abstract Diabetic nephropathy (DN) is a progressive microvascular diabetic complication. Growing evidence shows that persistent mitochondrial dysfunction contributes to the progression of renal diseases, including DN, as it alters mitochondrial homeostasis and, in turn, affects normal kidney function. Pharmacological regulation of mitochondrial networking is a promising therapeutic strategy for preventing and restoring renal function in DN. In this review, we have surveyed recent advances in elucidating the mitochondrial networking and signaling pathways in physiological and pathological contexts. Additionally, we have considered the contributions of nontraditional therapy that ameliorate mitochondrial dysfunction and discussed their molecular mechanism, highlighting the potential value of nontraditional therapies, such as herbal medicine and lifestyle interventions, in therapeutic interventions for DN. The generation of new insights using mitochondrial networking will facilitate further investigations on nontraditional therapies for DN. --- ## Body ## 1. Introduction Diabetic nephropathy (DN) is a chronic disease that is caused by diabetes and is characterized by microangiopathy and alterations in kidney structure and function. It not only causes end-stage renal disease (ESRD) but also significantly increases the incidence and mortality rate of cardiovascular and cerebrovascular diseases [1]. With the rapid increase in the incidence of diabetes, the number of cases of DN worldwide has increased rapidly. In 2019, the International Diabetes Federation reported that approximately 463 million individuals were diagnosed with diabetes, and its incidence is expected to reach 700 million by 2045. In addition, approximately 30%–40% of these individuals are expected to develop DN [2]. However, current therapies delay rather than prevent the progression of ESRD, necessitating the search for new therapeutic targets to ameliorate the poor prognosis of DN. Current studies suggest that irregularities in key pathways and cellular components promote renal dysfunction and lead to DN. These include enhanced glucose metabolite flux, more glycation end (AGE) products, endoplasmic reticulum stress, mitochondrial dysfunction, abnormally active renin angiotensin system, and oxidative stress [3–6], with mitochondrial dysfunction playing a key role in the occurrence and pathogenesis of DN [7]. Various studies have emphasized the impact of nontraditional treatments, such as herbal medicine, nutrition, exercise, and surgical treatment, on the prevention and delayed progression of DN. Nontraditional therapy is considered a well-proven strategy which robustly improves health in most organisms. Randomized controlled clinical trials have shown that herbal medicines are efficacious and safe [8, 9]. In terms of experimental research, studies provided evidence for the efficacy of nontraditional therapies from the perspectives of ameliorating mitochondrial dysfunction. This provides a rationale for further exploration of the effect of nontraditional approaches on DN at the molecular level. 
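Taking the cited figures at face value, the expected scale of the DN population can be estimated with simple arithmetic: 30%–40% of the 463 million people with diabetes in 2019, and of the 700 million projected for 2045. The snippet below is purely illustrative back-of-the-envelope arithmetic based on those cited numbers.

```python
# Illustrative projection of DN cases from the prevalence figures cited above.
diabetes_2019 = 463e6        # diagnosed diabetes cases, 2019 (IDF figure cited above)
diabetes_2045 = 700e6        # projected diabetes cases, 2045 (IDF figure cited above)
dn_fraction = (0.30, 0.40)   # reported share of diabetic patients expected to develop DN

for year, cases in (("2019", diabetes_2019), ("2045", diabetes_2045)):
    low, high = (cases * f for f in dn_fraction)
    print(f"{year}: roughly {low / 1e6:.0f}-{high / 1e6:.0f} million expected DN cases")
# 2019: roughly 139-185 million; 2045: roughly 210-280 million
```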
Mitochondria are important for renal cell survival, as these serve as metabolic energy producers and regulate programmed cell death. The structure and function of mitochondria are regulated by a mitochondrial quality control (MQC) system, which is a series of processes that include mitochondrial biogenesis, mitochondrial proteostasis, mitochondrial dynamics/mitophagy, and mitochondria-mediated cell death. In this review, we have outlined the physiological role of mitochondria in renal function, discussed the role of mitochondrial dysfunction in the occurrence and development of DN, and emphasized the therapeutic effect of nontraditional treatments, particularly herbal medicine (Table 1) and lifestyle interventions, on DN by targeting mitochondrial networking.

Table 1 Mitochondria-targeted herb medicine in DN.

| Category | Herb medicine | Form of herb medicine | Experimental model | Target | Pathway | Observed effect | Ref. |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Mitochondrial biogenesis | Berberine | Pure chemical | Patients with DN, db/db diabetic mice | PGC-1α↑, FAO↑, AMPK↑ | PGC-1α signaling pathway | Restoration of PGC-1α activity and the energy homeostasis | [10] |
| Mitochondrial biogenesis | Tangshen formula | Extract | db/db diabetic mice, mTECs | PGC-1α↑, LXR↑, ABCA1↑ | PGC-1α-LXR-ABCA1 pathway | Improving cholesterol efflux | [11] |
| Mitochondrial biogenesis | Salidroside | Pure chemical | db/db diabetic mice | SIRT1↑, PGC-1α↑ | SIRT1/PGC-1α axis | Improving mitochondrial biogenesis | [12] |
| Mitochondrial biogenesis | Resveratrol | Pure chemical | db/db diabetic mice, HGECs | AdipoR1↑, AdipoR2↑, AMPK↑, SIRT1↑, PGC-1α↑, PPARα↑ | AMPK–SIRT1–PGC-1α axis | Ameliorating lipotoxicity, oxidative stress, apoptosis, and endothelial dysfunction | [13] |
| Mitochondrial biogenesis | Resveratrol | Pure chemical | db/db diabetic mice | AMPK↑, SIRT1↑, PGC-1α↑, PPARα↑ | AMPK–SIRT1–PGC-1α axis | Prevention of lipotoxicity-related apoptosis and oxidative stress | [14] |
| Mitochondrial biogenesis | Resveratrol | Pure chemical | STZ-induced diabetic rats, podocytes | SIRT1↑, PGC-1α↑, ROS↓ | SIRT1/PGC-1α axis | Inhibition of mitochondrial oxidative stress and apoptosis | [15] |
| Mitochondrial biogenesis | Resveratrol | Pure chemical | DN rabbits with AKI, HK-2 cells | SIRT1↑, PGC-1α↑, HIF-1α↓ | SIRT1–PGC-1α–HIF-1α signaling pathways | Reducing renal hypoxia, mitochondrial dysfunction and renal tubular cell apoptosis | [16] |
| Mitochondrial biogenesis | Marein | Extract | db/db diabetic mice, HK-2 cells | SGLT2↓, SREBP-1↓, AMPK↑, PGC-1α↑ | AMPK/ACC/PGC-1α pathway | Amelioration of fibrosis and inflammation | [17] |
| Mitochondrial dynamics | Berberine | Pure chemical | db/db diabetic mice, podocytes | DRP1↓, MFF↓, FIS1↓, MID49↓, MID51↓, PGC-1α↑ | DRP1 modulator | Inhibiting mitochondrial fission and cell apoptosis | [18] |
| Mitochondrial dynamics | Astragaloside IV | Pure chemical | db/db diabetic mice | Drp1↓, MFF↓, Fis1↓ | Mitochondrial quality control network | Amelioration of renal injury | [19] |
| Mitochondrial dynamics | Polydatin | Pure chemical | KKAy mice, hyperglycemia-induced MPC5 cells | DRP1↓, ROS↓, caspase-3↓, caspase-9↓ | ROS/DRP1/mitochondrial fission/apoptosis pathway | Impairing mitochondria fitness and ameliorating podocyte injury | [20] |
| Mitophagy | Astragaloside II | Pure chemical | STZ-induced diabetic rats | NRF2↑, Keap1↓, PINK1↑, Parkin↑ | NRF2 and PINK1 pathway | Amelioration of podocyte injury and mitochondrial dysfunction | [21] |
| Mitophagy | Huangqi-Danshen decoction | Extract | db/db diabetic mice | DRP-1↓, PINK1↑, Parkin↑ | PINK1/Parkin pathway | Protection against kidney injury by inhibiting PINK1/Parkin-mediated mitophagy | [22] |
| Mitochondria ROS | Nepeta angustifolia C. Y. Wu | Extract | HFD/STZ-induced diabetic rats, mesangial cells | SOD↑, ROS↓, MDA↓ | Mitochondrial-caspase apoptosis pathway | Antioxidative stress, inflammation and inhibiting mesangial cell apoptosis | [23] |
| Mitochondria ROS | Resveratrol | Pure chemical | db/db diabetic mice | ROS↓, AMPK↑, SIRT1 | AMPK/SIRT1-independent pathway | Antioxidative stress and enhanced mitochondrial biogenesis | [24] |
| Mitochondria ROS | Betulinic acid | Pure chemical | STZ-induced diabetic rats | SOD↑, CAT↑, MDA↓, AMPK, NF-κB↓, NRF2↑ | AMPK/NF-κB/NRF2 signaling pathway | Attenuating the oxidative stress and inflammatory condition | [25] |
| Mitochondria ROS | Obacunone | Pure chemical | NRK-52E cells | SOD↑, GSK-3β↓, NRF2↑ | GSK-3β/Fyn pathway | Inhibiting oxidative stress and mitochondrial dysfunction | [26] |
| Mitochondria ROS | Curcumin | Pure chemical | STZ-induced diabetic rats | NRF2↑, FOXO-3a↑, PKCβII↓, NF-κB↓ | PKCβII/p66Shc axis | Antioxidative stress | [27] |
| Mitochondria ROS | Notoginsenoside R1 | Pure chemical | db/db diabetic mice, HK-2 cells | ROS↓, NRF2↑, HO-1↑ | NRF2 pathway | Inhibition of apoptosis and renal fibrosis caused by oxidative stress | [28] |
| Mitochondria ROS | Oleanolic acid and N-acetylcysteine | Pure chemical | Type 2 diabetic rat model, mesangial cells | ROS↓, NRF2↑, TGF-β/smad2/3↓, α-SMA↓ | NRF2/Keap1 system | Inhibition of oxidative stress and ER stress | [29] |

## 2. Critical Mediator of DN: Mitochondrial Dysfunction

The kidney, a highly metabolic organ rich in mitochondria, requires a large amount of ATP for its normal function [30]. The kidney possesses the second highest oxygen consumption and mitochondrial content following the heart [30, 31]. Mitochondrial energetics are altered in DN due to hyperglycemia, which induces changes in the electron transport chain (ETC) which cause an increase in reactive oxygen species (ROS) and a decrease in ATP production. This leads to increased mitochondrial division, decreased PGC1α levels, changes in mitochondrial morphology, increased cell apoptosis, and further aggravation of the condition [32–34] (Figure 1).

Figure 1 Hyperglycemia serves as the primary factor that influences mitochondrial dysfunction in DN. The increased level of glucose enhances glycolysis, and the subsequent activation of the TXNIP, AGE, and PKC pathways reinforces the decrease in ATP levels. Insufficient ATP levels stimulate the ETC to overwork in response to the energy supply for the kidneys. In turn, excessive ROS production occurs following the overactivation of the ETC, which results in decreased ATP production, mutation of mtDNA, abnormal opening of the mitochondrial permeability transition pore, and ultimately mitochondrial fragmentation and swelling. Decreases in the levels of OPA1, MFN1, and MFN2 may contribute to the decrease in mitochondrial fusion observed in DN. Activation of DRP1 promotes mitochondrial fragmentation and fission. Damaged mitochondria are cleared by mitophagy. However, an excess number of damaged mitochondria that is higher than the rate of mitophagy may result in cell death. Abbreviations: DN: diabetic nephropathy; DRP1: dynamin 1-like protein; PGC-1α: peroxisome proliferator-activated receptor γ coactivator 1α; AMPK: 5′-AMP-activated protein kinase; SIRT1: sirtuin-1; PINK1: putative kinase protein 1; Cyt c: cytochrome c; ROS: reactive oxygen species; MFN1 and 2: mitofusin proteins 1 and 2; OPA1: optic atrophy protein 1; MFF: mitofission proteins; FIS1: mitochondrial fission 1; PPAR: peroxisome proliferator-activated receptor; Parkin: E3 ubiquitin-protein ligase parkin; ER: endoplasmic reticulum; TXNIP: thioredoxin-interacting protein; AGE: advanced glycation end; PKC: protein kinase C; ETC: electron transport chain.

### 2.1.
Mitochondria: The “Energy Station” for the Kidney In general, the mechanism of ATP production in kidney cells is determined by the cell type. For example, proximal tubules in the renal cortex are dependent on oxidative phosphorylation for ATP production to fuel active glucose, nutrient, and ion transport [35]. However, glomerular cells such as podocytes and mesangial cells are largely utilized for filtering blood, removal of small molecules (e.g., glucose, urea, salt, and water), and retaining large proteins, including hemoglobin [36]. This passive process does not require direct ATP. Therefore, glomerular cells can perform aerobic and anaerobic respiration to produce ATP for basic cellular processes [37–40]. ATP is produced through the respiratory chain, which includes five multienzyme protein complexes embedded in the inner mitochondrial membrane [19], including complex I: NADH CoQ reductase, complex II: succinate-CoQ reductase, complex III: reduced CoQ-cytochrome c reductase, complex IV: cytochrome c oxidase, and complex V: ATP synthase. One palmitate molecule produces 106 ATP molecules, whereas glucose oxidation yields only 36 ATP molecules [41, 42]. Due to the higher energy requirements of the proximal tubules, they use nonesterified fatty acids, such as palmitate, to maximize the production of ATP through β-oxidation. In the diabetic state, there is a large amount of substrate in the form of glucose, which provides fuel for the citric acid cycle and produces more NADH and FADH2. However, during the electron transfer process, the generation of a greater reducing force leads to electron leak; these electrons combine with oxygen molecules to produce a large amount of ROS and induce oxidative stress [43, 44]. ### 2.2. ROS and Mitochondrial Dysfunction in DN: Dangerous Liaisons The double membrane structure of mitochondria contains a large number of unsaturated fatty acids which are highly vulnerable to ROS attack. Excessive ROS results in membrane lipid peroxidation as well as triggers the mitochondrial permeability transition pore (mPTP) to abnormally open, which in turn increases its permeability and allows proteins to enter the membrane space. These negatively charged proteins are released into the cytoplasm, causing positive ions in the membrane gap to flow back into the matrix. Subsequently, the ion concentration gradient on both sides of the mitochondrial inner membrane disappears [45], mitochondrial membrane potential decreases, oxidative phosphorylation uncouples, and ATP synthesis is blocked. At the same time, it causes an imbalance of related molecules moving in and out of the mitochondria, leading to the dysfunction of the mitochondrial and cytoplasmic barriers. The greater concentration of positive ions in the mitochondrial matrix than in the cytoplasm aggravates swelling and even ruptures the mitochondria [46]. Since mitochondrial DNA (mtDNA) lacks the protection of introns, histones, and other DNA-related proteins and it is near the electron transport chain where ROS production occurs, it is more susceptible to ROS attack than nuclear DNA. Mutations may occur that lead to mitochondrial dysfunction and contribute to the progression of DN [4, 45, 47]. According to a previous study, mtDNA damage precedes bioenergy dysfunction in DN, indicating that systemic mitochondrial dysfunction and glucose-induced mtDNA changes can lead to DN [48]. In general, ROS and mitochondrial dysfunction are mutually causes and effects, forming a vicious cycle. ### 2.3. 
Imbalance of Mitochondrial Dynamics in DN: A Vicious Cycle Mitochondria are highly dynamic organelles that regulate their shape, quantity, distribution, and function through continuous fusion and fission. They form a network-like mode of action in the cell which can be redistributed to meet the energy needs of the cell to the maximum extent as it is important to maintain cell homeostasis [49, 50]. Mitochondrial fusion is mainly involved in the synthesis and repair of mitochondria. When the mitochondria are slightly damaged by harmful stress, such as mtDNA variation and mitochondrial membrane potential decline, the fusion of damaged mitochondria and healthy mitochondria can repair the mutated mtDNA and restore the membrane potential to realize self-repair [51]. Mitochondrial fission also contributes to the maintenance of mitochondrial membrane potential and mtDNA stability. Depolarized mitochondrial membranes and altered mtDNA accumulate during mitochondrial fission and are discarded by autophagy or the ubiquitin-proteasome system in order to maintain normal mitochondrial function [52–54]. Increases in the levels of proteins that facilitate mitochondrial fusion occur early in the disease process in the kidneys of patients with diabetes [33]. These increases may be an early compensatory event for increased ATP demand because increasing mitochondrial fusion induced by high glucose 1 [55] or mitofusins (MFN1 and MFN2) can increase mitochondrial bioenergy function and reduce diabetic kidney damage. Fusion may also prevent renal damage in diabetes by balancing mitochondrial fission and fragmentation, which is generally considered harmful in DN.When the mitochondrial membrane potential is damaged, the pathway for PTEN-induced putative kinase protein 1 (PINK1) to enter the inner membrane of mitochondria is blocked; therefore, it accumulates in the outer membrane, recruits Parkin to the damaged mitochondria, and phosphorylates it. Activated Parkin can ubiquitinate voltage-dependent anion channel protein 1, MFN1, MFN2, and other substrates embedded in the outer membrane. This leads to further regulation of mitochondrial morphology and dynamic changes in fission and fusion. Subsequently, the ubiquitinated mitochondria, with the assistance of autophagy receptor regulatory proteins, such as P62/SQSTM1 and microtubule-associated protein light chain 3, aggregate into double-layer autophagic vesicles, which are encapsulated to form mitochondrial autophagosomes, and fuse with lysosomes to form mitochondrial autophagic lysosomes that are degraded by hydrolases [56, 57]. Nevertheless, accumulation of autophagosomes containing mitochondria has been found in the kidneys of patients with diabetes [58] and rodent models of DN [58–60]. Although dysfunctional mitochondria can be removed by mitophagy, these can also trigger cell death in the presence of an extremely high number of damaged mitochondria relative to the rate of mitophagy [34]. Programmed cell death may occur in several forms, which include apoptosis, programmed necrosis, and autophagic cell death. Despite these distinct cell death pathways, members of the Bcl-2 family have been implicated in the direct or indirect control of mitochondrial processes [61, 62]. 
The permeability of the damaged mitochondrial membrane changes, resulting in the disappearance of the membrane potential, the rupture of mitochondria, and the release of intermembrane space cell death proteins (such as Cyt c, Smac/DIABLO, and HtrA2/Omi) into the cytoplasm, ultimately leading to cell death [63–65].
## 3. Maintaining Mitochondrial Homeostasis: The Target of Herbal Medicine in DN

Mitochondrial homeostasis pertains to the balance between mitochondrial fission, fusion, and biogenesis and mitophagy, which maintains mitochondrial energetics. Diseases such as DN can disrupt mitochondrial homeostasis and thus contribute to disease progression. In recent years, most studies on the mechanisms of herbal medicine treatment of DN have focused on improving mitochondrial homeostasis and function, aiming to restore renal function and slow the progression of DN (Figure 2).

Figure 2 Therapeutic target of herbal medicine on mitochondrial dysfunction in DN. Herbal medicine plays a protective role in inhibiting DRP1-mediated mitochondrial dynamics to improve mitochondrial dysfunction in DN. In addition, herbal medicine enhances mitochondrial biogenesis by inducing the expression of PGC-1α and its upstream regulators (AMPK and SIRT1) and drives PINK1/Parkin-mediated mitophagy. In addition, the renoprotective effects of herbal medicine are associated with antioxidative stress. Abbreviations: DN: diabetic nephropathy; DRP1: dynamin 1-like protein; PGC-1α: peroxisome proliferator-activated receptor γ coactivator 1α; AMPK: 5′-AMP-activated protein kinase; SIRT1: sirtuin-1; PINK1: putative kinase protein 1; Cyt c: cytochrome c; ROS: reactive oxygen species; MFN1 and 2: mitofusin proteins 1 and 2; OPA1: optic atrophy protein 1; MFF: mitofission proteins; FIS1: mitochondrial fission 1; PPAR: peroxisome proliferator-activated receptor; Parkin: E3 ubiquitin-protein ligase parkin.

### 3.1. Mechanism of Herb Medicine on Mitochondrial Biogenesis in DN

The complex process of mitochondrial biogenesis involves the generation of new mitochondrial mass and mtDNA replication, which are derived from preexisting mitochondria. This increases ATP production to meet the growing energy demands of cells. Mitochondrial biosynthesis is controlled by various transcriptional coactivating and coinhibitory factors [66, 67]; however, the peroxisome proliferator-activated receptor γ coactivator- (PGC-) 1α remains the predominant upstream transcriptional regulator of mitochondrial biogenesis [68]. In several gain- and loss-of-function experimental studies, the activation of PGC-1 has been demonstrated to upregulate the expression of mitochondrial genes, including nuclear respiratory factor- (NRF-) 1, NRF-2, peroxisome proliferator-activated receptors (PPARs), and estrogen-related receptor alpha [69–71]. PGC-1α binds to PPARs, which act as master regulators of fatty acid oxidation (FAO) and nutrient supply [72]. Of note, kidney proximal tubules have high levels of baseline energy consumption, supporting FAO as the preferred energy source in proximal tubules [73]. Defective FAO causes lipid accumulation, apoptosis, and tubule epithelial cell dedifferentiation [74].
Taken together, PGC-1α regulates complex processes of nutrient availability, FAO, and mitochondria biogenesis. However, reduced PGC-1α expression and consequent dysfunctional mitochondria have been observed in patients with DN and animal models [6, 75–77]. Moreover, cholesterol accumulation in the kidney is a risk factor for DN progression. PGC-1α acts as a master regulator of lipid metabolism by regulating mitochondria [78]. Given the pivotal role of PGC-1α and metabolism in kidney cells, it is important to search for new approaches to restore the activity of PGC-1α in DN. An increasing number of studies have demonstrated that the interventional mechanisms of herbal medicines on DN are associated with this target. Berberine (BBR), an isoquinoline alkaloid present in Chinese herbal medicine (CHM), is widely used for treating DN. In particular, BBR can directly regulate PGC-1α to enhance FAO in DN, which promotes mitochondrial energy homeostasis and energy metabolism in podocytes [10]. Tangshen formula is a CHM that ameliorates kidney injuries in a diabetic model by promoting the PGC-1α-LXR-ABCA1 pathway to improve renal cholesterol efflux in db/db mice [11]. Moreover, salidroside, an active component of the traditional Chinese medicine herb Rhodiola rosea L., has been reported to greatly attenuate DN by upregulating mtDNA copy number and ETC protein expression [12]. As PGC-1α is almost ubiquitously expressed, targeting its upstream regulatory sensors, such as 5′-AMP-activated protein kinase (AMPK) and NAD-dependent protein deacetylase sirtuin-1 (SIRT1), is generally acknowledged as a significant method to restore mitochondrial function. AMPK, an extensively studied upstream regulator of PGC-1α, increases the rate of mitochondrial biogenesis by initiating the transcription of the PPARGC1A gene and by phosphorylating amino acids threonine-177 and serine-538, which in turn activates PGC-1α. Indeed, the AMPK–SIRT1–PGC-1α axis has become a focal point for herbal medicines that prevent DN. Resveratrol is a naturally occurring polyphenol that imparts anti-inflammatory, antidiabetic, antioxidative, and neuroprotective effects. In particular, resveratrol prevented DN in db/db mice via activation of the AMPK–SIRT1–PGC-1α axis, with PPARs coactivated by PGC-1α [14]. Additional studies revealed that resveratrol imparts a protective effect against DN by ameliorating lipotoxicity, oxidative stress, and apoptosis through direct activation of AdipoR1 and AdipoR2, which in turn upregulates AMPK and forkhead box protein O (FoxO) expression [13]. Interestingly, Zhang et al. further investigated the renoprotective mechanism of resveratrol in vivo and in vitro and found that SIRT1/PGC-1α was upregulated, accompanied by improved mitochondrial function and decreased oxidative stress and apoptosis [15]. Beyond DN alone, the protective role of resveratrol has also been verified in acute renal injury superimposed on DN via activation of SIRT1–PGC-1α–hypoxia-inducible transcription factor-1α (HIF-1α) signaling pathways [16]. The role of sodium glucose cotransporter 2 (SGLT2) in excess glucose reabsorption has become a research topic of interest; SGLT2 inhibitors (SGLT2i) can reduce hyperfiltration and inhibit inflammatory and fibrotic responses elicited by proximal tubular cells [34, 79]. In addition, by enhancing the excretion of urinary glucose, SGLT2i trigger AMPK, a nutrient sensor, which in turn reverses the metabolic disorders associated with DN [80].
Marein is one of the main active components of Coreopsis tinctoria Nutt, which possesses renoprotective activity in DN by directly suppressing SGLT2 expression and then activating the AMPK–acetyl CoA carboxylase (ACC)–PGC-1α pathway to suppress fibrosis and inflammation [17]. ### 3.2. Mechanism of Herb Medicine on Mitochondrial Dynamics in DN Mitochondria are highly dynamic organelles that require correct mitochondrial morphology to maintain maximal ATP production [81]. The major processes of mitophagy, fission, and fusion occur as a response to mitochondrial dynamics as well as to maintain mitochondrial integrity in different metabolic conditions. Fission is essential in the isolation of damaged parts from the rest of the mitochondrial network and is induced by translocating dynamin 1-like protein (DRP1) from the cytosol to the outer membrane of the mitochondria where it binds to its receptors, including mitochondrial fission 1 (FIS1), mitochondrial fission factor (MFF), and mitochondrial dynamics proteins MID49 and MID51 [82, 83]. Mitochondria fusion involves the recruitment of a series of proteins that include MFN1 and MFN2 that triggers outer membrane fusion, as well as optic atrophy protein 1 (OPA1) that facilitates inner membrane fusion. However, in DN, excessive mitochondrial fission and fusion are associated with key features of renal damage. Mitochondrial dysfunction in podocytes is increasingly recognized as a factor contributing to the pathogenesis of DN [84].Previous studies have elucidated the correlation between mitochondrial dynamics disorder and DN progression and revealed that DRP1 may be potentially utilized as a therapeutic target in the treatment of DN [85]. Meanwhile, traditional Chinese medicine has definite effects on this point. Besides its function on PGC-1α-mediated mitochondrial biogenesis, BBR plays a therapeutic role in positively regulating DRP1-mediated mitochondrial dynamics to protect glomerulus and improve the fragmentation and dysfunction of mitochondria in podocytes [18]. AS-IV is a major and active component of Astragalus, which is a traditional Chinese medicinal herb for tonifying. Liu et al. have shown in diabetic db/db mice that AS-IV significantly improves albuminuria and renal pathologic injury. In addition, they found AS-IV decreased the elevation of renal DRP1, Fis1, and MFF expression in db/db mice [19]. More than that, polydatin which is mainly extracted from the roots of Polygonum cuspidatum not only inhibits DRP1 activation and fragmented mitochondria caused by high glucose but also blocked the increase of apoptosis through a DRP1-dependent mechanism [20, 86].Mitophagy allows the removal of damaged and nonfunctional mitochondria from the network and requires the efficient recognition of targeted mitochondria followed by the engulfment of mitochondria by autophagosomes [87]. As part of a healthy network of mitochondria, mitophagy is regulated by a PINK1-PARKIN pathway for mitochondrial identification and labeling [88]. However, impairment of the mitophagy system aggravated the progression of DN, which was mainly caused by decreases in renal PINK1 and Parkin expression in diabetes following activation of either FOXO1 or NRF2 signal [89, 90]. In recent years, the role and regulation of mitophagy in DN have attracted lots of attention. 
It has been reported that AS II, another of the active constituents of Astragalus, exerts protective effects on podocyte injury and mitochondrial dysfunction through enhancing mitophagy activation via modulation of NRF2 and PINK1 [21]. Huangqi-Danshen decoction which mainly includes Astragali Radix (Huang-qi) and Salviae Miltiorrhizae Radix et Rhizoma (Dan-shen) significantly alleviated DN, which might be associated with the reversion of the enhanced mitochondrial fission and the inhibition of PINK1/Parkin-mediated mitophagy [22].
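Because Table 1 and Sections 3.1–3.2 enumerate many herb-target-pathway relationships, it may help to see how such a summary could be organized and queried programmatically, for example when screening for agents acting on a given node. The sketch below encodes a few Table 1 entries as records and filters them by mechanism; the data structure, field names, and selection logic are illustrative choices and not part of the review itself.

```python
# Minimal sketch: a few Table 1 entries as structured records, with a simple query.
# Field names and the query are illustrative choices, not taken from the review.
from dataclasses import dataclass

@dataclass
class HerbEntry:
    name: str
    form: str
    mechanism: str        # "biogenesis", "dynamics", "mitophagy", or "ROS"
    targets: tuple
    reference: int        # bracketed citation number used in the text

TABLE1_SAMPLE = [
    HerbEntry("Berberine", "Pure chemical", "biogenesis", ("PGC-1α", "FAO", "AMPK"), 10),
    HerbEntry("Salidroside", "Pure chemical", "biogenesis", ("SIRT1", "PGC-1α"), 12),
    HerbEntry("Astragaloside IV", "Pure chemical", "dynamics", ("DRP1", "MFF", "FIS1"), 19),
    HerbEntry("Astragaloside II", "Pure chemical", "mitophagy", ("NRF2", "PINK1", "Parkin"), 21),
    HerbEntry("Huangqi-Danshen decoction", "Extract", "mitophagy", ("PINK1", "Parkin"), 22),
]

# Example query: which sampled entries act on PGC-1α-driven mitochondrial biogenesis?
for entry in TABLE1_SAMPLE:
    if entry.mechanism == "biogenesis" and "PGC-1α" in entry.targets:
        print(f"{entry.name} ({entry.form}) -> {', '.join(entry.targets)} [ref {entry.reference}]")
```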
## 4. The Modulation of Mitochondrial ROS: The Effect of Herb Medicine in DN

Increased oxidative stress is due to ROS production caused by dysfunctional cellular respiration during hyperglycemia and is the major pathway involved in the pathogenesis of DN [3]. In the early phase of pathogenesis, ROS, mainly of mitochondrial origin, have a role in the regulation of various metabolic pathways. However, accumulation of ROS beyond the local antioxidant capacity is a biomarker of mitochondrial dysfunction in DN [91]. Overproduction of ROS can subsequently induce oxidative stress and damage critical cellular components (particularly proteins and DNA) and glomerular podocytes, which contributes to inflammation, interstitial fibrosis, and apoptosis [92, 93]. The damaging effect of ROS is thought to be mediated by activation of several pathways such as NF-κB, hexosamine, and the formation of AGE products [94]. The transcription of genes that encode antioxidant enzymes, including SOD2, glutathione peroxidase, and catalase, is activated by NRF2, which in turn promotes antioxidant activity and induces negative feedback on NF-κB [95, 96].
Therefore, an emerging therapeutic target antioxidant defense mechanism and promoting renoprotection in DN involves the activation of NRF2 with its mediated antioxidant enzymes [97], while traditional Chinese medicine has definite antioxidant effects.Nepeta angustifolia C. Y. Wu, an important medicinal material constituting a variety of traditional Chinese medicine prescription, has significant antioxidant activity [98]. Nepeta angustifolia C. Y. Wu inhibits proinflammatory mediators and renal oxidative stress in diabetic rats, as well as improves mitochondrial potential to disrupt mesangial cell apoptosis caused by oxidative stress in vitro [23]. AMPK is an energy sensor in metabolic homeostasis [99]. Recent studies have shown that AMPK participates in the attenuation of oxidative stress in DN [100]. The beneficial effects of resveratrol on renal diseases are attributed to its antioxidative properties. Moreover, the study conducted by Kitada et al. [24] indicated that resveratrol can enhance mitochondrial biogenesis and protect against DN through normalisation of Mn-SOD dysfunction via the AMPK/SIRT1-independent pathway. Betulinic acid is extracted from the outer bark of white birch trees and exerts a protective effect on DN by effectively attenuating oxidative stress and inflammatory conditions via the AMPK/NF-κB/NRF2 signaling pathway [25]. Zhou et al. showed that obacunone, a natural bioactive compound isolated from the Rutaceae family, blocks GSK-3β signal transduction and subsequently enhances the activity of NRF2 to inhibit oxidative stress and mitochondrial dysfunction in NRK-52E cells [26]. Furthermore, Yahya et al. [27] showed using STZ-induced diabetic rats that curcumin imparts a nephroprotective effect via NRF2 activation, inhibition of NF-κB, suppression of NADPH oxidase, and downregulation/inhibition the PKC βII/p66 Shc axis. The role of herbal medicine in NRF2/AGE signal should also be carefully considered as NRF2/AGE plays a pivotal role in controlling transcriptional regulation of the genes encoding endogenous antioxidant enzymes [97]. Notoginsenoside R1, a novel phytoestrogen isolated from Panax notoginseng (Burk.) F. H. Chen, was found to decrease AGE-induced mitochondrial injury and promote NRF2 and HO-1 expression to eliminate oxidative stress and apoptosis in DN [28]. Further, oleanolic acid combined with N-acetylcysteine has therapeutic effects on DN through an antioxidative effect and endoplasmic reticulum stress reduction by the NRF2/Keap1 system [29]. ## 5. Lifestyle Interventions Diabetes is usually accompanied by excessive nutrition and calories, as well as a decrease in physical activity, both of which aggravate nephropathy [91]. In 2020, the consensus statement of the American Association of Clinical Endocrinologists and American College of Endocrinology on the comprehensive management algorithm for type 2 diabetes (T2D) mentioned that lifestyle optimization is essential for all diabetic patients, including healthy eating patterns, weight loss, physical activity, and smoking cessation [101]. We summarize therapeutic strategies about lifestyle intervention, with a focus on mitochondrial biogenesis, to improve the malignant progress of DN. ### 5.1. Healthy Eating Patterns and DN Healthy eating patterns are important for patients with diabetes and DN to maintain glucose control and inhibit the progression of kidney damage [102]. 
Particularly in late-stage kidney disease, a low-protein diet (LPD) can maintain renal function in patients with chronic kidney disease (CKD), including those with DN [103–106]. In terms of the molecular mechanism of LPD against DN, earlier animal studies have revealed that LPD decreases intraglomerular pressure via reduction of afferent arteriole vasoconstriction, which in turn improves glomerular hyperfiltration and hypertension as well as reduces fibrosis of mesangial cells via growth factor-β signals. Furthermore, an LPD, particularly a very low-protein diet, can also prevent renal tubular cell injury, apoptosis, inflammation/oxidative stress, and fibrosis within the tubule-interstitial region by reducing the accumulation of damaged mitochondria, which is triggered by the reduction in the activity of mammalian target of rapamycin complex 1 and the restoration of autophagy [107]. However, owing to the lack of clear results from current clinical trials, the renal protective effect of LPD against DN is controversial. Existing clinical research evidence is unable to fully prove the renal protective effect of LPD [108–110], although other studies have shown that LPD can delay the decline of renal function [111, 112]. In addition, the American Diabetes Association believes that a short-term (approximately 3–4 months) low-carbohydrate (LC) diet is beneficial for diabetes management [113]. Compared with an ordinary diet, an LC diet contains a higher protein and fat content and ratio. The energy required by the body mainly comes from the metabolism of fat into ketone bodies; therefore, it is also called a ketogenic diet. An LC diet can directly reduce blood sugar levels, and ketone bodies have various functions, such as anti-inflammatory, mitochondrial biogenesis-regulating, and antioxidant activity [114]. However, a long-term LC diet may damage kidney function, which is mainly attributed to its high protein content [113]. Several human physiological studies have shown that a high-protein diet can cause renal hyperfiltration [115–117]. Although the actual cause of this phenomenon remains unclear, studies have attempted to describe the effects related to specific amino acid components as well as dietary advanced glycosylation end products [118, 119]. ### 5.2. Weight Loss and DN In a review on weight loss in coronary heart disease, the GFR and proteinuria in patients with weight loss improved, and the weight loss and CKD index effects of surgical intervention were better than those of drug and lifestyle interventions [120–122]. Miras et al. [123] confirmed this finding by retrospectively analysing data from 84 patients with DN who underwent bariatric surgery over a 12–18-month period. Among them, 32 patients with albuminuria at baseline had a mean 3.5-fold decrease in the postoperative albumin–creatinine ratio, and albuminuria in these 32 patients returned to normal levels. A systematic review and meta-analysis including approximately 30 studies reported the impact of bariatric surgery on renal function. All studies measured the changes in relevant indicators of renal dysfunction within 4 weeks before and after bariatric surgery. Among them, six studies measured a 54% reduction in the risk of postoperative glomerular hyperfiltration, and 16 studies measured a 60%–70% reduction in the risk of postoperative albuminuria and total proteinuria [124].
Cohort studies have reported the benefits of bariatric surgery in improving creatinine levels and the GFR or reducing the incidence of stage 4 ESRD [124–128], which may be related to improved renal tubular damage [129]. Furthermore, surgery-induced weight loss can improve mitochondrial biogenesis and mitochondrial dysfunction [70], which may be an effective treatment for DN [130, 131]. ### 5.3. Physical Activity and DN Moderate aerobic exercise can reduce weight and improve insulin sensitivity, hyperglycemia, hyperlipidemia, and DN [132, 133]. Studies have reported that upregulation of the expression of eNOS and nNOS proteins in the kidney and improvement in NADPH oxidase and α-oxyaldehyde levels may reduce early diabetic nephropathy in Zucker diabetic fatty rats. Chronic aerobic exercise has health benefits and may be utilized as a treatment method for preventing the development of renal dysfunction in T2D [134]. However, strenuous exercise may aggravate DN progression. Studies have reported that the rates of urinary protein excretion increase after strenuous exercise and tubular proteinuria occurs [107, 135]. A prospective study has demonstrated for the first time that the intensity of physical activity, rather than the total amount, is associated with the occurrence and progression of DN in type 1 diabetes. Moreover, the beneficial relationship between moderate- and high-intensity physical activity and progression of nephropathy is not affected by the duration of diabetes, age, sex, or smoking [136]. This high-intensity, low-volume training program not only increases the content of citrate synthase and cytochrome c oxidase subunit IV along with insulin sensitivity but also stimulates mitochondrial biogenesis. Contractile activity can lead to important signaling events such as calcium release, changes in the AMP/ATP ratio and cellular redox state, and ROS generation. These events activate AMPK and stimulate PGC-1α [137]. PGC-1α can stimulate several genes encoding mitochondrial proteins, mtDNA amplification and proliferation, and oxidative metabolism. In short, the number of mitochondria per cell and their function increased several times in trained subjects compared to those in sedentary subjects. Although the best exercise type, frequency, and intensity for preventing DN or DN progression have not been formally determined, it is recommended that patients without contraindications perform moderate-to-high-intensity aerobic exercise for at least 150 minutes per week and two to three sessions of resistance exercise per week [138].
## 6. Conclusion Recently, mitochondrial dysfunction has been shown to be a critical determinant of the progressive loss of renal function in patients with diabetes. Pharmacological regulation of mitochondrial networking may be a promising therapeutic strategy in preventing and treating DN. Moreover, nontraditional therapies, including herbal medicine and lifestyle interventions, play a renoprotective role in improving mitochondrial homeostasis and function. Overall, the interventional mechanisms of nontraditional therapies for DN are still in their infancy compared with traditional treatments. Elucidating the mechanism of action and efficacy of nontraditional therapies involving mitochondria may facilitate the discovery of novel therapeutic approaches in treating DN and preventing the progression of DN to ESRD. --- *Source: 1010268-2021-12-09.xml*
# Mitochondrial Dysfunction and Diabetic Nephropathy: Nontraditional Therapeutic Opportunities

**Authors:** Ping Na Zhang; Meng Qi Zhou; Jing Guo; Hui Juan Zheng; Jingyi Tang; Chao Zhang; Yu Ning Liu; Wei Jing Liu; Yao Xian Wang
**Journal:** Journal of Diabetes Research (2021)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2021/1010268
--- ## Abstract Diabetic nephropathy (DN) is a progressive microvascular diabetic complication. Growing evidence shows that persistent mitochondrial dysfunction contributes to the progression of renal diseases, including DN, as it alters mitochondrial homeostasis and, in turn, affects normal kidney function. Pharmacological regulation of mitochondrial networking is a promising therapeutic strategy for preventing and restoring renal function in DN. In this review, we have surveyed recent advances in elucidating the mitochondrial networking and signaling pathways in physiological and pathological contexts. Additionally, we have considered the contributions of nontraditional therapy that ameliorate mitochondrial dysfunction and discussed their molecular mechanism, highlighting the potential value of nontraditional therapies, such as herbal medicine and lifestyle interventions, in therapeutic interventions for DN. The generation of new insights using mitochondrial networking will facilitate further investigations on nontraditional therapies for DN. --- ## Body ## 1. Introduction Diabetic nephropathy (DN) is a chronic disease that is caused by diabetes and is characterized by microangiopathy and alterations in kidney structure and function. It not only causes end-stage renal disease (ESRD) but also significantly increases the incidence and mortality rate of cardiovascular and cerebrovascular diseases [1]. With the rapid increase in the incidence of diabetes, the number of cases of DN worldwide has increased rapidly. In 2019, the International Diabetes Federation reported that approximately 463 million individuals were diagnosed with diabetes, and its incidence is expected to reach 700 million by 2045. In addition, approximately 30%–40% of these individuals are expected to develop DN [2]. However, current therapies delay rather than prevent the progression of ESRD, necessitating the search for new therapeutic targets to ameliorate the poor prognosis of DN. Current studies suggest that irregularities in key pathways and cellular components promote renal dysfunction and lead to DN. These include enhanced glucose metabolite flux, more glycation end (AGE) products, endoplasmic reticulum stress, mitochondrial dysfunction, abnormally active renin angiotensin system, and oxidative stress [3–6], with mitochondrial dysfunction playing a key role in the occurrence and pathogenesis of DN [7]. Various studies have emphasized the impact of nontraditional treatments, such as herbal medicine, nutrition, exercise, and surgical treatment, on the prevention and delayed progression of DN. Nontraditional therapy is considered a well-proven strategy which robustly improves health in most organisms. Randomized controlled clinical trials have shown that herbal medicines are efficacious and safe [8, 9]. In terms of experimental research, studies provided evidence for the efficacy of nontraditional therapies from the perspectives of ameliorating mitochondrial dysfunction. This provides a rationale for further exploration of the effect of nontraditional approaches on DN at the molecular level. Mitochondria are important for renal cell survival, as these serve as metabolic energy producers and regulate programmed cell death. The structure and function of mitochondria are regulated by a mitochondrial quality control (MQC) system, which is a series of processes that include mitochondrial biogenesis, mitochondrial proteostasis, mitochondrial dynamics/mitophagy, and mitochondria-mediated cell death. 
In this review, we have outlined the physiological role of mitochondria in renal function, discussed the role of mitochondrial dysfunction in the occurrence and development of DN, and emphasized the therapeutic effect of nontraditional treatments, particularly herbal medicine (Table 1) and lifestyle interventions, on DN by targeting mitochondrial networking.

Table 1: Mitochondria-targeted herb medicine in DN.

| Category | Herb medicine | Form of herb medicine | Experimental model | Target | Pathway | Observed effect | Ref. |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Mitochondrial biogenesis | Berberine | Pure chemical | Patients with DN, db/db diabetic mice | PGC-1α↑, FAO↑, AMPK↑ | PGC-1α signaling pathway | Restoration of PGC-1α activity and energy homeostasis | [10] |
| Mitochondrial biogenesis | Tangshen formula | Extract | db/db diabetic mice, mTECs | PGC-1α↑, LXR↑, ABCA1↑ | PGC-1α-LXR-ABCA1 pathway | Improving cholesterol efflux | [11] |
| Mitochondrial biogenesis | Salidroside | Pure chemical | db/db diabetic mice | SIRT1↑, PGC-1α↑ | SIRT1/PGC-1α axis | Improving mitochondrial biogenesis | [12] |
| Mitochondrial biogenesis | Resveratrol | Pure chemical | db/db diabetic mice, HGECs | AdipoR1↑, AdipoR2↑, AMPK↑, SIRT1↑, PGC-1α↑, PPARα↑ | AMPK–SIRT1–PGC-1α axis | Ameliorating lipotoxicity, oxidative stress, apoptosis, and endothelial dysfunction | [13] |
| Mitochondrial biogenesis | Resveratrol | Pure chemical | db/db diabetic mice | AMPK↑, SIRT1↑, PGC-1α↑, PPARα↑ | AMPK–SIRT1–PGC-1α axis | Prevention of lipotoxicity-related apoptosis and oxidative stress | [14] |
| Mitochondrial biogenesis | Resveratrol | Pure chemical | STZ-induced diabetic rats, podocytes | SIRT1↑, PGC-1α↑, ROS↓ | SIRT1/PGC-1α axis | Inhibition of mitochondrial oxidative stress and apoptosis | [15] |
| Mitochondrial biogenesis | Resveratrol | Pure chemical | DN rabbits with AKI, HK-2 cells | SIRT1↑, PGC-1α↑, HIF-1α↓ | SIRT1–PGC-1α–HIF-1α signaling pathways | Reducing renal hypoxia, mitochondrial dysfunction, and renal tubular cell apoptosis | [16] |
| Mitochondrial biogenesis | Marein | Extract | db/db diabetic mice, HK-2 cells | SGLT2↓, SREBP-1↓, AMPK↑, PGC-1α↑ | AMPK/ACC/PGC-1α pathway | Amelioration of fibrosis and inflammation | [17] |
| Mitochondrial dynamics | Berberine | Pure chemical | db/db diabetic mice, podocytes | DRP1↓, MFF↓, FIS1↓, MID49↓, MID51↓, PGC-1α↑ | DRP1 modulator | Inhibiting mitochondrial fission and cell apoptosis | [18] |
| Mitochondrial dynamics | Astragaloside IV | Pure chemical | db/db diabetic mice | Drp1↓, MFF↓, Fis1↓ | Mitochondrial quality control network | Amelioration of renal injury | [19] |
| Mitochondrial dynamics | Polydatin | Pure chemical | KKAy mice, hyperglycemia-induced MPC5 cells | DRP1↓, ROS↓, caspase-3↓, caspase-9↓ | ROS/DRP1/mitochondrial fission/apoptosis pathway | Impairing mitochondria fitness and ameliorating podocyte injury | [20] |
| Mitophagy | Astragaloside II | Pure chemical | STZ-induced diabetic rats | NRF2↑, Keap1↓, PINK1↑, Parkin↑ | NRF2 and PINK1 pathway | Amelioration of podocyte injury and mitochondrial dysfunction | [21] |
| Mitophagy | Huangqi-Danshen decoction | Extract | db/db diabetic mice | DRP-1↓, PINK1↑, Parkin↑ | PINK1/Parkin pathway | Protection against kidney injury by inhibiting PINK1/Parkin-mediated mitophagy | [22] |
| Mitochondria ROS | Nepeta angustifolia C. Y. Wu | Extract | HFD/STZ-induced diabetic rats, mesangial cells | SOD↑, ROS↓, MDA↓ | Mitochondrial-caspase apoptosis pathway | Antioxidative stress, inflammation, and inhibiting mesangial cell apoptosis | [23] |
| Mitochondria ROS | Resveratrol | Pure chemical | db/db diabetic mice | ROS↓, AMPK↑, SIRT1 | AMPK/SIRT1-independent pathway | Antioxidative stress and enhanced mitochondrial biogenesis | [24] |
| Mitochondria ROS | Betulinic acid | Pure chemical | STZ-induced diabetic rats | SOD↑, CAT↑, MDA↓, AMPK, NF-κB↓, NRF2↑ | AMPK/NF-κB/NRF2 signaling pathway | Attenuating the oxidative stress and inflammatory condition | [25] |
| Mitochondria ROS | Obacunone | Pure chemical | NRK-52E cells | SOD↑, GSK-3β↓, NRF2↑ | GSK-3β/Fyn pathway | Inhibiting oxidative stress and mitochondrial dysfunction | [26] |
| Mitochondria ROS | Curcumin | Pure chemical | STZ-induced diabetic rats | NRF2↑, FOXO-3a↑, PKCβII↓, NF-κB↓ | PKCβII/p66 Shc axis | Antioxidative stress | [27] |
| Mitochondria ROS | Notoginsenoside R1 | Pure chemical | db/db diabetic mice, HK-2 cells | ROS↓, NRF2↑, HO-1↑ | NRF2 pathway | Inhibition of apoptosis and renal fibrosis caused by oxidative stress | [28] |
| Mitochondria ROS | Oleanolic acid and N-acetylcysteine | Pure chemical | Type 2 diabetic rat model, mesangial cells | ROS↓, NRF2↑, TGF-β/smad2/3↓, α-SMA↓ | NRF2/Keap1 system | Inhibition of oxidative stress and ER stress | [29] |

## 2. Critical Mediator of DN: Mitochondrial Dysfunction The kidney, a highly metabolic organ rich in mitochondria, requires a large amount of ATP for its normal function [30]. The kidney possesses the second highest oxygen consumption and mitochondrial content following the heart [30, 31]. Mitochondrial energetics are altered in DN due to hyperglycemia, which induces changes in the electron transport chain (ETC) that cause an increase in reactive oxygen species (ROS) and a decrease in ATP production. This leads to increased mitochondrial division, decreased PGC1α levels, changes in mitochondrial morphology, increased cell apoptosis, and further aggravation of the condition [32–34] (Figure 1). Figure 1 Hyperglycemia serves as the primary factor that influences mitochondrial dysfunction in DN. The increased level of glucose enhances glycolysis, and the subsequent activation of the TXNIP, AGE, and PKC pathways reinforces the decrease in ATP levels. Insufficient ATP levels stimulate the ETC to overwork in response to the energy supply for the kidneys. In turn, excessive ROS production occurs following the overactivation of the ETC, which results in decreased ATP production, mutation of mtDNA, abnormal opening of the mitochondrial permeability transition pore, and ultimately mitochondrial fragmentation and swelling. Decreases in the levels of OPA1, MFN1, and MFN2 may contribute to the decrease in mitochondrial fusion observed in DN. Activation of DRP1 promotes mitochondrial fragmentation and fission. Damaged mitochondria are cleared by mitophagy. However, an excess number of damaged mitochondria that is higher than the rate of mitophagy may result in cell death. Abbreviations: DN: diabetic nephropathy; DRP1: dynamin 1-like protein; PGC-1α: peroxisome proliferator-activated receptor γ coactivator 1α; AMPK: 5′-AMP-activated protein kinase; SIRT1: sirtuin-1; PINK1: putative kinase protein 1; Cyt c: cytochrome c; ROS: reactive oxygen species; MFN1 and 2: mitofusin proteins 1 and 2; OPA1: optic atrophy protein 1; MFF: mitofission proteins; FIS1: mitochondrial fission 1; PPAR: peroxisome proliferator-activated receptor; Parkin: E3 ubiquitin-protein ligase parkin; ER: endoplasmic reticulum; TXNIP: thioredoxin-interacting protein; AGE: advanced glycation end; PKC: protein kinase C; ETC: electron transport chain. ### 2.1.
Mitochondria: The “Energy Station” for the Kidney In general, the mechanism of ATP production in kidney cells is determined by the cell type. For example, proximal tubules in the renal cortex are dependent on oxidative phosphorylation for ATP production to fuel active glucose, nutrient, and ion transport [35]. However, glomerular cells such as podocytes and mesangial cells are largely utilized for filtering blood, removal of small molecules (e.g., glucose, urea, salt, and water), and retaining large proteins, including hemoglobin [36]. This passive process does not require direct ATP. Therefore, glomerular cells can perform aerobic and anaerobic respiration to produce ATP for basic cellular processes [37–40]. ATP is produced through the respiratory chain, which includes five multienzyme protein complexes embedded in the inner mitochondrial membrane [19], including complex I: NADH CoQ reductase, complex II: succinate-CoQ reductase, complex III: reduced CoQ-cytochrome c reductase, complex IV: cytochrome c oxidase, and complex V: ATP synthase. One palmitate molecule produces 106 ATP molecules, whereas glucose oxidation yields only 36 ATP molecules [41, 42]. Due to the higher energy requirements of the proximal tubules, they use nonesterified fatty acids, such as palmitate, to maximize the production of ATP through β-oxidation. In the diabetic state, there is a large amount of substrate in the form of glucose, which provides fuel for the citric acid cycle and produces more NADH and FADH2. However, during the electron transfer process, the generation of a greater reducing force leads to electron leak; these electrons combine with oxygen molecules to produce a large amount of ROS and induce oxidative stress [43, 44]. ### 2.2. ROS and Mitochondrial Dysfunction in DN: Dangerous Liaisons The double membrane structure of mitochondria contains a large number of unsaturated fatty acids which are highly vulnerable to ROS attack. Excessive ROS results in membrane lipid peroxidation as well as triggers the mitochondrial permeability transition pore (mPTP) to abnormally open, which in turn increases its permeability and allows proteins to enter the membrane space. These negatively charged proteins are released into the cytoplasm, causing positive ions in the membrane gap to flow back into the matrix. Subsequently, the ion concentration gradient on both sides of the mitochondrial inner membrane disappears [45], mitochondrial membrane potential decreases, oxidative phosphorylation uncouples, and ATP synthesis is blocked. At the same time, it causes an imbalance of related molecules moving in and out of the mitochondria, leading to the dysfunction of the mitochondrial and cytoplasmic barriers. The greater concentration of positive ions in the mitochondrial matrix than in the cytoplasm aggravates swelling and even ruptures the mitochondria [46]. Since mitochondrial DNA (mtDNA) lacks the protection of introns, histones, and other DNA-related proteins and it is near the electron transport chain where ROS production occurs, it is more susceptible to ROS attack than nuclear DNA. Mutations may occur that lead to mitochondrial dysfunction and contribute to the progression of DN [4, 45, 47]. According to a previous study, mtDNA damage precedes bioenergy dysfunction in DN, indicating that systemic mitochondrial dysfunction and glucose-induced mtDNA changes can lead to DN [48]. In general, ROS and mitochondrial dysfunction are mutually causes and effects, forming a vicious cycle. ### 2.3. 
Imbalance of Mitochondrial Dynamics in DN: A Vicious Cycle Mitochondria are highly dynamic organelles that regulate their shape, quantity, distribution, and function through continuous fusion and fission. They form a network-like mode of action in the cell which can be redistributed to meet the energy needs of the cell to the maximum extent as it is important to maintain cell homeostasis [49, 50]. Mitochondrial fusion is mainly involved in the synthesis and repair of mitochondria. When the mitochondria are slightly damaged by harmful stress, such as mtDNA variation and mitochondrial membrane potential decline, the fusion of damaged mitochondria and healthy mitochondria can repair the mutated mtDNA and restore the membrane potential to realize self-repair [51]. Mitochondrial fission also contributes to the maintenance of mitochondrial membrane potential and mtDNA stability. Depolarized mitochondrial membranes and altered mtDNA accumulate during mitochondrial fission and are discarded by autophagy or the ubiquitin-proteasome system in order to maintain normal mitochondrial function [52–54]. Increases in the levels of proteins that facilitate mitochondrial fusion occur early in the disease process in the kidneys of patients with diabetes [33]. These increases may be an early compensatory event for increased ATP demand because increasing mitochondrial fusion induced by high glucose 1 [55] or mitofusins (MFN1 and MFN2) can increase mitochondrial bioenergy function and reduce diabetic kidney damage. Fusion may also prevent renal damage in diabetes by balancing mitochondrial fission and fragmentation, which is generally considered harmful in DN.When the mitochondrial membrane potential is damaged, the pathway for PTEN-induced putative kinase protein 1 (PINK1) to enter the inner membrane of mitochondria is blocked; therefore, it accumulates in the outer membrane, recruits Parkin to the damaged mitochondria, and phosphorylates it. Activated Parkin can ubiquitinate voltage-dependent anion channel protein 1, MFN1, MFN2, and other substrates embedded in the outer membrane. This leads to further regulation of mitochondrial morphology and dynamic changes in fission and fusion. Subsequently, the ubiquitinated mitochondria, with the assistance of autophagy receptor regulatory proteins, such as P62/SQSTM1 and microtubule-associated protein light chain 3, aggregate into double-layer autophagic vesicles, which are encapsulated to form mitochondrial autophagosomes, and fuse with lysosomes to form mitochondrial autophagic lysosomes that are degraded by hydrolases [56, 57]. Nevertheless, accumulation of autophagosomes containing mitochondria has been found in the kidneys of patients with diabetes [58] and rodent models of DN [58–60]. Although dysfunctional mitochondria can be removed by mitophagy, these can also trigger cell death in the presence of an extremely high number of damaged mitochondria relative to the rate of mitophagy [34]. Programmed cell death may occur in several forms, which include apoptosis, programmed necrosis, and autophagic cell death. Despite these distinct cell death pathways, members of the Bcl-2 family have been implicated in the direct or indirect control of mitochondrial processes [61, 62]. 
The permeability of the damaged mitochondrial membrane changes, resulting in the disappearance of the membrane potential, the rupture of mitochondria, and the release of intermembrane space cell death proteins (such as Cyt c, Smac/DIABLO, and HtrA2/Omi) into the cytoplasm, ultimately leading to cell death [63–65].
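To put the energetics quoted in Section 2.1 in concrete terms, the short sketch below compares the two substrates using only the figures given there (106 ATP per palmitate molecule, 36 ATP per glucose molecule) and the standard carbon counts of the two molecules; it is an illustrative calculation added for orientation, not part of the original analysis.

```python
# Back-of-the-envelope comparison of the ATP yields quoted in Section 2.1.
# ATP-per-molecule figures come from the review text; carbon counts are
# standard chemistry (palmitate C16H32O2, glucose C6H12O6).
substrates = {
    "palmitate": {"atp_per_molecule": 106, "carbons": 16},
    "glucose": {"atp_per_molecule": 36, "carbons": 6},
}

for name, info in substrates.items():
    per_carbon = info["atp_per_molecule"] / info["carbons"]
    print(f"{name}: {info['atp_per_molecule']} ATP per molecule, "
          f"{per_carbon:.1f} ATP per carbon atom")

# Expected output:
# palmitate: 106 ATP per molecule, 6.6 ATP per carbon atom
# glucose: 36 ATP per molecule, 6.0 ATP per carbon atom
```

On a per-molecule basis palmitate therefore yields roughly three times as much ATP as glucose, which is consistent with the review's point that the energy-hungry proximal tubules preferentially oxidize fatty acids.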
## 3. Maintaining Mitochondrial Homeostasis: The Target of Herbal Medicine in DN Mitochondrial homeostasis pertains to the balance between mitochondrial fission, fusion, and biogenesis and mitophagy, which maintains mitochondrial energetics. Diseases such as DN can disrupt mitochondrial homeostasis and thus contribute to disease progression. In recent years, most studies on the mechanisms of herbal medicine treatment of DN have focused on improving mitochondrial homeostasis and function, aiming to restore renal function and slow the progression of DN (Figure 2). Figure 2 Therapeutic target of herbal medicine on mitochondrial dysfunction in DN. Herbal medicine plays a protective role in inhibiting DRP1-mediated mitochondrial dynamics to improve mitochondrial dysfunction in DN. In addition, herbal medicine enhances mitochondrial biogenesis by inducing the expression of PGC-1α and its upstream regulators (AMPK and SIRT1) and drives PINK1/Parkin-mediated mitophagy. In addition, the renoprotective effects of herbal medicine are associated with antioxidative stress. Abbreviations: DN: diabetic nephropathy; DRP1: dynamin 1-like protein; PGC-1α: peroxisome proliferator-activated receptor γ coactivator 1α; AMPK: 5′-AMP-activated protein kinase; SIRT1: sirtuin-1; PINK1: putative kinase protein 1; Cyt c: cytochrome c; ROS: reactive oxygen species; MFN1 and 2: mitofusin proteins 1 and 2; OPA1: optic atrophy protein 1; MFF: mitofission proteins; FIS1: mitochondrial fission 1; PPAR: peroxisome proliferator-activated receptor; Parkin: E3 ubiquitin-protein ligase parkin. ### 3.1. Mechanism of Herb Medicine on Mitochondrial Biogenesis in DN The complex process of mitochondrial biogenesis involves the generation of new mitochondrial mass and mtDNA replication, which are derived from preexisting mitochondria. This increases ATP production to meet the growing energy demands of cells. Mitochondrial biosynthesis is controlled by various transcriptional coactivating and coinhibitory factors [66, 67]; however, the peroxisome proliferator-activated receptor γ coactivator- (PGC-) 1α remains the predominant upstream transcriptional regulator of mitochondrial biogenesis [68]. In several gain- and loss-of-function experimental studies, the activation of PGC-1 has been demonstrated to upregulate the expression of mitochondrial genes, including nuclear respiratory factor- (NRF-) 1, NRF-2, peroxisome proliferator-activated receptors (PPARs), and estrogen-related receptor alpha [69–71]. PGC-1α binds to PPARs, which act as master regulators of fatty acid oxidation (FAO) and nutrient supply [72]. Of note, kidney proximal tubules have high levels of baseline energy consumption, supporting FAO as the preferred energy source in proximal tubules [73]. Defective FAO causes lipid accumulation, apoptosis, and tubule epithelial cell dedifferentiation [74].
Taken together, PGC-1α regulates complex processes of nutrient availability, FAO, and mitochondrial biogenesis. However, reduced PGC-1α expression and consequent dysfunctional mitochondria have been observed in patients with DN and animal models [6, 75–77]. Moreover, cholesterol accumulation in the kidney is a risk factor for DN progression. PGC-1α acts as a master regulator of lipid metabolism by regulating mitochondria [78]. Given the pivotal role of PGC-1α and metabolism in kidney cells, it is important to search for new approaches to restore the activity of PGC-1α in DN. An increasing number of studies have demonstrated that the interventional mechanisms of herbal medicines on DN are associated with this target. Berberine (BBR), an isoquinoline alkaloid present in Chinese herbal medicine (CHM), is widely used for treating DN. In particular, BBR can directly regulate PGC-1α to enhance FAO in DN, which promotes mitochondrial energy homeostasis and energy metabolism in podocytes [10]. Tangshen formula is a CHM that ameliorates kidney injuries in a diabetic model by promoting the PGC-1α-LXR-ABCA1 pathway to improve renal cholesterol efflux in db/db mice [11]. Moreover, salidroside, an active component of the traditional Chinese medicine herb Rhodiola rosea L., has been reported to greatly attenuate DN by upregulating mtDNA copy number and ETC protein expression [12]. As PGC-1α is almost ubiquitously expressed, targeting its upstream regulatory sensors, such as 5′-AMP-activated protein kinase (AMPK) and NAD-dependent protein deacetylase sirtuin-1 (SIRT1), is generally acknowledged as a significant method to restore mitochondrial function. AMPK, an extensively studied upstream regulator of PGC-1α, increases the rate of mitochondrial biogenesis by initiating the transcription of the PPARGC1A gene and by phosphorylating amino acids threonine-177 and serine-538, which in turn activates PGC-1α. Indeed, prevention of DN by herbal medicine via the AMPK-SIRT1-PGC-1α axis is a research hot spot. Resveratrol is a naturally occurring polyphenol that imparts anti-inflammatory, antidiabetic, antioxidative, and neuroprotective effects. In particular, resveratrol prevents DN via activation of the AMPK-SIRT1-PGC-1α axis, with PPARs coactivated by PGC-1α, in db/db mice [14]. Additional studies revealed that resveratrol imparts a protective effect against DN by ameliorating lipotoxicity, oxidative stress, and apoptosis through direct activation of AdipoR1 and AdipoR2, which in turn upregulates AMPK and forkhead box protein O (FoxO) expression [13]. Interestingly, Zhang et al. further investigated the renoprotective mechanism of resveratrol in vivo and in vitro and suggested that SIRT1/PGC-1α was upregulated, accompanied by improved mitochondrial function and decreased oxidative stress and apoptosis [15]. Beyond DN alone, the protective role of resveratrol has also been verified in acute renal injury with DN via activation of SIRT1–PGC-1α–hypoxia-inducible transcription factor-1α (HIF-1α) signaling pathways [16]. The role of sodium glucose cotransporter 2 (SGLT2) in excess glucose reabsorption has become a research topic of interest; SGLT2 inhibitors (SGLT2i) can reduce hyperfiltration and inhibit inflammatory and fibrotic responses that are elicited by proximal tubular cells [34, 79]. In addition, by enhancing the excretion of urinary glucose, SGLT2i trigger AMPK, a nutrient sensor, which in turn reverses the metabolic disorders associated with DN [80].
Marein is one of the main active components of Coreopsis tinctoria Nutt, which possesses renoprotective activity in DN by directly suppressing SGLT2 expression and then activating the AMPK–acetyl CoA carboxylase (ACC)–PGC-1α pathway to suppress fibrosis and inflammation [17]. ### 3.2. Mechanism of Herb Medicine on Mitochondrial Dynamics in DN Mitochondria are highly dynamic organelles that require correct mitochondrial morphology to maintain maximal ATP production [81]. The major processes of mitophagy, fission, and fusion occur as a response to mitochondrial dynamics as well as to maintain mitochondrial integrity in different metabolic conditions. Fission is essential in the isolation of damaged parts from the rest of the mitochondrial network and is induced by translocating dynamin 1-like protein (DRP1) from the cytosol to the outer membrane of the mitochondria where it binds to its receptors, including mitochondrial fission 1 (FIS1), mitochondrial fission factor (MFF), and mitochondrial dynamics proteins MID49 and MID51 [82, 83]. Mitochondria fusion involves the recruitment of a series of proteins that include MFN1 and MFN2 that triggers outer membrane fusion, as well as optic atrophy protein 1 (OPA1) that facilitates inner membrane fusion. However, in DN, excessive mitochondrial fission and fusion are associated with key features of renal damage. Mitochondrial dysfunction in podocytes is increasingly recognized as a factor contributing to the pathogenesis of DN [84].Previous studies have elucidated the correlation between mitochondrial dynamics disorder and DN progression and revealed that DRP1 may be potentially utilized as a therapeutic target in the treatment of DN [85]. Meanwhile, traditional Chinese medicine has definite effects on this point. Besides its function on PGC-1α-mediated mitochondrial biogenesis, BBR plays a therapeutic role in positively regulating DRP1-mediated mitochondrial dynamics to protect glomerulus and improve the fragmentation and dysfunction of mitochondria in podocytes [18]. AS-IV is a major and active component of Astragalus, which is a traditional Chinese medicinal herb for tonifying. Liu et al. have shown in diabetic db/db mice that AS-IV significantly improves albuminuria and renal pathologic injury. In addition, they found AS-IV decreased the elevation of renal DRP1, Fis1, and MFF expression in db/db mice [19]. More than that, polydatin which is mainly extracted from the roots of Polygonum cuspidatum not only inhibits DRP1 activation and fragmented mitochondria caused by high glucose but also blocked the increase of apoptosis through a DRP1-dependent mechanism [20, 86].Mitophagy allows the removal of damaged and nonfunctional mitochondria from the network and requires the efficient recognition of targeted mitochondria followed by the engulfment of mitochondria by autophagosomes [87]. As part of a healthy network of mitochondria, mitophagy is regulated by a PINK1-PARKIN pathway for mitochondrial identification and labeling [88]. However, impairment of the mitophagy system aggravated the progression of DN, which was mainly caused by decreases in renal PINK1 and Parkin expression in diabetes following activation of either FOXO1 or NRF2 signal [89, 90]. In recent years, the role and regulation of mitophagy in DN have attracted lots of attention. 
It has been reported that AS II, another of the active constituents of Astragalus, exerts protective effects on podocyte injury and mitochondrial dysfunction through enhancing mitophagy activation via modulation of NRF2 and PINK1 [21]. Huangqi-Danshen decoction which mainly includes Astragali Radix (Huang-qi) and Salviae Miltiorrhizae Radix et Rhizoma (Dan-shen) significantly alleviated DN, which might be associated with the reversion of the enhanced mitochondrial fission and the inhibition of PINK1/Parkin-mediated mitophagy [22].
## 4. The Modulation of Mitochondrial ROS: The Effect of Herb Medicine in DN Increased oxidative stress is due to ROS production caused by dysfunctional cellular respiration during hyperglycemia and is the major pathway involved in the pathogenesis of DN [3]. In the early phase of pathogenesis, ROS, mainly of mitochondrial origin, play a role in the regulation of various metabolic pathways. However, their accumulation beyond local antioxidant capacity is a biomarker of mitochondrial dysfunction in DN [91]. Overproduction of ROS can subsequently induce oxidative stress and cause damage to critical cellular components (particularly protein and DNA) and glomerular podocytes, which contributes to inflammation, interstitial fibrosis, and apoptosis [92, 93]. The damaging effect of ROS is thought to be mediated by activation of several pathways such as NF-κB, hexosamine, and the formation of AGE products [94]. The transcription of genes that encode antioxidant enzymes, including SOD2, glutathione peroxidase, and catalase, is activated by NRF2, which in turn promotes antioxidant activity and induces negative feedback on NF-κB [95, 96].
Therefore, an emerging therapeutic strategy for strengthening antioxidant defenses and promoting renoprotection in DN involves activation of NRF2 and its downstream antioxidant enzymes [97], and several traditional Chinese medicines have well-documented antioxidant effects. Nepeta angustifolia C. Y. Wu, an important medicinal material used in a variety of traditional Chinese medicine prescriptions, has significant antioxidant activity [98]. Nepeta angustifolia C. Y. Wu inhibits proinflammatory mediators and renal oxidative stress in diabetic rats and improves mitochondrial membrane potential to reduce mesangial cell apoptosis caused by oxidative stress in vitro [23]. AMPK is an energy sensor in metabolic homeostasis [99], and recent studies have shown that AMPK participates in the attenuation of oxidative stress in DN [100]. The beneficial effects of resveratrol on renal diseases are attributed to its antioxidative properties. Moreover, the study conducted by Kitada et al. [24] indicated that resveratrol can enhance mitochondrial biogenesis and protect against DN through normalization of Mn-SOD dysfunction via an AMPK/SIRT1-independent pathway. Betulinic acid, extracted from the outer bark of white birch trees, exerts a protective effect on DN by effectively attenuating oxidative stress and inflammation via the AMPK/NF-κB/NRF2 signaling pathway [25]. Zhou et al. showed that obacunone, a natural bioactive compound isolated from the Rutaceae family, blocks GSK-3β signal transduction and subsequently enhances the activity of NRF2 to inhibit oxidative stress and mitochondrial dysfunction in NRK-52E cells [26]. Furthermore, Yahya et al. [27] showed in STZ-induced diabetic rats that curcumin imparts a nephroprotective effect via NRF2 activation, inhibition of NF-κB, suppression of NADPH oxidase, and downregulation/inhibition of the PKC βII/p66Shc axis. The role of herbal medicine in NRF2/AGE signaling should also be carefully considered, as this axis plays a pivotal role in controlling transcriptional regulation of the genes encoding endogenous antioxidant enzymes [97]. Notoginsenoside R1, a novel phytoestrogen isolated from Panax notoginseng (Burk.) F. H. Chen, was found to decrease AGE-induced mitochondrial injury and promote NRF2 and HO-1 expression, thereby alleviating oxidative stress and apoptosis in DN [28]. Further, oleanolic acid combined with N-acetylcysteine has therapeutic effects on DN through antioxidative effects and reduction of endoplasmic reticulum stress via the NRF2/Keap1 system [29]. ## 5. Lifestyle Interventions Diabetes is usually accompanied by excessive nutrition and caloric intake as well as decreased physical activity, both of which aggravate nephropathy [91]. In 2020, the consensus statement of the American Association of Clinical Endocrinologists and American College of Endocrinology on the comprehensive management algorithm for type 2 diabetes (T2D) noted that lifestyle optimization, including healthy eating patterns, weight loss, physical activity, and smoking cessation, is essential for all diabetic patients [101]. Here, we summarize therapeutic strategies for lifestyle intervention, with a focus on mitochondrial biogenesis, to slow the progression of DN. ### 5.1. Healthy Eating Patterns and DN Healthy eating patterns are important for patients with diabetes and DN to maintain glucose control and inhibit the progression of kidney damage [102].
Particularly in late-stage kidney disease, a low-protein diet (LPD) can maintain renal function in patients with chronic kidney disease (CKD), including those with DN [103–106]. In terms of the molecular mechanism of LPD against DN, earlier animal studies revealed that an LPD decreases intraglomerular pressure via reduction of afferent arteriole vasoconstriction, which in turn improves glomerular hyperfiltration and hypertension and reduces fibrosis of mesangial cells via growth factor-β signals. Furthermore, an LPD, particularly a very low-protein diet, can also prevent renal tubular cell injury, apoptosis, inflammation/oxidative stress, and fibrosis within the tubulointerstitial region by reducing the accumulation of damaged mitochondria, an effect triggered by reduced activity of mammalian target of rapamycin complex 1 and restoration of autophagy [107]. However, owing to the lack of clear results from current clinical trials, the renoprotective effect of LPD against DN remains controversial. Existing clinical evidence cannot fully prove the renoprotective effect of LPD [108–110], although other studies have shown that LPD can delay the decline of renal function [111, 112]. In addition, the American Diabetes Association considers a short-term (approximately 3–4 months) low-carbohydrate (LC) diet to be beneficial for diabetes management [113]. Compared with an ordinary diet, an LC diet contains a higher proportion of protein and fat. The energy required by the body mainly comes from the metabolism of fat into ketone bodies; therefore, it is also called a ketogenic diet. An LC diet can directly reduce blood sugar levels, and ketone bodies have various functions, such as anti-inflammatory, mitochondrial biogenesis-regulating, and antioxidant activities [114]. However, a long-term LC diet may damage kidney function, which is mainly attributed to its high protein content [113]. Several human physiological studies have shown that a high-protein diet can cause renal hyperfiltration [115–117]. Although the actual cause of this phenomenon remains unclear, studies have attempted to describe effects related to specific amino acid components as well as dietary advanced glycosylation end products [118, 119]. ### 5.2. Weight Loss and DN In a review of weight loss in coronary heart disease, GFR and proteinuria improved in patients who lost weight, and the effects of surgical intervention on weight loss and CKD indices were better than those of drug and lifestyle interventions [120–122]. Miras et al. [123] confirmed this finding by retrospectively analysing data from 84 patients with DN who underwent bariatric surgery over a 12–18-month period. Among them, 32 patients with albuminuria at baseline had a mean 3.5-fold decrease in the postoperative albumin–creatinine ratio, and albuminuria in 32 patients returned to normal levels. A systematic review and meta-analysis including approximately 30 studies reported the impact of bariatric surgery on renal function. All studies measured the changes in relevant indicators of renal dysfunction within 4 weeks before and after bariatric surgery. Among them, six studies measured a 54% reduction in the risk of postoperative glomerular hyperfiltration, and 16 studies measured a 60%–70% reduction in the risk of postoperative albuminuria and total proteinuria [124].
Cohort studies have reported the benefits of bariatric surgery in improving creatinine levels and the GFR or reducing the incidence of stage 4 ESRD [124–128], which may be related to improved renal tubular damage [129]. Furthermore, surgery-induced weight loss can improve mitochondrial biogenesis and mitochondrial dysfunction [70] and may therefore be an effective treatment for DN [130, 131]. ### 5.3. Physical Activity and DN Moderate aerobic exercise can reduce weight and improve insulin sensitivity, hyperglycemia, hyperlipidemia, and DN [132, 133]. Studies in Zucker diabetic fatty rats have reported that exercise-induced upregulation of renal eNOS and nNOS protein expression, together with improvements in NADPH oxidase and α-oxoaldehyde levels, may attenuate early diabetic nephropathy. Chronic aerobic exercise has health benefits and may be utilized as a treatment for preventing the development of renal dysfunction in T2D [134]. However, strenuous exercise may aggravate DN progression: studies have reported that rates of urinary protein excretion increase after strenuous exercise and that tubular proteinuria occurs [107, 135]. A prospective study demonstrated for the first time that the intensity of physical activity, rather than the total amount, is associated with the occurrence and progression of DN in type 1 diabetes. Moreover, the beneficial relationship between moderate- and high-intensity physical activity and progression of nephropathy is not affected by the duration of diabetes, age, sex, or smoking [136]. High-intensity, low-volume training not only increases the content of citrate synthase and cytochrome c oxidase subunit IV while improving insulin sensitivity but also stimulates mitochondrial biogenesis. Contractile activity triggers important signaling events such as calcium release, changes in the AMP/ATP ratio and cellular redox state, and ROS generation; these events activate AMPK and stimulate PGC-1α [137]. PGC-1α in turn stimulates the expression of genes encoding mitochondrial proteins, mtDNA amplification and proliferation, and oxidative metabolism. In short, the number of mitochondria per cell and their function are severalfold higher in trained subjects than in sedentary subjects. Although the best exercise type, frequency, and intensity for preventing DN or its progression have not been formally determined, patients without contraindications are advised to perform at least 150 minutes of moderate-to-high-intensity aerobic exercise and two to three sessions of resistance exercise per week [138].
## 6. Conclusion Recently, mitochondrial dysfunction has been shown to be a critical determinant of the progressive loss of renal function in patients with diabetes. Pharmacological regulation of mitochondrial networking may be a promising therapeutic strategy for preventing and treating DN. Moreover, nontraditional therapies, including herbal medicine and lifestyle interventions, play a renoprotective role by improving mitochondrial homeostasis and function. Overall, compared with traditional treatments, understanding of the interventional mechanisms of nontraditional therapies for DN is still in its infancy. Elucidating the mechanisms of action and efficacy of nontraditional therapies involving mitochondria may facilitate the discovery of novel therapeutic approaches for treating DN and preventing its progression to ESRD. --- *Source: 1010268-2021-12-09.xml*
# The Influence of Age on Interaction between Breath-Holding Test and Single-Breath Carbon Dioxide Test

**Authors:** Nikita Trembach; Igor Zabolotskikh
**Journal:** BioMed Research International (2017)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2017/1010289

---

## Abstract

Introduction. The aim of the study was to compare the breath-holding test and the single-breath carbon dioxide test in evaluating the peripheral chemoreflex sensitivity to carbon dioxide in healthy subjects of different ages. Methods. The study involved 47 healthy volunteers between the ages of 25 and 85 years. All participants were divided into 4 groups according to age: 25 to 44 years (n=14), 45 to 60 years (n=13), 60 to 75 years (n=12), and older than 75 years (n=8). The breath-holding test was performed in the morning before breakfast. The single-breath carbon dioxide (SB-CO2) test was performed the following day. Results. No correlation was found between age and duration of breath-holding (r=0.13) or between age and peripheral chemoreflex sensitivity to CO2 (r=0.07). In all age groups there were no significant differences in the mean values from the breath-holding test and peripheral chemoreflex sensitivity tests. In all groups there was a strong significant inverse correlation between the breath-holding test and the SB-CO2 test. Conclusion. A breath-holding test reflects the sensitivity of the peripheral chemoreflex to carbon dioxide in healthy elderly humans. Increasing age alone does not alter the peripheral ventilatory response to hypercapnia.

---

## Body

## 1. Introduction

The role of peripheral chemoreflex sensitivity to hypoxia and hypercapnia in the pathogenesis of various pathological conditions has garnered much attention in recent years. The degree of impairment in cardiorespiratory reflex regulation is a marker of disease progression and prognosis [1, 2]. Increased peripheral chemoreflex sensitivity is associated with a decrease in arterial baroreflex sensitivity in chronic cardiovascular diseases [3], which is a risk factor for hemodynamic instability. The study of peripheral chemoreflex sensitivity is traditionally performed with a hypoxic test [4–6]; however, persistent hypoxia occurs during these techniques, which can potentially lead to respiratory depression due to central effects [5]. Furthermore, there is a potential risk of adverse incidents related to hypoxia, especially in high-risk patients. The single-breath carbon dioxide test, designed by McClean et al. [7], is an alternative method of evaluating peripheral chemoreflex sensitivity and is relatively safe compared with hypoxic tests; in addition, it has worked well in clinical practice [8]. However, this method also requires sophisticated equipment, which limits its application in routine practice. The duration of voluntary apnea depends on several factors, one of which is the sensitivity of the peripheral chemoreflex [9]. However, the properties of the respiratory system may change with age, and respiratory biomechanics in the elderly may differ from those in young adults. Data on the effect of age on peripheral chemoreflex sensitivity are controversial; there are studies showing an increase [10] or a decrease [11] in sensitivity, while other researchers found no effect of age on the peripheral chemoreflex [12]. However, much of this research describes the hypoxic test using a hypoxic gas mixture (pure nitrogen) to reduce the arterial oxygen saturation to 65–85%.
Currently, there are few data on the effect of age on the sensitivity of peripheral chemoreceptors to carbon dioxide. The aim of the study was to assess whether the breath-holding test reflects peripheral chemoreflex sensitivity to carbon dioxide in healthy subjects of different ages. ## 2. Methods The study involved 47 healthy volunteers between the ages of 25 and 85 years (23 males, 24 females). The study was approved by the local ethics committee, and all subjects provided signed informed consent to both tests. Volunteers were recruited from the general population during 2015-2016. All participants were divided into 4 groups according to age: 25 to 44 years (n=14), 45 to 60 years (n=13), 60 to 75 years (n=12), and more than 75 years (n=8). No subjects had a history of chronic respiratory or cardiovascular disease, alcohol abuse, or smoking. Before the study, all participants were weighed, body mass index was calculated, and respiratory function was evaluated using spirometry (Table 1).

Table 1: Characteristics of the subjects (mean ± standard deviation). FEV1: forced expiratory volume in 1 second; VLC: vital lung capacity.

| Age group | 25–44 years | 45–59 years | 60–74 years | ≥75 years |
|---|---|---|---|---|
| Average age, years | 34 ± 5 | 53 ± 4 | 66 ± 4 | 79 ± 3 |
| Weight, kg | 72 ± 4 | 76 ± 4 | 74 ± 4 | 69 ± 4 |
| Height, cm | 167 ± 4 | 164 ± 5 | 165 ± 6 | 164 ± 4 |
| FEV1 (% predicted) | 98 ± 4 | 96 ± 6 | 97 ± 7 | 95 ± 5 |
| VLC (% predicted) | 101 ± 3 | 99 ± 4 | 95 ± 6 | 94 ± 6 |

In all participants, the breath-holding test was performed in the morning before breakfast; the single-breath carbon dioxide test was performed the following day. The single-breath carbon dioxide test was performed as follows. The participant’s nose was clamped using a soft grip. Breathing through the mouth was monitored using a mouthpiece connected to a pneumatic respiratory valve separating the inhaled gas mixture from exhaled air. The inspiratory port was connected to a T-shaped valve in such a way that ventilation was carried out from either a rubber bag or a 2 L tank, which was filled after each inhalation with a gas mixture containing 13% CO2 or with atmospheric air. After a brief period of eupnoea (approximately 5 min), in the expiratory phase, the T-shaped valve was switched so that the next breath was taken from the high-CO2 mixture; the valve was then switched back to atmospheric air. On average, 10 breaths of the hypercapnic mixture were taken, separated by intervals of 2 min of breathing room air. Respiratory rate and tidal volume were estimated breath by breath with calculation of minute ventilation (MV) (Volumeter Blease, United Kingdom). The CO2 fraction in the exhaled mixture was measured using a sidestream gas analyser (Nihon Kohden, Japan). The average minute ventilation calculated from the last five breaths before breathing the hypercapnic mixture served as the control MV. Likewise, the average FetCO2 determined during these breaths was used as the control FetCO2. The ventilation response to a hypercapnic stimulus was determined as the average of the two highest values of MV during the first 20 seconds after the stimulus (breaths beyond this time were excluded to minimize the contribution of central chemoreception). Poststimulus FetCO2 was also assessed during these cycles.
The ventilation response to breathing the hypercapnic mixture was calculated as (poststimulus MV − control MV) / [(poststimulus FetCO2 − control FetCO2) × (Patm − 47)], where Patm represents the atmospheric pressure in mmHg and 47 is the saturated water vapour pressure in mmHg. The median of all 10 episodes was taken as the sensitivity of the peripheral chemoreflex, expressed in L/min/mmHg. The breath-holding test was performed as follows: voluntary breath-holding duration was assessed three times, with 10 min intervals of normal resting breathing. After inspiration of an atmospheric air volume equal to 2/3 of the vital lung capacity ±15%, the participant was asked to hold their breath, and the duration of voluntary apnea was measured from the beginning of the voluntary inspiration until reflex contractions of the diaphragm were noted by palpation. The mean duration of the three trials was calculated. Data are presented as mean ± standard deviation because of normal distribution (Shapiro–Wilk test). To assess the relationship between the two methods, Pearson’s correlation coefficient was calculated. ## 3. Results of the Study In total, the average sensitivity of the peripheral chemoreflex was 0.326 ± 0.107 L/min/mmHg, and the average duration of the breath-holding test was 49 ± 10 seconds. There was a positive correlation between the subjects’ height and peripheral chemoreflex sensitivity (r=0.45, R2=0.2, p<0.05); no correlation was found between chemoreflex sensitivity and other characteristics. We also found a positive correlation between the duration of breath-holding and vital lung capacity (r=0.54, R2=0.29, p<0.05). No correlation was found between age and breath-holding duration (r=0.13, R2=0.2) (Figure 1(a)) or between age and peripheral chemoreflex sensitivity (r=0.07, R2=0.2) (Figure 1(b)). Figure 1: The relationship of age and breath-holding duration (a) and of age and peripheral chemoreflex sensitivity to carbon dioxide (b). In general, we found a significant inverse correlation between the results of the two tests (r=-0.88, R2=0.78, p<0.001) (Figure 2(a)). The linear regression equation for this relationship was y = -0.00863x + 0.75. A significant inverse correlation was also found between breath-holding duration normalized to vital lung capacity and peripheral chemoreflex sensitivity normalized to subjects’ height (r=-0.8, R2=0.65, p<0.001) (Figure 2(b)); the linear regression equation for this relationship was y = -0.000158x + 0.00465. Figure 2: The relationship of breath-holding duration and peripheral chemoreflex sensitivity to carbon dioxide (SB-CO2) (a) and of breath-holding duration normalized to vital lung capacity (VLC) and peripheral chemoreflex sensitivity to carbon dioxide normalized to height (b). In all age groups there were no significant differences in the mean values of breath-holding duration and peripheral chemoreflex sensitivity to carbon dioxide (Table 2). In all groups there was a strong, significant inverse correlation between breath-holding duration and peripheral chemoreflex sensitivity.

Table 2: Correlation between breath-holding duration and peripheral chemoreflex sensitivity to carbon dioxide in different age groups (*p < 0.05).

| Age group | 25–44 years | 45–59 years | 60–74 years | ≥75 years |
|---|---|---|---|---|
| Breath-holding duration, sec | 51 ± 13 | 48 ± 11 | 48 ± 8 | 47 ± 7 |
| Peripheral chemoreflex CO2 sensitivity, L/min/mmHg | 0.317 ± 0.119 | 0.345 ± 0.105 | 0.312 ± 0.064 | 0.333 ± 0.124 |
| Correlation coefficient | −0.93* | −0.86* | −0.83* | −0.88* |
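To make the calculation concrete, the following is a minimal Python sketch (illustrative only, not the authors' analysis code). The formula, the median over 10 episodes, and the regression equation y = -0.00863x + 0.75 are taken from the text above; the function name, all numerical values, and the assumption that FetCO2 is recorded as a fraction are hypothetical.

```python
# Illustrative sketch (not the authors' code) of the SB-CO2 sensitivity calculation
# described above. All numbers below are invented for demonstration only.
from statistics import median

def episode_sensitivity(control_mv, post_mv, control_fetco2, post_fetco2, patm=760.0):
    """Ventilatory response for one hypercapnic breath, in L/min/mmHg.

    control_mv, post_mv         -- minute ventilation (L/min) before/after the stimulus
    control_fetco2, post_fetco2 -- end-tidal CO2 expressed as a fraction (assumption)
    patm                        -- atmospheric pressure, mmHg; 47 mmHg is water vapour pressure
    """
    delta_mv = post_mv - control_mv
    delta_pco2 = (post_fetco2 - control_fetco2) * (patm - 47.0)  # fraction -> mmHg
    return delta_mv / delta_pco2

# Hypothetical values for 10 episodes:
# (control MV, poststimulus MV, control FetCO2, poststimulus FetCO2)
episodes = [
    (7.8, 9.1, 0.054, 0.059), (8.0, 9.5, 0.055, 0.060), (7.6, 8.8, 0.053, 0.058),
    (8.1, 9.4, 0.055, 0.060), (7.9, 9.0, 0.054, 0.059), (8.2, 9.6, 0.056, 0.061),
    (7.7, 9.0, 0.054, 0.059), (8.0, 9.3, 0.055, 0.060), (7.8, 9.2, 0.054, 0.059),
    (8.1, 9.5, 0.055, 0.060),
]

# The median over all 10 episodes is taken as the peripheral chemoreflex sensitivity.
sb_co2_sensitivity = median(episode_sensitivity(*e) for e in episodes)
print(f"SB-CO2 sensitivity: {sb_co2_sensitivity:.3f} L/min/mmHg")

# Rough estimate from breath-hold duration alone, using the regression reported above,
# assuming x is the breath-hold duration in seconds and y the sensitivity in L/min/mmHg
# (consistent with the reported mean values of 49 s and 0.326 L/min/mmHg).
breath_hold_s = 49
print(f"Estimate from breath-hold duration: {-0.00863 * breath_hold_s + 0.75:.3f} L/min/mmHg")
```

With these invented inputs the sketch returns a sensitivity of about 0.36–0.37 L/min/mmHg, of the same order as the 0.326 ± 0.107 L/min/mmHg reported above.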
## 4. Discussion The duration of breath-holding after deep inspiration is a function of several factors [13]: chemoreception, mechanoreception (lung stretch receptors), the impact of descending cortical respiratory drive, and a cognitive component, of which the first two are involuntary but are the most important components [14]. The duration of voluntary apnea doubled after breathing a hyperoxic mixture or after prior hyperventilation [15]. On the other hand, breath-holding duration was reduced under hypoxemic and hypercapnic conditions [16, 17]. Thus, it is not surprising that the duration of breath-holding had a strong inverse correlation with the SB-CO2 test. Davidson et al. [18] reported higher PETCO2 values after breath-holding in subjects with prior carotid body resection for asthma compared with healthy volunteers, which suggests a large contribution of peripheral chemoreception. Feiner et al. [19] showed that peripheral, but not central, chemoreception makes the largest contribution to breath-holding duration, but the peripheral ventilatory response to hypercapnia was not evaluated. Our work demonstrates the contribution of peripheral sensitivity to carbon dioxide to the breath-hold duration. Importantly, we noted that increasing age has no effect on this pattern. The duration of breath-holding did not depend on age and did not differ between the groups, although there is evidence of potential changes in the respiratory system and respiratory biomechanics associated with aging [20]. Structural changes of the intercostal muscles and joints and of the costovertebral joints may accompany the aging process, but these changes are not necessarily present [21]. A reduction in the elastic properties of lung tissue also occurs with age [21], but this is often the result of comorbidity. The analysis of our results showed that the initial values of FEV1, vital capacity, tidal volume, and respiratory rate did not differ between age groups, indicating that respiratory biomechanics likely were not markedly altered with age in our study. This observation could explain the absence of differences in the duration of breath-holding after a deep inspiration. Existing works on the effect of age on the sensitivity of the peripheral chemoreflex report conflicting results, from no change in sensitivity in the elderly [12] to an increase [10] or decrease [11]. However, most researchers used a hypoxic test, and their studies had different designs (steady-state, progressive, or transient methods) and included different age groups. It should be noted that, unlike the trend for PaO2 to decrease with age, the exchange of carbon dioxide varies much less with age, with relatively unchanged PaCO2 [22]. The stability of PaCO2 with aging may explain the lack of influence of age on the duration of breath-holding and on the respiratory response to carbon dioxide. Martinez showed a decreased peripheral response to hypercapnia in elderly men (55 to 76 years old) compared with young men (25 to 38 years old), but there were limitations due to a small sample size [23]. Our findings indicate that there is no relationship between age and the sensitivity of peripheral chemoreceptors to carbon dioxide.
In our study, the average sensitivity of the peripheral chemoreflex was 0.326±0.107 L/min/mmHg with no influence of age, so our data agree with the results of other authors, who described values of 0.34±0.12 L/min/mmHg [8] and 0.28±0.04 L/min/mmHg [24] in healthy subjects. However, peripheral chemoreflex sensitivity to carbon dioxide may increase with advancing age, because aging is often accompanied by concomitant diseases that are known to alter the reflex regulation of the cardiorespiratory system and increase peripheral chemoreflex sensitivity [25]. Such diseases include chronic heart failure [1, 26], hypertension, chronic obstructive pulmonary disease, obstructive sleep apnea, and other conditions [27]. Thus, our findings support the thesis that biological age does not always equate with chronological age and that, in the absence of chronic disease, the peripheral chemoreflex remains intact. The pattern we observed in healthy people of different ages may therefore be slightly different in conditions that affect these factors (obesity, cardiac disease, respiratory diseases, etc.), and this requires further research. The correlation between SB-CO2 test results and subjects’ height observed in our study is consistent with that obtained by Chua and Coats [24], who also found a similar relationship, although it was not statistically significant; perhaps this is because the number of observations in our work was greater, which could influence the statistics. The results also indicate a positive correlation between the duration of breath-holding and vital lung capacity. The duration of voluntary apnea depends on lung volumes [28]: previous studies showed that lung volumes greatly influence breath-holding [29], and forced vital capacity was identified as a significant predictor of breath-hold duration [19]. ## 5. Conclusion A breath-holding test reflects the sensitivity of the peripheral chemoreflex to carbon dioxide in the healthy elderly. Increasing age alone does not alter the peripheral ventilatory response to hypercapnia. --- *Source: 1010289-2017-01-31.xml*
# Imaging Diagnosis of Splanchnic Venous Thrombosis

**Authors:** S. Rajesh; Amar Mukund; Ankur Arora
**Journal:** Gastroenterology Research and Practice (2015)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2015/101029

---

## Abstract

Splanchnic vein thrombosis (SVT) is a broad term that includes Budd-Chiari syndrome and occlusion of veins that constitute the portal venous system. Because of the common risk factors involved in the pathogenesis of these clinically distinct disorders, concurrent involvement of two different regions is quite common. In acute and subacute SVT, the symptoms may overlap with a variety of other abdominal emergencies, while in chronic SVT, the extent of portal hypertension and its attendant complications determine the clinical course. As a result, clinical diagnosis is often difficult and is frequently reliant on imaging. Tremendous improvements in vascular imaging in recent years have ensured that this once rare entity is being increasingly detected. Treatment of acute SVT requires immediate anticoagulation. Transcatheter thrombolysis or transjugular intrahepatic portosystemic shunt is used in the event of clinical deterioration. In cases with peritonitis, immediate laparotomy and bowel resection may be required for irreversible bowel ischemia. In chronic SVT, the underlying cause should be identified and treated. The imaging manifestations of the clinical syndromes resulting from SVT are comprehensively discussed here along with a brief review of the relevant clinical features and therapeutic approach.

---

## Body

## 1. Introduction

The splanchnic venous system includes the mesenteric, splenic, and hepatic beds, the first two serving as the major inflow for the third (Figure 1). Blood flowing through the intestines, spleen, and pancreas is collected by the superior mesenteric vein (SMV) and splenic vein (SV), which join to form the portal vein (PV). The stomach and part of the pancreas drain directly into the portal vein. At the porta hepatis, the PV divides into right and left branches that continue to their respective hepatic lobes, ultimately emptying into the hepatic sinusoids. Venous outflow from the liver is through the hepatic veins (HV), which drain into the inferior vena cava (IVC). Consequently, the term splanchnic vein thrombosis (SVT) includes occlusion of veins that form the portal venous system or the hepatic veins (Budd-Chiari syndrome) [1, 2]. Although portal and mesenteric vein thrombosis and Budd-Chiari syndrome are three distinct clinical entities, their etiologies are often shared and their clinical presentations may overlap. Moreover, simultaneous involvement of two different regions is fairly frequent owing to the common risk factors. Thus, it is only prudent to discuss them collectively. Once considered to be a rare entity, SVT is increasingly being detected, thanks mainly to the remarkable advancements in imaging technology and increased awareness among healthcare providers. The present review appraises the radiological manifestations of SVT and aims to underscore the importance of imaging in decision making and patient selection to improve therapy and outcome in this group of patients. Figure 1: Graphic illustration of the splanchnic venous system. RHV: right hepatic vein, MHV: middle hepatic vein, LHV: left hepatic vein, PV: portal vein, SMV: superior mesenteric vein, SV: splenic vein, LGV: left gastric vein, and IMV: inferior mesenteric vein.
## 2. Budd-Chiari Syndrome The term Budd-Chiari syndrome (BCS) refers to the clinical manifestations arising as a consequence of hepatic venous outflow tract obstruction at any level from the small hepatic veins to the cavoatrial junction, regardless of the mechanism of obstruction [3] (Figure 2). It follows that cardiac and pericardial diseases as well as sinusoidal obstruction syndrome are excluded from this definition [3, 4]. Figure 2: Gray-scale US image demonstrating a homogeneously hypoechoic and bulbous liver with chinked portal venous radicles (arrows) in a patient with fulminant BCS. ### 2.1. Etiology On the basis of etiology, BCS is divided into primary BCS (related to a primarily endoluminal venous disease, i.e., thrombosis or web) and secondary BCS (caused by infiltration or compression by a lesion outside the venous system, e.g., benign or malignant tumors, cysts, or abscesses) [4] (Box 1). The prevalence of this disease shows marked geographic variation, from being one of the most common causes of hospital admission for liver disease in Nepal to being a rare entity in western countries [5, 6]. Based on the level of obstruction, BCS has been classified into three types [7] (Box 2). In the past, the IVC was reported to be frequently obstructed in Asian patients and usually patent in western patients. However, this pattern has changed over time in India, where hepatic vein thrombosis now accounts for the majority of cases (59%) and obstruction of the terminal IVC accounts for a smaller proportion of cases (16%) [8].

Box 1: Causes of BCS.
- Primary BCS
  - Hypercoagulable states
    - Inherited
      - Antithrombin deficiency
      - Protein C, S deficiency
      - Heterozygous Factor V Leiden
      - Prothrombin mutation
    - Acquired
      - Myeloproliferative disorders
      - Paroxysmal nocturnal hemoglobinuria
      - Antiphospholipid syndrome
      - Cancer
      - Pregnancy
      - Use of oral contraceptives
  - Systemic causes
    - Connective tissue disease
    - Inflammatory bowel disease
    - Behcet disease
    - Sarcoidosis
    - Vasculitis
    - Cancer
- Secondary BCS
  - Benign or malignant hepatic tumors
  - Trauma
  - Hepatic abscess
  - Simple or complex hepatic cysts
- Idiopathic

Box 2: Classification of BCS based on the level of obstruction.
- (I) Obstruction of IVC ± hepatic veins
- (II) Obstruction of major hepatic veins
- (III) Obstruction of small centrilobular venules

### 2.2. Clinical Features Clinical presentation of the disease is highly variable and depends on the acuteness and extent of the hepatic venous outflow tract obstruction, ranging from complete absence of symptoms to fulminant hepatic failure, through acute, subacute, or chronic progressive development of symptoms over weeks to months [4, 9]. In cases of extensive and acute thrombosis of the veins, frequently encountered in western countries, the patient presents with abdominal pain and distension, jaundice, and hepatomegaly; the etiology in such cases is usually an underlying thrombotic disorder, intake of oral contraceptive pills, or pregnancy [4]. In Asian countries, on the other hand, membranous occlusion of the HV/IVC is more common [10]. Once considered to be congenital in origin, the membranous web is now widely accepted to be the result of an organized thrombus, with focal stenosis being a part of this pathological spectrum [11]. This might explain why the majority of Asian patients with BCS have a subacute to chronic course, characterized by insidious onset of portal hypertension, leg edema, gastrointestinal bleeding, and a nodular liver [12].
The course of manifestations in these patients can be steady or marked by exacerbations and remissions [9]. Since the changes in the liver parenchyma in BCS can be inhomogeneous, a single biopsy may be falsely negative [3]. Thus, biopsy is usually reserved for cases in which radiological findings are inconclusive, such as involvement of the small hepatic veins with patent large veins, although differentiation of this form from sinusoidal obstruction syndrome is not always possible [9]. Serial liver biopsies are also useful for assessing the severity of disease and determining whether it has progressed after therapeutic interventions. Early diagnosis of BCS is of critical importance for commencing appropriate therapy. Because of the nonspecific and variable clinical presentation and the fact that biopsy cannot be blindly relied upon, imaging assumes a vital role in the early identification of the disease and accurate assessment of its extent. ### 2.3. Imaging Findings Hepatic venous outflow tract obstruction increases the sinusoidal and portal pressures, leading to hepatic congestion, necrosis, fibrosis, and eventually florid cirrhosis [13]. Imaging findings at various stages of BCS reflect the progressive pathological changes occurring in the hepatic parenchyma and vasculature. Real-time ultrasound (US) coupled with Doppler is currently considered the investigation of choice for the initial evaluation of a patient suspected of having BCS and, in experienced hands, might be the only modality required to establish the diagnosis in the majority of cases [14]. It demonstrates the hepatic echotexture and morphological changes, the status of the HV and IVC, and evidence of intrahepatic collaterals; the presence of ascites and splenomegaly can be assessed at the same time. In addition, US is widely available and inexpensive and does not impart harmful radiation to the patient or the operator. However, its major limitations are the patient’s body habitus and operator expertise, which may preclude an adequate examination. Computed tomography (CT) and magnetic resonance imaging (MRI) have a role complementary to US and Doppler and serve mainly as problem-solving tools. Routine use of cross-sectional imaging in patients with BCS to rule out the development of hepatocellular carcinoma or for comprehensive assessment of the collateral circulation is debatable. Catheter IVC venography (cavography), once considered the standard of reference for evaluation of the HV and IVC, is no longer routinely used for diagnostic purposes because noninvasive imaging provides evidence for BCS in most patients. Cavography tends to overdiagnose HV thrombosis even when the failure to cannulate the HV might be due to technical failure. Moreover, it fails to provide an assessment of the extent of thrombosis in cases of IVC obstruction, which can be accurately assessed by MR venography [15, 16]. In addition, the entire extent of intrahepatic collaterals might not be picked up on cavography. Thus, it is reserved for patients in whom surgical or radiological intervention is contemplated. However, it remains the gold standard when the hemodynamic significance of a suspected IVC narrowing due to caudate lobe hypertrophy is to be estimated in postsurgical/transplant patients. The pressure gradient across the suspected segment of narrowing is measured, and a gradient of >3 mmHg is considered hemodynamically significant [17].
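As a hypothetical worked example of this criterion (the numbers are illustrative only, not drawn from the source): if the pressure measured in the IVC caudal to the suspected narrowing is 14 mmHg and the pressure cranial to it is 10 mmHg, the gradient is 14 − 10 = 4 mmHg, which exceeds the 3 mmHg cutoff and would therefore be regarded as hemodynamically significant.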
#### 2.3.1. Hepatic Parenchymal Changes

In the acute stage, congestive changes predominate, resulting in global enlargement of the liver [7]. On gray-scale US, the liver is typically enlarged and bulbous and appears homogeneously hypoechoic (Figure 2). However, altered regional echogenicity may be seen secondary to perfusion alterations and hemorrhage [7] (Figure 3). Figure 3 Gray-scale image from the US study of a patient with BCS due to obstruction of the common channel of the middle and the left hepatic veins (arrow) showing a more heterogeneous hepatic parenchymal echotexture. A collateral channel can be seen bridging the two hepatic veins proximal to the obstruction (interrupted arrow). On noncontrast-enhanced CT, the liver shows diffuse hypodensity [12] (Figure 4). After administration of intravenous contrast, a characteristic "flip-flop" pattern of enhancement is seen in the form of early homogeneous enhancement of the caudate lobe and central portion of the liver around the IVC and decreased enhancement peripherally (Figure 5). This partially reverses on the equilibrium phase images, with the periphery of the liver retaining contrast and showing patchy inhomogeneous enhancement while there is washout of contrast from the central portion of the liver [12]. These changes are attributed to acute tissue edema in the peripheral portions of the liver due to the combined effects of hepatic venous obstruction and diminished portal flow. On MRI, the peripheral liver parenchyma is of moderately low signal intensity on T1-weighted images and high signal intensity on T2-weighted images compared with the central portion, with decreased enhancement of the peripheral liver after gadolinium administration [15, 16]. Figure 4 Noncontrast-enhanced axial CT image showing a diffusely hypodense liver in this patient with acute thrombosis of all three hepatic veins. On careful inspection, the right and middle hepatic veins can be made out as mildly hyperdense structures (arrows) on the background of this hypodense liver. Ascites can also be seen on this section (asterisk). Figure 5 Axial contrast-enhanced CT (CECT) image acquired in the portal venous phase showing enhancement of the caudate lobe (asterisk) while the rest of the liver parenchyma in the periphery remains predominantly hypoenhancing. Thrombosed right and middle hepatic veins (white arrows) and IVC (black arrow) can also be seen. As the disease progresses, there is reversal of flow in the portal vein with development of intrahepatic collaterals, which permit decompression of the liver [18]. Thus, in subacute BCS, a mottled pattern of parenchymal enhancement is seen with no specific predilection for the centre or periphery of the liver (Figure 6). Figure 6 Coronal (a) and axial (b) portal venous phase CECT images showing thrombosed right hepatic vein (arrows) and part of the intrahepatic portion of the IVC (arrowheads) with mottled enhancement of the liver parenchyma and ascites. (a) (b) The caudate lobe has separate veins (which may not be affected by the disease process) that drain directly into the IVC at a level lower than the ostia of the main hepatic veins. This may result in compensatory caudate lobe hypertrophy, which can be seen in up to 75% of cases of BCS and serves as a useful indirect sign [19] (Figure 7). However, caudate hypertrophy is nonspecific and can be seen in many other cases of cirrhosis of varied etiologies. Figure 7 Axial CECT images from two different patients with chronic BCS demonstrating markedly hypertrophied caudate lobe (asterisk).
(a) (b) In the later stages of the disease, morphological changes start appearing in the liver in the form of surface nodularity and coarsened echotexture on US, with changes of portal hypertension (Figure 8). These changes result in decreased T1- and T2-weighted signal intensity on unenhanced MR imaging and delayed enhancement on contrast-enhanced studies [15, 16]. Attendant volume redistribution starts taking place in the liver, resulting in right lobe atrophy with hypertrophy of the left lobe. Figure 8 Axial images from the CECT scans of two different patients with chronic BCS demonstrating cirrhotic architecture of the liver in the form of irregular lobulated outlines and heterogeneous mottled hepatic parenchymal enhancement. Ascites (asterisks in (a)), splenomegaly (asterisk in (b)), and paraesophageal and perisplenic collaterals (arrows in (a) and (b), resp.) can also be seen. (a) (b) Due to focal loss of portal perfusion in patients with BCS, compensatory nodular hyperplasia can occur in areas of hepatic parenchyma that have an adequate blood supply, resulting in the formation of regenerative nodules [20–23]. They are usually multiple, with a typical diameter of between 0.5 and 4 cm [22]. The term large regenerative nodules (LRN) is preferred for these lesions rather than nodular regenerative hyperplasia (NRH), since NRH, by definition, implies that there should be no fibrosis interspersed between the nodules, while BCS at a later stage of the disease can result in fibrosis or frank cirrhosis [21, 22]. On multiphasic contrast-enhanced CT or MRI, LRN demonstrate marked and homogeneous enhancement on the arterial phase images and remain hyperattenuating to the surrounding hepatic parenchyma on portal venous phase images [22] (Figure 9). Because LRN are mainly composed of normal liver parenchyma, they are not well appreciated on unenhanced or equilibrium phase CT or on T2-weighted MR images [22]. They may appear bright on T1WI due to deposition of copper within some of these nodules; however, they do not contain fat, hemorrhage, or calcification [22, 23]. There is no evidence that LRN degenerate into malignancy. Although hepatocellular carcinoma (HCC) is considered extremely rare in BCS, it is important to differentiate LRN from HCC, since a misdiagnosis may deny a patient the possibility of liver transplant or subject them to unnecessarily aggressive treatment for HCC. HCC is usually hypointense to the liver on T1WI and hyperintense on T2WI, along with evidence of heterogeneity, encapsulation, and portal or hepatic venous invasion, none of which are seen in LRN. On multiphasic CT or MRI, HCC shows washout of contrast on the portal venous and equilibrium phase images, in contradistinction to LRN, which remain slightly hyperattenuating. On the hepatobiliary phase, HCC would appear hypointense, while LRN would retain contrast on account of being composed of predominantly normal or hyperplastic hepatocytes [21, 22]. Also, when HCC is encountered in a noncirrhotic liver, it is usually a solitary, large, heterogeneous mass, while LRN are almost always multiple, small, and homogeneously enhancing [24].
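The LRN-versus-HCC features contrasted above can be summarized compactly. The sketch below is illustrative only: the field names and the all-or-nothing rule are invented for this summary and do not constitute a validated diagnostic criterion.

```python
# Illustrative summary of the LRN-versus-HCC imaging features described above.
# Field names and the simple rule are invented; this is NOT a diagnostic tool.
from dataclasses import dataclass


@dataclass
class NoduleImagingFeatures:
    multiple_small_homogeneous: bool      # LRN: almost always multiple, small, homogeneous
    portal_venous_washout: bool           # HCC washes out; LRN stay slightly hyperattenuating
    hepatobiliary_phase_retention: bool   # LRN retain contrast; HCC appears hypointense
    heterogeneity_capsule_or_venous_invasion: bool  # features described for HCC, not LRN


def pattern_favours_lrn(f: NoduleImagingFeatures) -> bool:
    """True when the constellation matches the LRN description in the text."""
    return (f.multiple_small_homogeneous
            and not f.portal_venous_washout
            and f.hepatobiliary_phase_retention
            and not f.heterogeneity_capsule_or_venous_invasion)
```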
A marked increase in the number of LRN has been noticed following creation of a portosystemic shunt [20, 22] (Figure 9). Figure 9 Axial CECT images acquired in the arterial (a) and venous (b) phases showing an arterial phase enhancing nodule (arrow in (a)) in the liver which retains contrast in the venous phase (arrow in (b)), consistent with a regenerative nodule in this patient who had undergone direct intrahepatic portocaval shunt (DIPS) for BCS. (a) (b)

#### 2.3.2. Vascular Changes

The HV may be normal or reduced in caliber and filled with intraluminal anechoic or echogenic thrombus in the acute phase [7, 12] (Figure 10). The HV walls may appear thickened and echogenic. Not uncommonly, there may be partial or complete nonvisualization of the HV due to the markedly heterogeneous hepatic parenchyma and the altered caliber and echogenicity of the HV [7, 12, 25]. Alternatively, there can be stenosis of the HV, most commonly at or near the ostia, with proximal dilatation [7] (Figures 11 and 12). In cases of chronic thrombosis, the HV may be reduced to an echogenic cord-like structure [26] (Figure 13). Figure 10 Gray-scale US images from two different patients demonstrating echogenic thrombus within the right hepatic vein (arrows). (a) (b) Figure 11 Gray-scale US image demonstrating stenosis at the ostium of the right hepatic vein (black arrow) with multiple intrahepatic collaterals (white arrows) and heterogeneous hepatic echotexture. Figure 12 Gray-scale US images demonstrating stenosis at the ostium of the right hepatic vein (long white arrow in (a)) and the common channel of the middle and left hepatic veins (arrow in (b)) with multiple intrahepatic collaterals (small white arrows in (a)). (a) (b) Figure 13 Gray-scale US image showing the distal portion of the right hepatic vein (marked by calipers) reduced to a cord-like structure due to chronic thrombosis. The normal blood flow in the HV is phasic in response to the cardiac cycle (Figure 14). In BCS, flow in the HV changes from phasic to absent, continuous, turbulent, or reversed [7] (Figure 15). Turbulent or high flow is usually seen at or near the site of stenosis. Figure 14 Spectral Doppler image after hepatic vein stenting demonstrates restoration of the normal triphasic waveform (inverted "M" shape) of the right hepatic vein in a patient with BCS. Arrow denotes the stent in the right hepatic vein. Figure 15 Spectral Doppler image in a patient with BCS shows a monophasic waveform in the hepatic vein. The IVC can be obstructed in its suprahepatic or intrahepatic portion or both. Suprahepatic occlusion is usually due to webs or short segment stenosis, while intrahepatic IVC obstruction is commonly secondary to compression caused by an enlarged caudate lobe [7, 27] (Figure 16). Long segment narrowing of the intrahepatic IVC without associated caudate lobe enlargement, or focal narrowing due to a web or a thrombus, can also be observed [7] (Figures 6(a) and 17). On US, a membranous web usually appears as an echogenic linear area within the lumen of the IVC, best seen in deep inspiration (Figure 18(a)). On conventional venography or CT/MRI angiography, webs appear as dome-shaped linear filling defects (Figures 18(b) and 19). Similarly, a hepatic venous web appears as a linear hypodense intraluminal structure with or without proximal dilatation (Figure 20). Short segment stenosis is seen as an area of narrowing with proximal dilatation.
In partial IVC obstruction or extrinsic IVC compression, the normally phasic flow in the IVC can change to a continuous waveform (the so-called "pseudoportal" Doppler signal) [28]. In later stages, chronic thrombosis of the IVC can evolve into calcification [29] (Figure 21). Establishing the patency of the IVC is important before deciding upon surgical management, should the need arise: if the IVC is patent, a portocaval or mesocaval shunt can be created, whereas an occluded IVC would require a mesoatrial shunt. Figure 16 Coronal CECT (a) and gray-scale US (b) images demonstrating compression of the intrahepatic IVC (arrows) caused by hypertrophy of the caudate lobe. (a) (b) Figure 17 Gray-scale US image demonstrating echogenic thrombus in the IVC (arrow). Figure 18 Gray-scale US (a) and coronal MIP (b) images demonstrating an IVC web (a sequel of chronic focal thrombosis), which appears as a linear echogenic structure on US (arrow in (a)), while on CT, it appears as an intraluminal hypodense linear structure (arrow in (b)). (a) (b) Figure 19 Coronal CECT image (a) showing an IVC web (arrow). IVC angiogram (b) of the same patient showing a jet of contrast (arrow) entering the right atrium, signifying the obstruction caused by the web. Postangioplasty image (c) shows resolution of the stenosis. (a) (b) (c) Figure 20 Axial CECT image demonstrating a web in the left hepatic vein (arrow) with heterogeneous hepatic parenchymal enhancement. Figure 21 Coronal CECT images demonstrating mural calcification involving the IVC (long thin black arrows in (a) and (b)) secondary to chronic thrombosis. Multiple superficial abdominal wall and paraesophageal collaterals (white arrows and short thick black arrow, resp.) along with a prominent accessory vein (arrowhead) can also be seen. (a) (b) Due to the combined effects of decreased portal blood flow in BCS and the underlying thrombophilia, simultaneous portal vein thrombosis (PVT) can occur in up to 15% of cases [30]. Portal blood flow on Doppler may be absent, slowed, or reversed [31]. Assessment of PV patency is crucial, as a thrombosed portal vein may preclude creation of a portosystemic shunt to decompress the liver in such patients. Caudate lobe outflow serves as a drainage pathway for intrahepatic venovenous collaterals; thus, the caudate vein may be dilated in BCS. In the appropriate clinical setting, a caudate lobe vein > 3 mm has been reported to be strongly suggestive of BCS [32] (Figure 22). Figure 22 Prominent caudate lobe vein (marked by calipers; measuring 7 mm) in the setting of BCS. On CT, the thrombosed HV are hypoattenuating or not visualized in the acute phase, and the IVC is compressed by the hypertrophied caudate lobe [30] (Figures 23 and 16). Ascites and splenomegaly are commonly found. T2*-weighted gradient-recalled echo sequences can demonstrate absence of flow in the HV and IVC. However, postcontrast T1-weighted images are ideal to reveal the venous occlusion. Figure 23 Thrombosed middle and left hepatic veins appearing as hypodense nonenhancing structures (arrows) on a background of heterogeneous liver parenchyma and ascites (asterisk). One of the most specific signs of chronic BCS is the visualization of intrahepatic "comma-shaped" bridging venovenous collaterals, which communicate between an occluded and a nonoccluded HV or the caudate lobe vein and show continuous monophasic flow [12] (Figures 24–27). These have been noted in more than 80% of cases of BCS [33].
A "spider web" pattern of intrahepatic collaterals can also sometimes be seen, signifying multiple intrahepatic communications between the hepatic veins (Figure 28). In addition, intrahepatic vessels communicating with a systemic vein through surface/subcapsular collaterals can also be observed. In cases of IVC obstruction, extrahepatic collateral channels, including abdominal wall varices, can develop to bypass the occluded segment [34] (Figure 29). Cho et al. [35] have classified the types of collaterals that can be seen in BCS (Box 3).

Box 3: Different types of collateral pathways described in association with BCS (Figures 24–31).
(1) Intrahepatic collaterals
(2) Extrahepatic collaterals
  (I) Inferior phrenic-pericardiophrenic collaterals
  (II) Superficial abdominal wall collaterals
  (III) Left renal-hemiazygous pathway
  (IV) Vertebro-lumbar azygous pathway

Figure 24 Gray-scale US images demonstrating thrombosed distal portion of the right hepatic vein (arrow in (a)) with a typical comma-shaped venovenous collateral (arrow in (b)). (a) (b) Figure 25 Other examples of comma-shaped collaterals (arrows) on US. (a) (b) Figure 26 Axial CECT images from four different patients demonstrating comma-shaped intrahepatic collaterals (arrows) with varying degrees of patency. (a) (b) (c) (d) Figure 27 Secondary BCS in two different patients. (a) Axial maximum-intensity-projection (MIP) CECT image in a patient with a past history of blunt trauma to the abdomen demonstrating a liver laceration (arrows) which had caused thrombosis of the middle hepatic vein, with a resultant comma-shaped intrahepatic venovenous collateral (arrowheads) between the left hepatic vein and the remnant middle hepatic vein. (b) Axial MIP image from the CECT scan of a young woman with a hydatid cyst of the liver (asterisk) causing thrombosis of the right hepatic vein and formation of an intrahepatic collateral (arrowheads) between the middle and right hepatic veins. (a) (b) Figure 28 Spider web pattern of collaterals in BCS on catheter angiography. Figure 29 Axial (a) and coronal (b) MIP images showing multiple abdominal wall collaterals in a patient with IVC thrombus. (a) (b) Figure 30 Angiogram performed via a catheter inserted in the left hepatic vein demonstrates drainage through the inferior phrenic vein (vertical arrow in (a)) and a pericardiophrenic collateral (horizontal arrow), with delayed opacification of the intercostal veins as well (vertical arrows in (b)). (a) (b) Figure 31 IVC angiogram demonstrating opacification of the intervertebral venous plexus and hemiazygous vein (arrow). Due to the highly variable and nonspecific presentation of the disease, a diagnosis of BCS must be considered in all patients with acute or chronic liver disease once the common causes of liver disease have been excluded. Thus, assessment of the patency of the HV and IVC should be part of the routine protocol in patients with liver disease, especially in endemic regions.

### 2.4. Treatment

In patients not responding to anticoagulation and nutritional therapy, radiological and surgical interventions may be contemplated, including placement of portosystemic shunts and liver transplantation. In patients with short segment occlusion of the HV or IVC, balloon angioplasty or stent insertion can be performed [3, 4, 12, 33, 36]. Imaging follow-up at routine intervals is necessary in all these cases to determine the long-term results of intervention.
US examination coupled with Doppler is usually adequate to evaluate the patency of the native vessels or stents after intervention (Figure 14). Presence of ascites and any associated liver parenchymal changes can also be simultaneously assessed. However, cross-sectional imaging or catheter angiography may be required in cases of equivocal findings on Doppler or when the symptoms for which the intervention was performed have recurred in spite of an apparently normal Doppler study.
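As a compact restatement of the planning rule noted earlier in the vascular-changes discussion (shunt choice hinges on IVC patency), the sketch below encodes that rule; the function name and return values are illustrative assumptions, and this is not a clinical decision tool.

```python
# Illustrative restatement of the rule stated earlier in the text: a patent IVC
# allows a portocaval or mesocaval shunt, whereas an occluded IVC would require
# a mesoatrial shunt. Names and structure are invented for illustration.
from typing import List


def candidate_shunts(ivc_patent: bool) -> List[str]:
    """Return the shunt options mentioned in the text for a given IVC status."""
    if ivc_patent:
        return ["portocaval shunt", "mesocaval shunt"]
    return ["mesoatrial shunt"]


if __name__ == "__main__":
    print(candidate_shunts(ivc_patent=True))   # ['portocaval shunt', 'mesocaval shunt']
    print(candidate_shunts(ivc_patent=False))  # ['mesoatrial shunt']
```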
## 3. Portal Vein Thrombosis

Obstruction of the PV or its branches may be secondary to thrombosis or due to encasement or infiltration by a tumor (Box 4). It can present acutely with sudden onset of right upper quadrant pain, nausea, and/or fever.
However, in most patients, PVT develops slowly and silently, with patients presenting with vague abdominal pain and features of portal hypertension. It is often not discovered until gastrointestinal hemorrhage develops or until the thrombosis is detected during routine surveillance for a known underlying pathologic condition. In third world countries, it accounts for up to 30% and 75% of cases of portal hypertension in adults and children, respectively [37]. Thus, from a clinical standpoint, PVT can be divided into acute and chronic forms [38]. PVT occurring in children and in patients with cirrhosis can be considered separately, as their features and management differ from those of other patient groups [9].
Box 4: Causes of portomesenteric venous thrombosis.
- Cirrhosis
- Abdominal malignancies (hepatic, pancreatic, etc.)
- Hypercoagulable states (both inherited and acquired; see Box 1)
- Myeloproliferative disorders
- Local inflammation: umbilical vein catheterization, appendicitis, diverticulitis, pancreatitis, cholecystitis, duodenal ulcer, inflammatory bowel disease, tubercular lymphadenitis
- Traumatic/iatrogenic: splenectomy, gastrectomy, colectomy, cholecystectomy; liver transplantation; abdominal trauma; surgical/radiological porto-systemic shunting
### 3.1. Etiology
Several etiological causes, either of local or systemic origin, might be responsible for PVT development (Box 4), although more than one factor is often identified [39]. A local risk factor can be identified in up to 30% of cases of PVT, with cirrhosis and malignant tumors accounting for the majority of them [9, 39–42]. In the rest of the patients, the most common local factor for PVT is an inflammatory focus in the abdomen [38, 43, 44]. However, the presence of cirrhosis, malignancy, or another intra-abdominal cause such as inflammation does not exclude a systemic risk factor, and the two may often coexist [9]. Local factors are more often recognized at the acute stage of PVT than at the chronic stage [38]. Systemic risk factors are similar in prevalence in patients with acute and chronic PVT. An inherited or acquired hypercoagulable state is the usual culprit [39, 45–48].
### 3.2. Acute Portal Vein Thrombosis
Acute formation of a thrombus within the portal vein can be complete or eccentric, leaving a peripheral circulating lumen. The thrombus can also involve the mesenteric veins and/or the splenic vein. In cases of complete acute thrombosis, the patient usually presents with abdominal pain of sudden onset. Peritoneal signs, however, are usually absent except when an inflammatory focus is the cause of PVT or when PVT is complicated by intestinal ischemia. Acute PVT associated with an intra-abdominal focus of infection is frequently referred to as acute pylephlebitis. Clinical features of pylephlebitis include a high, spiking fever with chills, a painful liver, and sometimes shock. Small liver abscesses are common in this setting. Depending on its extent, PVT can be classified into four categories [49]: (1) confined to the PV beyond the confluence of the SV; (2) extended to the SMV, but with patent mesenteric vessels; (3) extended to the whole splanchnic venous system, but with large collaterals; or (4) with only fine collaterals. This classification is useful to evaluate a patient’s operability and clinical outcome. Another classification proposed by Yerdel et al.
[50] is also widely accepted (Figure 32). Figure 32 Classification of PVT proposed by Yerdel et al. Liver function is usually preserved in patients with acute PVT unless the patient has an underlying liver disease such as cirrhosis. There are two reasons for this: (1) a compensatory increase in hepatic arterial blood flow (the hepatic artery buffer response) and (2) rapid development of a collateral circulation from pre-existing veins in the porta hepatis (venous rescue) [51–54]. The hepatic artery buffer response manifests on imaging as increased hepatic parenchymal enhancement of the involved segment in the arterial phase, with attendant hypertrophy of the adjoining artery. Formation of collaterals begins within a few days of portal vein obstruction and is complete within 3 to 5 weeks [53, 54]. As long as there is no extension of the thrombus to the mesenteric venous arcades, all manifestations of acute PVT are completely reversible, either by recanalization or by development of a cavernoma [9]. It is clear from the above discussion that PVT is an ongoing process. Hence, a clear distinction between acute and chronic thrombus cannot always be made due to a considerable overlap between the two clinical situations. Formation of a portal cavernoma has been suggested as a marker of chronicity, but this has been debated [55, 56].
### 3.3. Imaging Diagnosis
Imaging diagnosis of acute PVT can be readily made using noninvasive methods.
#### 3.3.1. US and Doppler
Ultrasound is a reliable noninvasive technique with a high degree of accuracy for the detection of PVT and is the investigation of choice. It has a reported sensitivity and specificity ranging between 60% and 100% [57]. Gray-scale ultrasound usually demonstrates hyperechoic material within the vessel lumen with occasional distension of the vein [39, 58, 59] (Figure 33(a)). A recently formed thrombus is often virtually anechoic; hence, Doppler ultrasound is required for its demonstration. Doppler imaging will show absence of flow in part or all of the lumen [60]. Attendant hypertrophy of the hepatic artery can also be demonstrated (Figure 33(b)). Figure 33 Gray-scale US image showing thrombosed left portal vein (arrow in (a)). On application of colour Doppler (b), hypertrophy of the accompanying branch of the hepatic artery can be seen (black arrow in (b)) with opening up of periportal collateral venous channels (white arrow). (a) (b) Endoscopic ultrasound (EUS) may have sensitivity and specificity comparable to colour Doppler (81% and 93%, resp.) in the diagnosis of PVT and appears to be more accurate than US or CT in the assessment of portal invasion by tumours [61–63]. However, the intrahepatic portion of the portal vein is difficult to visualize optimally with EUS, which remains a drawback. Recently, contrast-enhanced ultrasound (CEUS) has also been utilized to differentiate benign and malignant PVT using independent criteria [64, 65] (Figure 34). Use of pulsatile flow in a portal vein thrombus as the criterion for diagnosing malignant PVT yielded a sensitivity of 82.5% and a specificity of 100%, whereas positive enhancement of the PVT itself as a criterion for diagnosing malignancy yielded an overall sensitivity and specificity of 100% each [64].
In another study, CEUS could conclusively differentiate between benign and malignant PVT in 37 of 38 patients (97% sensitivity) [65]. Figure 34 Side-by-side contrast-enhanced US (a) and gray-scale image (b) demonstrating absence of enhancement of the portal vein thrombus in the arterial phase (arrow in (a)), signifying the benign nature of the thrombus. (a) (b)
#### 3.3.2. CT
A CT scan without contrast can show hyperattenuating material in the PV [66–68] (Figure 35(a)). After injection of contrast agent, lack of luminal enhancement is seen (Figure 35(b)). In addition, increased hepatic parenchymal enhancement in the arterial phase, which becomes isodense to the liver in the portal venous phase, is common and is described as a transient hepatic enhancement difference [68–70] (Figures 36 and 37). Rim enhancement of the involved vessel may be noted due to flow in the dilated vasa vasorum or thrombophlebitis [71] (Figure 38). In contrast to a bland thrombus, which is seen as a low-density, nonenhancing defect within the portal veins, a tumour thrombus enhances following contrast administration [72]. For the assessment of thrombus extension within the portal venous system as well as into the mesenteric veins, CT or MR angiography is more sensitive than Doppler sonography, because the mesenteric veins are more difficult to visualize with ultrasound [73]. Changes in the bowel wall (described later) are also better appreciated on cross-sectional imaging than on US. Figure 35 Axial NCCT (a) and CECT (b) images demonstrating mildly hyperdense thrombus occluding the main portal vein (arrows). Corresponding images at a caudal level in the same patient showing hyperdense thrombus in the SMV with associated fat stranding in the adjoining mesentery. (a) (b) (c) (d) Figure 36 Axial CECT images obtained in the arterial (a) and venous (b) phases showing an abscess in the left lobe (asterisk) which had caused acute thrombosis of the left portal vein (pylephlebitis). The associated hepatic artery buffer response is seen as increased enhancement of the left hepatic lobe in the arterial phase (arrows in (a)), which becomes essentially isodense in the portal venous phase. (a) (b) Figure 37 Coronal oblique CECT image of a patient with acute necrotizing pancreatitis demonstrates a thrombosed splenic vein (thick white arrows) and a segmental branch of the right portal vein (thin white arrow) with hepatic artery buffer response in the form of differential hyperenhancement of the affected liver segment (black arrows). Figure 38 Coronal oblique CECT image demonstrating thrombosed portal vein as well as SMV (arrows) with rim enhancement of their walls.
#### 3.3.3. MRI
MRI is equally sensitive in the detection of PVT. On spin-echo MR images, a recent clot appears isointense to hyperintense on T1-weighted images and usually has a more intense signal on T2-weighted images, while older clots appear hyperintense only on T2-weighted images [51] (Figure 39). Tumor thrombi can be differentiated from bland thrombi because they appear more hyperintense on T2-weighted images, demonstrate diffusion restriction, and enhance with gadolinium (Figures 40 and 41). Gradient-echo MR might help to better evaluate any equivocal findings on spin-echo MR images [51]. Contrast-enhanced MR angiography (CE-MRI) is superior to Doppler US in detecting partial thrombosis and occlusion of the main portal venous vessels [57].
It also identifies portosplenic collaterals more adequately than colour Doppler. Figure 39 Axial T2-weighted MR image demonstrating mildly hyperintense thrombus (arrow) in the right portal vein. Figure 40 (a) Axial T2-weighted fat-saturated image in a patient with liver cirrhosis and multifocal hepatocellular carcinoma showing occlusive, heterogeneously hyperintense tumor thrombus (asterisk and arrows) expanding the right portal vein. It shows diffusion restriction (asterisk and arrows in (b)). One of the tumoral masses can also be seen on this image (thick arrow). (a) (b) Figure 41 Axial CEMRI images obtained in the arterial (a) and venous (b) phases showing a lobulated lesion with arterial phase enhancement (asterisk in (a)) and washout of contrast in the venous phase. An associated enhancing right portal vein tumor thrombus (arrows) is present. (a) (b)
### 3.4. Treatment
The goal of treatment in acute PVT is recanalization of the thrombosed vein using anticoagulation and thrombolysis (either transcatheter or surgical) to prevent the development of portal hypertension and intestinal ischemia. When local inflammation is the underlying cause of the PVT, appropriate antibiotic therapy is warranted with correction of the causal factors, if needed [9].
### 3.5. Chronic Portal Vein Thrombosis
When acute PVT is asymptomatic and goes undetected, patients present later in life and are diagnosed either incidentally on imaging done for unrelated issues or when investigations for portal hypertension-related complications are carried out. In patients with chronic PVT, the actual thrombus is commonly not visualized. Rather, the obstructed portal vein is replaced by a network of portoportal collateral veins bypassing the area of occlusion (portal cavernoma) [54]. However, these collaterals are insufficient to normalize hepatopetal blood flow, and portal hypertension eventually develops [74]. The development of a collateral circulation, with its attendant risk of variceal hemorrhage, is responsible for most of the complications and is the most common manifestation of PV obstruction [74]. Bleeding is generally well tolerated, and bleed-related mortality in patients with PVT is much lower than in patients with cirrhosis, probably due to preserved liver function and because the patients are usually younger [44, 75–80]. The gastroesophageal varices are usually large, and gastric varices are particularly frequent, being seen in 30–40% of patients [81]. Ectopic varices are significantly more frequent in patients with chronic PVT than in patients with cirrhosis and occur commonly in the duodenum, anorectal region, and gallbladder bed [82–84]. Collaterals can also develop along the gastroepiploic pathway (Figure 42). Other sequelae of the subsequent portal hypertension, such as ascites, are less frequent. Figure 42 Axial MIP image showing a severely attenuated and partially calcified retropancreatic splenic vein (interrupted arrows) resulting in formation of a prominent gastroepiploic collateral channel (arrowheads) between the SMV and the remnant splenic vein at the splenic hilum (solid arrow) along the greater curvature of the stomach. Asterisk denotes the gastric lumen.
### 3.6. Imaging Features and Diagnosis
#### 3.6.1. US and Doppler
Portal cavernoma produces a distinctive tangle of tortuous vessels in the porta hepatis which can be easily demonstrated on US and Doppler [85] (Figure 43). Gallbladder wall varices can also be seen and should not be confused with acute cholecystitis.
For the diagnosis of chronic PVT, Doppler USG has a sensitivity and specificity above 95% and should be the initial imaging investigation of choice in these patients [86, 87]. Figure 43 Gray-scale US image (a) showing replacement of the main portal vein by an ill-defined echogenic area containing multiple subtle anechoic tubular structures. On application of colour Doppler (b), turbulent flow can be seen within these anechoic structures, consistent with a portal cavernoma. (a) (b)
#### 3.6.2. CT and MRI
Cross-sectional imaging can assess the true extent of the periportal collaterals as well as associated manifestations of chronic PVT such as splenomegaly, portosystemic collaterals, and shunts in relation to the portal venous system [68, 88]. They also provide an anatomical road map prior to shunt surgery [87]. In the absence of cirrhosis, there might be an enlarged caudate lobe, together with an atrophic left lateral segment or right lobe of the liver and a hypertrophied hepatic artery [89, 90]. Typically, the umbilical vein is not dilated, as it connects to the left portal vein branch downstream of the obstruction [9].
### 3.7. Portal Hypertensive Biliopathy/Portal Cavernoma Cholangiopathy
Periportal collaterals can produce compression and deformation of the biliary tract (both extra- and intrahepatic) and gall bladder wall, resulting in the so-called portal hypertensive biliopathy [91, 92] (Figure 44), also called portal cavernoma cholangiopathy. These collateral veins are caused by reopening of the two preformed venous systems near the extrahepatic bile ducts: the epicholedochal (ECD) venous plexus of Saint [93] and the paracholedochal (PACD) veins of Petren [94]. The ECD plexus of Saint forms a mesh on the surface of the common bile duct (CBD), while the PACD venous plexus of Petren runs parallel to the CBD. Engorgement of these collaterals can cause compressive and ischemic changes in the biliary tree, manifesting as indentations, strictures, intrahepatic biliary radicle dilatation, and intraductal lithiasis (Figures 45–47). Dilatation of the epicholedochal veins results in thickened and enhancing bile duct walls on cross-sectional images and may simulate a mass (pseudocholangiocarcinoma sign) [91] (Figure 48). The left hepatic duct is involved more commonly (38–100%) and more severely [87]. Portal biliopathy usually remains asymptomatic (62–95%) [87]. Common symptoms are jaundice, biliary colic, and recurrent cholangitis and are seen with longstanding disease and the presence of stones [95–99]. Various sequelae such as choledocholithiasis, cholangitis, and secondary biliary cirrhosis can develop in longstanding disease [87]. MRCP is the first line of investigation [100]. ERCP is only recommended if a therapeutic intervention is contemplated [100]. MRCP is also helpful in differentiating choledochal varices from stones. Endoscopic ultrasonography may also show the characteristic lesions of portal biliopathy [101, 102]; however, it is not recommended as a part of the routine work-up. Figure 44 Graphic illustration demonstrating opening up of epi- and paracholedochal venous collaterals in chronic PVT causing portal biliopathy. Figure 45 Coronal oblique CECT image (a) showing multiple paracholedochal collaterals (solid black arrows) causing extrinsic compression over the CBD (interrupted arrow). (b) 2D MRCP image of the same patient demonstrating undulating margins of the CBD (arrow) due to the compression.
(a) (b) Figure 46 (a) Thick-slab 3D MRCP image of a patient with portal biliopathy demonstrating extrinsic vascular impression over the CBD by the paracholedochal collaterals (solid arrows). The distal CBD is narrowed by these collaterals with resultant upstream biliary dilatation. Undulating margins of the biliary system can also be seen (interrupted arrow) with a grossly distended gall bladder. (b) 3D MRCP image from another patient showing a wavy contour of the mid- and distal CBD due to portal biliopathy with resultant narrowing and gross bilobar biliary dilatation. (a) (b) Figure 47 Coronal oblique CECT image showing chronic, partially calcified, occlusive thrombus involving the main portal vein (black arrow) with multiple tortuous periportal collateral channels (solid white arrows). The splenic vein is also partially thrombosed (asterisk). Gall bladder calculi (interrupted arrow) and ascites can also be seen. Figure 48 Axial CECT image of a patient with EHPVO showing multiple tiny paracholedochal collaterals appearing as continuous enhancement of one of the biliary radicles in the right hepatic lobe (arrows), mimicking cholangiocarcinoma (pseudocholangiocarcinoma sign). A splenic infarct is also seen due to associated splenic vein thrombosis (interrupted arrow), along with ascites (asterisk).
### 3.8. Treatment
Therapy for chronic PVT revolves around management of the complications of portal hypertension, including gastrointestinal bleeding, hypersplenism, and ascites [9]. Prevention of extension of thrombosis and treatment of portal biliopathy are other facets of treatment [9].
### 3.9. Extrahepatic Portal Venous Obstruction
Extrahepatic portal venous obstruction (EHPVO) is a distinct clinical entity characterized by obstruction of the extrahepatic PV, with or without involvement of the intrahepatic PV branches, in the setting of well-preserved liver function. It does not include isolated thrombosis of the SV or SMV [87, 100]. PVT seen in cirrhosis or HCC usually involves the intrahepatic PV radicles and is not associated with portal cavernoma formation or development of portal hypertension, both of which are integral to the definition of EHPVO [87]. It is primarily a childhood disorder but can present at any age. Patients usually present with symptoms or complications of secondary portal hypertension, including variceal bleeding, ascites, and features of hypersplenism. Jaundice can develop due to portal biliopathy but is usually not severe [87].
### 3.10. Treatment
The therapeutic approach is primarily focused on management of an acute episode of variceal bleeding followed by secondary prophylaxis [87]. Other issues such as hypersplenism, growth retardation, portal biliopathy, and minimal hepatic encephalopathy need to be individualized depending on the age of presentation, the site and nature of obstruction, and the clinical manifestations [87].
### 3.11. Portal Vein Thrombosis in Patients with Cirrhosis
PVT is most common in patients with preexisting cirrhosis. The prevalence of PVT increases with the severity of the cirrhosis, being less than 1% in patients with compensated cirrhosis [103], but 8%–25% in candidates for liver transplantation [104]. In patients with cirrhosis, portal venous obstruction is commonly related to invasion by hepatocellular carcinoma [105].
Neoplastic obstruction should always be considered, especially when the portal vein is larger than 23 mm in diameter, when the thrombus demonstrates arterial phase enhancement (known as the threads-and-streaks pattern of enhancement) [70, 105] (Figure 49), when pulsatile flow is seen on Doppler ultrasound, and when serum alpha-fetoprotein levels are increased [106]. Figure 49 Axial (a) and coronal (b) MIP images of a patient with liver cirrhosis and multifocal hepatocellular carcinoma demonstrating multiple thin streaks of arterial phase enhancement within the main portal vein (arrows in (b)) as well as its intrahepatic branches (arrows in (a)), consistent with tumor thrombus (threads-and-streaks sign). (a) (b)
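The red flags for a neoplastic (tumor) thrombus listed above amount to a short checklist. The sketch below is purely illustrative and is not a validated decision rule: the data structure and function names are ours, and the only thresholds used are the ones quoted in the text (portal vein diameter greater than 23 mm, arterial phase enhancement of the thrombus, pulsatile Doppler flow, and raised serum alpha-fetoprotein).

```python
# Illustrative checklist only; not a clinical decision rule. The thresholds are
# those quoted in the text above, and all names are hypothetical.
from dataclasses import dataclass


@dataclass
class PVTFindings:
    portal_vein_diameter_mm: float   # caliber of the thrombosed portal vein
    arterial_enhancement: bool       # "threads-and-streaks" pattern on CT/MRI
    pulsatile_doppler_flow: bool     # pulsatile flow within the thrombus
    afp_raised: bool                 # serum alpha-fetoprotein above the lab cutoff


def tumor_thrombus_red_flags(f: PVTFindings) -> list:
    """Return the red flags (as listed in the text) present in a given case."""
    flags = []
    if f.portal_vein_diameter_mm > 23:
        flags.append("portal vein diameter > 23 mm")
    if f.arterial_enhancement:
        flags.append("arterial phase enhancement (threads-and-streaks)")
    if f.pulsatile_doppler_flow:
        flags.append("pulsatile flow on Doppler")
    if f.afp_raised:
        flags.append("raised serum alpha-fetoprotein")
    return flags


if __name__ == "__main__":
    case = PVTFindings(25.0, True, False, True)
    found = tumor_thrombus_red_flags(case)
    print(f"{len(found)} red flag(s): " + ("; ".join(found) or "none"))
```

The more of these features that coexist, the stronger the suspicion of tumor thrombus; none of them replaces the imaging assessment described above.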
## 4. Mesenteric Vein Thrombosis
Although arterial causes of acute mesenteric ischemia are far more common than venous causes, venous thrombosis still accounts for about 5%–20% of cases of mesenteric ischemia and remains an important cause of acute bowel infarction [107–110]. Mesenteric venous thrombosis (MVT) is most often the result of thrombosis of the SMV [111]. Owing to its nonspecific clinical presentation, imaging plays a critical role in the early diagnosis of MVT. With improvements in contrast and spatial resolution in both CT and MRI, bowel wall abnormalities resulting from impaired venous drainage can be assessed accurately while the mesenteric arterial circulation is correctly depicted.
### 4.1. Clinical Features
Patients with acute MVT usually present with abdominal pain out of proportion to the physical findings, nausea, vomiting, and constipation, with or without bloody diarrhea [110]. Abdominal symptoms may then gradually worsen with the development of peritonitis, which indicates intestinal infarction and can be seen in one-third to two-thirds of patients with acute MVT [112]. Abdominal distension can be present in up to 50% of cases [110]. Patients with chronic MVT are often asymptomatic due to extensive venous collateralization and are unlikely to develop intestinal infarction. Complications such as variceal bleeding can occur in late stages secondary to portal hypertension. Weight loss, food avoidance, vague postprandial abdominal pain, or distention may be present. The pain usually occurs within the first hour after eating, diminishing over the next 1-2 hours. Chronic thrombosis of the portomesenteric vasculature is usually detected as an incidental finding during evaluation of other abdominal pathologic conditions, such as portal hypertension, malignancy, or chronic pancreatitis [110].
### 4.2. Classification of MVT
MVT is classified on the basis of etiology into either primary or secondary [111]. It is considered primary, or idiopathic, when no predisposing factor can be found. Due to an increased awareness of predisposing disorders and improvements in imaging technology, the incidence of idiopathic MVT continues to decline [113, 114]. Patients with a predisposing condition such as prothrombotic and myeloproliferative disorders, neoplasms, diverse inflammatory conditions, recent surgery, portal hypertension, and miscellaneous causes such as oral contraceptives or pregnancy are said to have secondary MVT (Box 4).
### 4.3. Anatomy of the Mesenteric Venous System
Multiple small veins (venae rectae) originate from the bowel wall and join to form venous arcades. The small bowel and the proximal colon as far as the splenic flexure are drained by these venous arcades through the pancreaticoduodenal, jejunal and ileal, ileocolic, right colic, and middle colic veins. The confluence of these veins forms the SMV. The inferior mesenteric vein (IMV) can drain either directly into the SV, into the SMV, or into the angle of the splenoportal confluence. It drains the splenic flexure, descending colon, sigmoid colon, and part of the rectum.
### 4.4. Pathophysiology of Bowel Ischemia
The location and extent of venous thrombosis and the status of the collateral circulation are important predictors of bowel ischemia and subsequent infarction.
It has been demonstrated that patients with thrombosis of the venae rectae and venous arcades are at greater risk of developing bowel abnormalities than those with thrombosis confined to the SMV close to the splenoportal confluence [115]. The etiology of the thrombosis often determines its location. Intra-abdominal inflammatory and infective conditions such as pancreatitis affect the larger veins first, while hematological disorders involve the smaller veins first, followed by the larger venous trunks [112]. When the thrombus evolves slowly and there is enough time for collaterals to develop, bowel infarction is unlikely [116].
### 4.5. Imaging
#### 4.5.1. Plain Radiography/Barium Studies
Most often, a nonspecific pattern of dilated, fluid-filled bowel loops can be demonstrated on these studies. Submucosal hemorrhage leading to mural thickening and the so-called “thumbprinting,” bowel separation due to mesenteric thickening, pneumatosis intestinalis, and portomesenteric venous gas can occasionally be seen in late-stage disease. However, the findings are often nonspecific and of little or no use in diagnostic evaluation [117, 118].
#### 4.5.2. US and Doppler
Doppler US allows direct real-time evaluation of the mesenteric veins and provides flow information of the visceral vessels; however, compared with the pivotal role played by Doppler US in the detection of PVT, visualization of the mesenteric veins is often hampered by a poor acoustic window due to overlying bowel gas. Nevertheless, the segment of the superior mesenteric vein adjoining the splenoportal confluence can frequently be imaged in experienced hands. Bowel wall thickening and free intraperitoneal fluid can also be detected, providing a clue to the underlying venous abnormality.
#### 4.5.3. CT
Widely considered to be the imaging investigation of choice, CT permits optimal evaluation of vascular structures, the bowel wall, and the adjacent mesentery. Multidetector row CT scanners have now enabled volumetric acquisitions in a single breath hold, eliminating motion artifact and suppressing respiratory misregistration, allowing sensitivity rates of up to 95% in the detection of MVT [119]. Helical CT angiography and three-dimensional gadolinium-enhanced MR angiography should be considered the primary diagnostic modalities for patients with a high clinical suspicion of mesenteric ischemia. Data acquisition should be performed at peak venous enhancement, with the delay between the start of injection and the commencement of image acquisition tailored for that purpose. Protocols typically use 55–70-second delays following administration of 125–150 mL of intravenous contrast medium at a rate of 3.5–5 mL/sec through a peripheral vein (a brief timing sketch based on these numbers appears at the end of Section 4). Imaging is completed with coronal and sagittal reformation, with the creation of (curved) MIP images that allow the entire course of the thrombosed vein to be viewed on a single image. Unenhanced data acquisition preceding the portal phase is especially useful for detecting mural hemorrhage.
#### 4.5.4. Venous Abnormalities
Thrombus appears as a well-demarcated, persistent, partial, or complete intraluminal filling defect, which may be surrounded by rim-enhancing venous walls [71] (Figure 50). It has been reported that thrombosis shown on a noncontrast-enhanced CT scan has a low density during the acute period (within 1 week of the onset of the disease).
It has a high density during the subacute period (1–3 weeks after disease onset), with a CT value higher than that of the abdominal aorta (the so-called “mesenteric vein angiographic phenomenon”) (Figure 35). It has a low density during the chronic period (>3 weeks) and is accompanied by lateral branch angiogenesis [120]. In cases of tumoral infiltration, the thrombus may enhance following intravenous contrast administration. Figure 50 Coronal MIP image showing complete portomesenteric vein thrombosis (black arrows) with associated mesenteric stranding (white arrows). Depending on the extent and amount of thrombus, enlargement of the affected vein may be seen. Marked venous enlargement can be seen with tumoral thrombus. Venous enlargement also serves as a useful indicator of acute thrombus because, in chronic thrombosis, the vein tends to atrophy. Due to the congestion caused by thrombosis, engorgement of the mesenteric veins can also be seen.
#### 4.5.5. Bowel Abnormalities
Associated bowel abnormalities most commonly manifest as mural thickening [121]. Wall thickening may result from intramural edema, which appears as a hypoattenuating bowel wall, or from intramural hemorrhage, which causes increased attenuation of the affected bowel wall [121, 122] (Figure 51). Both of these findings are more common and more prominent with venous congestion than with arterial occlusion [122]. Figure 51 Axial NCCT image showing submucosal bowel wall hemorrhage appearing as a linear hyperdense rim (solid arrows). Small bowel dilatation (asterisk) and pneumatosis intestinalis (interrupted arrow) can also be seen. The thickened bowel wall may show a stratified, two- or three-layered appearance referred to as the halo sign or target sign (Figure 52): inner mucosal and outer muscularis propria rings of high attenuation are separated by a submucosal layer of low attenuation representing edema [111]. Figure 52 Axial CECT image demonstrating the halo sign in one of the jejunal loops, due to inner mucosal and outer muscularis propria rings of high attenuation separated by a submucosal layer of low attenuation representing edema, in a patient with SMV thrombosis. Extensive mesenteric stranding and minimal ascites can also be seen. Abnormal enhancement is also a specific sign of bowel ischemia in patients with MVT. In normal subjects, a smooth homogeneous inner rim of enhancement can be seen during the venous phase of CT. Prolonged venous congestion impedes the arterial supply, with a subsequent decrease in bowel wall enhancement, which has been reported as highly specific for venous bowel infarction [121, 123] (Figure 53). Figure 53 Axial CECT image showing a nonenhancing loop of jejunum (arrow) due to SMV thrombosis. Bowel dilatation is a nonspecific but important sign which can result either from an aperistaltic bowel (as a reflex response to ischemic injury) or from transmural bowel infarction with total loss of contractile function [111] (Figure 54). Figure 54 Axial CECT image showing a nondependent focus of portal venous gas (arrow) with mesenteric stranding and ascites. In late stages, intramural gas (pneumatosis intestinalis) can be seen, which may dissect into the venous system resulting in portal or mesenteric venous gas (Figures 51 and 54). Intrahepatic portal vein gas should be differentiated from aerobilia. The distribution of hepatic gas in patients with aerobilia is central, around the portal hilum, and does not extend to within 2 cm of the liver capsule [124]. Gas in mesenteric vein branches should be differentiated from pneumoperitoneum.
Pneumoperitoneum does not have a linear, ramifying configuration and can be present along the antimesenteric border of the intestine. However, these signs are nonspecific and can be seen in nonischemic causes such as infection [125, 126]. Even in patients with bowel ischemia, they are not highly predictive of transmural infarction, since partial ischemia of the bowel wall may also be present. Frank perforation will lead to free intraperitoneal air. #### 4.5.6. Mesenteric Abnormalities Due to the underlying venous congestion and/or superimposed inflammatory process, mesenteric fat stranding is frequently seen with MVT (Figures 50, 52, and 54). Compared to arterial occlusion, this finding is far more common and more pronounced in cases of venous thrombosis [122]. Free intraperitoneal fluid or ascites can be seen in late stages (Figures 51, 52, and 54). #### 4.5.7. MRI With the advent of 3D gadolinium-enhanced MR angiographic techniques with short acquisition times (single breath hold), the sensitivity of MRI in detecting MVT equals that of MDCT, with the added advantages of improved soft tissue resolution, lack of ionizing radiation, and the better safety profile of paramagnetic agents compared with that of iodinated contrast agents. However, the severity of stenosis can be overestimated on MR angiography, since it relies indirectly on the detection of vascular signal, which can be degraded by turbulence. Also, MR angiography is less sensitive for the detection of calcification, spatial resolution is lower compared with that of CT angiography, and stents cannot be visualized due to the signal void caused by metallic material [117]. Such protocols take 30–60 minutes to complete, considerably longer than CT angiography [117]. Thus, MR is usually reserved for patients in whom CT angiography is contraindicated. #### 4.5.8. Catheter Angiography Conventional angiography is reserved for cases with equivocal findings on noninvasive imaging and is also used in conjunction with transcatheter therapeutic techniques in the management of symptomatic portal and mesenteric venous thrombosis. ### 4.6. Treatment Systemic anticoagulation for the prevention of thrombus propagation is the current mainstay of therapy for patients with acute mesenteric venous thrombosis without bowel ischemia [112]. Transcatheter thrombolysis (either percutaneous or through a transjugular route) has also been attempted in some cases to good effect [120]. When intestinal infarction has already developed and the patient has features of peritonitis, emergency laparotomy for resection of the necrotic parts of the gut should be performed [127].
## 5. Conclusions With the advancements in imaging technology, the rate of detection of splanchnic venous thrombosis has gradually increased. The consequences of these thromboses can be severe, including fulminant liver failure, bowel infarction, and variceal bleeding, with high mortality rates. Clinical features are often nonspecific and overlap with those of many other abdominal emergencies. Since this entity is still relatively rare, no uniform treatment protocols have been established. Conservative medical treatment is often ineffective, especially in cases with extensive thrombosis and organ damage, underlining the need for prompt diagnosis and commencement of therapy. Ultrasound coupled with Doppler is highly effective in detecting hepatic venous, portal venous, and IVC thrombosis, with the attendant findings of ascites, splenomegaly, and liver parenchymal changes. Cross-sectional imaging serves primarily as a problem-solving tool and in the evaluation of associated complications such as varices and portal biliopathy.
However, for mesenteric venous thrombosis, contrast-enhanced MDCT and MRI are superior, not only in detecting the primary vascular abnormality but also in delineating the changes in the bowel wall and mesentery. Catheter angiography is now reserved essentially for cases in which therapeutic intervention is planned. --- *Source: 101029-2015-10-12.xml*
# Imaging Diagnosis of Splanchnic Venous Thrombosis

**Authors:** S. Rajesh; Amar Mukund; Ankur Arora
**Journal:** Gastroenterology Research and Practice (2015)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2015/101029
--- ## Abstract Splanchnic vein thrombosis (SVT) is a broad term that includes Budd-Chiari syndrome and occlusion of veins that constitute the portal venous system. Due to the common risk factors involved in the pathogenesis of these clinically distinct disorders, concurrent involvement of two different regions is quite common. In acute and subacute SVT, the symptoms may overlap with those of a variety of other abdominal emergencies, while in chronic SVT, the extent of portal hypertension and its attendant complications determine the clinical course. As a result, clinical diagnosis is often difficult and is frequently reliant on imaging. Tremendous improvements in vascular imaging in recent years have ensured that this once rare entity is being increasingly detected. Treatment of acute SVT requires immediate anticoagulation. Transcatheter thrombolysis or transjugular intrahepatic portosystemic shunt is used in the event of clinical deterioration. In cases with peritonitis, immediate laparotomy and bowel resection may be required for irreversible bowel ischemia. In chronic SVT, the underlying cause should be identified and treated. The imaging manifestations of the clinical syndromes resulting from SVT are comprehensively discussed here along with a brief review of the relevant clinical features and therapeutic approach. --- ## Body ## 1. Introduction The splanchnic venous system includes the mesenteric, splenic, and hepatic beds, the first two serving as the major inflow for the third (Figure 1). Blood flowing through the intestines, spleen, and pancreas is collected by the superior mesenteric vein (SMV) and splenic vein (SV), which join to form the portal vein (PV). The stomach and part of the pancreas drain directly into the portal vein. At the porta hepatis, the PV divides into right and left branches that continue to their respective hepatic lobes, ultimately emptying into the hepatic sinusoids. Venous outflow from the liver is through the hepatic veins (HV), which drain into the inferior vena cava (IVC). Consequently, the term splanchnic vein thrombosis (SVT) includes occlusion of veins that form the portal venous system or the hepatic veins (Budd-Chiari syndrome) [1, 2]. Although portal and mesenteric vein thrombosis and Budd-Chiari syndrome are three distinct clinical entities, their etiologies are often shared and their clinical presentations may overlap. Moreover, simultaneous involvement of two different regions is fairly frequent due to the common risk factors. Thus, it is only prudent to discuss them collectively. Once considered to be a rare entity, SVT is increasingly being detected, thanks mainly to the remarkable advancements in imaging technology and increased awareness amongst healthcare providers. The present review appraises the radiological manifestations of SVT and aims to underscore the importance of imaging in decision making and patient selection to improve therapy and outcome in this group of patients. Figure 1 Graphic illustration of the splanchnic venous system. RHV: right hepatic vein, MHV: middle hepatic vein, LHV: left hepatic vein, PV: portal vein, SMV: superior mesenteric vein, SV: splenic vein, LGV: left gastric vein, and IMV: inferior mesenteric vein. ## 2. Budd-Chiari Syndrome The term Budd-Chiari syndrome (BCS) refers to the clinical manifestations arising as a consequence of hepatic venous outflow tract obstruction at any level from the small hepatic veins to the cavoatrial junction, regardless of the mechanism of obstruction [3] (Figure 2).
It follows that cardiac and pericardial diseases as well as sinusoidal obstruction syndrome are excluded from this definition [3, 4]. Figure 2 Gray-scale US image demonstrating a homogeneously hypoechoic and bulbous liver with chinked portal venous radicles (arrows) in a patient with fulminant BCS. ### 2.1. Etiology On the basis of etiology, BCS is divided into primary BCS (related to a primarily endoluminal venous disease, i.e., thrombosis or web) and secondary BCS (caused by infiltration or compression by a lesion outside the venous system, i.e., benign or malignant tumors, cysts, abscesses, etc.) [4] (Box 1). The prevalence of this disease shows marked geographic variation, from being one of the most common causes of hospital admission for liver disease in Nepal to being a rare entity in western countries [5, 6]. Based on the level of obstruction, BCS has been classified into three types [7] (Box 2). In the past, the IVC was reported to be frequently obstructed in Asians and usually patent in western patients. However, this pattern has changed over time in India, where hepatic vein thrombosis now accounts for the majority of cases (59%) and obstruction of the terminal IVC accounts for a lesser proportion of cases (16%) [8].

Box 1: Causes of BCS.
- Primary BCS
  - Hypercoagulable states
    - Inherited: antithrombin deficiency; protein C or S deficiency; heterozygous Factor V Leiden; prothrombin mutation
    - Acquired: myeloproliferative disorders; paroxysmal nocturnal hemoglobinuria; antiphospholipid syndrome; cancer; pregnancy; use of oral contraceptives
  - Systemic causes: connective tissue disease; inflammatory bowel disease; Behcet disease; sarcoidosis; vasculitis; cancer
- Secondary BCS: benign or malignant hepatic tumors; trauma; hepatic abscess; simple or complex hepatic cysts; idiopathic

Box 2: Classification of BCS based on the level of obstruction. (I) Obstruction of the IVC ± hepatic veins; (II) obstruction of the major hepatic veins; (III) obstruction of the small centrilobular venules.

### 2.2. Clinical Features Clinical presentation of the disease is highly variable and depends on the acuteness and extent of the hepatic venous outflow tract obstruction, ranging from complete absence of symptoms to fulminant hepatic failure, through acute, subacute, or chronic progressive development of symptoms over weeks to months [4, 9]. In cases of extensive and acute thrombosis of the veins, frequently encountered in western countries, the patient presents with abdominal pain and distension, jaundice, and hepatomegaly. The etiology in such cases is usually an underlying thrombotic disorder, intake of oral contraceptive pills, or pregnancy [4]. On the other hand, in Asian countries, membranous occlusion of the HV/IVC is more common [10]. Once considered to be congenital in origin, the membranous web is now widely accepted to be the result of organized thrombus, with focal stenosis being part of the same pathological spectrum [11]. This might be a possible explanation for the majority of Asian patients with BCS having a subacute to chronic course, characterized by insidious onset of portal hypertension, leg edema, gastrointestinal bleeding, and a nodular liver [12]. The course of manifestations in these patients can be steady or marked by exacerbations and remissions [9]. Since the changes in the liver parenchyma in BCS can be inhomogeneous, a single biopsy may be falsely negative [3].
Thus, a biopsy is usually reserved for cases in which radiological findings are inconclusive, such as in the event of involvement of the small hepatic veins with patent large veins, although differentiation of this form from sinusoidal obstruction syndrome is not always possible [9]. Also, serial liver biopsies are useful for assessing the severity of disease and determining whether it has progressed after therapeutic interventions. Early diagnosis of BCS is of critical importance for commencing appropriate therapy. Due to the nonspecific and variable clinical presentation and the fact that biopsy cannot be blindly relied upon, imaging assumes a vital role in the early identification of the disease and accurate assessment of its extent. ### 2.3. Imaging Findings Hepatic venous outflow tract obstruction causes an increase in sinusoidal and portal pressures, leading to hepatic congestion, necrosis, fibrosis, and eventually florid cirrhosis [13]. Imaging findings at various stages of BCS reflect the progressive pathological changes occurring in the hepatic parenchyma and vasculature. Real-time ultrasound (US) coupled with Doppler is currently considered to be the investigation of choice for the initial evaluation of a patient suspected of having BCS and, in experienced hands, might be the only modality required to establish the diagnosis in the majority of cases [14]. It demonstrates the hepatic echotexture and morphological changes, the status of the HV and IVC, and evidence of intrahepatic collaterals. Simultaneously, the presence of ascites and splenomegaly can be assessed. Besides, US is widely available and inexpensive and does not impart harmful radiation to the patient or the operator. However, its major limitations are the patient’s body habitus and dependence on operator expertise, which may preclude an adequate examination. Computed tomography (CT) and magnetic resonance imaging (MRI) have a complementary role to US and Doppler and serve mainly as problem-solving tools. Routine use of cross-sectional imaging in patients with BCS to rule out the development of hepatocellular carcinoma or for comprehensive assessment of the collateral circulation is debatable. Catheter IVC venography (cavography), which was once considered the standard of reference for evaluation of the HV and IVC, is no longer routinely used for diagnostic purposes because noninvasive imaging provides evidence for BCS in most patients. Cavography tends to overdiagnose HV thrombosis even when the failure to cannulate the HV might be due to technical failure. Moreover, it fails to provide an assessment of the extent of thrombosis in cases of IVC obstruction, which can be accurately done by MR venography [15, 16]. In addition, the entire extent of intrahepatic collaterals might not be picked up on cavography. Thus, it is reserved for patients in whom surgical or radiological intervention is contemplated. However, it still remains the gold standard when the hemodynamic significance of a suspected IVC narrowing due to caudate lobe hypertrophy is to be estimated in postsurgical/transplant patients. The pressure gradient across the suspected segment of narrowing is measured, and a gradient of >3 mm Hg is considered hemodynamically significant [17]. #### 2.3.1. Hepatic Parenchymal Changes In the acute stage, congestive changes predominate, resulting in global enlargement of the liver [7]. On gray-scale US, the liver is typically enlarged and bulbous and appears homogeneously hypoechoic (Figure 2).
However, altered regional echogenicity may be seen secondary to perfusion alterations and hemorrhage [7] (Figure 3). Figure 3 Gray-scale image from the US study of a patient with BCS due to obstruction of the common channel of the middle and the left hepatic veins (arrow) showing a more heterogeneous hepatic parenchymal echotexture. A collateral channel can be seen bridging the two hepatic veins proximal to the obstruction (interrupted arrow). On the noncontrast-enhanced CT scan, the liver shows diffuse hypodensity [12] (Figure 4). Following administration of intravenous contrast, a characteristic “flip-flop” pattern of enhancement is seen in the form of early homogeneous enhancement of the caudate lobe and central portion of the liver around the IVC and decreased enhancement peripherally (Figure 5). This partially reverses on the equilibrium-phase images, with the periphery of the liver retaining contrast and showing patchy inhomogeneous enhancement while there is washout of contrast from the central portion of the liver [12]. These changes are attributed to acute tissue edema in the peripheral portions of the liver due to the combined effects of hepatic venous obstruction and diminished portal flow. On MRI, the peripheral liver parenchyma is of moderately low signal intensity on T1-weighted images and high signal intensity on T2-weighted images compared to the central portion, with decreased enhancement of the peripheral liver after gadolinium administration [15, 16]. Figure 4 Noncontrast-enhanced axial CT scan image showing a diffusely hypodense liver in this patient with acute thrombosis of all three hepatic veins. On careful inspection, the right and middle hepatic veins can be made out as mildly hyperdense structures (arrows) on the background of this hypodense liver. Ascites can also be seen on this section (asterisk). Figure 5 Axial CECT image acquired in the portal venous phase showing enhancement of the caudate lobe (asterisk) while the rest of the liver parenchyma in the periphery remains predominantly hypoenhancing. Thrombosed right and middle hepatic veins (white arrows) and IVC (black arrow) can also be seen. As the disease progresses, there is reversal of flow in the portal vein with development of intrahepatic collaterals, which permit decompression of the liver [18]. Thus, in subacute BCS, a mottled pattern of parenchymal enhancement is seen with no specific predilection for the centre or the periphery of the liver (Figure 6). Figure 6 Coronal (a) and axial (b) portal venous phase CECT images showing a thrombosed right hepatic vein (arrows) and part of the intrahepatic portion of the IVC (arrowheads) with mottled enhancement of the liver parenchyma and ascites. (a) (b) The caudate lobe has separate veins (which may not be affected by the disease process) that drain directly into the IVC at a level lower than the ostia of the main hepatic veins. This may result in compensatory caudate lobe hypertrophy, which can be seen in up to 75% of cases of BCS and serves as a useful indirect sign [19] (Figure 7). However, caudate hypertrophy is nonspecific and can be seen in many other cases of cirrhosis of varied etiologies. Figure 7 Axial CECT images from two different patients with chronic BCS demonstrating a markedly hypertrophied caudate lobe (asterisk). (a) (b) In later stages of the disease, morphological changes start appearing in the liver in the form of surface nodularity and coarsened echotexture on US, with changes of portal hypertension (Figure 8).
This results in decreased T1- and T2-weighted signal intensity at unenhanced MR imaging and in delayed enhancement on contrast-enhanced studies [15, 16]. Attendant volume redistribution starts taking place in the liver, resulting in right lobe atrophy with hypertrophy of the left lobe. Figure 8 Axial images from the CECT scans of two different patients with chronic BCS demonstrating cirrhotic architecture of the liver in the form of irregular lobulated outlines and heterogeneous mottled hepatic parenchymal enhancement. Ascites (asterisks in (a)), splenomegaly (asterisk in (b)), and paraesophageal and perisplenic collaterals (arrow in (a) and (b), resp.) can also be seen. (a) (b) Due to focal loss of portal perfusion in patients with BCS, compensatory nodular hyperplasia can occur in areas of hepatic parenchyma that have an adequate blood supply, resulting in the formation of regenerative nodules [20–23]. They are usually multiple, with a typical diameter of between 0.5 and 4 cm [22]. The term large regenerative nodules (LRN) is preferred for these lesions rather than nodular regenerative hyperplasia (NRH), since NRH, by definition, implies that there should be no fibrosis interspersed between the nodules, while BCS at a later stage of the disease can result in fibrosis or frank cirrhosis [21, 22]. On multiphasic contrast-enhanced CT or MRI, LRN demonstrate marked and homogeneous enhancement on the arterial phase images and remain hyperattenuating to the surrounding hepatic parenchyma on portal venous phase images [22] (Figure 9). Because LRN are mainly composed of normal liver parenchyma, they are not well appreciated on unenhanced or equilibrium phase CT or T2-weighted MR images [22]. They may appear bright on T1WI due to deposition of copper within some of these nodules; however, they do not contain fat, hemorrhage, or calcification [22, 23]. There is no evidence that LRN degenerate into malignancy. Although hepatocellular carcinoma (HCC) is considered to be extremely rare in BCS, it is important to differentiate LRN from HCC, since a misdiagnosis may deny a patient the possibility of liver transplant or subject them to unnecessarily aggressive treatment for HCC. HCC is usually hypointense to the liver on T1WI and hyperintense on T2WI, along with evidence of heterogeneity, encapsulation, and portal or hepatic venous invasion, none of which are seen in LRN. On multiphasic CT or MRI, HCC shows washout of contrast on the portal venous and equilibrium phase images, in contradistinction to LRN, which remain slightly hyperattenuating. On the hepatobiliary phase, HCC would appear hypointense while LRN would retain contrast on account of being composed of predominantly normal or hyperplastic hepatocytes [21, 22]. Also, it has been seen that when HCC is encountered in a noncirrhotic liver, it is usually a solitary, large, heterogeneous mass, while LRN are almost always multiple, small, and homogeneously enhancing [24]. A marked increase in the number of LRN has been noticed following creation of a portosystemic shunt [20, 22] (Figure 9). Figure 9 Axial CECT images acquired in the arterial (a) and venous (b) phases showing an arterial phase enhancing nodule (arrow in (a)) in the liver which retains contrast in the venous phase (arrow in (b)), consistent with a regenerative nodule in this patient who had undergone direct intrahepatic portocaval shunt (DIPS) for BCS. (a) (b) #### 2.3.2.
Vascular Changes The HV may be normal or reduced in caliber and filled with intraluminal anechoic or echogenic thrombus in the acute phase [7, 12] (Figure 10). The HV walls may appear thickened and echogenic. Not uncommonly, there may be partial or complete nonvisualization of the HV due to the markedly heterogeneous hepatic parenchyma and the altered caliber and echogenicity of the HV [7, 12, 25]. Alternatively, there can be stenosis of the HV, most commonly at or near the ostia, with proximal dilatation [7] (Figures 11 and 12). In cases of chronic thrombosis, the HV may be reduced to an echogenic cord-like structure [26] (Figure 13). Figure 10 Gray-scale US images from two different patients demonstrating echogenic thrombus within the right hepatic vein (arrows). (a) (b) Figure 11 Gray-scale US image demonstrating stenosis at the ostium of the right hepatic vein (black arrow) with multiple intrahepatic collaterals (white arrows) and heterogeneous hepatic echotexture. Figure 12 Gray-scale US image demonstrating stenosis at the ostium of the right hepatic vein (long white arrow in (a)) and the common channel of the middle and left hepatic veins (arrow in (b)) with multiple intrahepatic collaterals (small white arrows in (a)). (a) (b) Figure 13 Gray-scale US image showing the distal portion of the right hepatic vein (marked by calipers) being reduced to a cord-like structure due to chronic thrombosis. The normal blood flow in the HV is phasic in response to the cardiac cycle (Figure 14). In BCS, flow in the HV changes from phasic to absent, continuous, turbulent, or reversed [7] (Figure 15). Turbulent or high flow is usually seen at or near the site of stenosis. Figure 14 Spectral Doppler image after hepatic vein stenting demonstrates restoration of the normal triphasic waveform (inverted “M” shape) of the right hepatic vein in a patient with BCS. The arrow denotes the stent in the right hepatic vein. Figure 15 Spectral Doppler image in a patient with BCS shows a monophasic waveform in the hepatic vein. The IVC can be obstructed in its suprahepatic or intrahepatic portion or both. Suprahepatic occlusion is usually due to webs or short-segment stenosis, while intrahepatic IVC obstruction is commonly secondary to compression caused by an enlarged caudate lobe [7, 27] (Figure 16). Long-segment narrowing of the intrahepatic IVC without associated caudate lobe enlargement, or focal narrowing due to a web or a thrombus, can also be observed [7] (Figures 6(a) and 17). On US, a membranous web usually appears as an echogenic linear area within the lumen of the IVC, best seen in deep inspiration (Figure 18(a)). On conventional venography or CT/MRI angiography, webs appear as dome-shaped linear filling defects (Figures 18(b) and 19). Similarly, a hepatic venous web appears as a linear hypodense intraluminal structure with or without proximal dilatation (Figure 20). Short-segment stenosis is seen as an area of narrowing with proximal dilatation. In partial IVC obstruction or extrinsic IVC compression, the normally phasic flow in the IVC can change to a continuous waveform (referred to as a “pseudoportal” Doppler signal) [28]. In later stages, chronic thrombosis of the IVC can evolve into calcification [29] (Figure 21). Establishing the patency of the IVC is important before deciding upon surgical management, should the need arise.
If the IVC is patent, a portocaval or mesocaval shunt can be created, while if the IVC is occluded, a mesoatrial shunt would be required. Figure 16 Coronal CECT (a) and gray-scale US (b) images demonstrating compression of the intrahepatic IVC (arrows) caused by hypertrophy of the caudate lobe. (a) (b) Figure 17 Gray-scale US image demonstrating echogenic thrombus in the IVC (arrow). Figure 18 Gray-scale US (a) and coronal MIP (b) images demonstrating an IVC web (sequel of chronic focal thrombosis) which appears as a linear echogenic structure on US (arrow in (a)), while on CT, it appears as an intraluminal hypodense linear structure (arrow in (b)). (a) (b) Figure 19 Coronal CECT image (a) showing an IVC web (arrow). IVC angiogram (b) of the same patient showing a jet of contrast (arrow) entering the right atrium, signifying the obstruction caused by the web. Postangioplasty image (c) shows resolution of the stenosis. (a) (b) (c) Figure 20 Axial CECT image demonstrating a web in the left hepatic vein (arrow) with heterogeneous hepatic parenchymal enhancement. Figure 21 Coronal CECT images demonstrating mural calcification involving the IVC (long thin black arrows in (a) and (b)) secondary to chronic thrombosis. Multiple superficial abdominal wall and paraesophageal collaterals (white arrows and short thick black arrow, resp.) along with a prominent accessory vein (arrowhead) can also be seen. (a) (b) Due to the combined effects of decreased portal blood flow in BCS and the underlying thrombophilia, simultaneous portal vein thrombosis (PVT) can occur in up to 15% of cases [30]. Portal blood flow on Doppler may be absent, slowed, or reversed [31]. Assessment of PV patency is crucial, as a thrombosed portal vein may preclude creation of a portosystemic shunt to decompress the liver in such patients. Caudate lobe outflow serves as a drainage pathway for intrahepatic venovenous collaterals. Thus, the caudate vein may be dilated in BCS. In the appropriate clinical setting, a caudate lobe vein >3 mm has been reported to be strongly suggestive of BCS [32] (Figure 22). Figure 22 Prominent caudate lobe vein (marked by calipers; measuring 7 mm) in the setting of BCS. On CT, the thrombosed HV are hypoattenuating or not visualized in the acute phase, and the IVC is compressed by the hypertrophied caudate lobe [30] (Figures 23 and 16). Ascites and splenomegaly are commonly found. T2*-weighted gradient-recalled echo sequences can demonstrate absence of flow in the HV and IVC. However, postcontrast T1-weighted images are ideal to reveal the venous occlusion. Figure 23 Thrombosed middle and left hepatic veins appearing as hypodense nonenhancing structures (arrows) on a background of heterogeneous liver parenchyma and ascites (asterisk). One of the most specific signs of chronic BCS is the visualization of intrahepatic “comma-shaped” bridging venovenous collaterals, which communicate between an occluded and a nonoccluded HV or the caudate lobe vein and reveal continuous monophasic flow [12] (Figures 24–27). These have been noted in more than 80% of cases of BCS [33]. A “spider web” pattern of intrahepatic collaterals can also sometimes be seen, signifying multiple intrahepatic communications between the hepatic veins (Figure 28). In addition, intrahepatic vessels communicating with a systemic vein through surface/subcapsular collaterals can also be observed. In cases of IVC obstruction, extrahepatic collateral channels, including abdominal wall varices, can develop, bypassing the occluded segment [34] (Figure 29). Cho et al.
[35] have classified the types of collaterals that can be seen in BCS (Box 3). Box 3: Different types of collateral pathways described in association with BCS (Figures 24–31): (1) intrahepatic collaterals; (2) extrahepatic collaterals, comprising (I) inferior phrenic-pericardiophrenic collaterals, (II) superficial abdominal wall collaterals, (III) the left renal-hemiazygous pathway, and (IV) the vertebro-lumbar azygous pathway. Figure 24 Gray-scale US images demonstrating thrombosed distal portion of the right hepatic vein (arrow in (a)) with a typical comma-shaped venovenous collateral (arrow in (b)). (a) (b) Figure 25 Other examples of comma-shaped collaterals (arrows) on US. (a) (b) Figure 26 Axial CECT images from four different patients demonstrating comma-shaped intrahepatic collaterals (arrows) with varying degrees of patency. (a) (b) (c) (d) Figure 27 Secondary BCS in two different patients. (a) Axial maximum-intensity-projection (MIP) CECT image in a patient with a past history of blunt trauma to the abdomen demonstrating a liver laceration (arrows) which had caused thrombosis of the middle hepatic vein with a resultant comma-shaped intrahepatic venovenous collateral (arrowheads) between the left hepatic vein and the remnant middle hepatic vein. (b) Axial MIP image from the CECT scan of a young woman with a hydatid cyst of the liver (asterisk) causing thrombosis of the right hepatic vein and formation of an intrahepatic collateral (arrowheads) between the middle and right hepatic veins. (a) (b) Figure 28 Spider web pattern of collaterals in BCS on catheter angiography. Figure 29 Axial (a) and coronal (b) MIP images showing multiple abdominal wall collaterals in a patient with IVC thrombus. (a) (b) Figure 30 Angiogram performed via a catheter inserted in the left hepatic vein demonstrates drainage through the inferior phrenic vein (vertical arrow in (a)) and a pericardiophrenic collateral (horizontal arrow), with delayed opacification of the intercostal veins as well (vertical arrows in (b)). (a) (b) Figure 31 IVC angiogram demonstrating opacification of the intervertebral venous plexus and hemiazygous vein (arrow). Due to the highly variable and nonspecific presentation of the disease, a diagnosis of BCS must be considered in all patients with acute or chronic liver disease once the common causes of liver disease have been excluded. Thus, assessment of the patency of the HV and IVC should be a part of the routine protocol in patients with liver disease, especially in endemic regions. ### 2.4. Treatment In patients not responding to anticoagulation and nutritional therapy, radiological and surgical interventions may be contemplated, including placement of portosystemic shunts and liver transplantation. In patients with short-segment occlusion of the HV or IVC, balloon angioplasty or stent insertion can be performed [3, 4, 12, 33, 36]. Imaging follow-up at routine intervals is necessary in all these cases to determine the long-term results of intervention. US examination coupled with Doppler is usually adequate to evaluate the patency of the native vessels or stents after intervention (Figure 14). The presence of ascites and any associated liver parenchymal changes can also be simultaneously assessed. However, cross-sectional imaging or catheter angiography may be required in cases of equivocal findings on Doppler or when the symptoms for which the intervention was performed have recurred in spite of an apparently normal Doppler study. ## 2.1.
Etiology On the basis of etiology, BCS is divided into primary BCS (related to a primarily endoluminal venous disease, i.e., thrombosis or web) and secondary BCS (caused by infiltration or compression by a lesion outside the venous system, i.e., benign or malignant tumors, cysts, abscesses, etc.) [4] (Box 1). Prevalence of this disease shows marked geographic variation, from being one of the most common causes for hospital admission for liver disease in Nepal to becoming a rare entity in western countries [5, 6]. Based on the level of obstruction, BCS has been classified into three types [7] (Box 2). In the past, IVC was reported to be frequently obstructed in Asians and usually patent in western patients. However, this pattern has changed over the period of time in India, where hepatic vein thrombosis now accounts for the majority of the cases (59%) and obstruction of terminal IVC now accounts for a lesser proportion of cases (16%) [8].Box 1:Causes of BCS. Primary BCS Hypercoagulable states Inherited Antithrombin deficiency Protein C, S deficiency Heterozygous Factor V Leiden Prothrombin mutation Acquired Myeloproliferative disorders Paroxysmal nocturnal hemoglobinuria Antiphospholipid syndrome Cancer Pregnancy Use of oral contraceptives Systemic causes Connective tissue disease Inflammatory bowel disease Behcet disease Sarcoidosis Vasculitis Cancer Secondary BCS Benign or malignant hepatic tumors Trauma Hepatic abscess Simple or complex hepatic cysts IdiopathicBox 2:Classification of BCS based on the level of obstruction. (I) Obstruction of IVC ± hepatic veins (II) Obstruction of major hepatic veins (III) Obstruction of small centrilobular venules ## 2.2. Clinical Features Clinical presentation of the disease is highly variable and depends on the acuteness and extent of the hepatic venous outflow tract obstruction, ranging from complete absence of symptoms to fulminant hepatic failure, through acute, subacute, or chronic progressive development of symptoms over weeks to months [4, 9]. In cases of extensive and acute thrombosis of veins, frequently encountered in the western countries, the patient presents with abdominal pain and distension, jaundice, and hepatomegaly. The etiology in such cases is usually an underlying thrombotic disorder, intake of oral contraceptive pills, or pregnancy [4]. On the other hand, in Asian countries, membranous occlusion of the HV/IVC is more common [10]. Once considered to be congenital in origin, membranous web is now widely accepted to be a result of organized thrombus with focal stenosis being a part of this pathological spectrum [11]. This might be a possible explanation for the majority of Asian patients with BCS having a subacute to chronic course, characterized by insidious onset of portal hypertension, leg edema, gastrointestinal bleed, and nodular liver [12]. The course of manifestations in these patients can be steady or marked by exacerbations and remissions [9].Since the changes in the liver parenchyma in BCS can be inhomogeneous, a single biopsy may be falsely negative [3]. Thus, a biopsy is usually reserved for cases in which radiological findings are inconclusive, like in the event of involvement of small hepatic veins with patent large veins, although differentiation of this form from sinusoidal obstruction syndrome is not always possible [9]. 
Also, serial liver biopsies are useful for assessing the severity of disease and determining whether it has progressed after therapeutic interventions.Early diagnosis of BCS is of critical importance for commencing appropriate therapy. Due to the nonspecific and variable clinical presentation and the fact that biopsy cannot be blindly relied upon, imaging assumes a vital role in the early identification of the disease and accurate assessment of its extent. ## 2.3. Imaging Findings Hepatic venous outflow tract obstruction causes increase in the sinusoidal and portal pressures, leading to hepatic congestion, necrosis, fibrosis, and eventually florid cirrhosis [13]. Imaging findings at various stages of BCS reflect the progressive pathological changes occurring in the hepatic parenchyma and vasculature. Real-time ultrasound (US) coupled with Doppler is currently considered to be the investigation of choice for the initial evaluation of a patient suspected of having BCS and in experienced hands might be the only modality required to establish the diagnosis in majority of the cases [14]. It demonstrates the hepatic echotexture and morphological changes, status of HV and IVC, and evidence of intrahepatic collaterals. Simultaneously, presence of ascites and splenomegaly can be assessed. Besides, US is widely available and inexpensive and does not impart harmful radiation to the patient or the operator. However, its major limitations are patient’s body habitus and operator expertise which may preclude an adequate examination. Computed tomography (CT) and magnetic resonance imaging (MRI) have a complementary role to US and Doppler and serve mainly as problem solving tools. Routine use of cross-sectional imaging in patients with BCS to rule out the development of hepatocellular carcinoma or comprehensive assessment of collateral circulation is debatable. Catheter IVC graphy/cavography, which was once considered the standard of reference for evaluation of HV and IVC is now no longer routinely used for diagnostic purpose because noninvasive imaging provides evidence for BCS in most patients. Cavography tends to over diagnose HV thrombosis even when the failure to cannulate the HV might be due to technical failure. Moreover, it fails to provide an assessment of the extent of thrombosis in case of IVC obstruction which can be accurately done by MR venography [15, 16]. In addition, the entire extent of intrahepatic collaterals might not be picked up on cavography. Thus, it is reserved for patients in whom surgical or radiological intervention is contemplated. However, it still remains the gold standard when the hemodynamic significance of a suspected IVC narrowing due to caudate lobe hypertrophy is to be estimated in postsurgical/transplant patients. Pressure gradient across the suspected segment of narrowing is measured and a gradient of > 3 mm Hg is considered hemodynamically significant [17]. ### 2.3.1. Hepatic Parenchymal Changes In the acute stage, congestive changes predominate resulting in global enlargement of the liver [7]. On gray-scale US, the liver is typically enlarged and bulbous and appears homogeneously hypoechoic (Figure 2). However, altered regional echogenicity may be seen secondary to perfusion alterations and hemorrhage [7] (Figure 3).Figure 3 Gray-scale image from the US study of a patient with BCS due to obstruction of the common channel of the middle and the left hepatic veins (arrow) showing a more heterogeneous hepatic parenchymal echotexture. 
Collateral channel can be seen bridging the two hepatic veins proximal to obstruction (interrupted arrow).

On the noncontrast-enhanced CT scan, the liver shows diffuse hypodensity [12] (Figure 4). After administration of intravenous contrast, a characteristic “flip-flop” pattern of enhancement is seen in the form of early homogeneous enhancement of the caudate lobe and central portion of the liver around the IVC and decreased enhancement peripherally (Figure 5). This partially reverses on the equilibrium phase images, with the periphery of the liver retaining contrast and showing patchy inhomogeneous enhancement while there is washout of contrast from the central portion of the liver [12]. These changes are attributed to acute tissue edema in the peripheral portions of the liver due to the combined effects of hepatic venous obstruction and diminished portal flow. On MRI, peripheral liver parenchyma is of moderately low signal intensity on T1-weighted images and high signal intensity on T2-weighted images compared to the central portion, with decreased enhancement of the peripheral liver after gadolinium administration [15, 16].

Figure 4 Noncontrast-enhanced axial CT scan image showing a diffusely hypodense liver in this patient with acute thrombosis of all the three hepatic veins. On careful inspection, the right and middle hepatic veins can be made out as mildly hyperdense structures (arrows) on the background of this hypodense liver. Ascites can also be seen on this section (asterisk).

Figure 5 Axial CECT image acquired in the portal venous phase showing enhancement of the caudate lobe (asterisk) while rest of the liver parenchyma in the periphery remains predominantly hypoenhancing. Thrombosed right and middle hepatic veins (white arrows) and IVC (black arrow) can also be seen.

As the disease progresses, there is reversal of flow in the portal vein with development of intrahepatic collaterals which permit decompression of the liver [18]. Thus, in subacute BCS, a mottled pattern of parenchymal enhancement is seen with no specific predilection for the centre or periphery of the liver (Figure 6).

Figure 6 Coronal (a) and axial (b) portal venous phase CECT image showing thrombosed right hepatic vein (arrows) and the part of the intrahepatic portion of IVC (arrowheads) with mottled enhancement of the liver parenchyma and ascites. (a) (b)

The caudate lobe has separate veins (which may not be affected by the disease process) which drain directly into the IVC at a level lower than the ostia of the main hepatic veins. This may result in compensatory caudate lobe hypertrophy, which can be seen in up to 75% of cases of BCS and serves as a useful indirect sign [19] (Figure 7). However, caudate hypertrophy is nonspecific and can be seen in cirrhosis of varied other etiologies.

Figure 7 Axial CECT images from two different patients with chronic BCS demonstrating markedly hypertrophied caudate lobe (asterisk). (a) (b)

In later stages of the disease, morphological changes start appearing in the liver in the form of surface nodularity and coarsened echotexture on US, with changes of portal hypertension (Figure 8). This results in decreased T1- and T2-weighted signal intensity at unenhanced MR imaging and in delayed enhancement in contrast-enhanced studies [15, 16].
Attendant volume redistribution starts taking place in the liver resulting in right lobe atrophy with hypertrophy of the left lobe.Figure 8 Axial images from the CECT scan of two different patients with chronic BCS demonstrating cirrhotic architecture of liver in the form of irregular lobulated outlines and heterogeneous mottled hepatic parenchymal enhancement. Ascites (asterisks in (a)), splenomegaly (asterisk in (b)) and paraesophageal and perisplenic collaterals (arrow in (a) and (b), resp.) can also be seen. (a) (b)Due to focal loss of portal perfusion in patients with BCS, compensatory nodular hyperplasia can occur in areas of hepatic parenchyma that have an adequate blood supply resulting in formation of regenerative nodules [20–23]. They are usually multiple with a typical diameter of between 0.5 and 4 cm [22]. The term large regenerative nodules (LRN) is preferred for these lesions rather than nodular regenerative hyperplasia (NRH) since NRH, by definition, implies that there should be no fibrosis interspersed between the nodules while BCS at a later stage of the disease can result in fibrosis or frank cirrhosis [21, 22]. On multiphasic contrast-enhanced CT or MRI, LRN demonstrate marked and homogeneous enhancement on the arterial phase images and remain hyperattenuating to the surrounding hepatic parenchyma on portal venous phase images [22] (Figure 9). Because LRN are mainly composed of normal liver parenchyma, they are not well-appreciated on unenhanced or equilibrium phase CT or T2-weighted MR images [22]. They may appear bright on T1WI due to deposition of copper within some of these nodules; however, they do not contain fat, hemorrhage or calcification [22, 23]. There is no evidence that LRN degenerate into malignancy. Although hepatocellular carcinoma (HCC) is considered to be extremely rare in BCS, it is important to differentiate LRN from HCC since a misdiagnosis may deny a patient the possibility of liver transplant or subject him to unnecessary aggressive treatment for HCC. HCC is usually hypointense to the liver on T1WI and hyperintense on T2WI, along with evidence of heterogeneity, encapsulation, and portal or hepatic venous invasion, none of which are seen in LRN. On multiphasic CT or MRI, HCC shows washout of contrast on the portal venous and equilibrium phase images in contradistinction to LRN which remain slightly hyperattenuating. On the hepatobiliary phase, HCC would appear hypointense while LRN would retain contrast on account of it being composed of predominantly normal or hyperplastic hepatocytes [21, 22]. Also, it has been seen that when HCC is encountered in a noncirrhotic liver, it is usually a solitary, large, heterogeneous mass while LRN are almost always multiple, small, and homogeneously enhancing [24]. A marked increase in the number of LRN has been noticed following creation of a portosystemic shunt [20, 22] (Figure 9).Figure 9 Axial CECT images acquired in the arterial (a) and venous (b) phase showing an arterial phase enhancing nodule (arrow in (a)) in liver which retains the contrast in the venous phase (arrow in (b)) consistent with regenerative nodule in this patient who had undergone direct intrahepatic portocaval shunt (DIPS) for BCS. (a) (b) ### 2.3.2. Vascular Changes HV may be normal or reduced in caliber and filled with intraluminal anechoic or echogenic thrombus in the acute phase [7, 12] (Figure 10). HV walls may appear thickened and echogenic. 
Not uncommonly, there may be a partial or complete nonvisualization of the HV due to the markedly heterogeneous hepatic parenchyma and altered caliber and echogenicity of the HV [7, 12, 25]. Alternatively, there can be stenosis of the HV, most commonly at or near the ostia, with proximal dilatation [7] (Figures 11 and 12). In cases of chronic thrombosis, the HV may be reduced to an echogenic cord-like structure [26] (Figure 13).

Figure 10 Gray-scale US images from two different patients demonstrating echogenic thrombus within the right hepatic vein (arrows). (a) (b)

Figure 11 Gray-scale US image demonstrating stenosis at the ostium of right hepatic vein (black arrow) with multiple intrahepatic collaterals (white arrows) and heterogeneous hepatic echotexture.

Figure 12 Gray-scale US image demonstrating stenosis at the ostium of right hepatic vein (long white arrow in (a)) and the common channel of middle and left hepatic vein (arrow in (b)) with multiple intrahepatic collaterals (small white arrows in (a)). (a) (b)

Figure 13 Gray-scale US image showing the distal portion of right hepatic vein (marked by calipers) being reduced to a cord-like structure due to chronic thrombosis.

The normal blood flow in the HV is phasic in response to the cardiac cycle (Figure 14). In BCS, flow in the HV changes from phasic to absent, continuous, turbulent, or reversed [7] (Figure 15). Turbulent or high flow is usually seen at or near the site of stenosis.

Figure 14 Spectral Doppler image after hepatic vein stenting demonstrates restoration of normal triphasic waveform (inverted “M” shape) of the right hepatic vein in a patient with BCS. Arrow denotes the stent in the right hepatic vein.

Figure 15 Spectral Doppler image in a patient with BCS shows monophasic waveform in the hepatic vein.

The IVC can be obstructed in its suprahepatic or intrahepatic portion or both. Suprahepatic occlusion is usually due to webs or short segment stenosis, while intrahepatic IVC obstruction is commonly secondary to compression caused by an enlarged caudate lobe [7, 27] (Figure 16). Long segment narrowing of the intrahepatic IVC without associated caudate lobe enlargement, or focal narrowing due to a web or a thrombus, can also be observed [7] (Figures 6(a) and 17). On US, a membranous web usually appears as an echogenic linear area within the lumen of the IVC, best seen in deep inspiration (Figure 18(a)). On conventional venography or CT/MR angiography, webs appear as dome-shaped linear filling defects (Figures 18(b) and 19). Similarly, a hepatic venous web appears as a linear hypodense intraluminal structure with or without proximal dilatation (Figure 20). Short segment stenosis is seen as an area of narrowing with proximal dilatation. In partial IVC obstruction or extrinsic IVC compression, the normally phasic flow in the IVC can change to a continuous waveform (the so-called “pseudoportal” Doppler signal) [28]. In later stages, chronic thrombosis of the IVC can evolve into calcification [29] (Figure 21). Establishing the patency of the IVC is important before deciding upon surgical management, should the need arise. If the IVC is patent, a portocaval or mesocaval shunt can be created, while if the IVC is occluded, a mesoatrial shunt would be required.

Figure 16 Coronal CECT (a) and gray-scale US (b) image demonstrating compression of intrahepatic IVC (arrows) caused by hypertrophy of the caudate lobe.
(a) (b)

Figure 17 Gray-scale US image demonstrating echogenic thrombus in IVC (arrow).

Figure 18 Gray-scale US (a) and coronal MIP (b) images demonstrating an IVC web (sequel of chronic focal thrombosis) which appears as a linear echogenic structure on US (arrow in (a)), while on CT, it appears as an intraluminal hypodense linear structure (arrow in (b)). (a) (b)

Figure 19 Coronal CECT image (a) showing an IVC web (arrow). IVC angiogram (b) of the same patient showing a jet of contrast (arrow) entering the right atrium signifying the obstruction caused by the web. Postangioplasty image (c) shows resolution of the stenosis. (a) (b) (c)

Figure 20 Axial CECT image demonstrating a web in the left hepatic vein (arrow) with heterogeneous hepatic parenchymal enhancement.

Figure 21 Coronal CECT images demonstrating mural calcification involving the IVC (long thin black arrows in (a) and (b)) secondary to chronic thrombosis. Multiple superficial abdominal wall and paraesophageal collaterals (white arrows and short thick black arrow, resp.) along with a prominent accessory vein (arrowhead) can also be seen. (a) (b)

Due to the combined effects of decreased portal blood flow in BCS and the underlying thrombophilia, simultaneous portal vein thrombosis (PVT) can occur in up to 15% of cases [30]. Portal blood flow on Doppler may be absent, slowed, or reversed [31]. Assessment of PV patency is crucial, as a thrombosed portal vein may preclude creation of a portosystemic shunt to decompress the liver in such patients.

Caudate lobe outflow serves as a drainage pathway for intrahepatic venovenous collaterals. Thus, the caudate vein may be dilated in BCS. In the appropriate clinical setting, a caudate lobe vein > 3 mm has been reported to be strongly suggestive of BCS [32] (Figure 22).

Figure 22 Prominent caudate lobe vein (marked by calipers; measuring 7 mm) in setting of BCS.

On CT, the thrombosed HV are hypoattenuating or not visualized in the acute phase, and the IVC is compressed by the hypertrophied caudate lobe [30] (Figures 23 and 16). Ascites and splenomegaly are commonly found. T2*-weighted gradient-recalled echo sequences can demonstrate absence of flow in the HV and IVC. However, postcontrast T1-weighted images are ideal to reveal the venous occlusion.

Figure 23 Thrombosed middle and left hepatic veins appearing as hypodense nonenhancing structures (arrows) on a background of heterogeneous liver parenchyma and ascites (asterisk).

One of the most specific signs of chronic BCS is the visualization of intrahepatic “comma-shaped” bridging venovenous collaterals which communicate between an occluded and nonoccluded HV or caudate lobe vein and reveal a continuous monophasic flow [12] (Figures 24–27). These have been noted in more than 80% of cases of BCS [33]. A “spider web” pattern of intrahepatic collaterals can also sometimes be seen, signifying multiple intrahepatic communications between the hepatic veins (Figure 28). In addition, intrahepatic vessels communicating with a systemic vein through surface/subcapsular collaterals can also be observed. In cases of IVC obstruction, extrahepatic collateral channels including abdominal wall varices can develop bypassing the occluded segment [34] (Figure 29). Cho et al. [35] have classified the types of collaterals that can be seen in BCS (Box 3).

Box 3: Different types of collateral pathways described in association with BCS (Figures 24–31).
(1) Intrahepatic collaterals
(2) Extrahepatic collaterals
  (I) Inferior phrenic-pericardiophrenic collaterals
  (II) Superficial abdominal wall collaterals
  (III) Left renal-hemiazygous pathway
  (IV) Vertebro-lumbar azygous pathway

Figure 24 Gray-scale US images demonstrating thrombosed distal portion of right hepatic vein (arrow in (a)) with a typical comma-shaped venovenous collateral (arrow in (b)). (a) (b)

Figure 25 Other examples of comma-shaped collaterals (arrows) on US. (a) (b)

Figure 26 Axial CECT images from four different patients demonstrating comma-shaped intrahepatic collaterals (arrows) demonstrating varying degrees of patency. (a) (b) (c) (d)

Figure 27 Secondary BCS in two different patients. (a) Axial maximum-intensity-projection (MIP) CECT image in a patient with past history of blunt trauma to the abdomen demonstrating a liver laceration (arrows) which had caused thrombosis of the middle hepatic vein with resultant comma-shaped intrahepatic venovenous collateral (arrowheads) between the left hepatic vein and the remnant middle hepatic vein. (b) Axial MIP image from the CECT scan of a young woman with hydatid cyst of liver (asterisk) causing thrombosis of the right hepatic vein and formation of intrahepatic collateral (arrowheads) between the middle and right hepatic vein. (a) (b)

Figure 28 Spider web pattern of collaterals in BCS on catheter angiography.

Figure 29 Axial (a) and coronal (b) MIP images showing multiple abdominal wall collaterals in a patient with IVC thrombus. (a) (b)

Figure 30 Angiogram performed via a catheter inserted in the left hepatic vein demonstrates drainage through the inferior phrenic vein (vertical arrow in (a)) and pericardiophrenic collateral (horizontal arrow) with delayed opacification of the intercostal veins as well (vertical arrows in (b)). (a) (b)

Figure 31 IVC angiogram demonstrating opacification of the intervertebral venous plexus and hemiazygous vein (arrow).

Due to the highly variable and nonspecific presentation of the disease, a diagnosis of BCS must be considered in all patients with an acute or chronic liver disease, when the common causes for liver disease have been excluded. Thus, assessment of the patency of HV and IVC should be a part of the routine protocol of patients with liver disease, especially in endemic regions.
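To make two of the numeric criteria quoted in this section concrete, the following is a minimal, illustrative sketch, not a validated clinical tool; the function and parameter names are hypothetical. It simply encodes the reported thresholds: a caudate lobe vein wider than 3 mm as strongly suggestive of BCS [32], and an IVC pressure gradient above 3 mm Hg as hemodynamically significant [17].

```python
# Illustrative sketch only: encodes the two numeric thresholds cited in the text.
# Function and parameter names are hypothetical and carry no clinical authority.

def bcs_ancillary_findings(caudate_vein_mm=None, ivc_gradient_mmhg=None):
    """Return the threshold-based findings that are met, as descriptive strings."""
    findings = []
    if caudate_vein_mm is not None and caudate_vein_mm > 3.0:
        findings.append("caudate lobe vein > 3 mm: strongly suggestive of BCS")
    if ivc_gradient_mmhg is not None and ivc_gradient_mmhg > 3.0:
        findings.append("IVC pressure gradient > 3 mm Hg: hemodynamically significant narrowing")
    return findings

# Example: the 7 mm caudate lobe vein shown in Figure 22 and a 5 mm Hg gradient
print(bcs_ancillary_findings(caudate_vein_mm=7, ivc_gradient_mmhg=5))
```

Such thresholds are, of course, only ancillary signs and are always interpreted together with the direct venous and parenchymal findings described above.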
## 2.4. Treatment

In patients not responding to anticoagulation and nutritional therapy, radiological and surgical interventions may be contemplated including placement of portosystemic shunts and liver transplantation. In patients with short segment occlusion of HV or IVC, balloon angioplasty or stent insertion can be performed [3, 4, 12, 33, 36]. Imaging follow-up at routine intervals is necessary in all these cases to determine the long-term results of intervention. US examination coupled with Doppler is usually adequate to evaluate the patency of the native vessels or stents after intervention (Figure 14). Presence of ascites and any associated liver parenchymal changes can also be simultaneously assessed. However, cross-sectional imaging or catheter angiography may be required in cases of equivocal findings on Doppler or when the symptoms for which the intervention was performed have recurred in spite of an apparently normal Doppler study.

## 3. Portal Vein Thrombosis

Obstruction of PV or its branches may be secondary to thrombosis or due to encasement or infiltration by a tumor (Box 4). It can present acutely with sudden onset of right upper quadrant pain, nausea, and/or fever. However, in most patients, PVT occurs slowly and silently with patients presenting with vague abdominal pain and features of portal hypertension. It is often not discovered until gastrointestinal hemorrhage develops, or unless the thrombosis is detected during routine surveillance for a known underlying pathologic condition. In third world countries, it accounts for up to 30% and 75% of cases of portal hypertension in adults and children, respectively [37]. Thus, from a clinical standpoint, PVT can be divided into acute or chronic [38].
PVT occurring in children and in patients with cirrhosis can be considered separately, as their features and management differ from those of the other groups of patients [9].

Box 4: Causes of portomesenteric venous thrombosis.
- Cirrhosis
- Abdominal malignancies (hepatic, pancreatic, etc.)
- Hypercoagulable states (both inherited and acquired; see Box 1)
- Myeloproliferative disorders
- Local inflammation: umbilical vein catheterization, appendicitis, diverticulitis, pancreatitis, cholecystitis, duodenal ulcer, inflammatory bowel disease, tubercular lymphadenitis
- Traumatic/iatrogenic: splenectomy, gastrectomy, colectomy, cholecystectomy; liver transplantation; abdominal trauma; surgical/radiological portosystemic shunting

### 3.1. Etiology

Several etiological causes, either of local or systemic origin, might be responsible for PVT development (Box 4), although more than one factor is often identified [39]. A local risk factor can be identified in up to 30% of cases of PVT, with cirrhosis and malignant tumors accounting for the majority of them [9, 39–42]. In the rest of the patients, the most common local factor for PVT is an inflammatory focus in the abdomen [38, 43, 44]. However, the presence of cirrhosis, malignancy, and other intra-abdominal causes such as inflammation does not exclude the presence of systemic risk factors, and the two may often coexist [9]. Local factors are more often recognized at the acute stage of PVT than at the chronic stage [38]. Systemic risk factors are similar in prevalence in patients with acute and chronic PVT. An inherited or acquired hypercoagulable state is the usual culprit [39, 45–48].

### 3.2. Acute Portal Vein Thrombosis

Acute formation of a thrombus within the portal vein can be complete or eccentric, leaving a peripheral circulating lumen. The thrombus can also involve the mesenteric veins and/or the splenic vein. In cases of complete acute thrombosis, the patient usually presents with abdominal pain of sudden onset. Peritoneal signs, however, are usually absent except when an inflammatory focus is the cause of PVT or when PVT is complicated by intestinal ischemia. Acute PVT associated with an intra-abdominal focus of infection is frequently referred to as acute pylephlebitis. Clinical features of pylephlebitis include a high, spiking fever with chills, a painful liver, and sometimes shock. Small liver abscesses are common in this setting.

Depending on the extension, PVT can be classified into four categories [49]: (1) confined to the PV beyond the confluence of the SV; (2) extended to the SMV, but with patent mesenteric vessels; (3) extended to the whole splanchnic venous system, but with large collaterals; or (4) with only fine collaterals. This classification is useful to evaluate a patient’s operability and clinical outcome. Another classification proposed by Yerdel et al. [50] is also widely accepted (Figure 32).

Figure 32 Classification of PVT proposed by Yerdel et al.

Liver function is usually preserved in patients with acute PVT unless the patient has an underlying liver disease such as cirrhosis. This is because of two reasons: (1) compensatory increase in hepatic arterial blood flow (hepatic artery buffer response) and (2) rapid development of a collateral circulation from pre-existing veins in the porta hepatis (venous rescue) [51–54]. The hepatic artery buffer response manifests on imaging in the form of increased hepatic parenchymal enhancement of the involved segment in the arterial phase with attendant hypertrophy of the adjoining artery.
Formation of collaterals begins in a few days after portal vein obstruction and finalizes within 3 to 5 wk [53, 54]. As long as there is no extension of the thrombus to mesenteric venous arches, all manifestations of acute PVT are completely reversible, either by recanalization or by development of a cavernoma [9].It is clear from the above discussion that PVT is an ongoing process. Hence, a clear distinction between acute or chronic thrombus cannot always be made due to a considerable overlap between the two clinical situations. Formation of portal cavernoma has been suggested to be a marker of chronicity but it has been debated [55, 56]. ### 3.3. Imaging Diagnosis Imaging diagnosis of acute PVT can be readily made using noninvasive methods. #### 3.3.1. US and Doppler Ultrasound is a reliable noninvasive technique with a high degree of accuracy for the detection of PVT and is the investigation of choice. It has a reported sensitivity and specificity ranging between 60% and 100% [57]. Gray-scale ultrasound usually demonstrates hyperechoic material within the vessel lumen with occasional distension of the vein [39, 58, 59] (Figure 33(a)). Many times, a recently formed thrombus is virtually anechoic; hence an ultrasound Doppler is required for its demonstration. Doppler imaging will show absence of flow in part or all of the lumen [60]. Attendant hypertrophy of the hepatic artery can also be demonstrated (Figure 33(b)).Figure 33 Gray-scale US image showing thrombosed left portal vein (arrow in (a)). On application of colour Doppler (b), hypertrophy of the accompanying branch of hepatic artery can be seen (black arrow in (b)) with opening up of periportal collateral venous channels (white arrow). (a) (b)Endoscopic ultrasound (EUS) may have comparable sensitivity and specificity to colour Doppler (81% and 93%, resp.) in the diagnosis of PVT and appears to be more accurate than US or CT scan in assessment of portal invasion by tumours [61–63]. However, it is difficult to optimally visualize the intrahepatic portion of portal vein by EUS which remains a drawback.Recently, contrast-enhanced ultrasound (CEUS) has also been utilized to differentiate benign and malignant PVT using independent criteria [64, 65] (Figure 34). Use of pulsatile flow in a portal vein thrombus as the criterion for diagnosing malignant PVT yielded sensitivity of 82.5% and specificity of 100%, whereas positive enhancement of the PVT itself as a criterion for diagnosing malignancy yielded overall sensitivity and specificity of 100% for each [64]. In another study, CEUS could conclusively differentiate between benign and malignant PVT in 37 of 38 patients (97% sensitivity) [65].Figure 34 Side-by-side contrast-enhanced US (a) and gray-scale image (b) demonstrating absence of enhancement of the portal vein thrombus in the arterial phase (arrow in (a)) signifying benign nature of the thrombus. (a) (b) #### 3.3.2. CT A CT scan without contrast can show hyperattenuating material in the PV [66–68] (Figure 35(a)). After injection of contrast agent, lack of luminal enhancement is seen (Figure 35(b)). In addition, increased hepatic parenchymal enhancement in the arterial phase which becomes isodense to the liver in the portal venous phase is common and is described as transient hepatic enhancement difference [68–70] (Figures 36 and 37). Rim enhancement of the involved vessel may be noted due to flow in the dilated vasa vasorum or thrombophlebitis [71] (Figure 38). 
In contrast with a bland thrombus, which is seen as a low density, nonenhancing defect within the portal veins, a tumour thrombus enhances following contrast administration [72]. For the assessment of thrombus extension within the portal venous system as well as into the mesenteric veins, CT and MR angiography are more sensitive techniques than Doppler sonography, because the mesenteric veins are more difficult to visualize with ultrasound [73]. Also, changes in the bowel wall (described later) can be better appreciated on cross-sectional imaging than on US.

Figure 35 Axial NCCT (a) and CECT (b) images demonstrating mildly hyperdense thrombus occluding the main portal vein (arrows). Corresponding images at a caudal level in the same patient showing hyperdense thrombus in the SMV with associated fat stranding in the adjoining mesentery. (a) (b) (c) (d)

Figure 36 Axial CECT images obtained in the arterial (a) and venous (b) phases showing an abscess in the left lobe (asterisk) which had caused acute thrombosis of the left portal vein (pylephlebitis). Associated hepatic artery buffer response is seen in the form of increased enhancement of the left hepatic lobe in the arterial phase (arrows in (a)) which becomes essentially isodense on the portal venous phase. (a) (b)

Figure 37 Coronal oblique CECT image of a patient with acute necrotizing pancreatitis demonstrates thrombosed splenic vein (thick white arrows) and a segmental branch of right portal vein (thin white arrow) with hepatic artery buffer response in the form of differential hyperenhancement of the affected liver segment (black arrows).

Figure 38 Coronal oblique CECT image demonstrating thrombosed portal vein as well as the SMV (arrows) with rim-enhancement of their walls.

#### 3.3.3. MRI

MRI is equally sensitive in the detection of PVT. At spin-echo MR, the clot appears isointense to hyperintense on T1-weighted images and usually has a more intense signal on T2-weighted images, while older clots appear hyperintense only on T2-weighted images [51] (Figure 39). Tumor thrombi can be differentiated from bland thrombi because they appear more hyperintense on T2-weighted images, demonstrate diffusion restriction, and enhance with gadolinium (Figures 40 and 41). Gradient-echo MR might help to better evaluate any equivocal findings on spin-echo MR images [51]. Contrast-enhanced MR angiography (CE-MRI) is superior to Doppler US in detecting partial thrombosis and occlusion of the main portal venous vessels [57]. It also identifies portosplenic collaterals more adequately than colour Doppler.

Figure 39 Axial T2-weighted MR image demonstrating mildly hyperintense thrombus (arrow) in the right portal vein.

Figure 40 (a) Axial T2-weighted fat saturated image in a patient with liver cirrhosis and multifocal hepatocellular carcinoma showing occlusive heterogeneously hyperintense tumor thrombus (asterisk and arrows) expanding the right portal vein. It shows diffusion restriction (asterisk and arrows in (b)). One of the tumoral masses can also be seen on this image (thick arrow). (a) (b)

Figure 41 Axial CEMRI images obtained in the arterial (a) and venous (b) phases showing a lobulated lesion showing arterial phase enhancement (asterisk in (a)) with washout of contrast on the venous phase. Associated enhancing right portal vein tumor thrombus (arrows) is present. (a) (b)
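As a brief aside on the test-performance figures quoted earlier in this section for US, EUS, and CEUS, the percentages follow from the standard definitions of sensitivity and specificity; the worked example below is illustrative only and simply restates the arithmetic behind the quoted 97% figure for CEUS [65].

$$\text{sensitivity} = \frac{TP}{TP + FN}, \qquad \text{specificity} = \frac{TN}{TN + FP}, \qquad \text{e.g.,}\ \frac{37}{38} \approx 0.97 \ (97\%).$$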
### 3.4. Treatment

The goal of treatment in acute PVT is recanalization of the thrombosed vein using anticoagulation and thrombolysis (either transcatheter or surgical) to prevent the development of portal hypertension and intestinal ischemia. When local inflammation is the underlying cause of the PVT, appropriate antibiotic therapy is warranted with correction of the causal factors, if needed [9].

### 3.5. Chronic Portal Vein Thrombosis

When acute PVT is asymptomatic and goes undetected, patients present later in life and are diagnosed either incidentally on imaging done for unrelated issues or when investigations for portal hypertension related complications are carried out. In patients with chronic PVT, the actual thrombus is commonly not visualized. Rather, the obstructed portal vein is replaced by a network of portoportal collateral veins bypassing the area of occlusion (portal cavernoma) [54]. However, these collaterals are not sufficient and do not normalize hepatopetal blood flow, and hence portal hypertension eventually develops [74].

The development of a collateral circulation, with its attendant risk of variceal hemorrhage, is responsible for most of the complications and is the most common manifestation of PV obstruction [74]. Bleeding is generally well-tolerated, and bleed-related mortality in patients with PVT is much lower than in patients with cirrhosis, probably due to preserved liver function and because the patients are usually younger [44, 75–80]. Usually the gastroesophageal varices are large, and gastric varices are particularly frequent, being seen in 30–40% of patients [81]. Ectopic varices are significantly more frequent in patients with chronic PVT than in patients with cirrhosis and occur commonly in the duodenum, anorectal region, and gallbladder bed [82–84]. Collaterals can also develop along the gastroepiploic pathway (Figure 42). Other sequelae of the subsequent portal hypertension, such as ascites, are less frequent.

Figure 42 Axial MIP image showing a severely attenuated and partially calcified retropancreatic splenic vein (interrupted arrows) resulting in formation of a prominent gastroepiploic collateral channel (arrowheads) between the SMV and the remnant splenic vein at splenic hilum (solid arrow) along the greater curvature of stomach. Asterisk denotes the gastric lumen.

### 3.6. Imaging Features and Diagnosis

#### 3.6.1. US and Doppler

Portal cavernoma produces a distinctive tangle of tortuous vessels in the porta hepatis which can be easily demonstrated on US and Doppler [85] (Figure 43). Gall bladder wall varices can also be seen, which should not be confused with acute cholecystitis. For the diagnosis of chronic PVT, Doppler USG has a sensitivity and specificity above 95% and should be the initial imaging investigation of choice in these patients [86, 87].

Figure 43 Gray-scale US (a) image showing replacement of the main portal vein by an ill-defined echogenic area containing multiple subtle anechoic tubular structures. On application of colour Doppler (b) turbulent flow can be seen within these anechoic structures consistent with portal cavernoma. (a) (b)

#### 3.6.2. CT and MRI

Cross-sectional imaging can assess the true extent of the periportal collaterals as well as associated manifestations of chronic PVT like splenomegaly, portosystemic collaterals, and shunts in relation to the portal venous system [68, 88]. It also provides an anatomical road map prior to shunt surgery [87].
In the absence of cirrhosis, there might be an enlarged caudate lobe, together with an atrophic left lateral segment or right lobe of the liver and a hypertrophied hepatic artery [89, 90]. Typically, the umbilical vein is not dilated as it connects to the left portal vein branch downstream of the obstruction [9].

### 3.7. Portal Hypertensive Biliopathy/Portal Cavernoma Cholangiopathy

Periportal collaterals can produce compression and deformation of the biliary tract (both extra- and intrahepatic) and gall bladder wall, resulting in the so-called portal hypertensive biliopathy [91, 92] (Figure 44), also called portal cavernoma cholangiopathy. These collateral veins are caused by reopening of the two preformed venous systems near the extrahepatic bile ducts: the epicholedochal (ECD) venous plexus of Saint [93] and the paracholedochal (PACD) veins of Petren [94]. The ECD plexus of Saint forms a mesh on the surface of the common bile duct (CBD), while the PACD venous plexus of Petren runs parallel to the CBD. Engorgement of these collaterals can cause compressive and ischemic changes on the biliary tree, manifesting as indentations, strictures, dilatation of the intrahepatic biliary radicles, and intraductal lithiasis (Figures 45–47). Dilatation of epicholedochal veins results in thickened and enhancing bile duct walls on cross-sectional images and may simulate a mass (pseudocholangiocarcinoma sign) [91] (Figure 48). The left hepatic duct is involved more commonly (38–100%) and more severely [87]. Portal biliopathy usually remains asymptomatic (62–95%) [87]. Common symptoms are jaundice, biliary colic, and recurrent cholangitis and are seen with longstanding disease and the presence of stones [95–99]. Various sequelae like choledocholithiasis, cholangitis, and secondary biliary cirrhosis can develop in longstanding disease [87]. MRCP is the first line of investigation [100]. ERCP is only recommended if a therapeutic intervention is contemplated [100]. MRCP is also helpful in differentiating choledochal varices from stones. Endoscopic ultrasonography may also show the characteristic lesions of portal biliopathy [101, 102]; however, it is not recommended as a part of routine work-up.

Figure 44 Graphic illustration demonstrating opening up of epi- and paracholedochal venous collaterals in chronic PVT causing portal biliopathy.

Figure 45 Coronal oblique CECT image (a) showing multiple paracholedochal collaterals (solid black arrows) causing extrinsic compression over the CBD (interrupted arrow). (b) 2D MRCP image of the same patient demonstrating undulating margins of CBD (arrow) due to the compression. (a) (b)

Figure 46 (a) Thick-slab 3D MRCP image of a patient with portal biliopathy demonstrating extrinsic vascular impression over CBD by the paracholedochal collaterals (solid arrows). The distal CBD is narrowed by these collaterals with resultant upstream biliary dilatation. Undulating margins of biliary system can also be seen (interrupted arrow) with a grossly distended gall bladder. (b) 3D MRCP image from another patient showing wavy contour of the mid- and distal CBD due to portal biliopathy with resultant narrowing and gross bilobar biliary dilatation. (a) (b)

Figure 47 Coronal oblique CECT image showing chronic, partially calcified, occlusive thrombus involving the main portal vein (black arrow) with multiple tortuous periportal collateral channels (solid white arrows). Splenic vein is also partially thrombosed (asterisk).
Gall bladder calculi (interrupted arrow) and ascites can also be seen.

Figure 48 Axial CECT image of a patient with EHPVO showing multiple tiny paracholedochal collaterals appearing as continuous enhancement of one of the biliary radicals in right hepatic lobe (arrows) mimicking cholangiocarcinoma (pseudocholangiocarcinoma sign). Splenic infarct is also seen due to associated splenic vein thrombosis (interrupted arrow) along with ascites (asterisk).

### 3.8. Treatment

Therapy for chronic PVT basically revolves around management of complications of portal hypertension including gastrointestinal bleeding, hypersplenism, and ascites [9]. Prevention of extension of thrombosis and treatment of portal biliopathy are other facets of treatment [9].

### 3.9. Extrahepatic Portal Venous Obstruction

It is a distinct clinical entity characterized by obstruction of the extrahepatic PV, with or without involvement of intrahepatic PV branches, in the setting of well-preserved liver function. It does not include isolated thrombosis of the SV or SMV [87, 100]. PVT seen in cirrhosis or HCC usually involves the intrahepatic PV radicals and is not associated with portal cavernoma formation or development of portal hypertension, both of which are integral to the definition of EHPVO [87]. It is primarily a childhood disorder but can present at any age. Patients usually present with symptoms or complications of secondary portal hypertension, including variceal bleeding, ascites, and features of hypersplenism. Jaundice can develop due to portal biliopathy but is usually not severe [87].

### 3.10. Treatment

Therapeutic approach is primarily focused on management of an acute episode of variceal bleeding followed by secondary prophylaxis [87]. Other issues such as hypersplenism, growth retardation, portal biliopathy, and minimal hepatic encephalopathy need to be individualized depending on the age of presentation, site and nature of obstruction, and clinical manifestations [87].

### 3.11. Portal Vein Thrombosis in Patients with Cirrhosis

PVT is most common in patients with preexisting cirrhosis. The prevalence of PVT increases with the severity of the cirrhosis, being less than 1% in patients with compensated cirrhosis [103], but 8%–25% in candidates for liver transplantation [104]. In patients with cirrhosis, portal venous obstruction is commonly related to invasion by hepatocellular carcinoma [105]. Neoplastic obstruction should always be considered, especially when the portal vein is larger than 23 mm in diameter, when the thrombus demonstrates arterial phase enhancement (known as the threads-and-streaks pattern of enhancement) [70, 105] (Figure 49), when pulsatile flow is seen on Doppler ultrasound, and when serum alpha fetoprotein levels are increased [106].

Figure 49 Axial (a) and coronal (b) MIP images of a patient with liver cirrhosis and multifocal hepatocellular carcinoma demonstrating multiple thin streaks of arterial phase enhancement within the main portal vein (arrows in (b)) as well as its intrahepatic branches (arrows in (a)) consistent with tumor thrombus (threads-and-streaks sign). (a) (b)
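The features listed above that raise suspicion of a neoplastic rather than bland thrombus in cirrhosis can be gathered into a simple checklist. The sketch below is illustrative only, assuming the thresholds and criteria quoted in the preceding paragraph [105, 106]; the function and parameter names are hypothetical and do not constitute a validated decision rule.

```python
# Illustrative checklist of reported features favoring tumor thrombus over bland PVT
# in a cirrhotic patient; names are hypothetical, thresholds are those quoted above.

def features_favoring_tumor_thrombus(pv_diameter_mm, thrombus_enhances,
                                     pulsatile_flow_on_doppler, afp_elevated):
    """Return the reported tumor-thrombus features that are present, as strings."""
    features = []
    if pv_diameter_mm > 23:
        features.append("portal vein diameter > 23 mm")
    if thrombus_enhances:
        features.append("arterial-phase (threads-and-streaks) enhancement of the thrombus")
    if pulsatile_flow_on_doppler:
        features.append("pulsatile flow within the thrombus on Doppler")
    if afp_elevated:
        features.append("raised serum alpha-fetoprotein")
    return features

# Example: the constellation illustrated in Figure 49
print(features_favoring_tumor_thrombus(26, True, True, True))
```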
## 4. Mesenteric Vein Thrombosis Although arterial causes of acute mesenteric ischemia are far more common than venous causes, venous thrombosis still accounts for about 5%–20% of cases of mesenteric ischemia and remains an important cause of acute bowel infarction [107–110]. It most often results from thrombosis of the SMV [111]. Owing to its nonspecific clinical presentation, imaging plays a critical role in the early diagnosis of MVT. With the improvements in contrast and spatial resolution, both in CT and MRI, bowel wall abnormalities resulting from a lack of venous drainage can be assessed accurately, while correctly depicting the mesenteric arterial circulation. ### 4.1.
Clinical Features Patients with acute MVT usually present with abdominal pain out of proportion to the physical findings, nausea, vomiting, and constipation, with or without bloody diarrhea [110]. Abdominal symptoms may then gradually worsen with the development of peritonitis, which indicates intestinal infarction and can be seen in one-third to two-thirds of patients with acute MVT [112]. Abdominal distension can be present in up to 50% of cases [110]. Patients with chronic MVT are often asymptomatic due to extensive venous collateralization and are unlikely to develop intestinal infarction. Complications such as variceal bleeding can occur in late stages secondary to portal hypertension. Weight loss, food avoidance, vague postprandial abdominal pain, or distention may be present. The pain usually occurs within the first hour after eating, diminishing over the next 1-2 hours. Chronic thrombosis of the portomesenteric vasculature is usually detected as an incidental finding during evaluation of other abdominal pathologic conditions, such as portal hypertension, malignancy, or chronic pancreatitis [110]. ### 4.2. Classification of MVT MVT is classified on the basis of etiology into either primary or secondary [111]. It is considered primary, or idiopathic, when no predisposing factor can be found. Due to an increased awareness of predisposing disorders and improvements in imaging technology, the incidence of idiopathic MVT continues to decline [113, 114]. Patients with a predisposing condition such as prothrombotic and myeloproliferative disorders, neoplasms, diverse inflammatory conditions, recent surgery, portal hypertension, and miscellaneous causes such as oral contraceptives or pregnancy are said to have secondary MVT (Box 4). ### 4.3. Anatomy of the Mesenteric Venous System Multiple small veins (venae rectae) originate from the bowel wall and join to form venous arcades. The small bowel and the proximal colon as far as the splenic flexure are drained by these venous arcades through the pancreaticoduodenal, jejunal and ileal, ileocolic, right colic, and middle colic veins. The confluence of these veins forms the SMV. The inferior mesenteric vein (IMV) can drain either directly into the SV, into the SMV, or into the angle of the splenoportal confluence. It drains the splenic flexure, descending colon, sigmoid colon, and part of the rectum. ### 4.4. Pathophysiology of Bowel Ischemia The location and extent of venous thrombosis and the status of collateral circulation are important predictors of bowel ischemia and subsequent infarction. It has been demonstrated that patients with thrombosis of the venae rectae and venous arcades are at greater risk of developing bowel abnormalities than those with thrombosis confined to the SMV close to the splenoportal confluence [115]. The etiology of the thrombosis often determines its location: intra-abdominal infections like pancreatitis affect the larger veins first, while hematological disorders involve the smaller veins first, followed by the larger venous trunks [112]. When the thrombus evolves slowly and there is enough time for collaterals to develop, bowel infarction is unlikely [116]. ### 4.5. Imaging #### 4.5.1. Plain Radiography/Barium Studies Most often, a nonspecific pattern of dilated, fluid-filled bowel loops can be demonstrated on these studies.
Submucosal hemorrhage leading to mural thickening and the so-called "thumbprinting," bowel separation due to mesenteric thickening, pneumatosis intestinalis, and portomesenteric venous gas can occasionally be seen in late-stage disease. However, the findings are often nonspecific and of little or no use in diagnostic evaluation [117, 118]. #### 4.5.2. US and Doppler Doppler US allows direct real-time evaluation of the mesenteric veins and provides flow information on the visceral vessels; however, compared to the pivotal role played by Doppler US in the detection of PVT, visualization of the mesenteric veins is often hampered by a poor acoustic window due to overlying bowel gas. Nevertheless, the segment of the superior mesenteric vein adjoining the splenoportal confluence can frequently be imaged in experienced hands. Bowel wall thickening and free intraperitoneal fluid can also be detected, providing a clue to the underlying venous abnormality. #### 4.5.3. CT Widely considered to be the imaging investigation of choice, CT permits optimal evaluation of vascular structures, the bowel wall, and the adjacent mesentery. Multidetector row CT scanners have now enabled volumetric acquisitions in a single breath hold, eliminating motion artifact and suppressing respiratory misregistration, allowing sensitivity rates of up to 95% in the detection of MVT [119]. Helical CT angiography and three-dimensional gadolinium-enhanced MR angiography should be considered the primary diagnostic modalities for patients with a high clinical suspicion of mesenteric ischemia. Data acquisition should be performed at peak venous enhancement, with the delay between the start of injection and the commencement of image acquisition tailored for that purpose. Protocols typically use 55–70-second delays following administration of 125–150 mL of intravenous contrast medium at a rate of 3.5–5 mL/sec through a peripheral vein. Imaging is completed with coronal and sagittal reformation, with the creation of (curved) MIP images that allow the entire course of the thrombosed vein to be viewed on a single image. Unenhanced data acquisition preceding the portal phase is especially useful for detecting mural hemorrhage. #### 4.5.4. Venous Abnormalities Thrombus appears as a well-demarcated, persistent, partial, or complete intraluminal filling defect, which may be surrounded by rim-enhancing venous walls [71] (Figure 50). It has been reported that, on noncontrast-enhanced CT, the thrombus has a low density during the acute period (within 1 wk of disease onset), a high density during the subacute period (1–3 wk after onset) with a CT value higher than that of the abdominal aorta (the so-called "mesenteric vein angiographic phenomenon") (Figure 35), and a low density during the chronic period (>3 wk), accompanied by lateral branch angiogenesis [120]. In cases of tumoral infiltration, the thrombus may enhance following intravenous contrast administration. Figure 50 Coronal MIP image showing complete portomesenteric vein thrombosis (black arrows) with associated mesenteric stranding (white arrows). Depending on the extent and amount of thrombus, enlargement of the affected vein may be seen. Marked venous enlargement can be seen in tumoral thrombus. It also serves as a useful sign of acute thrombus, because in chronic thrombosis the vein tends to atrophy. Due to the congestion caused by thrombosis, engorgement of the mesenteric veins can also be seen.
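The reported time course of thrombus attenuation on noncontrast CT can be restated as a small lookup. The sketch below is a minimal illustration of the pattern described above, not a validated staging rule; the function name and the sharp week boundaries are assumptions made for this example, and real cases overlap.

```python
# Illustrative lookup restating the reported temporal evolution of venous
# thrombus attenuation on noncontrast CT. The week boundaries are the cited
# ones; real cases overlap and require clinical correlation.
def expected_ncct_appearance(weeks_since_onset: float) -> str:
    if weeks_since_onset < 1:
        return "acute: low-attenuation thrombus"
    if weeks_since_onset <= 3:
        return ("subacute: high-attenuation thrombus with a CT value higher than "
                "the abdominal aorta ('mesenteric vein angiographic phenomenon')")
    return "chronic: low-attenuation thrombus with lateral branch (collateral) formation"

for weeks in (0.5, 2, 6):
    print(f"{weeks} wk -> {expected_ncct_appearance(weeks)}")
```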
#### 4.5.5. Bowel Abnormalities Associated bowel abnormalities most commonly manifest as mural thickening [121]. Wall thickening may result from intramural edema, which appears as a hypoattenuating bowel wall, or intramural hemorrhage, which causes increased attenuation of the affected bowel wall [121, 122] (Figure 51). Both of these findings are more common and prominent with venous congestion than with arterial occlusion [122]. Figure 51 Axial NCCT image showing submucosal bowel wall hemorrhage appearing as a linear hyperdense rim (solid arrows). Small bowel dilatation (asterisk) and pneumatosis intestinalis (interrupted arrow) can also be seen. The bowel wall may be stratified into two or three thickened layers, referred to as the halo sign or target sign (Figure 52). The inner mucosal and outer muscularis propria rings of high attenuation are separated by a submucosal layer of low attenuation representing edema [111]. Figure 52 Axial CECT image demonstrating the halo sign in one of the jejunal loops due to inner mucosal and outer muscularis propria rings of high attenuation separated by a submucosal layer of low attenuation representing edema in a patient with SMV thrombosis. Extensive mesenteric stranding and minimal ascites can also be seen. Abnormal enhancement is also a specific sign of bowel ischemia in patients with MVT. In normal subjects, a smooth homogeneous inner rim of enhancement can be seen during the venous phase of CT. Prolonged venous congestion impedes the arterial supply, with a subsequent decrease of bowel wall enhancement, which has been reported as highly specific for venous bowel infarction [121, 123] (Figure 53). Figure 53 Axial CECT image showing a nonenhancing loop of jejunum (arrow) due to SMV thrombosis. Bowel dilatation is a nonspecific but important sign which can result either from aperistaltic bowel (as a reflex response to ischemic injury) or transmural bowel infarction resulting in total loss of contractile function [111] (Figure 54). Figure 54 Axial CECT image showing a nondependent focus of portal venous gas (arrow) with mesenteric stranding and ascites. In late stages, intramural gas can be seen (pneumatosis intestinalis), which may dissect into the venous system resulting in portal or mesenteric venous gas (Figures 51 and 54). Intrahepatic portal vein gas should be differentiated from aerobilia. The distribution of hepatic gas in patients with aerobilia is central, around the portal hilum, and does not extend to within 2 cm of the liver capsule [124]. Gas in mesenteric vein branches should be differentiated from pneumoperitoneum. Pneumoperitoneum does not have a linear, ramifying configuration and can be present in the antimesenteric border of the intestine. However, these signs are nonspecific and can be seen in non-ischemic causes like infection [125, 126]. Even in patients with bowel ischemia, they are not highly predictive of transmural infarction since partial ischemia of the bowel wall may also be present. Frank perforation will lead to free intraperitoneal air. #### 4.5.6. Mesenteric Abnormalities Due to the underlying venous congestion and/or superimposed inflammatory process, mesenteric fat stranding is frequently seen with MVT (Figures 50, 52, and 54). Compared to arterial occlusion, this finding is far more common and more pronounced in cases of venous thrombosis [122]. Free intraperitoneal fluid or ascites can be seen in late stages (Figures 51, 52, and 54). #### 4.5.7.
MRI With the advent of 3D gadolinium-enhanced MR angiographic techniques with short acquisition times (single breath hold), the sensitivity of MRI in detecting MVT equals that of MDCT, with the added advantages of improved soft tissue resolution, lack of ionizing radiation, and the better safety profile of paramagnetic contrast agents compared with iodinated agents. However, the severity of stenosis can be overestimated on MR angiography since it relies indirectly on detection of vascular signal, which can be degraded by turbulence. Also, MR angiography is less sensitive for detection of calcification, its spatial resolution is lower compared with that of CT angiography, and stents cannot be visualized due to the signal void caused by metallic material [117]. Such protocols take 30–60 minutes to complete, considerably longer than CT angiography [117]. Thus, MR is usually reserved for patients in whom CT angiography is contraindicated. #### 4.5.8. Catheter Angiography Conventional angiography is reserved for cases with equivocal findings on noninvasive imaging and is also used in conjunction with transcatheter therapeutic techniques in the management of symptomatic portal and mesenteric venous thrombosis. ### 4.6. Treatment Systemic anticoagulation for the prevention of thrombus propagation is the current mainstay of therapy for patients with acute mesenteric venous thrombosis without bowel ischemia [112]. Transcatheter thrombolysis (either percutaneous or through a transjugular route) has also been attempted in some cases to good effect [120]. When intestinal infarction has already developed and the patient has features of peritonitis, emergency laparotomy for resection of the necrotic parts of the gut should be performed [127].
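The management summary above can be restated as a simple decision sketch. This is a purely schematic, hedged illustration of the cited approach with hypothetical boolean inputs; it is not clinical guidance and does not capture the individualized decisions described in the references.

```python
# Purely schematic restatement of the acute MVT management summarized above.
# Inputs are hypothetical booleans; this is illustrative, not clinical guidance.
def suggested_management(bowel_ischemia_present: bool,
                         infarction_with_peritonitis: bool) -> str:
    if infarction_with_peritonitis:
        return "emergency laparotomy with resection of necrotic bowel"
    if not bowel_ischemia_present:
        return "systemic anticoagulation to prevent thrombus propagation"
    # Intermediate cases are individualized; transcatheter thrombolysis
    # has been attempted in selected cases.
    return "individualized therapy (transcatheter thrombolysis in selected cases)"

print(suggested_management(bowel_ischemia_present=False,
                           infarction_with_peritonitis=False))
```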
## 5. Conclusions With the advancements in imaging technology, the rate of detection of splanchnic venous thrombosis has gradually increased. The consequences of these thromboses can be severe, including fulminant liver failure, bowel infarction, and variceal bleeding, with high mortality rates. Clinical features are often nonspecific and overlap with many other abdominal emergencies. Since this entity is still relatively rare, no uniform treatment protocols have been established. Conservative medical treatment is often ineffective, especially in cases with extensive thrombosis and organ damage, underlining the need for prompt diagnosis and commencement of therapy. Ultrasound coupled with Doppler is highly effective in detecting hepatic vein, portal vein, and IVC thrombosis, with attendant findings of ascites, splenomegaly, and liver parenchymal changes. Cross-sectional imaging serves primarily as a problem-solving tool and in evaluation of associated complications like varices and portal biliopathy. However, for mesenteric venous thrombosis, contrast-enhanced MDCT and MRI are superior not only in detection of the primary vascular abnormality but also in delineating the changes in the bowel wall and mesentery. Catheter angiography is now reserved essentially for cases in which therapeutic intervention is planned.

---
*Source: 101029-2015-10-12.xml*
# Association of FTO Mutations with Risk and Survival of Breast Cancer in a Chinese Population **Authors:** Xianxu Zeng; Zhenying Ban; Jing Cao; Wei Zhang; Tianjiao Chu; Dongmei Lei; Yanmin Du **Journal:** Disease Markers (2015) **Publisher:** Hindawi Publishing Corporation **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2015/101032 --- ## Abstract Recently, several studies have reported associations between fat mass and obesity-associated (FTO) gene mutations and cancer susceptibility, but little is known about their association with the risk and survival of breast cancer in the Chinese population. The aim of this study was to examine whether cancer-related FTO polymorphisms are associated with the risk and survival of breast cancer, and with BMI levels in controls, in a Chinese population. We genotyped six FTO polymorphisms in a case-control study including 537 breast cancer cases and 537 controls. The FTO rs1477196 AA genotype was associated with significantly decreased breast cancer risk [odds ratio (OR) = 0.54, 95% confidence interval (CI): 0.34–0.86] compared to the GG genotype, and this association was found only in women with BMI < 24 kg/m2 (OR = 0.41, 95% CI: 0.22–0.76); the rs16953002 AA genotype conferred significantly increased breast cancer risk (OR = 1.80, 95% CI: 1.23–2.63) compared to the GG genotype. Haplotype analysis showed that the FTO TAC haplotype (rs9939609-rs1477196-rs1121980) carried significantly reduced breast cancer risk (OR = 0.76, 95% CI: 0.62–0.93) compared with the TGC haplotype. However, we found no association between FTO polymorphisms and breast cancer survival. These findings suggest that variants in the FTO gene may influence breast cancer susceptibility. --- ## Body ## 1. Introduction Although breast cancer mortality rates have declined in recent years owing to increased awareness, early detection, and better treatment options, it is still the most common cancer and the leading cause of cancer-related death among females worldwide [1]. In China, about 187,000 breast cancer cases are diagnosed each year, of which approximately 48,000 die of this disease [2]. Potential risk factors, including age, reproductive factors, personal or family history of breast disease, genetic predisposition, and environmental factors, may be involved in the pathogenesis of breast cancer [3]. Early reports have described the clinical significance of genetic predisposition alleles in breast cancer development [4–6]. The fat mass and obesity-associated (FTO) gene plays a role in the regulation of food intake and was originally identified as a susceptibility gene for obesity [7–9]. Obesity contributes to the development of a number of chronic diseases, such as metabolic syndrome, fatty liver, heart disease, and cancer [10–12]. It has been shown that the risk of cancer is 50% higher in obese women than in those with normal weight, and there is growing scientific evidence that excess body weight contributes to the development of breast cancer [13, 14]. Although the underlying biological mechanisms by which body fatness becomes a risk for cancer development are not fully understood, recent studies have shown that a set of single nucleotide polymorphisms (SNPs) in the FTO gene may be associated with cancer risk [15]. To date, a number of FTO SNPs, such as rs1477196, rs11075995, rs17817449, and rs1121980, have been reported to be associated with breast cancer risk in populations of different ethnic origins, but the results lack consistency and research in the Chinese population is scarce [16–20].
Besides, the association of FTO SNPs with survival of breast cancer has not been studied in the Chinese population. To explore the role of FTO polymorphisms in breast cancer development and progression, we performed a case-control study to assess the associations of six cancer-related FTO polymorphisms (rs9939609, rs1477196, rs6499640, rs16953002, rs11075995, and rs1121980) with breast cancer risk and prognosis, and with BMI levels in healthy controls, in a Chinese population. ## 2. Materials and Methods ### 2.1. Study Population Newly pathologically confirmed female breast cancer cases were consecutively recruited between 2008 and 2014 from the Third Affiliated Hospital of Zhengzhou University. A total of 537 eligible breast cancer cases completed in-person interviews, with a response rate of 91.5%. At the same time, 537 healthy controls, frequency matched to cases by age (5-year age groups) and without a history of cancer, were randomly selected from the same hospital, with a response rate of 90.1%. At enrollment, all participants signed a written informed consent form and were then interviewed by trained interviewers using a structured questionnaire covering demographic characteristics (age and education), reproductive variables (age at menarche, menopause status, and ever breastfeeding), medical information (histological grade, clinical stage, tumor size, estrogen receptor status, and progesterone receptor status), and family history of breast cancer. Height and weight were also measured to calculate body mass index (BMI). Fasting peripheral blood samples were collected from each subject at enrollment. All patients were followed up regularly (every 6 months) until death or the end of follow-up. This study was approved by the Ethics Committee of the Third Affiliated Hospital of Zhengzhou University. ### 2.2. DNA Extraction and Genotyping A total of 8 SNPs (rs9939609, rs17817449, rs8050136, rs1477196, rs6499640, rs16953002, rs11075995, and rs1121980) in the FTO gene have been reported to be associated with cancer risk [15]. After searching the http://www.hapmap.org database, we found that rs9939609 tags rs17817449 and rs8050136 with r2 > 0.95 in the population of Chinese Han in Beijing (CHB). We therefore selected 6 SNPs (rs9939609, rs1477196, rs6499640, rs16953002, rs11075995, and rs1121980), all located in intron regions of the FTO gene, for genotyping with possible association with susceptibility and prognosis of breast cancer [15]. Genomic DNA was extracted from the buffy coats using a genomic DNA purification kit (Promega, Madison, WI, USA). Genotyping for the selected SNPs was performed on an ABI real-time PCR system using the TaqMan SNP genotyping assay. Polymerase chain reactions (PCR) were carried out using the standard cycling conditions recommended by the manufacturer (an initial denaturation step at 95°C for 10 minutes, followed by 40 cycles of 95°C for 15 seconds and annealing at 60°C for 1 minute). The PCR results were analyzed with the Detection System software. Finally, the quality and potential misclassification of the genotyping were assessed by reevaluating 10% of the DNA samples in duplicate (108 samples in total), randomly selected from the whole study population. The concordance rate for the quality control samples was 100%. ### 2.3. Statistical Analysis We used SAS software version 9.3 (SAS Institute, Inc.) for statistical analyses.
Chi-square tests and t-tests were used to evaluate case-control differences in the distribution of the selected characteristics, including age, education, age at menarche, menopause status, ever breastfeeding, family history of breast cancer, and BMI. Odds ratios (ORs) and 95% confidence intervals (CIs) for FTO SNPs and breast cancer risk were calculated by unconditional logistic regression models, using the most common genotype as the referent group. The linkage disequilibrium between loci in the FTO gene and the deviation of allele frequencies from Hardy-Weinberg equilibrium in healthy controls were assessed with HaploView version 4.2 [21, 22]. We used HAPSTAT 3.0 software to evaluate associations between haplotypes and cancer risk. Both univariate ANOVA and multivariate ANCOVA analyses were performed to determine the effects of the FTO polymorphisms on BMI levels in healthy controls. Effects of the different genotypes on breast cancer survival were evaluated by hazard ratios (HRs) and 95% CIs, using univariate and multivariate Cox regression analysis. A two-tailed P value of 0.05 was considered statistically significant.
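To make the odds ratio calculation concrete, here is a minimal sketch (not part of the original analysis, which used unconditional logistic regression in SAS) that computes a crude OR with a Woolf-type 95% CI directly from the genotype counts reported in Table 2 for rs1477196 (AA versus GG). Because cases and controls were frequency matched by age, this crude estimate agrees closely with the age-adjusted value reported in the Results (OR = 0.54, 95% CI: 0.34–0.86).

```python
import math

def crude_or_ci(case_var, case_ref, ctrl_var, ctrl_ref, z=1.96):
    """Crude odds ratio and Woolf (log-normal) 95% CI from a 2x2 table of
    genotype counts: variant genotype vs. reference genotype in cases/controls."""
    odds_ratio = (case_var * ctrl_ref) / (case_ref * ctrl_var)
    se_log_or = math.sqrt(1 / case_var + 1 / case_ref + 1 / ctrl_var + 1 / ctrl_ref)
    lower = math.exp(math.log(odds_ratio) - z * se_log_or)
    upper = math.exp(math.log(odds_ratio) + z * se_log_or)
    return odds_ratio, lower, upper

# rs1477196 counts from Table 2: AA (variant) and GG (reference)
or_, lo, hi = crude_or_ci(case_var=33, case_ref=272, ctrl_var=52, ctrl_ref=231)
print(f"rs1477196 AA vs GG: OR = {or_:.2f}, 95% CI: {lo:.2f}-{hi:.2f}")
# -> OR = 0.54, 95% CI: 0.34-0.86
```

The Hardy-Weinberg equilibrium check in the controls and the adjusted estimates in Tables 2–4 require, respectively, a chi-square goodness-of-fit test and multivariable logistic regression, which this sketch does not reproduce.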
## 3. Results The distributions of selected characteristics of the study subjects are shown in Table 1. Cases and controls were evenly matched by age. Compared to healthy controls, cases were more likely to have an earlier age at menarche, a higher BMI, and a lower education level, and were less likely to have ever breastfed. No significant difference was found with respect to menopause status or family history of breast cancer between the two groups. The total positive expression rates of ER and PR among cases were 63.9% and 55.7%, respectively. The proportions of histologic grades I–III among cases were 27.9%, 42.3%, and 29.8%, and the proportions of clinical stages 1-2 and 3-4 were 76.2% and 23.8%, respectively. The percentages of patients with tumor size ≤ 2 cm and > 2 cm were 36.1% and 63.9%, respectively. Table 1 Selected characteristics of cases and controls. Characteristics Cases (n=537) Controls (n=537) P value Age (year) 47.9 ± 9.2 48.3 ± 8.9 0.50 Age at menarche 14.7 ± 1.7 14.9 ± 1.7 0.02 BMI (kg/m2) 24.0 ± 3.7 23.5 ± 3.4 0.007 Menopause 301 (56.1) 323 (60.1) 0.17 Family history of breast cancer 16 (3.0) 10 (1.9) 0.23 Ever breastfeeding 477 (88.8) 507 (94.4) 0.001 Education Junior middle school or below 258 (48.0) 204 (38.0) 0.002 Senior middle school 163 (30.4) 207 (38.5) College or above 116 (21.6) 126 (23.5) ER (n, %) Positive 343 (63.9) Negative 194 (36.1) PR (n, %) Positive 299 (55.7) Negative 238 (44.3) Histologic grade (n, %) I 150 (27.9) II 227 (42.3) III 160 (29.8) Clinical stage 1-2 409 (76.2) 3-4 128 (23.8) Tumor size (n, %) ≤2 194 (36.1) >2 343 (63.9) Frequencies of the 6 SNPs in the FTO gene are shown in Table 2.
Hardy-Weinberg equilibrium tests of the selected SNPs showed no deviation among the controls (P>0.05). Two SNPs in the FTO gene were significantly associated with risk of breast cancer. Specifically, the AA genotype of rs1477196 was associated with significantly decreased risk of breast cancer (OR = 0.54, 95% CI: 0.34–0.86) compared with the GG genotype, and subjects with the AA genotype of rs16953002 had significantly increased cancer risk (OR = 1.80, 95% CI = 1.23–2.63) compared with those carrying the GG genotype. These associations remained statistically significant after further adjustment for age, age at menarche, BMI, ever breastfeeding, and education (OR = 0.56, 95% CI = 0.34–0.90; OR = 1.77, 95% CI: 1.20–2.61). Further stratified analysis by BMI level (<24 kg/m2 and ≥24 kg/m2) showed that the effect of rs1477196 was statistically significant only in people with BMI < 24 kg/m2 (OR = 0.41, 95% CI: 0.22–0.76) (Table 3). Table 2 ORs and 95% CIs for breast cancer in relation to polymorphisms of the FTO gene. SNP Genotypes Cases, n (%) Controls, n (%) OR (95% CI)† OR (95% CI)‡ § P trend rs9939609 TT 389 (72.6) 408 (76.0) 1.00 1.00 0.13 TA 130 (24.2) 119 (22.1) 1.14 (0.86–1.52) 1.14 (0.85–1.52) AA 17 (3.2) 10 (1.9) 1.78 (0.80–3.93) 1.69 (0.75–3.80) AA + TA 147 (27.4) 129 (24.0) 1.19 (0.90–1.57) 1.18 (0.89–1.56) rs1477196 GG 272 (50.7) 231 (43.0) 1.00 1.00 0.004 GA 231 (43.1) 254 (47.3) 0.77 (0.60–0.99) 0.79 (0.61–1.02) AA 33 (6.2) 52 (9.7) 0.54 (0.34–0.86) 0.56 (0.34–0.90) AA + GA 264 (49.2) 306 (57.0) 0.73 (0.58–0.93) 0.75 (0.59–0.96) rs6499640 GG 361 (67.5) 375 (69.8) 1.00 1.00 0.44 GA 159 (29.7) 148 (27.6) 1.12 (0.85–1.46) 1.13 (0.86–1.48) AA 15 (2.8) 14 (2.6) 1.10 (0.52–2.32) 1.11 (0.52–2.35) AA + GA 174 (32.5) 162 (30.2) 1.11 (0.86–1.44) 1.13 (0.87–1.47) rs16953002 GG 217 (40.5) 250 (46.5) 1.00 1.00 0.005 GA 232 (43.3) 231 (43.0) 1.16 (0.90–1.50) 1.13 (0.87–1.46) AA 87 (16.2) 56 (10.4) 1.80 (1.23–2.63) 1.77 (1.20–2.61) AA + GA 319 (59.5) 287 (53.5) 1.28 (1.01–1.64) 1.25 (0.98–1.60) rs11075995 TT 249 (46.5) 236 (44.0) 1.00 1.00 0.24 TA 244 (45.5) 247 (46.0) 0.93 (0.73–1.20) 0.97 (0.75–1.25) AA 43 (8.0) 54 (10.0) 0.75 (0.48–1.17) 0.80 (0.51–1.25) AA + TA 287 (53.5) 301 (56.0) 0.90 (0.71–1.15) 0.94 (0.73–1.20) rs1121980 GG 360 (67.2) 377 (70.2) 1.00 1.00 0.19 GA 154 (28.7) 145 (27.0) 1.11 (0.85–1.45) 1.11 (0.84–1.46) AA 22 (4.1) 15 (2.8) 1.54 (0.79–3.02) 1.42 (0.71–2.82) AA + GA 176 (32.8) 160 (29.8) 1.15 (0.89–1.49) 1.13 (0.87–1.48) †Adjusted for age. ‡Adjusted for age, age at menarche, BMI, ever breastfeeding, and education. § P trend is a test of trend for the number of copies of the variant allele (0, 1, and 2). Table 3 FTO rs1477196 and rs16953002 and breast cancer risk stratified by BMI levels. SNP Genotypes BMI < 24 BMI ≥ 24 OR (95% CI)† OR (95% CI)‡ OR (95% CI)† OR (95% CI)‡ rs1477196 GG 1.00 1.00 GA 0.75 (0.54–1.05) 0.73 (0.52–1.02) 0.81 (0.56–1.18) 0.86 (0.58–1.27) AA 0.41 (0.22–0.76) 0.45 (0.24–0.85) 0.91 (0.42–1.98) 0.72 (0.34–1.52) P interaction 0.02 rs16953002 GG 1.00 1.00 GA 1.20 (0.85–1.69) 1.15 (0.81–1.63) 1.10 (0.74–1.63) 1.09 (0.73–1.63) AA 1.78 (1.07–2.96) 1.66 (0.99–2.79) 1.82 (1.01–3.27) 1.90 (1.05–3.46) P interaction 0.15 †Adjusted for age. ‡Adjusted for age, age at menarche, BMI, ever breastfeeding, and education. Haplotype analysis showed that rs9939609, rs1477196, and rs1121980 were in linkage disequilibrium (D′ = 1.0, r2 = 0.07–0.78). The associations between FTO haplotypes and breast cancer risk are shown in Table 4.
Carriers of the TAC haplotype had a significantly reduced risk of breast cancer (OR = 0.76, 95% CI: 0.62–0.93) relative to carriers of the TGC haplotype. Table 4 Association of haplotypes in the FTO gene with risk of breast cancer. Haplotype∗ Cases, % Controls, % OR (95% CI)† OR (95% CI)‡ P value TGC 53.8 50.4 1.00 1.00 TAC 27.7 33.3 0.76 (0.62–0.93) 0.78 (0.63–0.95) 0.009 AGT 15.3 12.9 1.09 (0.85–1.40) 1.08 (0.84–1.40) 0.51 TGT 3.2 3.3 0.91 (0.55–1.48) 0.87 (0.53–1.44) 0.69 ∗In the order rs9939609, rs1477196, and rs1121980. †Adjusted for age. ‡Adjusted for age, age at menarche, BMI, ever breastfeeding, and education. Based on the data from follow-up interviews through October 2014, a total of 105 patients died of breast cancer, but we found no association between FTO polymorphisms and breast cancer survival. Finally, we investigated the associations between the FTO SNPs and BMI in healthy controls. Subjects carrying the mutant genotypes of rs1477196 (G/A: 23.21 ± 3.25 kg/m2; A/A: 23.13 ± 3.30 kg/m2) had a lower BMI than those carrying the GG genotype (23.82 ± 3.60 kg/m2), but the difference did not reach statistical significance. ## 4. Discussion FTO is an established obesity-susceptibility gene, and several loci in this gene have been reported to be associated with cancer risk. However, it has not been fully studied with regard to risk and survival of breast cancer in the Chinese population. In this study, we found that two SNPs (rs1477196 and rs16953002) and the TAC haplotype (rs9939609-rs1477196-rs1121980) in the FTO gene were significantly associated with breast cancer risk. The FTO gene is located on chromosome 16q12.2. The FTO protein is a homolog of the AlkB family of proteins and also acts as a DNA demethylase [23]. Mutations in the FTO gene can lead to loss of protein function, which may cause severe growth retardation, leanness, increased metabolic rate, and hyperphagia [24]. A genome-wide association study (GWAS) in 2007 identified a set of obesity-related susceptibility loci within the first intron of the FTO gene [25]. In addition, a number of mutations in intron regions of the FTO gene have been found to be associated with breast cancer risk in African-ancestry populations (rs17817449) [16], in a mixed ethnic population at Northwestern University (rs7206790, rs8047395, rs9939609, and rs1477196) [17], in ER-negative breast cancer in a population of European ancestry (rs11075995) [18], and in a Chinese population (rs11075995) [20]. Although the exact functional significance of these SNPs has not been clarified, biological studies have suggested that the effect of the risk alleles can influence methylation capability and may at least in part be mediated through epigenetic alterations [26, 27]. Another study showed that the presence of the risk allele in intron 1 is associated with increased levels of the FTO transcript, suggesting that there may be a cis-regulatory site that regulates FTO in intron 1 [28]. Thus, our findings are biologically plausible. Rs1477196 is located in intron 1 of the FTO gene. A few studies have examined the effect of the rs1477196 variant on obesity and breast cancer risk. Rampersaud et al. [29] investigated the associations of physical activity and common FTO gene variants with BMI and obesity in Old Order Amish (OOA) individuals; the results showed that rs1477196 was significantly associated with BMI (P<0.001), and this effect had a significant interaction with physical activity (P=0.004).
Specifically, the rs1477196 C allele was associated with obesity in the low-activity group (OR, 1.58 for each risk allele; P=0.001) but not in the high-activity group (P=0.11). Another study, conducted among early-onset and severe obesity cases in a western Spanish population, also found that the AA genotype of rs1477196 conferred a decreased risk of obesity (OR = 0.41, 95% CI = 0.19–0.90), and that a block containing the rs1477196/rs17817449/rs9939609 haplotype carried an increased risk of obesity (GGA versus ATT: OR = 2.07, 95% CI = 1.41–3.04) [30]. A case-control study including 354 breast cancer cases and 364 controls, conducted at Northwestern University in a mixed-race population, found that, compared to the common homozygote AA, the rare homozygote GG of rs1477196 conferred a 2.38-fold increased risk of breast cancer [17]. In our study, we found that the rs1477196 AA genotype was associated with decreased odds of breast cancer (OR = 0.54), and this association was found only in people with BMI < 24 kg/m2 after stratification by BMI level. In addition, the TAC haplotype (rs9939609-rs1477196-rs1121980) showed a significantly reduced breast cancer risk compared with the TGC haplotype; this effect may be driven mainly by the individual SNP rs1477196, which was in linkage disequilibrium with rs9939609 (D′=1.0) in our study. FTO rs9939609 has been reported to be associated with obesity in many other papers [31–33]. Rs16953002, located 31 kb from exon 9 and over 146 kb from exon 8 of FTO, was once reported to be associated with melanoma in a European population (per-allele OR for A = 1.16) [34], but to date, no other research has reported on its role in obesity or breast cancer risk. An essential strength of our study is the detailed review of cancer diagnosis (cases were histopathologically confirmed), which minimized potential disease misclassification. However, limitations of our study cannot be ignored. First, potential selection bias of controls might have occurred because of the hospital-based study design. Second, Berkson's bias could not be ruled out because obesity is more frequent among hospitalized patients than in the general population, which may lead to an underestimate of the real association in our study. Third, the relatively small sample size may reduce the power of subgroup analyses. Fourth, we lacked information on physical activity, which may interact with the identified mutations. ## 5. Conclusions Overall, our results suggest that genetic variations in the FTO gene may play roles in breast cancer pathogenesis in women. However, further research is required to replicate our results in other populations, and functional analyses are needed to elucidate the exact role of these genomic variations in the development of breast cancer. --- *Source: 101032-2015-06-04.xml*
# Brittle Creep Failure, Critical Behavior, and Time-to-Failure Prediction of Concrete under Uniaxial Compression **Authors:** Yingchong Wang; Na Zhou; Fuqing Chang; Shengwang Hao **Journal:** Advances in Materials Science and Engineering (2015) **Publisher:** Hindawi Publishing Corporation **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2015/101035 --- ## Abstract Understanding the time-dependent brittle deformation behavior of concrete as a main building material is fundamental for lifetime prediction and engineering design. Herein, we present experimental measurements of brittle creep failure, critical behavior, and the dependence of time-to-failure on the secondary creep rate of concrete under sustained uniaxial compression. A complete evolution process of creep failure is obtained. Three typical creep stages are observed: the primary (decelerating), secondary (steady state creep regime), and tertiary (accelerating creep) stages. The time-to-failure is sample-specific, although all samples exhibit a similar creep process. All specimens exhibit a critical power-law behavior with an exponent of −0.51 ± 0.06, approximately equal to the theoretical value of −1/2. All samples have a long-term secondary stage characterized by a constant strain rate that dominates the lifetime of a sample. The average creep rate, expressed by the total creep strain over the lifetime (tf − t0) for each specimen, shows a power-law dependence on the secondary creep rate with an exponent of −1. This could provide a clue to the prediction of the time-to-failure of concrete, based on monitoring of the creep behavior at the steady stage. --- ## Body ## 1. Introduction When concrete is subjected to sustained loading, it exhibits the phenomenon of creep. This phenomenon was recognized long ago by both structural and materials engineers. Creep behavior of concrete is an important phenomenon to be taken into account in evaluating and analyzing the behavior of concrete structures [1, 2]. General creep trends of concrete have been established and are well known [1, 3], and models have been proposed to predict creep in the design of concrete structures [2]. However, despite major successes, the phenomenon of creep is still far from being fully understood, even though it has occupied some of the best minds in the field of cement and concrete research and materials science [3]. In particular, most of the work in the area of creep behavior has been conducted at low stress levels [4–6] or on early-age creep [7], and the brittle creep behavior of concrete under sustained loads close to its short-term strength is not well known. Concrete is brittle, and under high sustained loading, creep failure can occur after a certain time [8, 9]. This behavior is referred to as static fatigue, and it is thus important to understand this behavior of materials. However, whilst this phenomenon has been studied extensively in metals [10], polymer plastic materials, and rocks [11], the time-dependent failure of concrete has been investigated only to a very limited extent. Because of the wide variety of constituents entering a concrete mix, it is often difficult to make definite comparisons. Efforts have been made in the micromechanics of the effect of concrete composition on creep [12]. Viscoelasticity and crack growth govern the long-term deformability of concrete and thus its service behavior and durability [13].
For low load levels, the viscoelastic behavior appears quasi-linear, and crack growth is inactive. On the other hand, for high load levels, cracks grow and interact with viscoelasticity [13]. Nevertheless, as noted by researchers, the best way to achieve good long-time predictions is to conduct short-time tests on the concrete and then extrapolate them on the basis of a good prediction model incorporating as much as possible of the physics of creep [3, 14]. Macroscopic scaling laws are necessary if we wish to extrapolate laboratory results in an attempt to understand the process of brittle creep in concrete at the time scales and strain rates of practical engineering. The critical behavior of the signals could be a basis for forecasting the time-to-failure of materials [15]. The time-dependent brittle deformation of concrete usually causes concrete to fail under a constant stress that is well below its short-term strength over an extended period of time, a process known as brittle creep. The process of creep failure in other brittle materials, such as rock, is typically classified into the following three consecutive stages: primary creep, secondary creep, and accelerating tertiary creep, which is a precursor to failure [16, 17]. The available literature [18–21] on creep failure of concrete focuses mainly on tensile and flexural creep failure. The creep curves for concrete from tensile and flexural creep tests [8, 22] also display a three-stage process, corresponding to the change in creep rate. The current knowledge of the brittle creep failure behavior of concrete under compression is comparatively limited, and complete experimental results are lacking. Nevertheless, this behavior is particularly important for the design and safety assessment of large concrete dams, concrete works in underground engineering, and so on. In these engineering applications, concrete materials are always loaded under long-term compression. It has been shown that basic creep in compression is significantly more important than basic creep in tension [23]. In this paper, through constant uniaxial compression tests on concrete, we discuss the three regimes of concrete under constant load and introduce a simple relation between the axial deformation rate of the concrete and the rupture time. This relation has important theoretical and practical significance for predicting creep failure of concrete. ## 2. Materials and Specimen Preparation Each concrete specimen used was a rectangular block, 160 mm high and 40 mm × 40 mm in cross section. Each specimen was cast in an accurately machined steel mold. The components of the concrete were silicate cement (28-day strength of 42.5 MPa), natural river sand (fine aggregate), and limestone coarse aggregate (4.75–16 mm in size). The mixture proportion by weight of cement : water : fine aggregate : coarse aggregate was 1 : 0.58 : 2.13 : 3.80. The specimens were cured for 28 days in a fog room at 20 ± 2°C and at a relative humidity (RH) ≥95%. The age of the concrete at loading was about half a year. ## 3. Experimental Methodology All specimens were uniaxially compressed in the vertical direction (along the 160 mm axis) at room temperature using a universal electromechanical testing machine equipped with a load cell with a load step of 1 kN. The deformation u of the tested specimens was measured using an extensometer with a resolution of 1 μm located on the sides of the specimens.
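As a point of reference for the quantities reported below, the following sketch (not part of the original test procedure; the nominal cross section and specimen height from Section 2 are assumed as the loaded area and gauge length, which may differ from the actual extensometer gauge length) converts raw load-cell and extensometer readings into engineering stress and strain.

```python
# Minimal sketch (assumed nominal dimensions): convert raw readings from the
# load cell (kN) and extensometer (mm) into engineering stress and strain.

CROSS_SECTION_MM2 = 40.0 * 40.0   # nominal loaded area, mm^2
GAUGE_LENGTH_MM = 160.0           # nominal specimen height used as gauge length, mm

def stress_mpa(force_kn: float) -> float:
    """Engineering compressive stress in MPa (kN -> N, divided by mm^2)."""
    return force_kn * 1000.0 / CROSS_SECTION_MM2

def axial_strain(deformation_mm: float) -> float:
    """Engineering axial strain (dimensionless)."""
    return deformation_mm / GAUGE_LENGTH_MM

# Example: about 84 kN corresponds to ~52.5 MPa, close to the average peak
# stress reported in Section 4.1.
print(f"{stress_mpa(84.0):.1f} MPa, strain at 0.16 mm deformation = {axial_strain(0.16):.4f}")
```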
The test set-up is shown in Figure 1. Figure 1 The test set-up. Before the brittle creep experiments, a series of monotonically increasing load experiments was conducted by controlling the monotonic increase of the crosshead displacement of the testing machine, in order to obtain the short-term failure characteristics of the concrete. In these experiments, the displacement of the crosshead was the governing displacement, which combined the deformation of the loading apparatus and that of the concrete specimen. The displacement of the crosshead was measured continuously using a linear variable differential transformer (LVDT). The stress used for the creep tests was set to approximately 90% of the peak stress of the short-term failure tests. A series of conventional brittle creep experiments was then performed at a predetermined stress. Figure 2 shows the detailed loading process of a sample as an example. We designed the procedure for the brittle creep tests according to the definition of creep. During the tests, all specimens were rapidly loaded to the predetermined stress state at a rate of 5 MPa/s, shown as the initial loading phase in Figure 2(a), for the subsequent creep test. The loading was then stopped and the samples were allowed to deform under constant stress (the parts of the creep phases where forces were kept constant in Figure 2(a)) until failure. Figure 2 shows that the strain of the samples increased rapidly during the application of the initial stress, and it clearly indicates that the force was kept constant during the creep phase of the experiment. Figure 2 Force-time and strain-time plots of a typical creep experiment on concrete as an example of demonstration of the loading process. (a) The curve of axial force versus time and (b) the curve of axial strain versus time. It is clear that the load is kept constant during the creep phase. The strain of the sample increases with time at an applied constant force and eventually leads to a sudden macroscopic failure. (a) (b) ## 4. Results and Discussions ### 4.1. Monotonically Increasing Displacement Experiments The initial calibration experiments were conducted in order to obtain the short-term failure characteristics of the tested concrete materials, using 12 specimens that were compressed by controlling the crosshead displacement, monotonically increasing at a rate of 0.1 mm/min until failure (i.e., no hold step). The sizes and shapes of the specimens were the same as those chosen for the creep experiments. Figure 3 shows representative displacement-stress curves for four concrete specimens as examples. Figure 3 Dynamic rupture in concrete (samples are labeled as MS1, MS2, MS3, and MS4) under monotonic loading at a constant displacement rate (0.05 mm/min). The specimens fail suddenly in the postfailure stage after the peak force is reached. Concrete exhibits some scatter in the attained peak force and failure strain. The stresses applied in the brittle creep tests were selected to correspond to 80–90% of the peak forces attained in these monotonic tests. It can be seen that the concrete specimens exhibit stress-strain curves typical of brittle heterogeneous materials such as concrete and rock. At a very early stage of the compression test, the stress-displacement curve was slightly convex upwards. Later, an almost linear stress-displacement relation was observed. The slope of the stress-displacement curve then decreased and after some time it reached the peak stress.
Eventually, the specimens failed suddenly at some point in the postfailure stage, after the peak stress. The failure can be attributed to the accumulation and coalescence of microcracks and microdefects. The average peak stress of the 12 specimens was 52.67 ± 5.51 MPa, with a coefficient of variation of 0.10. ### 4.2. Brittle Creep Experiments Brittle creep failure experiments were performed on concrete specimens in order to yield times-to-failure and creep strain rates. The applied stresses in the brittle creep tests were selected to be between 80% and 90% of the peak stress in the monotonic tests (Figure 3). As noted above, the detailed loading process of the specimens is shown in Figure 2 as an example. In the brittle creep tests, 11 samples were loaded with different initial stresses. Some samples failed immediately upon the application of the initial stress, and these experiments were discarded as they produced no data; some samples did not fail within the available time window of 4 days, and these experiments were also discarded. Following this calibration of the distribution of sample strengths, we conducted four tests with prescribed initial stresses of about 90% of the peak stress. In order to show the loading process and the level of the predetermined stress for each experiment, curves of the axial stress against time for the four samples are shown in Figure 4. All samples failed abruptly after a period of sustained constant stress. The photographs of failed samples shown in Figure 5 indicate that localization occurred as the samples evolved to macroscopic failure. Figure 4 Axial stress versus time curves of four concrete samples (labeled as CS1, CS2, CS3, and CS4) during the total loading process, including the initial loading phases and the following creep phases. Figure 5 Photos of failed samples in brittle creep experiments. During the phase in which the samples were maintained at a constant stress, the concrete exhibited obvious creep behavior. Figure 6 shows the curves of strain against time for all four samples during the creep phase. It is clear that all curves exhibited the three typical stages of creep that characterize brittle creep deformation. The strain-time curves are characterized by an initial stage of primary (transient) creep, followed by secondary creep and by a stage of tertiary creep. Each primary creep stage is characterized by an initially high strain rate that decreased with time to reach an almost constant, secondary-stage strain rate that is often interpreted as steady creep. Finally, samples entered a tertiary phase characterized by an accelerating increase of strain. This eventually resulted in macroscopic failure of the samples. From Figures 6 and 7, it is clear that the secondary stage dominated the lifetime of the samples. Figure 6 Axial strain versus time curves of four concrete specimens during the creep phases from t0 to failure under a constant applied stress. (a) Sample CS1; (b) sample CS2; (c) sample CS3; (d) sample CS4. The curves show the three typical stages of brittle creep: (1) primary (transient) stage characterized by an increasing strain with deceleration, (2) secondary stage, and (3) tertiary stage characterized by accelerating creep. (a) (b) (c) (d) Figure 7 Axial strain rate versus time curves of the four concrete specimens during the creep phases. Further investigation is needed to assess whether a specimen developing steady-state creep would eventually fail if the load were sustained for a sufficiently long time.
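The strain-rate histories discussed next (Figure 7) are obtained by differentiating the strain-time records. A minimal sketch of that step is given below, assuming the record is available as NumPy arrays; the smoothing window and the choice of the minimum of the smoothed rate as the steady-state estimate are conventions of this example, not of the study.

```python
import numpy as np

def strain_rate(time_s: np.ndarray, strain: np.ndarray) -> np.ndarray:
    """First derivative of the strain-time record (1/s), as plotted in Figure 7."""
    return np.gradient(strain, time_s)

def secondary_creep_rate(time_s: np.ndarray, strain: np.ndarray, window: int = 11) -> float:
    """Estimate the steady (secondary) creep rate from the flat portion of the
    strain-rate curve, here taken as the minimum of a moving-average-smoothed rate."""
    rate = strain_rate(time_s, strain)
    smoothed = np.convolve(rate, np.ones(window) / window, mode="same")
    return float(smoothed.min())

if __name__ == "__main__":
    # Synthetic example: decelerating primary, steady secondary, then accelerating tertiary creep.
    t = np.linspace(0.0, 1000.0, 2001)
    eps = 1e-3 * (1.0 - np.exp(-t / 50.0)) + 2e-7 * t + 5e-4 * (t / 1000.0) ** 8
    print(f"estimated secondary rate: {secondary_creep_rate(t, eps):.2e} 1/s")
```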
The accelerated strain rate in the tertiary stage can be attributed to the fact that the concrete became weaker and increasingly fractured. In many tests of brittle concrete, the tertiary creep stage was missed owing to the rapid brittle failure of the samples at this stage. In order to further characterize the three stages in the evolution of concrete's creep failure, the first derivatives of the strain-time curves, that is, the strain rate against time, are calculated and shown in Figure 7. The plots in Figure 7 demonstrate that there is a long, almost constant strain-rate portion of the curves, as indicated by their horizontal parts. The steady-state creep strain rate can then be calculated within this portion of the creep curve. ### 4.3. Power-Law Critical Behavior The acceleration properties considered here could be used to obtain a short-term prediction of the time-to-failure. Many material failure phenomena, such as volcanic eruptions [24, 25], landslides [26], and the failure of laboratory samples [27–29], are preceded by clearly accelerating rates of strain [30]. It has been widely suggested [30] that these precursory signals could be the basis for forecasting the time of failure. Theoretical analysis [31] has shown that the acceleration of the strain rate near failure can be described by the power law dε/dt ≈ A(1 − t/tf)^(−α) with an exponent α = 1/2. In this paper, the log-log plots of the axial strain rate versus [1 − (t − t0)/(tf − t0)] for the four specimens are shown in Figure 8, in order to show the critical power-law behavior near failure. The symbol tf represents the failure time, and as noted above, t0 is the start time of the creep phase. Consequently, tf − t0 is the entire creep time for each concrete specimen. The fitted results (the solid red lines in Figures 8(a)-8(b) and the solid blue lines in Figures 8(c)-8(d)) of the data near failure for each specimen are also shown in Figure 8. The creep strain rates near failure for each specimen are described well by the power-law relation strain rate = A[1 − (t − t0)/(tf − t0)]^(−α). The mean power-law exponent was −α = −0.51 ± 0.06, which is almost equal to the theoretical value of −1/2 [31]. Figure 8 Log-log plots of axial strain rate against time for the four specimens. The power-law fits (the solid red lines in (a) and (b) and the solid blue lines in (c) and (d)) to the experimental data near failure are indicated in the figures. The power-law exponent is equal to −0.51 ± 0.06, approximately equal to the theoretical value of −1/2 quoted in [31]. (a) (b) (c) (d) ### 4.4. Dependence of Time-to-Failure on the Secondary Creep Rate In the brittle creep process, the secondary stage of deformation dominates the entire lifetime of the specimen. In the experiments, a steep creep slope in the secondary stage implies a short lifetime, with the lifetime following a power-law relationship. The dependence of the lifetime on the creep slope λs of the secondary stage is shown in Figure 9. The average creep rate (εf − ε0)/(tf − t0) exhibits an almost linear relationship with the secondary creep rate λs on the log-log plot. Thus, the lifetime (tf − t0) for each specimen shows a power-law dependence on the secondary creep rate with an exponent of −1. Figure 9 The log-log plot of the creep slope λs of the secondary stage against (εf − ε0)/(tf − t0). There is an almost linear relation; the solid straight line is the fitted result.
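Both scaling relations above, the near-failure power law of Section 4.3 and the exponent −1 dependence of the lifetime on the secondary rate in Section 4.4, can be checked with an ordinary least-squares fit on log-transformed data. The sketch below is illustrative only; the fraction of the creep phase treated as "near failure" and the function names are assumptions of this example, not of the study.

```python
import numpy as np

def fit_power_law_exponent(time_s, rate, t0_s, tf_s, tail_fraction=0.2):
    """Fit rate = A * [1 - (t - t0)/(tf - t0)]**(-alpha) to the data near failure.
    Returns (A, alpha); alpha is expected to be close to the theoretical value 1/2."""
    time_s, rate = np.asarray(time_s, float), np.asarray(rate, float)
    x = 1.0 - (time_s - t0_s) / (tf_s - t0_s)        # remaining fraction of the creep phase
    mask = (x > 0.0) & (x < tail_fraction) & (rate > 0.0)
    slope, intercept = np.polyfit(np.log(x[mask]), np.log(rate[mask]), 1)
    return float(np.exp(intercept)), float(-slope)

def lifetime_from_secondary_rate(total_creep_strain, secondary_rate):
    """Exponent -1 scaling of Section 4.4: since the average creep rate
    (eps_f - eps_0)/(tf - t0) varies almost linearly with the secondary rate,
    the lifetime scales roughly as (tf - t0) ~ (eps_f - eps_0) / secondary_rate."""
    return total_creep_strain / secondary_rate

if __name__ == "__main__":
    # Synthetic check: recover alpha = 0.5 from a rate history that follows the power law.
    t0, tf = 0.0, 1000.0
    t = np.linspace(t0, tf - 1.0, 500)
    rate = 2e-7 * (1.0 - (t - t0) / (tf - t0)) ** -0.5
    A, alpha = fit_power_law_exponent(t, rate, t0, tf)
    print(f"recovered alpha ~ {alpha:.2f}")
```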
### 4.5. Discussions Brittle failure in concrete is driven by microcrack growth. At low load levels, crack growth does not occur; at high load levels, crack growth is active and, coupled with viscoelasticity, governs the deformability of concrete and thus its durability. It has been shown [13, 32] that models taking crack evolution and the viscoelastic behavior of concrete into consideration can reproduce the experimentally observed trends and explain the effects observed in creep failure tests. In essence, the evolution of the macroscopic response depends on the microphysical properties and microdamage process of the sample, which induces the sample-specificity of the time-to-failure. The time-to-failure of concrete samples depends on the evolution of each sample's microstructure. Variability in the initial microstructures (such as crack density, porosity, and their heterogeneous distribution) can lead to diversification in both creep patterns and time-to-failure among samples deformed under the same loading conditions. Consequently, for a given concrete composition and loading conditions, time-to-failure and creep curves can be highly variable from one sample to another because of the intrinsic variability of concrete microstructures. In the present experiments, the differences in the imposed creep stresses were so small that the effect of sample variability on time-to-failure could not be eliminated. Thus, the data from the present creep experiments do not show a strong, direct dependence of time-to-failure on the stress level. In order to fully describe the differences in time-to-failure for various samples as a function of applied stress, it would be necessary to perform a series of creep failure experiments over a wider range of imposed creep stresses (different percentages of the peak stress of the monotonic tests) for which brittle creep failure occurs, yielding a range of different times-to-failure. The eventual macroscopic failure patterns of the samples also differ from each other, as shown in Figure 5. However, the responses obtained do present scaling-law patterns, which open the possibility of extrapolating the laboratory results to understand the processes occurring in actual structures. The reason that the concrete samples exhibit the scaling law is that the sizes of the microdamage events are usually much smaller than those of the eventual macroscopic failure events.
## 5. Conclusions The experimental data presented in this paper demonstrated that the creep failure of concrete under compression exhibits the three typical evolutionary stages that lead to an eventual macroscopic failure.
Samples exhibited sample-specific times-to-failure and macroscopic failure patterns but showed similarities in the creep process and in the macroscopic scaling laws. The concrete specimens tested in the present experiments showed an obvious tertiary stage before macroscopic failure. The strain rates in the tertiary creep stage increased rapidly and led to accelerating creep. The accelerating creep near failure exhibited a critical power-law behavior with an exponent of −0.51 ± 0.06, approximately equal to the theoretical value of −1/2. The curve of the strain rate against time indicated that there existed a long-term stage during which the strain rate sustained a constant value. This implies that the secondary stage, with its constant strain rate, dominated the lifetime of the concrete. The average creep rate, expressed as the total creep strain over the lifetime (tf − t0), showed for each specimen a power-law dependence on the secondary creep rate with an exponent of −1. This could provide a clue to the prediction of the time-to-failure of concrete, based on monitoring of the creep behavior in the steady stage. Also, the onset of the third stage can be estimated from the point at which the strain rate begins to deviate from the fitted slope of the secondary stage. --- *Source: 101035-2015-10-01.xml*
# Using Noninvasive Brain Measurement to Explore the Psychological Effects of Computer Malfunctions on Users during Human-Computer Interactions **Authors:** Leanne M. Hirshfield; Philip Bobko; Alex Barelka; Stuart H. Hirshfield; Mathew T. Farrington; Spencer Gulbronson; Diane Paverman **Journal:** Advances in Human-Computer Interaction (2014) **Publisher:** Hindawi Publishing Corporation **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2014/101038 --- ## Abstract In today's technologically driven world, there is a need to better understand the ways that common computer malfunctions affect computer users. These malfunctions may have measurable influences on computer users' cognitive, emotional, and behavioral responses. An experiment was conducted in which participants completed a series of web search tasks while wearing functional near-infrared spectroscopy (fNIRS) and galvanic skin response sensors. Two computer malfunctions, each with the potential to influence correlates of user trust and suspicion, were introduced during the sessions. Surveys were given after each session to measure users' perceived emotional state, cognitive load, and perceived trust. Results suggest that fNIRS can be used to measure the different cognitive and emotional responses associated with computer malfunctions. These cognitive and emotional changes were correlated with users' self-reported levels of suspicion and trust, and they in turn suggest future work that further explores the capability of fNIRS for the measurement of user experience during human-computer interactions. --- ## Body ## 1. Introduction As the lines between humans and computers become blurred, research is beginning to measure users' experiences while interacting with a technology or information system. In particular, dealing with computer malfunctions that involve slow internet connectivity, or those that involve the introduction of malware onto one's computer system, has unfortunately become a somewhat regular part of users' interactions with computers. Computer users' cognitive load and emotional state may change depending on the type and severity of the malfunction. Additionally, one's perceived trust in, or suspicion of, the computer system may be correlated with these changes in cognitive load and emotional state. It is commonplace to use surveys to acquire users' self-reports of their cognitive load and psychological states during human-computer interactions. For example, the NASA-TLX is one of the most commonly used surveys for assessing workload [1]. As another example, when attempting to measure self-reported emotional states, users often complete surveys such as semantic differentials, the Self-Assessment Manikin, or the Positive and Negative Affect Schedule [2, 3]. The vast majority of trust research to date has also relied on surveys to assess people's trust in others [4, 5]. Although this method of measurement is commonplace and valuable for understanding and measuring changes in user states, it is limited by many of the well-known drawbacks of subjective, self-report measures. For example, subjects may have different frames of reference when completing surveys; further, survey responses correlate only moderately with actual behavior and/or others' perceptions of the subject's behavior [6]. Also, subjects' use of rating scales is prone to distortion due to social desirability [7], and surveys and self-reports are often administered after a task has been completed (postdictively).
They are thus limited in their capacity to accurately collect valuable insight into the users’ changing experiences throughout a task.

To compensate for the shortcomings of subjective, self-report techniques, in this study we use noninvasive brain measurement techniques to measure changes in user states objectively and in real time. Such measurement techniques have emerged in the literature, with fMRI and the electroencephalograph (EEG) being used to measure workload and emotional states in human-computer interactions [8–14]. Furthermore, some researchers have recently used fMRI and associated brain activity to measure aspects of trust and distrust [15, 16]. Although fMRI provides valuable information about cognitive functioning in the brain, the device is quite constricting. It requires subjects to lie still in a large magnet and is extremely expensive [17, 18]. Although fMRI results suggest that we can measure trust objectively by assessing brain functioning, the tool cannot be used outside the research lab, limiting its uses for monitoring trust in more operational, real-world situations.

In order to enable the measurement of cognitive load, emotion, and the correlated constructs of trust and suspicion in real-world contexts, we employed a new, noninvasive brain sensing technique called functional near-infrared spectroscopy (fNIRS) to make real-time, objective measurements of users’ mental states while they conduct tasks in operational working conditions. The fNIRS device (shown in Figure 1) is easy to set up, lightweight, comfortable, and portable, and it can be implemented wirelessly, allowing for use in many settings.

Figure 1: A subject wearing a 52-channel fNIRS device.

One overarching goal in this study was to demonstrate the feasibility of using fNIRS to objectively measure users’ states in real time while they work with, and interact with, computer systems. Towards these ends, we first provide a summary of the literature on workload, emotional state, trust, and suspicion. We also describe specific research that guided our experimental goals and hypotheses [19]. We then describe our protocol, data analysis techniques, findings, and interpretations. We conclude with a discussion of the implications of this work for future research.

## 2. Background and Literature Review

### 2.1. Workload and Emotional State

The term cognitive workload is used in the literature of various fields. Many describe cognitive workload in general terms, for example, as the ratio of the cognitive resources needed to complete a task to the cognitive resources available from the human operator [20]. Some view workload as a measure that can be determined subjectively, as done with the NASA TLX [1]. Others view workload via performance measurements, focusing on the operator’s performance on a given task to determine levels of cognitive workload [20]. Yet others view cognitive workload as a measure of the overall activation measured by various brain imaging devices while subjects complete some task [8, 21, 22]. Cognitive psychologists note that there is not one area in the brain that activates when a person is experiencing mental workload. However, these researchers look at specific areas in the brain to see which areas are activated while subjects perform simple tasks [23–25].
We have used fNIRS to measure spatial and verbal working memory load, as well as response inhibition load and visual search load [11, 12].

Regarding emotional reactions, most researchers agree that emotions are affective states that exist in a short period of time and are related to a particular event [26, 27]. From a psychological point of view, emotions are often mapped to points in a two-dimensional space of affective valence and arousal. Valence represents the overall pleasantness of emotional experiences and arousal represents the intensity level of emotion, ranging from calm to excited [2, 28, 29]. These two dimensions enable researchers to differentiate among four categories of emotions. Some researchers even differentiate among nine categories of emotion by including a neutral section on both the valence and arousal axes. However, in principle, an arbitrary number of categories can be defined [30].

### 2.2. Trust and Suspicion

#### 2.2.1. The Agent of Trust and Suspicion

There is substantial literature exploring interpersonal trust—often in the management and other social science domains [4, 31–35]. As the interactions between humans and their computer interfaces become increasingly “personal,” research in the information technology (IT) realm is broadening this context to explore an individual’s trust and distrust based on interactions not only with another person, but also with other types of external agents (e.g., computers and information systems). Across the literature on trust and automation [35–37], we note that the concept of trust has been (or can be) applied to trust in
(i) the operator of another IT system,
(ii) the programmer/designer of the IT system,
(iii) the programmer/developer of the algorithms used in the software,
(iv) the software itself (without reference to the programmer), and
(v) the hardware in the system.

#### 2.2.2. Trust (and Distrust)

Rousseau et al.’s review on the topic of trust in the management domain concluded that there are many definitions of trust [31]. Mayer et al.’s definition of interpersonal trust is most frequently cited (including the three trustworthiness components of ability, integrity, and benevolence) [32]. Rousseau et al.’s review leads to a similar definition of trust; that is, “a psychological state comprising the intention to accept vulnerability based upon positive expectations of the intentions or behavior of another” (page 395). This is consistent with Lyons et al.’s (2011) recent definition in IT contexts, and we adopt it here, with the modification that “another” can be “another person” or “another agent” such as an IT system [19].

#### 2.2.3. Suspicion

The construct of suspicion seems related to trust and distrust, yet there is scant literature available on this topic, let alone literature that considers suspicion in IT contexts. Thus, we also investigate the concept of suspicion, but in an exploratory manner. Based on work in social psychology, marketing, communication, human factors, and management, Bobko et al. [38] define state suspicion as the “simultaneous combination of uncertainty, perceived (mal) intent, and cognitive activity (searching for alternative explanations or new data).” Note that a key component of suspicion is uncertainty—and it occurs with a negative frame (“concern”) that is linked to cognitive activity regarding the uncertainty. In one of the few empirical papers in this area, Lyons et al.
preliminarily investigated suspicion, trust, distrust, and an operator’s decision confidence [19]. They also stated that trust, distrust, and suspicion are orthogonal constructs, although their conclusions are potentially confounded by the referents used (e.g., suspicion was about the subject’s particular computer, whereas trust and distrust were about an IT system). As noted above, most, if not all, measures of trust and distrust in the literature are self-report assessments that ask the trustor to respond using Likert scales or behavioral checklists. The scarce literature on suspicion uses the same methodology [38].

### 2.3. Users’ Cognitive and Emotional Reactions to Computer Malfunctions

When users are confronted with unexpected stimuli during their human-computer interactions, we expect them to have a negative emotional response, and we expect a decrease in trust to be correlated with that emotional response. In some scenarios, the users may become frustrated with the computer (“This computer always freezes on me!”), or in other instances they may even become concerned that a malevolent agent has gained access to their computer (“I think this website stole my credit card information!”). Both examples imply that the computer user’s beliefs about the cause of an unexpected stimulus will affect his or her physiological response to that stimulus; that is, characteristics of the unexpected stimulus moderate the physiological, cognitive, and emotional responses.

Regarding potential emotional responses, Figure 2 shows two variants of Russell’s circumplex model of affect, arguably the most popular model depicting the arousal and valence dimensions and their relation to emotional state [28]. Research states that most people’s neutral (a.k.a. baseline) state is located near point (0,0) in Russell’s model [28]. The current study focuses primarily on the model to the left in Figure 2. We symbolically place our hypotheses on the model in the left panel of Figure 2, while the depiction in the right panel of Figure 2 contains more detailed semantic descriptors. Indeed, note that our first example above (“this computer always freezes on me”) was suggested as inducing user frustration. In Figure 2, “frustration” is associated with high negative valence, but only moderate arousal (see H1a in Figure 2). In contrast, our second example above (“I think this website stole my credit card information”) was suggested as inducing user reactions of concern, alarm, and fear. In Figure 2, these terms are associated with high arousal, but only moderate negative valence (posited in Figure 2 as H1b).

Figure 2: Two schematics of Russell’s circumplex model of affect [28]. As we are less interested in semantic meaning than we are in the physiological responses related to our experimental stimuli, we focus on the model to the left.

With these depictions in mind, we hypothesize the following.

H1a. If unexpected computer-generated stimuli are negative, but minor, they will cause reactions similar to being frustrated and/or annoyed. More specifically, when compared to a user’s baseline state, unexpected minor negative stimuli will be associated with a moderate increase in arousal and a large decrease in valence, as indicated by the letter “H1a” in Figure 2.

H1b. If unexpected computer-generated stimuli are perceived as severely negative, or they are attributed to possible malintent by an external agent, they will cause individuals to feel afraid and/or alarmed.
More specifically, when compared to a user’s baseline state, the presence of these unexpected and severely negative stimuli will be associated with a large increase in arousal and only a moderate decrease in valence, as indicated by the letter “H1b” in Figure 2.

fMRI methods have recently been used to identify neural networks in the brain that are associated with different semantic emotional states [39] and with Russell’s 2-dimensional valence/arousal model [40]. However, this research remains in the nascent stage, and more work is needed to understand the neural correlates of emotion. In particular, the neural correlates of mental states such as fear or alarm (i.e., the regions associated with the “H1b” label in Figure 2) remain largely unexplored in the research literature. In the limited work that has been done with fMRI, the emotional state of “fear” has been found to activate areas in the dorsolateral prefrontal cortex (DLPFC), in the pre- and supplementary motor cortex, and in Broca’s area [13]. Furthermore, research has found that high levels of stress and arousal have a direct effect on Broca’s area [41]. Also, activation in the orbitofrontal cortex has been linked to an “alarm signal” that is sent by the brain in response to negative affect, when there is a need to regulate the negative emotion [42]. Lastly, much research has linked DLPFC activation to the cognitive load associated with regulating one’s emotions such as frustration or fear (i.e., the regions labeled with “H1a” and “H1b,” respectively, in Figure 2) [13, 17].

Although all of the above brain measurement research was done with fMRI, fNIRS sensors, which probe up to 3 cm into the brain, are capable of reaching Broca’s area, the DLPFC, the supplementary motor cortex, and the orbitofrontal cortex. This suggests that the neural correlates relating to high arousal and moderately low levels of valence can be measured noninvasively. This leads us to an instrumentation corollary to our first set of hypotheses:

H1 Corollary. The regions identified by H1a and H1b in Figure 2 can be distinguished noninvasively, with fNIRS and GSR sensors.

In addition to affective state, we also expect computer malfunctions to affect users’ cognitive load (and to reduce trust in a computer system). The overall cognitive load required to perform a task using a computer is composed of a portion attributable to the difficulty of the task itself plus a portion attributable to the complexity of operating the computer. In this regard, we follow Shneiderman’s theory of syntactic and semantic components of a user interface [43]. The semantic component involves the workload needed to complete the task. The syntactic component includes interpreting the computer’s feedback and then formulating and inputting commands to the computer. A goal in user interface design is to reduce the mental effort devoted to the syntactic aspects so that more workload can be devoted to the underlying task, or semantic, aspects [11]. People prefer to expend as little cognitive effort as possible when working with a computer system, and well-designed systems are able to minimize syntactic workload in order to meet this important user goal.

Therefore, when a computer malfunctions, we expect to see increases in the users’ cognitive load and a potential loss of trust, as the user is forced to account for the shortcomings of the computer.
For example, if the computer is performing slowly while a user tries to compose an email, the user may need to hold more in verbal working memory while she keeps her train of thought and waits for the computer to catch up. Or if a user is working with a poorly designed software program, he may continually have difficulties finding the correct menu items and commands to achieve his desired outcome. Accompanying his loss of trust in the software will be an increase in cognitive load as he tries to navigate the interface and complete his target task. Literature has also shown that increases in negative affect are directly related to increases in cognitive load [17, 44]. While this cognitive load can take many forms in the brain, one form involves activation in the DLPFC brain region, which has been linked to the cognitively demanding effort involved in emotion regulation [45]. This leads to our next hypothesis.

H2. Computer users will experience more cognitive load when interacting with a malfunctioning computer than they did when working on a properly functioning machine.

#### 2.3.1. MRI Studies Related to Trust and Suspicion

The current experiment focuses on the cognitive and emotional state changes that result from two common computer malfunctions. A secondary experimental goal is to explore the way that users’ levels of trust and/or suspicion may be related to these measured cognitive and emotional state changes. Several brain regions that are of interest in trust research relate to a paradigm called “theory of mind” (ToM; Premack and Woodruff, 1978 [46]), which is concerned with understanding how individuals attribute beliefs, desires, and intentions to oneself and others. Researchers conducting ToM studies have found that the anterior paracingulate cortex is activated when participants are deciding whether or not to trust someone else [15, 16]. The anterior paracingulate cortex is a subset of the anterior cingulate cortex, which can be measured by fNIRS. Krueger et al. used fMRI to measure the brain activity of pairs of people playing a classic trust game [16]. They found that building a trust relationship was related to activation in the paracingulate cortex, which (as shown in the ToM research stated above) is involved in the process by which we infer another’s intentions. They also found that unconditional trust was related to activity in the septal area, a region that has been linked to social attachment behavior. Dimoka constructed a study that mimics typical interactions with eBay sellers. She asked participants, while in an MRI machine, to complete a series of purchasing interactions with hypothetical “sellers” [15]. She noted that participants’ thoughts when working with “low distrust” sellers were associated with brain activation in the anterior paracingulate cortex.

In summary, the fMRI research presented in this section suggests that higher brain activation in the anterior paracingulate cortex will be associated with the cognitively demanding process by which users infer whether or not another person is trustworthy (actively trying to infer another’s intentions). Because people tend to be cognitive misers, they prefer to adopt truth (or lie) biases that enable them to unconditionally trust (or distrust) others and thereby reduce their cognitive load [47, 48]. Thus, we suggest that individuals tax the paracingulate cortex when there is a need to place cognitive effort toward the determination of an external agent’s intentions.
This may also indicate suspicion on the part of the user, because cognitive activation (generation of alternative possible explanations for observed behavior) is an important component of state suspicion [38]. This leads to our third hypothesis.

H3. Increases in paracingulate cortex activity will be associated with increased suspicion in individuals as they conduct the cognitively demanding process involved in inferring the intentions of others.

Research has also suggested that if an individual is uncertain about the intentions of another (or uncertain about any number of other interactions within their environment, and that uncertainty is perceived to have potentially negative consequences), then anxiety will increase [14, 38]. Because uncertainty is a component of state suspicion, and subsequent anxiety is associated with increases in arousal (see Figure 2), we further hypothesize the following.

H3a. Suspicion will be accompanied by an increase in physiological indices of arousal.

#### 2.3.2. The Utility of Functional Near-Infrared Spectroscopy

Users in the brain imaging studies described above were placed in cumbersome, expensive, and constricting fMRI scanners during all studies. There is a need to study trust, distrust, and suspicion, as well as their associated affective states, while computer users conduct more naturalistic human-computer interactions. To this end, we use the noninvasive fNIRS and GSR sensors in our study.

The fNIRS device was introduced in the 1990s to complement, and in some cases overcome, the limitations of the EEG and other brain imaging techniques [49]. The fNIRS device uses light sources in the wavelength range of 690–830 nm that are pulsed into the brain. Deoxygenated hemoglobin (Hb) and oxygenated hemoglobin (HbO) are the main absorbers of near-infrared light in tissues during hemodynamic and metabolic changes associated with neural activity in the brain [49]. These changes can be detected by measuring the diffusively reflected light that has probed the brain cortex [21, 49, 50]. We have used fNIRS to successfully measure a range of cognitive states [11, 12, 18, 51, 52] while computer users complete tasks under normal working conditions and, as noted above, one purpose of the current study is to further explicate the utility of fNIRS measurements.

### 2.4. Experimental Design

Eleven individuals (4 males) participated in this study (mean age = 20.2 years; SD = 0.603). Participants were college students representing a range of majors. Informed consent was obtained, and participants were compensated for their time. All participants filled out a preexperimental survey, which included demographic information and computer usage queries. All participants answered that they frequently used the Internet and had previous experience shopping online. Our experimental protocol was designed to begin with the participant at a level of normalcy and trust and to end with the participant in a state of distrust.

### 2.5. Task and Manipulations

Participants were asked to use the Google search engine to shop online for the least expensive model of a specified bike. We chose this search engine task because it involved a task with which most subjects had previous experience (i.e., online shopping). During each session, subjects had fifteen minutes to search for three specific bicycles online. Participants received financial bonuses for finding bicycles priced below a given benchmark price.
Financial bonuses were calculated as a percentage of the subject’s discount on the benchmark price. Thus, participants had incentive to continue searching for the lowest possible price on all bikes during their entire 15-minute task session. The participants were specifically instructed to use only the Google search engine while searching for the bikes. The participants were told that the objective of the study was to measure workload levels as users navigate shopping websites.

Each participant completed this 15-minute-long task four times over the course of four consecutive days. Participants were told that there would be five such sessions, but as described below, our intervention on day four obviated the need for a fifth session. During each session, the participant was placed in a small room containing the fNIRS device and a standard desktop computer to use while shopping. Placing the participant in a room alone was intended to distance the subject from the researchers. This distance prevented the subject from relying on the researchers when manipulations occurred (see below). Also in the participant’s room was a “Call Researchers Button” that would alert the researchers if assistance was required. Participants were told to use the button only if they felt that the experiment needed to be stopped or if they required researcher intervention (e.g., if they felt uncomfortable with the measurement devices, or if they wanted to end the study for some reason).

#### 2.5.1. Days One and Two: Baseline

During the first two days, each participant conducted his/her 15-minute-long task without any intervention on the part of the researchers; that is, participants searched for the lowest price of three bikes that were assigned to them without any other intervention. The purposes of these two sessions were to (a) establish participant familiarity with both the computer system and with the researchers and (b) establish positive, consistent interactions during the computer tasks. We felt that these two aspects would create a sufficient level of trust that could be weakened by subsequent manipulations (and our manipulation check confirms this; see next section).

#### 2.5.2. Day Three: Slowed Internet Manipulation

This manipulation was created to target Hypothesis 1a (among others), where the unexpected manipulation was negative, but minor. During the third 15-minute-long session we let users work for several minutes on the search task before introducing any changes. Our manipulation followed a set of carefully scheduled variations in the speed of the Internet connection. Levels of Internet speed were based upon a previous pilot study. The levels were chosen because they were overt enough to cause a noticeable delay in the Internet speed, yet subtle enough to remain within a range that subjects considered frustrating but believable. Thus, the manipulation on day three was intended to induce frustration and to lower users’ trust in the computer system. The lowered trust was expected because the slowdown could create reductions in perceived ability/integrity of the system.

#### 2.5.3. Day Four: Malware Manipulation

This manipulation was created to test Hypothesis 1b, where the stimuli are perceived as severely negative, or they are attributed to malintent. On day four, subjects again had to shop for three bicycles online. No manipulations were introduced while searching for the first two bikes (which took approximately 10 minutes out of the day four session).
We did this to reestablish any trust that may have been lost during the day three Internet speed manipulation. However, on this fourth day, the third bicycle presented to subjects was fictitious, and it could only be found on our custom website, “XtremeBestPrice.com” (this website is no longer accessible online, as we did not want people outside of our study to stumble upon the fake malware site). This website was Google-indexed, and we purposely introduced some common web page design flaws (e.g., flashing animations, a few misspelled words) on the page in order to lower its trustworthiness (cf. Lee and See [35]). There was no indication that the researchers were responsible for its existence. We also wrote about our website in several web forums in order to add legitimacy to the website. When participants made their way to our website, they did so using the same methods that they had previously used on many occasions. As the participant navigated around our site, a series of pop-ups and downloads were automatically triggered, eventually launching a “Blue Screen of Death” to indicate a computer crash (this process is depicted in Figure 3). There was no way to exit this blue screen, so participants had little option but to call the researchers into the room using the “Call Researchers Button”.

Figure 3: Screenshots of the day four manipulations. Figure 3(a) shows the homepage of xtremebestprice.com, 3(b) shows the pop-ups that occur on the site, and 3(c) shows the “blue screen of death”.

#### 2.5.4. Debriefing and Suspicion Exit Survey

At the end of the fourth day, we debriefed subjects about the true nature of our study. After explaining that we had indeed caused the Internet to slow down on the third day and that we had created the fake xtremebestprice.com website, we asked subjects (via an open-ended survey) whether or not they had suspected that we, the researchers, rather than the internet connection or the website, were the cause of the computer glitches.

### 2.6. Equipment Set-Up

We collected fNIRS data using a Hitachi ETG-4000 near-infrared spectroscopy device. Participants wore a cap with 52 channels that take measurements twice per second. As fNIRS equipment is somewhat sensitive to movement, participants were placed at a comfortable distance from the keyboard and mouse and were asked to minimize movement throughout the experiment. Prior research has shown that this minimal movement does not corrupt the fNIRS signal with motion artifacts [53]. All fNIRS data were synchronized by placing marks in the datasets whenever a task started or ended, or whenever a manipulation occurred. We collected GSR data using a wireless Affectiva Q-Sensor bracelet. We examined the participants’ electrodermal activity (EDA) with the GSR sensor.

### 2.7. Survey Instruments

At the end of each 15-minute session, participants filled out postsession surveys that included the NASA Task Load Index (TLX) for subjective workload assessment, a semantic differential survey [1], and the Self-Assessment Manikin (SAM) for valence and arousal emotional state assessment. Lang’s SAM [2] has been used to identify a number of emotional states that fall on the two-dimensional valence-arousal schema developed by Russell. Semantic differential surveys measure the connotative meaning of concepts.
Participants were asked: “Place a mark on each scale to indicate your feelings or your opinions regarding the time you spent today working on the computer and browsing through the various websites searching for bikes.” They then had to indicate how they felt during that day’s tasks by placing a mark on a scale defined by two bipolar adjectives (for example, “Adequate-Inadequate,” “Enjoyable-Frustrating,” or “Difficult to use-Easy to use”). Embedded within a list of adjective pairings was the adjective pairing of “Trusting-Distrusting”. We included this pairing within the longer list to subjectively gauge levels of trust in a way that would not cause users to become suspicious of our research paradigm.

#### 2.7.1. End of Experiment Survey

As described previously, after the final experiment session and the subject debriefing, we asked the subjects to complete a final postsurvey that asked them three questions (did they notice a slowdown on day three; if so, to what did they attribute the slowdown; and, during day four, what were their thoughts about the causes of the computer issues?).

### 2.8. Measures of Valence and Arousal

Lang’s Self-Assessment Manikin (SAM) survey has been used by many researchers to acquire subjects’ self-reported valence and arousal. From an objective, real-time measurement point of view, the galvanic skin response (GSR) sensor is also capable of measuring arousal. GSR sensors measure changes in electrical resistance across two regions of the skin, and the electrical resistance of the skin fluctuates quickly during mental, physical, and emotional arousal. This change in the skin’s electrodermal activity (EDA) can be used to measure arousal in individuals, although not valence. Despite this limitation, GSR has been used in controlled experiments to measure arousal while participants experienced a variety of emotions such as stress, excitement, boredom, and anger [27, 54].

### 2.9. Manipulation Checks

We checked our manipulations by reviewing participant responses to the postexperiment open-ended survey and by analyzing the subject results on the postsession surveys. All participants reported that they believed the malware they encountered (the malware manipulation) was the result of their visit to a malevolent website, which they believed they had stumbled across on their own via their Google searches. The slow internet manipulation introduced on day three received mixed responses from participants. Five participants noted feeling suspicious of the researchers during the third experiment session—they suspected that we were behind the slow internet manipulation. We examine the data from this subset of subjects at the end of this paper.
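To make the valence-arousal categorization described in Sections 2.1 and 2.8 concrete, the sketch below maps a valence score (here a rescaled SAM rating) and an arousal index (here a z-scored session-mean EDA) onto the four circumplex quadrants plus a neutral region around the (0, 0) baseline. The rescaling, the neutral band, and the helper name `circumplex_quadrant` are illustrative assumptions rather than the analysis pipeline used in this study.

```python
def circumplex_quadrant(valence, arousal, neutral_band=0.1):
    """Map a (valence, arousal) pair onto Russell's circumplex.

    valence and arousal are assumed to be rescaled to [-1, 1]; neutral_band is
    the half-width of the neutral region around the (0, 0) baseline point."""
    if abs(valence) < neutral_band and abs(arousal) < neutral_band:
        return "neutral"
    if valence >= 0:
        return "high-arousal positive" if arousal >= 0 else "low-arousal positive"
    return "high-arousal negative" if arousal >= 0 else "low-arousal negative"

# Example: a SAM valence rating of 3 on the 1-9 scale, rescaled to [-1, 1],
# paired with a session-mean EDA one standard deviation above baseline.
sam_valence = (3 - 5) / 4.0   # -> -0.5 (moderately negative)
eda_arousal = 1.0             # z-score relative to baseline sessions
print(circumplex_quadrant(sam_valence, eda_arousal))  # -> "high-arousal negative"
```

In these terms, H1a predicts responses falling in the moderately aroused, strongly negative-valence part of the space, while H1b predicts responses in the strongly aroused, moderately negative-valence part.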
Cognitive psychologists note that there is notone area in the brain that activates when a person is experiencing mental workload. However, these researchers look at specific areas in the brain to see which areas are activated while subjects perform simple tasks [23–25]. We have used fNIRS to measure spatial and verbal working memory load, as well as response inhibition load and visual search load [11, 12].Regarding emotional reactions, most researchers agree that emotions are affective states that exist in a short period of time and are related to a particular event [26, 27]. From a psychological point of view, emotions are often mapped to points in a two-dimensional space of affective valence and arousal. Valence represents overall pleasantness of emotional experiences and arousal represents the intensity level of emotion, ranging from calm to excited [2, 28, 29]. These 2 dimensions enable researchers to differentiate among four categories of emotions. Some researchers even differentiate among nine categories of emotion by including a neutral section on both the valence and arousal axis. However, in principle, an infinite amount of other arbitrary numbers of categories can be defined [30]. ## 2.2. Trust and Suspicion ### 2.2.1. The Agent of Trust and Suspicion There is substantial literature exploringinterpersonal trust—often in the management and other social science domains [4, 31–35] As the interactions between humans and their computer interfaces become increasingly “personal,” research in the information technology (IT) realm is broadening this context to explore an individual’s trust and distrust based on interactions not only with another person, but also with other types of external agents (e.g., computers and information systems). Across the literature on trust and automation [35–37], we note that the concept of trust has been (or can be) applied to trust in(i) the operator of another IT system,(ii) the programmer/designer of the IT system,(iii) the programmer/developer of the algorithms used in the software,(iv) the software itself (without reference to the programmer), and(v) the hardware in the system. ### 2.2.2. Trust (and Distrust) Rousseau et al.’s review on the topic of trust in the management domain concluded that there are many definitions of trust [31]. Mayer et al.’s definition of interpersonal trust is most frequently cited (including the three trustworthiness components of ability, integrity, and benevolence) [32]. Rousseau et al.’s review leads to a similar definition of trust; that is, “a psychological state comprising the intention to accept vulnerability based upon positive expectations of the intentions or behavior of another” (page 395). This is consistent with Lyons et al.’s (2011) recent definition in IT contexts, and we adopt it here, with the modification that “another” can be “another person” or “another agent” such as an IT system [19]. ### 2.2.3. Suspicion The construct of suspicion seems related to trust and distrust, yet there is scant literature available on this topic; let alone literature that considers suspicion in IT contexts. Thus, we also investigate the concept of suspicion, but in an exploratory manner. Based on work in social psychology, marketing, communication, human factors, and management, Bobko et al. 
[38] define state suspicion as the “simultaneous combination of uncertainty, perceived (mal) intent, and cognitive activity (searching for alternative explanations or new data).” Note that a key component of suspicion is uncertainty—and it occurs with a negative frame (“concern”) that is linked to cognitive activity regarding the uncertainty. In one of the few empirical papers in this area, Lyons et al. preliminarily investigated suspicion, trust, distrust, and an operator’s decision confidence [19].They also stated that trust, distrust, and suspicion are orthogonal constructs, although their conclusions are potentially confounded by the referents used (e.g., suspicion was about the subject’s particular computer, whereas trust and distrust were about an IT system). As noted above, most, if not all, measures of trust and distrust in the literature are self-report assessments that ask the trustor to respond using Likert scales or behavioral checklists. The scarce literature on suspicion uses the same methodology [38]. ## 2.2.1. The Agent of Trust and Suspicion There is substantial literature exploringinterpersonal trust—often in the management and other social science domains [4, 31–35] As the interactions between humans and their computer interfaces become increasingly “personal,” research in the information technology (IT) realm is broadening this context to explore an individual’s trust and distrust based on interactions not only with another person, but also with other types of external agents (e.g., computers and information systems). Across the literature on trust and automation [35–37], we note that the concept of trust has been (or can be) applied to trust in(i) the operator of another IT system,(ii) the programmer/designer of the IT system,(iii) the programmer/developer of the algorithms used in the software,(iv) the software itself (without reference to the programmer), and(v) the hardware in the system. ## 2.2.2. Trust (and Distrust) Rousseau et al.’s review on the topic of trust in the management domain concluded that there are many definitions of trust [31]. Mayer et al.’s definition of interpersonal trust is most frequently cited (including the three trustworthiness components of ability, integrity, and benevolence) [32]. Rousseau et al.’s review leads to a similar definition of trust; that is, “a psychological state comprising the intention to accept vulnerability based upon positive expectations of the intentions or behavior of another” (page 395). This is consistent with Lyons et al.’s (2011) recent definition in IT contexts, and we adopt it here, with the modification that “another” can be “another person” or “another agent” such as an IT system [19]. ## 2.2.3. Suspicion The construct of suspicion seems related to trust and distrust, yet there is scant literature available on this topic; let alone literature that considers suspicion in IT contexts. Thus, we also investigate the concept of suspicion, but in an exploratory manner. Based on work in social psychology, marketing, communication, human factors, and management, Bobko et al. [38] define state suspicion as the “simultaneous combination of uncertainty, perceived (mal) intent, and cognitive activity (searching for alternative explanations or new data).” Note that a key component of suspicion is uncertainty—and it occurs with a negative frame (“concern”) that is linked to cognitive activity regarding the uncertainty. In one of the few empirical papers in this area, Lyons et al. 
preliminarily investigated suspicion, trust, distrust, and an operator’s decision confidence [19].They also stated that trust, distrust, and suspicion are orthogonal constructs, although their conclusions are potentially confounded by the referents used (e.g., suspicion was about the subject’s particular computer, whereas trust and distrust were about an IT system). As noted above, most, if not all, measures of trust and distrust in the literature are self-report assessments that ask the trustor to respond using Likert scales or behavioral checklists. The scarce literature on suspicion uses the same methodology [38]. ## 2.3. Users’ Cognitive and Emotional Reactions to Computer Malfunctions When users are confronted with unexpected stimuli during their human-computer interactions, we expect them to have a negative emotional response, and we expect a decrease in trust to be correlated with that emotional response. In some scenarios, the users may become frustrated with the computer (“This computer always freezes on me!”), or in other instances they may even become concerned that a malevolent agent has gained access to their computer (“I think this website stole my credit card information!”). Both examples imply that the computer user’s beliefs about the cause of an unexpected stimulus will affect his or her physiological response to that stimulus; that is, characteristics of the unexpected stimulus moderate the physiological, cognitive, and emotional responses.Regarding potential emotional responses, Figure2 shows two variants of Russell’s circumplex model of affect, arguably the most popular model depicting the arousal and valence dimensions and their relation to emotional state [28]. Research states that most people’s neutral (a.k.a baseline) state is located near point (0,0) in Russell’s model [28]. The current study focuses primarily on the model to the left in Figure 2. We symbolically place our hypotheses on the model in the left panel of Figure 2, while the depiction in the right panel of Figure 2 contains more detailed semantic descriptors. Indeed, note that our first example above (“this computer always freezes on me”) was suggested as inducing user frustration. In Figure 2, “frustration” is associated with high negative valence, but only moderate arousal (see H1a in Figure 2). In contrast, our second example above (“I think this website stole my credit card information”) was suggested as inducing user reactions of concern, alarm, and fear. In Figure 2, these terms are associated with high arousal, but only moderate negative valence (posited in Figure 2 as H1b).Figure 2 Two schematics of Russell’s circumplex model of affect [28]. As we are less interested in semantic meaning than we are in the physiological responses related to our experimental stimuli, we focus on the model to the left.With these depictions in mind, we hypothesize the following.H1a. If unexpected computer-generated stimuli are negative, but minor, they will cause reactions similar to being frustrated and/or annoyed. More specifically, when compared to a user’s baseline state, unexpected minor negative stimuli will be associated with amoderate increase in arousal and alarge decrease in valence, as indicated by the letter “H1a” in Figure 2.H1b. If unexpected computer-generated stimuli are perceived as severely negative, or they are attributed to possible malintent by an external agent, they will cause individuals to feel afraid and/or alarmed. 
More specifically, when compared to a user’s baseline state, the presence of these unexpected and severely negative stimuli will be associated with alarge increase in arousal and only a moderate decrease in valence, as indicated by the letter “H1b” in Figure 2.fMRI methods have recently been used to identify neural networks in the brain that are associated with different semantic emotional states [39] and with Russell’s 2-dimensional valence/arousal model [40]. However, this research remains in the nascent stage, and more work is needed to understand the neural correlates of emotion. In particular, the the neural correlates of mental states such as fear or alarm (i.e., the regions associated with the “H1b” label in Figure 2) remain largely unexplored in the research literature. In the limited work that has been done with fMRI, the emotional state of “fear” has been found to activate areas in the dorsolateral prefrontal cortex (DLPFC), in the pre- and supplementary motor cortex, and in Broca’s area [13]. Furthermore, research has found that high levels of stress and arousal have a direct effect on Broca’s area [41]. Also, activation in the orbitofrontal cortex has been linked to an “alarm signal” that is sent by the brain in response to negative affect, when there is a need to regulate the negative emotion [42]. Lastly, much research has linked DLPFC activation to the cognitive load associated with regulating one’s emotions such as frustration or fear (i.e., the regions labeled with the “H1a” and “H1b,” respectively, in Figure 2) [13, 17].Although all of the above brain measurement research was done with fMRI, fNIRS sensors, with a depth into the brain of up to 3 cm, are capable of reaching Broca’s area, the DLPFC, the supplementary motor cortex, and the orbitofrontal cortex. This suggests that the neural correlates relating to high arousal and moderately low levels of valence can be measured noninvasively. This leads us to an instrumentation corollary to our first set of hypotheses:H1 Corollary. The regions identified by H1a and H1b in Figure 2 can be distinguished noninvasively, with fNIRS and GSR sensors.In addition to affective state, we also expect computer malfunctions to affect users’ cognitive load (and to reduce trust in a computer system). The overall cognitive load required to perform a task using a computer is composed of a portion attributable to the difficulty of the task itself plus a portion attributable to the complexity of operating the computer. In this regard, we follow Shneiderman’s theory of syntactic and semantic components of a user interface [43]. The semantic component involves the workload needed to complete the task. The syntactic component includes interpreting the computer’s feedback and then formulating and inputting commands to the computer. A goal in user interface design is to reduce the mental effort devoted to the syntactic aspects so that more workload can be devoted to the underlying task, or semantic, aspects [11]. People prefer to expend as little cognitive effort as possible when working with a computer system, and well-designed systems are able to minimize syntactic workload in order to meet this important user goal.Therefore, when a computer malfunctions, we expect to see increases in the users’ cognitive load and a potential loss of trust, as the user is forced to account for the shortcomings of the computer. 
For example, if the computer is performing slowly while a user tries to compose an email, the user may need to use more verbal working memory load while she keeps her train of thought and waits for the computer to catch up. Or if a user is working with a poorly designed software program, he may continually have difficulties finding the correct menu items and commands to achieve his desired outcome. Accompanying his loss of trust in the software will be an increase in cognitive load as he tries to navigate the interface and complete his target task. Literature has also shown that increases in negative affect are directly related to increases in cognitive load [17, 44]. While this cognitive load can take many forms in the brain, one form involves activation in the DLPFC brain region, which has been linked to the cognitively demanding effort involved in emotion regulation [45].This leads to our next hypothesis.H2. Computer users will experience more cognitive load when interacting with a malfunctioning computer than they did when working on a properly functioning machine. ### 2.3.1. MRI Studies Related to Trust and Suspicion The current experiment focuses on the changing cognitive and emotional state changes that result from two common computer malfunctions. A secondary experimental goal is to explore the way that users’ levels of trust and/or suspicion may be related to these measured cognitive and emotional state changes. Several brain regions that are of interest in trust research relate to a paradigm called “theory of mind” (ToM; Premack and Woodruff, 1978 [46]), which is concerned with understanding how individuals attribute beliefs, desires, and intentions to oneself and others. Researchers conducting ToM studies have found that the anterior paracingulate cortex is activated when participants are deciding whether or not to trust someone else [15, 16]. The anterior paracingulate cortex is a subset of the anterior cingulate cortex, which can be measured by fNIRS. Krueger et al. used fMRI to measure the brain activity of pairs of people playing a classic trust game [16]. They found that building a trust relationship was related to activation in the paracingulate cortex, which (as shown in the ToM research stated above) is involved in the process by which we infer another’s intentions. They also found that unconditional trust was related to activity in the septal area, a region that has been linked to social attachment behavior. Dimoka constructed a study that mimics typical interactions with e-bay sellers. She asked participants, while in an MRI machine, to complete a series of purchasing interactions with hypothetical “sellers” [15]. She noted that participant’s thoughts when working with “low distrust” sellers were associated with brain activation in the anterior paracingulate cortex.In summary, the fMRI research presented in this section suggests that higher brain activation in the anterior paracingulate cortex will be associated with the process by which users conduct the cognitively demanding process to infer whether or not another person is trustworthy (actively trying to infer another’s intentions). Because people tend to be cognitive misers, they prefer to adopt truth (or lie) biases that enable them to unconditionally trust (or distrust) others and thereby reduce their cognitive load [47, 48]. Thus, we suggest that individuals tax the paracingulate cortex when there is a need to place cognitive effort toward the determination of an external agent’s intentions. 
This may also indicate suspicion on the part of the user, because cognitive activation (generation of alternative possible explanations for observed behavior) is an important component of state suspicion [38]. This leads to our third hypothesis.H3. Increases in paracingulate cortex activity will be associated with increased suspicion in individuals as they conduct the cognitively demanding process involved in inferring the intentions of others.Research has also suggested that if an individual is uncertain about the intentions of another (or uncertain about any number of other interactions within their environment, and that uncertainty is perceived to have potentially negative consequences), then anxiety will increase [14, 38]. Because uncertainty is a component of state suspicion, and subsequent anxiety is associated with increases in arousal (see Figure 2); we further hypothesize the following.H3a. Suspicion will be accompanied by an increase in physiological indices of arousal. ### 2.3.2. The Utility of Functional Near-Infrared Spectroscopy Users in the brain imaging studies described above were placed in cumbersome, expensive, and constricting fMRI scanners during all studies. There is a need to study trust, distrust, and suspicion, as well as their associated affective states, while computer users conduct more naturalistic human-computer interactions. To this end, we use the noninvasive fNIRs and GSR sensors in our study.The fNIRS device was introduced in the 1990’s to complement, and in some cases overcome, the limitations of the EEG and other brain imaging techniques [49]. The fNIRS device uses light sources in the wavelength range (690–830 nm) that are pulsed into the brain. Deoxygenated hemoglobin (Hb) and oxygenated hemoglobin (HbO) are the main absorbers of near-infrared light in tissues during hemodynamic and metabolic changes associated with neural activity in the brain [49]. These changes can be detected by measuring the diffusively reflected light that has probed the brain cortex [21, 49, 50]. We have used fNIRS to successfully measure a range of cognitive states [11, 12, 18, 51, 52] while computer users complete tasks under normal working conditions and, as noted above, one purpose of the current study is to further explicate the utility of fNIRS measurements. ## 2.3.1. MRI Studies Related to Trust and Suspicion The current experiment focuses on the changing cognitive and emotional state changes that result from two common computer malfunctions. A secondary experimental goal is to explore the way that users’ levels of trust and/or suspicion may be related to these measured cognitive and emotional state changes. Several brain regions that are of interest in trust research relate to a paradigm called “theory of mind” (ToM; Premack and Woodruff, 1978 [46]), which is concerned with understanding how individuals attribute beliefs, desires, and intentions to oneself and others. Researchers conducting ToM studies have found that the anterior paracingulate cortex is activated when participants are deciding whether or not to trust someone else [15, 16]. The anterior paracingulate cortex is a subset of the anterior cingulate cortex, which can be measured by fNIRS. Krueger et al. used fMRI to measure the brain activity of pairs of people playing a classic trust game [16]. They found that building a trust relationship was related to activation in the paracingulate cortex, which (as shown in the ToM research stated above) is involved in the process by which we infer another’s intentions. 
They also found that unconditional trust was related to activity in the septal area, a region that has been linked to social attachment behavior. Dimoka constructed a study that mimics typical interactions with e-bay sellers. She asked participants, while in an MRI machine, to complete a series of purchasing interactions with hypothetical “sellers” [15]. She noted that participant’s thoughts when working with “low distrust” sellers were associated with brain activation in the anterior paracingulate cortex.In summary, the fMRI research presented in this section suggests that higher brain activation in the anterior paracingulate cortex will be associated with the process by which users conduct the cognitively demanding process to infer whether or not another person is trustworthy (actively trying to infer another’s intentions). Because people tend to be cognitive misers, they prefer to adopt truth (or lie) biases that enable them to unconditionally trust (or distrust) others and thereby reduce their cognitive load [47, 48]. Thus, we suggest that individuals tax the paracingulate cortex when there is a need to place cognitive effort toward the determination of an external agent’s intentions. This may also indicate suspicion on the part of the user, because cognitive activation (generation of alternative possible explanations for observed behavior) is an important component of state suspicion [38]. This leads to our third hypothesis.H3. Increases in paracingulate cortex activity will be associated with increased suspicion in individuals as they conduct the cognitively demanding process involved in inferring the intentions of others.Research has also suggested that if an individual is uncertain about the intentions of another (or uncertain about any number of other interactions within their environment, and that uncertainty is perceived to have potentially negative consequences), then anxiety will increase [14, 38]. Because uncertainty is a component of state suspicion, and subsequent anxiety is associated with increases in arousal (see Figure 2); we further hypothesize the following.H3a. Suspicion will be accompanied by an increase in physiological indices of arousal. ## 2.3.2. The Utility of Functional Near-Infrared Spectroscopy Users in the brain imaging studies described above were placed in cumbersome, expensive, and constricting fMRI scanners during all studies. There is a need to study trust, distrust, and suspicion, as well as their associated affective states, while computer users conduct more naturalistic human-computer interactions. To this end, we use the noninvasive fNIRs and GSR sensors in our study.The fNIRS device was introduced in the 1990’s to complement, and in some cases overcome, the limitations of the EEG and other brain imaging techniques [49]. The fNIRS device uses light sources in the wavelength range (690–830 nm) that are pulsed into the brain. Deoxygenated hemoglobin (Hb) and oxygenated hemoglobin (HbO) are the main absorbers of near-infrared light in tissues during hemodynamic and metabolic changes associated with neural activity in the brain [49]. These changes can be detected by measuring the diffusively reflected light that has probed the brain cortex [21, 49, 50]. We have used fNIRS to successfully measure a range of cognitive states [11, 12, 18, 51, 52] while computer users complete tasks under normal working conditions and, as noted above, one purpose of the current study is to further explicate the utility of fNIRS measurements. ## 2.4. 
Experimental Design Eleven individuals (4 males) participated in this study (mean age = 20.2 years; SD = 0.603). Participants were college students representing a range of majors. Informed consent was obtained, and participants were compensated for their time. All participants filled out a preexperimental survey, which included demographic information and computer usage queries. All participants answered that they frequently used the Internet and had previous experience shopping online. Our experimental protocol was designed to begin with the participant at a level of normalcy and trust and to end with the participant in a state of distrust. ## 2.5. Task and Manipulations Participants were asked to use the Google search engine to shop online for the least expensive model of a specified bike. We chose this search engine task because it involved a task with which most subjects had previous experience (i.e., online shopping). During each session, subjects had fifteen minutes to search for three specific bicycles online. Participants received financial bonuses for finding bicycles priced below a given benchmark price. Financial bonuses were calculated as a percentage of the subject’s discount on the benchmark price. Thus, participants had incentive to continue searching for the lowest possible price on all bikes during their entire 15 minute task session. The participants were specifically instructed to use only the Google search engine while searching for the bikes. The participants were told that the objective of the study was to measure workload levels as users navigate shopping websites.Each participant completed this 15-minute-long task four times over the course of four consecutive days. Participants were told that there would be five such sessions, but as described below, our intervention on day four obviated the need for a fifth session. During each session, the participant was placed in a small room containing the fNIRS device and a standard desktop computer to use while shopping. Placing the participant in a room alone was intended to distance the subject from the researchers. This distance prevented the subject from relying on the researchers when manipulations occurred (see below). Also in the participant’s room was a “Call Researchers Button” that would alert the researchers if assistance was required. Participants were told to use the button only if they felt that the experiment needed to be stopped or if they required researcher intervention (i.e., if they felt uncomfortable with the measurement devices, or if they wanted to end the study for some reason). ### 2.5.1. Days One and Two: Baseline During the first two days, each participant conducted his/her 15-minute-long task without any intervention on the part of the researchers; that is, participants searched for the lowest price of three bikes that were assigned to them without any other intervention. The purposes of these two sessions were to (a) establish participant familiarity with both the computer system and with the researchers and (b) establish positive, consistent interactions during the computer tasks. We felt that these two aspects would create a sufficient level of trust that could be weakened by subsequent manipulations (and our manipulation check confirms this; see next section). ### 2.5.2. Day Three: Slowed Internet Manipulation This manipulation was created to target Hypothesis 1a (among others), where the unexpected manipulation was negative, but minor. 
During the third 15-minute-long session, we let users work for several minutes on the search task before introducing any changes. Our manipulation followed a set of carefully scheduled variations of the speed of the Internet. Levels of Internet speed were based upon a previous pilot study. The levels were chosen because they were overt enough to cause a noticeable delay, yet subtle enough to remain within a range that subjects considered frustrating but believable. Thus, the manipulation on day three was intended to induce frustration and to lower users’ trust in the computer system. The lowered trust was expected because the slowdown could create reductions in the perceived ability/integrity of the system. ### 2.5.3. Day Four: Malware Manipulation This manipulation was created to test Hypothesis 1b, where the stimuli are perceived as severely negative, or they are attributed to malintent. On day four, subjects again had to shop for three bicycles online. No manipulations were introduced while searching for the first two bikes (which took approximately 10 minutes of the day four session). We did this to reestablish any trust that may have been lost during the day three Internet speed manipulation. However, on this fourth day, the third bicycle presented to subjects was fictitious, and it could only be found on our custom website, “XtremeBestPrice.com” (this website is no longer accessible online, as we did not want people outside of our study to stumble upon the fake malware site). This website was Google-indexed, and we purposely introduced some common web page design flaws (e.g., flashing animations, a few misspelled words) on the page in order to lower its trustworthiness (cf. Lee and See [35]). There was no indication that the researchers were responsible for its existence. We also wrote about our website in several web forums in order to add legitimacy to the website. When participants made their way to our website, they did so using the same methods that they had previously used on many occasions. As the participant navigated around our site, a series of pop-ups and downloads were automatically triggered, eventually launching a “Blue Screen of Death” to indicate a computer crash (this process is depicted in Figure 3). There was no way to exit this blue screen, so participants had little option but to call the researchers into the room using the “Call Researchers Button”. Figure 3: Screenshots of the day four manipulations. Figure 3(a) shows the homepage of xtremebestprice.com, 3(b) shows the pop-ups that occur on the site, and 3(c) shows the “blue screen of death”. ### 2.5.4. Debriefing and Suspicion Exit Survey At the end of the fourth day, we debriefed subjects about the true nature of our study. After explaining that we had indeed caused the Internet to slow down on the third day and that we had created the fake xtremebestprice.com website, we asked subjects (via an open-ended survey) whether or not they had suspected that we, the researchers, rather than the internet connection or the website, were the cause of the computer glitches.
## 2.6. Equipment Set-Up We collected fNIRS data using a Hitachi ETG-4000 near-infrared spectroscopy device. Participants wore a cap with 52 channels that take measurements twice per second. As fNIRS equipment is somewhat sensitive to movement, participants were placed at a comfortable distance from the keyboard and mouse and were asked to minimize movement throughout the experiment. Prior research has shown that this minimal movement does not corrupt the fNIRS signal with motion artifacts [53]. All fNIRS data were synchronized by placing marks in the datasets whenever a task started or ended, or whenever a manipulation occurred. We collected GSR data using a wireless Affectiva Q-Sensor bracelet and used it to examine the participants’ electrodermal activity (EDA). ## 2.7. Survey Instruments At the end of each 15-minute session, participants filled out postsession surveys that included the NASA Task Load Index (TLX) for subjective workload assessment, a semantic differential survey [1], and the Self-Assessment Manikin (SAM) for valence and arousal emotional state assessment. Lang’s SAM [2] has been used to identify a number of emotional states that fall on the 2-dimensional valence-arousal schema developed by Russell. Semantic differential surveys measure the connotative meaning of concepts. Participants were asked: “Place a mark on each scale to indicate your feelings or your opinions regarding the time you spent today working on the computer and browsing through the various websites searching for bikes.” They then had to indicate how they felt during that day’s tasks by placing a mark on a scale defined by two bipolar adjectives (for example, “Adequate-Inadequate,” “Enjoyable-Frustrating,” or “Difficult to use-Easy to use”). Embedded within the list of adjective pairings was the pairing “Trusting-Distrusting”. We included this pairing within the longer list to subjectively gauge levels of trust in a way that would not cause users to become suspicious of our research paradigm. ### 2.7.1. End of Experiment Survey As described previously, after the final experiment session and the subject debriefing, we asked the subjects to complete a final postsurvey that asked them three questions: did they notice a slow-down on day three? If so, to what did they attribute the slow-down? During day four, what were their thoughts about the causes of the computer issues? ## 2.8. Measures of Valence and Arousal Lang’s Self-Assessment Manikin (SAM) survey has been used by many researchers to acquire subjects’ self-report valence and arousal. From an objective, real-time measurement point of view, the galvanic skin response (GSR) sensor is also capable of measuring arousal.
GSR sensors measure changes in electrical resistance across two regions of the skin, and the electrical resistance of the skin fluctuates quickly during mental, physical, and emotional arousal. This change in the skin’s electrodermal activity (EDA) can be used to measure arousal in individuals, although not valence. Despite this limitation, GSR has been used in controlled experiments to measure arousal while participants experienced a variety of emotions such as stress, excitement, boredom, and anger [27, 54]. ## 2.9. Manipulation Checks We checked our manipulations by reviewing participant responses to the postexperiment open-ended survey and by analyzing the subject results on the postsession surveys. All participants reported that they believed the malware they encountered (the malware manipulation) was the result of their visit to a malevolent website, which they believed they had stumbled across on their own via their Google searches. The slow internet manipulation introduced on day three received mixed responses from participants. Five participants noted feeling suspicious of the researchers during the third experiment session; they suspected that we were behind the slow internet manipulation. We examine the data from this subset of subjects at the end of this paper. ## 3. Results Survey, GSR, and fNIRS data were recorded for all 11 participants. Figure 4 graphs these measures, averaged across subjects, over the course of days two, three, and four. Figure 4: Self-report values after the baseline, slowed internet manipulation, and malware manipulation sessions. We aimed to determine whether or not there were statistically significant differences between the baseline, slowed internet manipulation, and malware manipulation sessions. We made statistical comparisons of our survey, GSR, and fNIRS datasets. All t-test results are available in Table 1, and they are discussed in the analysis section of this paper. Paired comparison t-tests were conducted on our postsession survey responses.

Table 1: P values for t-tests comparing the effects of the manipulations on the dependent variables.

| Comparison | Distrust (↑) | Workload (↑) | Valence (↓) | Arousal (↑) | GSR-EDA (↑) | fNIRS HbO (↑) |
| --- | --- | --- | --- | --- | --- | --- |
| Baseline vs. slowed Internet manipulation | 0.095 | 0.001* | 0.003* | 0.018* | 0.002* | DLPFC, frontopolar region |
| Baseline vs. malware manipulation | 0.006* | 0.004* | 0.120 | 0.003* | 0.49* | DLPFC, orbitofrontal cortex, Broca’s area, frontopolar region |

| Comparison | Distrust (↑) | Workload (↓) | Valence (↑) | Arousal (↑) | GSR-EDA (↑) | fNIRS HbO* |
| --- | --- | --- | --- | --- | --- | --- |
| Slowed Internet manipulation vs. malware manipulation | 0.084 | 0.074 | 0.105 | 0.035* | 0.19 | N/A |

*indicates statistical significance. Note that n = 11 for all comparisons, except the GSR data, for which the sample sizes are noted in the text.

The GSR data were analyzed using paired-sample one-tailed t-tests to compare the electrodermal activity (EDA) values before and after the target manipulations. On day three, we compared EDA immediately before the onset of the slowed internet manipulation with EDA 100 seconds after the manipulation began. GSR data for two subjects were discarded due to poor contact with the skin on the third measurement day and substantial motion artifacts in the data. On day four, we used a paired-sample one-tailed t-test to compare the EDA values immediately before the onset of the faux computer virus manipulation and immediately after the ‘Blue Screen of Death’ appeared for each subject.
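To make the paired comparison concrete, the sketch below computes each subject’s mean EDA in a window before and after a manipulation onset and then runs the one-tailed paired test. It is an illustration only: the EDA sampling rate, the window handling, and the function names are assumptions rather than details reported here, and SciPy simply stands in for whatever analysis software was actually used.

```python
import numpy as np
from scipy import stats

FS_EDA = 8  # assumed EDA sampling rate in Hz (illustrative; not reported in the paper)

def window_mean(trace, start_s, stop_s, fs=FS_EDA):
    """Mean electrodermal activity over a time window given in seconds."""
    return float(np.mean(trace[int(start_s * fs):int(stop_s * fs)]))

def eda_manipulation_test(traces, onset_s, window_s=100):
    """Paired, one-tailed t-test that EDA is higher after the manipulation onset.

    traces  : list of per-subject EDA time series (hypothetical data)
    onset_s : manipulation onset, in seconds from the start of the session
    """
    before = np.array([window_mean(t, onset_s - window_s, onset_s) for t in traces])
    after = np.array([window_mean(t, onset_s, onset_s + window_s) for t in traces])
    return stats.ttest_rel(after, before, alternative="greater")
```

Subjects with unusable traces (poor skin contact or heavy motion artifacts) would simply be dropped from the lists before running the test, mirroring the exclusions described next.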
Data for three subjects were discarded due to poor contact with the skin on the fourth measurement day and substantial motion artifacts in the data. Therefore, results were computed for the remaining eight subjects’ data. We used the NIRS_SPM MATLAB suite of tools to analyze the fNIRS data [55]. We first converted our raw light intensity data into relative changes in oxygenated hemoglobin (HbO) concentration. We then preprocessed all data using a band-pass filter (0.01–0.1 Hz) to remove noise and motion artifacts. We used a general linear model (GLM) to fit our fNIRS data. Because the GLM analysis relies on the temporal pattern of the signals, it is robust to differential path length factor variation, optical scattering, and poor optode contact on the head. By incorporating the GLM with a P-value calculation, NIRS_SPM not only enables calculation of activation maps of HbO but also allows for spatial localization. We used Tsuzuki’s 3D-digitizer-free method for the virtual registration of NIRS channels onto the stereotactic brain coordinate system. Essentially, this method allows us to place a virtual optode holder on the scalp by registering optodes and channels onto reference brains. Assuming that the fNIRS probe is set reproducibly across subjects, the virtual registration can yield spatial estimates as accurate as the probabilistic registration method; please refer to [56] for further information. Based on Tsuzuki’s virtual anatomical registration findings, we identified the functional regions of the brain that were activated during the slowed internet manipulation. The top of Figure 5 shows that the areas with significantly higher HbO were the frontopolar area and the dorsolateral prefrontal cortex (DLPFC). For day four, the bottom of Figure 5 shows the area of the brain where HbO significantly increased when subjects transitioned from the control time (searching for bikes with no manipulations) to the virus manipulation (i.e., high arousal and, presumably, alarm). Figure 5: (a) Significant areas of activation comparing the slow_internet ΔHbO to the before_slow_internet ΔHbO (n = 11); (b) significant areas of activation while subjects encountered the computer virus (n = 11). The brain regions that showed significant activation during the malware manipulation were the frontopolar area, DLPFC, orbitofrontal area, and the pars triangularis (Broca’s area). Figure 5 shows the results of our statistical analysis on the fNIRS data transposed onto a standard brain. These results are also noted in an abbreviated form in Table 1.
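As a concrete illustration of the filtering and GLM steps described above, the sketch below band-pass filters a single hypothetical HbO channel (sampled at 2 Hz, matching the device’s sampling rate) and fits a general linear model with one task regressor plus an intercept. This is a simplified stand-in for the NIRS_SPM pipeline that was actually used; the boxcar regressor and the helper names are assumptions made for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 2.0  # fNIRS sampling rate: two measurements per second

def bandpass(hbo, low=0.01, high=0.1, fs=FS, order=3):
    """Zero-phase band-pass filter (0.01-0.1 Hz) to suppress slow drift and high-frequency noise."""
    b, a = butter(order, [low, high], btype="bandpass", fs=fs)
    return filtfilt(b, a, hbo)

def fit_glm(hbo, task_on_s, task_off_s, fs=FS):
    """Fit hbo = X @ beta with a boxcar task regressor and an intercept; return the task beta and its t-value."""
    n = len(hbo)
    task = np.zeros(n)
    task[int(task_on_s * fs):int(task_off_s * fs)] = 1.0  # boxcar over the manipulation period
    X = np.column_stack([task, np.ones(n)])               # design matrix: [task, intercept]
    beta, _, _, _ = np.linalg.lstsq(X, hbo, rcond=None)
    resid = hbo - X @ beta
    sigma2 = resid @ resid / (n - X.shape[1])              # residual variance
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[0, 0])    # standard error of the task beta
    return beta[0], beta[0] / se
```

Channels whose task coefficients are reliably positive across subjects would then be the candidates for the activation maps summarized in Figure 5 and Table 1.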
## 4. Interpretation and Analysis of Hypotheses Our first set of hypotheses stated the following. H1a. If unexpected computer-generated stimuli are negative, but minor, they will cause reactions similar to being frustrated and/or annoyed. More specifically, when compared to a user’s baseline state, unexpected minor negative stimuli will be associated with a moderate increase in arousal and a large decrease in valence, as indicated by the letter “H1a” in Figure 2. H1b. If unexpected computer-generated stimuli are perceived as severely negative, or they are attributed to possible malintent by an external agent, they will cause individuals to feel afraid and/or alarmed. More specifically, when compared to a user’s baseline state, the presence of these unexpected and severely negative stimuli will be associated with a large increase in arousal and only a moderate decrease in valence, as indicated by the letter “H1b” in Figure 2. H1 Corollary. The regions identified by H1a and H1b in Figure 2 can be distinguished noninvasively, with fNIRS and/or GSR sensors. The survey data for the slowed internet manipulation show an increase in frustration and arousal and a decrease in valence. Furthermore, the survey data suggest that a loss in trust was correlated with these emotional changes (although the change was not statistically significant; see Table 1). Figure 6 shows the average valence and arousal reports for days two, three, and four overlaid onto Russell’s circumplex model. The slow Internet and malware manipulations had their expected effect on participants, with the self-report valence and arousal scores for the slow internet manipulation residing in the region of H1a and the malware manipulation residing in the region of H1b. Figure 6: Valence and arousal reports for each session are overlaid on Russell’s circumplex model. We subtracted the baseline (day two) arousal and valence measures from days two, three, and four to shift the data so that the baseline was at point (0,0) in Russell’s model. The GSR data are consistent with the arousal survey data, indicating that subjects’ level of arousal increased during the frustrating, slowed internet manipulation. This increase in arousal while frustrated also appears in the previously noted research on arousal and emotion. Furthermore, the fNIRS data show that this frustrating manipulation was accompanied by an increase in DLPFC and frontopolar activation. The DLPFC is involved in working memory, emotion regulation, and self-control [57]. The frontopolar and DLPFC findings are consistent with a wealth of neuroscience research that ties together negative affect (such as frustration) with a need to conduct emotion regulation and with an increase in cognitive load. The malware manipulation was designed to elicit alarm and lower trust. The survey data show that a significant increase in self-report frustration and arousal, and a decrease in self-report valence, are associated with the malware manipulation. The GSR data are consistent with the arousal survey data, indicating that subjects’ level of arousal increased during the alarming malware manipulation. Additionally, the survey data suggest that a statistically significant loss in trust was reported after the malware manipulation (again, see Table 1), indicating that trust was correlated with these emotional state changes. The fNIRS data show that the malware manipulation was accompanied by an increase in brain activation [45] in the DLPFC, the pars triangularis (Broca’s area), and the orbitofrontal cortex. Researchers believe that the pars triangularis of Broca’s area is responsible for helping humans turn subjective experiences into speech. Research has found that high levels of arousal (presumably related to alarm if valence is negative) have a direct effect on Broca’s area [41]. It is possible that the alarm experienced by our users caused an increase in activation of Broca’s area while they attempted to find words to comprehend what was occurring. All 11 subjects did use the “Call Researcher” button after they saw the “Blue Screen of Death,” and all 11 subjects reported in their postexperiment interview that they truly believed a virus had been placed on their computer by the xtremebestprice.com website. Anecdotally, we noted the difficulty in producing speech during this condition when one subject simply repeated “What?” again and again to himself in a slow manner during the virus manipulation.
The DLPFC, as mentioned previously, plays a role in emotion regulation and has been found to be activated during negative affect situations [45]. Lastly, the increase in activation in the orbitofrontal cortex makes sense, as activation in that region has been linked to high-stress situations [42, 45, 58]. We also note that there was an increase in EDA during both manipulations (as compared to baseline), but unfortunately the GSR data were not able to significantly differentiate between the day four (malware) and day three (slow Internet) user states. However, the fNIRS data were able to distinguish between the user states in these two conditions. The slow Internet manipulation was associated with increased DLPFC and frontopolar activation, while the malware manipulation was associated with increased activation in Broca’s area, the DLPFC, the frontopolar region, and the orbitofrontal cortex. The activation in these specific brain regions was somewhat expected, as prior research (described in the literature review section) has tied these regions to emotional states such as alarm, frustration, and stress. Our first set of hypotheses was thus supported by our results, affirming the notion that it is feasible not only to (1) identify the emotional effects of the computer manipulations, but also to (2) distinguish between minor and major negative stimuli via subjects’ differential emotional reactions. And, by using the fNIRS device, this distinction can be made relatively noninvasively, while users work on their computer system under normal working conditions. Our second hypothesis stated that computer users will experience more cognitive load when interacting with a malfunctioning computer than they do when working on a properly functioning machine. Our postsurvey results and our fNIRS results support this hypothesis. The NASA-TLX self-report workload scores showed a significant increase after the slowed internet manipulation, which was also associated with a reduction in trust (see Table 1). The NASA-TLX scores also increased significantly after subjects’ trust was lowered by the malware manipulation. Furthermore, the fNIRS data during the slowed Internet manipulation and during the malware manipulation showed that activation in the frontopolar region, an area associated with cognitive load, was significantly higher during each of these manipulations than at baseline. The frontopolar region is involved in much of the higher-order cognitive processing that makes us human, such as executive processing, memory, and planning. Although the elusiveness of this region makes it difficult to determine specific functionality, we can safely assume that the increased HbO in this region during the slowed internet manipulation indicates an overall increase in cognitive load. To repeat, our second hypothesis was supported by our surveys and by the fNIRS device. ### 4.1. Exploratory Analysis of Suspicion Hypotheses 3 and 3a were hypotheses about the concept of suspicion. H3. Increases in paracingulate cortex activity will be associated with increased suspicion in individuals as they conduct the cognitively demanding process involved in inferring the intentions of others. H3a. Suspicion will be accompanied by an increase in physiological indices of arousal. Given the dearth of research on the construct of suspicion, we conducted some exploratory analyses on this concept.
After subjects were debriefed about the true nature of the study, we asked them whether or not they had become suspicious that the researchers were actually responsible for the manipulations. All subjects reported that they believed the malware manipulation on the fourth day was truly a computer virus that they had stumbled across. Interestingly, five subjects reported that they felt suspicious of the experimenters during the slowed Internet manipulation on the third day. One subject described his/her reaction as: “I felt a little suspicious that the experimenters were messing around with the computer, but I kept telling myself I was just being paranoid.” Thus, we post hoc split our sample into subjects who, at the end of day 3, mentioned “suspicion” of the experimenters or other agents (n1 = 5) and those who did not (n2 = 6). Note that these are small sample sizes; more research is needed to further validate these results. Regarding Hypothesis 3 (suspicion will be associated with increases in paracingulate cortex activity), the participants who interacted with the frustrating slowed Internet manipulation and reported no associated suspicion had significant activation in their DLPFC and their middle temporal gyrus. As noted before, the DLPFC activation suggests an increase in emotion regulation and cognitive load during this condition. The middle temporal gyrus subserves language and semantic memory processing and is connected, through a series of networks, to the frontopolar region. This cognitive load is likely directly related to the semantic processing needed to complete the Internet browsing task. Thus, the fNIRS data indicate that the nonsuspicious subjects simply became frustrated by the manipulation, and their brain activity showed the increase in cognitive load and the need for emotion regulation that are associated with frustration. In contrast, for the suspicious subjects, the area of activation was directly above the anterior cingulate cortex (ACC), which is the frontal part of the cingulate cortex that resembles a “collar” around the corpus callosum. The paracingulate cortex is a subset of the ACC; it is located within the ACC, closest to the external brain cortex. Thus, our fNIRS results lend support to our third hypothesis. We also note that our subject pool was small, and the post hoc splitting of groups into suspicious and nonsuspicious was based on exit interviews. We thus consider these findings tentative, although intriguing. We also computed summary data for the suspicious and nonsuspicious groups separately (see Table 2). We looked at the self-report scores reported after the day two (baseline trust) session and after the day three (slowed Internet manipulation) session. We compared the GSR EDA measures for each group immediately before the slowed internet manipulation began and 100 seconds after it began.

Table 2: P values for t-tests comparing the effects of the day 2 versus day 3 manipulations on the dependent variables in the study.

| Group | Distrust (↑) | Workload (↑) | Frustration (↑) | Valence (↓) | Arousal (↑) | GSR-EDA (↑) | fNIRS HbO (↑) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Not suspicious | 0.038* | 0.029* | 0.007* | 0.026* | 0.104 | 0.15 | Middle temporal gyrus*, DLPFC* |
| Suspicious | no change | 0.003* | 0.008* | 0.038* | 0.035* | 0.012* | Paracingulate cortex* |

*indicates statistical significance. Distrust, Workload, Frustration, Valence, and Arousal are self-report measures; GSR-EDA and fNIRS HbO are sensor measures.

All empirical trends were as expected. That is, the slowed internet manipulation increased cognitive load, frustration, and arousal and decreased valence for both subgroups.
However, only the suspicious group showed a statistically significant increase in arousal during the slow Internet manipulation (both in terms of self-report and GSR measures), thus supporting Hypothesis 3a. Lastly, it is worth noting that the construct of trust may also be correlated with the emotional and cognitive changes described above. As described previously, one item in the postsession survey asked subjects to indicate their level of trust while working through the experiment that day. High trust was listed on the left side of the scale (high trust = 1) and high distrust was listed on the far right side of the scale (distrust = 7). As we expected, participants lost trust after the third and fourth sessions (the day two, three, and four Likert survey averages were M = 4.9, 5.5, and 6.6, resp.). We computed one-tailed, paired comparison t-tests using the day two data as a baseline for our comparisons. As reported in Table 1, participants reported feeling less trusting after the malware manipulation (day four) than they reported after the control, day-two session (P < 0.0065). Loss of trust also occurred after the slow Internet manipulation (day three) when compared to the control, day-two session, but this difference was not statistically significant (P < 0.0950). These results suggest a correlation between trust and the dependent measures reported in this experiment. Future studies should explore this relationship further by using a more systematic manipulation of trust and by employing more robust surveys from the trust literature to measure changes in the different facets of trust.
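A minimal sketch of these baseline comparisons follows, assuming hypothetical per-subject distrust ratings on the 1 (high trust) to 7 (high distrust) scale for each session; the one-tailed alternative encodes the expectation that distrust rises relative to the day-two baseline. It illustrates the analysis logic rather than reproducing the authors’ code or data.

```python
import numpy as np
from scipy import stats

def distrust_vs_baseline(ratings):
    """One-tailed, paired t-tests of day-three and day-four distrust ratings against the day-two baseline.

    ratings : dict mapping session name -> per-subject ratings on the 1-7 trust/distrust scale (hypothetical data)
    """
    baseline = np.asarray(ratings["day2"], dtype=float)
    results = {}
    for day in ("day3", "day4"):
        scores = np.asarray(ratings[day], dtype=float)
        test = stats.ttest_rel(scores, baseline, alternative="greater")
        results[day] = {"mean_distrust": float(scores.mean()), "p_one_tailed": float(test.pvalue)}
    return results
```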
## 5. Conclusion One overarching goal in this study was to demonstrate the feasibility of using fNIRS to objectively measure the cognitive and emotional correlates of two computer manipulations in real time. In particular, when we slowed down participants’ Internet speeds, we caused reactions similar to being frustrated and/or annoyed. In this scenario, we hypothesized that, when compared to a user’s baseline state, unexpected minor negative stimuli would be associated with a moderate increase in arousal and a large decrease in valence. Our fNIRS, GSR, and survey data supported this hypothesis. When we simulated a computer virus and crash on users’ systems, we designed the manipulation to be deemed severely negative and to be attributed to possible malintent by an external agent. In this scenario, we hypothesized that, when compared to a user’s baseline state, the presence of these unexpected and severely negative stimuli would be associated with a large increase in arousal and only a moderate decrease in valence. Again, our fNIRS, GSR, and survey data supported this hypothesis. We also looked at suspicion in our datasets in a post hoc manner, with a small subject pool. While we did find support for the claim that the Theory of Mind region of the paracingulate cortex is activated during suspicion, we make those claims with caution. Future work must take a closer look at the state of suspicion. In particular, fNIRS could be used with experimental paradigms that specifically manipulate suspicion in order to get a more reliable measure of that state. The same can be said for the construct of trust. Results from the trust item in our postsession surveys suggested that trust is associated with the emotional and cognitive state changes that we measured, but there is a need to conduct follow-on studies that manipulate trust in a more controlled (e.g., counterbalanced) experimental paradigm. The results also suggest that fNIRS can measure cognitive activity related to users’ changing cognitive and emotional states during human-computer interactions. This is encouraging, as the noninvasive fNIRS device is easy to set up and comfortable, and it has been implemented wirelessly, showing great promise for future measurements of computer users’ experiences while they work with computer systems in real time. The results also indicate that trust and suspicion are correlated with the cognitive and emotional state changes of the computer users. Future research should attempt to disentangle these findings and to look more specifically at manipulations of trust and suspicion in order to measure those constructs during human-computer interactions. --- *Source: 101038-2014-04-30.xml*
--- ## Abstract In today’s technologically driven world, there is a need to better understand the ways that common computer malfunctions affect computer users. These malfunctions may have measurable influences on computer user’s cognitive, emotional, and behavioral responses. An experiment was conducted where participants conducted a series of web search tasks while wearing functional near-infrared spectroscopy (fNIRS) and galvanic skin response sensors. Two computer malfunctions were introduced during the sessions which had the potential to influence correlates of user trust and suspicion. Surveys were given after each session to measure user’s perceived emotional state, cognitive load, and perceived trust. Results suggest that fNIRS can be used to measure the different cognitive and emotional responses associated with computer malfunctions. These cognitive and emotional changes were correlated with users’ self-report levels of suspicion and trust, and they in turn suggest future work that further explores the capability of fNIRS for the measurement of user experience during human-computer interactions. --- ## Body ## 1. Introduction As the lines between humans and computers become blurred, research is beginning to measure users’ experiences while interacting with a technology or information system. In particular, dealing with computer malfunctions that involve slow internet connectivity, or those that involve the introduction of malware onto one’s computer system, has unfortunately become a somewhat regular part of users’ interactions with computers. Computer users’ cognitive load and emotional state may change depending on the type and severity of the malfunction. Additionally, one’s perceived trust in, or suspicion of, the computer system may be correlated with these changes in cognitive load and emotional state.It is commonplace to use surveys to acquire users’ self-reports of their cognitive load and psychological states during human-computer interactions. For example, the NASA-TLX is one of the most commonly used surveys for assessing workload [1]. As another example, when attempting to measure self-report emotional states, users often complete surveys such as semantic differentials, the Self-Assessment Manikin, or the Positive and Negative Affect Schedule [2, 3]. The vast majority of trust research to date has also relied on surveys to assess people’s trust in others [4, 5]. Although this method of measurement is commonplace and valuable for understanding and measuring changes in user states, it is limited by many of the well-known drawbacks of subjective, self-report measures. For example, subjects may have different frames of reference when completing surveys; further, survey responses correlate only moderately with actual behavior and/or others’ perceptions of the subject’s behavior [6]. Also, subjects’ use of rating scales is prone to distortion due to social desirability [7], and surveys and self-reports are often administered after a task has been completed (postdictively). They are thus limited in their capacity to accurately collect valuable insight into the users’ changing experiences throughout a task.To compensate for the shortcomings of subjective, self-report techniques, in this study we use noninvasive brain measurement techniques to measure changes in user states objectively and in real time. 
Such measurement techniques have emerged in the literature with fMRI and the electroencephalograph (EEG) being used to measure workload and emotional states in human-computer interactions [8–14]. Furthermore, some researchers have recently used fMRI and associated brain activity to measure aspects of trust and distrust [15, 16]. Although fMRI provides valuable information about cognitive functioning in the brain, the device is quite constricting. It requires subjects to lie still in a large magnet and is extremely expensive [17, 18]. Although fMRI results suggest that we can measure trust objectively by assessing brain functioning, the tool cannot be used outside the research lab, limiting its uses for monitoring trust in more operational, real-world situations.In order to enable the measurement of cognitive load, emotion, and the correlated constructs of trust and suspicion in real-world contexts, we employed a new, noninvasive brain sensing technique called functional near infrared spectroscopy (fNIRS) to make real-time, objective measurements of users’ mental states while they conduct tasks in operational working conditions. The fNIRS device (shown in Figure1) is easy to set up, lightweight, comfortable, and portable, and it can be implemented wirelessly, allowing for use in many settings.Figure 1 A subject wearing a 52-channel fNIRS device.One overarching goal in this study was to demonstrate the feasibility of using fNIRS to objectively measure users’ states in real time while they work with, and interact with, computer systems. Towards these ends, we first provide a summary of the literatures on workload, emotional state, trust, and suspicion. We also describe specific research that guided our experimental goals and hypotheses [19]. We then describe our protocol, data analysis techniques, findings, and interpretations. We conclude with a discussion of the implications of this work for future research. ## 2. Background and Literature Review ### 2.1. Workload and Emotional State The termcognitive workload is used in literature from various fields. Many describe cognitive workload in general terms, for example, as the ratio of the cognitive resources needed to complete a task to the cognitive resources available from the human operator [20]. Some view workload as a measure that can be determined subjectively, as done with the NASA TLX [1]. Others view workload via performance measurements, focusing on the operator’s performance on a given task to determine levels of cognitive workload [20]. Yet others view cognitive workload as a measure of the overall activation measured by various brain imaging devices while subjects complete some task [8, 21, 22]. Cognitive psychologists note that there is notone area in the brain that activates when a person is experiencing mental workload. However, these researchers look at specific areas in the brain to see which areas are activated while subjects perform simple tasks [23–25]. We have used fNIRS to measure spatial and verbal working memory load, as well as response inhibition load and visual search load [11, 12].Regarding emotional reactions, most researchers agree that emotions are affective states that exist in a short period of time and are related to a particular event [26, 27]. From a psychological point of view, emotions are often mapped to points in a two-dimensional space of affective valence and arousal. 
Valence represents overall pleasantness of emotional experiences and arousal represents the intensity level of emotion, ranging from calm to excited [2, 28, 29]. These 2 dimensions enable researchers to differentiate among four categories of emotions. Some researchers even differentiate among nine categories of emotion by including a neutral section on both the valence and arousal axis. However, in principle, an infinite amount of other arbitrary numbers of categories can be defined [30]. ### 2.2. Trust and Suspicion #### 2.2.1. The Agent of Trust and Suspicion There is substantial literature exploringinterpersonal trust—often in the management and other social science domains [4, 31–35] As the interactions between humans and their computer interfaces become increasingly “personal,” research in the information technology (IT) realm is broadening this context to explore an individual’s trust and distrust based on interactions not only with another person, but also with other types of external agents (e.g., computers and information systems). Across the literature on trust and automation [35–37], we note that the concept of trust has been (or can be) applied to trust in(i) the operator of another IT system,(ii) the programmer/designer of the IT system,(iii) the programmer/developer of the algorithms used in the software,(iv) the software itself (without reference to the programmer), and(v) the hardware in the system. #### 2.2.2. Trust (and Distrust) Rousseau et al.’s review on the topic of trust in the management domain concluded that there are many definitions of trust [31]. Mayer et al.’s definition of interpersonal trust is most frequently cited (including the three trustworthiness components of ability, integrity, and benevolence) [32]. Rousseau et al.’s review leads to a similar definition of trust; that is, “a psychological state comprising the intention to accept vulnerability based upon positive expectations of the intentions or behavior of another” (page 395). This is consistent with Lyons et al.’s (2011) recent definition in IT contexts, and we adopt it here, with the modification that “another” can be “another person” or “another agent” such as an IT system [19]. #### 2.2.3. Suspicion The construct of suspicion seems related to trust and distrust, yet there is scant literature available on this topic; let alone literature that considers suspicion in IT contexts. Thus, we also investigate the concept of suspicion, but in an exploratory manner. Based on work in social psychology, marketing, communication, human factors, and management, Bobko et al. [38] define state suspicion as the “simultaneous combination of uncertainty, perceived (mal) intent, and cognitive activity (searching for alternative explanations or new data).” Note that a key component of suspicion is uncertainty—and it occurs with a negative frame (“concern”) that is linked to cognitive activity regarding the uncertainty. In one of the few empirical papers in this area, Lyons et al. preliminarily investigated suspicion, trust, distrust, and an operator’s decision confidence [19].They also stated that trust, distrust, and suspicion are orthogonal constructs, although their conclusions are potentially confounded by the referents used (e.g., suspicion was about the subject’s particular computer, whereas trust and distrust were about an IT system). 
As noted above, most, if not all, measures of trust and distrust in the literature are self-report assessments that ask the trustor to respond using Likert scales or behavioral checklists. The scarce literature on suspicion uses the same methodology [38]. ### 2.3. Users’ Cognitive and Emotional Reactions to Computer Malfunctions When users are confronted with unexpected stimuli during their human-computer interactions, we expect them to have a negative emotional response, and we expect a decrease in trust to be correlated with that emotional response. In some scenarios, the users may become frustrated with the computer (“This computer always freezes on me!”), or in other instances they may even become concerned that a malevolent agent has gained access to their computer (“I think this website stole my credit card information!”). Both examples imply that the computer user’s beliefs about the cause of an unexpected stimulus will affect his or her physiological response to that stimulus; that is, characteristics of the unexpected stimulus moderate the physiological, cognitive, and emotional responses.Regarding potential emotional responses, Figure2 shows two variants of Russell’s circumplex model of affect, arguably the most popular model depicting the arousal and valence dimensions and their relation to emotional state [28]. Research states that most people’s neutral (a.k.a baseline) state is located near point (0,0) in Russell’s model [28]. The current study focuses primarily on the model to the left in Figure 2. We symbolically place our hypotheses on the model in the left panel of Figure 2, while the depiction in the right panel of Figure 2 contains more detailed semantic descriptors. Indeed, note that our first example above (“this computer always freezes on me”) was suggested as inducing user frustration. In Figure 2, “frustration” is associated with high negative valence, but only moderate arousal (see H1a in Figure 2). In contrast, our second example above (“I think this website stole my credit card information”) was suggested as inducing user reactions of concern, alarm, and fear. In Figure 2, these terms are associated with high arousal, but only moderate negative valence (posited in Figure 2 as H1b).Figure 2 Two schematics of Russell’s circumplex model of affect [28]. As we are less interested in semantic meaning than we are in the physiological responses related to our experimental stimuli, we focus on the model to the left.With these depictions in mind, we hypothesize the following.H1a. If unexpected computer-generated stimuli are negative, but minor, they will cause reactions similar to being frustrated and/or annoyed. More specifically, when compared to a user’s baseline state, unexpected minor negative stimuli will be associated with amoderate increase in arousal and alarge decrease in valence, as indicated by the letter “H1a” in Figure 2.H1b. If unexpected computer-generated stimuli are perceived as severely negative, or they are attributed to possible malintent by an external agent, they will cause individuals to feel afraid and/or alarmed. 
More specifically, when compared to a user’s baseline state, the presence of these unexpected and severely negative stimuli will be associated with alarge increase in arousal and only a moderate decrease in valence, as indicated by the letter “H1b” in Figure 2.fMRI methods have recently been used to identify neural networks in the brain that are associated with different semantic emotional states [39] and with Russell’s 2-dimensional valence/arousal model [40]. However, this research remains in the nascent stage, and more work is needed to understand the neural correlates of emotion. In particular, the the neural correlates of mental states such as fear or alarm (i.e., the regions associated with the “H1b” label in Figure 2) remain largely unexplored in the research literature. In the limited work that has been done with fMRI, the emotional state of “fear” has been found to activate areas in the dorsolateral prefrontal cortex (DLPFC), in the pre- and supplementary motor cortex, and in Broca’s area [13]. Furthermore, research has found that high levels of stress and arousal have a direct effect on Broca’s area [41]. Also, activation in the orbitofrontal cortex has been linked to an “alarm signal” that is sent by the brain in response to negative affect, when there is a need to regulate the negative emotion [42]. Lastly, much research has linked DLPFC activation to the cognitive load associated with regulating one’s emotions such as frustration or fear (i.e., the regions labeled with the “H1a” and “H1b,” respectively, in Figure 2) [13, 17].Although all of the above brain measurement research was done with fMRI, fNIRS sensors, with a depth into the brain of up to 3 cm, are capable of reaching Broca’s area, the DLPFC, the supplementary motor cortex, and the orbitofrontal cortex. This suggests that the neural correlates relating to high arousal and moderately low levels of valence can be measured noninvasively. This leads us to an instrumentation corollary to our first set of hypotheses:H1 Corollary. The regions identified by H1a and H1b in Figure 2 can be distinguished noninvasively, with fNIRS and GSR sensors.In addition to affective state, we also expect computer malfunctions to affect users’ cognitive load (and to reduce trust in a computer system). The overall cognitive load required to perform a task using a computer is composed of a portion attributable to the difficulty of the task itself plus a portion attributable to the complexity of operating the computer. In this regard, we follow Shneiderman’s theory of syntactic and semantic components of a user interface [43]. The semantic component involves the workload needed to complete the task. The syntactic component includes interpreting the computer’s feedback and then formulating and inputting commands to the computer. A goal in user interface design is to reduce the mental effort devoted to the syntactic aspects so that more workload can be devoted to the underlying task, or semantic, aspects [11]. People prefer to expend as little cognitive effort as possible when working with a computer system, and well-designed systems are able to minimize syntactic workload in order to meet this important user goal.Therefore, when a computer malfunctions, we expect to see increases in the users’ cognitive load and a potential loss of trust, as the user is forced to account for the shortcomings of the computer. 
For example, if the computer is performing slowly while a user tries to compose an email, the user may need to use more verbal working memory load while she keeps her train of thought and waits for the computer to catch up. Or if a user is working with a poorly designed software program, he may continually have difficulties finding the correct menu items and commands to achieve his desired outcome. Accompanying his loss of trust in the software will be an increase in cognitive load as he tries to navigate the interface and complete his target task. Literature has also shown that increases in negative affect are directly related to increases in cognitive load [17, 44]. While this cognitive load can take many forms in the brain, one form involves activation in the DLPFC brain region, which has been linked to the cognitively demanding effort involved in emotion regulation [45].This leads to our next hypothesis.H2. Computer users will experience more cognitive load when interacting with a malfunctioning computer than they did when working on a properly functioning machine. #### 2.3.1. MRI Studies Related to Trust and Suspicion The current experiment focuses on the changing cognitive and emotional state changes that result from two common computer malfunctions. A secondary experimental goal is to explore the way that users’ levels of trust and/or suspicion may be related to these measured cognitive and emotional state changes. Several brain regions that are of interest in trust research relate to a paradigm called “theory of mind” (ToM; Premack and Woodruff, 1978 [46]), which is concerned with understanding how individuals attribute beliefs, desires, and intentions to oneself and others. Researchers conducting ToM studies have found that the anterior paracingulate cortex is activated when participants are deciding whether or not to trust someone else [15, 16]. The anterior paracingulate cortex is a subset of the anterior cingulate cortex, which can be measured by fNIRS. Krueger et al. used fMRI to measure the brain activity of pairs of people playing a classic trust game [16]. They found that building a trust relationship was related to activation in the paracingulate cortex, which (as shown in the ToM research stated above) is involved in the process by which we infer another’s intentions. They also found that unconditional trust was related to activity in the septal area, a region that has been linked to social attachment behavior. Dimoka constructed a study that mimics typical interactions with e-bay sellers. She asked participants, while in an MRI machine, to complete a series of purchasing interactions with hypothetical “sellers” [15]. She noted that participant’s thoughts when working with “low distrust” sellers were associated with brain activation in the anterior paracingulate cortex.In summary, the fMRI research presented in this section suggests that higher brain activation in the anterior paracingulate cortex will be associated with the process by which users conduct the cognitively demanding process to infer whether or not another person is trustworthy (actively trying to infer another’s intentions). Because people tend to be cognitive misers, they prefer to adopt truth (or lie) biases that enable them to unconditionally trust (or distrust) others and thereby reduce their cognitive load [47, 48]. Thus, we suggest that individuals tax the paracingulate cortex when there is a need to place cognitive effort toward the determination of an external agent’s intentions. 
This may also indicate suspicion on the part of the user, because cognitive activation (generation of alternative possible explanations for observed behavior) is an important component of state suspicion [38]. This leads to our third hypothesis.

H3. Increases in paracingulate cortex activity will be associated with increased suspicion in individuals as they conduct the cognitively demanding process involved in inferring the intentions of others.

Research has also suggested that if an individual is uncertain about the intentions of another (or about any number of other interactions within their environment), and that uncertainty is perceived to have potentially negative consequences, then anxiety will increase [14, 38]. Because uncertainty is a component of state suspicion, and the ensuing anxiety is associated with increases in arousal (see Figure 2), we further hypothesize the following.

H3a. Suspicion will be accompanied by an increase in physiological indices of arousal.

#### 2.3.2. The Utility of Functional Near-Infrared Spectroscopy

Users in the brain imaging studies described above were placed in cumbersome, expensive, and constricting fMRI scanners during all studies. There is a need to study trust, distrust, and suspicion, as well as their associated affective states, while computer users conduct more naturalistic human-computer interactions. To this end, we use noninvasive fNIRS and GSR sensors in our study.

The fNIRS device was introduced in the 1990s to complement, and in some cases overcome, the limitations of EEG and other brain imaging techniques [49]. The fNIRS device uses light sources in the 690–830 nm wavelength range that are pulsed into the brain. Deoxygenated hemoglobin (Hb) and oxygenated hemoglobin (HbO) are the main absorbers of near-infrared light in tissue during the hemodynamic and metabolic changes associated with neural activity in the brain [49]. These changes can be detected by measuring the diffusely reflected light that has probed the brain cortex [21, 49, 50]. We have used fNIRS to successfully measure a range of cognitive states [11, 12, 18, 51, 52] while computer users complete tasks under normal working conditions and, as noted above, one purpose of the current study is to further explicate the utility of fNIRS measurements.

### 2.4. Experimental Design

Eleven individuals (4 males) participated in this study (mean age = 20.2 years; SD = 0.603). Participants were college students representing a range of majors. Informed consent was obtained, and participants were compensated for their time. All participants filled out a preexperimental survey, which included demographic information and computer usage queries. All participants answered that they frequently used the Internet and had previous experience shopping online. Our experimental protocol was designed to begin with the participant at a level of normalcy and trust and to end with the participant in a state of distrust.

### 2.5. Task and Manipulations

Participants were asked to use the Google search engine to shop online for the least expensive model of a specified bike. We chose this search task because it was one with which most subjects had previous experience (i.e., online shopping). During each session, subjects had fifteen minutes to search for three specific bicycles online. Participants received financial bonuses for finding bicycles priced below a given benchmark price.
Financial bonuses were calculated as a percentage of the subject's discount on the benchmark price. Thus, participants had an incentive to continue searching for the lowest possible price on all bikes during their entire 15-minute task session. The participants were specifically instructed to use only the Google search engine while searching for the bikes. The participants were told that the objective of the study was to measure workload levels as users navigate shopping websites.

Each participant completed this 15-minute-long task four times over the course of four consecutive days. Participants were told that there would be five such sessions, but as described below, our intervention on day four obviated the need for a fifth session. During each session, the participant was placed in a small room containing the fNIRS device and a standard desktop computer to use while shopping. Placing the participant in a room alone was intended to distance the subject from the researchers. This distance prevented the subject from relying on the researchers when manipulations occurred (see below). Also in the participant's room was a "Call Researchers Button" that would alert the researchers if assistance was required. Participants were told to use the button only if they felt that the experiment needed to be stopped or if they required researcher intervention (e.g., if they felt uncomfortable with the measurement devices, or if they wanted to end the study for some reason).

#### 2.5.1. Days One and Two: Baseline

During the first two days, each participant conducted his/her 15-minute-long task without any intervention on the part of the researchers; that is, participants searched for the lowest price of the three bikes that were assigned to them without any other intervention. The purposes of these two sessions were to (a) establish participant familiarity with both the computer system and the researchers and (b) establish positive, consistent interactions during the computer tasks. We felt that these two aspects would create a sufficient level of trust that could be weakened by subsequent manipulations (and our manipulation check confirms this; see next section).

#### 2.5.2. Day Three: Slowed Internet Manipulation

This manipulation was created to target Hypothesis 1a (among others), where the unexpected manipulation was negative, but minor. During the third 15-minute-long session, we let users work for several minutes on the search task before introducing any changes. Our manipulation followed a set of carefully scheduled variations in the speed of the Internet connection. The levels of Internet speed were based upon a previous pilot study. The levels were chosen because they were overt enough to cause a noticeable delay, yet subtle enough to remain within a range that subjects considered frustrating but believable. Thus, the manipulation on day three was intended to induce frustration and to lower users' trust in the computer system. The lowered trust was expected because the slowdown could create reductions in the perceived ability/integrity of the system.

#### 2.5.3. Day Four: Malware Manipulation

This manipulation was created to test Hypothesis 1b, where the stimuli are perceived as severely negative, or they are attributed to malintent. On day four, subjects again had to shop for three bicycles online. No manipulations were introduced while searching for the first two bikes (which took approximately 10 minutes of the day four session).
We did this to reestablish any trust that may have been lost during the day three Internet speed manipulation. However, on this fourth day, the third bicycle presented to subjects was fictitious, and it could only be found on our custom website, "XtremeBestPrice.com" (this website is no longer accessible online, as we did not want people outside of our study to stumble upon the fake malware site). This website was Google-indexed, and we purposely included some common web page design flaws (e.g., flashing animations, a few misspelled words) on the page in order to lower its trustworthiness (cf. Lee and See [35]). There was no indication that the researchers were responsible for its existence. We also wrote about our website in several web forums in order to add legitimacy to the website. When participants made their way to our website, they did so using the same methods that they had previously used on many occasions. As the participant navigated around our site, a series of pop-ups and downloads were automatically triggered, eventually launching a "Blue Screen of Death" to indicate a computer crash (this process is depicted in Figure 3). There was no way to exit this blue screen, so participants had little option but to call the researchers into the room using the "Call Researchers Button".

Figure 3: Screenshots of the day four manipulations. Figure 3(a) shows the homepage of xtremebestprice.com, 3(b) shows the pop-ups that occur on the site, and 3(c) shows the "blue screen of death".

#### 2.5.4. Debriefing and Suspicion Exit Survey

At the end of the fourth day, we debriefed subjects about the true nature of our study. After explaining that we had indeed caused the Internet to slow down on the third day and that we had created the fake xtremebestprice.com website, we asked subjects (via an open-ended survey) whether or not they had suspected that we, the researchers, rather than the Internet connection or the website, were the cause of the computer glitches.

### 2.6. Equipment Set-Up

We collected fNIRS data using a Hitachi ETG-4000 near-infrared spectroscopy device. Participants wore a 52-channel cap that takes measurements twice per second. As fNIRS equipment is somewhat sensitive to movement, participants were placed at a comfortable distance from the keyboard and mouse and were asked to minimize movement throughout the experiment. Prior research has shown that this minimal movement does not corrupt the fNIRS signal with motion artifacts [53]. All fNIRS data were synchronized by placing marks in the datasets whenever a task started or ended, or whenever a manipulation occurred. We collected GSR data using a wireless Affectiva Q-Sensor bracelet and examined the participants' electrodermal activity (EDA) with this GSR sensor.

### 2.7. Survey Instruments

At the end of each 15-minute session, participants filled out postsession surveys that included the NASA Task Load Index (TLX) for subjective workload assessment, a semantic differential survey [1], and the Self-Assessment Manikin (SAM) for valence and arousal emotional state assessment. Lang's SAM [2] has been used to identify a number of emotional states that fall on the 2-dimensional valence-arousal schema developed by Russell. Semantic differential surveys measure the connotative meaning of concepts.
Participants were asked: "Place a mark on each scale to indicate your feelings or your opinions regarding the time you spent today working on the computer and browsing through the various websites searching for bikes." They then had to indicate how they felt during that day's tasks by placing a mark on a scale defined by two bipolar adjectives (for example, "Adequate-Inadequate," "Enjoyable-Frustrating," or "Difficult to use-Easy to use"). Embedded within the list of adjective pairings was the pairing "Trusting-Distrusting". We included this pairing within the longer list to subjectively gauge levels of trust in a way that would not cause users to become suspicious of our research paradigm.

#### 2.7.1. End of Experiment Survey

As described previously, after the final experiment session and the subject debriefing, we asked the subjects to complete a final postsurvey that asked them three questions: did they notice a slow-down on day three; if so, to what did they attribute the slow-down; and, during day four, what did they think caused the computer issues?

### 2.8. Measures of Valence and Arousal

Lang's Self-Assessment Manikin (SAM) survey has been used by many researchers to acquire subjects' self-reported valence and arousal. From an objective, real-time measurement point of view, the galvanic skin response (GSR) sensor is also capable of measuring arousal. GSR sensors measure changes in electrical resistance across two regions of the skin, and the electrical resistance of the skin fluctuates quickly during mental, physical, and emotional arousal. This change in the skin's electrodermal activity (EDA) can be used to measure arousal in individuals, although not valence. Despite this limitation, GSR has been used in controlled experiments to measure arousal while participants experienced a variety of emotions such as stress, excitement, boredom, and anger [27, 54].

### 2.9. Manipulation Checks

We checked our manipulations by reviewing participant responses to the postexperiment open-ended survey and by analyzing the subject results on the postsession surveys. All participants reported that they believed the malware they encountered (the malware manipulation) was the result of their visit to a malevolent website, which they believed they had stumbled across on their own via their Google searches. The slowed Internet manipulation introduced on day three received mixed responses from participants. Five participants noted feeling suspicious of the researchers during the third experiment session; they suspected that we were behind the slowed Internet manipulation. We examine the data from this subset of subjects at the end of this paper.
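To make the link between the SAM measures of Section 2.8 and the regions hypothesized in H1a/H1b concrete, the short sketch below shows one way a baseline-corrected SAM report could be assigned to the frustration-like (H1a) or alarm-like (H1b) region of Russell's valence/arousal space. The function name, the 9-point SAM scaling, and the thresholds for "moderate" versus "large" shifts are illustrative assumptions and are not part of the study protocol.

```python
def classify_reaction(valence, arousal, baseline_valence, baseline_arousal):
    """Classify a baseline-corrected SAM report into the H1a / H1b regions.

    Illustrative only: assumes 9-point SAM scales and arbitrary thresholds
    for what counts as a 'moderate' versus 'large' shift from baseline.
    """
    d_val = valence - baseline_valence   # negative = less pleasant than baseline
    d_aro = arousal - baseline_arousal   # positive = more aroused than baseline

    if d_val <= -3 and 1 <= d_aro < 3:
        return "H1a-like: frustration (large valence drop, moderate arousal rise)"
    if d_aro >= 3 and -3 < d_val <= -1:
        return "H1b-like: alarm/fear (large arousal rise, moderate valence drop)"
    return "neither hypothesized region"

# Hypothetical reports against a baseline of valence 6, arousal 4:
print(classify_reaction(3, 6, 6, 4))  # -> H1a-like
print(classify_reaction(5, 8, 6, 4))  # -> H1b-like
```

In Section 4, the analogous step is done graphically (Figure 6) by subtracting the day-two baseline so that it sits at the origin of the circumplex.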
## 3. Results

Survey, GSR, and fNIRS data were recorded for all 11 participants. Figure 4 provides a graph of these trends, averaged across subjects, over the course of days two, three, and four.

Figure 4: Self-report values after the baseline, slowed Internet manipulation, and malware manipulation sessions.

We aimed to determine whether or not there were statistically significant differences between the baseline, slowed Internet manipulation, and malware manipulation sessions. We made statistical comparisons of our survey, GSR, and fNIRS datasets. All t-test results are available in Table 1, and they are discussed in the analysis section of this paper. Paired comparison t-tests were conducted on our postsession survey responses.

Table 1: P values for t-tests comparing the effects of the manipulations on the dependent variables.

| Comparison | Distrust (↑) | Workload (↑) | Valence (↓) | Arousal (↑) | GSR-EDA (↑) | fNIRS HbO (↑) |
| --- | --- | --- | --- | --- | --- | --- |
| Baseline vs. slowed Internet manipulation sessions | 0.095 | 0.001* | 0.003* | 0.018* | 0.002* | DLPFC, frontopolar region |
| Baseline vs. malware manipulation sessions | 0.006* | 0.004* | 0.120 | 0.003* | 0.49* | DLPFC, orbitofrontal cortex, Broca's area, frontopolar region |

| Comparison | Distrust (↑) | Workload (↓) | Valence (↑) | Arousal (↑) | GSR-EDA (↑) | fNIRS HbO |
| --- | --- | --- | --- | --- | --- | --- |
| Slowed Internet manipulation vs. malware manipulation sessions | 0.084 | 0.074 | 0.105 | 0.035* | 0.19 | N/A |

*indicates statistical significance. Note that n = 11 for all comparisons, except the GSR data, for which the sample sizes are described in the text.

The GSR data were analyzed using paired sample one-tailed t-tests to compare the electrodermal activity (EDA) values before and after the target manipulations. On day three, we compared EDA immediately before the onset of the slowed Internet manipulation and then 100 seconds after the slowed Internet manipulation began. GSR data for two subjects were discarded due to poor contact with the skin on the third measurement day and substantial motion artifacts in the data. On day four, we used a paired sample one-tailed t-test to compare the EDA values immediately before the onset of the faux computer virus manipulation and then immediately after the "Blue Screen of Death" appeared for each subject.
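As a concrete illustration of the paired, one-tailed comparison just described, the sketch below applies it to hypothetical per-subject EDA values; the numbers and variable names are invented for illustration, and the SciPy call assumes SciPy 1.6 or later (for the `alternative` argument).

```python
import numpy as np
from scipy import stats

# Hypothetical per-subject mean EDA (in microsiemens) in a short window
# immediately before the manipulation and in a window after it began.
eda_before = np.array([2.1, 3.4, 1.8, 2.9, 2.2, 3.0, 2.5, 1.9])
eda_after = np.array([2.6, 3.9, 2.0, 3.5, 2.4, 3.6, 3.1, 2.2])

# Paired, one-tailed t-test: is EDA (an arousal index) higher after onset?
t_stat, p_value = stats.ttest_rel(eda_after, eda_before, alternative="greater")
print(f"t = {t_stat:.3f}, one-tailed p = {p_value:.4f}")
```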
Data for three subjects were discarded due to poor contact with the skin on the fourth measurement day and substantial motion artifacts in the data. Therefore, results were computed for the remaining eight subjects' data.

We used the NIRS_SPM MATLAB suite of tools to analyze the fNIRS data [55]. We first converted our raw light intensity data into relative changes in oxygenated hemoglobin (HbO) concentration. We then preprocessed all data using a band-pass filter (0.01–0.1 Hz) to remove noise and motion artifacts. We used a general linear model (GLM) to fit our fNIRS data. Because the GLM analysis relies on the temporal variation of the signals rather than their absolute magnitude, it is robust to differential path length factor variation, optical scattering, and poor contact on the head. By combining the GLM with a p-value calculation, NIRS_SPM not only enables the calculation of HbO activation maps but also allows for spatial localization. We used Tsuzuki's 3D-digitizer-free method for the virtual registration of NIRS channels onto the stereotactic brain coordinate system. Essentially, this method allows us to place a virtual optode holder on the scalp by registering optodes and channels onto reference brains. Assuming that the fNIRS probe is set reproducibly across subjects, the virtual registration can yield spatial estimates as accurate as the probabilistic registration method; see [56] for further information.

Based on Tsuzuki's virtual anatomical registration findings, we identified the functional regions of the brain that were activated during the slowed Internet manipulation. The top of Figure 5 shows that the areas with significantly higher HbO were the frontopolar area and the dorsolateral prefrontal cortex (DLPFC). For day four, the bottom of Figure 5 shows the area of the brain where HbO significantly increased when subjects transitioned from the control time (searching for bikes with no manipulations) to the virus manipulation (i.e., high arousal and, presumably, alarm).

Figure 5: (a) Significant areas of activation comparing the slow_internet ΔHbO to the before_slow_internet ΔHbO (n = 11); (b) significant areas of activation while subjects encountered the computer virus (n = 11).

The brain regions that showed significant activation during the malware manipulation were the frontopolar area, the DLPFC, the orbitofrontal area, and the pars triangularis of Broca's area. Figure 5 shows the results of our statistical analysis of the fNIRS data transposed onto a standard brain. These results are also noted in an abbreviated form in Table 1.
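The analysis above was carried out in the NIRS_SPM MATLAB toolbox. Purely as an illustration of the two modeling steps named in the text (band-pass filtering the HbO series and fitting a GLM), a minimal Python sketch under our own assumptions is given below: the 2 Hz sampling rate follows Section 2.6, but the synthetic data, function names, and single boxcar regressor are ours, and the sketch omits NIRS_SPM details such as convolving regressors with a hemodynamic response function and the toolbox's statistical inference.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 2.0  # Hz; the ETG-4000 cap samples each channel twice per second

def bandpass(x, low=0.01, high=0.1, fs=FS, order=3):
    """Zero-phase Butterworth band-pass, analogous in spirit to the
    0.01-0.1 Hz filtering step described in the text."""
    sos = butter(order, [low, high], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def glm_task_beta(hbo, task_onsets_s, task_dur_s, fs=FS):
    """Fit a single-channel GLM: HbO ~ boxcar task regressor + intercept.
    Returns the estimated task beta (illustrative only)."""
    n = len(hbo)
    boxcar = np.zeros(n)
    for onset in task_onsets_s:
        start = int(onset * fs)
        boxcar[start:start + int(task_dur_s * fs)] = 1.0
    design = np.column_stack([boxcar, np.ones(n)])        # design matrix
    betas, *_ = np.linalg.lstsq(design, hbo, rcond=None)  # ordinary least squares
    return betas[0]

# Usage with synthetic data: 15 minutes of one channel's HbO signal,
# with a hypothetical 2-minute manipulation starting at t = 600 s.
rng = np.random.default_rng(0)
hbo_raw = rng.normal(size=int(15 * 60 * FS))
hbo_filtered = bandpass(hbo_raw)
print(f"task beta: {glm_task_beta(hbo_filtered, [600.0], 120.0):.4f}")
```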
The regions identified by H1a and H1b in Figure 2 can be distinguished noninvasively, with fNIRS and/or GSR sensors.

The survey data show an increase in frustration and arousal and a decrease in valence. Furthermore, the survey data suggest that a loss in trust was correlated with these emotional changes (although the change was not statistically significant; see Table 1). Figure 6 shows the average valence and arousal reports for days two, three, and four overlaid onto Russell’s circumplex model. The slow Internet and malware manipulations had their expected effect on participants, with the self-report valence and arousal scores for the slow internet manipulation residing in the region of H1a and the malware manipulation residing in the region of H1b.

Figure 6: Valence and arousal reports for each session are overlaid on Russell’s circumplex model. We subtracted the baseline (day two) arousal and valence measures from days two, three, and four to shift the data so that baseline was at point (0, 0) in Russell’s model.

The GSR data are consistent with the arousal survey data, indicating that subjects’ level of arousal increased during the frustrating, slowed internet manipulation. This increase in arousal while frustrated also appears in the previously noted research on arousal and emotion. Furthermore, the fNIRS data show that this frustrating manipulation was accompanied by an increase in DLPFC and frontopolar activation. The DLPFC is involved in working memory, emotion regulation, and self-control [57]. The frontopolar and DLPFC findings are consistent with a wealth of neuroscience research that ties together negative affect (such as frustration) with a need to conduct emotion regulation and with an increase in cognitive load.

The malware manipulation was designed to elicit alarm and lower trust. The survey data show that a significant increase in self-reported frustration and arousal, and a decrease in self-reported valence, were associated with the malware manipulation. The GSR data are consistent with the arousal survey data, indicating that subjects’ level of arousal increased during the alarming malware manipulation. Additionally, the survey data suggest that a statistically significant loss in trust was reported after the malware manipulation (again, see Table 1), indicating that trust was correlated with these emotional state changes.

The fNIRS data show that the malware manipulation was accompanied by an increase in brain activation [45] in the DLPFC, the pars triangularis Broca’s area, and the orbitofrontal cortex. Researchers believe that the pars triangularis Broca’s area is responsible for helping humans turn subjective experiences into speech. Research has found that high levels of arousal (presumably related to alarm if valence is negative) have a direct effect on Broca’s area [41]. It is possible that the alarm experienced by our users caused an increase in activation of Broca’s area while they attempted to find words to comprehend what was occurring. All 11 subjects did use the “Call Researcher” button after they saw the “Blue Screen of Death,” and all 11 subjects reported in their postexperiment interview that they truly believed a virus had been placed on their computer by the xtremebestprice.com website. Anecdotally, we noted the difficulty in producing speech during this condition when one subject simply repeated “What?” again and again to himself in a slow manner during the virus manipulation.
The DLPFC, as mentioned previously, plays a role in emotion regulation and has been found to be activated during negative affect situations [45]. Lastly, the increase in activation in the orbitofrontal cortex makes sense, as activation in that region has been linked to high stress situations [42, 45, 58].

We also note that there was an increase in EDA during both manipulations (as compared to baseline), but unfortunately the GSR data were not able to significantly differentiate between the day four (malware) and day three (slow Internet) user states. However, the fNIRS was able to distinguish between the user states in these two conditions. The slow Internet manipulation was associated with increased DLPFC and frontopolar activation, while the malware manipulation was associated with increased activation in Broca’s area, the DLPFC, the frontopolar region, and the orbitofrontal cortex. The activation in these specific brain regions was somewhat expected, as prior research (described in the literature review section) has tied these regions to emotional states such as alarm, frustration, and stress.

Our first set of hypotheses was thus supported by our results, affirming the notion that it is feasible to not only (1) identify the emotional effects of the computer manipulations but also (2) distinguish between minor and major negative stimuli via subjects’ differential emotional reactions. And, by using the fNIRS device, this distinction can be made relatively noninvasively, while users work on their computer system under normal working conditions.

Our second hypothesis stated that computer users will experience more cognitive load when interacting with a malfunctioning computer than they did when working on a properly functioning machine. Our postsurvey results and our fNIRS results support this hypothesis. The NASA-TLX self-report workload scores showed a significant increase after the slowed internet manipulation, which was also associated with a reduction in trust (see Table 1). The NASA-TLX scores also increased significantly after subjects’ trust was lowered by the malware manipulation. Furthermore, the fNIRS data during the slowed Internet manipulation and during the malware manipulation showed that activation in the frontopolar region, an area associated with cognitive load, was significantly higher during each of these manipulations than at baseline. The frontopolar region is involved in much of the higher order cognitive processing that makes us human, such as executive processing, memory, and planning. Although the elusiveness of this region makes it difficult to determine specific functionality, we can safely assume that the increased HbO in this region during the slowed internet manipulation indicates an overall increase in cognitive load. To repeat, our second hypothesis was supported by our surveys and by the fNIRS device.

### 4.1. Exploratory Analysis of Suspicion

Hypotheses 3 and 3a were hypotheses about the concept of suspicion.

H3. Increases in paracingulate cortex activity will be associated with increased suspicion in individuals as they conduct the cognitively demanding process involved in inferring the intentions of others.

H3a. Suspicion will be accompanied by an increase in physiological indices of arousal.

Given the dearth of research on the construct of suspicion, we conducted some exploratory analyses on this concept.
After subjects were debriefed about the true nature of the study, we asked subjects whether or not they had become suspicious that the researchers were actually responsible for the manipulations. All subjects reported that they believed the malware manipulation on the fourth day was truly a computer virus that they had stumbled across. Interestingly, five subjects reported that they felt suspicious of the experimenters during the slowed Internet manipulation on the third day. One subject described his/her reaction as: “I felt a little suspicious that the experimenters were messing around with the computer, but I kept telling myself I was just being paranoid.” Thus, we post hoc split our sample into subjects who, at the end of day 3, mentioned “suspicion” of the experimenters or other agents (n1 = 5) and those who did not (n2 = 6). Note that these are small sample sizes; more research is needed to further validate these results.

Regarding Hypothesis 3 (suspicion will be associated with increases in paracingulate cortex activity), the participants who interacted with the frustrating slowed Internet manipulation and reported no associated suspicion had significant activation in their DLPFC and their middle temporal gyrus. As noted before, the DLPFC activation suggests an increase in emotion regulation and cognitive load during this condition. The middle temporal gyrus subserves language and semantic memory processing and is connected, through a series of networks, to the frontopolar region. This cognitive load is likely directly related to the semantic processing needed to complete the Internet browsing task. Thus, the fNIRS indicates that the nonsuspicious subjects simply became frustrated by the manipulation, and their brain activity showed this increase in cognitive load and the need for emotion regulation that is associated with frustration. In contrast, for the suspicious subjects, the area of activation was directly above the anterior cingulate cortex (ACC), which is the frontal part of the cingulate cortex that resembles a “collar” around the corpus callosum. The paracingulate cortex is a subset of the ACC. It is located within the ACC, closest to the external brain cortex.

Thus, our fNIRS results lend support to our third hypothesis. We also note that our subject pool was small, and the post hoc splitting of groups into suspicious and nonsuspicious was based on exit interviews. We thus consider these findings tentative, although intriguing.

We also computed summary data for the suspicious and nonsuspicious groups separately (see Table 2). We looked at the self-report scores reported after the day two (baseline trust) session and after the day three (slowed Internet manipulation) session. We compared the GSR EDA measures for each group immediately before the slowed manipulation began, and then 100 seconds after the slow internet manipulation began.

Table 2: P values for t-tests comparing the effects of the day 2 versus day 3 manipulations on the dependent variables in the study. Distrust, Workload, Frustration, Valence, and Arousal are self-report measures; GSR-EDA and fNIRS HbO are sensor measures.

| Groups | Distrust (↑) | Workload (↑) | Frustration (↑) | Valence (↓) | Arousal (↑) | GSR-EDA (↑) | fNIRS HbO (↑) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Not suspicious | 0.038* | 0.029* | 0.007* | 0.026* | 0.104 | 0.15 | Middle temporal gyrus*, DLPFC* |
| Suspicious | no change | 0.003* | 0.008* | 0.038* | 0.035* | 0.012* | Paracingulate cortex* |

*indicates statistical significance.

All empirical trends were as expected. That is, the slowed internet manipulation increased cognitive load, frustration, and arousal and decreased valence for both subgroups.
However, only the suspicious group showed a statistically significant increase in arousal during the slow Internet manipulation (both in terms of self-report and GSR measures), thus supporting Hypothesis 3a.

Lastly, it is worth noting that the construct of trust may also be correlated with the emotional and cognitive changes described above. As described previously, one item in the postsession survey asked subjects to indicate their level of trust while working through the experiment that day. High trust was listed on the left side of the scale (high trust = 1) and high distrust was listed on the far right side of the scale (distrust = 7). As we expected, participants lost trust after the third and fourth sessions (after days two, three, and four the Likert survey averages were M = 4.9, 5.5, and 6.6, resp.). We computed one-tailed, paired comparison t-tests by using day two data as a baseline for our comparisons. As reported in Table 1, participants reported feeling less trusting after the malware manipulation (day four) than they reported after the control, day-two session (P < 0.0065). Loss of trust also occurred after the slow Internet manipulation (day three) when compared to the control, day-two session, but this difference was not statistically significant (P < 0.0950). These results suggest a correlation between trust and the dependent measures reported in this experiment. Future studies should explore this relationship further, by using a more systematic manipulation of trust and by employing more robust surveys from the trust literature to measure changes in the different facets of trust.
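To make the baseline shift behind Figure 6 concrete, the following is a minimal illustration (not the authors’ code) that subtracts the day-two valence and arousal means from each session’s means so that the baseline sits at (0, 0) in Russell’s circumplex; all numeric values are hypothetical placeholders, not study data.

```python
# Hypothetical illustration of the baseline shift used for Figure 6:
# subtract the day-two (baseline) valence/arousal means so baseline sits at (0, 0),
# then report where each manipulation falls relative to baseline.
sessions = {
    "baseline (day 2)":      {"valence": 5.0, "arousal": 3.0},  # placeholder session means
    "slow internet (day 3)": {"valence": 3.2, "arousal": 3.8},
    "malware (day 4)":       {"valence": 4.1, "arousal": 5.1},
}

base_v = sessions["baseline (day 2)"]["valence"]
base_a = sessions["baseline (day 2)"]["arousal"]

for name, scores in sessions.items():
    dv = scores["valence"] - base_v   # negative -> less pleasant than baseline
    da = scores["arousal"] - base_a   # positive -> more aroused than baseline
    print(f"{name}: shifted (valence, arousal) = ({dv:+.1f}, {da:+.1f})")
```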
## 5. Conclusion

One overarching goal in this study was to demonstrate the feasibility of using fNIRS to objectively measure the cognitive and emotional correlates of two computer manipulations in real time. In particular, when we slowed down participants’ Internet speeds we caused reactions similar to being frustrated and/or annoyed. In this scenario, we hypothesized that, when compared to a user’s baseline state, unexpected minor negative stimuli would be associated with a moderate increase in arousal and a large decrease in valence. Our fNIRS, GSR, and survey data supported this hypothesis. When we simulated a computer virus and crash on users’ systems, we designed the manipulation to be deemed severely negative and to be attributed to possible malintent by an external agent. In this scenario, we hypothesized that, when compared to a user’s baseline state, the presence of these unexpected and severely negative stimuli would be associated with a large increase in arousal and only a moderate decrease in valence. Again, our fNIRS, GSR, and survey data supported this hypothesis.

We also looked at suspicion in our datasets in a post hoc manner, with a small subject pool. While we did find support for the claim that the Theory of Mind region of the paracingulate cortex is activated during suspicion, we make those claims with caution. Future work must take a closer look at the state of suspicion. In particular, fNIRS could be used with experimental paradigms that specifically manipulate suspicion in order to get a more reliable measure of that state. The same can be said for the construct of trust. Results from the trust item in our postsession surveys suggested that trust is associated with the emotional and cognitive state changes that we measured, but there is a need to conduct follow-on studies that manipulate trust in a more controlled (e.g., counterbalanced) experimental paradigm.

The results also suggest that fNIRS can measure cognitive activity related to users’ changing cognitive and emotional states during human-computer interactions. This is quite promising, as the noninvasive fNIRS device is easy to set up and comfortable, and it has been implemented wirelessly, showing great promise for future measurements of computer users’ experiences while they work with computer systems in real time. The results also indicate that trust and suspicion are correlated with the cognitive and emotional state changes of the computer users. Future research should attempt to disentangle these findings and to look more specifically at manipulations of trust and suspicion in order to measure those constructs during human-computer interactions.

---

*Source: 101038-2014-04-30.xml*
2014
# Approximate Analytical Solution for a Coupled System of Fractional Nonlinear Integrodifferential Equations by the RPS Method

**Authors:** Ayed Al e’damat
**Journal:** Complexity (2020)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2020/1010382

---

## Abstract

In this work, a modified residual power series method is implemented for providing efficient analytical and approximate solutions for a class of coupled systems of nonlinear fractional integrodifferential equations. The proposed algorithm is based on the concept of residual error functions and a generalized power series formula. The fractional derivative is described in the Caputo sense. To illustrate the potential, accuracy, and efficiency of the proposed method, two numerical applications of the coupled system of nonlinear fractional integrodifferential equations are tested. The numerical results confirm the theoretical predictions and show that the suggested scheme is highly convenient, quite effective, and practically simplifies computational time. Consequently, the proposed method is simple, accurate, and convenient in handling different types of fractional models arising in engineering and physical systems.

---

## Body

## 1. Introduction

The theory of fractional calculus is indeed a generalization of the standard calculus that deals with differentiation and integration of noninteger order, which is utilized to describe various real-world phenomena arising in natural sciences, applied mathematics, and engineering fields with great applications for these tools, including nonlinear oscillation of earthquakes, fractional fluid-dynamic traffic, economics, solid mechanics, viscoelasticity, and control theory [1–7]. The major cause behind this is that the modeling of a specific phenomenon does not depend only on the time instant but also on the historical state, so the differential and integral operators for integer and fractional cases are found to be a superb tool in describing the hereditary and memory properties of different engineering and physical phenomena [8–12]. However, several mathematical forms of the abovementioned issues contain nonlinear fractional integrodifferential equations (FIDEs), and other nonlinear models can be found in [13–18]. Since most fractional differential and integrodifferential equations cannot be solved analytically, it is necessary to find accurate numerical and analytical methods to deal with the complexity of the fractional operators involved in such equations [19–24].

This paper aims to introduce a recent analytical as well as numerical method based on the use of the fractional residual power series (RPS) technique for obtaining the approximate solution for a class of coupled systems of fractional integrodifferential equations in the following form:

$$(1)\quad D^{\beta}v_1(t)=\lambda_1\int_a^b K_1(t,\xi)\,J_1\bigl(v_1(\xi),v_2(\xi)\bigr)\,d\xi+\lambda_2\int_a^t K_2(t,\xi)\,J_2\bigl(v_1(\xi),v_2(\xi)\bigr)\,d\xi+f_1(t),$$
$$\phantom{(1)}\quad D^{\beta}v_2(t)=\lambda_3\int_a^b H_1(t,\xi)\,T_1\bigl(v_1(\xi),v_2(\xi)\bigr)\,d\xi+\lambda_4\int_a^t H_2(t,\xi)\,T_2\bigl(v_1(\xi),v_2(\xi)\bigr)\,d\xi+f_2(t).$$

This is subject to the following initial conditions:

$$(2)\quad v_1(0)=v_{1,0},\qquad v_2(0)=v_{2,0},$$

where $0<\beta\le 1$.

The residual power series method (RPSM) has a wide range of applications, especially in simulating nonlinear problems in a fractional setting, and it has been developed and modified over recent years as a powerful mathematical treatment indispensable in dealing with emerging realistic systems in physics, engineering, and the natural sciences [25–28].
More specifically, it is a modern analytical and approximation technique that relies on the expansion of the fractional power series and residual error functions, and it was first proposed in 2013 to provide analytical series solutions to fuzzy differential equations of the first and second orders while minimizing the residual errors. This method has many advantages and properties: it is an accurate alternative instrument, it requires less effort to achieve results, it provides a rapid convergence rate to the exact solution, it deals directly with different types of nonlinear terms and complex functions, and it allows one to choose any point in the integration domain, making the approximate solution applicable. It also has excellent estimating characteristics that reflect high reliability. Furthermore, it is a systematic tool for finding sequential solutions for several types of nonlinear differential equations and integrodifferential equations of fractional order without having to discretize, linearize, or resort to perturbation. Therefore, it has attracted the attention of many researchers.

Freihet et al. [29] used the fractional power series for solving the fractional stiff system and introduced some basic theorems related to the RPS generalization in the sense of the Caputo fractional derivative. The (2 + 1)-dimensional time-fractional Burgers–Kadomtsev–Petviashvili equation has been solved by the RPS method [30]. In [31], analytic-approximate solutions for nonlinear coupled fractional Jaulent–Miodek equations with energy-dependent Schrödinger potential were obtained using the RPS and q-homotopy analysis methods. Moreover, this method has been successfully applied to both linear and nonlinear ordinary, partial, and fuzzy differential equations [32–37]. Therefore, such adaptations can be used as an alternative technique for solving several nonlinear problems arising in engineering and the sciences.

The outline of this paper is organized as follows: In the next section, we review some basic definitions and theories related to fractional differentiation and fractional power series representations. In Section 3, the solution by the RPS technique is provided. In Section 4, numerical applications are performed to show the accuracy and efficiency of the RPS method. Finally, we give concluding remarks in Section 5.

## 2. Basic Mathematical Concepts

In this section, basic definitions and results related to fractional calculus are given, and the fractional power series concept is also presented.

Definition 1. The Riemann–Liouville fractional integral operator of order $\beta$, over the interval $[a,b]$, for a function $v\in L^1[a,b]$, is defined by

$$(3)\quad J_{a^+}^{\beta}v(x)=\begin{cases}\dfrac{1}{\Gamma(\beta)}\displaystyle\int_a^x \dfrac{v(\varepsilon)}{(x-\varepsilon)^{1-\beta}}\,d\varepsilon, & 0<\varepsilon<x,\ \beta>0,\\[2mm] v(x), & \beta=0.\end{cases}$$

For $\beta,\alpha\ge 0$ and $\mu\ge -1$, the operator $J_{a^+}^{\beta}$ has the following basic properties:

(1) $J_{a^+}^{\beta}(x-a)^{\mu}=\dfrac{\Gamma(\mu+1)}{\Gamma(\mu+1+\beta)}(x-a)^{\mu+\beta}$
(2) $J_{a^+}^{\beta}J_{a^+}^{\alpha}v(x)=J_{a^+}^{\beta+\alpha}v(x)$
(3) $J_{a^+}^{\beta}J_{a^+}^{\alpha}v(x)=J_{a^+}^{\alpha}J_{a^+}^{\beta}v(x)$

Definition 2. For $\beta>0$ and $a,x\in\mathbb{R}$, the Caputo fractional derivative of order $\beta$ is given by

$$(4)\quad D_{a^+}^{\beta}v(x)=\frac{1}{\Gamma(n-\beta)}\int_a^x \frac{v^{(n)}(\varepsilon)}{(x-\varepsilon)^{\beta-n+1}}\,d\varepsilon,\qquad n-1<\beta<n,\ n\in\mathbb{N}.$$

In case $\beta=n$ and $n\in\mathbb{N}$, then $D_{a^+}^{\beta}v(x)=\dfrac{d^{n}}{dx^{n}}v(x)$. The following are some interesting properties of the operator $D_{a^+}^{\beta}$:

(1) For any constant $c\in\mathbb{R}$, $D_{a^+}^{\beta}c=0$.
(2) $D_{a^+}^{\beta}(x-a)^{\mu}=\begin{cases}\dfrac{\Gamma(\mu+1)}{\Gamma(\mu+1-\beta)}(x-a)^{\mu-\beta}, & n-1<\beta\le n,\ \mu>n-1,\ n\in\mathbb{N},\\ 0, & \text{otherwise}.\end{cases}$
(3) $D_{a^+}^{\beta}J_{a^+}^{\beta}v(x)=v(x)$.
(4) $J_{a^+}^{\beta}D_{a^+}^{\beta}v(x)=v(x)-\displaystyle\sum_{m=0}^{n-1}\frac{v^{(m)}(a^+)}{m!}(x-a)^{m}$.

Definition 3. A fractional power series (FPS) representation at $t=a$ has the following form:

$$(5)\quad \sum_{m=0}^{\infty}v_m(x-a)^{m\beta}=v_0+v_1(x-a)^{\beta}+v_2(x-a)^{2\beta}+\cdots,$$

where $0\le n-1<\beta\le n$, $x\ge a$, and $v_m$ are the coefficients of the series.

Theorem 1.
Suppose that $v(x)$ has the following FPS representation at $t=a$:

$$(6)\quad v(x)=\sum_{m=0}^{\infty}v_m(x-a)^{m\beta},$$

where $n-1<\beta\le n$, $a<x<a+R$, $\varphi(x)\in C[a,a+R]$, and $D_{a^+}^{m\beta}\varphi(x)\in C(a,a+R]$ for $m=0,1,2,\ldots$. Then, the coefficients $v_m$ will be of the form $v_m=D_{a^+}^{m\beta}v(a)/\Gamma(m\beta+1)$, where $D_{a^+}^{m\beta}=D_{a^+}^{\beta}\cdot D_{a^+}^{\beta}\cdots D_{a^+}^{\beta}$ ($m$ times).

## 3. Fractional RPS Method for the Coupled System of IDEs

The purpose of this section is to construct the FPS solution for the coupled system of nonlinear fractional integrodifferential equations (1) and (2) by substituting the FPS expansion among the truncated residual functions.

The RPS algorithm proposes the solution of equations (1) and (2) about $x_0=0$ in the form of the following FPS expansion:

$$(7)\quad v_i(x)=\sum_{m=0}^{\infty}v_{i,m}\frac{x^{m\beta}}{\Gamma(m\beta+1)},\qquad i=1,2.$$

The truncated form of equation (7) gives the following $k$th FPS approximate solution:

$$(8)\quad v_{i,k}(x)=\sum_{m=0}^{k}v_{i,m}\frac{x^{m\beta}}{\Gamma(m\beta+1)},\qquad i=1,2.$$

Clearly, if $v_i(0)=v_{i,0}$, $i=1,2$, then expansion (8) can be written as

$$(9)\quad v_{i,k}(x)=v_{i,0}+\sum_{m=1}^{k}v_{i,m}\frac{x^{m\beta}}{\Gamma(m\beta+1)},\qquad i=1,2.$$

Define the residual functions for equations (1) and (2) as follows:

$$(10)\quad \mathrm{Res}_1(x)=D_{0^+}^{\alpha}v_1(t)-\lambda_1\int_a^b K_1(t,\xi)\,J_1\bigl(v_1(\xi),v_2(\xi)\bigr)\,d\xi-\lambda_2\int_a^t K_2(t,\xi)\,J_2\bigl(v_1(\xi),v_2(\xi)\bigr)\,d\xi-f_1(t),$$
$$\phantom{(10)}\quad \mathrm{Res}_2(x)=D_{0^+}^{\alpha}v_2(t)-\lambda_3\int_a^b H_1(t,\xi)\,T_1\bigl(v_1(\xi),v_2(\xi)\bigr)\,d\xi-\lambda_4\int_a^t H_2(t,\xi)\,T_2\bigl(v_1(\xi),v_2(\xi)\bigr)\,d\xi-f_2(t).$$

Also, define the $k$th residual functions as

$$(11)\quad \mathrm{Res}_{1,k}(x)=D_{0^+}^{\alpha}v_{1,k}(t)-\lambda_1\int_a^b K_1(t,\xi)\,J_1\bigl(v_{1,k}(\xi),v_{2,k}(\xi)\bigr)\,d\xi-\lambda_2\int_a^t K_2(t,\xi)\,J_2\bigl(v_{1,k}(\xi),v_{2,k}(\xi)\bigr)\,d\xi-f_1(t),$$
$$\phantom{(11)}\quad \mathrm{Res}_{2,k}(x)=D_{0^+}^{\alpha}v_{2,k}(t)-\lambda_3\int_a^b H_1(t,\xi)\,T_1\bigl(v_{1,k}(\xi),v_{2,k}(\xi)\bigr)\,d\xi-\lambda_4\int_a^t H_2(t,\xi)\,T_2\bigl(v_{1,k}(\xi),v_{2,k}(\xi)\bigr)\,d\xi-f_2(t).$$

According to the RPS algorithm [27–32], we have the following relations:

(i) $\lim_{k\to\infty}\mathrm{Res}_{i,k}(x)=\mathrm{Res}(x)=0$, for each $x\in[0,1]$, $i=1,2$.
(ii) $D_{0^+}^{m\beta}\mathrm{Res}(0)=D_{0^+}^{m\beta}\mathrm{Res}_{i,k}(0)=0$, for each $m=0,1,2,\ldots,k$, $i=1,2$.

To obtain the coefficients $v_{i,m}$, $m=0,1,2,\ldots,k$, $i=1,2$, one solves the following relation:

$$(12)\quad D_{0^+}^{(k-1)\beta}\mathrm{Res}_{i,k}(0)=0,\qquad k=1,2,3,\ldots,\ i=1,2.$$

Algorithm 1. To find the coefficients $v_{i,m}$, $m=1,2,3,\ldots,k$, $i=1,2$, in equation (9), perform the following steps:

(i) Step 1: substitute expansion (9) for $v_{i,k}(x)$, $i=1,2$, into the $k$th residual functions (11), so that
$$\mathrm{Res}_{1,k}(t)=D_{0^+}^{\alpha}\Bigl(v_{1,0}+\sum_{m=1}^{k}v_{1,m}\tfrac{t^{m\beta}}{\Gamma(m\beta+1)}\Bigr)-\lambda_1\int_a^b K_1(t,\xi)\,J_1\Bigl(v_{1,0}+\sum_{m=1}^{k}v_{1,m}\tfrac{\xi^{m\beta}}{\Gamma(m\beta+1)},\,v_{2,0}+\sum_{m=1}^{k}v_{2,m}\tfrac{\xi^{m\beta}}{\Gamma(m\beta+1)}\Bigr)d\xi-\lambda_2\int_a^t K_2(t,\xi)\,J_2\Bigl(v_{1,0}+\sum_{m=1}^{k}v_{1,m}\tfrac{\xi^{m\beta}}{\Gamma(m\beta+1)},\,v_{2,0}+\sum_{m=1}^{k}v_{2,m}\tfrac{\xi^{m\beta}}{\Gamma(m\beta+1)}\Bigr)d\xi-f_1(t),$$
$$\mathrm{Res}_{2,k}(t)=D_{0^+}^{\alpha}\Bigl(v_{2,0}+\sum_{m=1}^{k}v_{2,m}\tfrac{t^{m\beta}}{\Gamma(m\beta+1)}\Bigr)-\lambda_3\int_a^b H_1(t,\xi)\,T_1\Bigl(v_{1,0}+\sum_{m=1}^{k}v_{1,m}\tfrac{\xi^{m\beta}}{\Gamma(m\beta+1)},\,v_{2,0}+\sum_{m=1}^{k}v_{2,m}\tfrac{\xi^{m\beta}}{\Gamma(m\beta+1)}\Bigr)d\xi-\lambda_4\int_a^t H_2(t,\xi)\,T_2\Bigl(v_{1,0}+\sum_{m=1}^{k}v_{1,m}\tfrac{\xi^{m\beta}}{\Gamma(m\beta+1)},\,v_{2,0}+\sum_{m=1}^{k}v_{2,m}\tfrac{\xi^{m\beta}}{\Gamma(m\beta+1)}\Bigr)d\xi-f_2(t).$$
(ii) Step 2: compute the fractional derivative $D_{x_0}^{(k-1)\beta}$ of $\mathrm{Res}_{i,k}(x)$ at $x=x_0$, $i=1,2$.
(iii) Step 3: for $k=1$, obtain the relation through the fact $\mathrm{Res}_{i,1}(x)\big|_{x=0}=0$, $i=1,2$. For $k=2$, obtain the relation through the fact $D_{0^+}^{\beta}\mathrm{Res}_{i,2}(x)\big|_{x=0}=0$, $i=1,2$. For $k=3$, obtain the relation through the fact $D_{0^+}^{2\beta}\mathrm{Res}_{i,3}(x)\big|_{x=0}=0$, $i=1,2$, and so on. For $k=m$, obtain the relation through the fact $D_{0^+}^{(m-1)\beta}\mathrm{Res}_{i,m}(x)\big|_{x=0}=0$, $i=1,2$.
(iv) Step 4: solve the obtained algebraic fractional system $D_{x_0}^{(k-1)\beta}\mathrm{Res}_{i,k}(x_0)=0$, $k=1,2,3,\ldots$, $i=1,2$.
(v) Step 5: substitute the values of $v_{i,m}$ back into equation (8) and then stop.

## 4. Numerical Applications and Simulation

This section tests two applications of the system of nonlinear fractional IDEs to show the efficiency, accuracy, and applicability of the proposed method. In this section, all calculations are performed using Wolfram Mathematica 10.

Example 1. Consider the following nonlinear fractional integrodifferential system:

$$(13)\quad D^{\alpha}v_1(x)+\int_0^x t\bigl(v_1(t)-v_2(t)\bigr)\,dt+\int_0^1 t\,x^{5}\bigl(v_1(t)+v_2(t)\bigr)\,dt=x e^{x}+2x^{5}-\frac{2x^{5}}{e}+e^{-x}(1+x),$$
$$\phantom{(13)}\quad D^{\alpha}v_2(x)+\int_0^x t\bigl(v_1(t)+v_2(t)\bigr)\,dt+\int_0^1 t\,x^{5}\bigl(v_1(t)-v_2(t)\bigr)\,dt=2+e^{x}(x-1)+\frac{2x^{5}}{e}-e^{-x}(2+x).$$

This is subject to the following initial conditions:

$$(14)\quad v_1(0)=v_2(0)=1.$$

The exact solution of this coupled system is $v_1(x)=e^{x}$ and $v_2(x)=e^{-x}$.
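As a quick sanity check on the statement of Example 1 as reconstructed above, the following minimal sketch (not part of the paper) verifies numerically that $v_1(x)=e^{x}$ and $v_2(x)=e^{-x}$ satisfy both equations for the integer-order case $\alpha=1$; the test points and tolerance are arbitrary choices.

```python
# Minimal numerical check (not from the paper): for the integer-order case alpha = 1,
# verify that v1(x) = exp(x), v2(x) = exp(-x) satisfy both equations of Example 1.
import numpy as np
from scipy.integrate import quad

v1 = np.exp
v2 = lambda t: np.exp(-t)

def lhs1(x):
    inner = quad(lambda t: t * (v1(t) - v2(t)), 0.0, x)[0]
    outer = quad(lambda t: t * x**5 * (v1(t) + v2(t)), 0.0, 1.0)[0]
    return v1(x) + inner + outer          # d/dx exp(x) = exp(x)

def rhs1(x):
    return x * np.exp(x) + 2 * x**5 - 2 * x**5 / np.e + np.exp(-x) * (1 + x)

def lhs2(x):
    inner = quad(lambda t: t * (v1(t) + v2(t)), 0.0, x)[0]
    outer = quad(lambda t: t * x**5 * (v1(t) - v2(t)), 0.0, 1.0)[0]
    return -v2(x) + inner + outer         # d/dx exp(-x) = -exp(-x)

def rhs2(x):
    return 2 + np.exp(x) * (x - 1) + 2 * x**5 / np.e - np.exp(-x) * (2 + x)

for x in (0.25, 0.5, 0.75, 1.0):
    assert abs(lhs1(x) - rhs1(x)) < 1e-7
    assert abs(lhs2(x) - rhs2(x)) < 1e-7
print("Example 1 residuals vanish at the tested points for alpha = 1.")
```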
Using the RPS algorithm, thek-th residual functions Res1,kxandRes2,kx are given by(15)Res1,kx=Dαv1,mx+∫0xtv1,mt−v2,mtdt+∫01tx5v1,mt+v2,mtdt−exx+2x5−2x5e+e−x1+x,Res2,kx=Dαv2,mx+∫0xtv1,mt+v2,mtdt+∫01tx5v1,mt−v2,mtdt−2+ex−1+x+2x5e−e−x2+x,where vi,mx has the following form:(16)vi,mx=1+∑m=1kvi,mxmβΓmβ+1,i=1,2. Consequently,(17)Res1,kx=Dα1+∑m=1kv1,mxmβΓmβ+1+∫0xt∑m=1kv1,m−v2,mtmβΓmβ+1dt+∫01tx52+∑m=1kv1,m+v2,mtmβΓmβ+1dt−exx+2x5−2x5e+e−x1+x,Res2,kx=Dα1+∑m=1kv2,mxmβΓmβ+1+∫0xt2+∑m=1kv1,m+v2,mtmβΓmβ+1dt+∫01tx5∑m=1kv1,m−v2,mtmβΓmβ+1dt−2+ex−1+x+2x5e−e−x2+x. Numerical simulation specializes in advanced numeric or approximate methods for finding digital or approximate solutions and estimating errors of these approximations. For such purpose, a few decimal numbers are often recorded to calculate the absolute error, which produces some vague estimates related to significance and units, as well as does not provide clear and explicit evidence regarding the subject matter of the study. Therefore, the comparison of absolute error with the exact value leads to the determination of the relative error as a ratio between the value of absolute error and the exact, which gives some importance and reduces ambiguity for a deeper understanding of the behavior of the approximate solutions. By using the RPS method of Example1, the numerical results of v1t and v2t are shown in Tables 1 and 2 at β=1 and k=8. The results obtained in Tables 1 and 2 show that the error estimate using the proposed method is very small and that the solutions correspond well to each other. In general, it should be noted that increasing the number of iterations k will lead to an improvement in numerical solutions and approaching the exact value. Tables3 and 4 show the sixth approximate solutions of Example 1 at different values of β such that β∈1,0.9,0.8,0.7 with step size 0.1 and k=6. From these tables, it can be concluded that the RPS algorithm and the approximate solutions are consistent with each other and with the exact solutions for all values of t in 0,1. Here, it is worth noting that the closer the value of the fractional derivative approaching the integer case β=1, the closer the approximate solution is to the exact solution.Table 1 Numerical results of exactv1t and approximate v1,8t solutions for Example 1 at β=1 and k=8 over the interval [0, 1]. tExact solutionv1tApproximate solutionv1,8tAbsolute errorv1t−v1,8t01.01.00.00.11.10517091.10517088.4700×10−80.21.22140281.22140002.7581×10−60.31.34985881.34983752.1307×10−50.41.49182471.49173339.1364×10−50.51.64872131.64843752.8377×10−40.61.82211881.82140007.1880×10−40.72.01375272.01217081.5819×10−30.82.22554092.22240003.1409×10−30.92.45960312.45383755.7656×10−31.02.71828182.70833339.9485×10−3Table 2 Numerical results of exactv2t and approximate v2,8t solutions for Example 1 at β=1 and k=8 over the interval [0, 1]. tExact solutionv2tApproximate solutionv2,8tAbsolute errorv2t−v2,8t01.01.00.00.10.90483740.90483758.1900×10−80.20.81873080.81873332.5802×10−60.30.74081820.74083751.9279×10−50.40.670332010.67040007.9954×10−50.50.60653070.60677082.4017×10−40.60.54881160.54940005.8836×10−40.70.49658530.49783751.2522×10−30.80.44932900.45173332.4043×10−30.90.40656970.41083754.2678×10−31.00.36787940.37500007.1206×10−3Table 3 Numerical results of sixth approximate solutionv1,6t for Example 1 at different values of fractional order β, k=6, and t∈0,1. 
tβ=1β=0.9β=0.8β=0.70.01.01.01.01.00.11.05108331.07339611.10738841.16015100.21.12466671.17168231.23921291.33650980.31.22125001.29563811.39742291.53592850.41.34133331.44461541.58015781.75606440.51.48541671.61817481.78607761.99517670.61.65400001.81603452.01420212.25197360.71.84758332.03802172.26378742.52546050.82.06666672.28404012.53425262.81484580.92.31175002.55404952.82513333.11948301.02.58333332.84805183.13605143.4388329Table 4 Numerical results of sixth approximate solutionv2,6t for Example 1 at different values of fractional order β, k=6, and t∈0,1. tβ=1β=0.9β=0.8β=0.70.01.01.01.01.00.10.90483330.87807790.84606940.80884670.20.81866660.78554780.74993950.71236110.30.74049990.70717360.67345200.63910040.40.66933330.63862100.60841740.57727680.50.60416660.57720550.55052850.52144660.60.54399990.52091630.49696430.46841580.70.48783330.46810270.44565630.41605950.80.43466660.41733750.39498000.36284820.90.38350000.36734490.34359910.30762041.00.33333330.31695920.29037840.2494581Example 2. Consider the following fractional integrodifferential equation:(18)Dαv1x+∫0xv2ξdξ+∫01ξv1ξdξ=−cos1+cosx+sin1+sinx,Dαv2x+∫0xv1ξdξ+∫01ξv2ξdξ=cos1−cosx+sin1−sinx. This is subject to the following initial conditions:(19)v10=v20=1. The exact solution of this coupled system isv1x=sinx and v2x=cosx. Using the RPS algorithm, the k-th residual functions Res1,kx and Res2,kx are given by(20)Res1,kx=Dαv1,mx+∫0xv2,mtdt+∫01v1,mtdt−cosx+sinx−cos1+sin1,Res2,kx=Dαv2,mx+∫0xv1,mtdt+∫01v2,mtdt−cosx−sinx+sin1+cos1,where v1,mx and v2,mx have the following form:(21)v1,mx=∑m=1kv1,mxmβΓmβ+1,v2,mx=1+∑m=1kv2,mxmβΓmβ+1. The absolute errors are listed in Tables5 and 6. The results obtained by the RPS method show that the exact solutions are in good agreement with approximate solutions at β=1, k=6, and step size 0.1. Tables 7 and 8 show approximate solutions at different values of β such that β∈0.9,0.8,0.7 and k=6 with step size 0.1. From these tables, one can find that the RPS method provides us with an accurate approximate solution, which is in good agreement with the exact solutions for all values of t in 0,1. Also, it is worth noting that the closer the value of the fractional derivative approaching the integer case β=1, the closer the approximate solution is to the exact solution.Table 5 Numerical results of exactv1t and approximate v1,6t solutions for Example 2 at β=1 and k=6 over the interval [0, 1]. tExact solutionv1tApproximate solutionv1,6tAbsolute errorv1t−v1,6t0.10.09983340.09982577.7205×10−60.20.19866930.19866246.8911×10−60.30.29552020.29552161.3995×10−60.40.38941830.38941523.1144×10−60.50.47942550.47942322.3173×10−60.60.56464250.56464926.7460×10−60.70.64421770.64421992.2046×10−60.80.71735610.71735332.8095×10−60.90.78332690.78332861.7189×10−61.00.84147100.84144752.3458×10−5Table 6 Numerical results of exactv2t and approximate v2,6t solutions for Example 2 at β=1 and k=6 over the interval [0, 1]. tExact solutionv1tApproximate solutionv2,6tAbsolute errorv2t−v2,6t0.10.99500420.99501036.1745×10−60.20.98006660.98005976.9024×10−60.30.95533650.95532709.4863×10−60.40.92106100.92104109.6762×10−60.50.87758260.87757715.4999×10−60.60.82533560.82532698.7210×10−60.70.76484220.76456222.2974×10−50.80.69670670.69669848.3385×10−60.90.62161000.62159861.1407×10−51.00.54030230.54026693.5406×10−5Table 7 Numerical results of approximate solutionv1,6t for Example 2 for different values of β,k=6, and t∈0,1. 
tβ=0.9β=0.8β=0.70.00.00.00.00.10.07495010.10278650.14220480.20.15536180.20210550.26448920.30.24514620.30866290.38973690.40.34455770.42310600.51959600.50.45358020.54543020.65440170.60.57214340.67549410.79416890.70.70016950.81312910.93881290.80.83758540.95816931.08821760.90.98432691.11046091.24226021.01.14033821.26986351.4008207Table 8 Numerical results of approximate solutionv2,6t of Example 2 for different values of β,k=6, and t∈0,1. tβ=0.9β=0.8β=0.70.01.01.01.00.11.00368390.99957980.99027090.20.99181770.97706930.95264300.30.96781310.94094530.90180370.40.93296890.89384120.84138820.50.88811390.83727770.77330640.60.83385920.77229220.69877490.70.77069010.69965780.61865100.80.69901080.61998320.53357870.90.61916850.53376640.44406351.00.53146840.44142620.3505153 ## 5. Concluding Remarks In this work, a class of a coupled system of nonlinear fractional integrodifferential equations of fractional order has been discussed by using the RPS method under the Caputo fractional derivative. The RPS algorithm has been given to optimize the approximate solution by minimizing a residual error function with the help of generalized Taylor formula. To demonstrate the consistency with the theoretical framework, two illustrative examples have been provided. The obtained results indicated that the approximate solutions are coinciding with the exact solution and with each other for different values of the fractional order over the selected nods and parameters. From our results, we can conclude that the RPS method is a systematic and suitable scheme to address many fractional initial value problems with great potential in scientific applications. The calculations have been performed by using Wolfram-Mathematica 10. --- *Source: 1010382-2020-08-13.xml*
1010382-2020-08-13_1010382-2020-08-13.md
19,586
Approximate Analytical Solution for a Coupled System of Fractional Nonlinear Integrodifferential Equations by the RPS Method
Ayed Al e’damat
Complexity (2020)
Mathematical Sciences
Hindawi
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2020/1010382
1010382-2020-08-13.xml
# Evolutionary Algorithms Applied to Antennas and Propagation: A Review of State of the Art

**Authors:** Sotirios K. Goudos; Christos Kalialakis; Raj Mittra
**Journal:** International Journal of Antennas and Propagation (2016)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2016/1010459

---

## Abstract

A review of evolutionary algorithms (EAs) with applications to antenna and propagation problems is presented. EAs have emerged as viable candidates for global optimization problems and have been attracting the attention of the research community interested in solving real-world engineering problems, as evidenced by the fact that a very large number of antenna design problems have been addressed in the literature in recent years by using EAs. In this paper, our primary focus is on Genetic Algorithms (GAs), Particle Swarm Optimization (PSO), and Differential Evolution (DE), though we also briefly review other recently introduced nature-inspired algorithms. An overview of case examples optimized by each family of algorithms is included in the paper.

---

## Body

## 1. Introduction

Several evolutionary algorithms (EAs) have emerged in the past decade that mimic the behavior and evolution of biological entities, inspired mainly by Darwin’s theory of evolution and its natural selection mechanism. The study of evolutionary algorithms began in the 1960s. Several researchers independently developed three mainstream evolutionary algorithms, namely, the Genetic Algorithms [1, 2], Evolutionary Programming [3], and evolution strategies [4] (Figure 1). EAs are widely used for the solution of single and multiobjective optimization problems, and Figure 1 depicts some of the main algorithmic families.

Figure 1 A diagram depicting main families of evolutionary algorithms.

Swarm Intelligence (SI) algorithms are also a special type of EA. SI can be defined as the collective behavior of decentralized and self-organized swarms. SI algorithms include Particle Swarm Optimization (PSO) [5], Ant Colony Optimization [6], and Artificial Bee Colony (ABC) [7]. PSO mimics the swarm behavior of birds flocking and fish schooling [5]. The most common PSO algorithms include the classical Inertia Weight PSO (IWPSO) and the Constriction Factor PSO (CFPSO) [8]. The PSO algorithm is easy to implement and is computationally efficient; it is typically used only for real-valued problems. An option to expand PSO for discrete-valued problems also exists [9]. Other SI algorithms include (i) Artificial Bee Colony (ABC) [7], which models and simulates the behaviors of honey bees foraging for food, and (ii) Ant Colony Optimization (ACO) [6, 10, 11], which is a population-based metaheuristic inspired by the behavior of real ants. Differential Evolution (DE) [12, 13] is a population-based stochastic global optimization algorithm, which has been used in several real-world engineering problems utilizing several variants of the DE algorithm.
An overview of both the PSO and DE algorithms and hybridizations of these algorithms along with other soft computing tools can be found in [14].Other evolutionary techniques applied to antenna problems include the recently proposed Wind Driven Optimization (WDO) [15]; biogeography-based optimization (BBO); Invasive Weed Optimization (IWO) [16–20]; Evolutionary Programming (EP) [21, 22]; and the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) [23, 24].An important theorem which is pertinent to the performance of optimization algorithms is the so-called “No Free Lunch” (NFL) theorem, which pertains to the average behavior of optimization algorithms over given spaces of optimization problems. It has been shown in [25] that when averaged over all possible optimization problems defined over some search space X, no algorithm has a performance advantage over any other. Additionally, in [26] the authors show that it is theoretically impossible to have a best general-purpose universal optimization strategy, and the only way for one strategy to be superior to the others is when it focuses on a particular class of problems. It should be noted that a wide variety of optimization problems arise in the antenna domain and that it is not always an easy task to find the best optimization algorithm for solving each of them. Therefore, it is worthwhile to explore new optimization algorithms if we find that they can work well for the problem at hand. Optimization problems arising in the design and synthesis of antennas can benefit considerably from an application of the EAs, which can lead to unconventional solutions regarding position and excitation of the antenna elements in an array. Furthermore, the elements themselves can be geometrically designed by using the EAs.The purpose of this paper is to briefly describe the algorithms and present their application to antenna design problems found in the recent literature.Section2 reviews the Genetic Algorithms (GAs). Section 3 focuses on PSO and Section 4 is devoted to DE. A brief overview of other EA algorithms is provided in Section 5. We describe the algorithms briefly and then present some statistics regarding the use of the algorithms in the literature. Additionally, for each algorithm some representative papers are referenced in the tables. Some open issues are discussed in Section 6, and conclusions are drawn in Section 7. ## 2. Genetic Algorithms GAs, the most popular EAs, are inspired by Darwin’s natural selection. GAs can be real or binary-coded. In a binary-coded GA each chromosome encodes a binary string [27, 28]. The most commonly used operators are crossover, mutation, and selection. The selection operator chooses two parent chromosomes from the current population according to a selection strategy. Most popular selection strategies include roulette wheel and tournament selection. The crossover operator combines the two parent chromosomes in order to produce one new child chromosome. The mutation operator is applied with a predefined mutation probability to a new child chromosome.A search in the Scopus database shows that there are 65762 conference papers and 94510 journal papers related to GAs from 1977 to 2016. Figure2 shows the number of papers related to GAs and antennas over the last 15 years. Additionally, the search in the same database using the keywords GAs and Antennas reveals a total number of 2807 papers (both journal and conference). 
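To make the operators just described concrete, the following minimal Python sketch (illustrative only, not taken from the cited papers) runs one generation of a binary-coded GA with tournament selection, one-point crossover, and bit-flip mutation; the population size, chromosome length, and probabilities are arbitrary choices.

```python
# Illustrative sketch of one generation of a binary-coded GA:
# tournament selection, one-point crossover, and bit-flip mutation.
import random

def one_generation(population, fitness, p_cross=0.8, p_mut=0.01):
    def tournament():
        a, b = random.sample(population, 2)
        return a if fitness(a) > fitness(b) else b

    new_pop = []
    while len(new_pop) < len(population):
        p1, p2 = tournament(), tournament()          # selection
        child = p1[:]
        if random.random() < p_cross:                # one-point crossover
            cut = random.randrange(1, len(p1))
            child = p1[:cut] + p2[cut:]
        child = [bit ^ (random.random() < p_mut)     # bit-flip mutation
                 for bit in child]
        new_pop.append(child)
    return new_pop

# Toy example: maximize the number of ones in a 16-bit chromosome.
pop = [[random.randint(0, 1) for _ in range(16)] for _ in range(20)]
for _ in range(30):
    pop = one_generation(pop, fitness=sum)
print(max(sum(c) for c in pop))
```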
Tutorials and applications of GAs to electromagnetics can be found in [29, 30].

Figure 2 Papers using GAs for antenna design from 1993 to May 2016.

Table 1 lists selected papers that use GAs (mostly binary-coded) for antenna design. Several papers exist in the literature that apply GAs for array synthesis. A binary-coded GA has been applied for linear and planar array synthesis in [31, 32], among others, while a GA that uses decimal operators has been used in [33]. The problem of array thinning has been addressed in the literature using binary-coded GAs in [34], while the array-failure correction problem has been addressed by using the real-coded GAs in [35]. A simple GA has been used for the synthesis of time-modulated arrays in [36]. Different antenna types have been designed using the GAs, for example, wire antennas in [37–40], patch antennas in [41, 42], and RFID tags in [43]. Other GA-type algorithms, such as a mixed integer GA, have also been applied in [44] to several different antenna design problems. An improved GA has been used for the design of linear aperiodic arrays [138], and a binary-coded GA which uses two-point crossover (instead of the usual one point) has been used in [45] for the design of patch antennas with low RCS. A GA combined with chaos theory (chaos genetic algorithm) is used for designing a T-shaped MIMO radar antenna array in [46]. A design procedure based on a GA is given in [47] for optimizing a patch antenna for operation in the four frequency bands GSM1800, GSM1900, LTE2300, and Bluetooth.

Table 1 Selected journal papers that use GAs for antenna design.

| Problem | Algorithm(s) used | Ref. |
|---|---|---|
| Thinned arrays | Binary-coded GA | [34] |
| Wire antennas and Yagi-Uda antenna | Binary-coded GA | [37–40] |
| Array synthesis | Decimal operators GA | [33] |
| Array-failure correction | Real-coded GA | [35] |
| Linear and planar array synthesis | Binary-coded GA and hybrid GA with simulated annealing (SA) | [31, 32] |
| Broadband patch antennas and dual-band patch antennas | Binary-coded GA and parallel GA | [41, 42] |
| RFID tags | Binary-coded GA | [43] |
| Time-modulated linear arrays | Simple GA | [36] |
| Linear array design, thinned subarrays, and circularly polarized patch antenna | Mixed integer GA | [44] |
| Linear aperiodic arrays design | Improved GA | [138] |
| Low RCS slot patch antennas | Binary-coded GA and two-point crossover | [45] |
| T-shaped MIMO radar antenna array | Chaos genetic algorithm | [46] |
| Quad-band patch antenna | Binary-coded GA | [47] |

## 3. Particle Swarm Optimization (PSO)

In PSO, the particles move in the search space, where each particle position is updated by two optimum values. The first one is the best solution (fitness) that has been achieved so far by the particle itself; this value is called pbest. The other one is the global best value obtained so far by any particle in the swarm; this value is called gbest. After finding the pbest and gbest, the velocity update rule is an important factor in a PSO algorithm.
The most commonly used formulation updates the velocity of each particle in every problem dimension with the following equation:

$$u_{G+1,n}^{i} = w\,u_{G,n}^{i} + c_1\,\mathrm{rand}_1(0,1)\left(pbest_{G+1,n}^{i} - x_{G,n}^{i}\right) + c_2\,\mathrm{rand}_2(0,1)\left(gbest_{G+1,n}^{i} - x_{G,n}^{i}\right), \tag{1}$$

where $u_{G+1,n}^{i}$ is the velocity of the $i$th particle in the $n$th dimension, $G+1$ denotes the current iteration and $G$ the previous one, $x_{G,n}^{i}$ is the particle position in the $n$th dimension, $\mathrm{rand}_1(0,1)$ and $\mathrm{rand}_2(0,1)$ are uniformly distributed random numbers in $[0,1]$, $w$ is a parameter known as the inertia weight, and $c_1$ and $c_2$ are the learning factors.

The parameter $w$ (inertia weight) is a constant between 0 and 1. It governs how strongly the particle keeps moving along its previous velocity without any external influence: the higher the value of $w$, or the closer it is to unity, the less the particle is affected by pbest and gbest. The inertia weight therefore controls the impact of the previous velocity: a large inertia weight favors exploration, while a small inertia weight favors exploitation. The parameter $c_1$ represents the influence of the particle memory on its best position, while the parameter $c_2$ represents the influence of the swarm best position. Therefore, in the Inertia Weight PSO (IWPSO) algorithm the parameters to be determined are the swarm size (or population size), usually 100 or less, the cognitive learning factor $c_1$ and the social learning factor $c_2$ (usually both set equal to 2.0), the inertia weight $w$, and the maximum number of iterations. It is common practice to linearly decrease the inertia weight from 0.9 or 0.95 down to 0.4.

Clerc [8] has suggested the use of a different velocity update rule, which introduces a parameter $K$ called the constriction factor. The role of the constriction factor is to ensure convergence when all the particles have stopped their movements. The velocity update rule is then given by

$$u_{G+1,n}^{i} = K\left[u_{G,n}^{i} + c_1\,\mathrm{rand}_1(0,1)\left(pbest_{G+1,n}^{i} - x_{G,n}^{i}\right) + c_2\,\mathrm{rand}_2(0,1)\left(gbest_{G+1,n}^{i} - x_{G,n}^{i}\right)\right], \qquad K = \frac{2}{\left|2 - \varphi - \sqrt{\varphi^{2} - 4\varphi}\right|}, \tag{2}$$

where $\varphi = c_1 + c_2$ and $\varphi > 4$. This PSO algorithm variant is known as the Constriction Factor PSO (CFPSO).
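For concreteness, a minimal Python sketch (illustrative only, not from the cited papers) of the velocity update in (1) and the constriction factor in (2); the numeric values follow the typical settings quoted in this paper, and all function names are assumptions of this sketch.

```python
# Illustrative sketch of the IWPSO velocity update (1) and the CFPSO
# constriction factor (2) for a single particle and dimension.
import math, random

def iwpso_velocity(u, x, pbest, gbest, w=0.729, c1=2.0, c2=2.0):
    # u: current velocity, x: current position (one dimension)
    return (w * u
            + c1 * random.uniform(0, 1) * (pbest - x)
            + c2 * random.uniform(0, 1) * (gbest - x))

def constriction_factor(c1=2.05, c2=2.05):
    phi = c1 + c2                       # requires phi > 4
    return 2.0 / abs(2.0 - phi - math.sqrt(phi**2 - 4.0 * phi))

print(constriction_factor())            # ~0.7298, as quoted in Section 6
```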
The Scopus database shows a total number of 39673 papers from 1995 to April 2016 for PSO-related papers (including 19570 journal papers and 18355 conference papers). A refined search for antenna papers reveals 519 journal papers and 515 conference papers from 2002 to May 2016, indicating the popularity of the PSO algorithm. Figure 3 shows the distribution of papers on PSO design of antennas from 2002 to 2016.

Figure 3 Papers using PSO for antenna design from 2002 to May 2016.

Table 2 lists selected PSO antenna papers. Introductory and tutorial papers that introduce the application of the PSO for antenna design are [48, 49]. Additionally, the problem of sidelobe suppression of linear arrays using the PSO has been addressed in [50, 51]. A comparison of the performance of the PSO and GA algorithms, as applied to the problem of phased arrays design, has been given in [52], while a comparative study of PSO, GAs, and DE for circular array design has been reported in [53]. A performance comparison of PSO, GA, and a hybrid GA-PSO has been provided in [54], where they have been applied to the problem of designing profiled corrugated horn antennas. The application of PSO to conformal phased arrays design has been shown in [55]. A coplanar waveguide-fed planar monopole antenna for multiband operation has been designed in [56] using the PSO algorithm in conjunction with the Method of Moments (MoM). The authors in [57] use PSO for reconfigurable phase-differentiated array design. A parallelized PSO optimizer has been used in conjunction with the Finite Difference Time Domain (FDTD) method for multiband patch antenna designs in [58], and the problem of minimizing power loss in time-modulated arrays has been addressed in [59]. Boundary conditions play an important role in the application of the PSO, and the performances of different boundary conditions have been tested on a 16-element array antenna in [60], based on mathematical benchmark functions. All of the above papers use the original IWPSO or the CFPSO, although several variants of PSO are available. A new PSO variant called the quantum PSO (QPSO) algorithm has been proposed and applied in [61] to find a set of infinitesimal dipoles which produces the same near and far fields as a circular dielectric resonator antenna (DRA). The interesting point about QPSO is that it contains only one control parameter. The comprehensive learning PSO (CLPSO) is applied to Yagi-Uda antenna design in [62] and to sidelobe suppression of unequally spaced arrays in [63]. A modified PSO algorithm has been applied in [64] for the synthesis of thinned planar circular arrays. A Feedback Particle Swarm Optimization (FPSO) algorithm is proposed in [65] for SLL minimization and null control of linear arrays; the FPSO is based on a nonlinear inertia weight scheme.

Table 2 Selected journal papers that use PSO for antenna design.

| Problem | Algorithm(s) used | Ref. |
|---|---|---|
| Linear arrays design | Original PSO and CLPSO | [50, 51, 63] |
| Profiled corrugated horn antenna design | Original PSO and GA and hybrid GA-PSO | [54] |
| Phased arrays design | Original PSO and GA comparison | [52] |
| Circular array design | Original PSO and DE and GA comparison | [53] |
| Coplanar waveguide-fed planar monopole antenna | Original PSO with MoM | [56] |
| Conformal phased arrays design | Original PSO | [55] |
| Reconfigurable phase-differentiated array design | Original PSO | [57] |
| Multiband and wideband patch antenna design | Parallel PSO/FDTD | [58] |
| Time-modulated arrays design | Original PSO | [59] |
| Infinitesimal dipoles array synthesis | QPSO | [61] |
| Yagi-Uda antenna | CLPSO | [62] |
| Thinned planar circular arrays | Modified PSO | [64] |
| Adaptive beamforming of linear antenna arrays | Adaptive Mutated Boolean PSO | [139] |
| Dual-band patch antenna design | Boolean PSO | [140] |
| Array design | Chaotic PSO | [141] |
| Square thinned arrays | Hybrid PSO and Hadamard difference sets | [142] |
| Linear array design | Feedback Particle Swarm Optimization (FPSO) with nonlinear inertia weight | [65] |

## 4. Differential Evolution (DE)

In DE, the initial population evolves in each generation with the use of three operators: mutation, crossover, and selection. Several DE variants or strategies exist in the literature [13, 66] that depend on the form of these operators. The choice of the best DE strategy depends on the problem type [67]. Common DE strategies for the generation of trial vectors include DE/rand/1/bin, DE/rand-to-best/2/bin, and DE/rand/2/bin. In these strategies a mutant vector $\bar v_{G+1,i}$ for each target vector $\bar x_{G,i}$ is computed by

$$\begin{aligned}
\text{DE/rand/1/bin:}\quad & \bar v_{G+1,i} = \bar x_{G,r_1} + F\left(\bar x_{G,r_2} - \bar x_{G,r_3}\right), && r_1 \ne r_2 \ne r_3,\\
\text{DE/rand-to-best/2/bin:}\quad & \bar v_{G+1,i} = \bar x_{G,i} + F\left(\bar x_{G,best} - \bar x_{G,i}\right) + F\left(\bar x_{G,r_1} - \bar x_{G,r_2}\right) + F\left(\bar x_{G,r_3} - \bar x_{G,r_4}\right), && r_1 \ne r_2 \ne r_3 \ne r_4,\\
\text{DE/rand/2/bin:}\quad & \bar v_{G+1,i} = \bar x_{G,r_1} + F\left(\bar x_{G,r_2} - \bar x_{G,r_3}\right) + F\left(\bar x_{G,r_4} - \bar x_{G,r_5}\right), && r_1 \ne r_2 \ne r_3 \ne r_4 \ne r_5,
\end{aligned} \tag{3}$$

where $r_1, r_2, r_3, r_4, r_5$ are randomly chosen indices from the population that are different from the index $i$, and $F$ is a mutation control parameter. After mutation, the crossover operator is applied to generate a trial vector $\bar u_{G+1,i} = \left(u_{G+1,1}^{i}, u_{G+1,2}^{i}, \ldots, u_{G+1,j}^{i}, \ldots, u_{G+1,D}^{i}\right)$ whose coordinates are given by

$$u_{G+1,j}^{i} =
\begin{cases}
v_{G+1,j}^{i}, & \text{if } \mathrm{rand}_j(0,1) \le CR \text{ or } j = rn_i,\\
x_{G,j}^{i}, & \text{if } \mathrm{rand}_j(0,1) > CR \text{ and } j \ne rn_i,
\end{cases} \tag{4}$$

where $j = 1, 2, \ldots, D$, $\mathrm{rand}_j(0,1)$ is a number from a uniform random distribution on the interval $[0,1]$, $rn_i$ is a randomly chosen index from $\{1, 2, \ldots, D\}$, and $CR$ is the crossover constant from the interval $[0,1]$. DE uses a greedy selection operator, which for minimization problems is defined by

$$\bar x_{G+1,i} =
\begin{cases}
\bar u_{G+1,i}, & \text{if } f\!\left(\bar u_{G+1,i}\right) < f\!\left(\bar x_{G,i}\right),\\
\bar x_{G,i}, & \text{otherwise},
\end{cases} \tag{5}$$

where $f(\bar u_{G+1,i})$ and $f(\bar x_{G,i})$ are the fitness values of the trial and the old vector, respectively. The new trial vector $\bar u_{G+1,i}$ replaces the old vector $\bar x_{G,i}$ only when it produces a lower objective-function value than the old one; otherwise, the old vector remains in the next generation. The stopping criterion for the DE is usually the generation number or the number of objective-function evaluations.

A search in the Scopus database reveals 38,097 documents related to DE (27,482 journal papers and 7831 conference papers). A refined search for antenna-related papers using the DE shows 221 journal papers and 152 conference papers. Figure 4 shows how the DE antenna papers are distributed from 2002 to May 2016.

Figure 4 Papers using DE for antenna design from 2002 to May 2016.

A general review paper of the use of DE in electromagnetics has been reported in [68], and a book [69] on DE implementation in electromagnetics has been published. Table 3 lists some representative papers for antenna design. The most commonly used DE strategy for antenna design is the DE/rand/1/bin variant. The above-mentioned strategy has been applied, among others, to the problem of linear array design in [70]; synthesis of difference patterns of monopulse antennas in [71]; array pattern nulling in [72]; and conformal array design in [73]. Several other DE strategies have been applied to antenna problems. In [74], the authors have introduced a new DDE/BoR/1/bin strategy for linear array synthesis, while a modified DE strategy (MDES) has been used in [75] for the same problem. The strategy DE/best/1/bin has been applied in [76–78] for time-modulated array design. Self-adaptive DE algorithms have also been applied to antenna problems, including jDE [79] in [80–82]; SaDE [83] in [84, 85]; and CODE-EIG in [86]. Multiobjective DE algorithms are another large group of DE algorithms applied to antenna problems. These include applications to linear array design in [87], to subarray design in [88], and to Yagi-Uda antennas [89]. DE algorithms hybridized with other methods are also commonly found in the literature; for instance, the DE has been used with the Method of Moments in [90] for the design of low Radar Cross Section (RCS) antennas.

Table 3 Selected journal papers that use DE for antenna design.

| Problem | Algorithm(s) used | Ref. |
|---|---|---|
| Linear arrays design | DE/rand/1/bin | [70] |
| Linear arrays design | DDE/BoR/1/bin | [74] |
| Linear arrays design and E-shaped patch antenna | jDE [79] | [80–82] |
| Difference patterns of monopulse antennas | DE/rand/1/bin | [71] |
| Array pattern nulling | DE/rand/1/bin | [72] |
| Linear arrays design | Modified DE strategy (MDES) | [75] |
| Linear arrays design | Multiobjective DE | [87] |
| Array design | Improved DE | [143] |
| Shaped beam synthesis | CODE-EIG | [86] |
| Conformal arrays design | DE/rand/1/bin | [73] |
| Horn antenna design and sparse linear arrays synthesis | SaDE [83] | [84, 85] |
| Subarray design | Memetic GDE3 | [88] |
| Time-modulated arrays design | DE/best/1/bin | [76–78] |
| Monopulse antenna with a subarray weighting | Hybrid DE | [144] |
| Yagi-Uda antenna | GDE3 | [89] |
| Low Radar Cross Section (RCS) slot patch antenna design | DE/MoM | [90] |

## 5. Other Innovative Algorithms

Several new EAs have emerged during the last ten years that are based on different evolutionary models of animals, insects, or other biological entities. Artificial Bee Colony (ABC) [7] is a recently proposed SI algorithm, which has been applied to several real-world engineering problems. The ABC algorithm models and simulates the honey bee behavior in food foraging. In the ABC algorithm, a potential solution to the optimization problem is represented by the position of a food source, while the nectar amount of the food source corresponds to the quality (objective-function fitness) of the associated solution. The ABC algorithm has been successfully applied to several problems in wireless communications [91]. A number of different variants of the ABC that improve the original algorithm have been proposed in [92]. A search in the Scopus database shows that there are more than 3000 papers on ABC, of which 48 use the ABC for antenna design. These include array design [93–96]; calculation of the resonant frequency of patch antennas [97]; and RFID tag design [98–100].

Ant Colony Optimization (ACO) [6, 10, 11] is a population-based metaheuristic which was introduced by Dorigo et al. [11], inspired by the behavior of real ants. The algorithm is based on the fact that ant colonies can find the shortest path between their nest and a food source just by depositing and reacting to pheromones while they are exploring their environment. ACO is suitable for solving combinatorial optimization problems, which are common in antennas. The search in Scopus shows more than 10,000 papers on ACO, with 169 papers dealing with the topic of antenna design. The topic of linear array synthesis has been presented in [101]; patch antenna design in [102]; sum-difference pattern synthesis in [103]; and thinned array design in [102]. A modified touring ant colony optimizer has been used for shaped beam synthesis [104] and for pattern nulling in [105–107]. The authors in [108] present a comparative study of simulated annealing (SA), GA, and ACO on self-structured antenna design.

Biogeography-based optimization (BBO) [109] is another recent addition to the EAs. BBO is based on mathematical models that describe how species migrate from one island to another, how new species arise, and how species become extinct. The way the problem solution is found is analogous to nature’s way of distributing species. The search in the Scopus database yielded 654 papers that refer to BBO from 2007 to May 2016, with 40 papers that use BBO for antenna design from 2009 onwards. BBO has been applied to Yagi-Uda design [110] and array synthesis [111–117]. Additionally, a hybrid DE/BBO algorithm has been used for the design of a reconfigurable antenna array with discrete phase shifters in [118].

Evolutionary Programming (EP) was originally proposed by Fogel in [3]. EP is based on the idea of representing individuals phenotypically as finite state machines capable of responding to environmental stimuli. The representations used in EP are problem-dependent; the most common representation is a fixed-length real-valued vector. In EP, the vectors evolve but do not exchange information between them: there are no crossover operators, only mutation operators. EP has been applied to several problems in electromagnetics [21, 119, 120]. Among others, EP has been applied to patch antenna design [121] and to wideband parasitic dipole arrays [122].

Hansen and Ostermeier [123] introduced the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). CMA-ES is a second-order approach that estimates a positive-definite covariance matrix within an iterative procedure. More details about the CMA-ES algorithm and its performance can be found in [124, 125]. In [23, 24] the authors present an approach for mixed-parameter optimization based on CMA-ES, which is successfully applied to several design problems in electromagnetics. This approach is based on the concepts presented in [22] for EP. The CMA-ES algorithm has recently been applied to design problems in antennas and electromagnetics in general [126–129].

### 5.1. Other Artificial Intelligence Methods

Other artificial intelligence methods and techniques include Artificial Neural Network (ANN) architectures [130], which are a family of models inspired by biological neural networks. ANNs are used to estimate or approximate functions that can depend on a large number of inputs and are generally unknown. The applications of ANNs to electromagnetics are a popular topic in the literature [131]. Deep learning is a type of machine learning based on a set of algorithms that attempt to model high-level abstractions in data by using a deep graph with multiple processing layers, composed of multiple linear and nonlinear transformations [132–135]. These methods have dramatically improved the state of the art in speech recognition, visual object recognition, object detection, and many other domains such as drug discovery and genomics [132].
## 6. Discussion: Open Issues

The choice of the best algorithm for every problem requires the consideration of its specific characteristics. Once the algorithm is chosen, the key issue becomes the selection of the algorithm control parameters, which in most cases are also problem-dependent. A practical initial approach is to use control parameters that commonly perform well regardless of the characteristics of the problem to be solved.

For real-coded GAs, typical values are 0.9 for the crossover probability and $1/N$ for the mutation probability, where $N$ is the problem dimension. For the binary-coded GA, typical values for the crossover and mutation probabilities lie in the ranges [0.8, 1.0] and [0.01, 0.1], respectively.

In the PSO algorithms, $c_1$ and $c_2$ are set equal to 2.05. For CFPSO, these values result in $K = 0.7298$. For IWPSO, it is common practice to linearly decrease the inertia weight from 0.95 to 0.4. The velocity is updated asynchronously, which means that the global best position is updated at the moment it is found.

For the DE, Storn and Price [13] have suggested choosing the control parameters $F$ and $CR$ from the intervals [0.5, 1] and [0.8, 1], respectively, and setting $NP = 10D$. The correct selection of these control parameter values is frequently a problem-dependent task, and multiple algorithm runs are often required for fine-tuning them. There are several DE algorithms in the literature that self-adapt these control parameters. Another question is the selection of the appropriate strategy for the generation of trial vectors, which requires additional computational time when done by a trial-and-error search procedure. Therefore, it is not always an easy task to fine-tune the control parameters and strategy. Since finding suitable control parameter values and strategies in this way is often very time-consuming, there has been an increasing trend among researchers toward designing new adaptive and self-adaptive DE variants. A DE strategy (jDE) that self-adapts the control parameters has been introduced in [79]. This algorithm has been applied successfully to a microwave absorber design problem [136] and to linear array synthesis [82]. SaDE, a DE algorithm that self-adapts both control parameters and strategy based on learning experiences from previous generations, is presented in [83].
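As an illustration of where these control parameters enter the algorithm, the following minimal Python sketch (not from the cited papers) performs one DE/rand/1/bin step following (3)–(5) of Section 4, with $F$ and $CR$ taken from the ranges suggested above and $NP = 10D$; the toy objective is an arbitrary choice of this sketch.

```python
# Illustrative sketch of one DE/rand/1/bin step (mutation, binomial crossover,
# greedy selection) for a single target vector, following (3)-(5).
import random

def de_rand_1_bin_step(pop, i, f_obj, F=0.7, CR=0.9):
    D = len(pop[i])
    r1, r2, r3 = random.sample([k for k in range(len(pop)) if k != i], 3)
    # Mutation, (3): v = x_r1 + F * (x_r2 - x_r3)
    v = [pop[r1][j] + F * (pop[r2][j] - pop[r3][j]) for j in range(D)]
    # Binomial crossover, (4)
    jrand = random.randrange(D)
    u = [v[j] if (random.random() <= CR or j == jrand) else pop[i][j]
         for j in range(D)]
    # Greedy selection, (5)
    return u if f_obj(u) < f_obj(pop[i]) else pop[i]

# Toy example: minimize the sphere function in D = 5 dimensions, NP = 10*D.
sphere = lambda x: sum(t * t for t in x)
pop = [[random.uniform(-5, 5) for _ in range(5)] for _ in range(50)]
for _ in range(200):
    pop = [de_rand_1_bin_step(pop, i, sphere) for i in range(len(pop))]
print(min(sphere(x) for x in pop))
```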
The research domain of evolutionary algorithms is growing rapidly. A current and growing research trend in evolutionary algorithms is their hybridization with local optimizers. These algorithms, called Memetic Algorithms (MAs) [90], are inspired by Dawkins’ notion of the meme. The advantage of such an approach is that the use of a local search optimizer ensures that specific regions of the search space can be explored using fewer evaluations and that good-quality solutions can be generated early during the search. Furthermore, the global search algorithm generates good initial solutions for the local search. MAs can be highly efficient due to this combination of global exploration and local exploitation.

An interesting idea has been presented in [137], where the authors conceptually present the equivalences of various popular EAs such as GAs, PSO, DE, and BBO. Their basic conclusion is that the conceptual equivalence of the algorithms is supported by the fact that modifications in algorithms result in very different performance levels.

Finally, another concern that is pertinent to all of the above algorithms is the definition of the stopping criterion. Usually, this is the iteration number or the number of objective-function evaluations. Additionally, in order to avoid stagnation, another criterion can be set so that the algorithm stops after a number of generations in which the objective function does not further improve.

## 7. Conclusion

A brief survey of different evolutionary algorithms and their application to different problems in antennas and propagation has been presented in this review paper. It is to be noted that, among the evolutionary algorithms used in the literature, the GA and SI algorithms are among those most commonly utilized. The bibliography statistics show that GAs, PSO, and DE are among the most popular algorithms for antenna design. It must be pointed out that several variants of these algorithms have also been employed, along with other nature-inspired algorithms that have emerged; most notably, ABC, ACO, BBO, EP, and CMA-ES have been applied to several antenna design problems. The body of literature on EAs applied to antenna design is by now quite extensive, and it continues to grow fast with more innovative algorithms. The algorithms presented above have proven effective in solving specific antenna design problems, although the NFL theorem asserts that a universally best global optimizer does not exist. The search for new algorithms and their application to antenna design problems is an ongoing research process, which is likely to continue unabated for some time to come.

---
*Source: 1010459-2016-08-25.xml*
--- ## Abstract A review of evolutionary algorithms (EAs) with applications to antenna and propagation problems is presented. EAs have emerged as viable candidates for global optimization problems and have been attracting the attention of the research community interested in solving real-world engineering problems, as evidenced by the fact that very large number of antenna design problems have been addressed in the literature in recent years by using EAs. In this paper, our primary focus is on Genetic Algorithms (GAs), Particle Swarm Optimization (PSO), and Differential Evolution (DE), though we also briefly review other recently introduced nature-inspired algorithms. An overview of case examples optimized by each family of algorithms is included in the paper. --- ## Body ## 1. Introduction Several evolutionary algorithms (EAs) have emerged in the past decade that mimic the behavior and evolution of biological entities inspired mainly by Darwin’s theory of evolution and its natural selection mechanism. The study of evolutionary algorithms began in the 1960s. Several researchers independently developed three mainstream evolutionary algorithms, namely, the Genetic Algorithms [1, 2], Evolutionary Programming [3], and evolution strategies [4] (Figure 1). EAs are widely used for the solution of single and multiobjective optimization problems and Figure 1 depicts some of the main algorithmic families.Figure 1 A diagram depicting main families of evolutionary algorithms.Swarm Intelligence (SI) algorithms are also a special type of EA. The SI can be defined as the collective behavior of decentralized and self-organized swarms. SI algorithms include Particle Swarm Optimization (PSO) [5], Ant Colony Optimization [6], and Artificial Bee Colony (ABC) [7]. PSO mimics the swarm behavior of birds flocking and fish schooling [5]. The most common PSO algorithms include the classical Inertia Weight PSO (IWPSO) and the Constriction Factor PSO (CFPSO) [8]. The PSO algorithm is easy to implement and is computationally efficient; it is typically used only for real-valued problems. An option to expand PSO for discrete-valued problems also exists [9]. Other SI algorithms include (i) Artificial Bee Colony (ABC) [7], which models and simulates the behaviors of honey bees foraging for food, and (ii) Ant Colony Optimization (ACO) [6, 10, 11], which is a population-based metaheuristic inspired by the behavior of real ants.Differential Evolution (DE) [12, 13] is a population-based stochastic global optimization algorithm, which has been used in several real-world engineering problems utilizing several variants of the DE algorithm. An overview of both the PSO and DE algorithms and hybridizations of these algorithms along with other soft computing tools can be found in [14].Other evolutionary techniques applied to antenna problems include the recently proposed Wind Driven Optimization (WDO) [15]; biogeography-based optimization (BBO); Invasive Weed Optimization (IWO) [16–20]; Evolutionary Programming (EP) [21, 22]; and the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) [23, 24].An important theorem which is pertinent to the performance of optimization algorithms is the so-called “No Free Lunch” (NFL) theorem, which pertains to the average behavior of optimization algorithms over given spaces of optimization problems. It has been shown in [25] that when averaged over all possible optimization problems defined over some search space X, no algorithm has a performance advantage over any other. 
Additionally, in [26] the authors show that it is theoretically impossible to have a best general-purpose universal optimization strategy, and the only way for one strategy to be superior to the others is when it focuses on a particular class of problems. It should be noted that a wide variety of optimization problems arise in the antenna domain and that it is not always an easy task to find the best optimization algorithm for solving each of them. Therefore, it is worthwhile to explore new optimization algorithms if we find that they can work well for the problem at hand. Optimization problems arising in the design and synthesis of antennas can benefit considerably from an application of the EAs, which can lead to unconventional solutions regarding position and excitation of the antenna elements in an array. Furthermore, the elements themselves can be geometrically designed by using the EAs.The purpose of this paper is to briefly describe the algorithms and present their application to antenna design problems found in the recent literature.Section2 reviews the Genetic Algorithms (GAs). Section 3 focuses on PSO and Section 4 is devoted to DE. A brief overview of other EA algorithms is provided in Section 5. We describe the algorithms briefly and then present some statistics regarding the use of the algorithms in the literature. Additionally, for each algorithm some representative papers are referenced in the tables. Some open issues are discussed in Section 6, and conclusions are drawn in Section 7. ## 2. Genetic Algorithms GAs, the most popular EAs, are inspired by Darwin’s natural selection. GAs can be real or binary-coded. In a binary-coded GA each chromosome encodes a binary string [27, 28]. The most commonly used operators are crossover, mutation, and selection. The selection operator chooses two parent chromosomes from the current population according to a selection strategy. Most popular selection strategies include roulette wheel and tournament selection. The crossover operator combines the two parent chromosomes in order to produce one new child chromosome. The mutation operator is applied with a predefined mutation probability to a new child chromosome.A search in the Scopus database shows that there are 65762 conference papers and 94510 journal papers related to GAs from 1977 to 2016. Figure2 shows the number of papers related to GAs and antennas over the last 15 years. Additionally, the search in the same database using the keywords GAs and Antennas reveals a total number of 2807 papers (both journal and conference). Tutorials and applications of GAs to electromagnetics can be found in [29, 30].Figure 2 Papers using GAs for antenna design from 1993 to May 2016.Table1 lists selected papers that use GAs (mostly binary-coded) for antenna design. Several papers exist in the literature that apply GAs for array synthesis. A binary-coded GA has been applied for linear and planar array synthesis in [31, 32], among others, while a GA that uses decimal operator has been used in [33]. The problem of array thinning has been addressed in the literature using binary-coded GAs in [34], while the array-failure correction problem has been addressed by using the real-coded GAs in [35]. A simple GA has been used for the synthesis of time-modulated arrays in [36]. Different antenna types have been designed using the GAs, for example, wire antennas in [37–40], patch antennas in [41, 42], and RFID tags in [43]. 
Other GA-type algorithms, such as a mixed integer GA, have also been applied in [44] to several different antenna design problems. An improved GA for the design of linear aperiodic arrays and a binary-coded GA which uses two-point crossover (instead of the usual one point) have been used in [45] for the design of patch antennas with low RCS. A GA that combines chaos theory and genetic algorithm is used for designing T-shaped MIMO radar antenna array in [46]. A design procedure based on a GA is given in [47] for optimizing a patch antenna for operation to four frequency bands GSM1800, GSM1900, LTE2300, and Bluetooth.Table 1 Selected journal papers that use GAs for antenna design. Problem Algorithm(s) used Ref. Thinned arrays Binary-coded GA [34] Wire antennas and Yagi-Uda antenna Binary-coded GA [37–40] Array synthesis Decimal operators GA [33] Array-failure correction Real-coded GA [35] Linear and planar array synthesis Binary-coded GA and hybrid GA with simulated annealing (SA) [31, 32] Broadband patch antennas and dual-band patch antennas Binary-coded GA and parallel GA [41, 42] RFID tags Binary-coded GA [43] Time-modulated linear arrays Simple GA [36] Linear array design, thinned subarrays, and circularly polarized patch antenna Mixed integer GA [44] Linear aperiodic arrays design Improved GA [138] Low RCS slot patch antennas Binary-coded GA and two-point crossover [45] T-shaped MIMO radar antenna array chaos genetic algorithm [46] Quad-band patch antenna Binary-coded GA [47] ## 3. Particle Swarm Optimization (PSO) In PSO, the particles move in the search space, where each particle position is updated by two optimum values. The first one is the best solution (fitness) that has been achieved so far. This value is calledpbest. The other one is the global best value obtained so far by any particle in the swarm. This best value is called g b e s t. After finding thepbest and g b e s t, the velocity update rule is an important factor in a PSO algorithm. The most commonly used algorithm defines that the velocity of each particle for every problem dimension is updated with the following equation:(1) u G + 1 , n i = w u G , n i + c 1 rand 1 0,1 ⁡ p b e s t G + 1 , n i - x G , n i + c 2 rand 2 0,1 ⁡ g b e s t G + 1 , n i - x G , n i ,where u G + 1 , n i is the ith particle velocity in the nth dimension, G + 1 denotes the current iteration and G denotes the previous, x G , n i is the particle position in the nth dimension, r a n d 1 0,1 , r a n d 2 ( 0,1 ) are uniformly distributed random numbers in 0,1, w is a parameter known as the inertia weight, and c 1 and c 2 are the learning factors.The parameterw (inertia weight) is a constant between 0 and 1. This parameter represents the particle’s fly without any external influence. The higher the value of w is or the closer it is to unity, the more the particle stays unaffected frompbest and g b e s t. The inertia weight controls the impact of the previous velocity: a large inertia weight favors exploration, while a small inertia weight favors exploitation. The parameter c 1 represents the influence of the particle memory on its best position, while the parameter c 2 represents the influence of the swarm best position. Therefore, in the Inertia Weight PSO (IWPSO) algorithm the parameters to be determined are the swarm size (or population size), usually 100 or less, the cognitive learning factor c 1 and the social learning factor c 2 (usually both are set equal to 2.0), the inertia weight w, and the maximum number of iterations. 
It is common practice to linearly decrease the inertia weight starting from 0.9 or 0.95 to 0.4.Clerc [8] has suggested the use of a different velocity update rule, which has introduced a parameter K called constriction factor. The role of the constriction factor is to ensure convergence when all the particles have stopped their movements. The velocity update rule is then given by(2) u G + 1 , n i = K u G , n i + c 1 rand 1 0,1 ⁡ p b e s t G + 1 , n i - x G , n i + c 2 rand 2 0,1 ⁡ g b e s t G + 1 , n i - x G , n i , K = 2 2 - φ - φ 2 - 4 φ ,where φ = c 1 + c 2 and φ > 4. This PSO algorithm variant is known as Constriction Factor PSO (CFPSO).The Scopus database shows a total number of 39673 papers from 1995 to April 2016 for PSO related papers (including 19570 journal papers and 18355 conference papers). A refined search for antenna papers reveals 519 journal papers and 515 conference papers from 2002 to May 2016 indicating the popularity of PSO algorithm. Figure3 shows the distribution of papers on PSO design of antennas from 2002 to 2016.Figure 3 Papers using PSO for antenna design from 2002 to May 2016.Table2 lists selected PSO antenna papers. Introductory and tutorial papers that introduce the application of the PSO for antenna design are [48, 49]. Additionally, the problem of sidelobe suppression of linear arrays using the PSO has been addressed in [50, 51]. A comparison of the performance of the PSO and GA algorithms, as applied to the problem of phased arrays design, has been given in [52], while a comparative study of PSO, GAs, and DE for circular array design has been reported in [53]. A performance comparison of PSO, GA, and a hybrid GA-PSO has been provided in [54], where they have been applied to the problem of designing profiled corrugated horn antennas. The application of PSO to conformal phased arrays design has been shown in [55]. A coplanar waveguide-fed planar monopole antenna for multiband operation has been designed in [56] using the PSO algorithm in conjunction with the Method of Moments (MoM). The authors in [57] use PSO for reconfigurable phase-differentiated array design. A Parallelized PSO optimizer has been used in conjunction with the Finite Difference Time Domain (FDTD) which has been employed for multiband patch antenna designs in [58], and the problem of minimizing power loss in time-modulated arrays has been addressed in [59]. Boundary conditions play an important role in the application of the PSO, and the performances of different boundary conditions have been tested on a 16-element array antenna in [60], based on mathematical benchmark functions. All of the above papers use the original IWPSO or the CFPSO, although several variants of PSO are available. A new PSO variant called quantum PSO (QPSO) algorithm has been proposed and applied in [61] to find a set of infinitesimal dipoles which produces the same near and far fields as a circular dielectric resonator antenna (DRA). The interesting point about QPSO is the fact that it contains only one control parameter. The comprehensive learning PSO (CLPSO) is applied to Yagi-Uda antenna design in [62] and unequally spaced arrays sidelobe suppression in [63]. A modified PSO algorithm has been applied in [64] for the synthesis of thinned planar circular arrays. A Feedback Particle Swarm Optimization (FPSO) is in [65] proposed for SLL minimization and null control of linear arrays. The FPSO is based on a nonlinear inertia weight algorithm.Table 2 Selected journal papers that use PSO for antenna design. 
Problem Algorithm(s) used Ref. Linear arrays design Original PSO and CLPSO [50, 51, 63] Profiled corrugated horn antenna design Original PSO and GA and hybrid GA-PSO [54] Phased arrays design Original PSO and GA comparison [52] Circular array design Original PSO and DE and GA comparison [53] Coplanar waveguide-fed planar monopole antenna Original PSO with MoM [56] Conformal phased arrays design Original PSO [55] Reconfigurable phase-differentiated array design Original PSO [57] Multiband and wideband patch antenna design Parallel PSO/FDTD [58] Time-modulated arrays design Original PSO [59] Infinitesimal dipoles array synthesis QPSO [61] Yagi-Uda antenna CLPSO [62] Thinned planar circular arrays Modified PSO [64] Adaptive beamforming of linear antenna arrays Adaptive Mutated Boolean PSO [139] Dual-band patch antenna design Boolean PSO [140] Array design Chaotic PSO [141] Square thinned arrays Hybrid PSO and Hadamard difference sets [142] Linear array design Feedback Particle Swarm Optimization (FPSO) with nonlinear inertia weight [65] ## 4. Differential Evolution (DE) In DE, the initial population evolves in each generation with the use of three operators: mutation, crossover, and selection. Several DE variants or strategies exist in the literature [13, 66] that depend on the form of these operators. The choice of the best DE strategy depends on the problem type [67]. Common DE strategies for the generation of trial vectors include DE/rand/1/bin, DE/rand-to-best/2/bin, and DE/rand/2/bin. In these strategies a mutant vector v - G + 1 , i for each target vector x - G , i is computed by(3) D E / r a n d / 1 / b i n v - G + 1 , i = x - G , r 1 + F x - G , r 2 - x - G , r 3 , r 1 ≠ r 2 ≠ r 3 D E / rand-to-best / 2 / b i n v - G + 1 , i = x - G , i + F x - G , b e s t - x - G , i + F x - G , r 1 - x - G , r 2 + F x - G , r 3 - x - G , r 4 , r 1 ≠ r 2 ≠ r 3 ≠ r 4 D E / r a n d / 2 / b i n v - G + 1 , i = x - G , r 1 + F x - G , r 2 - x - G , r 3 + F x - G , r 4 - x - G , r 5 , r 1 ≠ r 2 ≠ r 3 ≠ r 4 ≠ r 5 ,where r 1 , r 2 , r 3 , r 4 , r 5 are randomly chosen indices from the population that are different from the index i and F is a mutation control parameter. After mutation, the crossover operator is applied to generate a trial vector u - G + 1 , i = u G + 1,1 i , u G + 1,2 i , … , u G + 1 , j i , … , u G + 1 , D i whose coordinates are given by(4) u G + 1 , j i = v G + 1 , j i , if rand j 0,1 ⁡ ≤ C R or j = r n i , x G + 1 , j i , if rand j 0,1 ⁡ > C R and j ≠ r n i ,where j = 1,2 , … , D , rand j 0,1 ⁡ is a number from a uniform random distribution from the interval 0,1, r n i is a randomly chosen index from 1,2 , … , D, and C R is the crossover constant from the interval 0,1. D E uses a greedy selection operator, which for minimization problems is defined by(5) x - G + 1 , i = u - G + 1 , i , if f u - G + 1 , i < f x - G , i , x - G , i , otherwise,where f u - G + 1 , i and f x - G , i are the fitness values of the trial and the old vector, respectively. The new trial vector u - G + 1 , i replaces the old vector x - G , i only when it produces a lower objective-function value than the old one. Otherwise, the old vector remains in the next generation. The stopping criterion for the DE is usually the generation number or the number of objective-function evaluations.A search in the Scopus database reveals 38,097 documents related to DE (27,482 journal papers and 7831 conference papers). A refined search for antenna related papers using the DE shows 221 journal papers and 152 conference papers. 
Figure4 shows how the papers related to DE antenna are distributed from 2002 to May 2016.Figure 4 Papers using DE for antenna design from 2002 to May 2016.A general review paper of the use of DE in electromagnetics has been reported in [68], and a book [69] on DE implementation in electromagnetics has been published. Table 3 lists some representative papers for antenna design. The most commonly used DE strategy for antenna design is the DE/rand/1/bin variant. The above-mentioned strategy has been applied, among others, to the problem of linear array design in [70]; synthesis of difference patterns of monopulse antennas in [71]; array pattern nulling in [72]; and conformal array design in [73]. Several other DE strategies have been applied to antenna problems. In [74], the authors have introduced a new DDE/BoR/1/bin strategy for linear array synthesis, while a modified DE strategy (MDES) has been used in [75] for the same problem. The strategy DE/best/1/bin has been applied in [76–78] for time-modulated array design. Self-adaptive DE algorithms have also been applied to antenna problems, including jDE [79] in [80–82]; SaDE [83] in [84, 85]; and CODE-EIG in [86]. Multiobjective DE algorithms are also another large group of DE algorithms applied to antenna problems. These include applications to linear array design in [87], to subarray design in [88], and to Yagi-Uda antennas [89]. DE algorithms hybridized with other methods are also commonly found in the literature; for instance, the DE has been used with the Method of Moments in [90] for the design of low Radar Cross Section (RCS) antennas.Table 3 Selected journal papers that use DE for antenna design. Problem Algorithm(s) used Ref. Linear arrays design DE/rand/1/bin [70] Linear arrays design DDE/BoR/1/bin [74] Linear arrays design and E-shaped patch antenna jDE [79] [80–82] Difference patterns of monopulse antennas DE/rand/1/bin [71] Array pattern nulling DE/rand/1/bin [72] Linear arrays design Modified DE strategy (MDES) [75] Linear arrays design Multiobjective DE [87] Array design Improved DE [143] Shaped beam synthesis CODE-EIG [86] Conformal arrays design DE/rand/1/bin [73] Horn antenna design and sparse linear arrays synthesis SaDE [83] [84, 85] Subarray design Memetic GDE3 [88] Time-modulated arrays design DE/best/1/bin [76–78] Monopulse antenna with a subarray weighting Hybrid DE [144] Yagi-Uda antenna GDE3 [89] Low Radar Cross Section (RCS) slot patch antenna design DE/MoM [90] ## 5. Other Innovative Algorithms Several new EAs have emerged during the last ten years that are based on different evolutionary models of animals, insects, or other biological entities. Artificial Bee Colony (ABC) [7] is a recently proposed SI algorithm, which has been applied to several real-world engineering problems. The ABC algorithm models and simulates the honey bee behavior in food foraging. In the ABC algorithm, a potential solution to the optimization problem is represented by the position of a food source while the food source corresponds to the quality (objective-function fitness) of the associated solution. The ABC algorithm has been successfully applied to several problems in wireless communications [91]. A number of different variants of the ABC that improve the original algorithm have been proposed in [92]. A search in the Scopus database shows that there are more than 3000 papers on ABC, of which 48 use the ABC for antenna design. 
These include array design [93–96]; resonant frequency of patch antennas calculation [97]; and RFID tags design [98–100].Ant Colony Optimization (ACO) [6, 10, 11] is a population-based metaheuristic which was introduced by Dorigo et al. [11] inspired by the behavior of real ants. The algorithm is based on the fact that ant colonies can find the shortest path between their nest and a food source just by depositing and reacting to pheromones while they are exploring their environment. ACO is suitable for solving combinatorial optimization problems that are common in antennas. The search in Scopus shows more than 10,000 papers on ACO, with 169 papers dealing with the topic of antenna design. The topic of linear array synthesis has been presented in [101]; patch antenna design in [102]; sum-difference pattern synthesis in [103]; and thinned array design in [102]. A modified touring ant colony optimizer has been used for shaped beam synthesis [104] and for pattern nulling in [105–107]. The authors in [108] present a comparative study of simulated annealing (SA), GA, and ACO on self-structured antenna design.Biogeography-based optimization (BBO) [109] is another later addition to EAs. BBO is based on mathematical models that describe how species migrate from one island to another, how new species arise, and how species become extinct. The way the problem solution is found is analogous to nature’s way of distributing species. The search in the Scopus database yielded 654 papers that refer to BBO from 2007 to May 2016 with 40 papers that use BBO for antenna design from 2009 till today. BBO has been applied to Yagi-Uda design [110] and array synthesis [111–117]. Additionally a hybrid DE/BBO algorithm has been used for the design of a reconfigurable antenna array with discrete phase shifters in [118].Evolutionary Programming (EP) was originally proposed by Fogel in [3]. EP is based on the idea of representing individuals phenotypically as finite state machines capable of responding to environmental stimuli. The representations used in EP are problem-dependent. The most common representation used is a fixed-length real-valued vector. In EP, the vectors evolve but do not exchange information between them. There are no crossover operators but only mutation operators. EP has been applied to several problems in electromagnetics [21, 119, 120]. Among others EP has been applied to patch antenna design [121] and to wideband parasitic dipole arrays [122].Hansen and Ostermeier [123] introduced the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). CMA-ES is a second-order approach estimating a positive covariance matrix within an iterative procedure. More details about the CMA-ES performance algorithm can be found in [124, 125]. In [23, 24] the authors present an approach for mixed-parameter optimization based on CMA-ES, which is successfully applied in several design problems in electromagnetics. This approach is based on the concepts presented in [22] for EP. The CMA-ES algorithm has been recently applied to design problems in antennas and electromagnetics in general [126–129]. ### 5.1. Other Artificial Intelligence Methods Other artificial intelligence methods and techniques include Artificial Neural Network (ANN) architectures [130], which are a family of models inspired by biological neural networks. ANNs are used to estimate or approximate functions that can depend on a large number of inputs and are generally unknown. 
The applications of ANNs to electromagnetics are a popular topic in the literature [131].Deep learning is a type of machine learning based on a set of algorithms that attempt to model high-level abstractions in data by using a deep graph with multiple processing layers, composed of multiple linear and nonlinear transformations [132–135]. These methods have dramatically improved the state of the art in speech recognition, visual object recognition, object detection, and many other domains such as drug discovery and genomics [132]. ## 5.1. Other Artificial Intelligence Methods Other artificial intelligence methods and techniques include Artificial Neural Network (ANN) architectures [130], which are a family of models inspired by biological neural networks. ANNs are used to estimate or approximate functions that can depend on a large number of inputs and are generally unknown. The applications of ANNs to electromagnetics are a popular topic in the literature [131].Deep learning is a type of machine learning based on a set of algorithms that attempt to model high-level abstractions in data by using a deep graph with multiple processing layers, composed of multiple linear and nonlinear transformations [132–135]. These methods have dramatically improved the state of the art in speech recognition, visual object recognition, object detection, and many other domains such as drug discovery and genomics [132]. ## 6. Discussion: Open Issues The choice of the best algorithm for every problem requires the consideration of its specific characteristics. Once the algorithm is chosen, the key issue becomes the selection of the algorithm control parameters, which in most cases can be also problem-dependent. A practical initial approach is to use the control parameters for these algorithms that commonly perform well regardless of the characteristics of the problem to be solved.For real-coded GAs typical values are 0.9 for the crossover probability and1 / N for the mutation probability, where N is the problem dimension. For the binary-coded GA, typical values for crossover and mutation probabilities are 1.0,0.8 and 0.01,0.1, respectively.In the PSO algorithmsc 1 and c 2 are set equal to 2.05. For CFPSO, these values result in K = 0.7298. For IWPSO, it is common practice to linearly decrease the inertia weight starting from 0.95 to 0.4. The velocity is updated asynchronously, which means that the global best position is updated at the moment it is found.For the DE, Storn and Price [13] have suggested choosing the differential evolution control parameters F and CR from the intervals 0.5,1 and 0.8,1, respectively, and setting N P = 10 D. The correct selection of these control parameter values is, frequently, a problem-dependent task. Multiple algorithm runs are often required for fine-tuning the control parameters. There are several DE algorithms in the literature that self-adapt these control parameters. Another question is the selection of the appropriate strategy for the generation of trial vectors, which requires additional computational time using a trial-and-error search procedure. Therefore, it is not always an easy task to fine-tune the control parameters and strategy. Since finding the suitable control parameter values and strategy in such a way is often very time-consuming, there has been an increasing trend among researchers in designing new adaptive and self-adaptive DE variants. A DE strategy (jDE) that self-adapts the control parameters has been introduced in [79]. 
The jDE algorithm has been applied successfully to a microwave absorber design problem [136] and to linear array synthesis [82]. SaDE, a DE algorithm that self-adapts both the control parameters and the strategy based on learning experiences from previous generations, is presented in [83].

The research domain of evolutionary algorithms is growing rapidly. A current and growing research trend in evolutionary algorithms is their hybridization with local optimizers. These algorithms are called Memetic Algorithms (MAs) [90] and are inspired by Dawkins' notion of the meme. The advantage of such an approach is that the use of a local search optimizer ensures that specific regions of the search space can be explored using fewer evaluations and that good-quality solutions can be generated early during the search. Furthermore, the global search algorithm generates good initial solutions for the local search. MAs can be highly efficient due to this combination of global exploration and local exploitation.

An interesting idea has been presented in [137], where the authors conceptually present the equivalences of various popular EAs such as GAs, PSO, DE, and BBO. Their basic conclusion is that, although these algorithms can be viewed as conceptually equivalent, modifications in the algorithms result in very different performance levels.

Finally, another concern that is pertinent to all of the above algorithms is the definition of the stopping criterion. Usually, this is the iteration number or the number of objective-function evaluations. Additionally, and in order to avoid stagnation, another criterion could be set for the algorithm to stop after a number of generations in which the objective function does not further improve.

## 7. Conclusion

A brief survey of different evolutionary algorithms and their application to different problems in antennas and propagation has been presented in this review paper. Among the evolutionary algorithms used in the literature, the GA and SI algorithms are the most commonly utilized. The bibliography statistics show that GAs, PSO, and DE are among the most popular algorithms for antenna design. It must be pointed out that several variants of these algorithms have also been employed, along with other nature-inspired algorithms that have emerged. Most notably, ABC, ACO, BBO, EP, and CMA-ES have been applied to several antenna design problems. The body of literature on the application of EAs to antenna design is by now quite extensive, and it continues to grow fast with more innovative algorithms. The above presented algorithms have been proven effective in solving specific antenna design problems, although the NFL theorem assures that a best global optimizer does not exist. The search for new algorithms and their application to antenna design problems is an ongoing research process, which is likely to continue unabated for some time to come.

---
*Source: 1010459-2016-08-25.xml*
# Discoidin Domain-Containing Receptor 2 Is Present in Human Atherosclerotic Plaques and Involved in the Expression and Activity of MMP-2

**Authors:** Qi Yu; Ruihan Liu; Ying Chen; Ahmed Bilal Waqar; Fuqiang Liu; Juan Yang; Ting Lian; Guangwei Zhang; Hua Guan; Yuanyuan Cui; Cangbao Xu
**Journal:** Oxidative Medicine and Cellular Longevity (2021)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2021/1010496

---

## Abstract

Discoidin domain-containing receptor 2 (DDR2) has been suggested to be involved in atherosclerotic progression, but its pathological role remains unknown. Using immunochemical staining, we located and compared the expression of DDR2 in the atherosclerotic plaques of humans and various animal models. Then, siRNA was applied to knock down the expression of DDR2 in vascular smooth muscle cells (VSMCs), and the migration, proliferation, and collagen I-induced expression of matrix metalloproteinases (MMPs) were evaluated. We found that an abundance of DDR2 was present in the atherosclerotic plaques of humans and various animal models and was distributed around fatty and necrotic cores. After incubation with oxidized low-density lipoprotein (ox-LDL), DDR2 was upregulated in VSMCs in response to such a proatherosclerotic condition. Next, we found that decreased DDR2 expression in VSMCs inhibited the migration, proliferation, and collagen I-induced expression of matrix metalloproteinases (MMPs). Moreover, we found that DDR2 is strongly associated with the protein expression and activity of MMP-2, suggesting that DDR2 might play a role in the etiology of unstable plaques. Considering that DDR2 is present in the atherosclerotic plaques of humans and is associated with collagen I-induced secretion of MMP-2, the clinical role of DDR2 in cardiovascular disease should be elucidated in further experiments.

---

## Body

## 1. Introduction

Cardiovascular disease (CVD) is a major cause of death in the world. Atherosclerosis has been known as the common pathological basis for CVD [1]. Atherosclerotic plaque rupture primarily causes clinical events such as myocardial infarction, stroke, and thrombogenesis. Previous studies suggest that the balance of macrophages and vascular smooth muscle cells (VSMCs) in the atherosclerotic plaque is a crucial factor for plaque stability. Briefly, as a result of macrophage proliferation in plaques, these inflammatory cells release many matrix metalloproteinases (MMPs) to effectively degrade extracellular matrix (ECM), such as collagens and elastin, resulting in plaque destabilization or rupture; conversely, VSMCs synthesize ECM to keep the plaque stable [2]. Despite the fact that VSMCs contribute to plaque stabilization, these cells may also promote plaque rupture by secreting MMPs during the phenotypic transition of VSMCs from the contractile to the synthetic state [3]. However, it is unknown what causes VSMCs to induce production of MMPs instead of synthesizing ECM. Notably, the major component of ECM is collagens, which account for approximately 60% of the total protein in atherosclerotic plaques [4]. These collagens not only constitute atherosclerotic plaques but also affect cell proliferation, migration, and adhesion via collagen receptors [4].
Among these collagen receptors, Discoidin domain-containing receptor 2 (DDR2) belongs to a subfamily of tyrosine-kinase receptors, is activated by natural ligands including collagens of types I, II, III, and X, and is responsible for the communication and link between collagens and cells [5]. DDR2 is present in various tissues, including vascular tissue, and is implicated in the regulation of cell metabolism, differentiation, and growth. Interestingly, activation of DDR2 can induce production of MMPs, and therefore, DDR2 is recognized as playing a very important role in fibrosis and cancer [6]. Considering that DDR2 is found in VSMCs, an interesting question arises as to whether DDR2 also affects plaque stability by mediating MMP expression in VSMCs [7, 8]. Consequently, we designed this study to elucidate the pathological role of DDR2 in atherosclerosis.

## 2. Materials and Methods

### 2.1. Atherosclerotic Specimens and Histological Staining

Japanese white rabbits were fed a cholesterol-rich diet containing 0.3% cholesterol and 3% corn oil for 6 or 16 weeks to induce atherosclerosis (n=3 per time point), and then rabbits were euthanized by using pentobarbital sodium at each time point. The aortic arch of each rabbit was cut into 10 cross sections (4 μm) [9]. Male ApoE–/– mice were euthanized by cervical dislocation. Segments of heart tissue crossing the ascending aorta and aortic sinus from male ApoE–/– mice (n=4) were embedded within OCT, and serial sections (8 μm thick) were made as previously described [10]. Hematoxylin and eosin (H&E), oil red O, and Masson's trichrome stains were performed according to the protocols as previously described [11, 12]. Moreover, sections were subjected to immunohistochemical staining against DDR2 in mouse (1 : 200; Abcam, Cambridge, UK; CST, Beverly, MA, USA) and in human and rabbit (1 : 200; Santa Cruz Biotechnology, Inc, Dallas, TX, USA), against RAM11 of macrophages (1 : 200; Dako, CA, USA), and against α-actin of SMC (1 : 200; Thermo Fisher Scientific, CA, USA) as previously described [11].

Human carotid plaques were collected from patients who received endarterectomy at Zhengzhou Central Hospital. Informed consents were obtained from all patients enrolled in the study, and all experiments were implemented in accordance with the guidelines and regulations set by the Ethics Committee of Xi'an Medical University (Permit No. XYJZS-201609027-1). Japanese white rabbits, SD rats, and ApoE–/– mice were purchased from the laboratory animal center at Xi'an Jiaotong University (Xi'an, China). All animal experiments were performed in the animal facility of the Institute of Basic and Translational Medicine at Xi'an Medical University. The animal experiments strictly followed the guidelines for animal experiments of Xi'an Medical University, which were adapted from the Guide for the Care and Use of Laboratory Animals (NIH; Bethesda, MD, USA; NIH Publication No. 85-23, revised 2011). The Laboratory Animal Administration Committee of Xi'an Medical University approved all animal experiments (Institutional Animal Care and Use Committee; Permit No. XYJZS-201608012-2).

### 2.2. VSMC Culture

VSMCs were obtained from the aortas of male SD rats (200-300 g) as previously described [13]. Cells were used in the experiments from passages 3 to 6. Before the initiation of each experiment, an additional incubation in serum-free DMEM for 24 h rendered the cells quiescent.
Then, cells were exposed to oxidized low-density lipoprotein (ox-LDL) (0, 25, 50, and 100 mg/L; Yiyuan Biotechnologies, Guangzhou, China) for 24 h to simulate the proatherosclerotic condition. To activate DDR2, cells were incubated with collagen I (Sigma-Aldrich) for 48 h. To study the inhibition of signaling pathways, cells were treated with inhibitors for 30 min: SP600125 (20 μmol/L; Calbiochem), an inhibitor of JNK (c-Jun N-terminal kinase); SB203580 (10 μmol/L; Calbiochem), an inhibitor of p38 MAPK (mitogen-activated protein kinase); and PD98059 (20 μmol/L; Calbiochem), an inhibitor of MEK (MAPK/ERK (extracellular-signal-regulated kinase) kinase). The doses of the inhibitors were based on previous studies [14, 15]. Then, cells were exposed to ox-LDL (100 mg/L).

### 2.3. siRNA Interference

Small interfering RNA (siRNA) was used to knock down DDR2 expression in VSMCs as previously described [16]. Referencing a previous study, siRNA sequences were synthesized by a commercial company (sense strand, 5′-GAUGAUAGCAACACUCGGAUU-3′; antisense strand, 5′-UCCGAGUGUUGCUAUCAUCUU-3′; RiboBio, Guangzhou, China) [17]. siN05815122147 (RiboBio, Guangzhou, China) was used as a universal negative control. Transfection was performed with X-tremeGENE siRNA Transfection Reagent (Roche) according to the manufacturer's instructions. Briefly, the cells were rinsed twice with phosphate-buffered saline (PBS) to reduce background interference. DDR2 siRNA at various doses (400 and 800 ng; approximately 40 and 80 pmol) was transfected into VSMCs for 6 h, and the cells were then treated with ox-LDL (100 mg/L). The cells were collected to examine DDR2 expression or were subjected to the migration and proliferation assays.

### 2.4. Migration and Proliferation Assay

The migration of VSMCs was assessed by using the transwell permeable support insert (Corning, Lowell, MA, USA) as previously described [18]. After incubation with siRNA for 24 h without FBS, VSMCs were seeded on Matrigel (5 mg/mL; BD Biosciences, San Diego, CA, USA) in the upper compartment, and DMEM supplemented with 10% FBS was added into the lower compartment. Cells were cultured for another 24 h and then detected by crystal violet staining. Five different high-power fields per well were photographed. The positively stained VSMCs were counted by an observer blinded to the treatment protocol.

The proliferation of VSMCs was assessed by the wound-healing assay as previously described [19]. Briefly, after 24 h of siRNA treatment, a 10-μL pipette tip was used to scrape an artificial wound in the monolayer across the bottom of the dish. After extensive washing, medium containing 10% FBS was removed, and cells were allowed to migrate for the appropriate time in a 37°C incubation chamber with 5% CO2. At various time points, images were obtained with a Nikon TE2000 Inverted Microscope. Meanwhile, some representative dishes were subjected to immunofluorescence staining against α-actin of SMC (1 : 200; Thermo Fisher Scientific, CA, USA) with Alexa Fluor 488 (1 : 200; Thermo Fisher Scientific, CA, USA). The remaining open area of the wound was quantified by using ImageJ as previously described, with some modifications [20].

### 2.5. RNA Extraction and Real-Time PCR

Total RNA was extracted from the aorta and VSMCs. Real-time PCR was performed as previously described [21, 22]. The sequences of the primers are listed in Table 1.
Table 1: Primers used for real-time PCR.

| Gene | Primer sequences, forward followed by reverse (5′–3′) |
|------|--------------------------------------------------------|
| DDR2 | GATCATGTTTGAATTTGACCGAGCACTGGGGTTCACATC |
| MMP-2 | TTGACCAGAACACCATCGGGTCCAGGTCAGGTGTGT |
| MMP-3 | GCTGTGTGCTCATCCTACCTGACAACAGGGCTACTGTC |
| MMP-8 | AGGAATGCCACTATGATTGCAAGAAATCACCAGAGTCG |
| MMP-9 | ACAGCGAGACACTAAAGGCGGCAAGTCTTCGGTGTAGC |
| MMP-12 | GCTGGTTCGGTTGTTAGGGTAGTTACACCCTGAGCATAC |
| MMP-13 | ACTCAAATGGTCCCAAACTATCAGCAGTGCCATCAT |
| MMP-14 | GTACCCACACACAACGCTTTATCTGGAACACCACAGC |
| GAPDH | TACCCACGGCAAGTTCAACGCACCAGCATCACCCCATTTG |

### 2.6. Protein Extraction and Western Blotting Analysis

Total protein was extracted from the aorta of rabbits and from VSMCs as previously described [22]. The primary antibodies were against rabbit DDR2 (1 : 500; Santa Cruz Biotechnology, Santa Cruz, CA), rat DDR2 (1 : 500; Santa Cruz Biotechnology, Santa Cruz, CA; CST, Beverly, MA, USA), MMP-2 (1 : 500; Abcam, Cambridge, MA), TIMP-1 (1 : 500; Abcam, Cambridge, MA), TIMP-2 (1 : 500; Abcam, Cambridge, MA), p-ERK1/2 (1 : 1000; CST, Beverly, MA, USA), and GAPDH (1 : 1000; Santa Cruz Biotechnology, Santa Cruz, CA). Western blotting analysis was performed as previously described, and relative protein expression was measured by ImageJ with gel analysis [22].

### 2.7. Zymography

The supernatants along with the total protein were extracted from cultured VSMCs. Under nonreducing conditions, equal amounts of sample protein were analyzed by SDS-PAGE in gelatin-containing acrylamide gels (2 mg/mL gelatin and 7.5% polyacrylamide) as previously described [23].

### 2.8. Statistical Analysis

All data are expressed as the mean ± SE. Comparisons between two groups were performed by Student's t-test. Comparisons among multiple groups were performed by using one-way ANOVA with the Bonferroni test. P<0.05 was considered statistically significant. The statistical calculations were performed by using SPSS 19.0 software (IBM Corp., Armonk, NY, USA).
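The statistical procedure described in Section 2.8 can be illustrated with a short, hedged sketch: the group labels and sample values below are invented purely for demonstration, and SciPy is used here as a stand-in for the SPSS analysis reported by the authors.

```python
from itertools import combinations
from scipy import stats

# Hypothetical relative expression values (arbitrary numbers, for illustration only).
groups = {
    "scramble siRNA": [1.00, 0.95, 1.08, 1.02],
    "DDR2 siRNA 400 ng": [0.71, 0.66, 0.75, 0.69],
    "DDR2 siRNA 800 ng": [0.42, 0.47, 0.39, 0.45],
}

# Two-group comparison: Student's t-test.
t, p = stats.ttest_ind(groups["scramble siRNA"], groups["DDR2 siRNA 800 ng"])
print(f"t-test: t = {t:.2f}, p = {p:.4f}")

# Multigroup comparison: one-way ANOVA ...
f, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f:.2f}, p = {p_anova:.4f}")

# ... followed by Bonferroni-corrected pairwise t-tests (each p value multiplied by
# the number of comparisons and capped at 1), mirroring the post hoc test in the text.
pairs = list(combinations(groups, 2))
for a, b in pairs:
    _, p_pair = stats.ttest_ind(groups[a], groups[b])
    p_adj = min(p_pair * len(pairs), 1.0)
    print(f"{a} vs {b}: adjusted p = {p_adj:.4f}")
```

The manual multiplication by the number of comparisons is the standard, conservative form of the Bonferroni correction; dedicated post hoc routines in statistics packages give equivalent results.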
## 3. Results

### 3.1. DDRs Are Present in Atherosclerotic Lesions of Humans

To examine whether DDR2 was present in atherosclerotic lesions, we used immunohistochemical staining to identify DDR2 in tissues from humans and various animal species. Interestingly, we found expression of DDR2 in human carotid atherosclerotic plaques (Figure 1). Combined with the MTS-stained section, we found that DDR2 distributes densely around the fatty cores of human carotid atherosclerotic plaques, and these positive stains were also adjacent to the fibrous cap and the middle membrane (Figure 1).

Figure 1: DDR2 in human carotid atherosclerotic plaque. Hematoxylin and eosin (H&E; 10x); Masson's trichrome stain (MTS; 10x); immunohistochemical staining against DDR2 (10x); rabbit IgG isotype control (10x). The area in the box is displayed as a high-power field in (b) (40x).
NC: necrotic core; FC: fatty core; F: fibrous cap; M: middle membrane.

### 3.2. DDRs Are Present in Atherosclerotic Lesions of Animal Models

By using a rabbit model, we attempted to investigate the relationship between DDR2 expression and atherosclerotic progression. We found that DDR2 was present in the early and middle atherosclerotic lesions of rabbits, and this receptor was located in a similar position (Figure 2(a)). In the early-stage lesion, the majority of DDR2 was deposited along the lower edge of the lesion (Figure 2(a)). In the middle-stage lesion, DDR2 was diffusely distributed in the lesion and was deposited on the surface of the lesion (Figure 2(a)). DDR2 neither apparently overlapped with the cytoplasm of macrophages nor fully overlapped with VSMCs (Figure 2(a)). Comparison of MTS with DDR2 staining indicated that DDR2 expression and distribution might tend to localize around collagen fibers (Figure 2(a)). To further confirm the above results, we also used immunohistochemical staining to identify DDR2 in atherosclerotic lesions of ApoE–/– mice (Figure 2(b)). As Figure 2(b) shows, DDR2 was also found in atherosclerotic lesions of mice.

Figure 2: DDR2 in atherosclerotic plaques of animal models. Comparison of DDR2 expression and collagen distribution in atherosclerotic plaques of the HCD-induced rabbit model (a). Serial paraffin sections of aortic lesions were stained with Masson's trichrome stain (MTS) and immunohistochemical staining against α-smooth muscle actin (SMC), macrophages (Mϕ), and DDR2 (a) (bar = 200 μm). Atherosclerotic plaques of the apoE knockout mouse model and immunohistochemical staining against DDR2 (b) (bar = 100 μm). The area in the box is displayed as a high-power field on the right side (bar = 50 μm).

### 3.3. ox-LDL Upregulates DDR2 in VSMCs via the MAPK Pathway

Considering that DDR2 was abundant in atherosclerotic lesions, we questioned whether DDR2 expression was associated with atherosclerotic development. Comparing the protein expression of DDR2, we found no significant difference between the early and middle lesions (Figure 3(a)). Next, we also questioned what caused the upregulation of DDR2 in atherosclerotic lesions. Considering ox-LDL as a key atherogenic factor, we incubated VSMCs with various concentrations of ox-LDL for 48 h. We found that 100 mg/L of ox-LDL significantly increased DDR2 expression in VSMCs (Figure 3(b)).

Figure 3: Protein expression of DDR2 in the aorta and VSMCs. Immunoblot analysis and quantification of DDR2 in aortas of HCD-induced atherosclerotic rabbits ((a); n=4) and VSMCs that were incubated with various concentrations of 0, 25, 50, and 100 mg/L ox-LDL (b). Data are expressed as the mean ± standard error. ∗P<0.05 and ∗∗P<0.01 vs. the 0 mg/L group; ##P<0.01, 100 mg/L vs. the 25 mg/L group. Effect of MAPK inhibitors on ox-LDL-induced DDR2 expression (c). VSMCs were preincubated with inhibitors for 30 min and then treated with 100 mg/L ox-LDL for 24 h in the presence of inhibitors (SP600125, SB203580, and PD98059 against JNK, p38 MAPK, and MEK, respectively). Data are expressed as the mean ± standard error. ∗P<0.05 and ∗∗P<0.01 vs. the ox-LDL group.

Next, we also investigated which pathway may be involved in ox-LDL-induced DDR2 expression. By using inhibitors against the MAPK pathway, we found that blocking JNK, p38, and ERK1/2 in VSMCs could neutralize ox-LDL-induced DDR2 expression, suggesting that the MAPK pathway might be responsible for the ox-LDL-induced upregulation of DDR2 (Figure 3(c)).

### 3.4. DDR2 Affects the Migration of VSMCs
As reported in a previous study, a specific siRNA against DDR2 was applied to suppress DDR2 protein expression. First, we used various doses of siRNA to knock down DDR2 expression in VSMCs. We found that 800 ng of siRNA could efficiently inhibit ox-LDL-induced DDR2 expression, which was quantified and confirmed with real-time PCR (Figure 4(a)). Next, to test whether inhibition of DDR2 in SMCs affected their migration, the transwell assay was performed after incubation of siRNA with VSMCs for 48 h (0, 200, 400, and 800 ng of siRNA, respectively). We found that 400 and 800 ng of siRNA reduced cell migration, indicating that the migration activity of VSMCs was inhibited by DDR2 reduction (Figure 4(b)). To confirm this result, we also used a wound healing assay to check the migration of VSMCs. As Figure 4(c) shows, we found that a decrease in DDR2 expression indeed inhibited the migration of VSMCs.

Figure 4: Effect of knockdown of DDR2 on the migration and proliferation of VSMCs. After 400 and 800 ng of siRNA treatments, DDR2 expression in VSMCs was analyzed and quantified by western blotting and real-time PCR (a). Migration and proliferation were assessed by transwell analysis (b) and wound healing assay (c). Data are expressed as the mean ± standard error. ∗P<0.05 and ∗∗P<0.01 vs. the scramble RNA-treated group.

### 3.5. DDR2 Affects the mRNA Expression of MMPs in VSMCs

The invasive capacity of VSMCs is determined by their collagenase secretion; thus, we examined whether MMP expression was reduced by DDR2 deficiency. First, we incubated collagen I with VSMCs for 48 h, and MMP (MMP-2, MMP-3, MMP-8, MMP-9, MMP-12, MMP-13, and MMP-14) expression was quantified by real-time PCR. Interestingly, we found that collagen I could upregulate MMP-2, MMP-3, MMP-9, and MMP-14 (Figure 5(a)). Next, we used 800 ng of siRNA to knock down DDR2 expression before collagen I incubation. As a result of mRNA quantification by real-time PCR, inhibition of DDR2 reversed the effect of collagen I on expression of MMPs and suppressed the mRNA expression of MMP-2, MMP-3, MMP-9, and MMP-13 (Figure 5(b)). In addition, we did not detect mRNA expression of MMP-12 in VSMCs.

Figure 5: Effect of knockdown of DDR2 on mRNA expression of MMPs in VSMCs. Collagen I-induced MMP expression was assessed by real-time PCR (a). After siRNA treatment, collagen I-induced MMP expression was assessed by real-time PCR (b). Data are expressed as the mean ± standard error. ∗P<0.05, ∗∗P<0.01, and ∗∗∗P<0.001 vs. the collagen I-treated group or the collagen I combined with siRNA-treated group.

### 3.6. DDR2 Affects the Activity and Expression of MMP-2 in VSMCs

Considering that collagen I-induced mRNA expression of MMP-2 is dramatically suppressed after DDR2 deficiency, we next examined whether the inhibition of DDR2 affected the activity of MMP-2. As a result of zymography, MMP-2 activity was increased by collagen I incubation but was decreased by 800 ng of siRNA against DDR2 expression, and almost no MMP-2 activity was found without collagen I incubation before siRNA interference (Figure 6(a)). However, we did not find any bands of MMP-9 in the same zymography gel. To study whether the decreased activity of MMP-2 was attributable to reduced MMP-2 protein, we next examined MMP-2 protein expression via western blotting. Indeed, we found that MMP-2 production was decreased in the presence of siRNA against DDR2 expression (Figure 6(b)).
Next, TIMP-1 and TIMP-2 were studied and quantified by western blotting, and the result showed that there was no significant difference between the two groups (Figure 6(c)). To further study the underlying mechanism, we also examined ERK signaling according to a previous report [6]. After inhibition of DDR2 by siRNA, phosphorylation of ERK1/2 was reduced (Figure 6(d)).

Figure 6: Effect of knockdown of DDR2 on the activity and expression of MMP-2 in VSMCs. The activity of MMP-2 was assessed by zymography (a). DDR2 protein expression was analyzed and quantified by western blotting (b). TIMP-1 and TIMP-2 were detected and quantified by western blotting (c). p-ERK1/2 protein expression was analyzed and quantified by western blotting (d). Data are expressed as the mean ± standard error. ∗P<0.05 vs. the scramble RNA-treated group.
## 4. Discussion

Atherosclerotic plaque rupture is a serious problem for patients with cardiovascular disease, often causing unstable angina, myocardial infarction, and sudden coronary death [24]. However, how the atherosclerotic plaque becomes vulnerable remains unknown. The question remains as to why ECM tends to degrade in these unstable plaques. Notably, ECM not only constitutes atherosclerotic plaques but also activates collagen receptors to regulate the surrounding cells [25]. As a collagen receptor, DDR2 is involved in various diseases such as fibrotic diseases, arthritis, cancer, and atherosclerosis [5].

In the current study, we found that DDR2 was present in atherosclerotic plaques of various animal models, but DDR2 immunoreactivity did not totally overlap with macrophages and VSMCs. As previously described, DDR2 is highly expressed in VSMCs and is also found in activated endothelial cells [26].
Based on our observations, DDR2 expression tends to localize around collagens and fibrous caps. More importantly, for the first time, we have reported that DDR2 is highly expressed in human carotid atherosclerotic plaques, where it is distributed around the fatty core and overlaps with collagen fibers. Taken together, these findings suggest that DDR2 expression in specific regions of the atherosclerotic plaque may be upregulated by collagen or other proatherosclerotic factors, and the underlying mechanism needs to be addressed by further studies. Given that ox-LDL is a major proatherosclerotic factor and is abundant in the fatty core, a question arises as to whether ox-LDL can induce the expression of DDR2. Interestingly, the current research shows that ox-LDL can induce upregulation of DDR2 in a dose-dependent manner, whereas such upregulation is neutralized by the inhibition of JNK, p38 MAPK, and ERK, showing that the MAPK pathway may be involved in ox-LDL-induced DDR2 expression. In the vasculature, SMCs are not only responsible for the secretion of collagen fibers but also express collagen receptors to interact with ECM, regulating their own proliferation, differentiation, and migration [27]. Accordingly, we speculate that DDR2 may affect the physiological functions of VSMCs, especially when these cells are stimulated by proatherosclerotic factors. As shown in our results, VSMCs with downregulation of DDR2 display reduced proliferation and migration. Considering that VSMC infiltration into the intima is a vital process of atherogenesis, DDR2 could be a mediator that regulates atherosclerotic progression [3]. Notably, given that such infiltration depends on the secretion of MMPs, another question has been raised as to whether DDR2 also regulates MMPs [28]. Indeed, MMPs can almost determine the fate of atherosclerotic plaques because MMP-mediated breakdown of ECM is a typical feature of an unstable plaque [29]. In pathological conditions, excessive ECM also stimulates VSMCs to degrade ECM as a negative feedback regulation. As a natural ligand of DDRs, collagen I can activate DDR2 in VSMCs to produce MMPs for the degradation of ECM [30]. We observed that collagen I indeed promotes VSMCs to produce various types of MMPs, especially MMP-2 and MMP-3. However, collagen I-induced expression of MMPs can be completely neutralized by knockdown of DDR2, suggesting that DDR2 plays a pivotal role in the degradation of ECM. Of note, since MMP-2 expression is dramatically affected by the knockdown of DDR2, a causal relationship between DDR2 and MMP-2 is shown in the current study. Increased MMPs do not necessarily have enzymatic activity, because MMPs are not only initially synthesized as inactive zymogens but are also inhibited by specific endogenous inhibitors such as tissue inhibitors of metalloproteinases (TIMPs) [31]. Thus, only if DDR2 mediates the activity of MMPs can this collagen receptor be confirmed to be involved in plaque vulnerability. Interestingly, decreased DDR2 indeed also inhibits the protein expression and activity of MMP-2, indicating that the effect on MMP-2 in VSMCs may proceed via collagen I-activated DDR2. With regard to previous studies, a mechanism is suggested in which collagen I induces the expression and phosphorylation of DDR2 and ultimately upregulates MMP-2 expression via ERK1/2 signaling [6, 32]. Accordingly, we have confirmed that phosphorylation of ERK1/2 is suppressed by inhibition of DDR2 in VSMCs, indicating that ERK1/2 may be involved in the expression and activity of MMP-2.
Of note, this result needs to be confirmed by further experiments. Moreover, we did not find pro-MMP-2, pro-MMP-9, or MMP-9 in the zymography, which does not exclude possible regulation of other MMPs by activated DDR2.

In conclusion, we found high expression of DDR2 in atherosclerotic plaques of humans and various animal models, and DDR2 in VSMCs was involved in collagen I-induced secretion of MMP-2, suggesting that the clinical role of DDR2 in cardiovascular disease should be elucidated in further experiments. Owing to the proatherosclerotic condition, an abundance of DDR2 is present in the atherosclerotic plaques of humans and various animal models; furthermore, excessive ECM such as collagen I may also activate DDR2 to produce MMP-2 for the degradation of ECM, which is ultimately involved in the etiology of unstable atherosclerotic plaques. Once this mechanism is clarified, DDR2 may become a novel target for developing a new therapeutic strategy to control unstable atherosclerotic plaques.

---
*Source: 1010496-2021-12-16.xml*
--- ## Abstract Discoidin domain-containing receptor 2 (DDR2) has been suggested to be involved in atherosclerotic progression, but its pathological role remains unknown. Using immunochemical staining, we located and compared the expression of DDR2 in the atherosclerotic plaques of humans and various animal models. Then, siRNA was applied to knock down the expression of DDR2 in vascular smooth muscle cells (VSMCs), and the migration, proliferation, and collagenΙ-induced expression of matrix metalloproteinases (MMPs) were evaluated. We found that an abundance of DDR2 was present in the atherosclerotic plaques of humans and various animal models and was distributed around fatty and necrotic cores. After incubation of oxidized low-density lipoprotein (ox-LDL), DDR2 was upregulated in VSMCs in response to such a proatherosclerotic condition. Next, we found that decreased DDR2 expression in VSMCs inhibited the migration, proliferation, and collagen Ι-induced expression of matrix metalloproteinases (MMPs). Moreover, we found that DDR2 is strongly associated with the protein expression and activity of MMP-2, suggesting that DDR2 might play a role in the etiology of unstable plaques. Considering that DDR2 is present in the atherosclerotic plaques of humans and is associated with collagen Ι-induced secretion of MMP-2, the clinical role of DDR2 in cardiovascular disease should be elucidated in further experiments. --- ## Body ## 1. Introduction Cardiovascular disease (CVD) is a major cause of death in the world. Atherosclerosis has been known as the common pathological basis for CVD [1]. Atherosclerotic plaque rupture primarily causes clinical events such as myocardial infarction, stroke, and thrombogenesis. Regarding previous studies, researchers suggest that the balance of macrophages and vascular smooth muscle cells (VSMCs) in the atherosclerotic plaque is a crucial factor for plaque stability. Briefly, as a result of macrophage proliferation in plaques, these inflammatory cells release many matrix metalloproteinases (MMPs) to effectively degrade extracellular matrix (ECM), such as collagens and elastin, resulting in plaque destabilization or rupture; conversely, VSMCs synthesize ECM to keep the plaque stable [2]. Despite the fact that VSMCs contribute to plaque stabilization, these cells may also promote plaque rupture by secreting MMPs during the phenotypic transition of VSMCs from the contractile to the synthetic state [3]. However, it is unknown what causes VSMCs to induce production of MMPs instead of synthesizing ECM. Notably, the major component of ECM is collagens, which account for approximately 60% of the total protein in atherosclerotic plaques [4]. These collagens not only constitute atherosclerotic plaques but also affect cell proliferation, migration, and adhesion via collagen receptors [4]. Among these collagen receptors, Discoidin domain-containing receptor 2 (DDR2) is regarded as a subgroup of tyrosine-kinase receptors, which is activated by natural ligands including collagen of types I, II, III, and X and is responsible for the communication and link between collagens and the cells [5]. DDR2 is present in various tissues, including vascular tissue and is implicated in the regulation of cell metabolism, differentiation, and growth. Interestingly, activation of DDR2 can induce production of MMPs, and therefore, DDR2 is recognized as playing a very important role in fibrosis and cancer [6]. 
Considering that DDR2 is found in VSMCs, an interesting question arises as to whether DDR2 also affects plaque stability via mediating MMP expression in VSMCs [7, 8]. Consequently, we designed this study to elucidate the pathological role of DDR2 in atherosclerosis. ## 2. Materials and Methods ### 2.1. Atherosclerotic Specimens and Histological Staining Japanese white rabbits were treated by a cholesterol-rich diet containing 0.3% cholesterol and 3% corn oil for 6 or 16 weeks to induce atherosclerosis (n=3, respectively), and then, rabbits were euthanized by using pentobarbital sodium at each time point. The aortic arch of each rabbit was cut into 10 cross sections (4 μm) [9]. Male ApoE–/– mice were euthanized by cervical dislocation. Segments of heart tissue crossing the ascending aorta and aortic sinus from male ApoE–/– mice (n=4) were embedded within OCT, and serial sections (8 μm thick) were made as previously described [10]. Hematoxylin and eosin (H&E), oil red O, and Masson’s trichrome stains were performed according to the protocols as previously described [11, 12]. Moreover, sections were performed with immunohistochemical staining against DDR2 in mouse (1 : 200; Abcam, Cambridge, UK; CST, Beverly, MA, USA) human and rabbit (1 : 200; Santa Cruz Biotechnology, Inc, Dallas, TX, USA), RAM11 of macrophages (1 : 200; Dako, CA, USA), and α-actin of SMC (1 : 200; Thermo Fisher Scientific, CA, USA) as previously described [11].Human carotid plaques were collected from patients who received endarterectomy at Zhengzhou Central Hospital. Informed consents were obtained from all patients enrolled in the study, and all experiments were implemented in accordance with the guidelines and regulations set by the Ethics Committee of Xi’an Medical University (Permit No. XYJZS-201609027-1). Japanese white rabbits, SD rats, and ApoE–/– mice were purchased from the laboratory animal center at Xi’an Jiaotong University (Xi’an, China). All animal experiments were performed in the animal facility of Institute of Basic and Translational Medicine at Xi’an Medical University. The animal experiments were strictly following the guidelines of animal experiment in Xi’an Medical University, which was adapted from the Guide for the Care and Use of Laboratory Animals (NIH; Bethesda, MD, USA; NIH Publication No. 85-23, revised 2011). The Laboratory Animal Administration Committee of Xi’an Medical University approved all animal experiments (Institutional Animal Care and Use Committee; Permit No. XYJZS-201608012-2). ### 2.2. VSMC Culture VSMCs were obtained from the aortas of male SD rats (200-300 g) as previously described [13]. Cells were used in the experiments from passages 3 to 6. Before the initiation of each experiment, an additional incubation of serum-free DMEM for 24 h renders cells to be quiescent. Then, cells were exposed to oxidized low-density lipoprotein (ox-LDL) (0, 25, 50, and 100 mg/L; Yiyuan Biotechnologies, Guangzhou, China) for 24 h to simulate the proatherosclerotic condition. To activate DDR2, cells were incubated with collagen Ι (Sigma-Aldrich) for 48 h. To study the inhibition of signaling pathways, cells were treated with inhibitors for 30 min. SP600125 (20 μmol/L; Calbiochem) is an inhibitor for JNK (c-Jun N-terminal kinase), and SB203580 (10 μmol/L; Calbiochem) is an inhibitor for p38 MAPK (mitogen-activated protein kinase), and PD98059 (20 μmol/L; Calbiochem) is an inhibitor for MEK (MAPK/ERK (extracellular-signal-regulated kinase) kinase). 
The doses of the inhibitors were referenced by the previous studies [14, 15]. Then, cells were exposed to ox-LDL (100 mg/L). ### 2.3. siRNA Interference Small interfering RNA (siRNA) was used to knock down DDR2 expression in VSMCs as previously described [16]. Referencing a previous study, siRNA sequences were synthesized by a commercial company (sense strand, 5′- GAUGAUAGCAACACUCGGAUU-3′; antisense strand, 5′-UCCGAGUGUUGCUAUCAUCUU-3′; RiboBio, Guangzhou, China) [17]. siN05815122147 (RiboBio, Guangzhou, China) was used as a universal negative control. Transfection was performed with X-tremeGENE siRNA Transfection Reagent (Roche) and accorded to the manufacturer’s instructions. Briefly, the cells were rinsed twice with phosphate-buffered saline (PBS) to reduce background interference. DDR2 siRNA with various doses (400 and 800 ng; approximately 40 and 80 pmol) were transfected into VSMCs for 6 h, and were then treated with ox-LDL (100 mg/L). The cells were collected to examine the DDR2 expression or to be performed by migration and proliferation assay. ### 2.4. Migration and Proliferation Assay The migration of VSMCs was assessed by using the transwell permeable support insert (Corning, Lowell, MA, USA) as previously described [18]. After incubation of siRNA for 24 h without FBS, VSMCs were seeded on Matrigel (5 mg/mL; BD Biosciences, San Diego, CA, USA) of the upper compartment, and DMEM supplemented with 10% FBS was added into the lower compartment. Cells were cultured for another 24 h and then detected by crystal violet staining. Five different high-power fields per well were photographed. The positively stained VSMCs were counted by an observer blinded to the treatment protocol.The proliferation of VSMCs was assessed by the wound-healing assay as previously described [19]. Briefly, after 24 h of siRNA treatment, a 10-l pipette tip was used to scrap an artificial wound in the monolayer across the bottom of the dish. After extensive washing, medium containing 10% FBS was removed, and cells started to migrate for the appropriate time in a 37°C incubation chamber with 5% CO2. At various time points, images were obtained with a Nikon TE2000 Inverted Microscope. Meanwhile, some representative dishes were performed by immunofluorescence against α-actin of SMC (1 : 200; Thermo Fisher Scientific, CA, USA) with Alexa Fluor 488 (1 : 200; Thermo Fisher Scientific, CA, USA). The remaining open area of the wound was quantified by using ImageJ as previously described, with some modifications [20]. ### 2.5. RNA Extraction and Real-Time PCR Total RNA was extracted from the aorta and VSMCs. Real-time PCR was performed as previously described [21, 22]. The sequences of the primers are listed in Table 1.Table 1 Primers were used for real-time PCR. GeneForward (5′-3′)Reverse (5′–3′)DDR2GATCATGTTTGAATTTGACCGAGCACTGGGGTTCACATCMMP-2TTGACCAGAACACCATCGGGTCCAGGTCAGGTGTGTMMP-3GCTGTGTGCTCATCCTACCTGACAACAGGGCTACTGTCMMP-8AGGAATGCCACTATGATTGCAAGAAATCACCAGAGTCGMMP-9ACAGCGAGACACTAAAGGCGGCAAGTCTTCGGTGTAGCMMP-12GCTGGTTCGGTTGTTAGGGTAGTTACACCCTGAGCATACMMP-13ACTCAAATGGTCCCAAACTATCAGCAGTGCCATCATMMP-14GTACCCACACACAACGCTTTATCTGGAACACCACAGCGAPDHTACCCACGGCAAGTTCAACGCACCAGCATCACCCCATTTG ### 2.6. Protein Extraction and Western Blotting Analysis Total protein was extracted from the aorta of rabbits and VSMCs as previously described [22]. 
The primary antibodies were against rabbit’s DDR2 (1 : 500; Santa Cruz Biotechnology, Santa Cruz, CA), rat’s DDR2 (1 : 500; Santa Cruz Biotechnology, Santa Cruz, CA; CST, Beverly, MA, USA), MMP-2 (1 : 500; Abcam, Cambridge, MA), TIMP-1 (1 : 500; Abcam, Cambridge, MA), TIMP-2 (1 : 500; Abcam, Cambridge, MA), p-ERK1/2 (1 : 1000; CST, Beverly, MA, USA), and GAPDH (1 : 1000; Santa Cruz Biotechnology, Santa Cruz, CA). Western blotting analysis was applied as previously described, and relative protein expression was measured by ImageJ with gel analysis [22]. ### 2.7. Zymography The supernatants along with the total protein were extracted from cultured VSMCs. Under nonreducing conditions, the equal amounts of sample protein were analyzed by SDS-PAGE in gelatin-containing acrylamide gels (2 mg/mL gelatin and 7.5% polyacrylamide) as previously described [23]. ### 2.8. Statistical Analysis All data are expressed as themean±SE. Two groups of comparisons were used by Student’s t-test. Multiple groups of comparisons were performed by using one-way ANOVA with the Bonferroni test. P<0.05 was considered statistically significant. The statistical calculations were performed by using SPSS 19.0 software (IBM Corp., Armonk, NY, USA). ## 2.1. Atherosclerotic Specimens and Histological Staining Japanese white rabbits were treated by a cholesterol-rich diet containing 0.3% cholesterol and 3% corn oil for 6 or 16 weeks to induce atherosclerosis (n=3, respectively), and then, rabbits were euthanized by using pentobarbital sodium at each time point. The aortic arch of each rabbit was cut into 10 cross sections (4 μm) [9]. Male ApoE–/– mice were euthanized by cervical dislocation. Segments of heart tissue crossing the ascending aorta and aortic sinus from male ApoE–/– mice (n=4) were embedded within OCT, and serial sections (8 μm thick) were made as previously described [10]. Hematoxylin and eosin (H&E), oil red O, and Masson’s trichrome stains were performed according to the protocols as previously described [11, 12]. Moreover, sections were performed with immunohistochemical staining against DDR2 in mouse (1 : 200; Abcam, Cambridge, UK; CST, Beverly, MA, USA) human and rabbit (1 : 200; Santa Cruz Biotechnology, Inc, Dallas, TX, USA), RAM11 of macrophages (1 : 200; Dako, CA, USA), and α-actin of SMC (1 : 200; Thermo Fisher Scientific, CA, USA) as previously described [11].Human carotid plaques were collected from patients who received endarterectomy at Zhengzhou Central Hospital. Informed consents were obtained from all patients enrolled in the study, and all experiments were implemented in accordance with the guidelines and regulations set by the Ethics Committee of Xi’an Medical University (Permit No. XYJZS-201609027-1). Japanese white rabbits, SD rats, and ApoE–/– mice were purchased from the laboratory animal center at Xi’an Jiaotong University (Xi’an, China). All animal experiments were performed in the animal facility of Institute of Basic and Translational Medicine at Xi’an Medical University. The animal experiments were strictly following the guidelines of animal experiment in Xi’an Medical University, which was adapted from the Guide for the Care and Use of Laboratory Animals (NIH; Bethesda, MD, USA; NIH Publication No. 85-23, revised 2011). The Laboratory Animal Administration Committee of Xi’an Medical University approved all animal experiments (Institutional Animal Care and Use Committee; Permit No. XYJZS-201608012-2). ## 2.2. 
## 3. Results

### 3.1. DDRs Are Present in Atherosclerotic Lesions of Humans

To examine whether DDR2 was present in atherosclerotic lesions, we used immunohistochemical staining to identify DDR2 in human tissue and in various animal species. Interestingly, we found expression of DDR2 in human carotid atherosclerotic plaques (Figure 1). When compared with the MTS-stained sections, DDR2 was densely distributed around the fatty cores of human carotid atherosclerotic plaques, and positive staining was also found adjacent to the fibrous cap and the middle membrane (Figure 1).

Figure 1 DDR2 in human carotid atherosclerotic plaque. Hematoxylin and eosin (H&E; 10x); Masson's trichrome stain (MTS; 10x); immunohistochemical staining against DDR2 (10x); rabbit IgG isotype control (10x). The area in the box is displayed as a high-power field in (b) (40x). NC: necrotic core; FC: fatty core; F: fibrous cap; M: middle membrane.

### 3.2. DDRs Are Present in Atherosclerotic Lesions of Animal Models

By using a rabbit model, we attempted to investigate the relationship between DDR2 expression and atherosclerotic progression. We found that DDR2 was present in the early and middle atherosclerotic lesions of rabbits, and this receptor was located in a similar position (Figure 2(a)). In the early-stage lesion, the majority of DDR2 was deposited along the lower edge of the lesion (Figure 2(a)).
In the middle-stage lesion, DDR2 was diffusely distributed within the lesion and was also deposited on its surface (Figure 2(a)). DDR2 neither apparently overlapped with the cytoplasm of macrophages nor fully overlapped with VSMCs (Figure 2(a)). Comparison of the MTS and DDR2 staining indicated that DDR2 expression and distribution tended to localize around collagen fibers (Figure 2(a)). To further confirm the above results, we also used immunohistochemical staining to identify DDR2 in atherosclerotic lesions of ApoE-/- mice (Figure 2(b)). As Figure 2(b) shows, DDR2 was also found in atherosclerotic lesions of mice.

Figure 2 DDR2 in atherosclerotic plaques of animal models. Comparison of DDR2 expression and collagen distribution in atherosclerotic plaques of the HCD-induced rabbit model (a). Serial paraffin sections of aortic lesions were stained with Masson's trichrome stain (MTS) and immunohistochemical staining against α-smooth muscle actin (SMC), macrophages (Mϕ), and DDR2 (a) (bar = 200 μm). Atherosclerotic plaques of the apoE knockout mouse model and immunohistochemical staining against DDR2 (b) (bar = 100 μm). The area in the box is displayed as a high-power field on the right side (bar = 50 μm). (a)(b)

### 3.3. ox-LDL Upregulates DDR2 in VSMCs via the MAPK Pathway

Considering that DDR2 was abundant in atherosclerotic lesions, we questioned whether DDR2 expression was associated with atherosclerotic development. Comparing the protein expression of DDR2, we found no significant difference between the early and middle lesions (Figure 3(a)). Next, we also questioned what caused the upregulation of DDR2 in atherosclerotic lesions. Considering ox-LDL as a key atherogenic factor, we incubated VSMCs with various concentrations of ox-LDL for 48 h. We found that 100 mg/L of ox-LDL significantly increased DDR2 expression in VSMCs (Figure 3(b)).

Figure 3 Protein expression of DDR2 in the aorta and VSMCs. Immunoblot analysis and quantification of DDR2 in aortas of HCD-induced atherosclerotic rabbits ((a); n = 4) and VSMCs that were incubated with various concentrations of 0, 25, 50, and 100 mg/L ox-LDL (b). Data are expressed as the mean ± standard error. ∗P < 0.05 and ∗∗P < 0.01 vs. the 0 mg group; ##P < 0.01 100 mg vs. the 25 mg group. Effect of MAPK inhibitors on ox-LDL-induced DDR2 expression (c). VSMCs were preincubated with inhibitors for 30 min and then treated with 100 mg/L ox-LDL for 24 h in the presence of inhibitors (SP600125, SB203580, and PD98059 against JNK, p38 MAPK, and MEK, respectively). Data are expressed as the mean ± standard error. ∗P < 0.05 and ∗∗P < 0.01 vs. the ox-LDL group. (a)(b)(c)

Next, we also investigated which pathway may be involved in ox-LDL-induced DDR2 expression. By using inhibitors against the MAPK pathway, we found that blocking JNK, p38, and ERK1/2 in VSMCs could neutralize ox-LDL-induced DDR2 expression, suggesting that the MAPK pathway might be responsible for the ox-LDL-induced upregulation of DDR2 (Figure 3(c)).

### 3.4. DDR2 Affects the Migration of VSMCs

As reported in a previous study, a specific siRNA against DDR2 was applied to suppress DDR2 protein expression. First, we used various doses of siRNA to knock down DDR2 expression in VSMCs. We found that 800 ng of siRNA could efficiently inhibit ox-LDL-induced DDR2 expression, which was quantified and confirmed with real-time PCR (Figure 4(a)).
Next, to test whether inhibition of DDR2 in SMCs affected their migration, the transwell assay was performed after incubation of siRNA with VSMCs for 48 h (0, 200, 400, and 800 ng of siRNA, respectively). We found that 400 and 800 ng of siRNA reduced cell migration, indicating that the migration activity of VSMCs was inhibited by DDR2 reduction (Figure 4(b)). To confirm this result, we also used a wound healing assay to check the migration of VSMCs. As Figure 4(c) shows, we found that a decrease in DDR2 expression indeed inhibited the migration of VSMCs.

Figure 4 Effect of knockdown of DDR2 on the migration and proliferation of VSMCs. After 400 and 800 ng of siRNA treatments, DDR2 expression in VSMCs was analyzed and quantified by western blotting and real-time PCR (a). Migration and proliferation were assessed by transwell analysis (b) and wound healing assay (c). Data are expressed as the mean ± standard error. ∗P < 0.05 and ∗∗P < 0.01 vs. the scramble RNA-treated group. (a)(b)(c)

### 3.5. DDR2 Affects the mRNA Expression of MMPs in VSMCs

The invasion of VSMCs is determined by their collagenase secretion; thus, we examined whether MMP expression was reduced by DDR2 deficiency. First, we incubated collagen I with VSMCs for 48 h, and MMP (MMP-2, MMP-3, MMP-8, MMP-9, MMP-12, MMP-13, and MMP-14) expression was quantified by real-time PCR. Interestingly, we found that collagen I could upregulate MMP-2, MMP-3, MMP-9, and MMP-14 (Figure 5(a)). Next, we used 800 ng of siRNA to knock down DDR2 expression before collagen I incubation. As a result of mRNA quantification by real-time PCR, inhibition of DDR2 reversed the effect of collagen I on the expression of MMPs and suppressed the mRNA expression of MMP-2, MMP-3, MMP-9, and MMP-13 (Figure 5(b)). In addition, we failed to find the mRNA expression of MMP-12 in VSMCs.

Figure 5 Effect of knockdown of DDR2 on mRNA expression of MMPs in VSMCs. Collagen I-induced MMP expression was assessed by real-time PCR (a). After siRNA treatment, collagen I-induced MMP expression was assessed by real-time PCR (b). Data are expressed as the mean ± standard error. ∗P < 0.05, ∗∗P < 0.01, and ∗∗∗P < 0.001 vs. the collagen I-treated group or the collagen I combined with siRNA-treated group. (a)(b)

### 3.6. DDR2 Affects the Activity and Expression of MMP-2 in VSMCs

Considering that collagen I-induced mRNA expression of MMP-2 was dramatically suppressed after DDR2 deficiency, we next examined whether the inhibition of DDR2 affected the activity of MMP-2. By zymography, MMP-2 activity was increased by collagen I incubation but was decreased by 800 ng of siRNA against DDR2 expression, and almost no MMP-2 activity was found without collagen I incubation before siRNA interference (Figure 6(a)). However, we did not find any bands of MMP-9 in the same zymography gel. To study whether the decreased activity of MMP-2 was attributable to reduced MMP-2 protein, we next examined MMP-2 protein expression via western blotting. Indeed, we found that MMP-2 production was decreased in the presence of siRNA against DDR2 expression (Figure 6(b)). Next, TIMP-1 and TIMP-2 were studied and quantified by western blotting, and the result showed that there was no significant difference between the two groups (Figure 6(c)). To further study the underlying mechanism, we also examined ERK signaling according to a previous report [6]. After inhibition of DDR2 by siRNA, phosphorylation of ERK1/2 was reduced (Figure 6(d)).

Figure 6 Effect of knockdown of DDR2 on the activity and expression of MMP-2 in VSMCs.
The activity of MMP-2 was assessed by zymography (a). MMP-2 protein expression was analyzed and quantified by western blotting (b). TIMP-1 and TIMP-2 were detected and quantified by western blotting (c). p-ERK1/2 protein expression was analyzed and quantified by western blotting (d). Data are expressed as the mean ± standard error. ∗P < 0.05 vs. the scramble RNA-treated group. (a)(b)(c)(d)
## 4. Discussion

Atherosclerotic plaque rupture is a serious problem for patients with cardiovascular disease, often causing unstable angina, myocardial infarction, and sudden coronary death [24]. However, how the atherosclerotic plaque becomes vulnerable remains unknown. The question remains as to why ECM tends to degrade in these unstable plaques. Notably, ECM not only constitutes atherosclerotic plaques but also activates collagen receptors to regulate the surrounding cells [25]. As a collagen receptor, DDR2 is involved in various diseases such as fibrotic diseases, arthritis, cancer, and atherosclerosis [5].

In the current study, we found that DDR2 was present in atherosclerotic plaques of various animal models, but DDR2 immunoreactivity did not totally overlap with macrophages and VSMCs. As previously described, DDR2 is highly expressed in VSMCs and is also found in activated endothelial cells [26]. Based on our observations, DDR2 expression tends to localize around collagens and fibrous caps. More importantly, for the first time, we have reported that DDR2 is highly expressed in human carotid atherosclerotic plaques, where it is distributed around the fatty core and overlaps with collagen fibers. Taking these findings together, DDR2 expression in specific regions of the atherosclerotic plaque may be somehow upregulated by collagen or other proatherosclerotic factors, and the underlying mechanism needs to be addressed in further studies.
Given that ox-LDL is a major proatherosclerotic factor and is abundant in the fatty core, a question arises of whether ox-LDL can induce the expression of DDR2. Interestingly, the current research proves that ox-LDL can induce upregulation of DDR2 in a dose-dependent manner, whereas such upregulation is neutralized by the inhibition of JNK, p38 MAPK, and ERK, showing that the MAPK pathway may be involved in ox-LDL-induced DDR2 expression. In the vasculature, SMCs not only are responsible for the secretion of collagen fibers but also express collagen receptors to interact with ECM, regulating their own proliferation, differentiation, and migration [27]. Accordingly, we speculate that DDR2 may affect the physiological functions of VSMCs, especially when these cells are stimulated by proatherosclerotic factors. As shown in our results, VSMCs with downregulation of DDR2 display reduced differentiation and migration. Considering that VSMC infiltration into the intima is a vital process of atherogenesis, DDR2 could be a mediator that regulates atherosclerotic progression [3]. Notably, given that this infiltration depends on the secretion of MMPs, another question has been raised as to whether DDR2 also regulates MMPs [28]. Indeed, MMPs largely determine the fate of atherosclerotic plaques because MMP-mediated breakdown of ECM is a typical feature of an unstable plaque [29]. In pathological conditions, excessive ECM also stimulates VSMCs to degrade ECM as a negative feedback regulation. As a natural ligand of DDRs, collagen I can activate DDR2 in VSMCs to produce MMPs for the degradation of ECM [30]. We observed that collagen I indeed promotes VSMCs to produce various types of MMPs, especially MMP-2 and MMP-3. However, collagen I-induced expression of MMPs can be completely neutralized by knockdown of DDR2, suggesting that DDR2 plays a pivotal role in the degradation of ECM. Of note, since MMP-2 expression is dramatically affected by the knockdown of DDR2, a causal relationship between DDR2 and MMP-2 is indicated in the current study. Increased MMPs do not necessarily have enzymatic activity, because MMPs are not only initially synthesized as inactive zymogens but are also inhibited by specific endogenous inhibitors such as the tissue inhibitors of metalloproteinases [31]. Thus, only if DDR2 mediates the activity of MMPs can this collagen receptor be confirmed to be involved in plaque vulnerability. Interestingly, decreased DDR2 indeed also inhibits the protein expression and activity of MMP-2, indicating that the effect of MMP-2 in VSMCs may proceed via collagen I-activated DDR2. In line with previous studies, a mechanism is suggested in which collagen I causes the expression and phosphorylation of DDR2 and ultimately upregulates MMP-2 expression via ERK1/2 signaling [6, 32]. Accordingly, we have confirmed that phosphorylation of ERK1/2 is suppressed by inhibition of DDR2 in VSMCs, indicating that ERK1/2 may be involved in the expression and activity of MMP-2. Of note, this result needs to be confirmed by further experiments. Moreover, we did not find pro-MMP-2, pro-MMP-9, or MMP-9 in the zymography, which does not exclude possible regulation of other MMPs by activated DDR2.

In conclusion, we found high expression of DDR2 in atherosclerotic plaques of humans and various animal models, and DDR2 in VSMCs was involved in collagen I-induced secretion of MMP-2, suggesting that the clinical role of DDR2 in cardiovascular disease should be elucidated in further experiments.
Owing to the proatherosclerotic condition, an abundance of DDR2 is present in the atherosclerotic plaques of humans and various animal models; furthermore, excessive ECM such as collagen I may also activate DDR2 to produce MMP-2 for the degradation of ECM, which is ultimately involved in the etiology of unstable atherosclerotic plaques. Once this mechanism is clarified, DDR2 may become a novel target for developing new therapeutic strategies for the control of unstable atherosclerotic plaques.

---

*Source: 1010496-2021-12-16.xml*
2021
# Computer-Aided Modelling and Analysis of PV Systems: A Comparative Study

**Authors:** Charalambos Koukouvaos; Dionisis Kandris; Maria Samarakou
**Journal:** The Scientific World Journal (2014)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2014/101056

---

## Abstract

Modern scientific advances have enabled remarkable efficacy for photovoltaic systems with regard to the exploitation of solar energy, boosting them into a rapidly growing position among the systems developed for the production of renewable energy. However, in many cases the design, analysis, and control of photovoltaic systems are tasks which are quite complex and thus difficult to carry out. In order to cope with this kind of problem, appropriate software tools have been developed, either as standalone products or as parts of general-purpose software platforms, used to model and simulate the generation, transmission, and distribution of solar energy. The utilization of this kind of software tool may be extremely helpful for the successful performance evaluation of energy systems with maximum accuracy and minimum cost in time and effort. The work presented in this paper aims, on a first level, at the performance analysis of various configurations of photovoltaic systems through computer-aided modelling. On a second level, it provides a comparative evaluation of the credibility of two of the most advanced graphical programming environments, namely, Simulink and LabVIEW, with regard to their application to photovoltaic systems.

---

## Body

## 1. Introduction

Serious economic concerns, along with growing worries about the disastrous effects that technological progress cumulatively causes to the world environment, impose a shift towards renewable energy resources, which constitute an environmentally friendly approach to power generation with satisfactory efficiency [1]. That is why in many regions of the globe there is an official tendency to let renewable energy sources have an increasing share in the overall power production process. For instance, the European Union, by means of the implementation of the so-called 20/20/20 initiative, aims to achieve by year 2020 not only the reduction of greenhouse gas emissions by 20% (compared to 1990 levels), but also the increase of the share of renewable energy to 20% and the decrease of the overall energy consumption by 20% through efficient energy management [2].

The renewable energy resources which have the most enhanced exploitation, thanks to relevant advances in modern scientific research, are solar, hydropower, geothermal, biomass, and wind energy [3, 4]. Solar energy refers to the solar radiation that reaches the Earth and can be converted into other forms of energy, such as heat and electricity. The transformation of solar radiation into electric current is performed by using photovoltaic (PV) cells. A PV cell is practically a p-n junction placed in the interior of a thin wafer of semiconductor. The solar radiation falling on a PV cell can be directly converted to electricity through the so-called photovoltaic effect. This is a physical phenomenon in which photons of light excite electrons into higher energy states, letting them act as charge carriers for electric current.
Specifically, the exposure of a PV cell to sunlight triggers the creation of electron-hole pairs proportional to the incident irradiation by photons having energy greater than the band-gap energy of the semiconductor material of the PV cell.

Over the last decades, international interest in the PV conversion of solar radiation has been continuously growing. In this way, the use of PV systems is nowadays so widespread that it is considered to constitute the third largest renewable energy source in terms of globally installed capacity, after hydro- and wind power. On the other hand, the brilliant prospects of PV systems for further evolution are hindered by various technical and economic issues that have yet to be resolved. For this reason, modern scientific and technological research focuses on the development of methodologies and equipment for the increase of the energy efficiency of PV systems, the reduction of their production cost, the improvement of their market penetration, and the enhancement of their environmental performance [5–7]. These research activities can be greatly assisted by the utilization of software tools for the development of models of the PV systems under consideration and the analysis of their performance by carrying out simulation tests.

The aim of this paper is to perform the computer-aided design and performance analysis of both grid-connected and standalone PV systems by using in parallel the two most widely used graphical programming environments, namely, Simulink by MathWorks and LabVIEW by National Instruments. This task is accomplished by taking advantage of the potential provided through the adaptation of these two software platforms to PV systems. The parallel utilization of Simulink and LabVIEW enables the comparative evaluation of their accuracy, validity, and overall performance under various simulation scenarios. To the knowledge of the authors of this paper, this is the first comparative study of this kind performed in the field of photovoltaics, although similar studies have been carried out in other scientific areas such as [8].

The rest of the paper is organized as follows: Section 2 focuses on related work in the computer-assisted modelling of PV systems. Section 3 describes the procedures used for the PV systems modelling. In Section 4 the results of the simulation tests executed are presented. The discussion of these results is performed in Section 5. Finally, Section 6 concludes the paper.

## 2. Computer-Aided Performance Analysis of PV Systems

The performance of a PV plant is subject to many parameters, the influence of which should be accurately estimated before any investment is made for the establishment of this plant. Actually, temperature and solar radiation are the two main factors affecting the performance of PV systems. Specifically, ambient temperature should definitely be taken into consideration because the efficiency of solar cells is inversely related to temperature [9]. Additionally, the energy produced by a PV system on an annual basis is directly related to the available solar radiation and therefore depends on the geographical location of the system, because two radiation beams of equal power but of different wavelength can produce different amounts of electricity in a solar cell and thus yield a different degree of performance [9].
Moreover, the operational efficiency of a solar panel may be affected by other environmental or technical factors such as the speed and direction of wind, the distribution of the solar spectrum, rain, shading, pollution, panel aging, optical losses, and panel casualties [9–13].

In order to calculate the energy efficiency of PV systems in an accurate and systematic manner, modern research works focus on the computer-aided design and analysis of this type of system [14]. For instance, a PV array simulation model incorporated in the Simulink GUI environment was developed using basic circuit equations of the PV solar cells, including the effects of solar irradiation and temperature changes. The developed model was tested by means of both a directly coupled DC load and an AC load via an inverter [15]. In another work, a model for a solar PV array (PV), with an AC/DC converter and load, a maximum power point tracker, a battery, and a charger, was built. The whole model was simulated under four testing scenarios, some of which included cloudy and sunny conditions along with constant and varying load [16]. Similarly, an approach to determine the characteristic of a particular PV cell panel and to study the influence of different values of solar radiation at different temperatures on the performance of PV cells in Simulink was proposed [17]. There are research works, such as [18], which focus on modelling and simulation of standalone PV systems, whilst others, like [19], address modelling and simulation of grid-connected PV systems to analyze the grid interface behavior and control performance in the system design. Other works, such as [20], predict energy production from PV panels under various performance conditions. In [21] a generalized PV model built in MATLAB/Simulink, which enables the simulation of the system dynamics, is presented. Similarly, [22] focuses on the modelling process of PV cells in Simulink, while in [23] the two-diode model of a PV module is examined. Concentrating on educational applications, [24] describes the design and implementation of a virtual laboratory for PV power systems based on the Cadence PSpice circuit simulator. In [25] an integrated PV monitoring system having a graphical user interface developed in LabVIEW is presented.

## 3. Theoretical Background

The procedure of building accurate PV models is extremely important in order to achieve high efficiency in the decision making related to the establishment and operation of PV systems. This justifies why, as already discussed in Section 2, many research works are carried out in this scientific area. Generally, modelling of PV systems is based on the assumption that the operation of PV cells may be simulated by examining the operation of analogous electronic circuits. The simplest PV model found in the bibliography comprises a single diode connected in parallel with a light-generated current source (Iph), as shown in Figure 1, where Id expresses the diode current [26, 27]. This simplified model refers to an ideal solar cell.

Figure 1 Electronic circuit equivalent to the model of an ideal PV cell.

Going one step further, a series resistance Rs, representing an internal parasitic resistance which reduces the efficiency of the solar cell, is added to the circuit, as depicted in Figure 2 [28, 29].
The relationship between the voltage and the current for a given temperature and solar radiation is given as

$$I = I_{ph} - I_d = I_{ph} - I_0\left(e^{(V + I \cdot R_s)/(a \cdot V_T)} - 1\right), \tag{1}$$

where $I_0$ expresses the diode reverse saturation current, $a$ represents the ideality factor of the diode, and $V_T$ stands for the so-called thermal voltage of the PV module, which is given by [2]

$$V_T = \frac{N_s \cdot k \cdot T_c}{q}, \tag{2}$$

where $N_s$ denotes the number of photovoltaic cells connected in series in the PV module, $k$ stands for Boltzmann's constant ($1.3806503 \cdot 10^{-23}$ J/K), $T_c$ indicates the temperature of the photovoltaic cell (expressed in kelvin), and $q$ symbolizes the elementary electric charge ($1.60217646 \cdot 10^{-19}$ C).

Figure 2 Electronic circuit representing a one-diode with series resistance model of a PV cell.

Other studies make use of a model which is derived by the addition of a shunt resistance Rp in parallel to the diode in order to improve model accuracy in cases of high variations of temperature and low voltage, but at the expense of complexity [30, 31]. A few research works adopt an even more complex model, known as the two-diode model, aiming to take into consideration recombination losses, that is, the elimination of mobile electrons and electron holes due to the existence of impurities or defects at the front and/or the rear surfaces of a cell. This model includes a second diode placed in parallel to the current source and the initial diode, thus calling for the simultaneous calculation of seven parameters based on either iterative approaches or more analytical methodologies [23].

When simulating a PV module, no matter which model is adopted, the goal is to calculate the I-V and P-V curves which are representative of the module operation. On these curves the so-called maximum power point (MPP) is determined as the point for which the power dissipated in the load is maximum. From basic circuit theory, the power delivered from or to a device is maximized where the derivative (graphically, the slope) dI/dV of the I-V curve is equal to and opposite of the I/V ratio (where dP/dV = 0), and this point corresponds to the "knee" of the curve.

Another technical feature that is taken into consideration during the simulation tests performed is the so-called total harmonic distortion (THD). Generally, the THD of a signal is a measurement of the harmonic distortion present and is defined as the ratio of the sum of the powers of all harmonic components to the power of the fundamental frequency. In energy systems, THD is used to characterize the quality of electric power.

Finally, in order to carry out the comparative analysis of the modelling accuracy of Simulink and LabVIEW, the so-called coefficient of determination was calculated. This is a statistical term, denoted as R2, which expresses the proportion of the total variation of outcomes explained by a model, thus providing a measure of how well observed outcomes are replicated by this model.

## 4. Modeling and Simulation Setup

In accordance with the PV modelling studies reviewed in Section 2, the research work presented in this paper investigates the computer-aided modelling and analysis of, first, a standalone commercially available solar panel, second, an integrated PV system without grid connection, and, third, a grid-connected PV system under low and high load conditions.

### 4.1. Modeling of a PV Panel

The first modelling procedure performed refers to the modelling and performance analysis of a single PV panel for the production of I-V and P-V curves at the maximum power point (MPP).
The modelling was performed both in Simulink and LabVIEW based on the one-diode with series resistance PV model described in Section 3, because it has an adequate ratio of accuracy to complexity. The specific PV panel examined is the Solarex SX-50. The characteristic features of this PV panel along with their typical values, for solar irradiance S = 1000 W/m2 and operation temperature T = 25°C, are presented in Table 1, while the I-V curve for various operating temperatures is illustrated in Figure 3. The specific panel consists of 36 photovoltaic silicon cells located in two rows of 18 PV cells each. The modelling of the SX-50 PV panel in Simulink was performed by using the Simscape-SimElectronics library. Figure 4 shows that the structure of the PV panel consisted of two sets of 18 cells each. The corresponding simulation model built is depicted in Figure 5. The modelling of the SX-50 PV panel in LabVIEW was performed by using the NI LabVIEW Simulation Interface Toolkit. Figure 6 shows the resultant model developed.

Table 1 Technical features of the SX-50 PV panel.

| Feature | Value |
|---|---|
| Maximum power (Pmax) | 50 W |
| Voltage at Pmax (Vmax) | 16.8 V |
| Current at Pmax (Imax) | 2.97 A |
| Guaranteed minimum Pmax | 45 W |
| Short-circuit current (Isc) | 3.23 A |
| Temperature coefficient of Isc | (0.065 ± 0.015)%/°C |
| Temperature coefficient of Voc | −(80 ± 10) mV/°C |
| Temperature coefficient of power | −(0.5 ± 0.05)%/°C |

Figure 3 Characteristic I-V curve of an SX-50 PV panel.
Figure 4 Overview of an SX-50 PV panel in Simulink.
Figure 5 Simulation model of an SX-50 panel in Simulink.
Figure 6 Simulation model of an SX-50 panel in LabVIEW.

### 4.2. Modeling of an Integrated PV System with No Grid Connection

The second modelling procedure performed refers to the modelling and performance analysis of a typical integrated PV system, consisting of a PV energy source, a DC/DC converter, a DC/AC inverter, a filter, and a load, as shown in Figure 7.

Figure 7 Overview of the integrated PV system modeled.

The modelling was performed both in Simulink and LabVIEW based on the assumptions that the power of the photovoltaic generator is equal to 4.6 kW and the operation temperature is set to 25°C. Additionally, solar irradiance is supposed to have a stable value equal to 1000 W/m2, while the load magnitude is equal to 2 kW. The system was connected to a low-voltage grid by using alternatively a low load (1.5 kW) or a high load (6.5 kW). The filter is an LC circuit aiming to cut off, from the output current, the harmonic frequencies produced by the DC-AC inverter. Similarly, an RL circuit was incorporated in the line that connects the filter with the load. Figure 8 illustrates the simulation model developed in Simulink, while Figure 9 depicts the same model built in LabVIEW.

Figure 8 Integrated PV system model in Simulink.
Figure 9 Integrated PV system model in LabVIEW.

### 4.3. Modeling of an Integrated PV System with Connection to Low-Voltage Grid

The third modelling task performed refers to the model development and performance analysis of a typical integrated PV system, having the structure shown in Figure 7, which is connected to the low-voltage grid by using either a low load (1.5 kW) or a high load (6.5 kW). The filter incorporated in the overall system structure is an LC circuit aiming to cut off, from the output current, the harmonic frequencies produced by the DC-AC inverter. Similarly, an RL circuit was used in the line which connects the filter with the load.
Figure 10 illustrates the simulation model developed in Simulink, while Figure 11 depicts the same model built in LabVIEW.

Figure 10 Grid-connected integrated PV system model in Simulink.
Figure 11 Grid-connected integrated PV system model in LabVIEW.
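Since the one-diode-with-series-resistance model of (1)–(2) underlies the PV panel model of Section 4.1, a minimal numerical sketch of how the I-V and P-V curves and the MPP can be obtained from it is given below. It is written in Python purely for illustration; the diode parameters (I0, a, Rs) are assumed values and not the fitted SX-50 parameters used in the Simulink and LabVIEW models.

```python
# Illustrative sketch (not the paper's Simulink/LabVIEW models): solves the
# one-diode-with-series-resistance model of equations (1)-(2) for a 36-cell
# panel and locates the maximum power point (MPP) on the I-V / P-V curves.
# All parameter values below are assumptions chosen only for illustration.
import numpy as np
from scipy.optimize import brentq

k = 1.3806503e-23   # Boltzmann's constant [J/K]
q = 1.60217646e-19  # elementary charge [C]

Ns  = 36            # series-connected cells (SX-50-like layout)
Tc  = 25 + 273.15   # cell temperature [K]
Iph = 3.23          # photo-generated current, taken equal to Isc [A] (assumption)
I0  = 8e-8          # diode reverse saturation current [A] (assumption)
a   = 1.3           # diode ideality factor (assumption)
Rs  = 0.3           # series resistance [ohm] (assumption)

VT = Ns * k * Tc / q          # thermal voltage of the module, equation (2)

def panel_current(V):
    """Solve equation (1), which is implicit in I, by bracketed root finding."""
    f = lambda I: Iph - I0 * (np.exp((V + I * Rs) / (a * VT)) - 1.0) - I
    return brentq(f, -1.0, Iph + 1.0)

V = np.linspace(0.0, 21.0, 500)           # voltage sweep up to roughly Voc
I = np.array([panel_current(v) for v in V])
P = V * I                                  # P-V curve

mpp = np.argmax(P)                         # index where dP/dV = 0 numerically
print(f"Imax = {I[mpp]:.2f} A, Vmax = {V[mpp]:.2f} V, Pmax = {P[mpp]:.2f} W")
```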
## 5. Simulation Results and Discussion

In the following subsections of Section 5, the outcomes of the simulation tests carried out are both described and commented on.

### 5.1. Performance Evaluation of the PV Panel Modeled

The basic assumption in all simulation tests carried out for the performance analysis of the solar panel modeled in Section 4.1 was that there was constant solar irradiance S equal to 1000 W/m2, while the operation temperature T was sequentially set to 0°C, 25°C, 50°C, and 75°C. For each one of these temperature values the corresponding I-V and P-V curves were drawn and the values of Imax, Vmax, and Pmax were calculated both in Simulink and LabVIEW. Indicatively, Figures 12 and 13 depict these curves for T = 25°C in Simulink and LabVIEW.

Figure 12 I-V and P-V curves for the SX-50 Solarex PV panel when T = 25°C and S = 1000 W/m2 in Simulink. (a) (b)
Figure 13 I-V and P-V curves for the SX-50 Solarex PV panel when T = 25°C and S = 1000 W/m2 in LabVIEW.

The values of Imax, Vmax, and Pmax which were found based on the simulation results in Simulink and LabVIEW for T = 0°C are presented in Table 2. The corresponding results for T = 50°C are presented in Table 3 and for T = 75°C in Table 4.

Table 2 Simulation results for Imax, Vmax, and Pmax (T = 0°C).

|  | Imax (A) | Vmax (V) | Pmax (W) |
|---|---|---|---|
| Simulink | 2.995 | 18.226 | 54.607 |
| LabVIEW | 2.987 | 18.811 | 56.198 |

Table 3 Simulation results for Imax, Vmax, and Pmax (T = 50°C).

|  | Imax (A) | Vmax (V) | Pmax (W) |
|---|---|---|---|
| Simulink | 3.002 | 14.916 | 44.782 |
| LabVIEW | 3.041 | 15.152 | 46.108 |

Table 4 Simulation results for Imax, Vmax, and Pmax (T = 75°C).

|  | Imax (A) | Vmax (V) | Pmax (W) |
|---|---|---|---|
As shown in Figures 14, 15, and 16 the values of R2 found correspondingly for Imax, Vmax, and Pmax are equal to 0.989, 0.996, and 0.999.Figure 14 Schematic comparison of the values ofImax in Simulink and LabVIEW for T = 0°C, 25°C, 50°C, and 75°C.Figure 15 Schematic comparison of the values ofVmax in Simulink and LabVIEW for T = 0°C, 25°C, 50°C, and 75°C.Figure 16 Schematic comparison of the values ofPmax in Simulink and LabVIEW for T = 0°C, 25°C, 50°C, and 75°C. ### 5.2. Performance Evaluation of an Integrated PV System with No Grid Connection The performance analysis of the integrated PV described in Section4.2 was carried out by the investigation of specific characteristics of the system, that is, the system output voltage at load and the current at load by performing Fourier transformation. Simulink makes use of the so-called Powegui FFT Analysis tool which enables the implementation of fast Fourier transformation of signals. Similarly, LabVIEW makes use of STARSIM which is an electrical system simulation tool.The results of the simulation of these two features in Simulink are depicted in Figures17 and 18.Figure 17 Voltage at load with no grid connection in Simulink.Figure 18 Current at load with no grid connection in Simulink.By applying fast Fourier transformation, it was found that the total harmonic distortion at load is 2.08%, while the amplitude of the voltage at the fundamental frequency is 313.2 V and the amplitude of the current at the fundamental frequency is 11.84 A.The corresponding simulation results in LabVIEW proved that the total harmonic distortion at load is 3.78%, while the amplitude of the voltage at the fundamental frequency is 219.53 V RMS, that is, 310.46 V, and the amplitude of the current at the fundamental frequency is 8.3 A RMS, that is, 11.74 A, as shown correspondingly in Figures19 and 20.Voltage at load with no grid connection in LabVIEW. (a) (b)Current at load with no grid connection in LabVIEW. (a) (b)The comparison of these data shows that the simulation through Simulink and LabVIEW leads to results which have a percentage deviation less than 0.9% for both voltage at load and current at load. ### 5.3. Performance Evaluation of an Integrated PV System with Connection to Low-Voltage Grid By using the same software tools as in the last subsection, the performance analysis of the, described in Section4.3, integrated PV connected to low-voltage grid was carried out.First, the low load (1.5 KW) case was examined, starting with the output voltage at load and current at load. The subsequent simulation results via Simulink are shown in Figures21 and 22, respectively.Figure 21 Voltage at load 1.5 KW with grid connection in Simulink.Figure 22 Current at load 1.5 KW with grid connection in Simulink.Based on these plots it was found that the total harmonic distortion at load is 1.67%, while the amplitude of the voltage at load at the fundamental frequency is 331.9 V and the amplitude of the current at the fundamental frequency is 9.428 A.The corresponding simulation results in LabVIEW proved that the amplitude of the voltage at the fundamental frequency is 236.71 V RMS, that is, 334.76 V, and the amplitude of the current at the fundamental frequency is 6.74 A RMS, that is, 9.53 A, while the total harmonic distortion at load is 11.25% as shown correspondingly in Figures23 and 24.Voltage at load 1.5 KW with grid connection in LabVIEW. (a) (b)Current at load 1.5 KW with grid connection in LabVIEW. 
Similarly, the total current at the connection of the system to the load and network was investigated. The resultant simulation plots drawn through the alternative utilization of Simulink and LabVIEW are depicted in Figures 25 and 26, respectively.

Figure 25 Current to load 1.5 kW and network with grid connection in Simulink.
Figure 26 Current to load 1.5 kW and network with grid connection in LabVIEW. (a) (b)

Based on these plots it was found that, according to Simulink, the amplitude of the total current at the connection of the system to the load and network is equal to 26.56 A, while the total harmonic distortion is equal to 8.16%. The corresponding simulation results in LabVIEW proved that the current amplitude is equivalent to 18.34 A RMS, that is, 25.94 A, while the THD is 6.65%. Similarly, the current at the connection of the system to the network was examined. The resultant simulation plots via Simulink and LabVIEW are depicted in Figures 27 and 28, respectively. Based on these plots it was found that, according to Simulink, the amplitude of the total current at the connection to the network is equal to 21.98 A, while the total harmonic distortion is equal to 9.76%. The corresponding simulation results in LabVIEW proved that the current amplitude is equivalent to 15.02 A RMS, that is, 21.24 A, while the THD is 6.84%.

Figure 27 Current to network with grid connection and load 1.5 kW in Simulink.
Figure 28 Current to network with grid connection and load 1.5 kW in LabVIEW. (a) (b)

The comparative examination of the aforementioned simulation results regarding the 1.5 kW load case shows that the percentage deviation between Simulink and LabVIEW is equal to 0.85% for the voltage at load, 1.07% for the current at load, 2.33% for the total current to load and network, and 3.44% for the current to network.

Finally, the high load (6.5 kW) case was examined, starting with the output voltage at load and current at load. The subsequent simulation results via Simulink are shown in Figures 29 and 30, respectively.

Figure 29 Voltage at load 6.5 kW with grid connection in Simulink.
Figure 30 Current at load 6.5 kW with grid connection in Simulink.

Based on these plots it was found that the amplitude of the voltage at load at the fundamental frequency is 319.1 V and the amplitude of the current at the fundamental frequency is 39.89 A, while the total harmonic distortion at load is 0.82%. The equivalent results in LabVIEW proved that the amplitude of the voltage at the fundamental frequency is 227.95 V RMS, that is, 322.37 V, and the amplitude of the current at the fundamental frequency is 28.49 A RMS, that is, 40.29 A, while the total harmonic distortion at load is 7.83%, as shown correspondingly in Figures 31 and 32.

Figure 31 Voltage at load 6.5 kW with grid connection in LabVIEW. (a) (b)
Figure 32 Current at load 6.5 kW with grid connection in LabVIEW. (a) (b)

Similarly, the total current at the connection of the system to the load and network was investigated. The resultant simulation plots via Simulink and LabVIEW are depicted in Figures 33 and 34, respectively.

Figure 33 Current to load 6.5 kW and network with grid connection in Simulink.
Figure 34 Current to load 6.5 kW and network with grid connection in LabVIEW. (a) (b)

Based on these plots it was found that, according to Simulink, the amplitude of the total current at the connection of the system to the load and network is equal to 19.76 A, while the total harmonic distortion is equal to 9.88%.
Finally, the high-load (6.5 kW) case was examined, starting with the output voltage at the load and the current at the load. The corresponding Simulink simulation results are shown in Figures 29 and 30, respectively.

Figure 29: Voltage at a 6.5 kW load with grid connection in Simulink.

Figure 30: Current at a 6.5 kW load with grid connection in Simulink.

Based on these plots, the amplitude of the voltage at the load at the fundamental frequency is 319.1 V and the amplitude of the current at the fundamental frequency is 39.89 A, while the THD at the load is 0.82%. The equivalent results in LabVIEW showed that the amplitude of the voltage at the fundamental frequency is 227.95 V RMS (i.e., 322.37 V peak) and the amplitude of the current at the fundamental frequency is 28.49 A RMS (i.e., 40.29 A peak), while the THD at the load is 7.83%, as shown in Figures 31 and 32.

Figure 31: Voltage at a 6.5 kW load with grid connection in LabVIEW.

Figure 32: Current at a 6.5 kW load with grid connection in LabVIEW.

Similarly, the total current at the connection of the system to the load and the network was investigated. The simulation plots obtained with Simulink and LabVIEW are depicted in Figures 33 and 34, respectively.

Figure 33: Current to a 6.5 kW load and the network with grid connection in Simulink.

Figure 34: Current to a 6.5 kW load and the network with grid connection in LabVIEW.

Based on these plots, according to Simulink the amplitude of the total current at the connection of the system to the load and network is 19.76 A, while the THD is 9.88%. The corresponding LabVIEW results give a current amplitude of 13.99 A RMS (i.e., 19.78 A peak) and a THD of 11.54%.

Similarly, the current at the connection of the system to the network was examined. The simulation plots obtained with Simulink and LabVIEW are depicted in Figures 35 and 36, respectively.

Figure 35: Current to the network with grid connection and a 6.5 kW load in Simulink.

Figure 36: Current to the network with grid connection and a 6.5 kW load in LabVIEW.

According to Simulink, the amplitude of the total current at the connection to the network is 23.53 A and the THD is 7.89%; the corresponding LabVIEW results give a current amplitude of 16.38 A RMS (i.e., 23.16 A peak) and a THD of 10.4%.

The comparison of these results for the 6.5 kW load case shows that the percentage deviation between Simulink and LabVIEW is 1.0% for the voltage at the load, 1.0% for the current at the load, 0.1% for the total current to the load and network, and 1.57% for the current to the network.

The agreement between the results obtained with Simulink and LabVIEW for the grid-connected integrated PV system can be summarised by the coefficient of determination (R²). R² was therefore computed for both the low-load and the high-load case, taking into account all of the current amplitudes calculated above. In the 1.5 kW case R² was found to be 0.999, as shown in Figure 37, and in the 6.5 kW case R² was likewise found to be 0.999, as illustrated in Figure 38.

Figure 37: Coefficient of determination R² for the 1.5 kW load case.

Figure 38: Coefficient of determination R² for the 6.5 kW load case.
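The paper does not spell out how R² is computed; a natural reading, sketched below, is the squared Pearson correlation between the current amplitudes obtained with the two tools. Using the three peak current amplitudes quoted above for each load case gives values of about 0.9995, consistent with the reported 0.999.

```python
# Sketch of the R² comparison, assuming R² is the squared Pearson correlation
# between Simulink and LabVIEW current amplitudes. The numbers are the peak
# amplitudes (A) quoted in Section 5.3; this is an illustration, not the
# paper's own post-processing.
import numpy as np

# columns: [Simulink, LabVIEW]
case_1p5_kw = np.array([
    [9.428, 9.53],    # current at load
    [26.56, 25.94],   # total current to load and network
    [21.98, 21.24],   # current to network
])
case_6p5_kw = np.array([
    [39.89, 40.29],   # current at load
    [19.76, 19.78],   # total current to load and network
    [23.53, 23.16],   # current to network
])

for label, data in (("1.5 kW", case_1p5_kw), ("6.5 kW", case_6p5_kw)):
    r = np.corrcoef(data[:, 0], data[:, 1])[0, 1]
    # both cases print roughly 0.9995, in line with the R² = 0.999 of Figures 37 and 38
    print(f"{label} load case: R^2 = {r ** 2:.4f}")
```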
## 6. Conclusions

The work presented in this paper focused on computer-aided model development and performance analysis of photovoltaic systems.
For this purpose, suitable models were built for several configurations of photovoltaic schemes. All tasks were carried out in two advanced graphical programming environments, Simulink and LabVIEW, allowing a comparative evaluation of the credibility of the two software platforms.

Specifically, a commercially available PV panel was studied first: a model of the panel was built, its operation was simulated, and the simulated characteristic quantities were compared with those provided by the panel manufacturer. This comparison supported both the validity of the model and the close agreement between the outcomes of the two programming environments.

Next, the model of an integrated PV system with no grid connection was built and its performance was analysed by applying the fast Fourier transform. The simulation tests showed that the two software platforms give almost identical results for the electrical quantities simulated and slightly different results for the total harmonic distortion of the signals.

Finally, the case of an integrated photovoltaic system connected to the low-voltage grid was examined, again by building the model and then analysing the overall system performance with a relatively low load and, alternatively, a higher load connected to the network. The FFT-based analysis again produced results with a small deviation for the electrical quantities simulated and a larger one for the total harmonic distortion, indicating a different noise influence in the two environments.

--- *Source: 101056-2014-01-02.xml*
101056-2014-01-02_101056-2014-01-02.md
46,137
Computer-Aided Modelling and Analysis of PV Systems: A Comparative Study
Charalambos Koukouvaos; Dionisis Kandris; Maria Samarakou
The Scientific World Journal (2014)
Medical & Health Sciences
Hindawi Publishing Corporation
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2014/101056
101056-2014-01-02.xml
Modeling of an Integrated PV System with Connection to Low-Voltage Grid The third modelling task performed refers to the model development and performance analysis of a typical integrated PV system, having the structure shown in Figure7, which is connected to low-voltage grid by using either a low load (1.5 KW) or high load (6.5 KW).The filter incorporated in the overall system structure is an LC circuit aiming to cut off the range of the output current which produces the DC-AC inverter harmonic frequencies. Similarly, an RL circuit was used in the line which connects the filter with the load. Figure10 illustrates the simulation model developed in Simulink while Figure 11 depicts the same model built in LabVIEW.Figure 10 Grid-connected integrated PV system model in Simulink.Figure 11 Grid-connected integrated PV system model in LabVIEW. ## 5. Simulation Results and Discussion In the following subsections of Section5 the outcomes of the simulation tests carried out are both described and commented on. ### 5.1. Performance Evaluation of the PV Panel Modeled The basic assumption in all simulations tests carried out for the performance analysis of the solar panel modeled in Section4.1 was that there was constant solar irradiance S equal to 1000 W/m2, while operation temperature T was sequentially set to 0°C, 25°C, 50°C, and 75°C. For each one of these temperature values the corresponding I-V and P-V curves were drawn and the values of Imax, Vmax and Pmax were calculated both in Simulink and LabVIEW. Indicatively, Figures 12 and 13 depict these curves for T = 25°C in Simulink and LabVIEW.I - V and P-V curves for SX-50 Solarex PV panel when T = 25°C and S = 1000 W/m2 in Simulink. (a) (b)Figure 13 I - V and P-V curves for SX-50 Solarex PV panel when T = 25°C and S = 1000 W/m2 in LabVIEW.The values ofImax, Vmax, and Pmax which were found based on the simulation results in Simulink and LabVIEW for T = 0°C are presented in Table 2. The corresponding results for T = 50°C are presented in Table 3 and for T = 75°C in Table 4.Table 2 Simulation results forImax, Vmax, and Pmax (T=0°C). I max (A) V max (V) P max (W) Simulink 2.995 18.226 54.607 LabVIEW 2.987 18.811 56.198Table 3 Simulation results forImax, Vmax, and Pmax (T=50°C). I max (A) V max (V) P max (W) Simulink 3.002 14.916 44.782 LabVIEW 3.041 15.152 46.108Table 4 Simulation results forImax, Vmax, and Pmax (T=75°C). I max (A) V max (V) P max (W) Simulink 2.998 13.024 38.925 LabVIEW 3.029 13.834 41.91In a similar way, in Table5, the values of Imax, Vmax, and Pmax calculated through the simulation results by using Simulink and LabVIEW for T = 25°C are presented against the corresponding values provided by the manufacturer of the PV panel, that is, Solarex [32].Table 5 Simulation results forImax, Vmax, and Pmax (T=25°C). I max (A) V max (V) P max (W) Simulink 2.998 16.86 50.55 LabVIEW 2.977 16.909 50.344 Solarex 2.97 16.8 50The percentage deviation of the simulation data ofImax, Vmax, and Pmax from the corresponding dfault values provided by Solarex is shown in Table 6.Table 6 Percentage deviation between default values and simulation results forImax, Vmax, and Pmax (T=25°C). I max (%) V max (%) P max (%) Simulink +0.951 +0.357 +1.1 LabVIEW +0.247 +0.648 +0.688The evaluation of the correlation existing between the simulation results attained alternatively via Simulink and LabVIEW forImax, Vmax, and Pmax is obtained through the calculation of the coefficient of determination (R2). 
As shown in Figures 14, 15, and 16 the values of R2 found correspondingly for Imax, Vmax, and Pmax are equal to 0.989, 0.996, and 0.999.Figure 14 Schematic comparison of the values ofImax in Simulink and LabVIEW for T = 0°C, 25°C, 50°C, and 75°C.Figure 15 Schematic comparison of the values ofVmax in Simulink and LabVIEW for T = 0°C, 25°C, 50°C, and 75°C.Figure 16 Schematic comparison of the values ofPmax in Simulink and LabVIEW for T = 0°C, 25°C, 50°C, and 75°C. ### 5.2. Performance Evaluation of an Integrated PV System with No Grid Connection The performance analysis of the integrated PV described in Section4.2 was carried out by the investigation of specific characteristics of the system, that is, the system output voltage at load and the current at load by performing Fourier transformation. Simulink makes use of the so-called Powegui FFT Analysis tool which enables the implementation of fast Fourier transformation of signals. Similarly, LabVIEW makes use of STARSIM which is an electrical system simulation tool.The results of the simulation of these two features in Simulink are depicted in Figures17 and 18.Figure 17 Voltage at load with no grid connection in Simulink.Figure 18 Current at load with no grid connection in Simulink.By applying fast Fourier transformation, it was found that the total harmonic distortion at load is 2.08%, while the amplitude of the voltage at the fundamental frequency is 313.2 V and the amplitude of the current at the fundamental frequency is 11.84 A.The corresponding simulation results in LabVIEW proved that the total harmonic distortion at load is 3.78%, while the amplitude of the voltage at the fundamental frequency is 219.53 V RMS, that is, 310.46 V, and the amplitude of the current at the fundamental frequency is 8.3 A RMS, that is, 11.74 A, as shown correspondingly in Figures19 and 20.Voltage at load with no grid connection in LabVIEW. (a) (b)Current at load with no grid connection in LabVIEW. (a) (b)The comparison of these data shows that the simulation through Simulink and LabVIEW leads to results which have a percentage deviation less than 0.9% for both voltage at load and current at load. ### 5.3. Performance Evaluation of an Integrated PV System with Connection to Low-Voltage Grid By using the same software tools as in the last subsection, the performance analysis of the, described in Section4.3, integrated PV connected to low-voltage grid was carried out.First, the low load (1.5 KW) case was examined, starting with the output voltage at load and current at load. The subsequent simulation results via Simulink are shown in Figures21 and 22, respectively.Figure 21 Voltage at load 1.5 KW with grid connection in Simulink.Figure 22 Current at load 1.5 KW with grid connection in Simulink.Based on these plots it was found that the total harmonic distortion at load is 1.67%, while the amplitude of the voltage at load at the fundamental frequency is 331.9 V and the amplitude of the current at the fundamental frequency is 9.428 A.The corresponding simulation results in LabVIEW proved that the amplitude of the voltage at the fundamental frequency is 236.71 V RMS, that is, 334.76 V, and the amplitude of the current at the fundamental frequency is 6.74 A RMS, that is, 9.53 A, while the total harmonic distortion at load is 11.25% as shown correspondingly in Figures23 and 24.Voltage at load 1.5 KW with grid connection in LabVIEW. (a) (b)Current at load 1.5 KW with grid connection in LabVIEW. 
(a) (b)Similarly, the total current at the connection of the system to the load and network was investigated. The resultant simulation plots drawn through the alternative utilization of Simulink and LabVIEW are depicted in Figures25 and 26, respectively.Figure 25 Current to load 1.5 KW and network with grid connection in Simulink.Current to load 1.5 KW and network with grid connection in LabVIEW. (a) (b)Based on these plots it was found that according to Simulink the amplitude of the total current at the connection of the system to the load and network is equal to 26.56 A while the total harmonic distortion is equal to 8.16%. The corresponding simulation results in LabVIEW proved that the current amplitude is equivalent to 18.34 A RMS, that is, 25.94 A, while THD is 6.65%.Similarly, the current at the connection of the system to the network was examined. The resultant simulation plots via Simulink and LabVIEW are depicted in Figures27 and 28, respectively. Based on these plots it was found that according to Simulink the amplitude of the total current at the connection to the network is equal to 21.98 A while the total harmonic distortion is equal to 9.76%. The corresponding simulation results in LabVIEW proved that the current amplitude is equivalent to 15.02 A RMS, that is, 21.24 A, while THD is 6.84%.Figure 27 Current to network with grid connection and load 1.5 KW in Simulink.Current to network with grid connection and load 1.5 KW in LabVIEW. (a) (b)The comparative examination of the aforementioned simulation results regarding the 1.5 KW load case shows that the percentage deviation between Simulink and LabVIEW is equal to 0.85% for the voltage at load, 1.07% for the current at load, 2.33% for the total current to load and network, and 3.44% for the current to network.Finally, the high load (6.5 KW) case was examined, starting with the output voltage at load and current at load. The subsequent simulation results via Simulink are shown in Figures29 and 30, respectively.Figure 29 Voltage at load 6.5 KW with grid connection in Simulink.Figure 30 Current at load 6.5 KW with grid connection in Simulink.Based on these plots it was found that the amplitude of the voltage at load at the fundamental frequency is 319.1 V, and the amplitude of the current at the fundamental frequency is 39.89 A, while the total harmonic distortion at load is 0.82%. The equivalent results in LabVIEW proved that the amplitude of the voltage at the fundamental frequency is 227.95 V RMS, that is, 322.37 V, and the amplitude of the current at the fundamental frequency is 28.49 A RMS, that is, 40.29 A while the total harmonic distortion at load is 7.83%, as shown correspondingly in Figures31 and 32.Voltage at load 6.5 KW with grid connection in LabVIEW. (a) (b)Current at load 6.5 KW with grid connection in LabVIEW. (a) (b)Similarly, the total current at the connection of the system to the load and network was investigated. The resultant simulation plots via Simulink and LabVIEW are depicted in Figures33 and 34, respectively.Figure 33 Current to load 6.5 KW and network with grid connection in Simulink.Current to load 6.5 KW and network with grid connection in LabVIEW. (a) (b)Based on these plots it was found that according to Simulink the amplitude of the total current at the connection of the system to the load and network is equal to 19.76 A while the total harmonic distortion is equal to 9.88%. 
The corresponding simulation results in LabVIEW proved that the current amplitude is equivalent to 13.99 A RMS, that is, 19.78 A, while THD is 11.54%.Similarly, the current at the connection of the system to the network was examined. The resultant simulation plots via Simulink and LabVIEW are depicted in Figures35 and 36, respectively.Figure 35 Current to network with grid connection and load 6.5 KW in Simulink.Current to network with grid connection and load 6.5 KW in LabVIEW. (a) (b)Based on these plots it was found that according to Simulink the amplitude of the total current at the connection to the network is equal to 23.53 A while the total harmonic distortion is equal to 7.89%. The corresponding simulation results in LabVIEW proved that the current amplitude is equivalent to 16.38 A, RMS that is, 23.16 A, while THD is 10.4%.The comparative examination of the aforementioned simulation results regarding the 6.5 KW load case shows that the percentage deviation between Simulink and LabVIEW is equal to 1.0% for the voltage at load, 1.0% for the current at load, 0.1% for the total current to load and network, and 1.57% for the current to network.The correlation between the results attained via Simulink and LabVIEW for the performance simulation of the grid-connected integrated PV system can be synoptically evaluated in terms of the coefficient of determination (R2). For this reason R2 was computed for both the low load and high load cases by taking into consideration the values of all current amplitudes which were calculated.Specifically, in the 1.5 KW caseR2 was found to be equal to 0.999 as shown in Figure 37. Similarly, in the 6.5 KW case R2 was found to be also equal to 0.999 as illustrated in Figure 38.Figure 37 Coefficient of determinationR2 for the 1.5 KW load case.Figure 38 Coefficient of determinationR2 for the 6.5 KW load case. ## 5.1. Performance Evaluation of the PV Panel Modeled The basic assumption in all simulations tests carried out for the performance analysis of the solar panel modeled in Section4.1 was that there was constant solar irradiance S equal to 1000 W/m2, while operation temperature T was sequentially set to 0°C, 25°C, 50°C, and 75°C. For each one of these temperature values the corresponding I-V and P-V curves were drawn and the values of Imax, Vmax and Pmax were calculated both in Simulink and LabVIEW. Indicatively, Figures 12 and 13 depict these curves for T = 25°C in Simulink and LabVIEW.I - V and P-V curves for SX-50 Solarex PV panel when T = 25°C and S = 1000 W/m2 in Simulink. (a) (b)Figure 13 I - V and P-V curves for SX-50 Solarex PV panel when T = 25°C and S = 1000 W/m2 in LabVIEW.The values ofImax, Vmax, and Pmax which were found based on the simulation results in Simulink and LabVIEW for T = 0°C are presented in Table 2. The corresponding results for T = 50°C are presented in Table 3 and for T = 75°C in Table 4.Table 2 Simulation results forImax, Vmax, and Pmax (T=0°C). I max (A) V max (V) P max (W) Simulink 2.995 18.226 54.607 LabVIEW 2.987 18.811 56.198Table 3 Simulation results forImax, Vmax, and Pmax (T=50°C). I max (A) V max (V) P max (W) Simulink 3.002 14.916 44.782 LabVIEW 3.041 15.152 46.108Table 4 Simulation results forImax, Vmax, and Pmax (T=75°C). 
I max (A) V max (V) P max (W) Simulink 2.998 13.024 38.925 LabVIEW 3.029 13.834 41.91In a similar way, in Table5, the values of Imax, Vmax, and Pmax calculated through the simulation results by using Simulink and LabVIEW for T = 25°C are presented against the corresponding values provided by the manufacturer of the PV panel, that is, Solarex [32].Table 5 Simulation results forImax, Vmax, and Pmax (T=25°C). I max (A) V max (V) P max (W) Simulink 2.998 16.86 50.55 LabVIEW 2.977 16.909 50.344 Solarex 2.97 16.8 50The percentage deviation of the simulation data ofImax, Vmax, and Pmax from the corresponding dfault values provided by Solarex is shown in Table 6.Table 6 Percentage deviation between default values and simulation results forImax, Vmax, and Pmax (T=25°C). I max (%) V max (%) P max (%) Simulink +0.951 +0.357 +1.1 LabVIEW +0.247 +0.648 +0.688The evaluation of the correlation existing between the simulation results attained alternatively via Simulink and LabVIEW forImax, Vmax, and Pmax is obtained through the calculation of the coefficient of determination (R2). As shown in Figures 14, 15, and 16 the values of R2 found correspondingly for Imax, Vmax, and Pmax are equal to 0.989, 0.996, and 0.999.Figure 14 Schematic comparison of the values ofImax in Simulink and LabVIEW for T = 0°C, 25°C, 50°C, and 75°C.Figure 15 Schematic comparison of the values ofVmax in Simulink and LabVIEW for T = 0°C, 25°C, 50°C, and 75°C.Figure 16 Schematic comparison of the values ofPmax in Simulink and LabVIEW for T = 0°C, 25°C, 50°C, and 75°C. ## 5.2. Performance Evaluation of an Integrated PV System with No Grid Connection The performance analysis of the integrated PV described in Section4.2 was carried out by the investigation of specific characteristics of the system, that is, the system output voltage at load and the current at load by performing Fourier transformation. Simulink makes use of the so-called Powegui FFT Analysis tool which enables the implementation of fast Fourier transformation of signals. Similarly, LabVIEW makes use of STARSIM which is an electrical system simulation tool.The results of the simulation of these two features in Simulink are depicted in Figures17 and 18.Figure 17 Voltage at load with no grid connection in Simulink.Figure 18 Current at load with no grid connection in Simulink.By applying fast Fourier transformation, it was found that the total harmonic distortion at load is 2.08%, while the amplitude of the voltage at the fundamental frequency is 313.2 V and the amplitude of the current at the fundamental frequency is 11.84 A.The corresponding simulation results in LabVIEW proved that the total harmonic distortion at load is 3.78%, while the amplitude of the voltage at the fundamental frequency is 219.53 V RMS, that is, 310.46 V, and the amplitude of the current at the fundamental frequency is 8.3 A RMS, that is, 11.74 A, as shown correspondingly in Figures19 and 20.Voltage at load with no grid connection in LabVIEW. (a) (b)Current at load with no grid connection in LabVIEW. (a) (b)The comparison of these data shows that the simulation through Simulink and LabVIEW leads to results which have a percentage deviation less than 0.9% for both voltage at load and current at load. ## 5.3. 
Performance Evaluation of an Integrated PV System with Connection to Low-Voltage Grid By using the same software tools as in the last subsection, the performance analysis of the, described in Section4.3, integrated PV connected to low-voltage grid was carried out.First, the low load (1.5 KW) case was examined, starting with the output voltage at load and current at load. The subsequent simulation results via Simulink are shown in Figures21 and 22, respectively.Figure 21 Voltage at load 1.5 KW with grid connection in Simulink.Figure 22 Current at load 1.5 KW with grid connection in Simulink.Based on these plots it was found that the total harmonic distortion at load is 1.67%, while the amplitude of the voltage at load at the fundamental frequency is 331.9 V and the amplitude of the current at the fundamental frequency is 9.428 A.The corresponding simulation results in LabVIEW proved that the amplitude of the voltage at the fundamental frequency is 236.71 V RMS, that is, 334.76 V, and the amplitude of the current at the fundamental frequency is 6.74 A RMS, that is, 9.53 A, while the total harmonic distortion at load is 11.25% as shown correspondingly in Figures23 and 24.Voltage at load 1.5 KW with grid connection in LabVIEW. (a) (b)Current at load 1.5 KW with grid connection in LabVIEW. (a) (b)Similarly, the total current at the connection of the system to the load and network was investigated. The resultant simulation plots drawn through the alternative utilization of Simulink and LabVIEW are depicted in Figures25 and 26, respectively.Figure 25 Current to load 1.5 KW and network with grid connection in Simulink.Current to load 1.5 KW and network with grid connection in LabVIEW. (a) (b)Based on these plots it was found that according to Simulink the amplitude of the total current at the connection of the system to the load and network is equal to 26.56 A while the total harmonic distortion is equal to 8.16%. The corresponding simulation results in LabVIEW proved that the current amplitude is equivalent to 18.34 A RMS, that is, 25.94 A, while THD is 6.65%.Similarly, the current at the connection of the system to the network was examined. The resultant simulation plots via Simulink and LabVIEW are depicted in Figures27 and 28, respectively. Based on these plots it was found that according to Simulink the amplitude of the total current at the connection to the network is equal to 21.98 A while the total harmonic distortion is equal to 9.76%. The corresponding simulation results in LabVIEW proved that the current amplitude is equivalent to 15.02 A RMS, that is, 21.24 A, while THD is 6.84%.Figure 27 Current to network with grid connection and load 1.5 KW in Simulink.Current to network with grid connection and load 1.5 KW in LabVIEW. (a) (b)The comparative examination of the aforementioned simulation results regarding the 1.5 KW load case shows that the percentage deviation between Simulink and LabVIEW is equal to 0.85% for the voltage at load, 1.07% for the current at load, 2.33% for the total current to load and network, and 3.44% for the current to network.Finally, the high load (6.5 KW) case was examined, starting with the output voltage at load and current at load. 
The subsequent simulation results via Simulink are shown in Figures29 and 30, respectively.Figure 29 Voltage at load 6.5 KW with grid connection in Simulink.Figure 30 Current at load 6.5 KW with grid connection in Simulink.Based on these plots it was found that the amplitude of the voltage at load at the fundamental frequency is 319.1 V, and the amplitude of the current at the fundamental frequency is 39.89 A, while the total harmonic distortion at load is 0.82%. The equivalent results in LabVIEW proved that the amplitude of the voltage at the fundamental frequency is 227.95 V RMS, that is, 322.37 V, and the amplitude of the current at the fundamental frequency is 28.49 A RMS, that is, 40.29 A while the total harmonic distortion at load is 7.83%, as shown correspondingly in Figures31 and 32.Voltage at load 6.5 KW with grid connection in LabVIEW. (a) (b)Current at load 6.5 KW with grid connection in LabVIEW. (a) (b)Similarly, the total current at the connection of the system to the load and network was investigated. The resultant simulation plots via Simulink and LabVIEW are depicted in Figures33 and 34, respectively.Figure 33 Current to load 6.5 KW and network with grid connection in Simulink.Current to load 6.5 KW and network with grid connection in LabVIEW. (a) (b)Based on these plots it was found that according to Simulink the amplitude of the total current at the connection of the system to the load and network is equal to 19.76 A while the total harmonic distortion is equal to 9.88%. The corresponding simulation results in LabVIEW proved that the current amplitude is equivalent to 13.99 A RMS, that is, 19.78 A, while THD is 11.54%.Similarly, the current at the connection of the system to the network was examined. The resultant simulation plots via Simulink and LabVIEW are depicted in Figures35 and 36, respectively.Figure 35 Current to network with grid connection and load 6.5 KW in Simulink.Current to network with grid connection and load 6.5 KW in LabVIEW. (a) (b)Based on these plots it was found that according to Simulink the amplitude of the total current at the connection to the network is equal to 23.53 A while the total harmonic distortion is equal to 7.89%. The corresponding simulation results in LabVIEW proved that the current amplitude is equivalent to 16.38 A, RMS that is, 23.16 A, while THD is 10.4%.The comparative examination of the aforementioned simulation results regarding the 6.5 KW load case shows that the percentage deviation between Simulink and LabVIEW is equal to 1.0% for the voltage at load, 1.0% for the current at load, 0.1% for the total current to load and network, and 1.57% for the current to network.The correlation between the results attained via Simulink and LabVIEW for the performance simulation of the grid-connected integrated PV system can be synoptically evaluated in terms of the coefficient of determination (R2). For this reason R2 was computed for both the low load and high load cases by taking into consideration the values of all current amplitudes which were calculated.Specifically, in the 1.5 KW caseR2 was found to be equal to 0.999 as shown in Figure 37. Similarly, in the 6.5 KW case R2 was found to be also equal to 0.999 as illustrated in Figure 38.Figure 37 Coefficient of determinationR2 for the 1.5 KW load case.Figure 38 Coefficient of determinationR2 for the 6.5 KW load case. ## 6. Conclusions The work presented in this paper focused on computer-aided model development and performance analysis of photovoltaic systems. 
## 6. Conclusions

The work presented in this paper focused on computer-aided model development and performance analysis of photovoltaic systems. For this reason, suitable models were built for various configurations of photovoltaic schemes. All tasks performed were based on the use of two advanced graphical programming environments, namely, Simulink and LabVIEW. In this way, a comparative evaluation of the credibility of these software platforms was carried out.

Specifically, the case of a commercially available PV panel was studied. A model representing this panel was built and its operation was simulated. The simulation results regarding its characteristic features were compared to those provided by the PV panel manufacturer. The results of this comparison were positive in terms of both the modelling performed and the convergence between the outcomes of the two programming environments.

Next, the model of an integrated PV system with no grid connection was built and the performance analysis of this system was carried out by applying the fast Fourier transform. The simulation tests performed showed that both software platforms provide almost identical results for the electric quantities simulated and slightly different results for the signal total harmonic distortion.

Finally, the case of an integrated photovoltaic system connected to the low-voltage grid was examined. Once again, the model development was followed by an overall system performance analysis, examining a relatively low load and, alternatively, a higher load connected to the network. The application of the fast Fourier transform again provided results with a small deviation for the electric quantities simulated and a larger one for the total harmonic distortion of the signals, indicating a different noise influence in the two environments.

--- *Source: 101056-2014-01-02.xml*
2014
# Rhabdomyolysis and Dengue Fever: A Case Report and Literature Review **Authors:** Tanya Sargeant; Tricia Harris; Rohan Wilks; Sydney Barned; Karen Galloway-Blake; Trevor Ferguson **Journal:** Case Reports in Medicine (2013) **Publisher:** Hindawi Publishing Corporation **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2013/101058 --- ## Abstract The medical literature contains only a few reports of rhabdomyolysis occurring in patients with dengue fever. We report the case of a 25-year-old Jamaican man who was admitted to a private hospital four days after the onset of an acute febrile illness with fever, myalgia, and generalized weakness. Dengue fever was confirmed with a positive test for the dengue antigen, nonstructural protein 1. He remained well and was discharged on day 6 of his illness. On day 8, he started to pass red urine and was subsequently admitted to the University Hospital of the West Indies. On admission he was found to have myoglobinuria and an elevated creatine phosphokinase (CPK) of 325,600 U/L, leading to a diagnosis of rhabdomyolysis. Dengue IgM was positive. He was treated with aggressive hydration and had close monitoring of his urine output, creatinine, and CPK levels. His hospital course was uneventful without the development of acute renal failure and he was discharged after 14 days in hospital, with a CPK level of 2463 U/L. This case highlights that severe rhabdomyolysis may occur in patients with dengue fever and that early and aggressive treatment may prevent severe complications such as acute renal failure and death. --- ## Body ## 1. Introduction Rhabdomyolysis is characterized by the rapid breakdown of skeletal muscle with leakage of muscle cell contents into the circulation [1, 2]. These contents include electrolytes, myoglobin and other sarcoplasmic proteins, such as creatine kinase, lactate dehydrogenase, alanine aminotransferase, and aspartate aminotransferase [1]. The resulting myonecrosis presents clinically as limb weakness, myalgia and commonly, gross pigmenturia without haematuria [1, 2]. Acute renal failure is a common complication of rhabdomyolysis and is due to the toxic effects of filtering excessive quantities of myoglobin in the setting of hypovolaemia [3]. The causes of rhabdomyolysis are protean, with acute viral infections such as influenza, HIV, coxsackievirus, and cytomegalovirus recognized as common causes [1, 4]. Although the dengue virus shares several features with other viruses known to cause myopathies, dengue fever is not listed as a cause of rhabdomyolysis in major textbooks and review articles [4]. Over the last decade, a number of case reports of patients with dengue and rhabdomyolysis have been published in the literature [4–8]. We now report another case of rhabdomyolysis associated with dengue fever and present a literature review with a discussion of the clinical implications. ## 2. Case Presentation A 25-year-old man with no known chronic illnesses presented to the University Hospital of the West Indies (UHWI) in Kingston, Jamaica, with a history of back pain and fever beginning eight days prior to his presentation. The fever lasted for two days and settled with the use of paracetamol. He then started to experience severe arthralgia, myalgia, and generalized muscle weakness which lasted for the first few days of his illness. He was seen by his primary care physician three days after the onset of symptoms and was admitted to a private hospital with a diagnosis of dengue fever. 
He reported that his platelet count at that time was 44,000/μL but remained asymptomatic, with no signs of bleeding, and was therefore discharged with a plan for followup as an outpatient. Two days later he noticed dark-coloured, then red urine and therefore returned to the private hospital and was subsequently referred to the UHWI for further management.At the time of presentation, there were no reports of mucosal bleeding, skin haemorrhages, or haematochezia. There was no history of retro-orbital pain, neck stiffness, or photophobia. There were no other urinary symptoms and no history of nausea or vomiting. Of note, he gave no history of exposure to rats and he had not travelled recently. However, he did recall a neighbour having a diagnosis of dengue fever 1-2 weeks prior to the onset of his symptoms.On physical examination, he had normal vital signs with a temperature of 36.3°C. His cardiovascular, respiratory, abdominal, musculoskeletal, and central nervous systems were all normal. Examination of the skin did not reveal any petechiae, purpurae, ecchymoses, or rash. There were no wet purpurae in his mouth and fundoscopy was negative for retinal haemorrhages. A tourniquet test was performed and was negative. Urinalysis revealed a pH of 7.5 and tested positive for blood with no other abnormalities.Investigations from the private hospital revealed an initial leukopaenia, neutropaenia, and thrombocytopaenia, verified by peripheral blood film. By the time of his presentation to the UHWI, his white blood cell count had normalized but the platelet count remained low at 49,000/μL. However, manual review of the peripheral blood film revealed a higher count of 100,000/μL. His initial urea, creatinine, electrolytes, and coagulation studies were normal and remained normal for the duration of his admission. Creatinine level was 63 μmol/L on admission and 82 μmol/L at discharge. Admission aspartate transaminase was 1841 U/L (normal 7–32 U/L), γ-glutamyl transferase was 159 U/L (normal 10–70 U/L), and lactate dehydrogenase was 5740 U/L (normal 105–200 U/L). These values declined steadily during the admission and eventually normalized by the time of discharge. His initial creatine phosphokinase (CPK) was 325,600 U/L (normal for a male 40–240 U/L). With early and aggressive treatment, this value decreased by about 30–50% every day until it was 2,463 U/L at the time of discharge (see Figure 1). Urine microscopy was negative for red blood cells, casts, or other abnormal urinary sediments, but urinary myoglobin was positive. Tests for HIV, hepatitis B and C, and leptospirosis were all negative. Laboratory tests for the dengue virus on day four of his illness revealed a positive test for the dengue virus antigen, non-structural protein 1 (NS1), and negative immunoglobulin M (IgM). Dengue IgM became positive on day eight and remained positive on day 14.Figure 1 Trends in creatine phosphokinase (CPK) levels for case patient while in hospital.In-hospital management included aggressive hydration with intravenous and oral fluids, strict input/output charting, daily urinalyses to monitor pH, and close monitoring of the urea, creatinine, electrolytes, CPK, and platelet count. Target urine output was 200 mL/hour as recommended by Bosch and colleagues [1]. Urinary pH remained above 6.0 throughout and therefore alkalinization of the urine with sodium bicarbonate was not instituted. 
The patient’s hospital course was uncomplicated, he remained clinically well and was discharged with instructions to return initially for review on the ward and then for followup in the out-patient clinic. ## 3. Discussion This patient fulfils the criteria for a confirmed case of dengue fever as defined by the World Health Organization (WHO) [9] having presented with an acute febrile illness associated with headache, myalgia, arthralgia and leukopaenia and occurring at the same location and time as another confirmed case of dengue fever. It was verified by a positive IgM antibody test on the late acute and convalescent serum specimens and there was also demonstration of dengue virus antigen (NS1) in his serum. His was also a case of rhabdomyolysis with the typical triad of generalized weakness, myalgia, and dark urine/myoglobinuria associated with a CPK that was more than five times the upper limit of normal [2]. Although there are case reports of rhabdomyolysis associated with dengue fever, major textbooks do not mention the dengue virus as a possible cause of rhabdomyolysis [4, 7]. We found only one review article on atypical manifestations of dengue fever which includes rhabdomyolysis as a possible complication [10].A review of the published literature using the PubMed database identified five case reports of rhabdomyolysis associated with dengue fever [4–8]. The findings of the six patients in these case reports and the present case are summarized in Table 1. In each case, there was confirmation of both dengue fever and rhabdomyolysis. Of note six of the seven cases were males, with ages ranging from 25 to 66 years; the lone female was 28 years old. CPK levels ranged from 5,000 to 325,600 U/L. Two of the seven patients died (29%) while four of them developed acute renal failure (57%) and two patients developed respiratory failure (29%). These clinical findings suggest that rhabdomyolysis in patients with dengue carries high morbidity and mortality rates when compared to the 8% overall mortality and 13–50% incidence of acute renal failure for all patients with rhabdomyolysis [1, 2]. In the cases presented, mortality was associated with the presence of acute renal failure and/or multiorgan failure.Table 1 Summary of clinical findings from case reports of dengue fever and rhabdomyolysis. 
| Authors and publication year | Country, age, sex | Clinical findings | CPK level (admission/peak) (U/L) | Results of tests for dengue | Renal function/acute renal failure | Outcome |
| --- | --- | --- | --- | --- | --- | --- |
| Gunasekera et al. 2000 [6] | Sri Lanka; 28 years, female | Fever, myalgia, proximal muscle weakness, dark urine, dyspnea; acute renal failure, hypotension | >5000 | Dengue IgM and IgG positive by immunochromatography test; dengue antibody titre for D2 was 2560 (normal <20) | Developed acute renal failure—serum creatinine peaked at 780 μmol/L; abdominal ultrasound showed normal kidneys | Complete and uneventful recovery after supportive treatment including mannitol and bicarbonate |
| Davis and Bourke 2004 [4] | Darwin, Australia; 33 years, male | Fever, retro-orbital pain, malaise, macular rash, dark red-brown urine | 51,555 | Dengue virus IgM negative on day 1 but positive on day 10; RT-PCR for serum dengue virus RNA on day 1 of hospitalization positive for D2 | Did not develop renal failure—serum creatinine levels normal (actual values not reported) | Uncomplicated course with full recovery |
| Davis and Bourke 2004 [4] | Darwin, Australia; 33 years, male | Admitted to hospital with suspected dengue haemorrhagic fever and acute renal failure | 17,548 | Blood RT-PCR positive for D2 | Developed acute renal failure—actual creatinine values not reported | Developed multiple organ dysfunction; died on day 2 of hospitalization |
| Lim and Goh 2005 [8] | Singapore; 27 years, male | Four-day history of fever, nausea, and myalgia; discharged and readmitted after presenting with persistent myalgia and dark red urine | 58,961 | Dengue virus IgM and IgG titres negative on day 1 of hospitalization; dengue IgM became positive on day 5 | Did not develop acute renal failure—serum creatinine levels reported as normal (actual values not given) | Uneventful hospital stay with discharge on day 7 of second admission after treatment with intravenous hydration |
| Karakus et al. 2007 [7] | The Netherlands; 66 years, male | Fatigue; muscle, bone, and joint pain; tea-coloured urine; confirmed myoglobinuria | 156,900 | Dengue IgM and IgG positive on day 9 | Developed acute renal failure—serum creatinine 138 μmol/L on day 1 and 315 μmol/L on day 5 | Developed respiratory failure, septic shock, and multiorgan failure; admitted to intensive care unit; died on day 47 |
| Acharya et al. 2010 [5] | India; 44 years, male | Four-day history of fever and myalgia; febrile on examination with conjunctival congestion and diffuse muscle tenderness; urine positive for myoglobin | 29,000 | Dengue IgM ELISA strongly positive | Developed acute renal failure—serum creatinine 2.6 mg/dL | Developed quadriparesis and respiratory failure; required ventilator support |
| Present case (Sargeant et al. 2012) | Jamaica; 25 years, male | Four-day history of fever, myalgia, and generalized weakness; started to pass red urine on day 8; myoglobinuria confirmed on urine testing | 325,600 | Dengue IgM negative on day 4 but positive on day 8; dengue antigen (NS1) positive on day 4 | Did not develop renal failure—serum creatinine 63 μmol/L on admission and 82 μmol/L at discharge | Full recovery with supportive care |

CPK: creatine phosphokinase; IgM: immunoglobulin M; IgG: immunoglobulin G; D2: dengue virus type 2; NS1: nonstructural protein 1.

The mechanisms involved in the development of rhabdomyolysis in patients with dengue fever are unknown [4]. However, rhabdomyolysis is reported to occur in a number of viral infections, such as influenza A and B, coxsackievirus, Epstein-Barr virus, and HIV [1].
Davis and Bourke suggest that since the dengue virus shares several features with these other viruses known to cause severe myositis, it is not surprising that dengue could also cause rhabdomyolysis [4]. They further suggest that the most likely cause may be due to myotoxic cytokines, particularly tumour necrosis factor (TNF) and interferon alpha (IFN-α) released in response to a viral infection [4]. Muscle biopsy specimens from patients with acute viral myositis have revealed a range of findings, from a mild lymphocytic infiltrate to foci of severe myonecrosis but direct invasion of muscle by the virus has not been consistently demonstrated [4].Acute kidney injury associated with myoglobinuria is the most serious complication of both traumatic and nontraumatic rhabdomyolysis and may be life-threatening. As seen in the cases reviewed, acute renal failure is a frequent complication in patients with dengue and rhabdomyolysis. The exact mechanisms by which rhabdomyolysis impairs renal function are unclear, but experimental evidence suggests that intrarenal vasoconstriction, direct and ischaemic tubular injury, and tubular obstruction all play a role [1]. Myoglobin becomes concentrated along the renal tubules, where it precipitates when it interacts with the Tamm-Horsfall protein, particularly in the presence of acidic urine. Myoglobin seems to have no marked nephrotoxic effect in the tubules unless the urine is acidic, hence the common practice of urinary alkalinization as part of supportive treatment measures.There is no specific threshold value of serum CPK above which the risk of acute kidney injury is markedly increased, but the risk is usually low when CPK levels at admission are less than 15,000–20,000 U/L. Acute kidney injury has been reported with CPK values as low as 5000 U/L, but this usually occurs with coexisting conditions such as sepsis, dehydration, and acidosis [1]. In the published case reports of dengue and rhabdomyolysis, CPK levels ranged from a low of 5,000 U/L to a high of 156,000 U/L. Our patient had an even higher CPK level of 325,600 U/L but did not develop acute renal failure. ## 4. Conclusion Rhabdomyolysis should be recognized as a possible complication of dengue fever and should be reflected as such in medical textbooks especially in light of the possibility of acute renal failure and high mortality. Although causation has not been unequivocally established, supporting evidence for a causal association is the fact that rhabdomyolysis occurs commonly with influenza and other viruses with which the dengue virus shares many similarities.Clinicians should note that the presentation of rhabdomyolysis can be subtle but its complications, in particular, acute kidney injury and multiorgan failure, can be devastating. These adverse effects are preventable with early recognition and institution of the appropriate management. We agree with Davis and Bourke that all patients with dengue fever should have a urinalysis done and that those who test positive for blood should have urine microscopy and a CPK test in order to determine if the patient may have rhabdomyolysis. This approach could be potentially life-saving. --- *Source: 101058-2013-01-08.xml*
2013
# Analysis of Road Traffic Network Cascade Failures with Coupled Map Lattice Method

**Authors:** Yanan Zhang; Yingrong Lu; Guangquan Lu; Peng Chen; Chuan Ding
**Journal:** Mathematical Problems in Engineering (2015)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2015/101059

---

## Abstract

In recent years, there has been a growing literature concerning the cascading failure characteristics of networks. The objective of this paper is to investigate cascading failures on road traffic networks, considering the aeolotropism of road traffic network topology and the dissipation of road congestion in traffic flow. An improved coupled map lattice (CML) model is proposed. Furthermore, in order to match congestion dissipation, a recovery mechanism is put forward in this paper. With a real urban road traffic network in Beijing, cascading failures are tested using different attack strategies, coupling strengths, external perturbations, and numbers of attacked road segments. The impacts of these different aspects on the road traffic network are evaluated based on the simulation results. The findings confirm the important roles that these characteristics play in the propagation and dissipation of cascading failures on road traffic networks. We hope these findings are helpful in identifying optimal road network topologies and avoiding cascading failures on road networks.

---

## Body

## 1. Introduction

In many large-scale networks, the failure of a node or edge can make other nodes fail and lead to a chain reaction because of the coupling relationships among nodes. This phenomenon is known as network cascading failure. Cascading failure problems may take place in many natural or artificial networks, such as the Internet [1, 2], power grids [3–6], and traffic networks [7–11]. Cascading failures can cause large-scale destruction across an entire network. Many cities have suffered serious traffic paralysis that brought great inconvenience to people's normal life (e.g., Beijing urban traffic was shut down completely due to the rainstorm on July 21, 2012). Therefore, it is essential to understand cascading failure on traffic networks in order to prevent or reduce the influence of large-scale failures.

Many scholars have studied the impacts of network topology [7, 12], network connectivity [13], different attack strategies [4, 14, 15], and network robustness [16–18] on cascading failure. To describe cascading failure, the coupled map lattice (CML) model has been widely applied in the previous literature. For example, using the basic CML method, Xu and Wang [12] studied cascading failures in different network topologies. Based on a proposed edge-based CML method, Di et al. [19] investigated cascading failure on random networks and scale-free networks. Although most studies have paid attention to cascading failure on artificial networks, research that applies the CML model to cascading failure on natural road traffic networks is limited.

Because the properties of natural road traffic networks differ from those of artificial networks, these particular properties need to be considered when the CML model is used. One of the particular properties is aeolotropism: because cities contain both one-way and two-way streets, the road traffic network should be described as a directed graph. Another particular property is restorability, which means that road congestion can dissipate over a certain range.
The road traffic network consists of intersections and road segments. Vehicles travel on the network and form a distributed traffic flow. If traffic congestion occurs in one or several road segments, the congestion can gradually dissipate after a period of time due to the redistribution of traffic flow. These two particular properties may lead to unique cascading failure rules in road traffic networks.

Considering the above particular properties, the original CML model is improved here for analyzing cascading failures of road traffic networks. The improved CML model is expected to express the aeolotropism of the road traffic network topology and is proposed in the following section. In addition, to match the pattern of road congestion dissipation, a recovery mechanism is put forward in the next part. For the purpose of examining cascading failures comprehensively, an empirical network in Beijing is used to investigate the impacts of different attack strategies, coupling strengths, external perturbations, and numbers of attacked road segments on the road traffic network.

The remainder of this paper is organized as follows. The next section introduces the improved CML model and the recovery mechanism. Then, simulations based on the empirical network are conducted. Finally, the highlights of this paper are summarized.

## 2. Road Traffic Network Cascading Failures Model Based on CML

The original CML model is formulated as follows [12]:

$$x_i(t+1) = \left| (1-\varepsilon) f(x_i(t)) + \frac{\varepsilon \sum_{j=1, j \neq i}^{N} a_{i,j} f(x_j(t))}{k(i)} \right|, \tag{1}$$

where $x_i(t)$ is the state of the $i$th node at the $t$th time step, $\varepsilon \in (0,1)$ is defined as the coupling strength, $N$ is the total number of nodes, and $k(i)$ represents the degree of the $i$th node. The adjacency matrix $A = (a_{ij})_{N \times N}$ is used to represent the topology of the network: if there is an edge between node $i$ and node $j$, then $a_{ij} = a_{ji} = 1$; otherwise, $a_{ij} = a_{ji} = 0$. The chaotic logistic map $f(x) = \mu x (1-x)$ with $\mu \in (0,4]$ is used to describe the dynamic behavior of the nodes. The closer $\mu$ is to 4, the more evenly the values of $f(x)$ are distributed over the interval from 0 to 1; therefore, it is always set to $\mu = 4$. The absolute value notation in (1) ensures a nonnegative saturation state for each node.

To describe the aeolotropism of the road traffic network, an improved CML model is proposed to investigate cascading failures on the road traffic network. In the original CML model, $x_i(t)$ means the state of the $i$th node at the $t$th time step, while for the road traffic network it is expressed by the road saturation:

$$x_i(t+1) = \left| (1-\varepsilon_1-\varepsilon_2) f(x_i(t)) + \frac{\varepsilon_2 \sum_{j=1, j \neq i}^{N_1} b_{ij} f(x_j(t))}{k^-(i)} + \frac{\varepsilon_1 \sum_{j=1, j \neq i}^{N_2} b_{ji} f(x_j(t))}{k^+(i)} \right|, \quad i, j = 1, 2, \ldots, n. \tag{2}$$

In (2), $x_i(t)$ means the road saturation of the $i$th road segment at the $t$th time step. $b_{ij}$ is an entry of the adjacency matrix $B = (b_{ij})_{N \times N}$ of the road traffic network: if there is an edge from node $i$ to node $j$, then $b_{ij} = 1$; otherwise, $b_{ij} = 0$. $\varepsilon_1 \in (0,1)$ and $\varepsilon_2 \in (0,1)$ denote the coupling strengths of the start point and the endpoint, respectively. $N_1$ is the sum of all nodes' out-degrees, and $N_2$ is that of the in-degrees. $k^+(i)$ and $k^-(i)$, respectively, represent the in-degree and out-degree of the $i$th node, which correspond to the numbers of downstream and upstream segments in the road traffic network.
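As a concrete illustration of the update rule in (2), the short Python sketch below applies one synchronous step of the directed CML to a tiny made-up network. It is a minimal sketch of the equation as reconstructed above, not the authors' code; the adjacency matrix, saturations, and coupling strengths are arbitrary example values.

```python
import numpy as np

def logistic(x, mu=4.0):
    """Chaotic logistic map f(x) = mu * x * (1 - x) used for the node dynamics."""
    return mu * x * (1.0 - x)

def improved_cml_step(x, B, eps1, eps2):
    """One synchronous update of the directed CML in Eq. (2).

    x    : array of road saturations x_i(t)
    B    : directed adjacency matrix, B[i, j] = 1 for an edge i -> j
    eps1 : coupling strength associated with the start points
    eps2 : coupling strength associated with the endpoints
    """
    fx = logistic(x)
    k_out = np.maximum(B.sum(axis=1), 1)   # k^-(i), clipped to avoid division by zero
    k_in = np.maximum(B.sum(axis=0), 1)    # k^+(i)
    term_out = B @ fx / k_out              # sum_j b_ij f(x_j(t)) / k^-(i)
    term_in = B.T @ fx / k_in              # sum_j b_ji f(x_j(t)) / k^+(i)
    return np.abs((1 - eps1 - eps2) * fx + eps2 * term_out + eps1 * term_in)

# Tiny made-up example: four road segments on a directed ring.
B = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)
x = np.array([0.3, 0.5, 0.4, 0.6])         # example saturations in (0, 1)
print(improved_cml_step(x, B, eps1=0.6, eps2=0.6))
```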
Cascading failure on a road traffic network may be triggered by internal and external factors (e.g., traffic congestion or a crash) that lead to the failure of one or more roads. To describe this situation, an external perturbation $R \geq 1$ is added to node $k$ at the $(m+1)$th time step as follows:

$$x_k(m+1) = \left| (1-\varepsilon_1-\varepsilon_2) f(x_k(m)) + \frac{\varepsilon_2 \sum_{j=1, j \neq k}^{N_1} b_{kj} f(x_j(m))}{k^-(k)} + \frac{\varepsilon_1 \sum_{j=1, j \neq k}^{N_2} b_{jk} f(x_j(m))}{k^+(k)} \right| + R > 1, \quad i, j = 1, 2, \ldots, n. \tag{3}$$

If $0 < x_k(t) < 1$ for $t \leq m$, node $k$ is in a normal state; if $x_k(m+1) \geq 1$, node $k$ is defined to have failed at the $(m+1)$th time step. For the situation in which node $k$ fails at the $(m+1)$th time step, $x_k(t) \equiv 0$ for $t > m+1$ is assumed in previous studies [12, 19]. However, for the road traffic network, the failure state cannot persist indefinitely, because traffic congestion gradually dissipates with the redistribution of traffic flow. A recovery mechanism fitted to road traffic characteristics is therefore proposed in this study as follows.

If an external perturbation $R$ is added to node $k$ at the $(m+1)$th time step, $x_k(m+1) > 1$ means that road segment $k$ is in a blocked state at the $(m+1)$th time step. Since upstream vehicles cannot enter, the coupling strength between the upstream segments and road segment $k$ is set to 0 (i.e., $\varepsilon_2 = 0$). The saturation state of the $k$th node from the $(m+2)$th step to the $(m+n+1)$th step can then be represented by (4). If $x_k(m+n+1) < 1$ after $n$ steps, the saturation state of the $k$th node returns to normal and is again represented by (2). The recovery mechanism of the road traffic network under cascading failure is shown in Figure 1.

Figure 1: Recovery mechanism of road traffic network on cascading failure.

$$x_i(t+1) = \left| (1-\varepsilon_1) f(x_i(t)) + \frac{\varepsilon_1 \sum_{j=1, j \neq i}^{N_2} b_{ji} f(x_j(t))}{k^+(i)} \right|, \quad i, j = 1, 2, \ldots, n. \tag{4}$$

The proportion of failed nodes at each time step, $P(t)$, is used to characterize the road cascading failure process, as shown in (5), where $N'(t)$ is the number of failed nodes at time step $t$; $I$ denotes the final value of $P(t)$ and describes the size of the cascading failure in the end:

$$P(t) = \frac{N'(t)}{N}. \tag{5}$$
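The perturbation, failure, and recovery logic of (3)–(5) can be sketched as a small simulation loop. The following Python code is a self-contained illustration under the same assumptions as the sketch above (a made-up random directed network, nodes marked as failed when their saturation reaches 1, and $\varepsilon_2$ switched off for blocked nodes until their saturation drops below 1 again); it is not the authors' implementation.

```python
import numpy as np

MU = 4.0

def logistic(x):
    return MU * x * (1.0 - x)

def step(x, B, eps1, eps2_vec):
    """Synchronous CML update with a per-node endpoint coupling eps2_vec,
    so that eps2 can be set to 0 for blocked segments (recovery mechanism)."""
    fx = logistic(x)
    k_out = np.maximum(B.sum(axis=1), 1)
    k_in = np.maximum(B.sum(axis=0), 1)
    term_out = B @ fx / k_out
    term_in = B.T @ fx / k_in
    return np.abs((1 - eps1 - eps2_vec) * fx + eps2_vec * term_out + eps1 * term_in)

def simulate(B, x0, attacked, R=1.5, eps1=0.6, eps2=0.6, steps=30):
    """Attack one node with perturbation R and record P(t) = N'(t) / N."""
    n = len(x0)
    x = x0.copy()
    eps2_vec = np.full(n, eps2)
    history = []
    for t in range(steps):
        x = step(x, B, eps1, eps2_vec)
        if t == 0:
            x[attacked] += R                    # external perturbation, Eq. (3)
        failed = x >= 1.0
        eps2_vec = np.where(failed, 0.0, eps2)  # blocked segments: eps2 = 0, Eq. (4)
        history.append(failed.mean())           # P(t), Eq. (5)
    return history

rng = np.random.default_rng(0)
n = 20
B = (rng.random((n, n)) < 0.15).astype(float)   # made-up directed network
np.fill_diagonal(B, 0)
x0 = rng.uniform(0.2, 0.8, size=n)
P = simulate(B, x0, attacked=0)
print(["%.2f" % p for p in P])
```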
## 3. Simulation Test and Analysis

To detail the computational experiment of cascading failure on the road traffic network, the real road network of the Liuliqiao area in Beijing (total number of road segments $N = 1004$) is selected as the empirical network in this study, as shown in Figure 2(a). This area of more than 22 km² contains the largest train station in Beijing and is considered a typical region showing the transition between free flow and congestion. For the road network, nodes represent the intersections and edges represent the road segments between two intersections, as shown in Figure 2(b). Using the proposed model, cascading failure on the road traffic network is tested for different attack strategies, coupling strengths, external perturbations, and numbers of attacked road segments.

Figure 2: (a) Map of Liuliqiao area in Beijing. (b) Network topology of Liuliqiao area in Beijing.

### 3.1. Different Attack Strategies

Different attack strategies lead to different cascading failures on the network. In this study, to obtain the influences of different cascading failures, four kinds of attack strategies are tested: the deliberate attack based on betweenness (BA), the deliberate attack based on saturation (SA), the deliberate attack based on the combination of betweenness and saturation (BSA), and the random attack (RA). The parameter $\lambda$ is used to characterize the three deliberate attack strategies (BA, SA, and BSA) as follows:

$$f_i(t) = \lambda x_i(t) + (1-\lambda) b_i(t), \tag{6}$$

where $\lambda$ $(0 \leq \lambda \leq 1)$ is a weight coefficient, $x_i(t)$ is the saturation state, and $b_i(t)$ is the betweenness of the $i$th node at the $m$th time step. $f_i(t)$ is the combination of betweenness and saturation of the $i$th node for a fixed value of $\lambda$. $\lambda = 0$ represents BA, in which the initial failure nodes are deleted in turn according to betweenness. If multiple nodes share the maximum betweenness, the node to be attacked is chosen at random. $\lambda = 1$ refers to SA, which means that nodes are attacked in order of saturation; similarly, a node is randomly selected when several nodes share the maximum saturation. For BSA, each attack selects the node with the maximum combination of betweenness and saturation, with $\lambda = 0.5$. RA means that the attacked node is chosen at random.

For BA, SA, and BSA, an external perturbation $R = 1.5$ is added to the node with the largest value of $f_i$ (corresponding to the different values of $\lambda$). For RA, $R = 1.5$ is added to a randomly chosen node. Figure 3 shows the results of the four attack strategies with $\varepsilon_1 = \varepsilon_2 = 0.6$.

Figure 3: P(t) based on different attack strategies.

Figure 3 shows the occurrence of failure and the process of recovery. This phenomenon is in conformity with actual road traffic flow. Figure 3 also shows that BA triggers cascading failures more easily than SA, and the failure recovery time under BA is longer than under SA. This implies that betweenness has a more destructive impact on cascading failures than saturation. As shown in Figure 3, RA is least likely to trigger cascading failures and its failure recovery time is also the shortest. This is reasonable because a randomly attacked node is usually not a node that has a severe impact on network cascading failure. Interestingly, compared with the other three attacks, BSA has the most serious impact on the network, including the largest number of failed nodes, the fastest propagation rate of failure, and the longest recovery time. This implies that the road segments with the largest combined value of betweenness and saturation are the key nodes causing large-scale cascading failures once attacked. These findings provide guidance for daily traffic control: potential cascading failures could be avoided by supervising the key road segments and their adjacent segments.
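As an illustration of how the scoring rule in (6) picks the target of each deliberate attack, the short Python sketch below ranks nodes by $f_i(t) = \lambda x_i(t) + (1-\lambda) b_i(t)$ and selects the attack target for BA ($\lambda = 0$), SA ($\lambda = 1$), BSA ($\lambda = 0.5$), and RA. The saturations and betweenness values are made-up examples, and ties are broken at random as described above; this is only a sketch, not the authors' code.

```python
import numpy as np

def select_target(x, b, strategy, rng):
    """Pick the node to attack under BA, SA, BSA, or RA (Eq. (6))."""
    lam = {"BA": 0.0, "SA": 1.0, "BSA": 0.5}.get(strategy)
    if lam is None:                      # RA: random attack
        return rng.integers(len(x))
    f = lam * np.asarray(x) + (1.0 - lam) * np.asarray(b)
    best = np.flatnonzero(f == f.max())  # candidates sharing the maximum score
    return rng.choice(best)              # ties broken at random

rng = np.random.default_rng(1)
x = [0.55, 0.90, 0.40, 0.75]             # example saturations x_i(t)
b = [0.30, 0.10, 0.80, 0.45]             # example (normalized) betweenness b_i(t)
for s in ("BA", "SA", "BSA", "RA"):
    print(s, "attacks node", select_target(x, b, s, rng))
```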
### 3.2. Different Coupling Strengths

The deficiency of assigning the coupling strength a fixed value subjectively [19] is overcome in this study. For different values of $\varepsilon_1$ and $\varepsilon_2$, cascading failures are triggered by adding the same external perturbation $R = 1.5$ to one node under RA. The simulation results are shown in Figure 4. Figure 4(a) plots the proportion of failed nodes $P(t)$ versus time step $t$ with a fixed coupling strength $\varepsilon_2 = 0.6$ and varying values of $\varepsilon_1$ ($\varepsilon_1 = 0.1, 0.2, \ldots, 0.9$). Conversely, Figure 4(b) presents the time series of $P(t)$ with a fixed coupling strength $\varepsilon_1 = 0.6$ and varying values of $\varepsilon_2$ ($\varepsilon_2 = 0.1, 0.2, \ldots, 0.9$).

Figure 4: P(t) based on different coupling strengths.

According to Figures 4(a) and 4(b), when the value of $\varepsilon_1$ or $\varepsilon_2$ is below 0.5, cascading failures on the road traffic network hardly occur. As the coupling strength increases, especially beyond 0.6, the number of failed nodes and the failure recovery time increase sharply.

Figure 5 shows the coupling strength against the ratio of the total number of failed nodes $I$ for the different attack strategies, with cascading failures triggered by an external perturbation $R = 1.5$. The size of the cascading failures increases as the coupling strength increases, which is consistent with the findings from Figure 4. Figure 5 also shows that there is a threshold for each attack strategy: only when the coupling strength is larger than this threshold do cascading failures occur. For example, the BA curve in Figure 5(b) shows that cascading failures occur when $\varepsilon_1$ is larger than 0.4. Comparing the four attacks, the threshold of BSA is the smallest, which illustrates that the deliberate attack based on the combination of betweenness and saturation is likely to cause cascading failures even under low coupling strength.

Figure 5: I based on different coupling strengths and attack strategies.

Comparing Figure 5(a) with Figure 5(b), the curves of RA and SA present different trends. In Figure 5(a), if the coupling strength $\varepsilon_2$ is less than 0.6, the size of the cascading failure under RA is smaller than under SA; when $\varepsilon_2$ is larger than 0.7, the size of the cascading failure under SA is smaller than under RA. In Figure 5(b), SA is always larger than RA. This may be because a random attack can select either noncritical or critical nodes, leading to different results.

### 3.3. Different External Perturbations R

The impact of different external perturbations $R$ on the cascading failure of the road traffic network is analyzed by adding varying external perturbations $R$ to a fixed node. Figure 6 shows how the proportion of failed nodes $P(t)$ varies with the time step $t$ for different values of $R$ with $\varepsilon_1 = \varepsilon_2 = 0.6$. The inset of Figure 6 shows the time series of $P(t)$ for values of $R$ between 1 and 2 ($R = 1, 1.2, 1.4, 1.6, 1.8, 2$).

Figure 6: P(t) based on different values of R.

According to Figure 6, as the value of $R$ increases, the number of failed nodes and the failure recovery time increase. There is a threshold $R_c$ for $R$: only when the value of $R$ is larger than $R_c$ do serious large-scale cascading failures occur. The inset of Figure 6 shows that $R_c$ lies between 1 and 2. There are only a few failed nodes (less than 1% of $N$) in the road traffic network when the value of $R$ is smaller than 1.2. This finding is useful because the occurrence of large-scale failures could be prevented by keeping the value of $R$ below the threshold.

Figure 7 shows that the ratio of total failed nodes $I$ is positively correlated with the external perturbation $R$ and that the results are highly dependent on the attack strategy. BSA is the most likely to trigger cascading failures even under a small external perturbation $R$.

Figure 7: I based on different external perturbations R and attack strategies.

### 3.4. Different Numbers of Road Segments Being Attacked

To determine the number of attacked nodes that causes large-scale cascading failure, simulations based on different numbers of attacked road segments are conducted. Figure 8 shows $P(t)$ for different values of $n$ (the percentage of segments being attacked) and different attack strategies. Figure 9 plots $n$ against the proportion of total failed nodes $I$ for the different attack strategies. The simulation parameters are set to $\varepsilon_1 = \varepsilon_2 = 0.6$ and $R = 1.5$.

Figure 8: P(t) based on different values of n. Figure 9: I based on different attack strategies.

In Figure 8, for the same value of $n$, the destructive impacts of the four attack strategies are different, which is consistent with the previous analysis. However, in this numerical simulation, the whole network can become completely failed, which means that the size of the cascade equals the size $N$ of the network ($I = 1$). Figure 8 clearly shows that the network can be restored when the number of attacked nodes is small, while as $n$ increases to a certain value, all of the nodes in the network become ineffective and difficult to recover. This certain value is the critical value $n_c$.
We should pay close attention to the critical value $n_c$ in order to avoid devastating failure. In our simulations, for the RA strategy (i.e., Figure 8(d)), we find that $n_c = 4.5\%$. In Figure 8(b), $n_c$ for BSA is 3%. The values of $n_c$ under the different attack strategies vary, in the descending order RA, SA, BA, and BSA. This implies that a large-scale failure is most likely to happen under BSA.

According to Figure 9, the more nodes are attacked, the larger the size of the cascading failure will be. The simulations also show that, for the same $n$, the scale of the network cascading failure under BSA is the largest. The value of $n_c$ for each attack strategy can be seen more clearly in Figure 9. For the road traffic network of the Liuliqiao area, all values of $n_c$ under the four attacks are smaller than 5%, which illustrates that our road traffic network's invulnerability is low. Therefore, we should take measures to ensure the reliability of the network and control the number of attacked nodes to be less than the critical value $n_c$.
## 4. Conclusion

This paper investigated the cascading failures based on the improved CML model.
The improvements to the CML model are based on two particular properties of road traffic networks: the aeolotropism (directedness) of the network topology and the dissipation of road congestion in traffic flow. Using a real urban road traffic network in Beijing, cascading failures were tested under different attack strategies, coupling strengths, external perturbations, and numbers of attacked road segments. We found the following:

(1) The aeolotropism and congestion dissipation of the road traffic network should be taken into account.
(2) BSA leads to the largest number of failed nodes, the fastest failure propagation rate, and the longest failure recovery time.
(3) As the coupling strength increases, the scale of the network cascading failure increases, and the scale is highly dependent on the attack strategy.
(4) Large-scale cascading failures occur only when the external perturbation R exceeds the corresponding threshold Rc, and the number of failed nodes and the failure recovery time increase with R.
(5) The more nodes are attacked, the larger the size of the cascading failure; if the number of attacked nodes exceeds the threshold nc, the entire network fails. The invulnerability of the Liuliqiao road traffic network is very low because the values of nc for the different attacks are very small.

The above findings may be useful in avoiding or alleviating large-scale failures of road traffic networks.

--- *Source: 101059-2015-10-25.xml*
101059-2015-10-25_101059-2015-10-25.md
29,382
Analysis of Road Traffic Network Cascade Failures with Coupled Map Lattice Method
Yanan Zhang; Yingrong Lu; Guangquan Lu; Peng Chen; Chuan Ding
Mathematical Problems in Engineering (2015)
Engineering & Technology
Hindawi Publishing Corporation
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2015/101059
101059-2015-10-25.xml
2015
# Acute Hepatocellular Drug Induced Liver Injury Probably by Alfuzosin

**Authors:** Tufan Cicek; Huseyin Savas Gokturk; Gulhan Kanat Unler

**Journal:** Case Reports in Urology (2015)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2015/101062

---

## Abstract

Alpha blockers are drugs that exert their effects by binding to alpha receptors and relaxing smooth muscle, and they are currently used for the treatment of benign prostatic hyperplasia (BPH). These drugs are generally well tolerated by patients. However, they also have some common side effects. Hepatotoxicity, on the other hand, is quite rare. We report herein a case with the rare complication of acute hepatocellular drug induced liver injury (DILI) following administration of Alfuzosin.

---

## Body

## 1. Introduction

Alpha blockers are currently used for the treatment of benign prostatic hyperplasia (BPH). These drugs exert their effects by binding to alpha receptors and relaxing smooth muscle. They are used for alleviating symptoms of urinary system disorders [1]. Five types of alpha blockers are in general use: Tamsulosin, Doxazosin, Silodosin, Terazosin, and Alfuzosin. These drugs are generally well tolerated by patients. However, they also have some common side effects, including asthenia, vertigo, impotence, and postural hypotension [1]. Hepatotoxicity, on the other hand, is quite rare; it has been reported in only a few patients after Terazosin and Alfuzosin use in the literature [2–5]. We report herein a case with the rare complication of acute hepatocellular drug induced liver injury (DILI) following administration of Alfuzosin.

## 2. Case Report

A 65-year-old male patient presented to the urology department of our hospital with complaints of prostatism on January 17, 2014. His prostate-specific antigen level was 0.796 ng/mL. Uroflowmetry was performed, which revealed a reduced urine flow rate, a prolonged micturition time, and obstructive findings. The patient was begun on Alfuzosin prolonged-release tablets at a dose of 10 mg given orally as a single dose. The patient presented to the urology department on the 9th day of therapy with loss of appetite and malaise initially, followed by itching over the whole body, darkening of the urine, and yellowish discoloration of the eyes. He gave no history of aspirin, paracetamol, or herbal medicine use, and no history of hepatic disease. The patient's liver tests had been normal before the treatment, and Alfuzosin was discontinued. On physical examination there were no stigmata of liver disease. Laboratory test results were as follows: leukocyte count: 5.8 K/μL (4.5–11 K/μL), aspartate aminotransferase (AST): 175 U/L (0–40 U/L), alanine aminotransferase (ALT): 402 U/L (0–55 U/L), alkaline phosphatase (ALP): 235 U/L (15–250 U/L), gamma glutamyl transferase (γ-GT): 327 U/L (8–61 U/L), total bilirubin: 7.6 mg/dL (0.2–1.2 mg/dL), and direct bilirubin: 5.8 mg/dL (0–0.25 mg/dL). Ultrasonography showed a normal liver, gall bladder, common bile duct, and intrahepatic bile ducts. Serological examinations ruled out acute viral hepatitis with negative HBsAg, anti-HBc IgM, anti-HCV, anti-HAV IgM, anti-HEV, HEV-RNA, and HCV RNA. The patient was also examined for autoimmune hepatitis (ANA, AMA, ASMA, and LKM-1) and metabolic liver diseases (transferrin saturation, ferritin, alpha-1 antitrypsin, ceruloplasmin, 24-hour urinary Cu2+, and Kayser-Fleischer ring on eye examination), but all tests were normal.
EMA was negative and serum IgA was normal. In order to evaluate the intrahepatic bile ducts, the patient was examined with magnetic resonance cholangiopancreatography (MRCP), which revealed a normal result. His RUCAM (Roussel Uclaf causality assessment method) score was 7 (probable adverse reaction to the drug). The medical therapy was stopped and the patient was monitored. Intermittent laboratory tests demonstrated gradual normalization of transaminase and bilirubin levels (Table 1).

Table 1: Laboratory parameters.

| Laboratory parameter | Pre-Alfuzosin | February 17 (Alfuzosin) | February 25 | January 12 |
|---|---|---|---|---|
| ALT (0–55 U/L) | 10 | 402 | 60 | 41 |
| AST (0–40 U/L) | 23 | 175 | 132 | 32 |
| γ-GT (8–61 U/L) | 9 | 327 | 36 | 53 |
| ALP (15–250 U/L) | 15 | 235 | 198 | 133 |
| T. bilirubin (0.2–1.2 mg/dL) | 0 | 7.6 | 8.9 | 7.9 |
| D. bilirubin (0–0.25 mg/dL) | 0 | 5.8 | 6.7 | 5.6 |

## 3. Discussion

We aimed to report a patient who had no risk factors for or signs of liver disease and was taking no drugs or substances other than Alfuzosin that are known or suspected to cause acute hepatocellular DILI. Alfuzosin is a quinazoline derivative used in the treatment of BPH. It is a selective α1 blocker. It is metabolized in the liver, and the inactive metabolites are excreted in feces. The use of the drug has been linked to dizziness, respiratory infections, headache, and fatigue [1]. However, Alfuzosin has a very favorable safety profile. It has been approved by the FDA for use in symptomatic BPH. It may also cause GI complaints, albeit to a lesser degree. However, Alfuzosin-induced hepatotoxicity is substantially rare [3–5]. DILI is one of the most common causes of liver injury, and it can develop following the use of a variety of drugs through a number of mechanisms. This is because the liver is the organ where metabolism of many drugs and chemicals takes place. Thus, almost every medical therapy has hepatotoxic potential [4]. The clinical picture of DILI is highly dependent on the culprit chemical and varies from asymptomatic elevation of liver enzymes to fulminant hepatic failure. The clinical picture of toxic hepatitis may mimic any acute or chronic liver disease, and the most common symptoms are jaundice, fatigue, loss of appetite, nausea, and vomiting. Our patient also presented with similar complaints. The injury may have a course characterized by hepatocellular injury, with elevation of aminotransferase levels, or by cholestatic injury, with elevated alkaline phosphatase levels (with or without hyperbilirubinemia) as the main finding. Based on the biochemical abnormalities, our case had a hepatocellular type of liver injury [6]. Increased transaminase and bilirubin levels, together with symptoms beginning after the onset of therapy, made us consider acute DILI. Other etiologies were excluded with imaging modalities and laboratory tests. Our diagnosis was confirmed by normalization of the laboratory tests as well as a marked improvement in the clinical picture following discontinuation of the culprit drug. The most important factor in the diagnosis of DILI is a high index of suspicion. It is essential to take a detailed history and perform a thorough physical examination. The past history should include questioning about NSAID, acetaminophen, and long-term antibiotic use. The differential diagnosis should include alcoholic cirrhosis, blood transfusion, and all forms of liver disease. Imaging modalities such as ultrasonography or MRI should be used both for diagnosis and for differential diagnosis [7]. The etiology of Alfuzosin-induced hepatotoxicity is unknown. It is considered to be idiosyncratic.
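As a brief illustration of how the hepatocellular pattern mentioned earlier in this discussion is usually established, the sketch below computes the conventional R ratio, R = (ALT/ULN)/(ALP/ULN), from the admission values in Table 1. The case report does not show this calculation, and the cut-offs used here (R ≥ 5 hepatocellular, R ≤ 2 cholestatic, otherwise mixed) are the standard convention rather than figures taken from the paper.

```python
# Minimal sketch (not from the case report): classify the biochemical pattern
# of liver injury using the conventional R ratio and the admission values
# reported in Table 1 (ALT 402 U/L with ULN 55; ALP 235 U/L with ULN 250).
# Thresholds are the standard convention: R >= 5 hepatocellular, R <= 2
# cholestatic, otherwise mixed.

def r_ratio(alt, alt_uln, alp, alp_uln):
    """R = (ALT / ULN_ALT) / (ALP / ULN_ALP)."""
    return (alt / alt_uln) / (alp / alp_uln)

def injury_pattern(r):
    if r >= 5:
        return "hepatocellular"
    if r <= 2:
        return "cholestatic"
    return "mixed"

if __name__ == "__main__":
    r = r_ratio(alt=402, alt_uln=55, alp=235, alp_uln=250)
    print(f"R = {r:.1f} -> {injury_pattern(r)}")  # about 7.8 -> hepatocellular
```

With these values the ratio comes out at roughly 7.8, which is consistent with the hepatocellular classification reported by the authors.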
DILI may become manifest as acute, chronic, or fulminant hepatitis. All etiological agents with a hepatotoxicity potential should be excluded by a gastroenterologist using both clinical and laboratory data. The diagnosis of DILI requires a time window of 1 week to 3 months following the use of the drug, normalization of liver function tests after discontinuation of the responsible drug, and recurrent liver damage following reinstitution of the drug. However, literature data suggest that the duration of drug use has varied between 3 and 36 weeks in previous reports [3, 5]. The corresponding duration was 9 days in our patient. Two previous publications reported a mixed cholestatic and hepatocellular injury in one patient and an acute hepatocellular injury in the other [3, 5]. An acute hepatocellular injury developed following drug use in our patient who had no history of liver disease. Compared to previous reports, our patient experienced the shortest time period between the onset of drug therapy and symptom onset and also between the discontinuation of the drug and laboratory and symptomatic improvement [3, 5]. Liver biopsy can be used to confirm the diagnosis and exclude other differential diagnoses. However, a liver biopsy is usually not required for diagnosis of most cases of DILI. We did not perform a liver biopsy. As far as we know, there are only three other reports of severe hepatotoxicity related to Alfuzosin use. Our case is the fourth case of DILI by Alfuzosin. Urologists should definitely question their patients in terms of previous liver disease before putting patients on Alfuzosin therapy. Patients developing liver test abnormalities should definitely be managed and followed by a gastroenterologist. --- *Source: 101062-2015-02-22.xml*
101062-2015-02-22_101062-2015-02-22.md
8,969
Acute Hepatocellular Drug Induced Liver Injury Probably by Alfuzosin
Tufan Cicek; Huseyin Savas Gokturk; Gulhan Kanat Unler
Case Reports in Urology (2015)
Medical & Health Sciences
Hindawi Publishing Corporation
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2015/101062
101062-2015-02-22.xml
2015
# Recent Theory and Applications on Numerical Algorithms and Special Functions

**Authors:** Ali H. Bhrawy; Robert A. Van Gorder; Dumitru Baleanu; Guo-Cheng Wu

**Journal:** Abstract and Applied Analysis (2015)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2015/101063

---

## Body

---

*Source: 101063-2015-03-24.xml*
101063-2015-03-24_101063-2015-03-24.md
401
Recent Theory and Applications on Numerical Algorithms and Special Functions
Ali H. Bhrawy; Robert A. Van Gorder; Dumitru Baleanu; Guo-Cheng Wu
Abstract and Applied Analysis (2015)
Mathematical Sciences
Hindawi Publishing Corporation
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2015/101063
101063-2015-03-24.xml
2015
# A Challenging Case of Acute Mercury Toxicity

**Authors:** Ali Nayfeh; Thamer Kassim; Noor Addasi; Faysal Alghoula; Christopher Holewinski; Zachary Depew

**Journal:** Case Reports in Medicine (2018)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2018/1010678

---

## Abstract

Background. Mercury exists in multiple forms: elemental, organic, and inorganic. Its toxic manifestations depend on the type and magnitude of exposure. The role of colonoscopic decompression in acute mercury toxicity is still unclear. We present a case of acute elemental mercury toxicity secondary to mercury ingestion, which markedly improved with colonoscopic decompression. Clinical Case. A 54-year-old male presented to the ED five days after ingesting five ounces (148 cubic centimeters) of elemental mercury. Examination was only significant for a distended abdomen. Labs showed elevated serum and urine mercury levels. An abdominal radiograph showed radiopaque material throughout the colon. Succimer and laxatives were initiated. The patient had recurrent bowel movements, and serial radiographs showed interval decrease of mercury in the descending colon with interval increase in the cecum and ascending colon. Colonoscopic decompression was done successfully. The colon was evacuated, and a repeat radiograph showed decreased hyperdense material in the colon. Three months later, a repeat radiograph showed no hyperdense material in the colon. Conclusion. Ingested elemental mercury can be retained in the colon. Although there are no established guidelines for colonoscopic decompression, our patient showed significant improvement. We believe further studies on this subject are needed to guide management practices.

---

## Body

## 1. Introduction

Mercury exists in multiple forms: elemental, organic, and inorganic. Its toxic manifestations depend on the type and magnitude of exposure, which can range from a minor to a life-threatening presentation. In this article, we present a case of mercury toxicity due to elemental mercury ingestion. We discuss the human use of elemental mercury and its exposure, routes of absorption, and clinical manifestations of its toxicity. We will also discuss the assessment and management of elemental mercury toxicity.

## 2. Case Presentation

A 54-year-old male with a past medical history of degenerative joint disease, major depressive disorder, polysubstance dependence, and a history of childhood burn presented to the Emergency Department (ED) complaining of imbalance, irritability, outbursts of temper, fatigue, and weakness for the last five days, after he had ingested five ounces (oz) (148 cc) of mercury in a suicide attempt. In the ED, the patient denied any changes in vision, hearing impairment, or tremors. He also denied shortness of breath, abdominal pain, nausea, vomiting, or diarrhea. On physical examination, the patient appeared to be in no obvious distress. He was well developed, alert, and oriented to time, place, and person. Vital signs were as follows: BP 144/96, HR 74, RR 16, and temperature 96.3°F. A neurological examination yielded intact cranial nerves, with normal motor function and no sensory deficits. Abdominal examination showed a distended and nontender abdomen with normoactive bowel sounds.
The rest of the physical examination showed no abnormalities. On admission, laboratory workup was done and showed hemoglobin of 13.9 g/dL (normal range 13.5–17.5), a platelet count of 309 K/μL (normal range 40–440), WBC of 8 K/μL (normal range 4–12), a mildly elevated creatinine of 1.4 mg/dL (eGFR 56) (reference range 0.6–1.3 mg/dL), and a low potassium of 3.5 (reference range 3.7–5.1 mmol/L). A urine drug screen was done, which was positive for amphetamine, benzodiazepine, and cannabinoid. A mercury toxicology workup was initiated and showed an elevated serum mercury level of 110 µg/L (reference range < 10 µg/L), a urine mercury level of 37 µg/L (reference range < 10 µg/L), and a 24-hour urinary mercury level of 248 µg (no exposure < 20 µg/24 h, inconclusive 20–150 µg/24 h, and potentially toxic > 150 µg/24 h). An initial abdominal X-ray (Figure 1) showed diffuse radiopaque material visualized throughout the colon.

Figure 1

Due to the patient's toxic mercury levels and worrying neurological signs and symptoms, the patient was admitted to the intensive care unit with a one-on-one sitter available at his bedside. Vital signs, neurological checks, and electrolytes were measured on a regular basis. The poison-control team was consulted upon admission and recommended starting the patient on succimer, as a chelating agent, to be given at a dose of 500 mg every eight hours for the first 5 days and then every twelve hours for the following 14 days. The patient was also started on polyethylene glycol 17 g twice a day and magnesium citrate to enhance GI motility, along with intravenous fluids and continuous replacement of electrolytes. The neurology team was consulted for further evaluation of the patient's neurological complaints. They agreed with the current plan of care; head computed tomography (CT) was performed and showed no intracranial abnormalities. The psychiatry team was asked to assess the patient for his suicide attempt; he was evaluated as nonsuicidal at that time and was started on escitalopram 10 mg. The bedside sitter was discontinued. The patient was showing slow progress in terms of weakness, fatigue, and imbalance during his stay but remained hemodynamically stable. The patient was transferred out of the ICU to the floor, and serial abdominal X-rays were done. The patient was having recurrent bowel movements on a daily basis, and X-rays showed a continued decrease in the amount of mercury in the descending colon with an interval increase in the hyperdense material in the cecum and ascending colon. Mercury levels were trended during the patient's hospitalization, and the results are shown in Table 1.

Table 1

| | Admission | Day 2 | Day 4 | Day 6 | 3 months after discharge | 4 months after discharge |
|---|---|---|---|---|---|---|
| Serum mercury level (µg/L) | 110 | 134 | 122 | N/A | 33 | 18 |
| 24-hour urine mercury (µg/24 h) | 248 | 499 | N/A | 233 | N/A | N/A |

Serum mercury reference range < 10 µg/L; 24-hour urine mercury reference range: no exposure < 20 µg/24 h; inconclusive 20 to 150 µg/24 h; potentially toxic > 150 µg/24 h.

The gastroenterology team was consulted and recommended placing a nasogastric tube and giving 4 liters of polyethylene glycol through the tube plus magnesium citrate every 8 hours in an attempt to enhance gastrointestinal motility and hasten mercury clearance. Repeated X-rays showed no advancement of the hyperdense material, which remained in the cecum and ascending colon. The decision was made to do a colonoscopy for an attempt at colonoscopic decompression (Figure 2).
The colon was evacuated with copious amounts of washing, and a repeat abdominal X-ray showed decreased hyperdense material in the colon with a nonobstructive gas pattern (Figure 3). The patient was discharged in good condition after a 10-day hospital stay and followed up after 3 months; at that time, he had a serum mercury level of 33 µg/l, and an abdominal X-ray showed no hyperdense material in the colon (Figures 4 and 5). The patient followed up again 1 month later. His neurological symptoms had resolved, and his kidney function was preserved. Also, his mercury level had trended down to 18 µg/l. Figure 2. (a) Cecum; (b) ascending colon. Figure 3. Figure 4. Figure 5.

## 3. Discussion

### 3.1. Brief Introduction, Sources, and Exposure

Elemental mercury is a silver-colored liquid that is volatile at room temperature and causes pulmonary, neurological, and renal toxicity [1]. It is used in many technical and medical instruments, including sphygmomanometers, manometers, thermometers, barometers, and compact fluorescent light bulbs [2, 3]. Mercury has been used widely in amalgam dental fillings, which can also be a source of elemental mercury exposure to patients, dental technicians, and dental practitioners. However, studies have not correlated any symptoms or clinically significant health effects with absorption from dental amalgams [2]. Other sources of exposure include ingestion of herbal medications for vertigo management, inhalation of mercury vapor for pain relief in arthralgia, and inhalation of mercury vapor for hemorrhoid treatment [2].

### 3.2. Absorption

The major route of absorption is by diffusion through the respiratory tract, where up to 80% of inhaled mercury vapor is expected to diffuse into the bloodstream. Absorption from the gastrointestinal tract is very poor, with a bioavailability of less than 0.01%, and most of the ingested mercury is eliminated in feces [2]. Absorption through the skin is limited as well [2]. Although ingested elemental mercury is poorly absorbed through the gastrointestinal tract, mercury globules in the gastrointestinal tract can release vapor at body temperature, and this vapor can be absorbed by the lungs. Moreover, the metal can be absorbed more readily in cases of diverticulitis or GI abscesses, where there is intense inflammation and mucosal breakdown causing increased bioavailability and systemic manifestations. Also, in cases of diverticulosis and GI abscesses, there is a possibility of conversion of elemental mercury to the organic form by bacteria, leading to systemic toxicities [2, 4].

### 3.3. Distribution and Metabolism

The oral lethal dose (LD10) is approximately 100 grams for a 70 kg adult [5]. In our case, the patient ingested 5 ounces of elemental mercury, equivalent to around 141.75 grams, which favored a higher level of absorption. Also, the patient presented five days after ingestion of elemental mercury, which we believe increased the chance for the metal to volatilize and diffuse into the bloodstream. This was demonstrated by the patient's neurological symptoms and the high levels of blood and urine mercury.

### 3.4. Clinical Manifestations (Acute and Chronic), Diagnosis, and Management

The clinical manifestations depend on the magnitude of elemental mercury exposure and whether the exposure was acute or chronic. Acute toxicity can be seen in the setting of industrial exposure, in which the inhalation of mercury vapor results in hypoxia, permanent lung damage, and even death [2].
Inhaled mercury vapor can cause several neurological manifestations due to its ability to diffuse through the blood-brain barrier. These manifestations can be reversible once the metal is cleared out of the body and include tremors, paresthesia, memory loss, hyperexcitability, and delayed reflexes [2, 6]. Other symptoms occurring with mercury toxicity include cough, dyspnea, stomatitis, excessive salivation, nausea, vomiting, diarrhea, conjunctivitis, and dermatitis. With chronic exposure to elemental mercury, the central nervous system and kidneys are the main affected organs. The major clinical features of chronic mercury poisoning include tremors, psychological disturbances, erethism, and gingivitis [2, 7]. Tremor, either intention or resting, is considered to be an early neurological sign of poisoning. Erethism is the hallmark of mercury poisoning; its features include a change in personality, anxiety, excitability, fearfulness, and pathologic shyness, along with insomnia, memory loss, depression, fatigue, and outbursts of temper. Proteinuria is the most common sign of kidney involvement due to tubular damage. Nephrotic syndrome can occur in severe cases. In addition, peripheral nerve abnormalities can occur but are not very common [2]. Diagnosing elemental mercury toxicity depends on exposure history and clinical manifestations. The blood mercury level is a useful test, especially if the level of exposure was high [2]. However, once mercury is absorbed into the blood, its serum half-life is relatively short, lasting approximately three days, as the level decreases due to redistribution in the body [8]. The overall half-life of mercury in the body is approximately 1–3 months [8]. The 24-hour urine mercury level is the best biomarker for chronic long-term exposure. A chest X-ray may be needed when respiratory symptoms are present, especially with inhalational toxicity. An abdominal X-ray may show the deposition of mercury in the GI tract in cases of oral ingestion, as in our patient. When managing cases of mercury toxicity, treatment starts with eliminating the source of exposure and initiating supportive measures, including oxygen, bronchodilators, and fluid resuscitation. Lowering the body burden with chelating agents is fundamental in selected cases, especially if the blood and urine concentrations are above 100 mcg/L [9]. Chelating agents increase the urinary excretion of mercury [9] and include thiol-based agents such as dimercaprol (British anti-Lewisite (BAL)), penicillamine, unithiol (2,3-dimercaptopropane-1-sulfonate (DMPS)), and succimer (dimercaptosuccinic acid (DMSA)). Mercury retained in the colon can be removed using agents that increase GI motility and by colonoscopy. Although the role of colonoscopy is not an established evidence-based recommendation, there are previously reported cases of colonoscopic decompression and evacuation [10].
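To make the half-life figures above concrete, the short sketch below works through simple first-order elimination. The half-lives and time points are illustrative only (a ~3-day serum half-life and a 2-month whole-body half-life, within the 1–3-month range quoted above); in a case like this one, with ongoing absorption from retained mercury, measured levels need not follow such a decay.

```python
import math

def remaining_fraction(days, half_life_days):
    """First-order elimination: fraction of an initial level remaining after `days`."""
    return 0.5 ** (days / half_life_days)

# Illustrative values only, not patient-specific modelling.
print(remaining_fraction(6, 3))    # ~0.25 of the serum level after 6 days (3-day half-life)
print(remaining_fraction(60, 60))  # ~0.50 of the body burden after 2 months (60-day half-life)
```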
## 4. Conclusion

Ingested elemental mercury can be retained in the colon. Vigorous GI cleansing via motility-enhancing medications and colonoscopy can be used to hasten elimination. The role of colonoscopic decompression in elemental mercury ingestion is not established, but there are previously reported cases where colonoscopy was used for evacuation.

---

*Source: 1010678-2018-02-18.xml*
# Lifetime Analysis of Rubber Gasket Composed of Methyl Vinyl Silicone Rubber with Low-Temperature Resistance

**Authors:** Young-Doo Kwon; Seong-Hwa Jun; Ji-Min Song
**Journal:** Mathematical Problems in Engineering (2015)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2015/101068

---

## Abstract

Most machines and instruments constantly require elastomeric materials like rubber for the purposes of shock absorption, noise attenuation, and sealing. The material properties and accurate lifetime prediction of rubber are closely related to the quality of machines, especially their durability and reliability. The properties of rubber-like elastomers are influenced by ambient conditions, such as temperature, environment, and mechanical load. Moreover, the initial properties of rubber gaskets must be sustained under working conditions to satisfy their required function. Because of its technical merits, as well as its low cost, the highly accelerated life test (HALT) is used by many researchers to predict the long-term lifetime of rubber materials. Methyl vinyl silicone rubber (VMQ) has recently been adopted to improve the lifetime of automobile radiator gaskets. A four-parameter method of determining the recovery ability of the gaskets was recently published, and two revised methods of obtaining the recovery were proposed for polyacrylate (ACM) rubber. The recovery rate curves for VMQ were acquired using the successive zooming genetic algorithm (SZGA). The gasket lifetime for the target recovery (60%) of a compressed gasket was computed somewhat differently depending on the selected regression model.

---

## Body

## 1. Introduction

Most machines and instruments constantly require elastomeric materials like rubber for the purposes of shock absorption, noise attenuation, and sealing [1]. Rubber elastomers are classified into three types: natural rubber (NR), synthetic rubber (SR), and NR + SR blended at a given ratio. SR exhibits many excellent properties in terms of mechanical performance. NR is often inferior to certain SRs, especially with respect to thermal stability and compatibility with petroleum products [2]. The SR ethylene propylene diene monomer (EPDM) rubber, which is characterized by high-temperature resistance, has until now been the main choice for automobile radiator gaskets. However, methyl vinyl silicone rubber (VMQ) has recently begun to be used as a radiator gasket material compatible with an extreme temperature range, including low temperatures, according to SAE J200, because the gasket design criteria of major automotive companies specify low-temperature automobile applications reaching temperatures in the range of –70°C to –55°C. The VMQ specimen used in this study was made from the final master batch of Burim FMB Co. in the Republic of Korea, which was prepared from the silicone base of Dow Corning Co. by adding 1 PHR of curing agent and 0.5 PHR of pigment. In this study, we predict the lifetime of a VMQ radiator gasket recently developed by a local company using the method proposed in 2014 [3]. Generally, three methods are used for the lifetime prediction of a rubber gasket. The most practical method with a mathematical basis is the highly accelerated life test (HALT), which applies temperatures higher than the service temperature over a short period. Using this method, the long-term lifetime of a gasket at lower temperatures can be predicted by extrapolating the data [4].
The second method, testing under actual service conditions, is economically disadvantageous because of its long testing time, high cost, and labor requirements. The third method is to rely on an experienced engineer specializing in rubber materials, which is less reliable and does not yield objective results. The HALT is a test methodology that accelerates the degradation of material properties using several specimens, and it has been used by many researchers during the material development stage and design process. This test is also commonly applied to rubber materials for gaskets and dampers and facilitates the identification and resolution of weaknesses in new product designs. The methodology diminishes the probability of in-service failures; that is, it increases product quality by virtue of reliability and decreases the development cost and time [5]. The HALT for VMQ was performed at temperatures of 150–200°C under a compression rate of 30%, which is the actual compression rate under service conditions for the radiator gasket. Additionally, a low-temperature test at –70°C was performed under the same compression rate. In this method of lifetime prediction, the Arrhenius model [6] is simpler and more effective for most cases than the Eyring model [7] and uses experimental data. The lifetime of the gasket is defined as the time when the recovery rate reaches the target value of 60% after a 30% compression rate, and this time depends on the service temperature, whereas ISO 11346 [8] stipulates that the failure time of chemical materials is the point where their initial properties are reduced to 50%. In most references, the lifetime evaluation adopts the linear Arrhenius equation [9] for the ln(t)–(1/T) relationship, where t is the lifetime and T is the temperature; small errors in the lifetime at high temperatures from the HALT evaluation may therefore lead to large errors in the predicted lifetime at low temperatures. Unlike most papers, which do not consider the recovery–ln(t) curve, one study made use of four parameters instead of two parameters in the Arrhenius plot to accurately draw the recovery–ln(t) curve and correctly determine the long-term lifetime. With accurate lifetime predictions at high temperatures, the linear Arrhenius model in the ln(t)–(1/T) plane can yield a correct quantitative analysis of the lifetime of VMQ at a low working temperature.

## 2. Successive Zooming Genetic Algorithm Method for Optimum Parameters

The successive zooming genetic algorithm (SZGA) method is used to achieve a smart reduction of the search space around the candidate optimum point [10, 11]. Although this method can also be applied to a general genetic algorithm (GA), it was applied here to a micro-genetic algorithm (MGA). The computing procedure of the SZGA is as follows. First, the initial population is generated and an MGA is applied. Subsequently, after every 100 generations, the optimum point with the highest fitness is identified. Second, the search domain is reduced to $(x_{\mathrm{opt}}^{k}-\alpha^{k}/2,\; x_{\mathrm{opt}}^{k}+\alpha^{k}/2)$, and the optimization procedure continues based on the reduced domain; that is, a new initial population is generated within the new boundaries. This reduction of the search domain increases the resolution of the solution, and the procedure is repeated until the identified solution is satisfactory ($\delta$ is the error ratio, $x_{\mathrm{opt}}^{k}$ is the optimum point after the $(100\times k)$th generation, $\alpha$ is the zooming factor, and $N_{\mathrm{zoom}}$ is the number of zooming operations). $\delta$ is the relative ratio $(F_{\mathrm{opt}}^{k}-F_{\mathrm{opt}}^{k-1})/F_{\mathrm{opt}}^{k}$, where $F_{\mathrm{opt}}^{k}$ and $F_{\mathrm{opt}}^{k-1}$ are the $k$th and $(k-1)$th optimum function values. The critical ratio $\delta_{0}$ is $1\times 10^{-6}$. To fit the recovery rate curve of the polyacrylate (ACM) rubber gasket, the optimal parameters giving the smallest mean squared error (MSE) [12],

$$\mathrm{MSE}=\frac{1}{n}\sum_{i=1}^{n}\bigl[f(k,x_{i})-D_{i}\bigr]^{2}, \tag{1}$$

where $k$ denotes the unknown parameters, were obtained using this SZGA method. Figure 1 shows the flowchart and schematics of the SZGA. Figure 1. Flowchart and schematics of the SZGA.
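As a minimal illustration of the zooming idea described above, the following sketch shrinks the search box around the best point found so far and stops when the relative improvement falls below the critical ratio. Plain random sampling is used as a stand-in for the micro-genetic algorithm, and every name and parameter here (szga_minimize, samples_per_zoom, the sampling budget) is ours rather than the authors'.

```python
import random

def szga_minimize(objective, bounds, alpha=0.5, n_zoom=10,
                  samples_per_zoom=500, delta0=1e-6, seed=0):
    """Successive zooming: repeatedly shrink the search box around the best point.
    Random sampling stands in for the micro-genetic algorithm used in the paper."""
    rng = random.Random(seed)
    lo = [b[0] for b in bounds]
    hi = [b[1] for b in bounds]
    best_x, best_f = None, float("inf")
    for _ in range(n_zoom):
        prev_f = best_f
        for _ in range(samples_per_zoom):
            x = [rng.uniform(l, h) for l, h in zip(lo, hi)]
            f = objective(x)
            if f < best_f:
                best_x, best_f = list(x), f
        # zoom: reduce each interval to a window of width alpha * (current width),
        # centred on the current optimum and clipped to the original bounds
        width = [(h - l) * alpha for l, h in zip(lo, hi)]
        lo = [max(b[0], c - w / 2.0) for b, c, w in zip(bounds, best_x, width)]
        hi = [min(b[1], c + w / 2.0) for b, c, w in zip(bounds, best_x, width)]
        # stop once the relative improvement is below the critical ratio delta0
        if prev_f < float("inf") and abs(prev_f - best_f) / max(abs(best_f), 1e-12) < delta0:
            break
    return best_x, best_f

# Toy usage: the minimum of (x - 3)^2 + (y + 1)^2 lies at (3, -1).
x_opt, f_opt = szga_minimize(lambda x: (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2,
                             bounds=[(-10.0, 10.0), (-10.0, 10.0)])
print(x_opt, f_opt)
```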
## 3. Methods of Predicting the Quantitative Lifetime of a VMQ Gasket

Methods of mathematically predicting the quantitative lifetime are introduced in this section. To obtain an Arrhenius plot of the long-term lifetime, we first needed to fit the recovery rate curve for a given temperature to obtain the lifetime corresponding to a recovery rate of 60%. Two methods of fitting curves for the ACM were adopted here for the VMQ [3]. The recovery rate of a rubber gasket was assumed to be two exponential functions represented by four parameters. The recovery rate curves were fit using the four optimized parameters, and the lifetimes were solved from the obtained functions. Before we explain the methods of obtaining lifetimes at each given temperature by adopting recovery models, let us first explain the prediction of long-term lifetime using the Arrhenius plot.

### 3.1. Arrhenius Equation and Plot

An Arrhenius equation presents the kinetic rate $K$ as a function of the reciprocal of the temperature $T$ in Kelvin [5]. This model is used widely to estimate the reciprocal effect of temperature as

$$K=A e^{-E_{a}/RT}. \tag{2}$$

For a single rate-limited thermally activated process, an Arrhenius plot gives a straight line as a function of the activation energy and temperature as

$$\ln K=\ln A-\frac{E_{a}}{R}\frac{1}{T}, \tag{3}$$

where $K$ is the rate constant, $A$ is the preexponential factor, $R$ is the gas constant, $T$ is the absolute temperature (K), and $E_{a}$ is the activation energy. Equation (3) can be rearranged to give a time-temperature relation by applying $t\propto 1/K$, as

$$\ln t=\ln\frac{1}{A}+\frac{E_{a}}{R}\frac{1}{T}. \tag{4}$$

The lifetimes for higher temperatures are plotted in Arrhenius form, and the long-term lifetimes may be predicted by linearly extrapolating the given data in the semilog plane of ln(t)–(1/T). Figure 2 shows a plot of the long-term lifetime prediction method using the Arrhenius plot. If the part is used for more time (upper square) than the allowable lifetime (dot) at a given temperature, it will fail; if it is used for less time (lower square) than the lifetime, it will be safe. Figure 2. Arrhenius plot used to predict lifetime.

### 3.2. Four-Parameter Method 1

Four-parameter $(t_{0}, f_{0}, k_{1}, k_{2})$ method 1 is composed of two exponential functions, and when the time is less than the reference time $t_{0}$, one function is used, as in (5). When the lifetime $t$ is zero in (5) [13], the equation can be rewritten as (6), from which $f_{c}$ is confirmed with the other three parameters $f_{0}$, $k_{1}$, and $t_{0}$:

$$f(t)=f_{c}-(f_{c}-f_{0})\,e^{(t-t_{0})k_{1}}, \quad t<t_{0}, \tag{5}$$

$$100\%=f_{c}\bigl(1-e^{-t_{0}k_{1}}\bigr)+f_{0}\,e^{-t_{0}k_{1}}, \tag{6}$$

$$f_{c}=\frac{100-f_{0}\,e^{-t_{0}k_{1}}}{1-e^{-t_{0}k_{1}}}. \tag{7}$$

When the time is greater than the reference time $t_{0}$, the recovery equation is represented as in (8). Figure 3 schematically shows the recovery rate curve using four-parameter method 1:

$$f(t)=f_{0}\,e^{-(t-t_{0})k_{2}}, \quad t>t_{0}. \tag{8}$$

Figure 3. Recovery rate curve using four-parameter method 1.
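To make the two-branch model concrete, the sketch below evaluates eqs. (5)-(8) and finds the time at which the recovery drops to 60% by bisection, which is how the lifetime is later extracted in Section 5. The parameter values are illustrative placeholders only, not values fitted in this study.

```python
import math

def recovery_method1(t, t0, f0, k1, k2):
    """Recovery rate (%) from four-parameter method 1, eqs. (5)-(8)."""
    fc = (100.0 - f0 * math.exp(-t0 * k1)) / (1.0 - math.exp(-t0 * k1))  # eq. (7)
    if t < t0:
        return fc - (fc - f0) * math.exp((t - t0) * k1)                   # eq. (5)
    return f0 * math.exp(-(t - t0) * k2)                                  # eq. (8)

def lifetime_at(target, t0, f0, k1, k2, t_hi=1.0e7):
    """Bisection on t > t0 for the time at which recovery falls to `target` (%)."""
    lo, hi = t0, t_hi
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if recovery_method1(mid, t0, f0, k1, k2) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative parameters only (reference time 50 h, reference recovery 85 %).
params = dict(t0=50.0, f0=85.0, k1=0.05, k2=0.002)
print(recovery_method1(0.0, **params))   # ~100 % at t = 0, as required by eq. (6)
print(lifetime_at(60.0, **params))       # time (h) at which the recovery reaches 60 %
```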
### 3.3. Four-Parameter Method 2

In four-parameter $(t_{0}, f_{0}, k_{1}, k_{2})$ method 2, $f_{c}$ is a constant and is not dependent on $f_{0}$, $k_{1}$, and $t_{0}$ [14]. When the lifetime $t$ is $-\infty$, the recovery is assumed to be 100% to make the recovery curve symmetric, as in (9). Therefore, four-parameter method 2 is actually the same as method 1, except that $f_{c}=100$ instead of the definition in (7) (Figure 4):

$$f(t)=100-(100-f_{0})\,e^{(t-t_{0})k_{1}}, \quad t<t_{0}; \qquad f(t)=f_{0}\,e^{-(t-t_{0})k_{2}}, \quad t>t_{0}. \tag{9}$$

Figure 4. Recovery rate curve using four-parameter method 2. $f_{c}$: critical recovery, $f_{L}$: life recovery, $t_{L}$: lifetime, $f_{0}$: reference recovery, and $t_{0}$: reference time.

## 4. Experiments

Before the HALT of the VMQ, a rubber material property test was performed by the Korea Testing and Research Institute (KTR) and the Korea Polymer Testing & Research Institute (KOPTRI) according to ASTM standards. The test results are given in Table 1.
Table 1. Material properties of VMQ.

| Category | Material property | Exp. value | Test standard |
|---|---|---|---|
| Basic properties | Hardness (IRHD) | 70 | ASTM D412 |
| | Tensile strength (MPa) | 7.8 | |
| | Ultimate elongation (%) | 150 | |
| Heat resistance | Change in hardness (%) | 0 | ASTM D573 |
| | Change in tensile strength (%) | −19.9 | |
| | Change in elongation (%) | −7.6 | |
| | Compression set (%) | 9.2 | ASTM D395, method B; 22 h, 150°C, plied |
| | Compression set (%) | 33.6 | ASTM D395, method B; 1000 h, 150°C, plied |
| Fluid resistance | Change in hardness (IRHD) | −5 | ASTM D471, ASTM oil number 1; 70 h, 150°C |
| | Change in tensile strength (%) | −10.1 | |
| | Change in elongation (%) | −3.3 | |
| | Change in volume (%) | 3.4 | |
| Fluid resistance | Change in hardness (IRHD) | −10 | ASTM D471, ASTM oil number 3; 70 h, 150°C |
| | Change in volume (%) | 17.5 | |
| Low temp. brittleness | | No cracking | ASTM D2137, method A; −55°C, 3 min |

The ability of a rubber to return to its original thickness after prolonged compression is measured by a compression set test at high and low temperatures. The compression set tests in this study were carried out under a compression rate of 30% with VMQ silicone rubber gaskets. For the lifetime prediction at a high temperature, the compression set test was performed with components that were heat-aged in an oven at temperatures of 150, 180, and 200°C for periods ranging from 20 to 500 h, and a cold resistance test was performed at a temperature of –70°C for periods ranging from 48 to 120 h. The dimensions of the specimen (diameter = 29 mm and thickness = 12.5 mm) and the compression set were determined according to ISO 815-1 (Figure 5) [15]. The compression set and recovery rate were calculated using

$$\mathrm{CS}\,(\%)=\frac{l_{0}-l_{2}}{l_{0}-l_{1}}\times 100, \qquad \mathrm{Recovery}\,(\%)=100-\mathrm{CS}, \tag{10}$$

where CS is the compression set, $l_{0}$ is the thickness of the specimen, $l_{1}$ is the thickness in the compressed state, and $l_{2}$ is the thickness after removal of the load. Figure 5. Jig for measuring the compression set.

Table 2 lists the experimental data for each temperature. First, the experiments were performed at 150°C according to method B in ASTM D395. The higher temperature of 180°C was adopted for the HALT as in [3]. The temperature of 200°C seems rather high. The experimental data, with the exception of a couple of erroneous data points, were selected to optimize the four parameters in four-parameter methods 1 and 2 by the SZGA.

Table 2. Results of the compression set test with a compression rate of 30% at (a) high temperatures and (b) a low temperature.

(a)

| Temp. (°C) | Time (h) | CS (%) | Recovery (%) |
|---|---|---|---|
| 150 | 22 | 9.20 | 90.80 |
| 150 | 1000 | 33.60 | 66.40 |
| 180 | 48 | 10.33 | 89.67 |
| 180 | 96 | 25.67 | 74.33 |
| 180 | 120 | 31.00 | 69.00 |
| 180 | 196 | 32.67 | 67.33 |
| 180 | 240 | 34.33 | 65.67 |
| 200 | 20 | 10.67 | 89.33 |
| 200 | 30 | 18.33 | 81.67 |
| 200 | 40 | 19.67 | 80.33 |
| 200 | 50 | 30.00 | 70.00 |
| 200 | 200 | 83.00 | 17.00 |
| 200 | 300 | 86.00 | 14.00 |
| 200 | 400 | 92.00 | 8.00 |
| 200 | 500 | 97.00 | 3.00 |

(b)

| Temp. (°C) | Time (h) | CS (%) | Recovery (%) |
|---|---|---|---|
| −70 | 48 | 1.93 | 98.07 |
| −70 | 72 | 2.73 | 97.27 |
| −70 | 96 | 3.34 | 96.66 |
| −70 | 120 | 3.78 | 96.22 |
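Given the measurements in Table 2, the fitting step described in Sections 2 and 3 can be sketched as follows. SciPy's differential evolution is used here as a convenient stand-in for the SZGA, and four-parameter method 2 is fit to the 180°C data; the parameter bounds are our guesses, so the resulting fit is illustrative rather than a reproduction of the authors' values.

```python
import numpy as np
from scipy.optimize import differential_evolution

# 180 degC ageing data from Table 2(a): time (h) and measured recovery (%).
t_data = np.array([48.0, 96.0, 120.0, 196.0, 240.0])
r_data = np.array([89.67, 74.33, 69.00, 67.33, 65.67])

def recovery_method2(t, t0, f0, k1, k2):
    """Four-parameter method 2, eq. (9), with the critical recovery fixed at 100 %."""
    t = np.asarray(t, dtype=float)
    early = 100.0 - (100.0 - f0) * np.exp(np.minimum((t - t0) * k1, 50.0))  # t < t0 branch
    late = f0 * np.exp(-(t - t0) * k2)                                       # t > t0 branch
    return np.where(t < t0, early, late)

def mse(params):
    """Mean squared error between the model and the data, as in eqs. (1) and (11)."""
    t0, f0, k1, k2 = params
    return float(np.mean((recovery_method2(t_data, t0, f0, k1, k2) - r_data) ** 2))

# Differential evolution as a stand-in for the SZGA; the bounds are assumptions.
result = differential_evolution(mse, bounds=[(1.0, 200.0),   # t0 (h)
                                             (60.0, 100.0),  # f0 (%)
                                             (1e-4, 1.0),    # k1 (1/h)
                                             (1e-5, 0.1)],   # k2 (1/h)
                                seed=1)
t0, f0, k1, k2 = result.x
print("fitted (t0, f0, k1, k2):", result.x, " MSE:", result.fun)
# Extrapolate the t > t0 branch to the 60 % recovery target (cf. Section 5).
print("estimated 60% life (h):", t0 - np.log(60.0 / f0) / k2)
```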
## 5. Results

Representative automobile companies require the compression set rates of engine head rubber gaskets to be less than 40%. In other applications, the lifetime of a gasket is defined as the time until its recovery rate is 60%. Both heat and cold resistance tests were performed. Experimental data at each temperature were obtained from the compression set test, and the recovery rate curves were fit using the SZGA method to find the optimal parameters with the smallest MSE between the best-fit function and the experimental data. Subsequently, the lifetime at a recovery rate of 60% was obtained from the best-fit recovery rate curve using a bisection method to solve the nonlinear equation. Finally, a linear regression model was fit to the resulting lifetime data on the Arrhenius plot to obtain the long-term lifetime at the working temperature.

### 5.1. Lifetime of VMQ Gaskets under High Temperatures

The lifetime evaluations were made on the two differently regressed curves of methods 1 and 2.

#### 5.1.1. Recovery Rate Curves from Four-Parameter Methods 1 and 2

The recovery rate curves were acquired using four-parameter methods 1 and 2. The SZGA method was used to optimize the four parameters, and the recovery rate curves of the two methods were fit using these four parameters. The recovery rate curves were compared with the experimental data. The results showed that the recovery rate curves could be fit properly using the four parameters. Figures 6 and 7 [14, 16] show the recovery rate curves at different temperatures and a compression rate of 30%. Figure 6. Recovery rate curves using four-parameter method 1 with a compression rate of 30% at temperatures of (a) 150°C, (b) 180°C, and (c) 200°C. Figure 7. Recovery rate curves using four-parameter method 2 with a compression rate of 30% at temperatures of (a) 150°C, (b) 180°C, and (c) 200°C.

#### 5.1.2. Mean Squared Error of Four-Parameter Methods 1 and 2

The MSE [12] can be calculated to gauge the extent to which the data points deviate from the recovery rate curves:

$$\mathrm{MSE}=\frac{1}{n}\sum_{i=1}^{n}\bigl(y_{i}-\hat{y}_{i}\bigr)^{2}. \tag{11}$$

Table 3 compares the MSEs of four-parameter methods 1 and 2. For the life prediction of the rubber gasket, either method can be used; the one with the least MSE is preferred.

Table 3. MSEs of the four-parameter methods at a compression rate of 30%.

| Method | MSE at 150°C | MSE at 180°C | MSE at 200°C | Total MSE |
|---|---|---|---|---|
| Four-parameter method 1 | 0 | 10.432 | 3.998 | 14.430 |
| Four-parameter method 2 | 0 | 6.712 | 3.988 | 10.700 |

One can see that four-parameter method 2 yields a smaller MSE than method 1 for a compression rate of 30%.

#### 5.1.3. Results of Quantitative Lifetime Prediction

Compression set rates of less than 40% are required by major automobile companies. The precise lifetime corresponding to a recovery rate of 60% can be determined using the bisection method from the four-parameter equations. The method with the minimum MSE is the best choice to obtain the lifetime at a 60% recovery rate. The lifetime data acquired from each recovery rate curve were plotted using linear regression, and a linear equation was derived using the Origin Pro software (Figure 8). This equation can be used to estimate the lifetime at a specific temperature from the Arrhenius equation. Subsequently, the lifetime and lifetime mileage were calculated as

$$\text{Time (hour)}=\exp(\ln t), \qquad \text{Life mileage (miles)}=30~\text{mph}\times\text{time (hour)}. \tag{12}$$

Figure 8. Arrhenius plots for four-parameter methods (a) 1 and (b) 2.

Under operating conditions of 30 mph (miles/hour) at 100°C, the lifetime mileage values of the rubber gasket predicted by four-parameter methods 1 and 2 are 6,836,220 and 7,805,780 mi, respectively, as shown in Table 4. Because the operating time is assumed to be an average of 3 h/day, the predicted quantitative lifetimes of the rubber gasket calculated using four-parameter methods 1 and 2 are 208 and 273 years, respectively. Thus, the lifetime of the VMQ silicone rubber gasket predicted at a working temperature of 100°C meets the performance requirements of 100,000 mi and 10 years.
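The conversion in eq. (12) from a fitted Arrhenius line to lifetime hours, mileage, and calendar years can be sketched as below. The coefficients a and b are placeholders chosen only so the snippet runs; the actual fitted lines for methods 1 and 2 are the ones reported in Table 4, and the 3 h/day operating assumption follows the text above.

```python
import math

def lifetime_hours(temp_c, a, b):
    """Arrhenius line in the ln(t)-(1/T) plane: ln(t) = a * 1000 / (T(degC) + 273) - b."""
    return math.exp(a * 1000.0 / (temp_c + 273.0) - b)

def lifetime_miles(temp_c, a, b, speed_mph=30.0):
    """Eq. (12): lifetime mileage at an assumed average speed of 30 mph."""
    return speed_mph * lifetime_hours(temp_c, a, b)

def lifetime_years(temp_c, a, b, hours_per_day=3.0):
    """Calendar lifetime assuming the gasket sees the temperature for 3 h per day."""
    return lifetime_hours(temp_c, a, b) / hours_per_day / 365.0

# Placeholder coefficients (not the fitted values from Table 4).
a, b = 14.0, 26.0
for temp in (80, 100, 120):
    print(temp, "degC:", round(lifetime_miles(temp, a, b)), "miles,",
          round(lifetime_years(temp, a, b), 1), "years")
```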
Table 4. Lifetime mileage determined using the Arrhenius equation at a compression rate of 30%. Four-parameter method 1: ln(t) = 14.06674 × 1000/(T(°C) + 273) − 26.37589; four-parameter method 2: ln(t) = 14.39015 × 1000/(T(°C) + 273) − 26.11032.

| Temperature (°C) | Method 1 lifetime mileage (10³ miles) | Method 2 lifetime mileage (10³ miles) |
|---|---|---|
| 80 | 57,911 | 69,453 |
| 100 | 6,836 | 7,805 |
| 120 | 1,003 | 1,095 |
| 140 | 177 | 186 |
| 160 | 36 | 37 |
| 180 | 9 | 9 |

### 5.2. Lifetime of VMQ Gaskets under Low Temperatures

A closer look at the experimental data at a low temperature (Table 2(b)) indicates that the relationship between the recovery rate and lifetime can be represented as a straight line. Thus, a linear regression was performed using the experimental data from the compression set test at a low temperature, and the intercept and slope of the linear equation were obtained. This means that only two parameters were needed to predict the lifetime, unlike in the four-parameter methods. Figure 9 shows the regression line from the linear relation between the recovery rate and lifetime. In cold regions, like the Antarctic, where temperatures can reach –70°C, shrinkage and shrinkage leaking of the rubber may occur because of the low temperatures, and these problems can reduce the recovery rate of VMQ. Thus, a standard required recovery rate higher than 60% should be applied to predict the lifetime in this case. Table 5 shows the lifetime of the rubber gasket obtained from the best-fit line at different recovery rates ranging from 70 to 90%. After establishing the standard recovery rate, depending on the method and purpose of use, the lifetime of VMQ can be predicted for each case. For example, according to the linear equation, the lifetime of the VMQ rubber gasket corresponding to a recovery rate of 80% was 357,960 h. Assuming that the rubber gasket in an automobile radiator is continuously exposed to temperatures of –70°C, its lifetime would be 41 years. This result leads to the conclusion that the VMQ rubber gasket sufficiently satisfies the lifetime requirement of 10 years.

Table 5. Lifetimes from the fitted line, Recovery rate (%) = −2.02719 × ln(t) + 105.92382.

| Time (h) | ln(t) | Compression set (%) | Recovery rate (%) |
|---|---|---|---|
| 2,579 | 7.8552 | 10 | 90 |
| 30,384 | 10.3217 | 15 | 85 |
| 357,960 | 12.7882 | 20 | 80 |
| 4,217,197 | 15.2547 | 25 | 75 |
| 49,683,641 | 17.7212 | 30 | 70 |

Figure 9. Recovery rate curve at low temperature (–70°C).
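As a quick check of the low-temperature fit, inverting the regression line above reproduces the 80% row of Table 5 and the 41-year figure quoted in the text (continuous exposure at –70°C).

```python
import math

# Linear fit at -70 degC from Section 5.2 / Table 5: recovery (%) = -2.02719 * ln(t) + 105.92382
A, B = -2.02719, 105.92382

def hours_at_recovery(target_pct):
    """Invert the fitted line to get the time (h) at which recovery falls to target_pct."""
    return math.exp((target_pct - B) / A)

t80 = hours_at_recovery(80.0)
print(round(t80), "h")                         # ~3.58e5 h, close to the 357,960 h in Table 5
print(round(t80 / 24.0 / 365.25, 1), "years")  # ~41 years of continuous exposure at -70 degC
```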
Mean Squared Error of Four-Parameter Methods 1 and 2 The MSE [12] can be calculated to gauge the extent to which the data points vary from the recovery rate curves. Table 3 compares the MSEs of four-parameter methods 1 and 2. For the life prediction of the rubber gasket, either method can be used to find the least MSE:(11)Mean  squared  errorMSE=1n∑i=1nyi-y^i2.Table 3 MSEs of the four-parameter methods at a compression rate of 30%. Method MSE Total MSE 150°C 180°C 200°C Four-parameter method 1 0 10.432 3.998 14.430 Four-parameter method 2 0 6.712 3.988 10.700One can see that four-parameter method 2 yields a smaller MSE than method 1 for a compression rate of 30%. ## 5.1.3. Results of Quantitative Lifetime Prediction Compression set rates less than 40% are required by major automobile companies. The precise lifetime corresponding to a recovery rate of 60% can be determined using the bisection method from the four-parameter equations. The method with the minimum MSE is the best choice to obtain the lifetime with a 60% recovery rate. The lifetime data acquired from each recovery rate curve were plotted using linear regression, and a linear equation was derived using the Origin Pro system (Figure8). This equation can be used to estimate the lifetime at a specific temperature from the Arrhenius equation. Subsequently, the lifetime and lifetime mileage were calculated by(12)Timehour=expln⁡tLife  mile=30mphmile/hour×timehour.Figure 8 Arrhenius plots for four-parameter methods (a) 1 and (b) 2. (a) (b)Under operating conditions of 30 mph (mile/hour) at 100°C, the lifetime mileage values of the rubber gasket predicted by four-parameter methods 1 and 2 are 6,836,220 and 7,805,780 mi, respectively, as shown in Table4. Because the operating time is assumed to be an average of 3 h/day, the predicted quantitative lifetimes of the rubber gasket calculated using four-parameter methods 1 and 2 are 208 and 273 years, respectively. Thus, the lifetime of the VMQ silicon rubber gasket predicted at a working temperature of 100°C meets the performance requirements of 100,000 mi and 10 years.Table 4 Lifetime mileage determined using the Arrhenius equation. Compression rate 30% Four-parameter method 1 Four-parameter method 2 ln ⁡ ( t ) = 14.06674 × 1 / ( T ( ° C ) + 273 ) × 1000 - 26.37589 ln ⁡ ( t ) = 14.39015 × 1 / ( T ( ° C ) + 273 ) × 1000 - 26.11032 Temperature (°C) Lifetime mileage (103miles) Temperature (°C) Lifetime mileage (103 miles) 80 57,911 80 69,453 100 6,836 100 7,805 120 1,003 120 1,095 140 177 140 186 160 36 160 37 180 9 180 9 ## 5.2. Lifetime of VMQ Gaskets under Low Temperatures A closer look at the experimental data at a low temperature (Table2(b)) indicates that the relationship between the recovery rate and lifetime can be represented as a straight line. Thus, a linear regression was performed using the experimental data from the compression set test at a low temperature, and the intercept and slope of the linear equation were obtained. This means that only two parameters were needed to predict the lifetime, unlike in the four-parameter methods. Figure 9 shows the regression line from the linear relation between the recovery rate and lifetime. In cold regions, like the Antarctic, where temperatures can reach –70°C, shrinkage and shrinkage leaking in the rubber may occur because of the low temperatures, and these problems can reduce the recovery rate of VMQ. Thus, the standard required recovery rate of over 60% should be applied to predict the lifetime in this case. 
## 6. Conclusion

A compression set test was carried out on the developed VMQ gaskets at a compression rate of 30%. The SZGA method was applied to determine the optimal four parameters for the two four-parameter methods used in this study and to calculate the recovery rate curves. The MSEs between the regression functions of the different models and the experimental data were compared. By comparing the results of both methods, it was determined that either method can be used to accurately predict gasket lifetime because they showed only small differences in their results. We obtained the target lifetime for a recovery rate of 60% (80% for –70°C) from the fitted recovery rate curve using the bisection method at each temperature. Referring to the data points of the 60% (80% for –70°C) recovery found from the recovery rate curves, a linear Arrhenius plot in the $\ln t$–$(1/T)$ plane was constructed to determine the quantitative lifetime at any given temperature.

The results are summarized as follows.

(1) A procedure using four-parameter methods 1 and 2 to predict the long-term lifetimes of rubber gaskets was suggested.

(2) Using four-parameter methods 1 and 2, the quantitative lifetime of a rubber gasket could be accurately predicted at any given temperature.

(3) The lifetime mileage of VMQ was predicted to be 6,836,220 and 7,805,780 mi using four-parameter methods 1 and 2, respectively, at a working temperature of 100°C.

(4) The lifetime of the VMQ rubber is 41 years at an ambient temperature of –70°C based on the standard recovery rate of 80%.

--- *Source: 101068-2015-10-11.xml*
101068-2015-10-11_101068-2015-10-11.md
33,968
Lifetime Analysis of Rubber Gasket Composed of Methyl Vinyl Silicone Rubber with Low-Temperature Resistance
Young-Doo Kwon; Seong-Hwa Jun; Ji-Min Song
Mathematical Problems in Engineering (2015)
Engineering & Technology
Hindawi Publishing Corporation
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2015/101068
101068-2015-10-11.xml
--- ## Abstract Most machines and instruments constantly require elastomeric materials like rubber for the purposes of shock absorption, noise attenuation, and sealing. The material properties and accurate lifetime prediction of rubber are closely related to the quality of machines, especially their durability and reliability. The properties of rubber-like elastomers are influenced by ambient conditions, such as temperature, environment, and mechanical load. Moreover, the initial properties of rubber gaskets must be sustained under working conditions to satisfy their required function. Because of its technical merits, as well as its low cost, the highly accelerated life test (HALT) is used by many researchers to predict the long-term lifetime of rubber materials. Methyl vinyl silicone rubber (VMQ) has recently been adopted to improve the lifetime of automobile radiator gaskets. A four-parameter method of determining the recovery ability of the gaskets was recently published, and two revised methods of obtaining the recovery were proposed for polyacrylate (ACM) rubber. The recovery rate curves for VMQ were acquired using the successive zooming genetic algorithm (SZGA). The gasket lifetime for the target recovery (60%) of a compressed gasket was computed somewhat differently depending on the selected regression model. --- ## Body ## 1. Introduction Most machines and instruments constantly require elastomeric materials like rubber for the purposes of shock absorption, noise attenuation, and sealing [1]. The rubber elastomer is classified into three types: natural rubber (NR), synthetic rubber (SR), and NR + SR blended at a given ratio. SR exhibits many excellent properties in terms of mechanical performance. NR is often inferior to certain SRs, especially with respect to thermal stability and compatibility with petroleum products [2]. The SR ethylene propylene diene monomer (EPDM) rubber, which has the characteristic of high-temperature resistance, has been mainly adopted for a radiator gasket of an automobile until now. However, methyl vinyl silicone rubber (VMQ) has recently begun to be used as a radiator gasket material compatible with an extreme temperature range and low temperatures, according to SAE J200, because previous gasket design criteria stated that low-temperature applications for automobiles reached temperatures in the range of –70°C to –55°C by major automotive companies. The VMQ specimen used in this study is made from the final master batch of Burim FMB Co. in ROK, which has been made from the silicone base of Dow Corning Co. by adding 1 PHR of curing agent and 0.5 PHR of pigment.In this study, we predict the lifetime of a VMQ radiator gasket recently developed by a local company using the method proposed in 2014 [3]. Generally three methods are used for the lifetime prediction of a rubber gasket. The most practical method with mathematical concepts is the highly accelerated life test (HALT), which applies temperatures higher than the service temperature over a short period. Using this method, the long-term lifetime of a gasket at lower temperatures can be predicted by extrapolating the data [4]. The second lifetime prediction method under service conditions is economically disadvantageous because of its long testing time, high cost, and labor requirements. 
The third method is to rely on an experienced engineer specializing in rubber materials, which is less reliable and does not yield objective results.

The HALT is a test methodology that accelerates the degradation of material properties using several specimens, and it has been used by many researchers during the material development stage and design process. This test is also commonly applied to rubber materials for gaskets and dampers and facilitates the identification and resolution of weaknesses in new product designs. The methodology diminishes the probability of in-service failures; that is, it increases product quality by virtue of reliability and decreases the development cost and time [5]. The HALT for VMQ was performed at temperatures of 150–200°C under a compression rate of 30%, which is the actual compression rate under service conditions for the radiator gasket. Additionally, a low-temperature test at –70°C was performed under the same compression rate.

In this method of lifetime prediction, the Arrhenius model [6] is simpler and more effective for most cases than the Eyring model [7] and uses experimental data. The lifetime of the gasket is defined as the time at which the recovery rate reaches the target value of 60% after undergoing a 30% compression rate; this lifetime depends on the service temperature. In contrast, ISO 11346 [8] stipulates that the failure time of chemical materials is the point at which their initial properties are reduced to 50%.

According to most references investigating lifetime evaluation with the linear Arrhenius equation [9] for the $\ln(t)$–$(1/T)$ relationship, where $t$ is the lifetime and $T$ is the temperature, small errors in the lifetime at high temperatures from the HALT evaluation may lead to large errors in the predicted lifetime at low temperatures. Unlike most papers, which do not consider the recovery–$\ln(t)$ curve, one study made use of four parameters instead of two in the Arrhenius plot to accurately draw the recovery–$\ln(t)$ curve and correctly determine the long-term lifetime. With accurate lifetime predictions at high temperatures, the linear Arrhenius model in the $\ln t$–$(1/T)$ plane can yield a correct quantitative analysis of the lifetime of VMQ at a low working temperature.

## 2. Successive Zooming Genetic Algorithm Method for Optimum Parameters

The successive zooming genetic algorithm (SZGA) method is used to achieve a smart reduction of the search space around the candidate optimum point [10, 11]. Although this method can also be applied to a general genetic algorithm (GA), it was applied here to a micro-genetic algorithm (MGA). The computing procedure of the SZGA is as follows. First, the initial population is generated and an MGA is applied. Subsequently, after every 100 generations, the optimum point with the highest fitness is identified. Second, the search domain is reduced to $(X_k^{\mathrm{opt}} - \alpha^k/2,\ X_k^{\mathrm{opt}} + \alpha^k/2)$, and the optimization procedure continues based on the reduced domain; that is, a new initial population is generated within the new boundaries. This reduction of the search domain increases the resolution of the solution, and the procedure is repeated until the identified solution is satisfactory ($\delta$ is the error ratio, $X_k^{\mathrm{opt}}$ is the optimum point after the $(100 \times k)$th generation, $\alpha$ is the zooming factor, and $N_{\mathrm{zoom}}$ is the number of zooming operations). Here $\delta$ is the relative ratio $(F_{\mathrm{opt}}^{k} - F_{\mathrm{opt}}^{k-1})/F_{\mathrm{opt}}^{k}$, where $F_{\mathrm{opt}}^{k}$ and $F_{\mathrm{opt}}^{k-1}$ are the $k$th and $(k-1)$th optimum function values. The critical ratio $\delta_0$ is $1 \times 10^{-6}$.
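As a rough illustration of the successive-zooming idea (not the authors' implementation), the sketch below shrinks the search interval around the current best point by a fixed zooming factor after each stage, using a plain random-search population as a stand-in for the micro-genetic algorithm; the zooming factor, population size, stage count, and the quadratic test function are all assumptions made for the example.

```python
import random

def szga_minimize(f, bounds, alpha=0.5, n_zoom=8, pop=200, seed=0):
    """Successive zooming: after each search stage, shrink the search
    interval around the current best point and restart inside it."""
    rng = random.Random(seed)
    lo, hi = bounds
    best_x, best_f = None, float("inf")
    for k in range(n_zoom):
        # Stage k: crude population search inside the current domain
        # (stands in for the 100 generations of the micro-GA).
        for _ in range(pop):
            x = rng.uniform(lo, hi)
            fx = f(x)
            if fx < best_f:
                best_x, best_f = x, fx
        # Zoom: centre a smaller window (width shrinks geometrically
        # with the zooming factor) on the best point found so far.
        span = (bounds[1] - bounds[0]) * alpha ** (k + 1)
        lo = max(bounds[0], best_x - span / 2)
        hi = min(bounds[1], best_x + span / 2)
    return best_x, best_f

if __name__ == "__main__":
    # Toy objective: the minimum of (x - 1.234)^2 is at x = 1.234.
    xopt, fopt = szga_minimize(lambda x: (x - 1.234) ** 2, bounds=(-10.0, 10.0))
    print(f"x_opt ~ {xopt:.4f}, f_opt ~ {fopt:.2e}")
```

In the paper the quantity being minimized in this way is the MSE between the four-parameter recovery curve and the compression set data at each temperature.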
To fit the recovery rate curve of the polyacrylate (ACM) rubber gasket, the optimal parameters of the smallest mean squared error (MSE) [12] were obtained using this SZGA method. Figure 1 shows the flowchart and the schematics of the SZGA:

(1) $\mathrm{MSE} = \dfrac{1}{n}\sum_{i=1}^{n}\left[f(k, x_i) - D_i\right]^2$, where $k$ denotes the unknown parameters.

Figure 1: Flowchart and schematics of SZGA.

## 3. Methods of Predicting the Quantitative Lifetime of a VMQ Gasket

Methods of mathematically predicting the quantitative lifetime are introduced in this section. To obtain an Arrhenius plot of the long-term lifetime, we first needed to fit the recovery rate curve for a given temperature to obtain the lifetime corresponding to a recovery rate of 60%. Two methods of fitting curves for the ACM were adopted here for the VMQ [3]. The recovery rate of a rubber gasket was assumed to be two exponential functions represented by four parameters. The recovery rate curves were fit using the four optimized parameters, and the lifetimes were solved from the obtained functions. Before we explain the methods of obtaining lifetimes at each given temperature by adopting recovery models, let us first explain the prediction of long-term lifetime using the Arrhenius plot.

### 3.1. Arrhenius Equation and Plot

An Arrhenius equation presents the kinetic rate $K$ as a function of the reciprocal of the temperature $T$ in Kelvin [5]. This model is used widely to estimate the reciprocal effect of temperature as

(2) $K = A e^{-E_a/RT}$.

For a single rate-limited thermally activated process, an Arrhenius plot gives a straight line as a function of the activation energy and temperature as

(3) $\ln K = \ln A - \dfrac{E_a}{R}\dfrac{1}{T}$,

where $K$ is the rate constant, $A$ is the preexponential factor, $R$ is the gas constant, $T$ is the absolute temperature (K), and $E_a$ is the activation energy. Equation (3) can be rearranged to give a time-temperature relation by applying $t \propto 1/K$, as

(4) $\ln t = \ln\dfrac{1}{A} + \dfrac{E_a}{R}\dfrac{1}{T}$.

The lifetimes for higher temperatures are plotted in Arrhenius form, and the long-term lifetimes may be predicted by linearly extrapolating the given data in the semilog plane of $\ln(t)$–$(1/T)$. Figure 2 shows a plot of the long-term lifetime prediction method using the Arrhenius plot. If the part is used for more time (upper square) than the allowable lifetime (dot) at a given temperature, it will fail; if it is used for less time (lower square) than the lifetime, it will be safe.

Figure 2: Arrhenius plot used to predict lifetime.

### 3.2. Four-Parameter Method 1

Four-parameter ($t_0, f_0, k_1, k_2$) method 1 is composed of two exponential functions, and when the time is less than the reference time $t_0$, one function is used, as in (5). When the lifetime $t$ is zero in (5) [13], the equation can be rewritten as (6), from which $f_c$ is confirmed with the other three parameters $f_0$, $k_1$, and $t_0$:

(5) $f(t) = f_c - (f_c - f_0)\, e^{(t - t_0)k_1}, \quad t < t_0$

(6) $100\% = f_c\left(1 - e^{-t_0 k_1}\right) + f_0\, e^{-t_0 k_1}$

(7) $f_c = \dfrac{100 - f_0\, e^{-t_0 k_1}}{1 - e^{-t_0 k_1}}$.

When the time is greater than the reference time $t_0$, the recovery equation is represented as in (8). Figure 3 schematically shows the recovery rate curve using four-parameter method 1:

(8) $f(t) = f_0\, e^{-(t - t_0)k_2}, \quad t > t_0$.

Figure 3: Recovery rate curve using four-parameter method 1.
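A minimal sketch of four-parameter method 1 as defined in (5)–(8) is given below: $f_c$ is computed from $f_0$, $k_1$, and $t_0$ via (7) so that the recovery equals 100% at $t = 0$, and the curve switches to the exponential decay (8) beyond the reference time $t_0$. The numerical parameter values in the example are illustrative placeholders, not the SZGA-optimized values.

```python
import math

def critical_recovery(f0: float, k1: float, t0: float) -> float:
    """Eq. (7): f_c chosen so that the recovery is 100 % at t = 0."""
    return (100.0 - f0 * math.exp(-t0 * k1)) / (1.0 - math.exp(-t0 * k1))

def recovery_method1(t: float, t0: float, f0: float, k1: float, k2: float) -> float:
    """Four-parameter method 1, Eqs. (5) and (8), recovery in percent."""
    fc = critical_recovery(f0, k1, t0)
    if t < t0:
        return fc - (fc - f0) * math.exp((t - t0) * k1)   # Eq. (5)
    return f0 * math.exp(-(t - t0) * k2)                   # Eq. (8)

if __name__ == "__main__":
    # Illustrative (not fitted) parameters: reference time 100 h,
    # reference recovery 85 %, and two decay constants.
    params = dict(t0=100.0, f0=85.0, k1=0.02, k2=1.0e-3)
    for t in (0, 50, 100, 500, 1000):
        print(f"t = {t:5d} h  ->  recovery = {recovery_method1(t, **params):6.2f} %")
```

With $f_c$ defined by (7), the curve starts at exactly 100% recovery at $t = 0$ and is continuous at $t = t_0$, where both branches equal $f_0$. Method 2 of the next subsection simply fixes $f_c = 100$ instead.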
### 3.3. Four-Parameter Method 2

In four-parameter ($t_0, f_0, k_1, k_2$) method 2, $f_c$ is a constant and not dependent on $f_0$, $k_1$, and $t_0$ [14]. When the lifetime $t$ is $-\infty$, the recovery is assumed to be 100% to make the recovery curve symmetric, as in (9). Therefore, four-parameter method 2 is actually the same as method 1, except that $f_c = 100$ instead of the definition in (7) (Figure 4):

(9) $f(t) = 100 - (100 - f_0)\, e^{(t - t_0)k_1}$ for $t < t_0$; $\qquad f(t) = f_0\, e^{-(t - t_0)k_2}$ for $t > t_0$.

Figure 4: Recovery rate curve using four-parameter method 2. $f_c$: critical recovery, $f_L$: life recovery, $t_L$: lifetime, $f_0$: reference recovery, and $t_0$: reference time.

## 4. Experiments

Before the HALT of the VMQ, a rubber material property test was performed by the Korea Testing and Research Institute (KTR) and the Korea Polymer Testing & Research Institute (KOPTRI) according to ASTM standards. The test results are given in Table 1.

Table 1: Material properties of VMQ.

| Property group | Material property | Exp. value | Test standard |
| --- | --- | --- | --- |
| Basic properties | Hardness (IRHD) | 70 | ASTM D412 |
| | Tensile strength (MPa) | 7.8 | |
| | Ultimate elongation (%) | 150 | |
| Heat resistance | Change in hardness (%) | 0 | ASTM D573 |
| | Change in tensile strength (%) | −19.9 | |
| | Change in elongation (%) | −7.6 | |
| | Compression set (%) | 9.2 | ASTM D395, method B; 22 h, 150°C, plied |
| | Compression set (%) | 33.6 | ASTM D395, method B; 1000 h, 150°C, plied |
| Fluid resistance | Change in hardness (IRHD) | −5 | ASTM D471, ASTM oil number 1; 70 h, 150°C |
| | Change in tensile strength (%) | −10.1 | |
| | Change in elongation (%) | −3.3 | |
| | Change in volume (%) | 3.4 | |
| Fluid resistance | Change in hardness (IRHD) | −10 | ASTM D471, ASTM oil number 3; 70 h, 150°C |
| | Change in volume (%) | 17.5 | |
| Low temp. brittleness | Brittleness (−55°C, 3 min) | No cracking | ASTM D2137, method A |

The ability of a rubber to return to its original thickness after prolonged compression is measured by a compression set test at high and low temperatures. The compression set tests in this study were carried out under a compression rate of 30% with VMQ silicone rubber gaskets. For the lifetime prediction at high temperature, the compression set test was performed with components that were heat-aged in an oven at temperatures of 150, 180, and 200°C for periods ranging from 20 to 500 h, and a cold resistance test was performed at a temperature of –70°C for periods ranging from 48 to 120 h. The dimensions of the specimen (diameter = 29 mm and thickness = 12.5 mm) and the compression set were determined according to ISO 815-1 (Figure 5) [15]. The compression set and recovery rate were calculated using

(10) $\mathrm{CS}\,(\%) = \dfrac{l_0 - l_2}{l_0 - l_1} \times 100, \qquad \mathrm{Recovery}\,(\%) = 100 - \mathrm{CS}$,

where CS = compression set, $l_0$ = thickness of the specimen, $l_1$ = thickness in the compressed state, and $l_2$ = thickness after removal of the load.

Figure 5: Jig for measuring the compression set.

Table 2 lists the experimental data for each temperature. First, the experiments were performed at 150°C according to method B in ASTM D395. The higher temperature of 180°C was adopted for the HALT as in [3]. The temperature of 200°C seems rather high. The experimental data, with the exception of a couple of erroneous data points, were selected to optimize the four parameters in four-parameter methods 1 and 2 by SZGA.

Table 2: Results of the compression set test with a compression rate of 30% at (a) high temperatures and (b) a low temperature.

(a)

| Temp. (°C) | Time (h) | CS (%) | Recovery (%) |
| --- | --- | --- | --- |
| 150 | 22 | 9.20 | 90.80 |
| 150 | 1000 | 33.60 | 66.40 |
| 180 | 48 | 10.33 | 89.67 |
| 180 | 96 | 25.67 | 74.33 |
| 180 | 120 | 31.00 | 69.00 |
| 180 | 196 | 32.67 | 67.33 |
| 180 | 240 | 34.33 | 65.67 |
| 200 | 20 | 10.67 | 89.33 |
| 200 | 30 | 18.33 | 81.67 |
| 200 | 40 | 19.67 | 80.33 |
| 200 | 50 | 30.00 | 70.00 |
| 200 | 200 | 83.00 | 17.00 |
| 200 | 300 | 86.00 | 14.00 |
| 200 | 400 | 92.00 | 8.00 |
| 200 | 500 | 97.00 | 3.00 |

(b)

| Temp. (°C) | Time (h) | CS (%) | Recovery (%) |
| --- | --- | --- | --- |
| −70 | 48 | 1.93 | 98.07 |
| −70 | 72 | 2.73 | 97.27 |
| −70 | 96 | 3.34 | 96.66 |
| −70 | 120 | 3.78 | 96.22 |

## 5. Results

Representative automobile companies require the compression set rates of engine head rubber gaskets to be less than 40%. In other applications, the lifetime of a gasket is defined as the time until its recovery rate drops to 60%. Both heat and cold resistance tests were performed. Experimental data at each temperature were obtained from the compression set test, and the recovery rate curves were fit using the SZGA method to find the optimal parameters with the smallest MSE between the best-fit function and the experimental data. Subsequently, the lifetime corresponding to a recovery rate of 60% was obtained from the best-fit recovery rate curve using a bisection method to solve the nonlinear equation.
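Extracting the lifetime from a fitted curve, i.e., finding the time at which the recovery rate crosses 60%, is a one-dimensional root-finding problem. The sketch below applies plain interval bisection to an illustrative, monotonically decreasing recovery curve; the exponential stand-in and its time constant are assumptions for the example and are not the fitted VMQ parameters.

```python
import math

def recovery(t: float) -> float:
    """Illustrative recovery curve (%); stands in for the fitted
    four-parameter curve at one test temperature."""
    return 100.0 * math.exp(-t / 1500.0)

def lifetime_by_bisection(target_pct: float, t_lo: float, t_hi: float,
                          tol: float = 1e-6) -> float:
    """Find t with recovery(t) == target_pct by bisection.
    Assumes recovery() is monotonically decreasing on [t_lo, t_hi]."""
    g = lambda t: recovery(t) - target_pct
    if g(t_lo) < 0 or g(t_hi) > 0:
        raise ValueError("target recovery is not bracketed by [t_lo, t_hi]")
    while t_hi - t_lo > tol:
        mid = 0.5 * (t_lo + t_hi)
        if g(mid) > 0:
            t_lo = mid      # still above the target: lifetime lies to the right
        else:
            t_hi = mid      # at or below the target: lifetime lies to the left
    return 0.5 * (t_lo + t_hi)

if __name__ == "__main__":
    t60 = lifetime_by_bisection(60.0, t_lo=1.0, t_hi=1.0e5)
    print(f"recovery crosses 60 % at t ~ {t60:,.1f} h")  # ~1500*ln(100/60) ≈ 766 h
```

In the paper this root-finding step is carried out on the SZGA-fitted method-1 and method-2 curves at 150, 180, and 200°C, and the resulting lifetimes feed the Arrhenius regression described next.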
Finally, a linear regression model was fit by superimposing the recovery rate curve on the Arrhenius plot to obtain the long-term lifetime at the working temperature.

### 5.1. Lifetime of VMQ Gaskets under High Temperatures

The lifetime evaluations were made on the two differently regressed curves of methods 1 and 2.

#### 5.1.1. Recovery Rate Curves from Four-Parameter Methods 1 and 2

The recovery rate curves were acquired using four-parameter methods 1 and 2. The SZGA method was used to optimize the four parameters, and the recovery rate curves of the two methods were fit using these four parameters. The recovery rate curves were then compared with the experimental data, which showed that the curves could be fit properly using the four parameters. Figures 6 and 7 [14, 16] show the recovery rate curves at different temperatures and a compression rate of 30%.

Figure 6: Recovery rate curves using four-parameter method 1 with a compression rate of 30% at temperatures of (a) 150°C, (b) 180°C, and (c) 200°C.

Figure 7: Recovery rate curves using four-parameter method 2 with a compression rate of 30% at temperatures of (a) 150°C, (b) 180°C, and (c) 200°C.

#### 5.1.2. Mean Squared Error of Four-Parameter Methods 1 and 2

The MSE [12] can be calculated to gauge the extent to which the data points vary from the recovery rate curves. Table 3 compares the MSEs of four-parameter methods 1 and 2. For the life prediction of the rubber gasket, the method giving the least MSE can be used:

(11) $\mathrm{MSE} = \dfrac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2$.

Table 3: MSEs of the four-parameter methods at a compression rate of 30%.

| Method | 150°C | 180°C | 200°C | Total MSE |
| --- | --- | --- | --- | --- |
| Four-parameter method 1 | 0 | 10.432 | 3.998 | 14.430 |
| Four-parameter method 2 | 0 | 6.712 | 3.988 | 10.700 |

One can see that four-parameter method 2 yields a smaller MSE than method 1 for a compression rate of 30%.

#### 5.1.3. Results of Quantitative Lifetime Prediction

Compression set rates of less than 40% are required by major automobile companies. The precise lifetime corresponding to a recovery rate of 60% can be determined using the bisection method from the four-parameter equations. The method with the minimum MSE is the best choice to obtain the lifetime with a 60% recovery rate. The lifetime data acquired from each recovery rate curve were plotted using linear regression, and a linear equation was derived using the Origin Pro system (Figure 8). This equation can be used to estimate the lifetime at a specific temperature from the Arrhenius equation. Subsequently, the lifetime and lifetime mileage were calculated by

(12) $\mathrm{Time\,(h)} = \exp(\ln t), \qquad \mathrm{Lifetime\ mileage\,(mi)} = 30\,\mathrm{mph} \times \mathrm{Time\,(h)}$.

Figure 8: Arrhenius plots for four-parameter methods (a) 1 and (b) 2.

Under operating conditions of 30 mph at 100°C, the lifetime mileage values of the rubber gasket predicted by four-parameter methods 1 and 2 are 6,836,220 and 7,805,780 mi, respectively, as shown in Table 4. Because the operating time is assumed to be an average of 3 h/day, the predicted quantitative lifetimes of the rubber gasket calculated using four-parameter methods 1 and 2 are 208 and 273 years, respectively. Thus, the lifetime of the VMQ silicone rubber gasket predicted at a working temperature of 100°C meets the performance requirements of 100,000 mi and 10 years.

Table 4: Lifetime mileage determined using the Arrhenius equation (compression rate 30%). Four-parameter method 1: $\ln(t) = 14.06674 \times 1000/(T(^\circ\mathrm{C}) + 273) - 26.37589$; four-parameter method 2: $\ln(t) = 14.39015 \times 1000/(T(^\circ\mathrm{C}) + 273) - 26.11032$.

| Temperature (°C) | Lifetime mileage, method 1 (10³ miles) | Lifetime mileage, method 2 (10³ miles) |
| --- | --- | --- |
| 80 | 57,911 | 69,453 |
| 100 | 6,836 | 7,805 |
| 120 | 1,003 | 1,095 |
| 140 | 177 | 186 |
| 160 | 36 | 37 |
| 180 | 9 | 9 |
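The last step of Section 5.1.3, evaluating the regressed Arrhenius line at a service temperature and converting the extrapolated time to mileage with (12), can be checked directly. The sketch below uses the method-2 coefficients listed in Table 4 (the method-2 column is reproduced to within rounding); the two coefficients and the 30 mph assumption come from the paper, while the helper names are illustrative.

```python
import math

# Arrhenius line for four-parameter method 2, as reported in Table 4:
#   ln(t[h]) = 14.39015 * 1000 / (T[degC] + 273) - 26.11032
SLOPE, INTERCEPT = 14.39015, -26.11032
SPEED_MPH = 30.0   # average vehicle speed assumed in Eq. (12)

def lifetime_hours(temp_c: float) -> float:
    """Extrapolated lifetime (h) at a service temperature in deg C."""
    return math.exp(SLOPE * 1000.0 / (temp_c + 273.0) + INTERCEPT)

def lifetime_mileage(temp_c: float) -> float:
    """Eq. (12): lifetime mileage = speed * time."""
    return SPEED_MPH * lifetime_hours(temp_c)

if __name__ == "__main__":
    for temp in (80, 100, 120, 140, 160, 180):
        print(f"{temp:3d} degC : {lifetime_hours(temp):14,.0f} h "
              f"-> {lifetime_mileage(temp) / 1000:10,.0f} x10^3 miles")
```

At 100°C this gives roughly 7.8 million miles, matching the method-2 entry of Table 4.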
### 5.2. Lifetime of VMQ Gaskets under Low Temperatures

A closer look at the experimental data at a low temperature (Table 2(b)) indicates that the relationship between the recovery rate and lifetime can be represented as a straight line. Thus, a linear regression was performed using the experimental data from the compression set test at a low temperature, and the intercept and slope of the linear equation were obtained. This means that only two parameters were needed to predict the lifetime, unlike in the four-parameter methods. Figure 9 shows the regression line from the linear relation between the recovery rate and lifetime. In cold regions such as the Antarctic, where temperatures can reach –70°C, shrinkage and shrinkage-induced leakage may occur in the rubber because of the low temperatures, and these problems can reduce the recovery rate of VMQ. Thus, a required recovery rate higher than the standard 60% should be applied to predict the lifetime in this case.

Table 5 shows the lifetime of the rubber gasket obtained from the best-fit line at different recovery rates ranging from 70 to 90%. After establishing the standard recovery rate, depending on the method and purpose of use, the lifetime of VMQ can be predicted for each case. For example, according to the linear equation, the lifetime of the VMQ rubber gasket corresponding to a recovery rate of 80% was 357,960 h. Assuming that the rubber gasket in an automobile radiator is continuously exposed to temperatures of –70°C, its lifetime would be 41 years. This leads to the conclusion that the VMQ rubber gasket amply satisfies the lifetime requirement of 10 years.

Table 5: Lifetimes from the fitted line, recovery rate (%) $= -2.02719 \times \ln(t) + 105.92382$.

| Time (h) | $\ln(t)$ | Compression set (%) | Recovery rate (%) |
| --- | --- | --- | --- |
| 2,579 | 7.8552 | 10 | 90 |
| 30,384 | 10.3217 | 15 | 85 |
| 357,960 | 12.7882 | 20 | 80 |
| 4,217,197 | 15.2547 | 25 | 75 |
| 49,683,641 | 17.7212 | 30 | 70 |

Figure 9: Recovery rate curve at low temperature (–70°C).
## 6. Conclusion

A compression set test was carried out on the developed VMQ gaskets at a compression rate of 30%. The SZGA method was applied to determine the optimal four parameters for the two four-parameter methods used in this study and to calculate the recovery rate curves. The MSEs between the regression functions of the different models and the experimental data were compared. By comparing the results of both methods, it was determined that either method can be used to accurately predict gasket lifetime because they showed only small differences in their results. We obtained the target lifetime for a recovery rate of 60% (80% for –70°C) from the fitted recovery rate curve using the bisection method at each temperature. Referring to the data points of the 60% (80% for –70°C) recovery found from the recovery rate curves, a linear Arrhenius plot in the $\ln t$–$(1/T)$ plane was constructed to determine the quantitative lifetime at any given temperature.

The results are summarized as follows.

(1) A procedure using four-parameter methods 1 and 2 to predict the long-term lifetimes of rubber gaskets was suggested.

(2) Using four-parameter methods 1 and 2, the quantitative lifetime of a rubber gasket could be accurately predicted at any given temperature.

(3) The lifetime mileage of VMQ was predicted to be 6,836,220 and 7,805,780 mi using four-parameter methods 1 and 2, respectively, at a working temperature of 100°C.

(4) The lifetime of the VMQ rubber is 41 years at an ambient temperature of –70°C based on the standard recovery rate of 80%.

--- *Source: 101068-2015-10-11.xml*
2015
# Ultrasound of Primary Aneurysmal Bone Cyst **Authors:** Katrina N. Glazebrook; Gary L. Keeney; Michael G. Rock **Journal:** Case Reports in Radiology (2014) **Publisher:** Hindawi Publishing Corporation **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2014/101069 --- ## Abstract Aneurysmal bone cysts (ABC) are rare, benign, expansile lesions of bone often found in the metaphyses of long bones in pediatric and young adult population. Multiple fluid levels are typically seen on imaging with magnetic resonance imaging (MRI) or computed tomography (CT). We describe a case of a primary ABC in the fibula of a 34-year-old man diagnosed on ultrasound with a mobile fluid level demonstrated sonographically. --- ## Body ## 1. Introduction Aneurysmal bone cysts (ABC) are rare, benign, expansile lesions of bone most commonly found in the metaphyses of long bones in pediatric and young adult population. The lesion is characterized by blood filled spaces separated by fibrous septa that may contain osteoclast-like giant cells. It can be a primary lesion or arise adjacent to other benign or malignant osseous processes. Imaging with magnetic resonance imaging (MRI) or computed tomography (CT) typically shows multiple fluid levels. Bone lesions are not typically evaluated with ultrasound as the sound waves are not able to penetrate the cortex. If however the cortex is thinned, expanded, or disrupted, ultrasound can identify primary and secondary bone tumors. Ultrasound is often used as a first imaging study for evaluation of palpable superficial masses. Recognition of a lesion to be originating from the bone rather than soft tissue and identification of mobile fluid/fluid levels can suggest the diagnosis of aneurysmal bone cyst and direct the patient to the appropriate treatment. Little has been written in the literature about the sonographic appearance of ABC. We describe the ultrasound, radiographic, and MRI appearance of an aneurysmal bone cyst in the distal fibula. ## 2. Case Report A 34-year-old man presented to his family practitioner with a two-month history of swelling and discomfort in the left lateral lower leg just above his ankle. There was no preceding history of trauma. Physical examination revealed soft tissue fullness at the junction of the proximal two-thirds and distal one-third of the left fibula which was painful to touch. The patient was sent for an ultrasound for evaluation and possibly to aspirate a presumed ganglion cyst.Ultrasound was performed using a General Electric Healthcare Logiq E9 linear ML 6–15 MHz transducer (GE Healthcare Wauwatosa, WI). A cortically based lesion was noted arising from the anterolateral cortex of the fibula with elevation of the periosteum and a thin rim of echogenicity surrounding the mass, presumed to be a thin shell of bone (Figure1) which appeared intact without adjacent soft tissue mass. Two fluid-fluid levels were noted within the mass which were mobile on patient rotation indicating the cystic nature of the lesion’s contents. No internal soft tissue mass extending from the fibular medullary canal was noted. There was increased vascularity in the adjacent soft tissues on color Doppler evaluation consistent with inflammatory changes. The most likely diagnosis was a cortically based aneurysmal bone cyst (ABC) and not a soft tissue solid or cystic mass. 
In view of the periosteal elevation or sonographic “Codman’s triangle,” the mass was thought to be centered and to have originated within the bone rather than to be a soft tissue mass such as a ganglion cyst which had eroded into the bone. A Brodie abscess or subacute osteomyelitis could have this appearance as symptoms can be indolent, but these tend to be metaphyseal and centrally located. The adjacent fibular cortex was normal with no adjacent soft tissue mass to suggest an underlying aggressive bone lesion such as osteosarcoma or metastasis with secondary ABC formation.Ultrasound of the distal left fibula in a 34-year-old man. ((a) and (b)) Transverse and longitudinal US of the distal left fibula demonstrates an expansile, cortically centered bone lesion extending from the fibular metadiaphysis into the anterior compartment of the calf with a thin echogenic rim of cortex (short arrows) which was contiguous with the underlying fibular cortex (long arrow). TIB: tibia, FIB: fibula. (c) Transverse scan of the distal fibula has been rotated to the same orientation as the MRI (see Figure2). This shows increased vascularity in the adjacent soft tissue about the fibular lesion on color Doppler evaluation (short arrow), without vascularity seen within the lesion. Fluid level is noted (long arrow). (a) (b) (c)MRI was obtained for preoperative planning, consisting of axial and sagittal T1-weighted and fat saturated T2-weighted images with axial and sagittal fat saturated spoiled gradient echo sequence after gadolinium administration. This confirmed a cortically based mass arising from the fibula extending into the anterior compartment, with fluid-fluid levels seen on the T2-weighted sequence (Figure2). There was peripheral enhancement and increased T2 signal in the adjacent soft tissues and fibular marrow consistent with inflammatory changes corresponding to the increased vascularity seen sonographically. No primary lesion could be seen and this was presumed to be a primary cortical ABC.MRI of the distal fibula. (a) Axial T2-weighted MRI with fat saturation demonstrates a cortically based lesion within the distal fibula with a large fluid-fluid level (arrow). Peripheral increased T2 signal around the ABC is consistent with inflammatory changes. (b) Axial T1-weighted MRI shows the lesion is cortically based and has intermediate T1 signal within the mass. (c) Axial spoiled gradient with fat saturation following gadolinium shows peripheral enhancement only corresponding to the increased vascularity seen on color Doppler evaluation (Figure1(c)). (a) (b) (c)Radiograph demonstrated a cortically based lesion with a narrow zone of transition, elevating the periosteum with a thin shell of bone (Figure3). No other lesions were seen in the fibula or tibia. The patient underwent surgical excision of the lesion with curettage. A histologic diagnosis of aneurysmal bone cyst was made forming a 3.2 × 1.7 × 0.9 cm cystic mass (Figure 4). Microscopically, a cystic space with peripheral shell of reactive bone with numerous osteoclast giant cells and scattered histiocytes and lymphocytes was noted. Some of the histiocytes contain hemosiderin.Figure 3 AP radiograph of the distal left fibula shows a lytic lesion with narrow zone of transition within the fibular cortex. There is periosteal elevation and a thin rim of bone surrounding the lesion (arrow).Figure 4 Photograph of the gross specimen shows the cystic space within the cortex of the bone. ## 3. 
Discussion Aneurysmal bone cyst is a nonneoplastic expansile lesion of bone, mainly affecting children and young adults [1]. In a study of 238 cases of ABC by Vergel De Dios et al., for patients with ABC of the long bones, 86% of patients were younger than 20 years (range of 1.5 to 69 years) [2]. More than 80% of lesions were in the long bones, commonly in the metaphysis, flat bones, or spinal column. Pathologically the lesion consists of channels or spaces separated by fibrous septa which may contain osteoclast-like giant cells and bone trabeculae. Thick new bone formation at the edge of the lesions was present in 52% in the long bones with calcified matrix; a cartilage aura and adjacent myxoid regions were seen in 11 to 16% of long bone ABC. Multiple bones may be affected. There is a rare solid variant which does not contain the cavernous spaces but otherwise has identical histologic findings. Pain and swelling are the common clinical presentations as in our case. Radiographically, ABCs were found centrally or eccentrically in the medulla in 23% and 58%, respectively. In 19% of cases the ABCs were centered in the cortex or on the surface of bone as in our case. ABCs involved the metaphysis or metadiaphysis in 58% of cases. Periosteal new bone was seen in 66% of cases with a thin rim of bone over the external surface of the ABC in 63% of cases which can be a helpful radiographic sign. It can exist as a primary lesion or be secondary to either a benign bone lesion such as chondroblastoma or a malignant bone tumor. In our case no additional bone lesion was seen on imaging or histologically.High resolution ultrasound is increasingly used for initial assessment of ambiguous musculoskeletal soft tissue lesions and for sonographically guided biopsy [3]. Ultrasound is generally not helpful in intramedullary bone lesions as sound waves cannot penetrate the normal cortex [4]. Ultrasound can, however, readily identify primary and secondary bone tumors where there are cortical disruption and soft tissue masses [4]. These areas may then be targeted for percutaneous biopsy under ultrasound guidance which is a quick procedure without ionizing radiation. Ultrasound may also be useful in evaluation of postoperative sites for tumor recurrence particularly if there is significant artefact on MRI or CT due to orthopedic hardware. If the cortex is sufficiently thinned, the sound waves may have sufficient sound transmission to identify the underlying bone lesion. Fornage et al. showed the cystic nature of a calcaneal lesion with ultrasound and used the US to guide a needle aspiration for confirmation of a calcaneal cyst [5]. Haber et al. described the ultrasound findings in a primary ABC in the scapula in a 1-year-old seen on radiograph as an expansile lesion [6]. They found a cystic mass with a thin echogenic shell and multiple intraosseous fluid levels. Suh and Han described fluid levels in large expansile lesion in the ilium [7] with CT for confirmation. In both cases, the cortices of the affected bones were markedly thinned allowing sound to be transmitted and for characterization of the internal structure of the lesions. In both cases, the fluid levels were seen to move with changes in position of the patients as in our case, confirming the cystic nature of the lesion.Fluid levels seen with ABC have been well documented with CT and MRI [8, 9]. Since the initial description, fluid-fluid levels have been described in many bone pathologies and so this finding has become a nonspecific observation. 
O’Donnell and Saifuddin evaluated the prevalence and diagnostic significance of fluid-fluid levels (FFLs) in focal bone lesions in 738 consecutive patients [10]. FFLs were present in 83 patients (11.2%). Malignant neoplasms most commonly showed FFLs in less than third of the lesion. With increase in the total volume of FFLs, there was a decrease in percentage of malignancy. There were no malignant lesions if 100% of the lesion showed FFL changes. Some aggressive high grade predominantly necrotic bone tumors, particularly telangiectatic osteosarcomas, may have greater than 2/3 of the lesion containing FFLs. These tumors often show a small solid component but differentiation with ABC’s can be difficult on MRI. Radiographs may be helpful; however, ABCs can show an aggressive appearance and malignancies may have a more indolent radiographic appearance. Sundaram et al. describe four cases of osteosarcoma with clinical and imaging findings suggestive of simple or aneurysmal bone cyst radiographically [11]. One tumor was a giant cell-rich variant of osteosarcoma with focal aneurysmal bone cyst-like areas within the navicular bone. MRI showed fluid-fluid levels with expansion of the bone and no soft tissue mass. Clinical features and radiographs should be used to differentiate between TOS and ABC. Microscopically, the tumors were not cystic or telangiectatic but were conventional osteosarcoma and osteoclast-rich osteosarcoma and so did not pose a pathologic dilemma.Treatment of ABC includes curettage with bone grafting if technically possible. Curettage without bone grafting can be safely performed with protected weightbearing until healing has occurred [12]. Radiation had been employed in the past, though this is now avoided to prevent post radiation sarcomas. Recurrence can occur in up to 19% of cases. ## 4. Conclusion Ultrasound may be the first imaging test to evaluate a superficial mass. It is important for the radiologist to be able to recognize a bone rather than soft tissue neoplasm. If the cortex is sufficiently thinned, then ultrasound can demonstrate fluid levels suggesting an aneurysmal bone cyst. Appropriate additional imaging for preoperative planning and surgical management can then be performed without delay in diagnosis. --- *Source: 101069-2014-01-23.xml*
101069-2014-01-23_101069-2014-01-23.md
12,764
Ultrasound of Primary Aneurysmal Bone Cyst
Katrina N. Glazebrook; Gary L. Keeney; Michael G. Rock
Case Reports in Radiology (2014)
Medical & Health Sciences
Hindawi Publishing Corporation
CC BY 4.0
http://creativecommons.org/licenses/by/4.0/
10.1155/2014/101069
101069-2014-01-23.xml
--- ## Abstract Aneurysmal bone cysts (ABC) are rare, benign, expansile lesions of bone often found in the metaphyses of long bones in pediatric and young adult population. Multiple fluid levels are typically seen on imaging with magnetic resonance imaging (MRI) or computed tomography (CT). We describe a case of a primary ABC in the fibula of a 34-year-old man diagnosed on ultrasound with a mobile fluid level demonstrated sonographically. --- ## Body ## 1. Introduction Aneurysmal bone cysts (ABC) are rare, benign, expansile lesions of bone most commonly found in the metaphyses of long bones in pediatric and young adult population. The lesion is characterized by blood filled spaces separated by fibrous septa that may contain osteoclast-like giant cells. It can be a primary lesion or arise adjacent to other benign or malignant osseous processes. Imaging with magnetic resonance imaging (MRI) or computed tomography (CT) typically shows multiple fluid levels. Bone lesions are not typically evaluated with ultrasound as the sound waves are not able to penetrate the cortex. If however the cortex is thinned, expanded, or disrupted, ultrasound can identify primary and secondary bone tumors. Ultrasound is often used as a first imaging study for evaluation of palpable superficial masses. Recognition of a lesion to be originating from the bone rather than soft tissue and identification of mobile fluid/fluid levels can suggest the diagnosis of aneurysmal bone cyst and direct the patient to the appropriate treatment. Little has been written in the literature about the sonographic appearance of ABC. We describe the ultrasound, radiographic, and MRI appearance of an aneurysmal bone cyst in the distal fibula. ## 2. Case Report A 34-year-old man presented to his family practitioner with a two-month history of swelling and discomfort in the left lateral lower leg just above his ankle. There was no preceding history of trauma. Physical examination revealed soft tissue fullness at the junction of the proximal two-thirds and distal one-third of the left fibula which was painful to touch. The patient was sent for an ultrasound for evaluation and possibly to aspirate a presumed ganglion cyst.Ultrasound was performed using a General Electric Healthcare Logiq E9 linear ML 6–15 MHz transducer (GE Healthcare Wauwatosa, WI). A cortically based lesion was noted arising from the anterolateral cortex of the fibula with elevation of the periosteum and a thin rim of echogenicity surrounding the mass, presumed to be a thin shell of bone (Figure1) which appeared intact without adjacent soft tissue mass. Two fluid-fluid levels were noted within the mass which were mobile on patient rotation indicating the cystic nature of the lesion’s contents. No internal soft tissue mass extending from the fibular medullary canal was noted. There was increased vascularity in the adjacent soft tissues on color Doppler evaluation consistent with inflammatory changes. The most likely diagnosis was a cortically based aneurysmal bone cyst (ABC) and not a soft tissue solid or cystic mass. In view of the periosteal elevation or sonographic “Codman’s triangle,” the mass was thought to be centered and to have originated within the bone rather than to be a soft tissue mass such as a ganglion cyst which had eroded into the bone. A Brodie abscess or subacute osteomyelitis could have this appearance as symptoms can be indolent, but these tend to be metaphyseal and centrally located. 
The adjacent fibular cortex was normal with no adjacent soft tissue mass to suggest an underlying aggressive bone lesion such as osteosarcoma or metastasis with secondary ABC formation.Ultrasound of the distal left fibula in a 34-year-old man. ((a) and (b)) Transverse and longitudinal US of the distal left fibula demonstrates an expansile, cortically centered bone lesion extending from the fibular metadiaphysis into the anterior compartment of the calf with a thin echogenic rim of cortex (short arrows) which was contiguous with the underlying fibular cortex (long arrow). TIB: tibia, FIB: fibula. (c) Transverse scan of the distal fibula has been rotated to the same orientation as the MRI (see Figure2). This shows increased vascularity in the adjacent soft tissue about the fibular lesion on color Doppler evaluation (short arrow), without vascularity seen within the lesion. Fluid level is noted (long arrow). (a) (b) (c)MRI was obtained for preoperative planning, consisting of axial and sagittal T1-weighted and fat saturated T2-weighted images with axial and sagittal fat saturated spoiled gradient echo sequence after gadolinium administration. This confirmed a cortically based mass arising from the fibula extending into the anterior compartment, with fluid-fluid levels seen on the T2-weighted sequence (Figure2). There was peripheral enhancement and increased T2 signal in the adjacent soft tissues and fibular marrow consistent with inflammatory changes corresponding to the increased vascularity seen sonographically. No primary lesion could be seen and this was presumed to be a primary cortical ABC.MRI of the distal fibula. (a) Axial T2-weighted MRI with fat saturation demonstrates a cortically based lesion within the distal fibula with a large fluid-fluid level (arrow). Peripheral increased T2 signal around the ABC is consistent with inflammatory changes. (b) Axial T1-weighted MRI shows the lesion is cortically based and has intermediate T1 signal within the mass. (c) Axial spoiled gradient with fat saturation following gadolinium shows peripheral enhancement only corresponding to the increased vascularity seen on color Doppler evaluation (Figure1(c)). (a) (b) (c)Radiograph demonstrated a cortically based lesion with a narrow zone of transition, elevating the periosteum with a thin shell of bone (Figure3). No other lesions were seen in the fibula or tibia. The patient underwent surgical excision of the lesion with curettage. A histologic diagnosis of aneurysmal bone cyst was made forming a 3.2 × 1.7 × 0.9 cm cystic mass (Figure 4). Microscopically, a cystic space with peripheral shell of reactive bone with numerous osteoclast giant cells and scattered histiocytes and lymphocytes was noted. Some of the histiocytes contain hemosiderin.Figure 3 AP radiograph of the distal left fibula shows a lytic lesion with narrow zone of transition within the fibular cortex. There is periosteal elevation and a thin rim of bone surrounding the lesion (arrow).Figure 4 Photograph of the gross specimen shows the cystic space within the cortex of the bone. ## 3. Discussion Aneurysmal bone cyst is a nonneoplastic expansile lesion of bone, mainly affecting children and young adults [1]. In a study of 238 cases of ABC by Vergel De Dios et al., for patients with ABC of the long bones, 86% of patients were younger than 20 years (range of 1.5 to 69 years) [2]. More than 80% of lesions were in the long bones, commonly in the metaphysis, flat bones, or spinal column. 
Pathologically the lesion consists of channels or spaces separated by fibrous septa which may contain osteoclast-like giant cells and bone trabeculae. Thick new bone formation at the edge of the lesions was present in 52% in the long bones with calcified matrix; a cartilage aura and adjacent myxoid regions were seen in 11 to 16% of long bone ABC. Multiple bones may be affected. There is a rare solid variant which does not contain the cavernous spaces but otherwise has identical histologic findings. Pain and swelling are the common clinical presentations as in our case. Radiographically, ABCs were found centrally or eccentrically in the medulla in 23% and 58%, respectively. In 19% of cases the ABCs were centered in the cortex or on the surface of bone as in our case. ABCs involved the metaphysis or metadiaphysis in 58% of cases. Periosteal new bone was seen in 66% of cases with a thin rim of bone over the external surface of the ABC in 63% of cases which can be a helpful radiographic sign. It can exist as a primary lesion or be secondary to either a benign bone lesion such as chondroblastoma or a malignant bone tumor. In our case no additional bone lesion was seen on imaging or histologically.High resolution ultrasound is increasingly used for initial assessment of ambiguous musculoskeletal soft tissue lesions and for sonographically guided biopsy [3]. Ultrasound is generally not helpful in intramedullary bone lesions as sound waves cannot penetrate the normal cortex [4]. Ultrasound can, however, readily identify primary and secondary bone tumors where there are cortical disruption and soft tissue masses [4]. These areas may then be targeted for percutaneous biopsy under ultrasound guidance which is a quick procedure without ionizing radiation. Ultrasound may also be useful in evaluation of postoperative sites for tumor recurrence particularly if there is significant artefact on MRI or CT due to orthopedic hardware. If the cortex is sufficiently thinned, the sound waves may have sufficient sound transmission to identify the underlying bone lesion. Fornage et al. showed the cystic nature of a calcaneal lesion with ultrasound and used the US to guide a needle aspiration for confirmation of a calcaneal cyst [5]. Haber et al. described the ultrasound findings in a primary ABC in the scapula in a 1-year-old seen on radiograph as an expansile lesion [6]. They found a cystic mass with a thin echogenic shell and multiple intraosseous fluid levels. Suh and Han described fluid levels in large expansile lesion in the ilium [7] with CT for confirmation. In both cases, the cortices of the affected bones were markedly thinned allowing sound to be transmitted and for characterization of the internal structure of the lesions. In both cases, the fluid levels were seen to move with changes in position of the patients as in our case, confirming the cystic nature of the lesion.Fluid levels seen with ABC have been well documented with CT and MRI [8, 9]. Since the initial description, fluid-fluid levels have been described in many bone pathologies and so this finding has become a nonspecific observation. O’Donnell and Saifuddin evaluated the prevalence and diagnostic significance of fluid-fluid levels (FFLs) in focal bone lesions in 738 consecutive patients [10]. FFLs were present in 83 patients (11.2%). Malignant neoplasms most commonly showed FFLs in less than third of the lesion. With increase in the total volume of FFLs, there was a decrease in percentage of malignancy. 
There were no malignant lesions if 100% of the lesion showed FFL changes. Some aggressive, high-grade, predominantly necrotic bone tumors, particularly telangiectatic osteosarcomas, may have greater than 2/3 of the lesion containing FFLs. These tumors often show a small solid component, but differentiation from ABCs can be difficult on MRI. Radiographs may be helpful; however, ABCs can show an aggressive appearance, and malignancies may have a more indolent radiographic appearance. Sundaram et al. described four cases of osteosarcoma with clinical and imaging findings radiographically suggestive of simple or aneurysmal bone cyst [11]. One tumor was a giant cell-rich variant of osteosarcoma with focal aneurysmal bone cyst-like areas within the navicular bone. MRI showed fluid-fluid levels with expansion of the bone and no soft tissue mass. Clinical features and radiographs should be used to differentiate between telangiectatic osteosarcoma (TOS) and ABC. Microscopically, the tumors were not cystic or telangiectatic but were conventional osteosarcoma and osteoclast-rich osteosarcoma and so did not pose a pathologic dilemma.

Treatment of ABC includes curettage with bone grafting if technically possible. Curettage without bone grafting can be safely performed with protected weightbearing until healing has occurred [12]. Radiation was employed in the past, though it is now avoided to prevent postradiation sarcomas. Recurrence can occur in up to 19% of cases.

## 4. Conclusion

Ultrasound may be the first imaging test used to evaluate a superficial mass. It is important for the radiologist to be able to recognize a lesion arising from bone rather than soft tissue. If the cortex is sufficiently thinned, ultrasound can demonstrate fluid levels suggesting an aneurysmal bone cyst. Appropriate additional imaging for preoperative planning and surgical management can then be performed without delay in diagnosis.

--- *Source: 101069-2014-01-23.xml*
# An Analytical Solution for Predicting the Vibration-Fatigue-Life in Bimodal Random Processes

**Authors:** Chaoshuai Han; Yongliang Ma; Xianqiang Qu; Mindong Yang
**Journal:** Shock and Vibration (2017)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2017/1010726

---

## Abstract

Predicting the vibration-fatigue-life of engineering structures subjected to random loading is a critical issue. Frequency methods are generally adopted to deal with this problem. This paper focuses on bimodal spectra methods, including the Jiao-Moan method, the Fu-Cebon method, and the Modified Fu-Cebon method. It has been proven that these three methods can give acceptable fatigue damage results. However, none of them has an analytical solution: the Jiao-Moan method uses an approximate solution, while the Fu-Cebon and Modified Fu-Cebon methods must be evaluated by numerical integration, which is not convenient in engineering applications. Thus, an analytical solution for predicting the vibration-fatigue-life under bimodal spectra is developed. The accuracy of the analytical solution is compared with numerical integration. The results show that very good agreement between the analytical solution and numerical integration can be obtained. Finally, a case study on offshore structures is conducted, and a bandwidth correction factor is computed using the proposed analytical solution.

---

## Body

## 1. Introduction

Engineering structures from different fields (e.g., aircraft, wind energy installations, and automobiles) are commonly subjected to random vibration loading. These loads often cause structural fatigue failure. Thus, it is important to assess the vibration-fatigue-life [1, 2].

Vibration fatigue analysis commonly consists of two processes: structural dynamic analysis and results postprocessing. Structural dynamic analysis provides an accurate prediction of the stress responses of fatigue hot-spots. Once the stress responses are obtained, vibration fatigue analysis can be performed. Existing technologies such as operational modal analysis [3], finite element modeling (FEM), and accelerated vibration tests are mature and applicable for obtaining the stress of structures [4, 5]. Therefore, the crucial part of a vibration fatigue analysis is the results postprocessing.

Postprocessing is then used to calculate fatigue damage from the known stress responses. When the stress responses are time series, fatigue can be evaluated using a traditional time domain method. However, the stress responses of real structures are mostly characterized by the power spectral density (PSD) function. Thus, frequency domain methods have become popular in vibration fatigue analysis [6, 7].

A bimodal spectrum is a particular PSD encountered in the random vibration stress response of a structure. For some simple structures, the stress response shows an explicit characterization of two peaks. One peak of the bimodal spectrum is governed by the first-order natural frequency of the structure; the other is dominated by the main frequency of the applied loads. Therefore, bimodal methods for fatigue analysis can be adopted [8–10]. Moreover, several experiments (e.g., vibration tests on mechanical components) and numerical studies (e.g., virtual simulation of dynamics using FEM) have shown that the stress PSD is a typical bimodal spectrum [5, 7, 11].
However, for some complex flexible structures, the PSD of the stress response of structures usually is a multimodal and wide-band spectrum. For this situation, existing general wide-band spectral methods such as Dirlik method [12], Benasciutti-Tovo method [10], and Park method [13, 14] can be used to evaluate the vibration-fatigue-life. Recently, Braccesi et al. [15, 16] proposed a bands method to estimate the fatigue damage of a wide-band random process in the frequency domain. In order to speed up the frequency domain method, Braccesi et al. [17, 18] developed a modal approach for fatigue damage evaluation of flexible components by FEM.For fatigue evaluation in bimodal processes, some specific formulae have been proposed. Jiao and Moan [8] provided a bandwidth correction factor from a probabilistic point of view, and the factor is an approximate solution derived by the original model. The approximation inevitably leads to some errors in certain cases. Based on an similar idea, Fu and Cebon [9] developed a formula for predicting the fatigue life in bimodal random processes. In the formula, there is a convolution integral. The author claimed that there is no analytical solution for the convolution integral which has to be calculated by numerical integration. Benasciutti and Tovo [10] compared the above two methods and established a Modified Fu-Cebon method. The new formula improves the damage estimation, but it still needs to calculate numerical integration.In engineering application, the designers prefer an analytical solution rather than numerical methods. Therefore, the purpose of this paper is to develop an analytical solution to predict the vibration-fatigue-life in bimodal spectra. ## 2. Theory of Fatigue Analysis ### 2.1. Fatigue Analysis The basicS-N curve for fatigue analysis can be given as(1)N=K·S-m,where S represents the stress amplitude; N is the number of cycles to fatigue failure, and K and m are the fatigue strength coefficient and fatigue strength exponent, respectively.The total fatigue damage can then be calculated as a linear accumulation rule after Miner [19](2)D=∑niNi=1K∑niSim,where ni is the number of cycles in the stress amplitude Si, resulting from rainflow counting (RFC) [20], and Ni is the number of cycles corresponding to fatigue failure at the same stress amplitude.When the stress amplitude is a continuum function and its probability density function (PDF) isfS(S), the fatigue damage in the duration T can be denoted as follows:(3)D=νc·TK∫0∞Sm·fSSdS,where νc is the frequency of rainflow cycles.For an ideal narrowband process,fS(S) can be approximated by the Rayleigh distribution [21]; the analytical expression is given as (4)fSS=Sλ0exp⁡-S22λ0.Furthermore, the frequency of rainflow cycles νc can be replaced by rate of mean zero upcrossing ν0.According to (3), an analytical solution of fatigue damage [22] for an ideal narrowband process can be written as(5)DNB=ν0·TK2λ0mΓm2+1,where Γ· is the Gamma function.For general wide-band stress process, fatigue damage can be calculated by a narrowband approximation (i.e., (5)) first, and bandwidth correction is made based on the following model [23]:(6)DWB=ρ·DNB.In general, bimodal process is a wide-band process; thus, the fatigue damage in bimodal process can be calculated through (6). ### 2.2. Basic Principle of Bimodal Spectrum Process Assume that a bimodal stress processX(t) is composed of a low frequency process (LF) XL(t) and a high frequency process (HF) XH(t). 
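As a concrete illustration of the narrowband formulas (1)–(6) and the spectral moments of (9) before continuing with the bimodal decomposition, here is a minimal Python sketch (not part of the original paper). The two-block PSD, the S-N constants K and m, and the exposure time T are placeholder values chosen only for the example.

```python
import numpy as np
from scipy.special import gamma
from scipy.integrate import trapezoid

def spectral_moment(omega, psd, order):
    """i-th spectral moment lambda_i = integral of omega^i * S(omega), cf. eq. (9)."""
    return trapezoid(omega**order * psd, omega)

def narrowband_damage(lam0, lam2, m, K, T):
    """Narrowband damage of eq. (5): D_NB = nu0*T/K * (sqrt(2*lam0))^m * Gamma(m/2 + 1),
    with nu0 = sqrt(lam2/lam0)/(2*pi), the rate of mean zero upcrossings."""
    nu0 = np.sqrt(lam2 / lam0) / (2.0 * np.pi)
    return nu0 * T / K * np.sqrt(2.0 * lam0) ** m * gamma(m / 2.0 + 1.0)

# Illustrative two-block (bimodal) PSD; frequencies and levels are placeholders.
omega = np.linspace(0.1, 60.0, 100_000)              # rad/s
psd = np.zeros_like(omega)
psd[(omega >= 5.0) & (omega <= 5.5)] = 1.0           # low-frequency block
psd[(omega >= 30.0) & (omega <= 33.0)] = 0.2         # high-frequency block

lam0 = spectral_moment(omega, psd, 0)
lam2 = spectral_moment(omega, psd, 2)

# Combined-spectrum estimate of eq. (12): apply (5) to the summed moments of the
# bimodal PSD. K, m, and T are arbitrary S-N/exposure values for the example.
print(narrowband_damage(lam0, lam2, m=3, K=1.0e12, T=3600.0))
```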
(7)Xt=XLt+XHt,where XL(t) and XH(t) are independent and narrow Gaussian process.The one-sided spectral density function ofX(t) can be summed from the PSD of LF and HF process. (8)Sω=SLω+SHω.Theith-order spectral moments of S(ω) are defined as(9)λi=∫0∞ωi·SLω+SHωdω=λi,L+λi,H.The rate of mean zero upcrossing corresponding toXL(t) and XH(t) is(10)ν0,L=12πλ2,Lλ0,L,ν0,H=12πλ2,Hλ0,H.The rate of mean zero upcrossing ofX(t) can be expressed as(11)ν0=12πλ2,L+λ2,Hλ0,L+λ0,H=ν0,L2λ0,L+ν0,H2λ0,Hλ0,L+λ0,H.According to (5), (9), and (11), narrowband approximation of bimodal stress process X(t) can be given as(12)DNB,X=ν0,L2λ0,L+ν0,H2λ0,Hλ0,L+λ0,HTK2λ0,L+2λ0,HmΓm2+1.Equation (12) is known as the combined spectrum method in API specifications [24].The existing bimodal methods proposed by Jiao and Moan, Fu and Cebon, and Benasciutti and Tovo are based on the idea: two types of cycles can be extracted from the rainflow counting, one is the large stress cycle, and the other is the small cycle [8–10]. The fatigue damage due to X(t) can be approximated with the sum of two individual contributions.(13)D=Dl+Ds,where Dl represents the damage due to the large stress cycle and Ds denotes the damage due to the small stress cycle. ## 2.1. Fatigue Analysis The basicS-N curve for fatigue analysis can be given as(1)N=K·S-m,where S represents the stress amplitude; N is the number of cycles to fatigue failure, and K and m are the fatigue strength coefficient and fatigue strength exponent, respectively.The total fatigue damage can then be calculated as a linear accumulation rule after Miner [19](2)D=∑niNi=1K∑niSim,where ni is the number of cycles in the stress amplitude Si, resulting from rainflow counting (RFC) [20], and Ni is the number of cycles corresponding to fatigue failure at the same stress amplitude.When the stress amplitude is a continuum function and its probability density function (PDF) isfS(S), the fatigue damage in the duration T can be denoted as follows:(3)D=νc·TK∫0∞Sm·fSSdS,where νc is the frequency of rainflow cycles.For an ideal narrowband process,fS(S) can be approximated by the Rayleigh distribution [21]; the analytical expression is given as (4)fSS=Sλ0exp⁡-S22λ0.Furthermore, the frequency of rainflow cycles νc can be replaced by rate of mean zero upcrossing ν0.According to (3), an analytical solution of fatigue damage [22] for an ideal narrowband process can be written as(5)DNB=ν0·TK2λ0mΓm2+1,where Γ· is the Gamma function.For general wide-band stress process, fatigue damage can be calculated by a narrowband approximation (i.e., (5)) first, and bandwidth correction is made based on the following model [23]:(6)DWB=ρ·DNB.In general, bimodal process is a wide-band process; thus, the fatigue damage in bimodal process can be calculated through (6). ## 2.2. Basic Principle of Bimodal Spectrum Process Assume that a bimodal stress processX(t) is composed of a low frequency process (LF) XL(t) and a high frequency process (HF) XH(t). (7)Xt=XLt+XHt,where XL(t) and XH(t) are independent and narrow Gaussian process.The one-sided spectral density function ofX(t) can be summed from the PSD of LF and HF process. 
(8)Sω=SLω+SHω.Theith-order spectral moments of S(ω) are defined as(9)λi=∫0∞ωi·SLω+SHωdω=λi,L+λi,H.The rate of mean zero upcrossing corresponding toXL(t) and XH(t) is(10)ν0,L=12πλ2,Lλ0,L,ν0,H=12πλ2,Hλ0,H.The rate of mean zero upcrossing ofX(t) can be expressed as(11)ν0=12πλ2,L+λ2,Hλ0,L+λ0,H=ν0,L2λ0,L+ν0,H2λ0,Hλ0,L+λ0,H.According to (5), (9), and (11), narrowband approximation of bimodal stress process X(t) can be given as(12)DNB,X=ν0,L2λ0,L+ν0,H2λ0,Hλ0,L+λ0,HTK2λ0,L+2λ0,HmΓm2+1.Equation (12) is known as the combined spectrum method in API specifications [24].The existing bimodal methods proposed by Jiao and Moan, Fu and Cebon, and Benasciutti and Tovo are based on the idea: two types of cycles can be extracted from the rainflow counting, one is the large stress cycle, and the other is the small cycle [8–10]. The fatigue damage due to X(t) can be approximated with the sum of two individual contributions.(13)D=Dl+Ds,where Dl represents the damage due to the large stress cycle and Ds denotes the damage due to the small stress cycle. ## 3. A Review of Bimodal Methods ### 3.1. Jiao-Moan (JM) Method To simplify the study,X(t), XL(t), and XH(t) are normalized as X∗(t), XL∗(t), and XH∗(t) through the following transformation:(14)X∗t=Xtλ0=XLtλ0+XHtλ0=XL∗t+XH∗tand then(15)λ0∗=λ0,L∗+λ0,H∗=1,where(16)λ0,L∗=λ0,Lλ0,λ0,H∗=λ0,Hλ0.Jiao-Moan points out that the small stress cycles are produced by the envelope of the HF process, which follows the Rayleigh distribution. The fatigue damage due to the small stress cycles can be obtained according to (5).While the large stress cycles are from the envelop process,P(t) (see Figure 1), the amplitude of P(t) is equal to(17)Qt=RLt+RHt,where RL(t) and RH(t) are the envelopes of XL∗(t) and XH∗(t), respectively.Figure 1 Bimodal processX∗(t), the envelope process P(t), and the amplitude process Q(t).The distribution ofQ(t) can be written as a form of a convolution integral(18)fQq=∫0qfRLq-xfRHxdx=∫0qfRLyfRHq-ydy.RL(t) and RH(t) obey the Rayleigh distribution; therefore, (18) has an analytical solution which is given [8](19)fQq=qλ0,L∗·exp⁡-q22λ0,L∗+qλ0,H∗·exp⁡-q22λ0,H∗+exp⁡-q22·2πλ0,L∗λ0,H∗·Φqλ0,L∗λ0,H∗+Φqλ0,H∗λ0,L∗-1·q2-1.The rate of mean zero upcrossing due toP(t) can be calculated as(20)ν0,P=λ0,L∗ν0,L1+λ0,H∗λ0,L∗ν0,Hν0,LδH2,where(21)δH=1-λ1,H∗2λ0,H∗λ2,H∗.An approximation was made by Jiao and Moan for (19) as follows [8]:(22)fQq≈λ0,L∗-λ0,L∗λ0,H∗·q·exp⁡-q22λ0,L∗+2πλ0,L∗λ0,H∗·q2-1·exp⁡-q22.After the approximation, a closed-form solution of the bandwidth correction factor can be then derived [8](23)ρ=ν0,Pν0×λ0,L∗m/2+21-λ0,H∗λ0,L∗+πλ0,L∗λ0,H∗mΓm/2+1/2Γm/2+1+ν0,Hν0λ0,H∗m/2.Finally, the fatigue damage can be obtained as (6) and (12). ### 3.2. Fu-Cebon (FC) Method Similarly to JM method, Fu and Cebon also considered that the total damage is produced by a large cycle (SH+SL) and a small cycle (SL), as depicted in Figure 2. The small cycles are from the HF process, and the distribution of the amplitude PSs(S) is a Rayleigh distribution, as shown in (4). However, the number of cycles associated with the small cycles ns is different from JM method and equals ν0,H-ν0,L·T. According to (5), the damage due to the small cycles is(24)Ds=ν0,H-ν0,L·TK2λ0,HmΓ1+m2.Figure 2 The large cycles and small cycles for a random stress process.The amplitude of the large cyclesSl can be approximated as the sum of amplitude of the LF and HF processes, the distribution of which can be expressed by a convolution of two Rayleigh distributions [9]. 
(25)PSlS=∫0SPSLyPSHS-ydy=∫0SPSLS-yPSHydy=1λ0,Lλ0,He-S2/2λ0,H∫0SSy-y2e-Uy2+VSydy,where U=1/2λ0,L+1/2λ0,H and V=1/λ0,H.The number of cycles of the large cycles isnl=ν0,L·T. Thus, the fatigue damage due to the large stress cycles can be expressed by(26)Dl=ν0,L·TK∫0∞SmPSlSdS.Equation (26) can be calculated with numerical integration [9, 10]. Therefore, the total damage can be obtained according to (13). ### 3.3. Modify Fu-Cebon (MFC) Method Benasciutti and Tovo made a comparison between JM method and FC method and concluded that using the envelop process is more suitable [10]. Thus, a hybrid technique is adopted to modify the FC method. More specifically, the large cycles and small cycles are produced according to the idea of FC method. The number of cycles associated with the large cycles is defined similarly to JM method. That is, nl=ν0,P·T, while the number of cycles corresponding to the small cycles is ns=ν0,H-ν0,P·T. The total damage for MFC method can be then written according to (13).Although the accuracy of the MFC method is improved, the fatigue damage still has to be calculated with numerical integral. ### 3.4. Comparison of Three Bimodal Methods Detailed comparison of the aforementioned three bimodal methods can be found in Table1. In all methods, the amplitude of the small cycle obeys Rayleigh distribution, and the corresponding fatigue damage has an analytical expression as in (5); the distribution of amplitude of the large cycle is convolution integration of two Rayleigh distributions, and the relevant fatigue damage can be calculated by (26).Table 1 Comparison of large cycles and small cycles for different spectral methods. Method Large cycles Small cycles n l PDF of amplitude n s PDF of amplitude JM ν 0 , P ⋅ T Eq. (22) ν 0 , H ⋅ T Rayleigh FC ν 0 , L ⋅ T Eq. (25) ν 0 , H - ν 0 , L ⋅ T Rayleigh MFC ν 0 , P ⋅ T Eq. (25) ν 0 , H - ν 0 , P ⋅ T RayleighBecause of complexity of the convolution integration, several researches assert that (26) has no analytical solution [9, 10]. To solve this problem, Jiao and Moan used an approximate model (i.e., (22)) to obtain a closed-form solution [8]. However, the approximate model may lead to errors in some cases as in Figure 3 which illustrates the divergence of (19) and (22) for different values of λ0,L∗ and λ0,H∗. It is found that (22) becomes closer to (19) with the increase of λ0,L∗.Figure 3 The divergence of (19) and (22) for JM method in case of (a) λ0,L∗=0.01 and (b) λ0,L∗=0.99. (a) (b)For FC and MFC methods, (26) was calculated by numerical technique. Although the numerical technique can give a fatigue damage prediction, it is complex and not convenient when applied in real engineering. In addition, the solutions in some cases are not reasonable. In Section 4, an analytical solution of (26) will be derived to evaluate the fatigue damage, and the derivation of the analytical solution focuses on the fatigue damage of the large cycles. ## 3.1. Jiao-Moan (JM) Method To simplify the study,X(t), XL(t), and XH(t) are normalized as X∗(t), XL∗(t), and XH∗(t) through the following transformation:(14)X∗t=Xtλ0=XLtλ0+XHtλ0=XL∗t+XH∗tand then(15)λ0∗=λ0,L∗+λ0,H∗=1,where(16)λ0,L∗=λ0,Lλ0,λ0,H∗=λ0,Hλ0.Jiao-Moan points out that the small stress cycles are produced by the envelope of the HF process, which follows the Rayleigh distribution. 
The fatigue damage due to the small stress cycles can be obtained according to (5).While the large stress cycles are from the envelop process,P(t) (see Figure 1), the amplitude of P(t) is equal to(17)Qt=RLt+RHt,where RL(t) and RH(t) are the envelopes of XL∗(t) and XH∗(t), respectively.Figure 1 Bimodal processX∗(t), the envelope process P(t), and the amplitude process Q(t).The distribution ofQ(t) can be written as a form of a convolution integral(18)fQq=∫0qfRLq-xfRHxdx=∫0qfRLyfRHq-ydy.RL(t) and RH(t) obey the Rayleigh distribution; therefore, (18) has an analytical solution which is given [8](19)fQq=qλ0,L∗·exp⁡-q22λ0,L∗+qλ0,H∗·exp⁡-q22λ0,H∗+exp⁡-q22·2πλ0,L∗λ0,H∗·Φqλ0,L∗λ0,H∗+Φqλ0,H∗λ0,L∗-1·q2-1.The rate of mean zero upcrossing due toP(t) can be calculated as(20)ν0,P=λ0,L∗ν0,L1+λ0,H∗λ0,L∗ν0,Hν0,LδH2,where(21)δH=1-λ1,H∗2λ0,H∗λ2,H∗.An approximation was made by Jiao and Moan for (19) as follows [8]:(22)fQq≈λ0,L∗-λ0,L∗λ0,H∗·q·exp⁡-q22λ0,L∗+2πλ0,L∗λ0,H∗·q2-1·exp⁡-q22.After the approximation, a closed-form solution of the bandwidth correction factor can be then derived [8](23)ρ=ν0,Pν0×λ0,L∗m/2+21-λ0,H∗λ0,L∗+πλ0,L∗λ0,H∗mΓm/2+1/2Γm/2+1+ν0,Hν0λ0,H∗m/2.Finally, the fatigue damage can be obtained as (6) and (12). ## 3.2. Fu-Cebon (FC) Method Similarly to JM method, Fu and Cebon also considered that the total damage is produced by a large cycle (SH+SL) and a small cycle (SL), as depicted in Figure 2. The small cycles are from the HF process, and the distribution of the amplitude PSs(S) is a Rayleigh distribution, as shown in (4). However, the number of cycles associated with the small cycles ns is different from JM method and equals ν0,H-ν0,L·T. According to (5), the damage due to the small cycles is(24)Ds=ν0,H-ν0,L·TK2λ0,HmΓ1+m2.Figure 2 The large cycles and small cycles for a random stress process.The amplitude of the large cyclesSl can be approximated as the sum of amplitude of the LF and HF processes, the distribution of which can be expressed by a convolution of two Rayleigh distributions [9]. (25)PSlS=∫0SPSLyPSHS-ydy=∫0SPSLS-yPSHydy=1λ0,Lλ0,He-S2/2λ0,H∫0SSy-y2e-Uy2+VSydy,where U=1/2λ0,L+1/2λ0,H and V=1/λ0,H.The number of cycles of the large cycles isnl=ν0,L·T. Thus, the fatigue damage due to the large stress cycles can be expressed by(26)Dl=ν0,L·TK∫0∞SmPSlSdS.Equation (26) can be calculated with numerical integration [9, 10]. Therefore, the total damage can be obtained according to (13). ## 3.3. Modify Fu-Cebon (MFC) Method Benasciutti and Tovo made a comparison between JM method and FC method and concluded that using the envelop process is more suitable [10]. Thus, a hybrid technique is adopted to modify the FC method. More specifically, the large cycles and small cycles are produced according to the idea of FC method. The number of cycles associated with the large cycles is defined similarly to JM method. That is, nl=ν0,P·T, while the number of cycles corresponding to the small cycles is ns=ν0,H-ν0,P·T. The total damage for MFC method can be then written according to (13).Although the accuracy of the MFC method is improved, the fatigue damage still has to be calculated with numerical integral. ## 3.4. Comparison of Three Bimodal Methods Detailed comparison of the aforementioned three bimodal methods can be found in Table1. 
In all methods, the amplitude of the small cycle obeys Rayleigh distribution, and the corresponding fatigue damage has an analytical expression as in (5); the distribution of amplitude of the large cycle is convolution integration of two Rayleigh distributions, and the relevant fatigue damage can be calculated by (26).Table 1 Comparison of large cycles and small cycles for different spectral methods. Method Large cycles Small cycles n l PDF of amplitude n s PDF of amplitude JM ν 0 , P ⋅ T Eq. (22) ν 0 , H ⋅ T Rayleigh FC ν 0 , L ⋅ T Eq. (25) ν 0 , H - ν 0 , L ⋅ T Rayleigh MFC ν 0 , P ⋅ T Eq. (25) ν 0 , H - ν 0 , P ⋅ T RayleighBecause of complexity of the convolution integration, several researches assert that (26) has no analytical solution [9, 10]. To solve this problem, Jiao and Moan used an approximate model (i.e., (22)) to obtain a closed-form solution [8]. However, the approximate model may lead to errors in some cases as in Figure 3 which illustrates the divergence of (19) and (22) for different values of λ0,L∗ and λ0,H∗. It is found that (22) becomes closer to (19) with the increase of λ0,L∗.Figure 3 The divergence of (19) and (22) for JM method in case of (a) λ0,L∗=0.01 and (b) λ0,L∗=0.99. (a) (b)For FC and MFC methods, (26) was calculated by numerical technique. Although the numerical technique can give a fatigue damage prediction, it is complex and not convenient when applied in real engineering. In addition, the solutions in some cases are not reasonable. In Section 4, an analytical solution of (26) will be derived to evaluate the fatigue damage, and the derivation of the analytical solution focuses on the fatigue damage of the large cycles. ## 4. Derivation of an Analytical Solution ### 4.1. Derivation of an Analytical Solution for (25) Equation (25) can be rewritten as (27)PSlS=∫0SySλ0,Lλ0,Hexp⁡-y22λ0,H·exp⁡-S-y22λ0,Ldy-∫0Sy2λ0,Lλ0,Hexp⁡-y22λ0,H·exp⁡-S-y22λ0,Ldy.Equation (27) will be divided into two items.( 1) The First Item. It is as follows:(28)I1=∫0SySλ0,Lλ0,Hexp⁡-y22λ0,H·exp⁡-S-y22λ0,Ldy=-Sλ0,L+λ0,Hexp⁡-S22λ0,H+Sλ0,L+λ0,H·exp⁡-S22λ0,L+S2λ0,Lλ0,L+λ0,H·2πλ0,Lλ0,Hλ0,L+λ0,Hexp⁡-S22λ0,L+λ0,HΦSλ0,Lλ0,Hλ0,L+λ0,H-1+ΦSλ0,Hλ0,Lλ0,L+λ0,H.( 2) The Second Item. It is as follows:(29)I2=∫0Sy2λ0,Lλ0,Hexp⁡-y22λ0,H·exp⁡-S-y22λ0,Ldy=-Sλ0,Lλ0,L+λ0,H2exp⁡-S22λ0,H-Sλ0,Hλ0,L+λ0,H2exp⁡-S22λ0,L-2Sλ0,Hλ0,L+λ0,H2exp⁡-S22λ0,H-exp⁡-S22λ0,L+exp⁡-S22λ0,L+λ0,H2πλ0,Lλ0,Hλ0,L+λ0,HΦSλ0,Lλ0,Hλ0,L1+λ0,H-1+ΦSλ0,Hλ0,Lλ0,L+λ0,H1λ0,L+λ0,H+S2λ0,Hλ0,Lλ0,L+λ0,H2.The analytical solution of (25) can be then obtained(30)PSlS=Sλ0,Lλ0,L+λ0,H2exp⁡-S22λ0,L+Sλ0,Hλ0,L+λ0,H2exp⁡-S22λ0,H+S2-λ0,L+λ0,Hλ0,L+λ0,H22πλ0,Lλ0,Hλ0,L+λ0,Hexp⁡-S22λ0,L+λ0,HΦSλ0,Lλ0,Hλ0,L+λ0,H-1+ΦSλ0,Hλ0,Lλ0,L+λ0,H.Note that whenλ0,L+λ0,H=1, (30) is just equal to (19) derived by Jiao and Moan [8]. Therefore, (19) is a special case of (30). ### 4.2. Derivation of an Analytical Solution for (26) Based on (30) The derivation of an analytical solution for (26) is on the basis of (30), as (31)Z=∫0∞Sm·PSlSdS=∫0∞Sm·Sλ0,Lλ0,L+λ0,H2exp⁡-S22λ0,L+Sλ0,Hλ0,L+λ0,H2exp⁡-S22λ0,HdS+∫0∞Sm·S2-λ0,L+λ0,Hλ0,L+λ0,H22πλ0,Lλ0,Hλ0,L+λ0,Hexp⁡-S22λ0,L+λ0,HΦSλ0,Lλ0,Hλ0,L+λ0,H-1+ΦSλ0,Hλ0,Lλ0,L+λ0,HdS.Equation (31) will be divided into five parts.( 1) The First Part. It is as follows:(32)Z1=∫0∞Sm·Sλ0,Lλ0,L+λ0,H2exp⁡-S22λ0,LdS=λ0,L2λ0,L+λ0,H2·2λ0,Lm·Γ1+m2.( 2) The Second Part. It is as follows:(33)Z2=∫0∞Sm·Sλ0,Hλ0,L+λ0,H2exp⁡-S22λ0,HdS=λ0,H2λ0,L+λ0,H2·2λ0,Hm·Γ1+m2.( 3) The Third Part. 
It is as follows:(34)Z3=1λ0,L+λ0,H22πλ0,Lλ0,Hλ0,L+λ0,H∫0∞Sm+2-Smλ0,L+λ0,Hexp⁡-S22λ0,L+λ0,HdS=m2πλ1λ2λ0,L+λ0,Hm-22m-1Γm+12.( 4) The Fourth Part. It is as follows:(35)Z4=∫0∞Sm·S2-λ0,L+λ0,Hλ0,L+λ0,H22πλ0,Lλ0,Hλ0,L+λ0,Hexp⁡-S22λ0,L+λ0,HΦSλ0,Lλ0,Hλ0,L+λ0,HdS.Equation (35) contains a standard Normal cumulative distribution function Φ(·). It is difficult to get an exact solution directly. Thus, a new variable is introduced, as follows:(36)t=Sλ0,L+λ0,H.With a method of variable substitution, (35) can be simplified:(37)Z4=2πλ0,Lλ0,Hλ0,L+λ0,Hm-2∫0∞tm+2exp⁡-t22Φtλ0,Lλ0,Hdt-∫0∞tmexp⁡-t22Φtλ0,Lλ0,Hdt.By defining(38)Hλ0,L,λ0,H,m=∫0∞tmexp⁡-t22Φtλ0,Lλ0,Hdt,(37) becomes(39)Z4=2πλ0,Lλ0,Hλ0,L+λ0,Hm-2Hλ0,L,λ0,H,m+2-Hλ0,L,λ0,H,mand using integration by parts, (38) reduces to(40)Hλ0,L,λ0,H,m=m-1·Hλ0,L,λ0,H,m-2+1212πλ0,Lλ0,H2λ0,Hλ0,L+λ0,HmΓm2.Equation (40) is a recurrence formula; when m is an odd number, it becomes(41)Hλ0,L,λ0,H,m=m-1!!·Hλ0,L,λ0,H,1+λ0,Lλ0,Hm-1!!22π∑k=2m+1/212k-2!!2λ0,Hλ0,L+λ0,H2k-1Γ2k-12,where ·!! is a double factorial function and H(λ0,L,λ0,H,1) has an analytical expression which can be derived conveniently(42)Hλ0,L,λ0,H,1=12+12λ0,Lλ0,L+λ0,H.When m is an even number, H(λ0,L,λ0,H,m) is(43)Hλ0,L,λ0,H,m=m-1!!·Hλ0,L,λ0,H,0+λ0,Lλ0,Hm-1!!22π∑k=1m/212k-1!!2λ0,Hλ0,L+λ0,H2kΓk,where(44)Hλ0,L,λ0,H,0=2π4+12ππ2-atan⁡λ0,Hλ0,L.Specific derivation of (44) can be seen in Appendix A.In addition, a Matlab program has been written to calculateH(λ0,L,λ0,H,m) in Appendix B.( 5) The Fifth Part. It is as follows:(45)Z5=∫0∞Sm·S2-λ0,L+λ0,Hλ0,L+λ0,H22πλ0,Lλ0,Hλ0,L+λ0,Hexp⁡-S22λ0,L+λ0,HΦSλ0,Hλ0,Lλ0,L+λ0,HdS.Similarly to the fourth part, the analytical solution of the fifth part can be derived as(46)Z5=2πλ0,Lλ0,Hλ0,L+λ0,Hm-2Hλ0,H,λ0,L,m+2-Hλ0,H,λ0,L,m.The final solution is(47)Z=Z1+Z2-Z3+Z4+Z5. ## 4.1. Derivation of an Analytical Solution for (25) Equation (25) can be rewritten as (27)PSlS=∫0SySλ0,Lλ0,Hexp⁡-y22λ0,H·exp⁡-S-y22λ0,Ldy-∫0Sy2λ0,Lλ0,Hexp⁡-y22λ0,H·exp⁡-S-y22λ0,Ldy.Equation (27) will be divided into two items.( 1) The First Item. It is as follows:(28)I1=∫0SySλ0,Lλ0,Hexp⁡-y22λ0,H·exp⁡-S-y22λ0,Ldy=-Sλ0,L+λ0,Hexp⁡-S22λ0,H+Sλ0,L+λ0,H·exp⁡-S22λ0,L+S2λ0,Lλ0,L+λ0,H·2πλ0,Lλ0,Hλ0,L+λ0,Hexp⁡-S22λ0,L+λ0,HΦSλ0,Lλ0,Hλ0,L+λ0,H-1+ΦSλ0,Hλ0,Lλ0,L+λ0,H.( 2) The Second Item. It is as follows:(29)I2=∫0Sy2λ0,Lλ0,Hexp⁡-y22λ0,H·exp⁡-S-y22λ0,Ldy=-Sλ0,Lλ0,L+λ0,H2exp⁡-S22λ0,H-Sλ0,Hλ0,L+λ0,H2exp⁡-S22λ0,L-2Sλ0,Hλ0,L+λ0,H2exp⁡-S22λ0,H-exp⁡-S22λ0,L+exp⁡-S22λ0,L+λ0,H2πλ0,Lλ0,Hλ0,L+λ0,HΦSλ0,Lλ0,Hλ0,L1+λ0,H-1+ΦSλ0,Hλ0,Lλ0,L+λ0,H1λ0,L+λ0,H+S2λ0,Hλ0,Lλ0,L+λ0,H2.The analytical solution of (25) can be then obtained(30)PSlS=Sλ0,Lλ0,L+λ0,H2exp⁡-S22λ0,L+Sλ0,Hλ0,L+λ0,H2exp⁡-S22λ0,H+S2-λ0,L+λ0,Hλ0,L+λ0,H22πλ0,Lλ0,Hλ0,L+λ0,Hexp⁡-S22λ0,L+λ0,HΦSλ0,Lλ0,Hλ0,L+λ0,H-1+ΦSλ0,Hλ0,Lλ0,L+λ0,H.Note that whenλ0,L+λ0,H=1, (30) is just equal to (19) derived by Jiao and Moan [8]. Therefore, (19) is a special case of (30). ## 4.2. Derivation of an Analytical Solution for (26) Based on (30) The derivation of an analytical solution for (26) is on the basis of (30), as (31)Z=∫0∞Sm·PSlSdS=∫0∞Sm·Sλ0,Lλ0,L+λ0,H2exp⁡-S22λ0,L+Sλ0,Hλ0,L+λ0,H2exp⁡-S22λ0,HdS+∫0∞Sm·S2-λ0,L+λ0,Hλ0,L+λ0,H22πλ0,Lλ0,Hλ0,L+λ0,Hexp⁡-S22λ0,L+λ0,HΦSλ0,Lλ0,Hλ0,L+λ0,H-1+ΦSλ0,Hλ0,Lλ0,L+λ0,HdS.Equation (31) will be divided into five parts.( 1) The First Part. It is as follows:(32)Z1=∫0∞Sm·Sλ0,Lλ0,L+λ0,H2exp⁡-S22λ0,LdS=λ0,L2λ0,L+λ0,H2·2λ0,Lm·Γ1+m2.( 2) The Second Part. It is as follows:(33)Z2=∫0∞Sm·Sλ0,Hλ0,L+λ0,H2exp⁡-S22λ0,HdS=λ0,H2λ0,L+λ0,H2·2λ0,Hm·Γ1+m2.( 3) The Third Part. 
It is as follows:(34)Z3=1λ0,L+λ0,H22πλ0,Lλ0,Hλ0,L+λ0,H∫0∞Sm+2-Smλ0,L+λ0,Hexp⁡-S22λ0,L+λ0,HdS=m2πλ1λ2λ0,L+λ0,Hm-22m-1Γm+12.( 4) The Fourth Part. It is as follows:(35)Z4=∫0∞Sm·S2-λ0,L+λ0,Hλ0,L+λ0,H22πλ0,Lλ0,Hλ0,L+λ0,Hexp⁡-S22λ0,L+λ0,HΦSλ0,Lλ0,Hλ0,L+λ0,HdS.Equation (35) contains a standard Normal cumulative distribution function Φ(·). It is difficult to get an exact solution directly. Thus, a new variable is introduced, as follows:(36)t=Sλ0,L+λ0,H.With a method of variable substitution, (35) can be simplified:(37)Z4=2πλ0,Lλ0,Hλ0,L+λ0,Hm-2∫0∞tm+2exp⁡-t22Φtλ0,Lλ0,Hdt-∫0∞tmexp⁡-t22Φtλ0,Lλ0,Hdt.By defining(38)Hλ0,L,λ0,H,m=∫0∞tmexp⁡-t22Φtλ0,Lλ0,Hdt,(37) becomes(39)Z4=2πλ0,Lλ0,Hλ0,L+λ0,Hm-2Hλ0,L,λ0,H,m+2-Hλ0,L,λ0,H,mand using integration by parts, (38) reduces to(40)Hλ0,L,λ0,H,m=m-1·Hλ0,L,λ0,H,m-2+1212πλ0,Lλ0,H2λ0,Hλ0,L+λ0,HmΓm2.Equation (40) is a recurrence formula; when m is an odd number, it becomes(41)Hλ0,L,λ0,H,m=m-1!!·Hλ0,L,λ0,H,1+λ0,Lλ0,Hm-1!!22π∑k=2m+1/212k-2!!2λ0,Hλ0,L+λ0,H2k-1Γ2k-12,where ·!! is a double factorial function and H(λ0,L,λ0,H,1) has an analytical expression which can be derived conveniently(42)Hλ0,L,λ0,H,1=12+12λ0,Lλ0,L+λ0,H.When m is an even number, H(λ0,L,λ0,H,m) is(43)Hλ0,L,λ0,H,m=m-1!!·Hλ0,L,λ0,H,0+λ0,Lλ0,Hm-1!!22π∑k=1m/212k-1!!2λ0,Hλ0,L+λ0,H2kΓk,where(44)Hλ0,L,λ0,H,0=2π4+12ππ2-atan⁡λ0,Hλ0,L.Specific derivation of (44) can be seen in Appendix A.In addition, a Matlab program has been written to calculateH(λ0,L,λ0,H,m) in Appendix B.( 5) The Fifth Part. It is as follows:(45)Z5=∫0∞Sm·S2-λ0,L+λ0,Hλ0,L+λ0,H22πλ0,Lλ0,Hλ0,L+λ0,Hexp⁡-S22λ0,L+λ0,HΦSλ0,Hλ0,Lλ0,L+λ0,HdS.Similarly to the fourth part, the analytical solution of the fifth part can be derived as(46)Z5=2πλ0,Lλ0,Hλ0,L+λ0,Hm-2Hλ0,H,λ0,L,m+2-Hλ0,H,λ0,L,m.The final solution is(47)Z=Z1+Z2-Z3+Z4+Z5. ## 5. Numerical Validation In this part, the accuracy of FC method and JM method and the derived analytical solution will be validated with numerical integration. Transformation of (26) will be carried out first. ### 5.1. Treatment of Double Integral Based on FC Method As pointed out by Fu and Cebon and Benasciutti and Tovo [9, 10], FC’s numerical integration can be calculated as the following processes.Equation (26) contains a double integral, in which variables S and y are in the range of 0,∞ and 0,S, respectively. Apparently, the latter is not compatible with the integration range of Gauss-Legendre quadrature formula. Therefore, by using a integration transformation(48)y=S21+t,the integral part of (26) can be simplified to(49)J=∫0∞SmPSlSdS=1λ0,Lλ0,H∫0∞∫-11SmS231-t2exp⁡-S/221+t22λ0,H·exp⁡-S/221-t22λ0,LdtdS.Equation (49) can be thus calculated with Gauss-Legendre and Gauss-Laguerre quadrature formula. ### 5.2. Treatment of Numerical Integral for (31) Direct calculation of (31) may lead to some mathematical accumulative errors. To obtain a precise integral solution, (31) has to be handled with a variable substitution, and the result is(50)Z′=λ0,Lm+4/2+λ0,Hm+4/2λ0,L+λ0,H2·2m·Γ1+m2+2πλ0,Lλ0,Hλ0,L+λ0,Hm-2∫0∞tm+2-tmexp⁡-t22Φtλ0,Lλ0,H+Φtλ0,Hλ0,L-1dt.Note that Z1 and Z2 for (31) have analytical solution; therefore, only Z3, Z4, and Z5 are dealt with in (50).The solution of (50) can be obtained through Gauss-Laguerre quadrature formula.The accuracy of the numerical integral in (49) and (50) depends on the order of nodes and weights which can be obtained from handbook of mathematics [25]. The accuracy increases with the increasing orders. 
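The closed-form result of Section 4 is straightforward to implement. The following Python sketch (written for this text; the paper supplies an equivalent Matlab routine in Appendix B, which is not reproduced here) evaluates H(λ0,L, λ0,H, m) by the recurrence (40) with base cases (42) and (44), assembles Z = Z1 + Z2 − Z3 + Z4 + Z5 of (47), and cross-checks the result against the single-integral form (50) using adaptive quadrature instead of the Gauss-Laguerre scheme described above. As written it assumes m is a nonnegative integer; for λ0,L = 5, λ0,H = 4, and m = 3 it should reproduce the value 2.146E+02 reported in Table 2.

```python
import numpy as np
from scipy.special import gamma, ndtr          # ndtr(x) is the standard normal CDF Phi(x)
from scipy.integrate import quad

def H(lam_l, lam_h, m):
    """H(lam_L, lam_H, m) = int_0^inf t^m exp(-t^2/2) Phi(t*sqrt(lam_L/lam_H)) dt,
    via the recurrence (40); base cases (42) and (44), the latter written in the
    equivalent arctan(sqrt(lam_L/lam_H)) form. m must be a nonnegative integer."""
    a = np.sqrt(lam_l / lam_h)
    if m == 0:
        return np.sqrt(2.0 * np.pi) / 4.0 + np.arctan(a) / np.sqrt(2.0 * np.pi)
    if m == 1:
        return 0.5 + 0.5 * np.sqrt(lam_l / (lam_l + lam_h))
    tail = 0.5 * a / np.sqrt(2.0 * np.pi) * (2.0 * lam_h / (lam_l + lam_h)) ** (m / 2.0) * gamma(m / 2.0)
    return (m - 1) * H(lam_l, lam_h, m - 2) + tail

def Z_analytical(lam_l, lam_h, m):
    """Closed-form Z = Z1 + Z2 - Z3 + Z4 + Z5 of eq. (47), i.e. int_0^inf S^m P_Sl(S) dS."""
    c = lam_l + lam_h
    z1 = lam_l**2 / c**2 * np.sqrt(2.0 * lam_l) ** m * gamma(1.0 + m / 2.0)
    z2 = lam_h**2 / c**2 * np.sqrt(2.0 * lam_h) ** m * gamma(1.0 + m / 2.0)
    pref = np.sqrt(2.0 * np.pi * lam_l * lam_h) * np.sqrt(c) ** (m - 2)
    z3 = m * pref * np.sqrt(2.0) ** (m - 1) * gamma((m + 1) / 2.0)
    z4 = pref * (H(lam_l, lam_h, m + 2) - H(lam_l, lam_h, m))
    z5 = pref * (H(lam_h, lam_l, m + 2) - H(lam_h, lam_l, m))
    return z1 + z2 - z3 + z4 + z5

def Z_quadrature(lam_l, lam_h, m):
    """Cross-check: the transformed single integral of eq. (50), by adaptive quadrature."""
    c = lam_l + lam_h
    z12 = (lam_l ** ((m + 4) / 2.0) + lam_h ** ((m + 4) / 2.0)) / c**2 * 2.0 ** (m / 2.0) * gamma(1.0 + m / 2.0)
    pref = np.sqrt(2.0 * np.pi * lam_l * lam_h) * np.sqrt(c) ** (m - 2)
    f = lambda t: (t ** (m + 2) - t**m) * np.exp(-t**2 / 2.0) * (
        ndtr(t * np.sqrt(lam_l / lam_h)) + ndtr(t * np.sqrt(lam_h / lam_l)) - 1.0)
    tail, _ = quad(f, 0.0, np.inf)
    return z12 + pref * tail

print(Z_analytical(5.0, 4.0, 3), Z_quadrature(5.0, 4.0, 3))   # both close to 2.146e+02
```

For noninteger S-N exponents m the recurrence does not terminate at the integer base cases, so the quadrature form (or the Gauss-Laguerre scheme of this section) is the practical fallback in that case.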
However, from the engineering point of view, too many orders of nodes and weights will lead to difficulty in calculation. The integral results of (50) are convergent when the orders of nodes and weights are equal to 30. Therefore, the present study takes the order of 30. ### 5.3. Discussion of Results It is very convenient to use JM method in the case ofλ0,L+λ0,H=1, while FC method and MFC method can be used for any case regardless of λ0,L and λ0,H. Therefore, the analytical results are divided into two: λ0,L+λ0,H=1 and λ0,L+λ0,H≠1.The solutions calculated by different mathematical methods form=3 and m=4 in the case of λ0,L+λ0,H=1 are plotted in Figures 4 and 5. It turns out that the proposed analytical solution gives the same results with the exact numerical integral solution for any value of λ0,L and λ0,H. FC’s numerical integral solution matches the exact numerical integral solution very well in most cases. However, for relatively low value of λ0,L or when λ0,L tends to 1, it may not be right. JM’s approximate solution is close to the exact numerical integral solution only in a small range.Figure 4 Comparison of different solutions form=3.Figure 5 Comparison of different solutions form=4.FC’s numerical integral solutions, the exact numerical integral solutions, and the analytical solutions form=3 and m=4 in the case of λ0,L+λ0,H≠1 are shown in Tables 2 and 3. The results indicate that the latter two solutions are always approximately equal for any value of λ0,L and λ0,H, while FC’s numerical integral solutions show good agreement with the exact integral solutions only in a few cases. As like the case of λ0,L+λ0,H=1, for relatively high or low values of λ0,L and λ0,H (i.e., λ0,L=100, λ0,H=1×10-5), FC’s integral solutions may give incorrect results. This phenomenon is in accord with the analysis of Benasciutti and Tovo (e.g., FC’s numerical integration may be impossible for too low values of λ0,L and λ0,H) [10].Table 2 Comparison of different solutions form=3 in the case of λ0,L+λ0,H≠1. λ 0 , L λ 0 , H FC’s numerical integral solution for (49) The exact numerical integral solution for (50) The analytical solution for (47) 1.000 E - 05 9.000 E - 05 3.657 E - 12 6.183 E - 06 6.183 E - 06 3.000 E - 03 7.000 E - 03 3.292 E - 03 7.591 E - 03 7.591 E - 03 5.000 E - 02 5.000 E - 02 2.591 E - 01 2.522 E - 01 2.522 E - 01 2.000 E - 01 6.000 E - 01 5.262 E + 00 5.267 E + 00 5.267 E + 00 5.000 E + 00 4.000 E + 00 2.146 E + 02 2.146 E + 02 2.146 E + 02 8.000 E + 01 2.000 E + 01 7.062 E + 03 7.062 E + 03 7.062 E + 03 4.000 E + 02 6.000 E + 02 1.144 E + 05 2.493 E + 05 2.493 E + 05 5.000 E + 03 3.000 E + 03 1.238 E + 04 5.602 E + 06 5.602 E + 06 6.000 E + 07 5.000 E + 07 8.468 E - 05 9.180 E + 12 9.180 E + 12 7.000 E + 09 4.000 E + 09 9.073 E - 09 8.999 E + 15 8.999 E + 15 1.000 E + 02 1.000 E - 05 3.084 E - 01 3.762 E + 03 3.762 E + 03Table 3 Comparison of different solutions form=4 in the case of λ0,L+λ0,H≠1. 
λ 0 , L λ 0 , H FC’s numerical integral solution for (49) The exact numerical integral solution for (50) The analytical solution for (47) 1.000 E - 05 9.000 E - 05 2.580 E - 13 1.437 E - 07 1.437 E - 07 3.000 E - 03 7.000 E - 03 1.187 E - 03 1.832 E - 03 1.832 E - 03 5.000 E - 02 5.000 E - 02 2.198 E - 01 1.942 E - 01 1.942 E - 01 2.000 E - 01 6.000 E - 01 1.131 E + 01 1.130 E + 01 1.130 E + 01 5.000 E + 00 4.000 E + 00 1.567 E + 03 1.567 E + 03 1.567 E + 03 8.000 E + 01 2.000 E + 01 1.682 E + 05 1.682 E + 05 1.682 E + 05 4.000 E + 02 6.000 E + 02 6.632 E + 06 1.915 E + 07 1.915 E + 07 5.000 E + 03 3.000 E + 03 7.706 E + 05 1.216 E + 09 1.216 E + 09 6.000 E + 07 5.000 E + 07 5.308 E - 03 2.344 E + 17 2.344 E + 17 7.000 E + 09 4.000 E + 09 5.688 E - 07 2.289 E + 21 2.289 E + 21 1.000 E + 02 1.000 E - 05 7.230 E - 01 8.006 E + 04 8.006 E + 04In a word, for any value ofλ0,L and λ0,H, the analytical solution derived in this paper always gives an accurate result. Furthermore, it can be solved very conveniently and quickly with the aid of a personal computer through a program given in Appendix B. ## 5.1. Treatment of Double Integral Based on FC Method As pointed out by Fu and Cebon and Benasciutti and Tovo [9, 10], FC’s numerical integration can be calculated as the following processes.Equation (26) contains a double integral, in which variables S and y are in the range of 0,∞ and 0,S, respectively. Apparently, the latter is not compatible with the integration range of Gauss-Legendre quadrature formula. Therefore, by using a integration transformation(48)y=S21+t,the integral part of (26) can be simplified to(49)J=∫0∞SmPSlSdS=1λ0,Lλ0,H∫0∞∫-11SmS231-t2exp⁡-S/221+t22λ0,H·exp⁡-S/221-t22λ0,LdtdS.Equation (49) can be thus calculated with Gauss-Legendre and Gauss-Laguerre quadrature formula. ## 5.2. Treatment of Numerical Integral for (31) Direct calculation of (31) may lead to some mathematical accumulative errors. To obtain a precise integral solution, (31) has to be handled with a variable substitution, and the result is(50)Z′=λ0,Lm+4/2+λ0,Hm+4/2λ0,L+λ0,H2·2m·Γ1+m2+2πλ0,Lλ0,Hλ0,L+λ0,Hm-2∫0∞tm+2-tmexp⁡-t22Φtλ0,Lλ0,H+Φtλ0,Hλ0,L-1dt.Note that Z1 and Z2 for (31) have analytical solution; therefore, only Z3, Z4, and Z5 are dealt with in (50).The solution of (50) can be obtained through Gauss-Laguerre quadrature formula.The accuracy of the numerical integral in (49) and (50) depends on the order of nodes and weights which can be obtained from handbook of mathematics [25]. The accuracy increases with the increasing orders. However, from the engineering point of view, too many orders of nodes and weights will lead to difficulty in calculation. The integral results of (50) are convergent when the orders of nodes and weights are equal to 30. Therefore, the present study takes the order of 30. ## 5.3. Discussion of Results It is very convenient to use JM method in the case ofλ0,L+λ0,H=1, while FC method and MFC method can be used for any case regardless of λ0,L and λ0,H. Therefore, the analytical results are divided into two: λ0,L+λ0,H=1 and λ0,L+λ0,H≠1.The solutions calculated by different mathematical methods form=3 and m=4 in the case of λ0,L+λ0,H=1 are plotted in Figures 4 and 5. It turns out that the proposed analytical solution gives the same results with the exact numerical integral solution for any value of λ0,L and λ0,H. FC’s numerical integral solution matches the exact numerical integral solution very well in most cases. 
However, for relatively low value of λ0,L or when λ0,L tends to 1, it may not be right. JM’s approximate solution is close to the exact numerical integral solution only in a small range.Figure 4 Comparison of different solutions form=3.Figure 5 Comparison of different solutions form=4.FC’s numerical integral solutions, the exact numerical integral solutions, and the analytical solutions form=3 and m=4 in the case of λ0,L+λ0,H≠1 are shown in Tables 2 and 3. The results indicate that the latter two solutions are always approximately equal for any value of λ0,L and λ0,H, while FC’s numerical integral solutions show good agreement with the exact integral solutions only in a few cases. As like the case of λ0,L+λ0,H=1, for relatively high or low values of λ0,L and λ0,H (i.e., λ0,L=100, λ0,H=1×10-5), FC’s integral solutions may give incorrect results. This phenomenon is in accord with the analysis of Benasciutti and Tovo (e.g., FC’s numerical integration may be impossible for too low values of λ0,L and λ0,H) [10].Table 2 Comparison of different solutions form=3 in the case of λ0,L+λ0,H≠1. λ 0 , L λ 0 , H FC’s numerical integral solution for (49) The exact numerical integral solution for (50) The analytical solution for (47) 1.000 E - 05 9.000 E - 05 3.657 E - 12 6.183 E - 06 6.183 E - 06 3.000 E - 03 7.000 E - 03 3.292 E - 03 7.591 E - 03 7.591 E - 03 5.000 E - 02 5.000 E - 02 2.591 E - 01 2.522 E - 01 2.522 E - 01 2.000 E - 01 6.000 E - 01 5.262 E + 00 5.267 E + 00 5.267 E + 00 5.000 E + 00 4.000 E + 00 2.146 E + 02 2.146 E + 02 2.146 E + 02 8.000 E + 01 2.000 E + 01 7.062 E + 03 7.062 E + 03 7.062 E + 03 4.000 E + 02 6.000 E + 02 1.144 E + 05 2.493 E + 05 2.493 E + 05 5.000 E + 03 3.000 E + 03 1.238 E + 04 5.602 E + 06 5.602 E + 06 6.000 E + 07 5.000 E + 07 8.468 E - 05 9.180 E + 12 9.180 E + 12 7.000 E + 09 4.000 E + 09 9.073 E - 09 8.999 E + 15 8.999 E + 15 1.000 E + 02 1.000 E - 05 3.084 E - 01 3.762 E + 03 3.762 E + 03Table 3 Comparison of different solutions form=4 in the case of λ0,L+λ0,H≠1. λ 0 , L λ 0 , H FC’s numerical integral solution for (49) The exact numerical integral solution for (50) The analytical solution for (47) 1.000 E - 05 9.000 E - 05 2.580 E - 13 1.437 E - 07 1.437 E - 07 3.000 E - 03 7.000 E - 03 1.187 E - 03 1.832 E - 03 1.832 E - 03 5.000 E - 02 5.000 E - 02 2.198 E - 01 1.942 E - 01 1.942 E - 01 2.000 E - 01 6.000 E - 01 1.131 E + 01 1.130 E + 01 1.130 E + 01 5.000 E + 00 4.000 E + 00 1.567 E + 03 1.567 E + 03 1.567 E + 03 8.000 E + 01 2.000 E + 01 1.682 E + 05 1.682 E + 05 1.682 E + 05 4.000 E + 02 6.000 E + 02 6.632 E + 06 1.915 E + 07 1.915 E + 07 5.000 E + 03 3.000 E + 03 7.706 E + 05 1.216 E + 09 1.216 E + 09 6.000 E + 07 5.000 E + 07 5.308 E - 03 2.344 E + 17 2.344 E + 17 7.000 E + 09 4.000 E + 09 5.688 E - 07 2.289 E + 21 2.289 E + 21 1.000 E + 02 1.000 E - 05 7.230 E - 01 8.006 E + 04 8.006 E + 04In a word, for any value ofλ0,L and λ0,H, the analytical solution derived in this paper always gives an accurate result. Furthermore, it can be solved very conveniently and quickly with the aid of a personal computer through a program given in Appendix B. ## 6. 
Case Study In this section, the bandwidth correction factor is used to compare different bimodal spectral methods.A general analytical solution (GAS) of fatigue damage for JM, FC, and MFC method can be written as(51)DGAS=nlK·Z+nsK2λ0,HmΓ1+m2.Z can be obtained according to (47), and nl and ns can be chosen as defined in Table 1, which represent different bimodal spectra methods, that is, JM, FC, and MFC method.According to (6), the analytical solution of the bandwidth correction factor is(52)ρGAS=DGASDNB.Likewise, a general integration solution (GIS) as given in FC and MFC method can be written as(53)DGIS=nlK·J+nsK2λ0,HmΓ1+m2.J can be obtained from (49).The integration solution of the damage correction factor is(54)ρGIS=DGISDNB. ### 6.1. Ideal Bimodal Spectra Different spectral shapes have been investigated in the literatures [5, 26]. Bimodal PSDs with two rectangular blocks are used to carry out numerical simulations, as shown in Figure 6. Two blocks are characterized by the amplitude levels SωL and SωH, as well as the frequency ranges ωb-ωa and ωd-ωc.Figure 6 Ideal bimodal spectra.ω L and ωH are the central frequencies, as defined in(55)ωL=ωa+ωb2,ωH=ωc+ωd2.A 1 and A2 represent the areas of block 1 and block 2 and are equal to the zero-order moment, respectively. For convenience, the sum of the two spectral moments is normalized to unity; that is, A1+A2=1.To ensure that these two spectra are approximately narrow band processes,ωb/ωa=ωd/ωc=1.1.Herein, two new parameters are introduced(56)B=λ0,Hλ0,L,R=ωHωL.The parameter values in the numerical simulations are conducted as follows:B=0.1, 0.4, 1, 2, and 9; R=6 and 10 which ensure two spectra are well-separated; m=3, 4, and 5 which are widely used in welded steel structures. ωa does not affect the simulated bandwidth correction factor ρ [10, 27]. Thus, ωa can be taken as the arbitrary value in the theory as long as ωa>0; herein, ωa=5rad/s.In the process of time domain simulation, the time series generated by IDFT [28] contains 20 million sample points and 150–450 thousand rainflow cycles, which is a sufficiently long time history, so that the sampling errors can be neglected.Figure7 is the result of JM method. The bandwidth correction factor calculated by (23) is in good agreement with the results obtained from (52). For m=3, JM method can provide a reasonable damage prediction. However, for m=4 and m=5, this method tends to underestimate the fatigue damage.Figure 7 The bandwidth correction factor of JM method for (a)R=6 and (b) R=10. (a) (b)Figures8 and 9 display the results of FC method and MFC method, which are both calculated by the proposed analytical solution (see (52)) and FC’s numerical integration solution (see (54)). The bandwidth correction factor calculated by (52) is very close to the results obtained from (54). Besides, the FC method always provides conservative results compared with RFC. The MFC method improves the accuracy of the original FC method to some extent. However, it may underestimate the fatigue damage in some cases.Figure 8 The bandwidth correction factor of FC method for (a)R=6 and (b) R=10. (a) (b)Figure 9 The bandwidth correction factor of MFC method for (a)R=6 and (b) R=10. (a) (b)As has been discussed above, the bandwidth correction factor calculated by the analytical solution has divergence with that computed from RFC. 
The disagreement is not because of the error from the analytical solution but arises from the fact that the physical models of the original bimodal methods and the rainflow counting method are different. ### 6.2. Real Bimodal Spectra In practice, the rectangular bimodal spectra in previous simulations cannot represent real spectra encountered in the structures. Therefore, more realistic bimodal stress spectra will be chosen to predict the vibration-fatigue-life by using the analytical solution proposed in this paper. For offshore structures subjected to random wave loading, the PSDs of fatigue stress of joints always exhibit two predominant peaks of frequency. Wirsching gave a general expression to characterize the PSD, and the model has been widely used in a few surveys [10, 23, 29]. The analytical form is(57)Sω=AHsexp⁡-1050/TD4ω4TD4ω41-ω/ωn22+2ζω/ωn2,where A is a scale factor, Hs is the significant wave height, TD is the dominant wave period, ωn is the natural angular frequency of structure, and ζ is the damping coefficient.A and Hs do not affect the value of the bandwidth correction factor ρ [10, 27]. Thus, they are chosen to be equal to unity for simplicity. The value of other parameters can be seen in Table 4.Table 4 Parameters for real bimodal spectra. A H s (m) T D (s) ω N (rad/s) ζ Group 1 1 1 15 2.2 1.5% Group 2 1 1 5 10 1%Real stress spectra corresponding to group 1 and group 2 can be seen in Figure10. S(ω) is a double peak spectrum, the first peak is produced by the peak of random wave spectrum, and the second one is excitated by the first mode response of structures.Figure 10 Real bimodal spectra corresponding to (a) group 1 and (b) group 2. (a) (b)The results of different bimodal methods are shown in Figures11(a) and 11(b). The bandwidth correction factor for JM method is calculated with (23) and (52). It should be noted that λ0,L+λ0,H≠1 for real bimodal spectra in Figures 10(a) and 10(b), so, λ0,L and λ0,H should be normalized as (14), (15), and (16), while ρ for FC method and MFC method is obtained through (52) and (54).Figure 11 The bandwidth correction factor of different bimodal methods for (a) group 1 and (b) group 2. (a) (b)Under the real stress spectra, the zero-order spectral moments corresponding to low frequency and high frequency are very small. JM method provides acceptable damage estimation compared with RFC. The results computed by (54) for FC and MFC method may be not correct in some cases. The incorrect results are mainly caused by (49). However, the proposed analytical solution (see (52)) can give a satisfactory damage prediction and always provide a conservative prediction. More importantly, (52) is more convenient to apply in predicting vibration-fatigue-life than numerical integration. ## 6.1. Ideal Bimodal Spectra Different spectral shapes have been investigated in the literatures [5, 26]. Bimodal PSDs with two rectangular blocks are used to carry out numerical simulations, as shown in Figure 6. Two blocks are characterized by the amplitude levels SωL and SωH, as well as the frequency ranges ωb-ωa and ωd-ωc.Figure 6 Ideal bimodal spectra.ω L and ωH are the central frequencies, as defined in(55)ωL=ωa+ωb2,ωH=ωc+ωd2.A 1 and A2 represent the areas of block 1 and block 2 and are equal to the zero-order moment, respectively. 
For convenience, the sum of the two spectral moments is normalized to unity; that is, A1+A2=1.To ensure that these two spectra are approximately narrow band processes,ωb/ωa=ωd/ωc=1.1.Herein, two new parameters are introduced(56)B=λ0,Hλ0,L,R=ωHωL.The parameter values in the numerical simulations are conducted as follows:B=0.1, 0.4, 1, 2, and 9; R=6 and 10 which ensure two spectra are well-separated; m=3, 4, and 5 which are widely used in welded steel structures. ωa does not affect the simulated bandwidth correction factor ρ [10, 27]. Thus, ωa can be taken as the arbitrary value in the theory as long as ωa>0; herein, ωa=5rad/s.In the process of time domain simulation, the time series generated by IDFT [28] contains 20 million sample points and 150–450 thousand rainflow cycles, which is a sufficiently long time history, so that the sampling errors can be neglected.Figure7 is the result of JM method. The bandwidth correction factor calculated by (23) is in good agreement with the results obtained from (52). For m=3, JM method can provide a reasonable damage prediction. However, for m=4 and m=5, this method tends to underestimate the fatigue damage.Figure 7 The bandwidth correction factor of JM method for (a)R=6 and (b) R=10. (a) (b)Figures8 and 9 display the results of FC method and MFC method, which are both calculated by the proposed analytical solution (see (52)) and FC’s numerical integration solution (see (54)). The bandwidth correction factor calculated by (52) is very close to the results obtained from (54). Besides, the FC method always provides conservative results compared with RFC. The MFC method improves the accuracy of the original FC method to some extent. However, it may underestimate the fatigue damage in some cases.Figure 8 The bandwidth correction factor of FC method for (a)R=6 and (b) R=10. (a) (b)Figure 9 The bandwidth correction factor of MFC method for (a)R=6 and (b) R=10. (a) (b)As has been discussed above, the bandwidth correction factor calculated by the analytical solution has divergence with that computed from RFC. The disagreement is not because of the error from the analytical solution but arises from the fact that the physical models of the original bimodal methods and the rainflow counting method are different. ## 6.2. Real Bimodal Spectra In practice, the rectangular bimodal spectra in previous simulations cannot represent real spectra encountered in the structures. Therefore, more realistic bimodal stress spectra will be chosen to predict the vibration-fatigue-life by using the analytical solution proposed in this paper. For offshore structures subjected to random wave loading, the PSDs of fatigue stress of joints always exhibit two predominant peaks of frequency. Wirsching gave a general expression to characterize the PSD, and the model has been widely used in a few surveys [10, 23, 29]. The analytical form is(57)Sω=AHsexp⁡-1050/TD4ω4TD4ω41-ω/ωn22+2ζω/ωn2,where A is a scale factor, Hs is the significant wave height, TD is the dominant wave period, ωn is the natural angular frequency of structure, and ζ is the damping coefficient.A and Hs do not affect the value of the bandwidth correction factor ρ [10, 27]. Thus, they are chosen to be equal to unity for simplicity. The value of other parameters can be seen in Table 4.Table 4 Parameters for real bimodal spectra. A H s (m) T D (s) ω N (rad/s) ζ Group 1 1 1 15 2.2 1.5% Group 2 1 1 5 10 1%Real stress spectra corresponding to group 1 and group 2 can be seen in Figure10. 
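As an illustration of how the group 1 spectrum of Table 4 can be generated and reduced to the spectral moments required by the bimodal methods, a short Python sketch follows (not from the paper). The PSD mirrors (57) as printed above, and the frequency at which the spectrum is split into its LF and HF parts is an assumption made purely for this example.

```python
import numpy as np
from scipy.integrate import trapezoid

def stress_psd(omega, A=1.0, Hs=1.0, TD=15.0, omega_n=2.2, zeta=0.015):
    """Bimodal stress PSD in the form of eq. (57), group 1 parameters of Table 4:
    a wave-excitation term filtered through a lightly damped structural mode at omega_n."""
    wave = A * Hs * np.exp(-1050.0 / (TD**4 * omega**4)) / (TD**4 * omega**4)
    transfer = 1.0 / ((1.0 - (omega / omega_n) ** 2) ** 2 + (2.0 * zeta * omega / omega_n) ** 2)
    return wave * transfer

omega = np.linspace(0.05, 8.0, 400_000)   # rad/s
psd = stress_psd(omega)

# Split at the valley between the wave peak (near 2*pi/TD, about 0.42 rad/s) and the
# structural peak at omega_n = 2.2 rad/s; the split frequency is only illustrative.
split = 1.2
low, high = omega <= split, omega > split

lam0_L = trapezoid(psd[low], omega[low])
lam2_L = trapezoid(omega[low] ** 2 * psd[low], omega[low])
lam0_H = trapezoid(psd[high], omega[high])
lam2_H = trapezoid(omega[high] ** 2 * psd[high], omega[high])

# Zero-upcrossing rates of the LF/HF parts and of the combined process, eqs. (10)-(11);
# these moments are the inputs to (12), (23), and (51)-(52).
nu0_L = np.sqrt(lam2_L / lam0_L) / (2.0 * np.pi)
nu0_H = np.sqrt(lam2_H / lam0_H) / (2.0 * np.pi)
nu0 = np.sqrt((lam2_L + lam2_H) / (lam0_L + lam0_H)) / (2.0 * np.pi)
print(lam0_L, lam0_H, nu0_L, nu0_H, nu0)
```

Feeding λ0,L and λ0,H obtained in this way into a closed-form evaluation of Z, such as the sketch given earlier, then gives D_GAS of (51) and the bandwidth correction factor ρ_GAS of (52).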
S(ω) is a double-peak spectrum: the first peak is produced by the peak of the random wave spectrum, and the second is excited by the first-mode response of the structure. Figure 10 Real bimodal spectra corresponding to (a) group 1 and (b) group 2. The results of the different bimodal methods are shown in Figures 11(a) and 11(b). The bandwidth correction factor for the JM method is calculated with (23) and (52). It should be noted that λ0,L+λ0,H≠1 for the real bimodal spectra in Figures 10(a) and 10(b), so λ0,L and λ0,H should be normalized as in (14), (15), and (16), while ρ for the FC method and MFC method is obtained through (52) and (54). Figure 11 The bandwidth correction factor of different bimodal methods for (a) group 1 and (b) group 2. Under the real stress spectra, the zero-order spectral moments corresponding to the low frequency and the high frequency are very small. The JM method provides acceptable damage estimation compared with RFC. The results computed by (54) for the FC and MFC methods may not be correct in some cases; the incorrect results are mainly caused by (49). However, the proposed analytical solution (see (52)) can give a satisfactory damage prediction and always provides a conservative prediction. More importantly, (52) is more convenient to apply in predicting the vibration-fatigue-life than numerical integration.

## 7. Conclusion

In this paper, bimodal spectral methods to predict the vibration-fatigue-life are investigated. The conclusions are as follows:

(1) An analytical solution of the convolution integral of two Rayleigh distributions is developed. This solution is a general form, of which the solution proposed by Jiao and Moan is a particular case.

(2) An analytical solution based on the bimodal spectral methods is derived. It is validated that the analytical solution shows good agreement with numerical integration. More importantly, the analytical solution is more attractive than numerical integration in engineering applications.

(3) For the JM method, the original approximate solution (see (23)) is reasonable in most cases; the analytical solution (see (52)) can give a better prediction.

(4) In ideal bimodal processes, the JM method and MFC method may overestimate or underestimate the fatigue damage, while in real bimodal processes, they give a conservative prediction. The FC method always provides conservative results in all cases. Therefore, the FC method can be recommended as a safe design technique for real engineering structures.

--- *Source: 1010726-2017-01-31.xml*
# Comparison of Raw Dairy Manure Slurry and Anaerobically Digested Slurry as N Sources for Grass Forage Production **Authors:** Olivia E. Saunders; Ann-Marie Fortuna; Joe H. Harrison; Elizabeth Whitefield; Craig G. Cogger; Ann C. Kennedy; Andy I. Bary **Journal:** International Journal of Agronomy (2012) **Publisher:** Hindawi Publishing Corporation **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2012/101074 --- ## Abstract We conducted a 3-year field study to determine how raw dairy slurry and anaerobically digested slurry (dairy slurry and food waste) applied via broadcast and subsurface deposition to reed canarygrass (Phalaris arundinacea) affected forage biomass, N uptake, apparent nitrogen recovery (ANR), and soil nitrate concentrations relative to urea. Annual N applications ranged from 600 kg N ha−1 in 2009 to 300 kg N ha−1 in 2011. Forage yield and N uptake were similar across slurry treatments. Soil nitrate concentrations were greatest at the beginning of the fall leaching season, and did not differ among slurry treatments or application methods. Urea-fertilized plots had the highest soil nitrate concentrations but did not consistently have greatest forage biomass. ANR for the slurry treatments ranged from 35 to 70% when calculations were based on ammonium-N concentration, compared with 31 to 65% for urea. Slurry ANR calculated on a total N basis was lower (15 to 40%) due to lower availability of the organic N in the slurries. No consistent differences in soil microbial biomass or other biological indicators were observed. Anaerobically digested slurry supported equal forage production and similar N use efficiency when compared to raw dairy slurry. --- ## Body ## 1. Introduction There is a need for a set of best management practices that addresses how to utilize the growing quantity of reactive nitrogen (N) produced by livestock operations. Animal agriculture in the United States has become more specialized with farms consolidating and growing in size [1]. The number of dairy farms has decreased by 94% since 1960, but the number of animals has remained constant [2]. Animal consolidation has created challenges with respect to on-farm N surplus, waste management and nutrient loading in the environment [3, 4]. Annually in the United States, more than 5800 Mg of manure N is produced [5]. One approach to ameliorate negative environmental impacts associated with animal manures is through adoption of anaerobic digestion technologies to treat farm-generated manures and food processing wastes [6–9]. Digestion of wastes can provide a stable and consistent source of nutrients comparable to inorganic fertilizers such as urea.Anaerobic digestion converts organic carbon into methane used to generate electricity, and it also converts organic N to plant available ammonium (NH4+), increasing the ratio of NH4+/total N in the effluent [10]. Carbon is removed during both the methane production and fiber removal processes, resulting in a smaller C : N ratio of the effluent [11]. Therefore, digested effluent can serve as low-cost source of readily available nutrients for crop production. Some studies [12] have found increased yield and N availability with application of anaerobically digested material as compared to nondigested material, possibly due to increased N availability and reduced carbon (C) content. Anaerobically digested manure can provide sufficient nutrients to support biomass and crop yields equivalent to synthetic fertilizers and raw manures [13, 14]. 
The apparent mineral nitrogen recovery (ANRM) of tall fescue (Festuca spp.) receiving raw dairy manure slurry was reported by Bittman et al. [13] as 55 and 51% at early and late applications, respectively, using the drag shoe method (band applied directly to soil, under plant canopy). When surface applied, the ANRM was 37% applied early and 40% when applied late. Similar results were presented by Cherney et al. [15] with orchardgrass (Dactulis glomerata L.) and tall fescue.Perennial systems that contain living plants year round tend to remove more reactive N than annual systems. Mean reed canarygrass biomass measured in trials in Minnesota was 13 Mg ha−1 under modest N applications (168 kg N ha−1 yr−1) [16]. Bermudagrass (Cynodon spp.) fertilized with 89–444 kg manure N ha−1 yielded a mean of 7.92 Mg dry matter ha−1 over four or five cuttings per year [17]. However, the forage crop recovered only 25% of the N applied over the four years included in the study. Reed canarygrass (Phalaris arundinacea) is an ideal candidate for N removal because of its ability to store any left-over N applied during the growing season in rhizomes overwinter, providing a significant advantage to the forage in early spring when soil-N may be limited [18].As with any N source, application of manure N in excess of crop uptake can result inNO3- leaching [19]. Up to 46% of applied manure N may persist in the soil, increasing the potential for loss of N after multiple applications in a growing season [20]. Some studies indicate that manure N poses less of a risk to leaching than the same amount of N in the form of synthetic fertilizer [21] due to immobilization of N that often occurs as humic materials build up in soil. Other researchers have determined manure increases NO3- leaching [22]. Irrespective of the source and properties of the N fertilizer applied during winter months when plants are dormant, NO3- leaching can be the main source of N loss [23].Manure additions can enhance soil fertility and quality through their short and long-term contribution to soil C and N [24–27]. Current research demonstrates that long-term manure applications increase soil organic matter, basal respiration, microbial biomass, and enzymatic activity (measures of soil quality), while mineral fertilizers can decrease pH, enzymatic activity, and microbial biomass C [28]. Organic amendments such as manure also have an effect on microbial community structure in addition to enhancing the activity, C content, and size of soil microbial biomass [29, 30]. A study by Zhong et al. [31] demonstrated total phospholipid fatty acid (PLFA), gram-negative, and actinobacterial PLFA were highest in treatments of organic matter and organic matter + mineral NPK fertilizer. Functional diversity from organic manure and organic manure + mineral NPK fertilizers increased over time far more than with additions of synthetic fertilizers alone. We anticipate that long-term application of raw dairy slurry and digested slurry will enhance soil quality affecting microbial community structure and activity overtime.The goal of this study was to determine the fate of applied N in anaerobically digested slurry (derived from mixed dairy slurry and food waste), raw dairy slurry, and urea during forage production. Specifically, we compare the biomass, ANR and N uptake of forages to determine which N source(s) has the potential to maintain forage biomass and reduce reactive N. 
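Since apparent nitrogen recovery figures prominently in this comparison, here is a minimal sketch of the standard difference-method calculation. The function name and the example numbers are illustrative, and the assumption that ANR is computed against the zero-fertilizer control follows common practice rather than this paper's methods (which appear later in the original article).

```python
def apparent_n_recovery(n_uptake_fertilized, n_uptake_control, n_applied):
    """Difference-method ANR (%): the extra N taken up by the fertilized crop,
    expressed as a percentage of the N applied. n_applied may be either the
    NH4-N rate or the total-N rate, giving the two ANR variants quoted in the
    abstract. All inputs in kg N per hectare."""
    return 100.0 * (n_uptake_fertilized - n_uptake_control) / n_applied

# Illustrative numbers only (not measurements from this study): a slurry plot
# taking up 250 kg N/ha, the zero-fertilizer control 100 kg N/ha, and
# 300 kg NH4-N/ha applied -> ANR = 50% on an ammonium-N basis.
print(apparent_n_recovery(250.0, 100.0, 300.0))
```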
In addition, we evaluated the effectiveness of subsurface deposition versus broadcast application of raw slurry and anaerobically digested slurry to improve forage biomass production, ANR, and N uptake of forages, as well as reduce residual reactive N. Our hypotheses were that (1) digested slurry would have more available N and generate a greater forage response than raw slurry, (2) subsurface deposition would conserve more N than surface application, resulting in greater forage response, particularly for the raw manure with higher solids content, and (3) application of digested slurry would reduce soil nitrate-N concentrations relative to urea.

## 2. Materials and Methods

### 2.1. Site Description

A field-based experiment, located on a commercial dairy in Monroe, Washington, was established in 2009. The field was mapped as 90% Puget silty clay loam (fine-silty, mixed, superactive, nonacid, mesic Fluvaquentic Endoaquepts) and 10% Sultan silt loam soils (fine-silty, isotic, mesic, Aquandic Dystrochrepts) [32] and had a history of manure applications. The site had climatic conditions typical of the Maritime Pacific Northwest, with cool wet winters and dry summers. The 2009 growing season had a drier than normal summer, while 2011 had a cool spring and dry summer (Table 1). The 2010 season had the best growing conditions, with a warm spring and more summer rainfall than 2009 or 2011.

Table 1: Average air temperature and total precipitation by month, from the start of plot implementation through the third growing season.

| Year | Month | Average air temp (°C) | Total precipitation (mm) |
|---|---|---|---|
| 2009 | April | 9.2 | 61 |
| | May | 12.8 | 73 |
| | June | 16.8 | 19 |
| | July | 20.0 | 6 |
| | August | 18.0 | 13 |
| | September | 15.5 | 54 |
| | October | 9.9 | 100 |
| | November | 7.6 | 137 |
| | December | 1.4 | 35 |
| 2010 | January | 7.3 | 89 |
| | February | 7.0 | 44 |
| | March | 7.8 | 54 |
| | April | 9.4 | 55 |
| | May | 11.2 | 66 |
| | June | 14.4 | 61 |
| | July | 17.0 | 7 |
| | August | 17.1 | 22 |
| | September | 15.3 | 85 |
| | October | 10.6 | 85 |
| | November | 5.7 | 107 |
| | December | 5.3 | 184 |
| 2011 | January | 5.1 | 120 |
| | February | 3.4 | 80 |
| | March | 7.1 | 119 |
| | April | 7.2 | 79 |
| | May | 10.6 | 74 |
| | June | 14.1 | 39 |
| | July | 15.7 | 18 |
| | August | 16.4 | 3 |
| | September | 15.5 | 23 |
| | October | 10.2 | 71 |

Data from Washington State University AgWeatherNet, 21-Acres Station.

The experimental design included six treatments in a randomized complete block design with four replicates (3.6 m × 18 m plots). Treatments included two dairy manure slurries (raw and anaerobically digested), two slurry application methods (broadcast and subsurface deposition), inorganic N (pelletized urea), and a zero-fertilizer treatment that received 0 kg N ha−1.

The raw dairy slurry was flushed from the barn floor and obtained fresh from a holding tank. Digested slurry was produced in an anaerobic digester with a plug-flow design, operating under mesophilic (23.5°C) conditions, with an approximate 17-day retention time and a storage capacity of ~6,100,000 liters. Liquid slurry from a single dairy of 1,000 lactating cows was codigested with pre-consumer food-waste substrates. Food waste consisted of no more than 30% of the total digester input and included whey, egg byproduct, processed fish, ruminant blood, biodiesel byproduct, and DAF (dissolved air flotation) grease. After digestion, materials were passed through a rotating drum screen solid separator, where solids were removed for composting and liquids were pumped to a storage lagoon. The digested slurry applied to plots was obtained just after liquid-solids separation and prior to lagoon storage.
A 250 mL subsample of each slurry was taken during each application (Table 2), cooled, and analyzed for total nitrogen, nitrate-N, ammonium-N, total phosphorus, and total solids [33] (Table 3).

Table 2: Dates of forage harvest and fertilizer (slurry and urea) applications for the field study in Monroe, WA, 2009–2011. 200920102011Forage HarvestFertilizer applicationaForage HarvestFertilizer applicationaForage harvestFertilizer applicationa17-Apr-09b4-Mar-107-May-09c14-May-0926-Apr-1011-May-105-May-1119-May-112-Jun-098-Jun-0910-Jun-1022-Jun-1010-Jun-1122-Jun-111-Jul-0920-Jun-09d7-Jul-1015-Jul-1014-Jul-104-Aug-1128-Jul-0911-Aug-0912-Aug-1022-Aug-1031-Aug-1131-Aug-0915-Sep-1030-Sep-1020-Sep-1029-Sep-092-Dec-1018-Oct-1130-Nov-09. aSoil samples were taken one day prior to fertilizer application. bEarly season manure application by grower, prior to plot establishment. cHarvest prior to plot establishment; yield data from this harvest do not include replicates. dUnintended slurry application from grower.

Table 3: Annual mean N and P concentrations of raw and anaerobically digested slurries applied to pasture plots.

| | Raw dairy slurry 2009 | Digested slurry 2009 | Raw dairy slurry 2010 | Digested slurry 2010 | Raw dairy slurry 2011 | Digested slurry 2011 |
|---|---|---|---|---|---|---|
| Total solids (%) | 2.8 | 1.9 | 3.4 | 2.0 | 3.4 | 1.4 |
| Total N (mg kg−1)a | 1441 | 1617 | 1653 | 2672 | 1475 | 2000 |
| NH4–N (mg kg−1) | 707 | 1038 | 776 | 1253 | 760 | 930 |
| Organic N (mg kg−1)b | 734 | 578 | 877 | 1419 | 715 | 1070 |
| Total P (mg kg−1) | 350 | 300 | 331 | 292 | 330 | 210 |

aN and P concentrations reported on an as-is basis. bOrganic nitrogen (N) = total N − NH4–N.

A mix of reed canarygrass (Phalaris arundinacea) cv. “Palaton” and white clover (Trifolium repens) was over-seeded into the field at 62 kg ha−1 in May 2006, three years before the start of this experiment. Plots were sprayed with broadleaf herbicides on 18 June 2009, 10 July 2010, and 8 August 2011 with 1.17 L ha−1 2,4-dichlorophenoxyacetic acid, 73 mL ha−1 carfentrazone-ethyl (Aim), and 410 mL ha−1 dicamba (Banvel) to control the clover.

### 2.2. Slurry Application Method

Slurries were applied via two application methods, subsurface deposition and surface broadcast application. Subsurface deposition was accomplished with a 4169-liter capacity manure tank fitted to a National Volume Equipment pump (model MEC 4000/PALD) with a 3.05 meter Aerway Sub-Surface Deposition (Model AW1000-2B48-D) and a custom Banderator attachment for application of manure through eight PVC pipes attached directly behind the Banderator tines. Tines were set 19 cm apart on the roller and allowed to drop 10 cm below the soil surface, creating intermittent slices 12.5 cm in length at the surface. Visual observation of the plots suggested that the tines created slices at random locations throughout the growing season. Surface broadcast of raw and anaerobically digested slurries was accomplished using an Aerway system with the tines raised above the soil surface.

Application rates for the raw and anaerobically digested slurry were projected to be equal in total N and allowed to vary in ammonium-N, for a total yearly application of approximately 600 kg N ha−1 in 2009, 500 kg N ha−1 in 2010, and 300 kg N ha−1 in 2011 (Table 4). We reduced the amount of N applied on urea and slurry treatments each year of the study from 2009 to 2011 based on the fall soil nitrate concentrations. When soil nitrate-N is above 35 mg N kg−1 in the fall, it is recommended that applications be eliminated after August 1, that N application rates be reduced in the subsequent year by 25–40 percent, and that sidedress N at planting be eliminated [34].
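To make the rate targeting above concrete, the short sketch below (Python) converts a slurry total-N concentration from Table 3 into an as-applied mass rate. The 75 kg N ha−1 per-application target is only an illustration (one quarter of the approximately 300 kg N ha−1 2011 seasonal target); actual application volumes were set by the equipment and calibration described in Section 2.2, so this is a minimal sketch of the unit conversion, not the procedure used in the field.

```python
def slurry_rate_kg_per_ha(target_n_kg_ha, total_n_mg_per_kg):
    """Mass of slurry (kg/ha, as-is basis) needed to supply a target total-N rate,
    given the slurry total-N concentration in mg N per kg slurry."""
    return target_n_kg_ha / (total_n_mg_per_kg * 1e-6)

# Illustrative (hypothetical) 2011 digested-slurry application at 75 kg total N/ha,
# using the Table 3 annual means of 2000 mg total N/kg and 930 mg NH4-N/kg.
mass_kg_ha = slurry_rate_kg_per_ha(75, 2000)   # 37,500 kg slurry/ha (~37.5 Mg/ha)
nh4_n_kg_ha = mass_kg_ha * 930 * 1e-6          # ~34.9 kg NH4-N/ha supplied
print(round(mass_kg_ha), round(nh4_n_kg_ha, 1))
```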
An early season raw dairy slurry application (Table 2, application 1 in 2009) was applied by the grower to all plots prior to establishment of the field plots and is not included in the statistical analyses. An inadvertent application of 143 kg N ha−1 across all plots by the grower in June of 2009 (Table 2, application 4) is included in the analysis. This accounts for the higher annual application rate in 2009. Application rates were lowest in 2011 because wet conditions prevented an early season application (Table 4), and the slurries had lower mean N concentrations. Plots were fertilized no more than five days after grass harvest. There were a total of five manure applications per year during 2009-2010 and four in 2011 (Table 2).Table 4 Application rate of fertilizer source at each application period, and seasonal total N and P inputs, 2009–2011. Application1234512345Seasonal totalTotal N kg ha−1NH4+–N kg ha−1Total N kg ha−1NH4+–N kg ha−1Total P kg ha−12009Control111a00143b0510083b02541350Urea111a112112143b1125111211283b1125904710Raw111a121168143b4751647883b23590300130Digested111a176115143b81511149383b476263891212010Control0000000000000Urea112112112112011211211211204484480Raw921121139984532458535350024092Digested8612163671295540354098466268542011Control0000000000000Urea01121121120011211211203363360Raw05260846303424294725813551Digested033136697002352332730813426aEarly season slurry application by grower, prior to plot establishment.bApplication 4 in 2009 was an unintended application from the grower to all plots. Urea fertilizer considered equal to NH4+ in plant availability. ### 2.3. Field Management and Analysis The aboveground biomass from grass swaths, 0.6 × 0.6 m, was harvested from the center of each plot every 28–35 d (Table2) using hand-held hedge clippers. Three subsamples were taken within each of the four plot replicates for each treatment. The three subsamples were divided into grasses, clover, and weeds to adjust the aboveground biomass and ANR measurements. Due to herbicide applications, weeds were minimal all years. White clover biomass was significant in two of the cuttings in 2011. Samples were bagged, and weighed immediately. Forage was then dried at 55°C for 24 hrs, weighed, and ground in a Wiley Mill (Arthur H. Thomas Co., Philadelphia, PA) with a 1 mm screen. Ground samples were analyzed for forage nitrogen content with a Leco FP-528 Nitrogen Analyzer (Leco Corporation, St. Joseph, MI; AOAC, 2001) by Cumberland Valley Analytical Services Inc. (Hagerstown, MD).Soil samples were collected from each plot and analyzed for Bray-1 P, exchangeable K, and pH at the beginning of the experiment 2009 and again at the end of 2010. Six soil cores per plot were taken to a 30-cm depth using a 2.54 cm diameter soil sampling probe and composited. Additional soil samples were collected for nitrate-N analysis monthly throughout the growing season using the same method, except for biweekly from mid-September through the end of November. Nitrate-N below the 30 cm depth was not measured. Soil chemical properties were analyzed by Soiltest Farm Consultants (Moses Lake, WA) using the methods of Gavlak et al. [33]. Ammonium-N was determined using a salicylate-nitroprusside method and nitrate-N using the cadmium reduction method. 
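The forage yield and N-uptake values reported in Section 3 follow from the swath sampling described above by simple area scaling (dry mass per 0.6 × 0.6 m swath scaled to a per-hectare basis, multiplied by the measured N concentration). The sketch below shows that arithmetic; the 200 g dry mass and 2.5% N are hypothetical values used only for illustration and are not data from this study.

```python
QUADRAT_AREA_M2 = 0.6 * 0.6  # harvested swath area described in Section 2.3

def yield_mg_ha(dry_mass_g):
    """Scale oven-dry forage mass from one 0.36 m2 swath to Mg dry matter per ha."""
    return dry_mass_g / QUADRAT_AREA_M2 * 10_000 / 1e6  # g/m2 -> g/ha -> Mg/ha

def n_uptake_kg_ha(dry_mass_g, n_percent):
    """Forage N uptake (kg N/ha) from dry-matter yield and total-N concentration."""
    return yield_mg_ha(dry_mass_g) * 1000 * n_percent / 100

# Hypothetical cutting: 200 g dry matter per swath at 2.5% N
print(round(yield_mg_ha(200), 2), round(n_uptake_kg_ha(200, 2.5), 1))  # 5.56 Mg/ha, 138.9 kg N/ha
```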
Soil samples for gravimetric water content were homogenized by mixing, and a subsample was dried at 38°C for 72 hrs [35].

Whole-soil phospholipid fatty acid (PLFA) procedures generally followed Bligh and Dyer [36] as described by Petersen and Klug [37] and modified by Ibekwe and Kennedy [38]. Fatty acid methyl esters were analyzed on a gas chromatograph (Agilent Technologies GC 6890, Palo Alto, CA) with a fused silica column, equipped with a flame ionization detector and integrator. ChemStation (Agilent Technologies) operated the sampling, analysis, and integration of the samples. Extraction efficiencies were based on the nonadecanoic acid peak as an internal standard. Peak chromatographic responses were translated into mole responses using the internal standard, and responses were recalculated as needed. Microbial groups were calculated based on the procedure of Pritchett et al. [39].

### 2.4. Slurry Analysis

Slurries were analyzed for total N, ammonium-N, and total P (Table 3). Nitrogen was extracted via the Kjeldahl method [33]. Phosphorus was analyzed using a Thermo IRIS Advantage HX Inductively Coupled Plasma (ICP) Radial Spectrometer (Thermo Instrument Systems, Inc., Waltham, MA) by the Dairy One Forage Analysis Laboratory (Ithaca, NY).

### 2.5. Statistical Analyses and Calculations

An analysis of variance (ANOVA) was run using SAS PROC MIXED on the aboveground forage biomass, nitrogen content in forage, soil nitrate-N, and soil biological groups for all treatments across the three years [40]. Data were analyzed as a randomized complete block design, with each of the six treatments analyzed independently. Crop biomass and crop-nitrogen content from each year were analyzed separately using ANOVA with treatment and sample day as fixed effects. Significance is indicated at p < 0.05 [40].

Forage apparent N recovery (ANR, %) was calculated in 2010 and 2011 as a percentage of N (total and inorganic) applied during the season, based on the work of Cogger et al. [41] and Bittman et al. [13]:

ANR (%) = 100 × [(annual grass N uptake, treated) − (annual grass N uptake, control)] / (N applied).    (1)

Estimates of N fixed in white clover were set to 80% of total N in clover biomass, based on 15N studies conducted in a pasture of similar forages and N fertilizer management [42]. Using this correction, 80% of clover N was subtracted from the forage N uptake values used in the ANR calculations for the two cuttings in 2011 with significant amounts of clover.
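Equation (1) and the clover correction can be written compactly; the minimal sketch below (Python) is an illustration of the calculation, using the 2010 urea plots (Tables 4 and 6) as a check against the value reported in Table 7.

```python
def anr_percent(n_uptake_treated, n_uptake_control, n_applied,
                clover_n_treated=0.0, clover_n_control=0.0, fixed_fraction=0.80):
    """Apparent N recovery (%) per Eq. (1), with 80% of clover N treated as fixed N
    and therefore excluded from fertilizer-derived uptake."""
    treated = n_uptake_treated - fixed_fraction * clover_n_treated
    control = n_uptake_control - fixed_fraction * clover_n_control
    return 100.0 * (treated - control) / n_applied

# Check against Table 7: in 2010 the urea plots took up 655 kg N/ha versus
# 362 kg N/ha in the control, with 448 kg N/ha applied (Tables 4 and 6).
print(round(anr_percent(655, 362, 448)))  # 65
```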
## 3. Results

### 3.1. Baseline Soil Data

Soil data sampled in May 2009, prior to the start of the field experiment (Table 5), indicated that fertility was not different across the field site. Organic matter, a source of inorganic N, averaged 5.5 percent (55 g kg−1). Bulk density of soils in the field ranged from 1.14 to 1.30 g cm−3, with a mean of 1.21 g cm−3.

Table 5: Soil pH, Bray P, and exchangeable K at the start of the experiment and after two years of slurry applications.

| Plot | pH | Bray P (mg kg−1) | NH4OAc K (mg kg−1) |
|---|---|---|---|
| **Baseline, 12 May 2009** | | | |
| Control | 6.0 | 173 | 591 |
| Urea | 6.0 | 176 | 608 |
| Raw-subsurface | 6.0 | 160 | 598 |
| Raw-broadcast | 6.0 | 165 | 632 |
| Digested-subsurface | 6.1 | 140 | 616 |
| Digested-broadcast | 6.0 | 168 | 612 |
| **2 December 2010** | | | |
| Control | 6.2a | 173 | 379b |
| Urea | 6.0b | 176 | 286c |
| Raw-subsurface | 6.3a | 186 | 479a |
| Raw-broadcast | 6.2a | 185 | 465a |
| Digested-subsurface | 6.2a | 173 | 447ab |
| Digested-broadcast | 6.2a | 162 | 440ab |

Letters in a column within a year indicate significant differences at p = 0.05; letters are not included when no significant differences were found. Samples from different dates were analyzed separately using an ANOVA.

### 3.2. Forage Biomass, N Uptake and ANR

Analysis of variance results for cumulative forage biomass in 2009 to 2011 are presented in Table 6. Total yield was greatest in 2010 (14.1–18.0 dry Mg ha−1) and lowest in 2011 (9.2–11.1 dry Mg ha−1). The 2009 data (8.0–9.5 dry Mg ha−1) did not include the first cutting of the year (7.7 Mg ha−1) because it was harvested before plots and treatments were established. The growing conditions in 2010 were the most favorable of the three seasons. Forage biomass in 2011 was reduced by cool spring temperatures and low summer rainfall (Table 1). Urea had the highest yield in 2009 (Table 6). In 2010, urea and digested broadcast slurry had higher yield than the digested slurry applied subsurface. Slurry type and application method did not affect yield in 2009 or 2011.

Table 6: Annual forage yield and N uptake, 2009 to 2011.

| Treatment | Forage yield 2009a (dry Mg ha−1) | 2010 | 2011 | N uptake 2009b (kg ha−1) | 2010 | 2011 |
|---|---|---|---|---|---|---|
| Control | 8.0b | 14.1c | 9.2b | 283c | 362cd | 192c |
| Urea | 9.5a | 18.0a | 11.1a | 389a | 655a | 296a |
| Raw-subsurface | 8.6b | 16.6ab | 10.5a | 330b | 507b | 263b |
| Raw-broadcast | 7.9b | 17.0ab | 10.8a | 308bc | 531b | 254b |
| Digested-subsurface | 8.6b | 16.1b | 10.9a | 332b | 501b | 239b |
| Digested-broadcast | 8.7b | 17.8a | 10.9a | 338b | 550ab | 255b |

Letters within a column indicate significant differences at p = 0.05. aForage yield from the first harvest, prior to implementation of the nitrogen fertilizer treatments and application methods, was 7.7 Mg ha−1. bThe N content of forage from the first harvest, prior to implementation of the treatments, was 253 kg N ha−1.

Similar trends occurred when comparing crop N uptake in the forage grasses (Table 6). Urea-treated plots accumulated the most plant N, ranging from 296 to 655 kg N ha−1 removed per year. Uptake of N in forage grasses was greatest in 2010 (Table 6).
Slurry type and application method did not have a significant effect on N uptake in any year. Nitrogen uptake was lowest in 2011, likely a result of lower N application rates (Table 4) and poorer weather during the spring and summer. Forages in 2011 also contained significant amounts of clover, an N fixer (27% of the dry mass of forage yield at harvest 1 and 34% at harvest 2). Less than 10% of the forage biomass was clover in 2009 and 2010.

In the first full season of the study (2010), the recovery of applied N in the forage (ANR) was higher than in 2011 (Table 7). More favorable weather patterns for growth in 2010 compared with 2011 probably increased ANR in 2010. Urea treatments had an ANR of 65% in 2010 and 31% in 2011. Calculations based on total N applied in slurries were lower, ranging from 29 to 40% in 2010 and 15 to 24% in 2011, and similar between the two types of slurry. ANR calculations based only on the amount of total NH4+–N applied in slurries were 52 to 70% in 2010 and 35 to 53% in 2011, similar to the ANR observed for urea.

Table 7: Apparent nitrogen recovery (ANR) in harvested forage as a percentage of total and ammonium N applied, 2010 and 2011.

| Treatment | 2010, % of total N | 2010, % of NH4+–N | 2011, % of total N | 2011, % of NH4+–N |
|---|---|---|---|---|
| Urea | 65 | 65 | 31 | 31 |
| Raw-subsurface | 29 | 60 | 15 | 35 |
| Raw-broadcast | 34 | 70 | 24 | 47 |
| Digested-subsurface | 30 | 52 | 23 | 53 |
| Digested-broadcast | 40 | 70 | 20 | 46 |

Urea fertilizer was considered equal to NH4+ in plant availability.

### 3.3. Soil Nitrate-N

Plots receiving urea had the highest concentration of soil nitrate-N over the three seasons, while there were few differences among the slurry treatments (Table 8). Soil nitrate-N concentrations were highest in all fertilized treatments from July to the start of the fall rainy season, when the potential for leaching increases. Soil nitrate-N levels were greatest in 2009, likely because of the high rates of N applied that year. Lower soil nitrate-N in 2010 reflected the high N uptake during the favorable growing conditions that year. Soil nitrate-N increased again in 2011, particularly in the fall. This was despite a lower N application rate and may reflect the reduced yield and N uptake by the forages during the less favorable growing season in 2011.

Table 8: Soil NO3-–N (mg kg−1) at 0 to 30 cm depth, 2009–2011.

| Sample date | Control | Urea | Raw subsurface | Raw broadcast | Digested subsurface | Digested broadcast |
|---|---|---|---|---|---|---|
| **2009** | | | | | | |
| 12-May | 20gh | 21gh | 18gh | 19gh | 19gh | 20gh |
| 4-Jun | 18gh | 28fg | 24fg | 23g | 30fg | 24fg |
| 6-Jul | 35fe | 80b | 71bc | 76bc | 68c | 65cd |
| 3-Aug | 34f | 86ab | 80b | 76bc | 86ab | 71bc |
| 9-Sep | 20gh | 91a | 66cd | 82ab | 72bc | 67c |
| 21-Sep | 20gh | 81b | 53de | 62cd | 78bc | 55d |
| 1-Oct | 17gh | 62cd | 52de | 50de | 56d | 45de |
| 19-Oct | 14gh | 91a | 35fe | 54de | 44e | 45de |
| 3-Nov | 11h | 54de | 23g | 30fg | 29fg | 23g |
| 19-Nov | 10h | 22gh | 9.8h | 12h | 11h | 10h |
| 30-Nov | 11h | 11gh | 12h | 12h | 10h | 12h |
| **2010** | | | | | | |
| 26-Feb | 12fg | 11fg | 15ef | 15ef | 13f | 14ef |
| 11-May | 13f | 23d | 20de | 20de | 20de | 18ef |
| 16-Jun | 6.1g | 9.2fg | 7.9g | 7.0g | 7.4g | 7.3g |
| 13-Jul | 10fg | 25cd | 18e | 18de | 16ef | 14ef |
| 17-Aug | 13fg | 61a | 23cd | 22de | 18ef | 22de |
| 30-Sep | 18de | 36b | 23cd | 28c | 22de | 19de |
| 12-Oct | 12fg | 28bc | 22de | 22de | 27cd | 24cd |
| 26-Oct | 7.2g | 12fg | 15ef | 17ef | 21de | 16ef |
| 2-Dec | 6.9g | 8.2g | 11fg | 9.5fg | 10fg | 10fg |
| **2011** | | | | | | |
| 4-Apr | 6.1g | 6.2g | 7.0g | 6.5g | 7.0g | 8.0g |
| 21-Jun | 7.9g | 18ef | 11fg | 11fg | 12fe | 12fg |
| 4-Aug | 8.7g | 21ef | 15fg | 17ef | 18ef | 18ef |
| 30-Aug | 12fg | 43cd | 22ef | 16f | 23ef | 17ef |
| 16-Sept | 17ef | 48bc | 39cd | 48bc | 48bc | 32d |
| 29-Sept | 17ef | 46c | 36d | 42cd | 55b | 36d |
| 13-Oct | 19ef | 66a | 45c | 44c | 50bc | 44c |
| 4-Nov | 8.4g | 32d | 28de | 24e | 30de | 23ef |

Letters within a year indicate significant differences at p = 0.05.

### 3.4. Microbial Groups

Microbial groups in general did not vary with treatment, but rather varied by year (Table 9). The control and urea treatments varied from the other treatments most consistently for most groups, while no consistent differences were observed among the slurry treatments.
By 2011, the control treatment had significantly lower bacteria and anaerobic markers than the other treatments, but similar levels of overall microbial biomass and fungi.

Table 9: Soil microbial analyses from field plots in the spring, 2009–2011.

| Treatment | Biomass (g kg−1) | Bacteria (mole percent)a | Fungi (mole percent) | Bacteria-to-fungi ratio | Anaerobe (mole percent) | Mono-unsaturated (mole percent) |
|---|---|---|---|---|---|---|
| **May 2009** | | | | | | |
| Control | 535 ab | 0.246 | 0.098 | 3.01 | 0.091 | 0.338 |
| Urea | 433 c | 0.246 | 0.092 | 3.23 | 0.092 | 0.348 |
| Raw-B | 538 ab | 0.243 | 0.093 | 3.18 | 0.094 | 0.335 |
| Raw-SSD | 454 bc | 0.237 | 0.094 | 3.07 | 0.091 | 0.330 |
| Digested-B | 473 bc | 0.242 | 0.092 | 3.18 | 0.093 | 0.324 |
| Digested-SSD | 623 a | 0.238 | 0.083 | 3.45 | 0.091 | 0.322 |
| **May 2010** | | | | | | |
| Control | 610 a | 0.243 b | 0.071 abc | 4.18 ab | 0.115 ab | 0.328 b |
| Urea | 333 b | 0.215 c | 0.074 ab | 3.48 c | 0.101 b | 0.322 b |
| Raw-B | 401 a | 0.266 ab | 0.084 a | 4.04 bc | 0.116 ab | 0.357 ab |
| Raw-SSD | 297 b | 0.268 a | 0.066 bc | 4.91 a | 0.127 a | 0.414 a |
| Digested-B | 258 b | 0.259 ab | 0.071 abc | 4.65 ab | 0.123 a | 0.398 a |
| Digested-SSD | 279 b | 0.267 ab | 0.066 c | 4.97 a | 0.125 a | 0.406 a |
| **April 2011** | | | | | | |
| Control | 512 | 0.221 b | 0.087 ab | 3.11 d | 0.082 c | 0.341 b |
| Urea | 447 | 0.250 a | 0.078 c | 3.96 a | 0.094 ab | 0.380 a |
| Raw-B | 489 | 0.257 a | 0.093 ab | 3.35 bcd | 0.092 b | 0.335 b |
| Raw-SSD | 428 | 0.253 a | 0.095 a | 3.25 cd | 0.100 ab | 0.345 b |
| Digested-B | 441 | 0.258 a | 0.085 bc | 3.68 ab | 0.101 ab | 0.357 ab |
| Digested-SSD | 491 | 0.255 a | 0.085 bc | 3.61 abc | 0.102 a | 0.359 ab |

Letters within a column within a year indicate significant differences at p = 0.05; no letters indicate no significant differences within that column. aMole percent = (moles of substance in a mixture)/(moles of mixture), %.
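For readers relating the Table 8 concentrations to an area basis, the minimal sketch below applies the mean bulk density of 1.21 g cm−3 reported in Section 3.1 and the 30 cm sampling depth. This conversion is illustrative and was not part of the original analysis; it simply shows what the 30 mg kg−1 fall guideline discussed in Section 4.2 implies on a per-hectare basis under those assumptions.

```python
def nitrate_kg_per_ha(no3_n_mg_per_kg, bulk_density_g_cm3=1.21, depth_m=0.30):
    """Convert a 0-30 cm soil NO3-N concentration (mg/kg dry soil) to an
    approximate area basis (kg N/ha) for the sampled layer."""
    soil_mass_kg_ha = bulk_density_g_cm3 * 1000 * depth_m * 10_000  # kg soil per ha in the layer
    return no3_n_mg_per_kg * 1e-6 * soil_mass_kg_ha

# The 30 mg/kg fall guideline corresponds to roughly:
print(round(nitrate_kg_per_ha(30)))  # ~109 kg NO3-N/ha in the 0-30 cm layer
```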
## 4. Discussion

### 4.1. Forage Biomass, N Uptake and ANR

Forage biomass, plant N uptake, and nitrate concentrations during the 2009–2011 growing seasons were affected by seasonal and long-term N management (a history of manure applications) that resulted in high N uptake from the control treatments. Also, favorable growing conditions in 2010 allowed for a more productive field season that year. For this study, total harvest yield during each season was within the range of other published work where animal manures were applied to forages harvested multiple times over a season [16, 17, 41]. While other studies have shown incorporation of manure to increase yield and crop N content by reducing gaseous losses [13], we did not see an improvement in crop N content from incorporation of slurries in this system.
Forages grown in plots with broadcast-applied slurries took up the same amount of N or more N than with subsurface deposition, which may have been caused by plant-growth disturbance from the Aerway Banderator when applying effluent subsurface. Additionally, the infiltration rate of the anaerobically digested slurry may have been rapid enough that gaseous losses in the field were not different between subsurface deposition and broadcast applications. From an agronomic perspective, the two slurry types performed as well as urea over the three growing seasons. Anaerobically digested slurry was suitable for forage production when applied at rates equal to raw dairy slurry. Moller and Stinner [8] also reported no differences in N uptake between digested and undigested slurry. How the system will respond after many years of anaerobically digested slurry application is unclear, as the quantity of organic N applied is less than that of raw dairy slurry, supplying less recalcitrant N to the pool of soil organic matter.

### 4.2. Soil Nitrate-N and Microbial Groups

We found few differences between slurry treatments in seasonal soil NO3- concentrations. There was, however, significantly more nitrate-N in urea-treated plots on many dates, even though there was slightly less total N applied to the urea plots in some years. The spike in nitrate concentration in October on soils where urea was applied in place of slurries indicates a greater potential for N leaching from urea compared with the slurries. All treatments declined in NO3- concentrations to levels that were not significantly different from the control treatments after the fall rains began. Lower soil nitrate-N during the growing season of 2010 compared with 2009 may be due in part to the lower amount of total nitrogen applied. Also, little rainfall during the 2009 growing season may have caused a buildup of soil nitrate in the surface layers. Higher late-season nitrate in 2011 compared with 2010 may have been the result of poorer growing conditions reducing N uptake.

Postharvest soil nitrate-N is a measure of residual plant-available N subject to leaching loss, and an indicator of excess applied N and/or poor yield. The recommended timing of postharvest soil nitrate testing in forage systems that utilize animal manure as a source of fertility in the Maritime Pacific Northwest is prior to October 15 [34]. Nitrate concentrations from soil samples collected from our site in mid-October showed that all treatments except the control exceeded 30 mg NO3–N kg−1 in 2009 and 2011, with NO3–N levels highest in the urea treatment. Fall nitrate-N levels above 30 mg kg−1 are considered excessive in manured pastures, and reduced rates and adjusted timing of applications are recommended [34].

While soil nitrate concentrations decreased during the fall 2009 months, it is likely that some of this nitrate was not entirely leached from the system but was stored in the canarygrass rhizomes over winter, as described by Partala et al. [18]. This is evident in the significantly higher yields and nitrogen content of forages during the early-season harvest on 26 April 2010.

While the focus of this study is N, dairy manure also contains high levels of P. Runoff from high-P soils can lead to eutrophication in fresh water. Soil P levels were already excessive at the start of this study, because of the history of dairy manure applications at the site, and P tended to increase in the slurry-treated plots during the study (Table 5).
The anaerobically digested slurry contained less P than the raw dairy slurry, probably because it had a lower solids content, which would lead to less P accumulation over time.

Microbial groups varied with year more than with treatment in these field studies. Urea treatments varied from the other treatments to the greatest extent. The raw and anaerobically digested materials did not alter the soil microbial components as determined by PLFA. Our results may partially be the result of past manure applications.
## 5. Conclusions

Subsurface deposition did not increase yield or N uptake compared with surface broadcast application, possibly because the slurries were low enough in solids to infiltrate readily into the soil, and because the subsurface injectors could have disrupted plant growth. Anaerobically digested dairy slurry was shown to provide adequate soil fertility and N availability for crop uptake and forage production over the three field seasons. In the short term, anaerobically digested slurry did not significantly increase yield or N uptake compared with similar rates of raw slurry.

This study indicated that soil nitrates measured to a 30 cm depth were fairly consistent across slurry treatments and application methods during each of the field seasons. Soil nitrate-N was lower in 2010 due to favorable growing conditions and lower total applied N relative to 2009. Although urea treatments had the highest apparent N recovery value, the potential for nitrate leaching was also greatest under this management. Anaerobically digested slurry did not increase soil NO3- concentrations or alter the microbial composition, and it provided equal forage production and similar N use efficiency when compared to undigested dairy slurry.

---
*Source: 101074-2012-05-06.xml*
This accounts for the higher annual application rate in 2009. Application rates were lowest in 2011 because wet conditions prevented an early season application (Table 4), and the slurries had lower mean N concentrations. Plots were fertilized no more than five days after grass harvest. There were a total of five manure applications per year during 2009-2010 and four in 2011 (Table 2).Table 4 Application rate of fertilizer source at each application period, and seasonal total N and P inputs, 2009–2011. Application1234512345Seasonal totalTotal N kg ha−1NH4+–N kg ha−1Total N kg ha−1NH4+–N kg ha−1Total P kg ha−12009Control111a00143b0510083b02541350Urea111a112112143b1125111211283b1125904710Raw111a121168143b4751647883b23590300130Digested111a176115143b81511149383b476263891212010Control0000000000000Urea112112112112011211211211204484480Raw921121139984532458535350024092Digested8612163671295540354098466268542011Control0000000000000Urea01121121120011211211203363360Raw05260846303424294725813551Digested033136697002352332730813426aEarly season slurry application by grower, prior to plot establishment.bApplication 4 in 2009 was an unintended application from the grower to all plots. Urea fertilizer considered equal to NH4+ in plant availability. ### 2.3. Field Management and Analysis The aboveground biomass from grass swaths, 0.6 × 0.6 m, was harvested from the center of each plot every 28–35 d (Table2) using hand-held hedge clippers. Three subsamples were taken within each of the four plot replicates for each treatment. The three subsamples were divided into grasses, clover, and weeds to adjust the aboveground biomass and ANR measurements. Due to herbicide applications, weeds were minimal all years. White clover biomass was significant in two of the cuttings in 2011. Samples were bagged, and weighed immediately. Forage was then dried at 55°C for 24 hrs, weighed, and ground in a Wiley Mill (Arthur H. Thomas Co., Philadelphia, PA) with a 1 mm screen. Ground samples were analyzed for forage nitrogen content with a Leco FP-528 Nitrogen Analyzer (Leco Corporation, St. Joseph, MI; AOAC, 2001) by Cumberland Valley Analytical Services Inc. (Hagerstown, MD).Soil samples were collected from each plot and analyzed for Bray-1 P, exchangeable K, and pH at the beginning of the experiment 2009 and again at the end of 2010. Six soil cores per plot were taken to a 30-cm depth using a 2.54 cm diameter soil sampling probe and composited. Additional soil samples were collected for nitrate-N analysis monthly throughout the growing season using the same method, except for biweekly from mid-September through the end of November. Nitrate-N below the 30 cm depth was not measured. Soil chemical properties were analyzed by Soiltest Farm Consultants (Moses Lake, WA) using the methods of Gavlak et al. [33]. Ammonium-N was determined using a salicylate-nitroprusside method and nitrate-N using the cadmium reduction method. Soil samples for gravimetric water content were homogenized by mixing, and a subsample was dried at 38°C for 72 hrs [35].Whole-soil phospholipid fatty acid (PLFA) procedures generally followed Bligh and Dyer [36] as described by Petersen and Klug [37] and modified by Ibekwe and Kennedy [38]. Fatty acid methyl esters were analyzed on a gas chromatograph (Agilent Technologies GC 6890, Palo Alto, CA) with a fused silica column and equipped with a flame ionizer detector and integrator. ChemStation (Agilent Technologies) operated the sampling, analysis, and integration of the samples. 
Extraction efficiencies were based on the nonadecanoic acid peak as an internal standard. Peak chromatographic responses were translated into mole responses using the internal standard and responses were recalculated as needed. Microbial groups were calculated based on the procedure of Pritchett et al. [39]. ### 2.4. Slurry Analysis Slurries were analyzed for total N, ammonium-N, and total P (Table 3). Nitrogen was extracted via the Kjeldahl method [33]. Phosphorus was analyzed using a Thermo IRIS Advantage HX Inductively Coupled Plasma (ICP) Radial Spectrometer (Thermo Instrument Systems, Inc., Waltham, MA) by the Dairy One Forage Analysis Laboratory (Ithaca, NY). ### 2.5. Statistical Analyses and Calculations An analysis of variance (ANOVA) was run using SAS PROC MIXED on the aboveground forage biomass, nitrogen content in forage, soil nitrate-N, and soil biological groups for all treatments across the three years [40]. Data were analyzed as a randomized complete block design with each of the six treatments analyzed independently. Crop biomass and crop-nitrogen content from each year were analyzed separately using ANOVA with treatment and sample day as fixed effects. Significance is indicated at P < 0.05 [40]. Forage apparent N recovery (ANR%) was calculated in 2010 and 2011 as a percentage of N (total and inorganic) applied during the season based on the work of Cogger et al. [41] and Bittman et al. [13]: (1) ANR% = 100 × [(annual grass N uptake, treated) − (annual grass N uptake, control)] / applied N. Estimates of N fixed in white clover were set to 80% of total N in clover biomass based on 15N studies conducted in a pasture of similar forages and N fertilizer management [42].
Using the above correction, 80% of clover N was subtracted from the forage N uptake values used in the ANR calculations for the two cuttings in 2011 with significant amounts of clover. ## 3. Results ### 3.1. Baseline Soil Data Soil data sampled in May 2009 prior to the start of the field experiment (Table5) indicated that fertility was not different across the field site. Organic matter, a source of inorganic-N, averaged 5.5 percent (55 g kg−1). Bulk density of soils in the field ranged from 1.14 to 1.30 g cm−3, with a mean average density of 1.21 g cm−3.Table 5 Soil pH, Bray-P, and exchangeable K at start of experiment and after two years of slurry applications. PlotpHBray PNH4OAc Kmg kg−1Baseline, 12 May 2009Control6.0173591Urea6.0176608Raw-subsurface6.0160598Raw-broadcast6.0165632Digested-subsurface6.1140616Digested-broadcast6.01686122 December 2010Control6.2a173379bUrea6.0b176286cRaw-subsurface6.3a186479aRaw-broadcast6.2a185465aDigested-subsurface6.2a173447abDigested-broadcast6.2a162440abLetters in a column within a year indicate significant differences atρ=0.05, letters are not included when no significant differences were found. Samples from different dates were analyzed separately using an ANOVA. ### 3.2. Forage Biomass, N Uptake and ANR Analysis of variance results for cumulative forage biomass in 2009 to 2011 are presented in Table6. Total yield was greatest in 2010 (14.1–18.0 Dry Mg ha−1) and lowest in 2011 (9.2–11.1 Dry Mg ha−1). The 2009 data (8.08–9.5 Dry Mg ha−1) did not include the first cutting of the year (7.7 Mg ha−1) because it was harvested before plots and treatments were established. The growing conditions in 2010 were the most favorable of the three seasons. Forage biomass in 2011 was reduced by cool spring temperatures and low summer rainfall (Table 1). Urea had the highest yield in 2009, (Table 6). In 2010, urea and digested broadcast slurry had higher yield than the digested slurry applied subsurface. Slurry type and application method did not affect yield in 2009 or 2011.Table 6 Annual forage yield and N uptake, 2009 to 2011. Forage yieldNitrogen uptakeTreatmentDry Mg ha−1N kg ha−12009a201020112009b20102011Control8.0b14.1c9.2b283c362cd192cUrea9.5a18.0a11.1a389a655a296aRaw- subsurface8.6b16.6ab10.5a330b507b263bRaw- broadcast7.9b17.0ab10.8a308bc531b254bDigested-subsurface8.6b16.1b10.9a332b501b239bDigested-broadcast8.7b17.8a10.9a338b550ab255bLetters within a column indicate significant differences atρ=0.05.aValues for forage yield from the first harvest prior to implementation of nitrogen fertilizer treatments and application method were 7.7 Mg ha−1.bThe N content in forage yield from the first harvest prior to implementation of nitrogen fertilizer treatments and application method was 253 kg N ha−1.Similar trends occurred when comparing crop N uptake in the forage grasses (Table6). Urea-treated plots accumulated the most plant N, ranging from 296 to 655 kg N ha−1 removed per year. Uptake of N in forage grasses was greatest in 2010 (Table 6). Slurry type and application method did not have a significant effect on N uptake any year. Nitrogen uptake was lowest in 2011, likely a result of lower N application rates (Table 4) and poorer weather during the spring and summer. Forages in 2011 also contained significant amounts of clover, an N fixer (27% of the dry mass of forage yield at harvest 1 and 34% at harvest 2). 
Less than 10% of the forage biomass was clover in 2009 and 2010.In the first full season of the study (2010), the recovery of applied N in the forage (ANR) was higher than in 2011 (Table7). More favorable weather patterns for growth in 2010 compared with 2011 probably increased ANR in 2010. Urea treatments had an ANR of 65% in 2010 and 31% in 2011. Calculations based on total N applied in slurries were lower, ranging from 29 to 40% in 2010 and 15 to 24% in 2011, and similar between the two types of slurry. ANR calculations based only on the amount of total NH4+–N applied in slurries were 52 to 70% in 2010 and 35 to 53% in 2011, similar to ANR observed for urea.Table 7 Apparent nitrogen recovery (ANR) in harvested forage as percentage of total and ammonium N applied, 2010 and 2011. ANR 2010ANR 2011Treatment% of Total N% ofNH4+–N% of Total N% ofNH4+–NUrea65653131Raw-subsurface29601535Raw-broadcast34702447Digested-subsurface30522353Digested-broadcast40702046Urea fertilizer considered equal toNH4+ in plant availability. ### 3.3. Soil Nitrate-N Plots receiving urea had the highest concentration of soil nitrate-N over the three seasons, while there were few differences among the slurry treatments (Table8). Soil nitrate-N concentrations were highest in all fertilized treatments from July to the start of the fall rainy season, when the potential for leaching increases. Soil nitrate-N levels were greatest in 2009, likely because of the high rates of N applied that year. Lower soil nitrate-N in 2010 reflected the high N uptake during the favorable growing conditions that year. Soil nitrate-N increased again in 2011, particularly in the fall. This was despite a lower N application rate and may reflect the reduced yield and N uptake by the forages during the less favorable growing season in 2011.Table 8 Soil NO3-–N (mg kg−1) at 0 to 30 cm depth, 2009–2011. SoilNO3-–N (mg kg−1)TreatmentSample DateControlUreaRaw subsurfaceRaw broadcastDigested subsurfaceDigested broadcast200912-May20gh21gh18gh19gh19gh20gh4-Jun18gh28fg24fg23 g30fg24fg6-Jul35fe80b71bc76bc68c65cd3-Aug34f86ab80b76bc86ab71bc9-Sep20gh91a66cd82ab72bc67c21-Sep20gh81b53de62cd78bc55d1-Oct17gh62cd52de50de56d45de19-Oct14gh91a35fe54de44e45de3-Nov11h54de23g30fg29fg23g19-Nov10h22gh9.8h12h11h10h30-Nov11h11gh12h12h10h12h201026-Feb12fg11fg15ef15ef13f14ef11-May13f23d20de20de20de18ef16-Jun6.1g9.2fg7.9g7.0g7.4g7.3g13-Jul10fg25cd18e18de16ef14ef17-Aug13fg61a23cd22de18ef22de30-Sep18de36b23cd28c22de19de12-Oct12fg28bc22de22de27cd24cd26-Oct7.2g12fg15ef17ef21de16ef2-Dec6.9g8.2g11fg9.5fg10fg10fg20114-Apr6.1g6.2g7.0g6.5g7.0g8.0g21-Jun7.9g18ef11fg11fg12fe12fg4-Aug8.7g21ef15fg17ef18ef18ef30-Aug12fg43cd22ef16f23ef17ef16-Sept17ef48bc39cd48bc48bc32d29-Sept17ef46c36d42cd55b36d13-Oct19ef66a45c44c50bc44c4-Nov8.4g32d28de24e30de23efLetters within a year indicate significant differences atρ=0.05. ### 3.4. Microbial Groups Microbial groups in general did not vary with treatment, but rather varied by year (Table9). The control and urea treatments varied from the other treatments most consistently for most groups, while no consistent differences were observed among the slurry treatments. By 2011, the control treatment had significantly lower bacteria and anaerobic markers than the other treatments, but similar levels of overall microbial biomass and fungi.Table 9 Soil microbial analyses from field plots in the spring, 2009–2011. 
BiomassBacteriaFungiBacteria to fungiAnaerobeMono-unsaturatedg kg−1Mole percentaMole percentratioMole percentMole percentMay 2009Control535 ab0.2460.0983.010.0910.338Urea433 c0.2460.0923.230.0920.348Raw-B538 ab0.2430.0933.180.0940.335Raw-SSD454 bc0.2370.0943.070.0910.330Digested-B473 bc0.2420.0923.180.0930.324Digested-SSD623 a0.2380.0833.450.0910.322May 2010Control610 a0.243 b0.071 abc4.18 ab0.115 ab0.328 bUrea333 b0.215 c0.074 ab3.48 c0.101 b0.322 bRaw-B401 a0.266 ab0.084 a4.04 bc0.116 ab0.357 abRaw-SSD297 b0.268 a0.066 bc4.91 a0.127 a0.414 aDigested-B258 b0.259 ab0.071 abc4.65 ab0.123 a0.398 aDigested-SSD279 b0.267 ab0.066 c4.97 a0.125 a0.406 aApril 2011Control5120.221 b0.087 ab3.11 d0.082 c0.341 bUrea4470.250 a0.078 c3.96 a0.094 ab0.380 aRaw-B4890.257 a0.093 ab3.35 bcd0.092 b0.335 bRaw-SSD4280.253 a0.095 a3.25 cd0.100 ab0.345 bDigested-B4410.258 a0.085 bc3.68 ab0.101 ab0.357 abDigested-SSD4910.255 a0.085bc3.61 abc0.102 a0.359 abLetters within a column within a year indicate significant differences atρ=0.05. No letters indicate no significant differences within that column.aMole percent = (mole substance in a mixture)/(mole mixture) %.
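As a concrete cross-check of the ANR definition in Section 2.5 (Eq. (1)), the urea entries in Table 7 can be reproduced from the annual N uptake totals in Table 6 and the seasonal applied-N totals in Table 4. The short Python sketch below is ours and purely illustrative; the function and variable names are not part of the study's analysis workflow, and the values are rounded as reported in the tables.

```python
# Illustrative check of Eq. (1): ANR% = 100 * (N uptake, treated - N uptake, control) / N applied.
# Uptake values (kg N ha-1) come from Table 6; applied-N totals (kg N ha-1) come from Table 4.

def anr_percent(uptake_treated, uptake_control, n_applied):
    """Apparent N recovery as a percentage of applied N."""
    return 100.0 * (uptake_treated - uptake_control) / n_applied

# Urea, 2010: 655 kg N uptake vs. 362 in the control, with 448 kg N applied.
print(round(anr_percent(655, 362, 448)))  # -> 65, matching Table 7

# Urea, 2011: 296 kg N uptake vs. 192 in the control, with 336 kg N applied.
print(round(anr_percent(296, 192, 336)))  # -> 31, matching Table 7
```

For the slurry treatments the same formula is applied twice, once with total N and once with NH4+–N in the denominator, and for the two clover-rich cuttings in 2011, 80% of the clover N is first subtracted from the uptake term, as described in Section 2.5.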
## 4. Discussion ### 4.1. Forage Biomass, N Uptake and ANR Forage biomass, plant N-uptake, and nitrate concentrations during the 2009–2011 growing seasons were affected by seasonal and long-term N management (a history of manure applications) that resulted in high N uptake from the control treatments. Also, favorable growing conditions in 2010 allowed for a more productive field season in this year. For this study, total harvest yield during each season was within the range of other published work where animal manures were applied to forages harvested multiple times over a season [16, 17, 41]. While other studies have shown incorporation of manure to increase yield and crop N content by reducing gaseous losses [13], we did not see an improvement in crop N content from incorporation of slurries in this system.
Forages grown in plots with broadcast-applied slurries took up the same amount of N or more N than with subsurface deposition, which may have been caused by plant-growth disturbance from the Aerway Banderator when subsurface-applying effluent. Additionally, the infiltration rate of the anaerobically digested slurry may have been rapid enough that gaseous losses in the field were not different among subsurface deposition and broadcast applications. From an agronomic perspective, the two slurry types performed as well as urea over the three growing seasons. Anaerobically digested slurry was suitable for forage production when applied at rates equal to raw dairy slurry. Moller and Stinner [8] also reported no differences in N uptake between digested and undigested slurry. How the system will respond after many years of anaerobically digested slurry application is unclear, as the quantity of organic N applied is less than that of raw dairy slurry, supplying less recalcitrant N to the pool of soil organic matter. ### 4.2. Soil Nitrate-N and Microbial Groups We found few differences between slurry treatments in seasonal soil NO3- concentrations. There was, however, significantly more nitrate-N in urea-treated plots on many dates, even though there was slightly less total N applied to the urea plots in some years. The spike in nitrate concentration in October on soils where urea was applied in place of slurries indicates a greater potential for N leaching from urea compared with the slurries. All treatments declined in NO3- concentrations to levels that were not significantly different from control treatments after the fall rains began. Lower soil nitrate-N during the growing season of 2010 compared with 2009 may be due in part to a lower amount of total nitrogen applied. Also, little rainfall during the 2009 growing season may have caused a buildup of soil nitrate in the surface layers. Higher late-season nitrate in 2011 compared with 2010 may have been the result of poorer growing conditions reducing N uptake. Postharvest soil nitrate-N is a measure of residual plant-available N subject to leaching loss, and an indicator of excess applied N and/or poor yield. The recommended timing of postharvest soil nitrate testing in forage systems that utilize animal manure as a source of fertility in the Maritime Pacific Northwest is prior to October 15 [34]. Nitrate concentrations from soil samples collected from our site in mid-October showed that all treatments except the control exceeded 30 mg NO3–N kg−1 in 2009 and 2011, with NO3–N levels highest in the urea treatment. Fall nitrate-N levels above 30 mg kg−1 are considered excessive in manured pastures, and reduced rates and adjusted timing of applications are recommended [34]. While soil nitrate concentrations decreased during the fall 2009 months, it is likely that some of this nitrate was not entirely leached from the system, but stored in the canarygrass rhizomes over winter as described by Partala et al. [18]. This is evident in the significantly higher yields and nitrogen content of forages during the early season harvest on 26 April 2010. While the focus of this study is N, dairy manure also contains high levels of P. Runoff from high-P soils can lead to eutrophication in fresh water. Soil P levels were already excessive at the start of this study, because of the history of dairy manure applications at the site, and P tended to increase in the slurry-treated plots during the study (Table 5).
The anaerobically digested slurry contained less P than the raw dairy slurry, probably because it had a lower solids content, which would lead to less P accumulation over time. Microbial groups varied with year more than treatment in these field studies. Urea treatments varied from the other treatments to the greatest extent. The raw and anaerobically digested materials did not alter the soil microbial components as determined by PLFA. Our results may partially be the result of past manure applications.
## 5. Conclusions Subsurface deposition did not increase yield or N uptake compared with surface broadcast application, possibly because the slurries were low enough in solids to infiltrate readily into the soil, and because the subsurface injectors could have disrupted plant growth. Anaerobically digested dairy slurry was shown to provide adequate soil fertility and N availability for crop uptake and forage production over the three field seasons. In the short term, anaerobically digested slurry did not significantly increase yield or N uptake compared with similar rates of raw slurry. This study indicated that soil nitrates measured to a 30 cm depth were fairly consistent across slurry treatments and application methods during each of the field seasons. Soil nitrate-N was lower in 2010 due to favorable growing conditions and lower total applied N relative to 2009. Although urea treatments had the highest apparent N recovery value, the potential for nitrate leaching was also greatest under this management. Anaerobically digested slurry did not increase soil NO3- concentrations or alter the microbial composition and provided equal forage production and similar N use efficiency when compared to undigested dairy slurry. --- *Source: 101074-2012-05-06.xml*
2012
# A Passenger-Oriented Model for Train Rescheduling on an Urban Rail Transit Line considering Train Capacity Constraint **Authors:** Wenkai Xu; Peng Zhao; Liqiao Ning **Journal:** Mathematical Problems in Engineering (2017) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2017/1010745 --- ## Abstract The major objective of this work is to present a train rescheduling model with train capacity constraint from a passenger-oriented standpoint for a subway line. The model expects to minimize the average generalized delay time (AGDT) of passengers. The generalized delay time is taken into consideration with two aspects: the delay time of alighting passengers and the penalty time of stranded passengers. Based on the abundant automatic fare collection (AFC) system records, the passenger arrival rate and the passenger alighting ratio are introduced to depict the short-term characteristics of passenger flow at each station, which can greatly reduce the computation complexity. In addition, an efficient genetic algorithm with adaptive mutation rate and elite strategy is used to solve the large-scale problem. Finally, Beijing Subway Line 13 is taken as a case study to validate the method. The results show that the proposed model does help neutralize the effect of train delay, with a 9.47% drop in the AGDT in comparison with the train-oriented model. --- ## Body ## 1. Introduction The train rescheduling problem is one of the most crucial problems in rail transit operation and management. During the course of daily operation, trains are inevitably affected by unexpected accidents or technical problems, which leads to the deviations from the original timetable as well as delays. If dispatchers can not handle it immediately, the delay may propagate to other trains, which will do great harm to the normal operation and disturb passengers’ trips seriously. Many researchers devoted themselves to studying the train rescheduling problem, which has become a research focus currently.In order to achieve the real-time and intelligent train rescheduling, a lot of studies have been carried out in proposing regulation rules, presenting rescheduling models, and designing solution algorithms. Among so many studies, most proposed models were built from a train-oriented point of view with minimizing the delay time of trains, the number of delayed trains or deviations from the original timetable, and so on [1]. Zhou and Zhong [2] studied the train timetabling problem to minimize the total train travel time for single-track railways. A branch-and-bound program with some lower and upper bound heuristics to reduce the solution space was proposed to find the solutions efficiently. D’Ariano et al. [3] modelled the scheduling problem expecting to minimize the deviation from the original timetable with an Alternative Graph model, which was first introduced by Mascis and Pacciarelli [4] for no-store job shop scheduling. They developed a branch-and-bound algorithm which contains implication rules enabling speeding up the computation. Acuna-Agost et al. [5] studied the same problem in [6] and developed an approach named SAPI. This approach was used to reduce the size of the search space of the mixed integer program for rescheduling problems in order to obtain near-optimal solutions in reasonable durations. Šemrov et al. [7] introduced a reinforcement learning method including a learning agent for train rescheduling on a single-track railway. 
The solutions can be obtained within reasonable computational time. Sato et al. [8] considered that the inconvenience of traveling by train consisted of the traveling time on board, the waiting time at platforms, and the number of transfers. They presented a MIP-based timetable rescheduling formulation to minimize further inconvenience to passengers.In addition, many researchers focus on heuristic algorithms to accelerate the speed of computation. Meng et al. [9] built a rescheduling model with minimizing the total delay time at the destination and proposed an improved particle swarm algorithm, which was proved to have real-time adjusting ability and high convergence speed. Törnquist Krasemann [10] developed a depth-first greedy algorithm to obtain good-enough schedules quickly in disturbed situations, working as a complement to the previously designed rescheduling approach in Törnquist and Persson [11], which minimized the total final delay of the traffic and the total cost when trains arrived at their final destination or the last stop considered. Dündar and Şahin [12] developed a genetic algorithm for conflict resolutions, which was evaluated against the dispatchers’ and the exact solutions. Artificial neural networks were developed to mimic the decision behavior of train dispatchers so as to reproduce dispatchers’ conflict resolutions. Kanai et al. [13] developed an algorithm seeking for minimizing passengers’ dissatisfaction. The algorithm consisted of both simulation and optimization and tabu search algorithm was used in the optimization part.To sum up, most researchers conceived the train rescheduling problem from a train-oriented viewpoint, and few works paid attention to passengers’ interests. As for this problem in an urban rail transit system, considering the actual characteristics of urban rail transit lines: being shorter in length, high passenger flow volume, and high service frequency, a train rescheduling model for an urban rail transit line should be presented from a passenger-oriented perspective rather than a train-oriented point of view. Currently, during the actual operation process, train rescheduling mainly depends on dispatchers’ dispatching orders, which are based on their experience and craftsmanship without intelligent decision support. But, with passengers’ rising requirements for the level of service (LOS) of a rail transit system, train rescheduling should be more precise and scientific, which is what this work is expected to do. The main contributions of this work are summarized as follows:(1) A train rescheduling model is proposed from a passenger-oriented viewpoint. In this model, the train capacity and stranded passengers are taken into consideration, which make the model more practicable. In addition, the prediction of stranded passengers will remind the corresponding stations to take timely measures of passenger flow control.(2) The passenger arrival rate and the passenger alighting ratio of each station are introduced to capture the different short-term passenger flow characteristics of each station [14]. 
Then, the number of arrival passengers and the number of alighting passengers at each station can be simply obtained by computation, which can greatly reduce the solution time and improve the model's applicability. (3) An efficient genetic algorithm with adaptive mutation rate and elite strategy is designed to obtain a good-enough solution of a practical problem within acceptable duration, which is a key factor for real-time application. (4) A real-world case study of Beijing Subway Line 13 is carried out to test the method proposed in this work. The results show that the performance of the passenger-oriented model is much better than the train-oriented model's.

## 2. Train Rescheduling Model

A passenger-oriented model for train rescheduling is presented in this part. For presentation simplicity, the necessary symbols and notations are listed as follows:

- $S$: the station set of an urban rail transit line, $S=\{s \mid s=1,2,\ldots,m\}$, where $m$ is the total number of stations on the line.
- $V$: the train set of an urban rail transit line, $V=\{v \mid v=1,2,\ldots,n\}$, where $n$ is the total number of trains that need to be rescheduled.
- $P_{v,s}$: the number of passengers on board when train $v$ arrives at station $s$.
- $P_{v,s}^{A}$: the number of alighting passengers during the dwell time of train $v$ at station $s$.
- $P_{v,s}^{B}$: the number of boarding passengers during the dwell time of train $v$ at station $s$.
- $P_{v,s}^{C}$: the number of arrival passengers at station $s$ during the departure headway between train $v-1$ and train $v$ at station $s$.
- $P_{v,s}^{S}$: the number of stranded passengers after train $v$ departs from station $s$.
- $T_{v,s}^{a}$: the actual arrival time of train $v$ at station $s$.
- $\bar{T}_{v,s}^{a}$: the planned arrival time of train $v$ at station $s$.
- $T_{v,s}^{d}$: the actual departure time of train $v$ at station $s$.
- $\bar{T}_{v,s}^{d}$: the planned departure time of train $v$ at station $s$.
- $T_{s,s+1}^{R}$: the minimum running time for trains from station $s$ to station $s+1$.
- $T_{s}^{\mathrm{stop}}$: the minimum dwell time for trains at station $s$.
- $C_{T}$: the capacity of an urban rail transit train.

### 2.1. Short-Term Characteristics of Passenger Flow

In an urban rail transit system, the passenger flow characteristics of a station can be captured by abundant historical AFC records. Each AFC record includes the accurate time of a passenger entering and leaving a station. As for transfer passengers, the accurate time of entering and leaving their transfer stations can be obtained by an assignment model [15, 16]. In the long term (e.g., a day), there are usually obvious changes in the passenger flow characteristics of a station (e.g., peak hour and non-peak hour). But, in the short term (e.g., an hour), a statistical method can be easily used to capture the passenger flow characteristics of a station [14]. The average time for passengers walking from turnstiles to the platform can be obtained by a practical survey. Then, the time of the passenger reaching the platform equals the time of a passenger entering a station plus the average walking time, and we can count the number of passengers who reached the platform during a period of time (e.g., 8:30 am to 9:30 am).

In order to depict the passenger flow characteristics of a station, the arrival rate $\lambda_s$ is introduced to indicate the number of passengers reaching the platform at station $s$ within one minute. Meanwhile, the alighting ratio $\theta_s$ is introduced to represent the proportion of alighting passengers ($P_{v,s}^{A}$) to passengers on board ($P_{v,s}$). With the introduction of the two parameters, the computation complexity can be reduced greatly.
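As an illustration of how $\lambda_s$ and $\theta_s$ might be derived from AFC data, the following minimal Python sketch counts platform arrivals inside a one-hour window and computes a station-level alighting ratio. The record layout (`station`, `tap_in_time` tuples), the fixed 90-second walking time, and the aggregate alighting/on-board counts are illustrative assumptions, not part of the original study.

```python
from collections import defaultdict
from datetime import timedelta

def estimate_station_parameters(afc_records, alighting_counts, onboard_counts,
                                window_start, window_end, walk_time_s=90):
    """Estimate the arrival rate (persons/min) and alighting ratio per station.

    afc_records: iterable of (station, tap_in_time) tuples (hypothetical schema).
    alighting_counts / onboard_counts: dicts mapping station -> total alighting
    passengers and total passengers on board over the same window.
    """
    window_min = (window_end - window_start).total_seconds() / 60.0
    platform_arrivals = defaultdict(int)
    for station, tap_in_time in afc_records:
        # A passenger reaches the platform after the average walking time.
        platform_time = tap_in_time + timedelta(seconds=walk_time_s)
        if window_start <= platform_time < window_end:
            platform_arrivals[station] += 1

    arrival_rate = {s: n / window_min for s, n in platform_arrivals.items()}
    alighting_ratio = {s: (alighting_counts[s] / onboard_counts[s]
                           if onboard_counts.get(s) else 0.0)
                       for s in alighting_counts}
    return arrival_rate, alighting_ratio
```

In practice the walking time would come from the field survey mentioned above, and the counts would be aggregated per direction and time band before being fed into the rescheduling model.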
### 2.2. Mathematical Relationship between Different Kinds of Passengers

In this work, passengers fall into five categories: passengers on board ($P_{v,s}$), alighting passengers ($P_{v,s}^{A}$), boarding passengers ($P_{v,s}^{B}$), arrival passengers ($P_{v,s}^{C}$), and stranded passengers ($P_{v,s}^{S}$). The mathematical expressions relating the five kinds of passengers are as follows. Equations (3) and (5) reflect the train capacity constraint, and the number of stranded passengers can be obtained by (5).

(1) Passengers on board ($P_{v,s}$):
$$P_{v,s+1}=P_{v,s}-P_{v,s}^{A}+P_{v,s}^{B}. \tag{1}$$

(2) Alighting passengers ($P_{v,s}^{A}$):
$$P_{v,s}^{A}=P_{v,s}\times\theta_{s}. \tag{2}$$

(3) Boarding passengers ($P_{v,s}^{B}$):
$$P_{v,s}^{B}=\min\left\{C_{T}-\left(P_{v,s}-P_{v,s}^{A}\right),\ P_{v,s}^{C}+P_{v-1,s}^{S}\right\}. \tag{3}$$

(4) Arrival passengers ($P_{v,s}^{C}$):
$$P_{v,s}^{C}=\lambda_{s}\times\frac{T_{v,s}^{d}-T_{v-1,s}^{d}}{60}. \tag{4}$$

(5) Stranded passengers ($P_{v,s}^{S}$):
$$P_{v,s}^{S}=\max\left\{0,\ P_{v,s}^{C}+P_{v-1,s}^{S}-\left[C_{T}-\left(P_{v,s}-P_{v,s}^{A}\right)\right]\right\}. \tag{5}$$

### 2.3. Model Constraints

The model is mainly subject to some operational requirements to ensure the safety of the operation and the feasibility of the timetable optimized by the proposed model.

#### 2.3.1. Section Running Time

Under the limitations of the traction and brake performance of trains, the length of each section, safety requirements, and so on, the actual running time of trains in each section must be no shorter than the minimum running time [17]; see (6):
$$T_{v,s+1}^{a}-T_{v,s}^{d}\ge T_{s,s+1}^{R}. \tag{6}$$

#### 2.3.2. Dwell Time

Similar to the section running time, the actual dwell time of trains at each station must be no shorter than the minimum dwell time [18]; see (7). It should be pointed out that dwell times are affected by the number of alighting and boarding passengers, which may extend them. Meanwhile, in the rescheduling process, station staff will guide passengers to alight or board a train quickly so as to shorten dwell times and recover the timetable.
$$T_{v,s}^{d}-T_{v,s}^{a}\ge T_{s}^{\mathrm{stop}}. \tag{7}$$

#### 2.3.3. Headway

All trains running on the subway line should meet the requirements of the minimum arrival and departure headways of the line, as shown in (8) and (9), where $H_{\min}$ represents the minimum headway:
$$T_{v+1,s}^{a}-T_{v,s}^{a}\ge H_{\min}, \tag{8}$$
$$T_{v+1,s}^{d}-T_{v,s}^{d}\ge H_{\min}. \tag{9}$$

#### 2.3.4. Variable

Obviously, the rescheduled timetable cannot be earlier than the planned timetable, and all variables in this practical problem must be integers, as shown in formulas (10), (11), and (12), where $\mathbb{N}$ represents the set of nonnegative integers:
$$T_{v,s}^{a}\ge\bar{T}_{v,s}^{a},\quad T_{v,s}^{a}\in\mathbb{N}, \tag{10}$$
$$T_{v,s}^{d}\ge\bar{T}_{v,s}^{d},\quad T_{v,s}^{d}\in\mathbb{N}, \tag{11}$$
$$P_{v,s},\,P_{v,s}^{A},\,P_{v,s}^{B},\,P_{v,s}^{C},\,P_{v,s}^{S}\in\mathbb{N}. \tag{12}$$
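To make the passenger-flow relations (1)-(5) and the feasibility constraints (6)-(11) concrete, here is a minimal Python sketch. The data layout (ordered lists of trains and stations, dictionaries keyed by `(train, station)` pairs, times in seconds) and the function names are assumptions for illustration only; they are not the paper's implementation.

```python
def propagate_passengers(v, dep, lam, theta, C_T, prev_stranded, stations):
    """Apply relations (1)-(5) along train v, given its departure times dep[(v, s)].

    Assumes dep also contains the preceding train v-1 (e.g., the last train
    dispatched before the rescheduling horizon), so the headway in (4) is defined.
    """
    onboard, alight, board, arrive, strand = {}, {}, {}, {}, {}
    P = 0  # the train is assumed empty before its first station
    for s in stations:
        onboard[s] = P
        alight[s] = int(P * theta[s])                                    # (2)
        arrive[s] = int(lam[s] * (dep[(v, s)] - dep[(v - 1, s)]) / 60)   # (4)
        residual = C_T - (P - alight[s])          # free space after alighting
        demand = arrive[s] + prev_stranded[s]     # newcomers plus passengers left behind
        board[s] = min(residual, demand)                                 # (3)
        strand[s] = max(0, demand - residual)                            # (5)
        P = P - alight[s] + board[s]                                     # (1)
    return onboard, alight, board, arrive, strand


def is_feasible(arr, dep, planned_arr, planned_dep, run_min, dwell_min, H_min,
                trains, stations):
    """Check constraints (6)-(11) for a candidate timetable (all times in seconds)."""
    for v in trains:
        for s in stations:
            if dep[(v, s)] - arr[(v, s)] < dwell_min[s]:                 # (7)
                return False
            if arr[(v, s)] < planned_arr[(v, s)] or dep[(v, s)] < planned_dep[(v, s)]:
                return False                                             # (10), (11)
        for s, s_next in zip(stations, stations[1:]):
            if arr[(v, s_next)] - dep[(v, s)] < run_min[s]:              # (6)
                return False
    for v, v_next in zip(trains, trains[1:]):
        for s in stations:
            if (arr[(v_next, s)] - arr[(v, s)] < H_min or
                    dep[(v_next, s)] - dep[(v, s)] < H_min):             # (8), (9)
                return False
    return True
```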
### 2.4. Train Rescheduling Objective

In most previous studies on the train rescheduling problem, the objectives tend to be designed from a train-oriented point of view. For instance, a train-oriented objective can be calculated by (13). Formula (13) and constraints (6)–(11) constitute a complete train-oriented model of train rescheduling:
$$\min \sum_{v\in V}\sum_{s\in S}\left(T_{v,s}^{a}-\bar{T}_{v,s}^{a}\right). \tag{13}$$

However, in this work, the train rescheduling problem is considered from a passenger-oriented perspective with two aspects: the delay time of alighting passengers and the penalty time of stranded passengers. The delay time of each alighting passenger equals the delay time of the train arriving at his or her destination station. The total delay time of alighting passengers can be calculated by
$$\sum_{v\in V}\sum_{s\in S}P_{v,s}^{A}\times\left(T_{v,s}^{a}-\bar{T}_{v,s}^{a}\right). \tag{14}$$

As for stranded passengers, they have to spend extra time, at least one headway, waiting for the next train. The penalty factor $T_{\mathrm{pen}}$ is introduced to depict this situation. The total penalty time of stranded passengers can be calculated by (15). As a result, the total generalized delay time of passengers equals (14) plus
$$\sum_{v\in V}\sum_{s\in S}P_{v,s}^{S}\times T_{\mathrm{pen}}. \tag{15}$$

Consequently, the passenger-oriented objective of minimizing the AGDT of passengers is presented by (16), where $\sum_{v\in V}\sum_{s\in S}P_{v,s}^{C}$ represents the total number of passengers who enter the subway line and look for service. The complete passenger-oriented model for train rescheduling is as follows:
$$\min\ \frac{\sum_{v\in V}\sum_{s\in S}\left[P_{v,s}^{A}\times\left(T_{v,s}^{a}-\bar{T}_{v,s}^{a}\right)+P_{v,s}^{S}\times T_{\mathrm{pen}}\right]}{\sum_{v\in V}\sum_{s\in S}P_{v,s}^{C}} \tag{16}$$
subject to
$$\begin{aligned}
& T_{v,s+1}^{a}-T_{v,s}^{d}\ge T_{s,s+1}^{R}, \qquad T_{v,s}^{d}-T_{v,s}^{a}\ge T_{s}^{\mathrm{stop}},\\
& T_{v+1,s}^{a}-T_{v,s}^{a}\ge H_{\min}, \qquad T_{v+1,s}^{d}-T_{v,s}^{d}\ge H_{\min},\\
& P_{v,s+1}=P_{v,s}-P_{v,s}^{A}+P_{v,s}^{B}, \qquad P_{v,s}^{A}=P_{v,s}\times\theta_{s},\\
& P_{v,s}^{B}=\min\left\{C_{T}-\left(P_{v,s}-P_{v,s}^{A}\right),\ P_{v,s}^{C}+P_{v-1,s}^{S}\right\},\\
& P_{v,s}^{C}=\lambda_{s}\times\frac{T_{v,s}^{d}-T_{v-1,s}^{d}}{60},\\
& P_{v,s}^{S}=\max\left\{0,\ P_{v,s}^{C}+P_{v-1,s}^{S}-\left[C_{T}-\left(P_{v,s}-P_{v,s}^{A}\right)\right]\right\},\\
& T_{v,s}^{a}\ge\bar{T}_{v,s}^{a},\ T_{v,s}^{a}\in\mathbb{N}, \qquad T_{v,s}^{d}\ge\bar{T}_{v,s}^{d},\ T_{v,s}^{d}\in\mathbb{N},\\
& P_{v,s},\,P_{v,s}^{A},\,P_{v,s}^{B},\,P_{v,s}^{C},\,P_{v,s}^{S}\in\mathbb{N}.
\end{aligned} \tag{17}$$
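Under the same illustrative data layout used in the sketch following Section 2.3, the objective (16) can be evaluated for a candidate timetable by chaining the passenger-flow recursion over successive trains. `propagate_passengers` refers to that earlier sketch; the wiring below is an assumption, not the authors' code.

```python
def average_generalized_delay(trains, stations, arr, dep, planned_arr,
                              lam, theta, C_T, T_pen):
    """Evaluate objective (16): the AGDT in seconds per arriving passenger."""
    total_delay, total_arrivals = 0.0, 0
    prev_stranded = {s: 0 for s in stations}  # assume nobody is stranded before the first train
    for v in trains:
        _, alight, _, arrive, strand = propagate_passengers(
            v, dep, lam, theta, C_T, prev_stranded, stations)
        for s in stations:
            total_delay += alight[s] * (arr[(v, s)] - planned_arr[(v, s)])  # term (14)
            total_delay += strand[s] * T_pen                                # term (15)
            total_arrivals += arrive[s]
        prev_stranded = strand
    return total_delay / total_arrivals if total_arrivals else 0.0
```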
## 3. Solution Algorithm

The train rescheduling problem is considered one of the most intractable problems in the operation and management of a rail transit system [19]. As the scale of the problem grows, exact algorithms usually take a very long time to output the optimal solution, which cannot meet the real-time requirement of actual operation. Fan et al. [20] compared eight different algorithms and found that simple scenarios can be managed efficiently using exact algorithms, but, for complex scenarios, heuristic algorithms such as ant colony optimization and genetic algorithms are more appropriate. In this work, an efficient genetic algorithm is designed to solve the problem.

### 3.1. Chromosome Structure

A chromosome represents a solution in the genetic algorithm. Each train's actual arrival time $T_{v,s}^{a}$ and departure time $T_{v,s}^{d}$ are chosen as genes to form the chromosome. A chromosome is divided into two parts, and each part consists of $n$ (the total number of rescheduled trains) subparts. The subparts are ordered according to the serial numbers of the trains, as shown in Figure 1.

Figure 1: Chromosome representation.

Each number in a rectangle in Figure 1 represents the serial number of a station. For example, the number in the red circle is $s$, which means that the gene in this position is the actual arrival time of train $v$ at station $s$. Similarly, the number in the pink circle is also $s$, which means that the gene in this position is the actual departure time of train $v$ at station $s$. For simplicity of calculation, all genes are encoded using a real-valued encoding method [21].
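A minimal sketch of this chromosome layout in Python, assuming the same `(train, station)`-keyed timetable dictionaries as in the earlier sketches; the helper names are hypothetical.

```python
def encode_timetable(trains, stations, arr, dep):
    """Flatten a timetable into a chromosome: the arrival part followed by the
    departure part, each grouped train by train in serial order (cf. Figure 1)."""
    arrival_part = [arr[(v, s)] for v in trains for s in stations]
    departure_part = [dep[(v, s)] for v in trains for s in stations]
    return arrival_part + departure_part


def decode_chromosome(chrom, trains, stations):
    """Recover the arr/dep dictionaries from a chromosome built by encode_timetable."""
    n, m = len(trains), len(stations)
    arr, dep = {}, {}
    for i, v in enumerate(trains):
        for j, s in enumerate(stations):
            arr[(v, s)] = chrom[i * m + j]
            dep[(v, s)] = chrom[n * m + i * m + j]
    return arr, dep
```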
### 3.2. Fitness Function

The fitness of an individual in the population indicates whether the individual is good or bad; it also determines the probability that the individual will be selected to generate new individuals. The passenger-oriented model has a minimizing objective, so the objective function, with some relatively minor modifications, is used as the fitness function; see (18), where $M$ is a sufficiently large positive integer:
$$F=M-\frac{\sum_{v\in V}\sum_{s\in S}\left[P_{v,s}^{A}\times\left(T_{v,s}^{a}-\bar{T}_{v,s}^{a}\right)+P_{v,s}^{S}\times T_{\mathrm{pen}}\right]}{\sum_{v\in V}\sum_{s\in S}P_{v,s}^{C}}. \tag{18}$$

The other main operations in the genetic algorithm are summarized as follows. Roulette-wheel selection is adopted in the selection operation, and single-point crossover is used in the crossover operation. As for the mutation operation, the value of each gene on the chromosome can change within the predetermined lower and upper bounds according to the adaptive mutation rate, which is determined by (19). When the number of iterations reaches the maximum value, the algorithm terminates and outputs the best solution found [22]:
$$R_{i}^{m}=R_{\min}^{m}+\left(R_{\max}^{m}-R_{\min}^{m}\right)\times\frac{F_{i}-F_{\min}}{F_{\max}-F_{\min}}, \tag{19}$$
where $R_{i}^{m}$ represents the mutation rate of individual $i$, $F_{i}$ represents the fitness value of individual $i$, $R_{\max}^{m}$ and $R_{\min}^{m}$ indicate the maximum and minimum mutation rates, respectively, which are determined in advance, and $F_{\max}$ and $F_{\min}$ indicate the maximum and minimum fitness values in the current population, respectively.

### 3.3. Algorithm Procedure

The detailed algorithmic steps are as follows.

**Step 1 (initialization).**
(1) Set the initial parameters: population size $N$, initial generation $g=0$, the maximum number of generations $G$, crossover rate $R_{c}$, and mutation rates $R_{\max}^{m}$ and $R_{\min}^{m}$.
(2) Input the initial data: $\bar{T}_{v,s}^{a}$, $\bar{T}_{v,s}^{d}$, $T_{s,s+1}^{R}$, $T_{s}^{\mathrm{stop}}$, $C_{T}$, $\lambda_{s}$, $\theta_{s}$, and $H_{\min}$.
(3) Input the serial number of the delayed train, the delay position, and the delay time.
(4) Generate the initial population $P_{g}$ according to the given upper and lower bounds of each variable and check whether each individual is feasible. If an individual is infeasible, delete it and generate a new individual that meets all constraints.
(5) Calculate the fitness $F_{i}$ of each individual in the initial population $P_{g}$.

**Step 2 (selection, crossover, and mutation).**
(1) Calculate the selection probability $p_{i}=F_{i}/\sum_{j=1}^{N}F_{j}$ of each individual, and use roulette-wheel selection to select individuals in $P_{g}$, according to $p_{i}$, to form the new population $NP_{g}$.
(2) Perform the crossover operation in $NP_{g}$ according to the crossover rate $R_{c}$.
(3) Perform the mutation operation in $NP_{g}$ according to the adaptive mutation rate $R_{i}^{m}$ calculated by (19).
(4) Calculate the fitness $NF_{i}$ of each individual in $NP_{g}$.
(5) Select individuals in $NP_{g}$ based on $NF_{i}$ to replace worse individuals in $P_{g}$ and form the new $P_{g}$.
(6) Calculate the objective value of (16) and the fitness $F_{i}$ of each individual in the new $P_{g}$.
(7) Elite strategy: replace the worst individual with the best individual in $P_{g}$.

**Step 3 (stop or not).**
(1) Update $g=g+1$.
(2) If $g=G$, the algorithm terminates and outputs the best solution found. Otherwise, return to Step 2(1).
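The following Python sketch shows how the fitness (18), the adaptive mutation rate (19), and the Step 1-3 loop might fit together. It is a simplified skeleton under several assumptions: `model_eval` returns the AGDT of a chromosome (e.g., via the earlier `decode_chromosome` and `average_generalized_delay` sketches), the crossover and mutation operators are supplied by the caller, and the value of `M` and the mutation-rate reference population are illustrative choices rather than the authors' settings.

```python
import random

def fitness(chrom, model_eval, M=1e6):
    """Fitness (18): a sufficiently large constant M minus the AGDT objective (16)."""
    return M - model_eval(chrom)

def adaptive_mutation_rate(F_i, F_min, F_max, R_min=0.01, R_max=0.05):
    """Adaptive mutation rate (19), clamped to [R_min, R_max]."""
    if F_max == F_min:
        return R_min
    rate = R_min + (R_max - R_min) * (F_i - F_min) / (F_max - F_min)
    return min(R_max, max(R_min, rate))

def roulette_index(fits):
    """Pick an index with probability proportional to fitness (roulette-wheel selection)."""
    r, acc = random.uniform(0, sum(fits)), 0.0
    for i, f in enumerate(fits):
        acc += f
        if acc >= r:
            return i
    return len(fits) - 1

def run_ga(init_population, model_eval, crossover, mutate, G=800):
    """Skeleton of Steps 1-3: selection, crossover, adaptive mutation, and elitism."""
    pop = init_population()                                   # Step 1
    for _ in range(G):                                        # Step 3 controls the loop length
        fits = [fitness(c, model_eval) for c in pop]
        f_min, f_max = min(fits), max(fits)
        new_pop = [pop[roulette_index(fits)] for _ in pop]    # Step 2(1)
        new_pop = crossover(new_pop)                          # Step 2(2)
        new_pop = [mutate(c, adaptive_mutation_rate(fitness(c, model_eval), f_min, f_max))
                   for c in new_pop]                          # Step 2(3)
        best = max(pop, key=lambda c: fitness(c, model_eval))
        worst = min(range(len(new_pop)), key=lambda i: fitness(new_pop[i], model_eval))
        new_pop[worst] = best                                 # Step 2(7): elite strategy
        pop = new_pop
    return max(pop, key=lambda c: fitness(c, model_eval))
```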
## 4. Case Study

### 4.1. Line Description

The performance of the passenger-oriented model for train rescheduling and the genetic algorithm is tested by a real-world case of Beijing Subway Line 13. Beijing Subway Line 13 is a semiloop line with 16 stations in total, as shown in Figure 2. The down-direction of Line 13 starts from XiZhiMen station and terminates at DongZhiMen station, and the up-direction is opposite. The total length of this line is 40.9 km, and the train in operation has a capacity of 1356 passengers.
The length and the minimum train running time of each section are given in Table 1.

Table 1: The length and the minimum train running time of each section.

| Section | Length (m) | $T_{s,s+1}^{R}$ (s) |
| --- | --- | --- |
| Xizhimen–Dazhongsi | 2839 | 215 |
| Dazhongsi–Zhichunlu | 1206 | 95 |
| Zhichunlu–Wudaokou | 1829 | 125 |
| Wudaokou–Shangdi | 4866 | 285 |
| Shangdi–Xierqi | 2538 | 185 |
| Xierqi–Longze | 3623 | 265 |
| Longze–Huilongguan | 1423 | 95 |
| Huilongguan–Huoying | 2110 | 135 |
| Huoying–Lishuiqiao | 4785 | 275 |
| Lishuiqiao–Beiyuan | 2272 | 135 |
| Beiyuan–Wangjingxi | 6720 | 385 |
| Wangjingxi–Shaoyaoju | 2152 | 135 |
| Shaoyaoju–Guangximen | 1110 | 85 |
| Guangximen–Liufang | 1135 | 85 |
| Liufang–Dongzhimen | 1769 | 125 |

Figure 2: Beijing Subway Line 13.

### 4.2. Train Delay Scenario and Optimization

According to the planned timetable of the Line 13 down-direction, the planned train diagram for trains whose departure times are between 8:30 am and 9:30 am is shown in Figure 3. There are 11 trains in operation in total, and it is assumed that the departure time of the 6th train at Huilongguan station is delayed by five minutes due to an accident.

Figure 3: The planned train diagram of Beijing Subway Line 13 (8:30 am to 9:30 am).

Based on abundant historical AFC records of Beijing Subway Line 13, the passenger arrival rate $\lambda_s$ and the passenger alighting ratio $\theta_s$ of the Line 13 down-direction are obtained by statistical methods and are listed in Table 2.

Table 2: $\lambda_s$ and $\theta_s$ of each station.

| Station | $\lambda_s$ (persons/min) | $\theta_s$ |
| --- | --- | --- |
| Xizhimen | 166 | 0 |
| Dazhongsi | 93 | 0.01 |
| Zhichunlu | 95 | 0.03 |
| Wudaokou | 132 | 0.06 |
| Shangdi | 38 | 0.13 |
| Xierqi | 130 | 0.19 |
| Longze | 50 | 0.24 |
| Huilongguan | 50 | 0.29 |
| Huoying | 38 | 0.25 |
| Lishuiqiao | 32 | 0.23 |
| Beiyuan | 12 | 0.08 |
| Wangjingxi | 9 | 0.27 |
| Shaoyaoju | 11 | 0.13 |
| Guangximen | 6 | 0.13 |
| Liufang | 3 | 0.3 |
| Dongzhimen | 0 | 1 |

Using the passenger-oriented model and the genetic algorithm proposed in this work, the practical problem is solved within 30 seconds by programming in MATLAB R2014b on a desktop computer with an Intel Pentium dual-core 3.1 GHz CPU and 8 GB RAM. Meanwhile, the problem is also solved by the train-oriented model mentioned above, using Lingo. The necessary parameters are given in Table 3. Table 4 shows the detailed solution results. Compared to the train-oriented model, there is a 9.47% decrease in the AGDT with the passenger-oriented model; the proposed model thus has a clear positive effect on passengers' generalized delay time. The convergence curve of the genetic algorithm is shown in Figure 4.

Table 3: Necessary parameters.

| Parameter | Value |
| --- | --- |
| $N$ | 40 |
| $G$ | 800 |
| $R_{c}$ | 0.8 |
| $R_{\max}^{m}$ | 0.05 |
| $R_{\min}^{m}$ | 0.01 |
| $T_{s}^{\mathrm{stop}}$ (s) | 20 |
| $C_{T}$ (persons) | 1356 |
| $H_{\min}$ (s) | 160 |
| $T_{\mathrm{pen}}$ (s) | 500 |

Table 4: Solution results.

| Model | AGDT (s) |
| --- | --- |
| The train-oriented model | 65.07 |
| The passenger-oriented model | 58.91 |
| Improvement | −9.47% |

Figure 4: The convergence curve of the genetic algorithm.
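As a rough illustration of how this scenario could be wired to the earlier sketches, the Python fragment below collects the Table 3 parameters, injects the five-minute departure delay of the 6th train at Huilongguan, and builds the evaluation function passed to the GA skeleton. The function and parameter names are hypothetical and the wiring is an assumption; the authors' actual MATLAB implementation is not reproduced here.

```python
# Parameters from Table 3 (all times in seconds).
PARAMS = dict(N=40, G=800, R_c=0.8, R_max=0.05, R_min=0.01,
              T_stop=20, C_T=1356, H_min=160, T_pen=500)

def apply_primary_delay(planned_dep, train=6, station="Huilongguan", delay_s=300):
    """Inject the disturbance: the 6th train departs Huilongguan five minutes late."""
    disturbed = dict(planned_dep)
    disturbed[(train, station)] = planned_dep[(train, station)] + delay_s
    return disturbed

def make_objective(planned_arr, lam, theta, trains, stations):
    """Build the evaluation function fed to the GA: decode a chromosome, return its AGDT."""
    def model_eval(chrom):
        arr, dep = decode_chromosome(chrom, trains, stations)
        return average_generalized_delay(trains, stations, arr, dep, planned_arr,
                                         lam, theta, PARAMS["C_T"], PARAMS["T_pen"])
    return model_eval
```

In a full run, the disturbed timetable from `apply_primary_delay` would constrain the lower bounds of the decision variables, and the closure returned by `make_objective` would be passed to `run_ga` as `model_eval`.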
### 4.3. Train Capacity and Stranded Passengers

The highlight of this work is that the train capacity is taken into consideration. With this constraint, the number of passengers on board ($P_{v,s}$) cannot be greater than the train capacity. In this experiment, the passenger-oriented model with the train capacity constraint is compared with the passenger-oriented model without it. Figure 5 shows the number of passengers on board the 6th train (the delayed train) with and without the constraint of train capacity. Obviously, without the constraint of train capacity, the number of passengers on board exceeds the capacity when the train arrives at Shangdi station and Longze station, which is inconsistent with reality.

Figure 5: The number of passengers on board the 6th train.

In addition, with the constraint of train capacity, there is a possibility that some passengers cannot board the arriving train. In this experiment, it is found that there are stranded passengers ($P_{v,s}^{S}$) at Wudaokou station and Xierqi station, and the number of stranded passengers increases with successive trains, as shown in Figure 6. Both stations have high passenger arrival rates in practice. If too many passengers are stranded in a station, unexpected incidents may occur, so effective measures must be taken to control the inflow of arriving passengers and thereby reduce the arrival rate, particularly at Xierqi station, which is a key transfer station. If control measures are taken at Xierqi station so that, for example, the arrival rate becomes 90% of the original, Figure 7 shows the resulting drop in the number of stranded passengers at Xierqi station, a 52.44% decrease on average.

Figure 6: The number of stranded passengers at Wudaokou station and Xierqi station.

Figure 7: The changes in the number of stranded passengers at Xierqi station.
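The flow-control what-if just described can be approximated with the passenger-flow sketch from Section 2: scale the arrival rate at the controlled station and recompute the stranded counts train by train. `propagate_passengers` refers to that earlier sketch; the 0.9 factor mirrors the 90% assumption above, and everything else is illustrative.

```python
def stranded_with_flow_control(trains, stations, dep, lam, theta, C_T,
                               controlled_station, factor=0.9):
    """Recompute the stranded passengers at one station after scaling its arrival rate."""
    lam_ctrl = dict(lam)
    lam_ctrl[controlled_station] *= factor      # e.g., 90% of the original arrival rate
    prev_stranded = {s: 0 for s in stations}
    stranded_per_train = []
    for v in trains:
        _, _, _, _, strand = propagate_passengers(v, dep, lam_ctrl, theta, C_T,
                                                  prev_stranded, stations)
        stranded_per_train.append(strand[controlled_station])
        prev_stranded = strand
    return stranded_per_train
```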
## 5. Conclusions

The train rescheduling problem has long been an important problem in rail transit operation and management. With passengers' rising requirements for the LOS of an urban rail transit system, a passenger-oriented model is preferable to a train-oriented model.
In this work, a passenger-oriented model with a train capacity constraint is presented to minimize the AGDT of passengers, which consists of the delay time of alighting passengers and the penalty time of stranded passengers. In order to meet the real-time requirement, an efficient genetic algorithm is proposed to solve this practical and complex problem. Finally, a case study of Beijing Subway Line 13 is carried out to verify the method proposed in this work. The results show the following: (1) Compared to the train-oriented model, the passenger-oriented model clearly reduces the generalized delay time of passengers, with a 9.47% decrease in the AGDT. (2) In comparison with the passenger-oriented model without the constraint of train capacity, the model with the train capacity constraint corresponds better to reality, since the number of passengers on board cannot exceed the train capacity. (3) With the constraint of train capacity, the number of stranded passengers can be counted by the proposed model, so that stations with an increasing number of stranded passengers can be identified. The corresponding stations are then able to take preventive measures in time.

---

*Source: 1010745-2017-04-05.xml*
--- ## Abstract The major objective of this work is to present a train rescheduling model with train capacity constraint from a passenger-oriented standpoint for a subway line. The model expects to minimize the average generalized delay time (AGDT) of passengers. The generalized delay time is taken into consideration with two aspects: the delay time of alighting passengers and the penalty time of stranded passengers. Based on the abundant automatic fare collection (AFC) system records, the passenger arrival rate and the passenger alighting ratio are introduced to depict the short-term characteristics of passenger flow at each station, which can greatly reduce the computation complexity. In addition, an efficient genetic algorithm with adaptive mutation rate and elite strategy is used to solve the large-scale problem. Finally, Beijing Subway Line 13 is taken as a case study to validate the method. The results show that the proposed model does help neutralize the effect of train delay, with a 9.47% drop in the AGDT in comparison with the train-oriented model. --- ## Body ## 1. Introduction The train rescheduling problem is one of the most crucial problems in rail transit operation and management. During the course of daily operation, trains are inevitably affected by unexpected accidents or technical problems, which leads to the deviations from the original timetable as well as delays. If dispatchers can not handle it immediately, the delay may propagate to other trains, which will do great harm to the normal operation and disturb passengers’ trips seriously. Many researchers devoted themselves to studying the train rescheduling problem, which has become a research focus currently.In order to achieve the real-time and intelligent train rescheduling, a lot of studies have been carried out in proposing regulation rules, presenting rescheduling models, and designing solution algorithms. Among so many studies, most proposed models were built from a train-oriented point of view with minimizing the delay time of trains, the number of delayed trains or deviations from the original timetable, and so on [1]. Zhou and Zhong [2] studied the train timetabling problem to minimize the total train travel time for single-track railways. A branch-and-bound program with some lower and upper bound heuristics to reduce the solution space was proposed to find the solutions efficiently. D’Ariano et al. [3] modelled the scheduling problem expecting to minimize the deviation from the original timetable with an Alternative Graph model, which was first introduced by Mascis and Pacciarelli [4] for no-store job shop scheduling. They developed a branch-and-bound algorithm which contains implication rules enabling speeding up the computation. Acuna-Agost et al. [5] studied the same problem in [6] and developed an approach named SAPI. This approach was used to reduce the size of the search space of the mixed integer program for rescheduling problems in order to obtain near-optimal solutions in reasonable durations. Šemrov et al. [7] introduced a reinforcement learning method including a learning agent for train rescheduling on a single-track railway. The solutions can be obtained within reasonable computational time. Sato et al. [8] considered that the inconvenience of traveling by train consisted of the traveling time on board, the waiting time at platforms, and the number of transfers. 
They presented a MIP-based timetable rescheduling formulation to minimize further inconvenience to passengers.In addition, many researchers focus on heuristic algorithms to accelerate the speed of computation. Meng et al. [9] built a rescheduling model with minimizing the total delay time at the destination and proposed an improved particle swarm algorithm, which was proved to have real-time adjusting ability and high convergence speed. Törnquist Krasemann [10] developed a depth-first greedy algorithm to obtain good-enough schedules quickly in disturbed situations, working as a complement to the previously designed rescheduling approach in Törnquist and Persson [11], which minimized the total final delay of the traffic and the total cost when trains arrived at their final destination or the last stop considered. Dündar and Şahin [12] developed a genetic algorithm for conflict resolutions, which was evaluated against the dispatchers’ and the exact solutions. Artificial neural networks were developed to mimic the decision behavior of train dispatchers so as to reproduce dispatchers’ conflict resolutions. Kanai et al. [13] developed an algorithm seeking for minimizing passengers’ dissatisfaction. The algorithm consisted of both simulation and optimization and tabu search algorithm was used in the optimization part.To sum up, most researchers conceived the train rescheduling problem from a train-oriented viewpoint, and few works paid attention to passengers’ interests. As for this problem in an urban rail transit system, considering the actual characteristics of urban rail transit lines: being shorter in length, high passenger flow volume, and high service frequency, a train rescheduling model for an urban rail transit line should be presented from a passenger-oriented perspective rather than a train-oriented point of view. Currently, during the actual operation process, train rescheduling mainly depends on dispatchers’ dispatching orders, which are based on their experience and craftsmanship without intelligent decision support. But, with passengers’ rising requirements for the level of service (LOS) of a rail transit system, train rescheduling should be more precise and scientific, which is what this work is expected to do. The main contributions of this work are summarized as follows:(1) A train rescheduling model is proposed from a passenger-oriented viewpoint. In this model, the train capacity and stranded passengers are taken into consideration, which make the model more practicable. In addition, the prediction of stranded passengers will remind the corresponding stations to take timely measures of passenger flow control.(2) The passenger arrival rate and the passenger alighting ratio of each station are introduced to capture the different short-term passenger flow characteristics of each station [14]. Then, the number of arrival passengers and the number of alighting passengers at each station can be simply obtained by computation, which can greatly reduce the solution time and improve the model’s applicability.(3) An efficient genetic algorithm with adaptive mutation rate and elite strategy is designed to obtain a good-enough solution of a practical problem within acceptable duration, which is a key factor for real-time application.(4) A real-world case study of Beijing Subway Line 13 is carried out to test the method proposed in this work. The results show that the performance of the passenger-oriented model is much better than the train-oriented model’s. ## 2. 
Train Rescheduling Model A passenger-oriented model for train rescheduling is presented in this part. For presentation simplicity, the necessary symbols and notations are listed as follows:S: the station set of an urban rail transit line, S=s∣s=1,2,…,m, where m is the total number of stations on the line.V: the train set of an urban rail transit line, V=v∣v=1,2,…,n, where n is the total number of trains that need to be rescheduled.Pv,s: the number of passengers on board when train v arrives at station s.Pv,sA: the number of alighting passengers during the dwell time of train v at station s.Pv,sB: the number of boarding passengers during the dwell time of train v at station s.Pv,sC: the number of arrival passengers at station s during the departure headway between train v-1 and train v at station s.Pv,sS: the number of stranded passengers after train v departing from station s.Tv,sa: the actual arrival time of train v at station s.T-v,sa: the planned arrival time of train v at station s.Tv,sd: the actual departure time of train v at station s.T-v,sd: the planned departure time of train v at station s.Ts,s+1R: the minimum running time for trains from station s to station s+1.Tsstop: the minimum dwell time for trains at station s.CT: the capacity of an urban rail transit train. ### 2.1. Short-Term Characteristics of Passenger Flow In an urban rail transit system, the passenger flow characteristics of a station can be captured by abundant historical AFC records. Each AFC record includes the accurate time of a passenger entering and leaving a station. As for transfer passengers, the accurate time of entering and leaving their transfer stations can be obtained by an assignment model [15, 16]. In the long term (e.g., a day), there are usually obvious changes in the passenger flow characteristics of a station (e.g., peak hour and non-peak hour). But, in the short term (e.g., an hour), a statistical method can be easily used to capture the passenger flow characteristics of a station [14]. The average time for passengers walking from turnstiles to the platform can be obtained by a practical survey. Then, the time of the passenger reaching the platform equals the time of a passenger entering a station plus the average walking time, and we can count the number of passengers who reached the platform during a period of time (e.g., 8:30 am to 9:30 am).In order to depict the passenger flow characteristics of a station, the arrival rateλs is introduced to indicate the number of passengers reaching the platform at station s within one minute. Meanwhile, the alighting ratio θs is introduced to represent the proportion of alighting passengers (Pv,sA) to passengers on board (Pv,s). With the introduction of the two parameters, the computation complexity can be reduced greatly. ### 2.2. Mathematical Relationship between Different Kinds of Passengers In this work, passengers fall into five categories: passenger on board (Pv,s), alighting passenger (Pv,sA), boarding passenger (Pv,sB), arrival passenger (Pv,sC), and stranded passenger (Pv,sS). The mathematical expressions of the five kinds of passengers are as follows. Equations (3) and (5) indicate the constraints of train capacity. 
The number of stranded passengers can be obtained by (5).(1) Passenger on board (Pv,s) is(1)Pv,s+1=Pv,s-Pv,sA+Pv,sB.(2) Alighting passenger (Pv,sA) is(2)Pv,sA=Pv,s×θs.(3) Boarding passenger (Pv,sB) is(3)Pv,sB=minCT-Pv,s-Pv,sA,Pv,sC+Pv-1,sS.(4) Arrival passenger (Pv,sC) is(4)Pv,sC=λs×Tv,sd-Tv-1,sd60.(5) Stranded passenger (Pv,sS) is(5)Pv,sS=max0,Pv,sC+Pv-1,sS-CT-Pv,s-Pv,sA. ### 2.3. Model Constraints The model is mainly subject to some operational requirements to ensure the safety of the operation and the feasibility of the timetable optimized by the proposed model. #### 2.3.1. Section Running Time Under the limitation of traction and brake performance of trains, the length of each section, safety requirements, and so on, the actual running time of trains in each section must be longer than the minimum running time [17]; see (6)Tv,s+1a-Tv,sd≥Ts,s+1R. #### 2.3.2. Dwell Time Similar to section running time, the actual dwell time of trains at each section must be longer than the minimum dwell time [18]; see (7). It should be pointed out that dwell times are affected by the number of alighting passengers and boarding passengers, which may extend dwell times. Meanwhile, in the rescheduling process, station staff will guide passengers alighting or boarding a train quickly to shorten dwell times and to recover the timetable.(7)Tv,sd-Tv,sa≥Tsstop. #### 2.3.3. Headway Regarding all trains running on the subway line, they should meet the requirements of the minimum arrival and departure headway of the line, as shown in (8) and (9), where Hmin represents the minimum headway.(8)Tv+1,sa-Tv,sa≥Hmin(9)Tv+1,sd-Tv,sd≥Hmin. #### 2.3.4. Variable Obviously, the rescheduled timetable cannot be earlier than the planned timetable and all variables in this practical problem must be integers, as shown in formulas (10), (11), and (12), where N represents the set of nonnegative integers.(10)Tv,sa≥T-v,sa,Tv,sa∈N(11)Tv,sd≥T-v,sd,Tv,sd∈N(12)Pv,s,Pv,sA,Pv,sB,Pv,sC,Pv,sS∈N. ### 2.4. Train Rescheduling Objective For most previous studies about train rescheduling problem, their optimal objectives tend to be designed from a train-oriented point of view. For instance, a train-oriented objective can be calculated by (13). Formula (13) and constraints (6)–(11) constitute a complete and train-oriented model of train rescheduling.(13)min∑v∈V∑s∈STv,sa-T-v,sa.However, in this work, the train rescheduling problem is considered from a passenger-oriented perspective with two aspects: the delay time of alighting passengers and the penalty time of stranded passengers. The delay time of each alighting passenger equals the delay time of the train arriving at his or her destination station. The total delay time of alighting passengers can be calculated by(14)∑v∈V∑s∈SPv,sA×Tv,sa-T-v,sa.As for stranded passengers, they have to spend extra time, at least a headway, waiting for the next train. The penalty factorTpen is introduced to depict this situation. The total penalty time of stranded passengers can be calculated by (15). As a result, the total generalized delay time of passengers equals (14) plus the following equation:(15)∑v∈V∑s∈SPv,sS×Tpen.Consequently, the passenger-oriented objective of minimizing the AGDT of passengers is presented by (16), where ∑v∈V∑s∈SPv,sC represents the total number of passengers who enter the subway line and look for service. 
The complete passenger-oriented model for train rescheduling is as follows:(16)min∑v∈V∑s∈SPv,sA×Tv,sa-T-v,sa+Pv,sS×Tpen∑v∈V∑s∈SPv,sC(17)Subject  to:Tv,s+1a-Tv,sd≥Ts,s+1RTv,sd-Tv,sa≥TsstopTv+1,sa-Tv,sa≥HminTv+1,sd-Tv,sd≥HminPv,s+1=Pv,s-Pv,sA+Pv,sBPv,sA=Pv,s×θsPv,sB=min⁡CT-Pv,s-Pv,sA,Pv,sC+Pv-1,sSPv,sC=λs×Tv,sd-Tv-1,sd60Pv,sS=max⁡0,Pv,sC+Pv-1,sS-CT-Pv,s-Pv,sATv,sa≥T-v,sa,Tv,sa∈NTv,sd≥T-v,sd,Tv,sd∈NPv,s,Pv,sA,Pv,sB,Pv,sC,Pv,sS∈N. ## 2.1. Short-Term Characteristics of Passenger Flow In an urban rail transit system, the passenger flow characteristics of a station can be captured by abundant historical AFC records. Each AFC record includes the accurate time of a passenger entering and leaving a station. As for transfer passengers, the accurate time of entering and leaving their transfer stations can be obtained by an assignment model [15, 16]. In the long term (e.g., a day), there are usually obvious changes in the passenger flow characteristics of a station (e.g., peak hour and non-peak hour). But, in the short term (e.g., an hour), a statistical method can be easily used to capture the passenger flow characteristics of a station [14]. The average time for passengers walking from turnstiles to the platform can be obtained by a practical survey. Then, the time of the passenger reaching the platform equals the time of a passenger entering a station plus the average walking time, and we can count the number of passengers who reached the platform during a period of time (e.g., 8:30 am to 9:30 am).In order to depict the passenger flow characteristics of a station, the arrival rateλs is introduced to indicate the number of passengers reaching the platform at station s within one minute. Meanwhile, the alighting ratio θs is introduced to represent the proportion of alighting passengers (Pv,sA) to passengers on board (Pv,s). With the introduction of the two parameters, the computation complexity can be reduced greatly. ## 2.2. Mathematical Relationship between Different Kinds of Passengers In this work, passengers fall into five categories: passenger on board (Pv,s), alighting passenger (Pv,sA), boarding passenger (Pv,sB), arrival passenger (Pv,sC), and stranded passenger (Pv,sS). The mathematical expressions of the five kinds of passengers are as follows. Equations (3) and (5) indicate the constraints of train capacity. The number of stranded passengers can be obtained by (5).(1) Passenger on board (Pv,s) is(1)Pv,s+1=Pv,s-Pv,sA+Pv,sB.(2) Alighting passenger (Pv,sA) is(2)Pv,sA=Pv,s×θs.(3) Boarding passenger (Pv,sB) is(3)Pv,sB=minCT-Pv,s-Pv,sA,Pv,sC+Pv-1,sS.(4) Arrival passenger (Pv,sC) is(4)Pv,sC=λs×Tv,sd-Tv-1,sd60.(5) Stranded passenger (Pv,sS) is(5)Pv,sS=max0,Pv,sC+Pv-1,sS-CT-Pv,s-Pv,sA. ## 2.3. Model Constraints The model is mainly subject to some operational requirements to ensure the safety of the operation and the feasibility of the timetable optimized by the proposed model. ### 2.3.1. Section Running Time Under the limitation of traction and brake performance of trains, the length of each section, safety requirements, and so on, the actual running time of trains in each section must be longer than the minimum running time [17]; see (6)Tv,s+1a-Tv,sd≥Ts,s+1R. ### 2.3.2. Dwell Time Similar to section running time, the actual dwell time of trains at each section must be longer than the minimum dwell time [18]; see (7). It should be pointed out that dwell times are affected by the number of alighting passengers and boarding passengers, which may extend dwell times. 
Meanwhile, in the rescheduling process, station staff will guide passengers alighting or boarding a train quickly to shorten dwell times and to recover the timetable.(7)Tv,sd-Tv,sa≥Tsstop. ### 2.3.3. Headway Regarding all trains running on the subway line, they should meet the requirements of the minimum arrival and departure headway of the line, as shown in (8) and (9), where Hmin represents the minimum headway.(8)Tv+1,sa-Tv,sa≥Hmin(9)Tv+1,sd-Tv,sd≥Hmin. ### 2.3.4. Variable Obviously, the rescheduled timetable cannot be earlier than the planned timetable and all variables in this practical problem must be integers, as shown in formulas (10), (11), and (12), where N represents the set of nonnegative integers.(10)Tv,sa≥T-v,sa,Tv,sa∈N(11)Tv,sd≥T-v,sd,Tv,sd∈N(12)Pv,s,Pv,sA,Pv,sB,Pv,sC,Pv,sS∈N. ## 2.3.1. Section Running Time Under the limitation of traction and brake performance of trains, the length of each section, safety requirements, and so on, the actual running time of trains in each section must be longer than the minimum running time [17]; see (6)Tv,s+1a-Tv,sd≥Ts,s+1R. ## 2.3.2. Dwell Time Similar to section running time, the actual dwell time of trains at each section must be longer than the minimum dwell time [18]; see (7). It should be pointed out that dwell times are affected by the number of alighting passengers and boarding passengers, which may extend dwell times. Meanwhile, in the rescheduling process, station staff will guide passengers alighting or boarding a train quickly to shorten dwell times and to recover the timetable.(7)Tv,sd-Tv,sa≥Tsstop. ## 2.3.3. Headway Regarding all trains running on the subway line, they should meet the requirements of the minimum arrival and departure headway of the line, as shown in (8) and (9), where Hmin represents the minimum headway.(8)Tv+1,sa-Tv,sa≥Hmin(9)Tv+1,sd-Tv,sd≥Hmin. ## 2.3.4. Variable Obviously, the rescheduled timetable cannot be earlier than the planned timetable and all variables in this practical problem must be integers, as shown in formulas (10), (11), and (12), where N represents the set of nonnegative integers.(10)Tv,sa≥T-v,sa,Tv,sa∈N(11)Tv,sd≥T-v,sd,Tv,sd∈N(12)Pv,s,Pv,sA,Pv,sB,Pv,sC,Pv,sS∈N. ## 2.4. Train Rescheduling Objective For most previous studies about train rescheduling problem, their optimal objectives tend to be designed from a train-oriented point of view. For instance, a train-oriented objective can be calculated by (13). Formula (13) and constraints (6)–(11) constitute a complete and train-oriented model of train rescheduling.(13)min∑v∈V∑s∈STv,sa-T-v,sa.However, in this work, the train rescheduling problem is considered from a passenger-oriented perspective with two aspects: the delay time of alighting passengers and the penalty time of stranded passengers. The delay time of each alighting passenger equals the delay time of the train arriving at his or her destination station. The total delay time of alighting passengers can be calculated by(14)∑v∈V∑s∈SPv,sA×Tv,sa-T-v,sa.As for stranded passengers, they have to spend extra time, at least a headway, waiting for the next train. The penalty factorTpen is introduced to depict this situation. The total penalty time of stranded passengers can be calculated by (15). 
As a result, the total generalized delay time of passengers equals (14) plus the following equation:(15)∑v∈V∑s∈SPv,sS×Tpen.Consequently, the passenger-oriented objective of minimizing the AGDT of passengers is presented by (16), where ∑v∈V∑s∈SPv,sC represents the total number of passengers who enter the subway line and look for service. The complete passenger-oriented model for train rescheduling is as follows:(16)min∑v∈V∑s∈SPv,sA×Tv,sa-T-v,sa+Pv,sS×Tpen∑v∈V∑s∈SPv,sC(17)Subject  to:Tv,s+1a-Tv,sd≥Ts,s+1RTv,sd-Tv,sa≥TsstopTv+1,sa-Tv,sa≥HminTv+1,sd-Tv,sd≥HminPv,s+1=Pv,s-Pv,sA+Pv,sBPv,sA=Pv,s×θsPv,sB=min⁡CT-Pv,s-Pv,sA,Pv,sC+Pv-1,sSPv,sC=λs×Tv,sd-Tv-1,sd60Pv,sS=max⁡0,Pv,sC+Pv-1,sS-CT-Pv,s-Pv,sATv,sa≥T-v,sa,Tv,sa∈NTv,sd≥T-v,sd,Tv,sd∈NPv,s,Pv,sA,Pv,sB,Pv,sC,Pv,sS∈N. ## 3. Solution Algorithm The train rescheduling problem is considered as one of the most intractable problems in the operation and management of rail transit system [19]. With the rising scale of the problem, exact algorithms usually take a very long time to output the optimal solution, which can not meet the real-time requirement for the actual operation. Fan et al. [20] compared eight different algorithms and found that simple scenarios can be managed efficiently using exact algorithms. But, for complex scenarios, heuristic algorithms are more appropriate, such as ant colony optimization and genetic algorithm. In this work, an efficient genetic algorithm is designed to solve this problem. ### 3.1. Chromosome Structure A chromosome represents a solution in the genetic algorithm. Each train’s actual arrival timeTv,sa and departure time Tv,sd are chosen as genes to form the chromosome. A chromosome is divided into two parts and each part consists of n (the total number of rescheduled trains) subparts. The subpart sequencing is according to the serial number of trains, as shown in Figure 1.Figure 1 Chromosome representation.Each number in a rectangle in Figure1 represents the serial number of a station. For example, the number in the red circle is s, which means that the gene in this position is the actual arrival time of train v at station s. Similarly, the number in the pink circle is s too, which means that the gene in this position is the actual departure time of train v at station s. For the purpose of calculation simplification, all genes are encoded by real type method [21]. ### 3.2. Fitness Function The fitness of an individual in the population represents that the individual is good or bad. Meanwhile, it determines the possibility that the individual can be selected to generate the new individual. The passenger-oriented model is a model with a minimizing objective, so the objective function with some relatively minor modifications is the fitness function; see (18), where M is a big enough positive integer.(18)F=M-∑v∈V∑s∈SPv,sA×Tv,sa-T-v,sa+Pv,sS×Tpen∑v∈V∑s∈SPv,sC.The other main operations in the genetic algorithm are summarized as follows. The method of roulette is adopted in selecting operation and the single-point crossover is used in crossover operation. As for mutation operation, the value of each gene on the chromosome can change within the determined lower and upper bound according to the adaptive mutation rate, which can be determined by (19). When the number of iterations reaches the maximum value, the algorithm is terminated and outputs the optimal solution [22].(19)Rim=Rminm+Rmaxm-Rminm×Fi-FminFmax-Fmin,where Rim represents the mutation rate of individual i. Fi represents the fitness value of individual i. 
$R_{\max}^{m}$ and $R_{\min}^{m}$ indicate the maximum and minimum mutation rates, respectively, which are determined in advance. $F_{\max}$ and $F_{\min}$ indicate the maximum and minimum fitness values in the current population, respectively.

### 3.3. Algorithm Procedure

The detailed algorithmic steps are as follows; a small code sketch of the main operators is given after this procedure.

Step 1 (initialization):
(1) Set the initial parameters: population size $N$, initial generation $g=0$, maximum number of generations $G$, crossover rate $R^{c}$, and mutation rates $R_{\max}^{m}$ and $R_{\min}^{m}$.
(2) Input the initial data: $\bar{T}_{v,s}^{a}$, $\bar{T}_{v,s}^{d}$, $T_{s,s+1}^{R}$, $T_{s}^{\text{stop}}$, $C_{T}$, $\lambda_{s}$, $\theta_{s}$, and $H_{\min}$.
(3) Input the serial number of the delayed train, the delay position, and the delay time.
(4) Generate the initial population $P_{g}$ according to the given upper and lower bounds of each variable and check whether each individual is feasible. If an individual is infeasible, delete it and generate a new individual that meets all constraints.
(5) Calculate the fitness $F_{i}$ of each individual in the initial population $P_{g}$.

Step 2 (selection, crossover, and mutation):
(1) Calculate the selection probability $p_{i}=F_{i}/\sum_{j=1}^{N}F_{j}$ of each individual, and use the roulette-wheel method to select individuals in $P_{g}$ to form the new population $NP_{g}$ according to $p_{i}$.
(2) Perform the crossover operation in $NP_{g}$ according to the crossover rate $R^{c}$.
(3) Perform the mutation operation in $NP_{g}$ according to the adaptive mutation rate $R_{i}^{m}$ calculated by (19).
(4) Calculate the fitness $NF_{i}$ of each individual in $NP_{g}$.
(5) Select individuals in $NP_{g}$ based on $NF_{i}$ to replace the worse individuals in $P_{g}$ and form the new $P_{g}$.
(6) Calculate the objective value of (16) and the fitness $F_{i}$ of each individual in the new $P_{g}$.
(7) Elite strategy: replace the worst individual with the best individual in $P_{g}$.

Step 3 (stop or not):
(1) Update $g=g+1$.
(2) If $g=G$, the algorithm terminates and outputs the optimal solution. Otherwise, return to Step 2(1).
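The following is a minimal sketch of the genetic operators named above: the fitness transformation (18), the adaptive mutation rate (19), roulette-wheel selection, and single-point crossover. It assumes chromosomes are stored as NumPy arrays of integer times; the parameter defaults (`big_m`, `r_min`, `r_max`) are illustrative, not values taken from the paper.

```python
import numpy as np

def fitness(agdt, big_m=1000.0):
    # Equation (18): F = M - AGDT, with M a sufficiently large constant.
    return big_m - agdt

def adaptive_mutation_rate(f_i, f_min, f_max, r_min=0.01, r_max=0.05):
    # Equation (19): the mutation rate varies linearly with the individual's
    # fitness between the preset minimum and maximum rates.
    if f_max == f_min:
        return r_min
    return r_min + (r_max - r_min) * (f_i - f_min) / (f_max - f_min)

def roulette_select(population, fits, rng):
    # Roulette-wheel selection: probability proportional to fitness (Step 2(1)).
    p = fits / fits.sum()
    idx = rng.choice(len(population), size=len(population), p=p)
    return population[idx]

def single_point_crossover(a, b, rate, rng):
    # Single-point crossover of two chromosomes with probability `rate`.
    if rng.random() < rate:
        cut = rng.integers(1, len(a))
        a[cut:], b[cut:] = b[cut:].copy(), a[cut:].copy()
    return a, b
```

In a full implementation, each offspring would additionally be checked against constraints (17) and repaired or regenerated if infeasible, as required in Step 1(4).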
## 4. Case Study

### 4.1. Line Description

The performance of the passenger-oriented model for train rescheduling and the genetic algorithm is tested on a real-world case of Beijing Subway Line 13. Beijing Subway Line 13 is a semiloop line with 16 stations in total, as shown in Figure 2. The down direction of Line 13 starts from Xizhimen station and terminates at Dongzhimen station, and the up direction is the opposite. The total length of this line is 40.9 km, and the trains in operation have a capacity of 1356 passengers. The length and the minimum train running time of each section are given in Table 1.
Table 1. The length and the minimum train running time of each section.

| Section | Length (m) | $T_{s,s+1}^{R}$ (s) | Section | Length (m) | $T_{s,s+1}^{R}$ (s) |
| --- | --- | --- | --- | --- | --- |
| Xizhimen–Dazhongsi | 2839 | 215 | Huoying–Lishuiqiao | 4785 | 275 |
| Dazhongsi–Zhichunlu | 1206 | 95 | Lishuiqiao–Beiyuan | 2272 | 135 |
| Zhichunlu–Wudaokou | 1829 | 125 | Beiyuan–Wangjingxi | 6720 | 385 |
| Wudaokou–Shangdi | 4866 | 285 | Wangjingxi–Shaoyaoju | 2152 | 135 |
| Shangdi–Xierqi | 2538 | 185 | Shaoyaoju–Guangximen | 1110 | 85 |
| Xierqi–Longze | 3623 | 265 | Guangximen–Liufang | 1135 | 85 |
| Longze–Huilongguan | 1423 | 95 | Liufang–Dongzhimen | 1769 | 125 |
| Huilongguan–Huoying | 2110 | 135 | | | |

Figure 2. Beijing Subway Line 13.

### 4.2. Train Delay Scenario and Optimization

According to the planned timetable of the Line 13 down direction, the planned train diagram for trains whose departure times are between 8:30 am and 9:30 am is obtained in Figure 3. There are 11 trains in operation in total, and it is assumed that the departure time of the 6th train at Huilongguan station is delayed by five minutes due to an incident.

Figure 3. The planned train diagram of Beijing Subway Line 13 (8:30 am to 9:30 am).

Based on abundant historical AFC records of Beijing Subway Line 13, the passenger arrival rate $\lambda_{s}$ and the passenger alighting ratio $\theta_{s}$ of the Line 13 down direction are obtained by statistical methods and listed in Table 2.

Table 2. $\lambda_{s}$ and $\theta_{s}$ of each station.

| Station | $\lambda_{s}$ (persons/min) | $\theta_{s}$ | Station | $\lambda_{s}$ (persons/min) | $\theta_{s}$ |
| --- | --- | --- | --- | --- | --- |
| Xizhimen | 166 | 0 | Huoying | 38 | 0.25 |
| Dazhongsi | 93 | 0.01 | Lishuiqiao | 32 | 0.23 |
| Zhichunlu | 95 | 0.03 | Beiyuan | 12 | 0.08 |
| Wudaokou | 132 | 0.06 | Wangjingxi | 9 | 0.27 |
| Shangdi | 38 | 0.13 | Shaoyaoju | 11 | 0.13 |
| Xierqi | 130 | 0.19 | Guangximen | 6 | 0.13 |
| Longze | 50 | 0.24 | Liufang | 3 | 0.3 |
| Huilongguan | 50 | 0.29 | Dongzhimen | 0 | 1 |

Using the passenger-oriented model and the genetic algorithm proposed in this work, the practical problem is solved within 30 seconds by programming in MATLAB R2014b on a desktop computer with an Intel Pentium dual-core 3.1 GHz CPU and 8 GB of RAM. Meanwhile, the problem is also solved by the train-oriented model mentioned above, using Lingo. The necessary parameters are given in Table 3. Table 4 shows the detailed solution results. Compared to the train-oriented model, the passenger-oriented model achieves a 9.47% decrease in the AGDT, clearly reducing passengers' generalized delay time. The convergence curve of the genetic algorithm is shown in Figure 4.

Table 3. Necessary parameters.

| Parameter | Value |
| --- | --- |
| $N$ | 40 |
| $G$ | 800 |
| $R^{c}$ | 0.8 |
| $R_{\max}^{m}$ | 0.05 |
| $R_{\min}^{m}$ | 0.01 |
| $T_{s}^{\text{stop}}$ (s) | 20 |
| $C_{T}$ (persons) | 1356 |
| $H_{\min}$ (s) | 160 |
| $T_{\text{pen}}$ (s) | 500 |

Table 4. Solution results.

| Model | AGDT (s) |
| --- | --- |
| The train-oriented model | 65.07 |
| The passenger-oriented model | 58.91 |
| Improvement | −9.47% |

Figure 4. The convergence curve of the genetic algorithm.

### 4.3. Train Capacity and Stranded Passengers

A highlight of this work is that the train capacity is taken into consideration. With this constraint, the number of passengers on board ($P_{v,s}$) cannot be greater than the train capacity. In this experiment, the passenger-oriented model with the train capacity constraint is compared with the passenger-oriented model without the train capacity constraint. Figure 5 shows the number of passengers on board of the 6th train (the delayed train) with and without the constraint of train capacity. Without the constraint of train capacity, the number of passengers on board exceeds the capacity when the train arrives at Shangdi station and Longze station, which is inconsistent with reality.

Figure 5. The number of passengers on board of the 6th train.

In addition, with the constraint of train capacity, there is a possibility that some passengers cannot board the arriving train.
In this experiment, stranded passengers ($P_{v,s}^{S}$) are found at Wudaokou station and Xierqi station, and the number of stranded passengers increases from train to train, as shown in Figure 6. Both stations have high passenger arrival rates in practice. When too many passengers are stranded at a station, unexpected incidents may occur, so effective measures must be taken to control the flow of arriving passengers and reduce the arrival rate, in particular at Xierqi station, which is a key transfer station in reality. If control measures are taken at Xierqi station so that, for example, the arrival rate of passengers falls to 90% of the original, Figure 7 shows the resulting drop in the number of stranded passengers at Xierqi station, with a 52.44% decrease on average.

Figure 6. The number of stranded passengers at Wudaokou station and Xierqi station.

Figure 7. The changes in the number of stranded passengers at Xierqi station.
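The passenger-flow quantities discussed above (boarding, alighting, and stranded passengers) follow directly from constraint set (17) and feed the AGDT objective (16). The following is a minimal sketch of that recursion, assuming each train enters the line empty and that the train preceding the first rescheduled train departed one minimum headway earlier; array shapes, units (seconds), and the initial-headway assumption are illustrative, not taken from the authors' code.

```python
import numpy as np

def passenger_flow_and_agdt(t_arr, t_dep, t_arr_plan, t_dep_plan,
                            lam, theta, cap=1356, t_pen=500, h_min=160):
    """t_* are (trains x stations) arrays of times in seconds; lam is persons/min."""
    n_trains, n_stations = t_arr.shape
    stranded_prev = np.zeros(n_stations)           # P^S of the previous train at each station
    prev_dep = t_dep_plan[0] - h_min               # assumed departures of the train before the horizon
    delay_sum = penalty_sum = entered_sum = 0.0

    for v in range(n_trains):
        on_board = 0.0                             # P_{v,s}, assuming the train starts empty
        for s in range(n_stations):
            alight = on_board * theta[s]                            # P^A = P * theta
            arrive = lam[s] * (t_dep[v, s] - prev_dep[s]) / 60.0    # P^C: arrivals since last departure
            room = cap - (on_board - alight)                        # remaining capacity C_T - (P - P^A)
            board = min(room, arrive + stranded_prev[s])            # P^B
            stranded = max(0.0, arrive + stranded_prev[s] - room)   # P^S
            on_board = on_board - alight + board                    # P_{v,s+1}

            delay_sum += alight * (t_arr[v, s] - t_arr_plan[v, s])  # numerator term of (16)
            penalty_sum += stranded * t_pen
            entered_sum += arrive
            stranded_prev[s] = stranded
        prev_dep = t_dep[v].astype(float)

    return (delay_sum + penalty_sum) / max(entered_sum, 1.0)        # AGDT, equation (16)
```

Evaluating this function on a candidate rescheduled timetable gives the objective value used by the genetic algorithm's fitness (18).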
## 5. Conclusions

The train rescheduling problem remains a central problem in rail transit operation and management. With passengers' rising requirements for the LOS of an urban rail transit system, a passenger-oriented model is preferable to a train-oriented model. In this work, a passenger-oriented model with a train capacity constraint is presented to minimize the AGDT of passengers, which consists of the delay time of alighting passengers and the penalty time of stranded passengers. To meet the real-time requirement, an efficient genetic algorithm is proposed to solve this practical and complex problem. Finally, a case study of Beijing Subway Line 13 is carried out to verify the method proposed in this work.
The results show the following:

(1) Compared to the train-oriented model, the passenger-oriented model clearly reduces the generalized delay time of passengers, with a 9.47% decrease in the AGDT.

(2) Compared with the passenger-oriented model without the train capacity constraint, the model with the capacity constraint corresponds better to reality, as the number of passengers on board cannot exceed the train capacity.

(3) With the train capacity constraint, the number of stranded passengers can be counted by the proposed model, so stations with a growing number of stranded passengers can be detected. The corresponding stations are then able to take preventive measures in time.

---

*Source: 1010745-2017-04-05.xml*
2017
# Scene-Specialized Multitarget Detector with an SMC-PHD Filter and a YOLO Network **Authors:** Qianli Liu; Yibing Li; Qianhui Dong; Fang Ye **Journal:** Computational Intelligence and Neuroscience (2022) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2022/1010767 --- ## Abstract You only look once (YOLO) is one of the most efficient target detection networks. However, the performance of the YOLO network decreases significantly when the variation between the training data and the real data is large. To automatically customize the YOLO network, we suggest a novel transfer learning algorithm with the sequential Monte Carlo probability hypothesis density (SMC-PHD) filter and Gaussian mixture probability hypothesis density (GM-PHD) filter. The proposed framework can automatically customize the YOLO framework with unlabelled target sequences. The frames of the unlabelled target sequences are automatically labelled. The detection probability and clutter density of the SMC-PHD filter and GM-PHD are applied to retrain the YOLO network for occluded targets and clutter. A novel likelihood density with the confidence probability of the YOLO detector and visual context indications is implemented to choose target samples. A simple resampling strategy is proposed for SMC-PHD YOLO to address the weight degeneracy problem. Experiments with different datasets indicate that the proposed framework achieves positive outcomes relative to state-of-the-art frameworks. --- ## Body ## 1. Introduction Learning-based detection algorithms have proven important in several subject areas, including smart surveillance systems [1], wireless sensors [2, 3], and secure transportation systems [4]. Over the past several years, convolutional neural networks (CNNs) have achieved excellent results in multiple computer vision assignments. You only look once (YOLO) is an effective visual detection method [5]. Compared with other detection networks, the YOLO network can predict class probabilities and bounding boxes in an assessment directly from the input frame. YOLO detectors, however, are taught with annotated datasets and utilized to attain the highest variability of the target. The distribution of the target captured by the camera may not be a subset of the initial learning set when these detectors are applied to a specific scene, such as in the case of a closed-circuit television (CCTV) camera. Therefore, the resulting Generic YOLO detector may not function effectively, especially for a limited amount of training data [6].To address this problem, transfer learning with cross-domain adaptation is proposed. A specific training dataset is needed to generate a specific detector. Normally, these positive samples of the specific training dataset are manually selected from the target dataset. However, a large amount of labelled data is needed to tune the detector in each frame, and labelling is a labor-intensive task. A typical solution for reducing the collection time is to automatically provide the sample labels with the target frame. Labelled samples are iteratively collected from the unlabelled sequence and added to the training dataset [7].We propose a novel transfer learning method with a probability hypothesis density (PHD) filter, which can automatically retrain a YOLO network for a special object. The scene-specific detector is generated with a Generic YOLO detector trained by labelled frames and sequences without labelled information. 
The parameters of the YOLO detector are estimated by an iterative process. After automatic and iterative training, the final specialized YOLO detector is produced and can run without the SMC-PHD filter. Figure1 illustrates the structure of our method.Figure 1 YOLO network. The red line shows the grids of images, and the red box shows the bounding boxes. The pattern-filled boxes show the grids with high probabilities.Although improving the YOLO with the SMC method has been employed for transfer learning [8], the detection probability and clutter density are not considered in the target sequence. In the updated step of our proposed method, the occluded targets are selected and collected as positive samples for training. The primary benefit of our method is that the recognition model can learn the appearance of occluded targets and clutter. As shown in the experimental results in Section 4, our proposed SMC-PHD YOLO can detect some occluded speakers with the SMC-PHD filter-based occlusion strategy, while the SMC Faster region-based CNN (R-CNN) [8] cannot detect the occluded targets. In addition, when positive samples are collected, some false samples (clutter) may be added to the positive training dataset. The performance of the SMC Faster R-CNN [8] would be affected by the clutter. When there is clutter in the training dataset, the SMC Faster R-CNN produces false detection. Based on the clutter density, this clutter would be assigned a low weight, and our proposed method could disregard false samples. Our proposed PHD YOLO network has four main contributions:(i) To address the bias between the training dataset and target set, we propose a PHD based transfer learning method for YOLO. For nonlinear tasks, a scene-specialized multitarget detector, SMC-PHD YOLO, is proposed. For linear systems and Gaussian noise tasks, we extend our method to GM-PHD YOLO to eliminate concerns about SMC dependence.(ii) In SMC-PHD YOLO, we show that the detection probability and clutter density of the SMC-PHD filter improve the performance of the retrained YOLO networks for the occluded targets and multiscale targets. When the image quality of the target scenes is unsatisfactory, even with noise, the specialized YOLO network can still detect the target with the posterior density.(iii) A novel likelihood is proposed to verify the selected samples in PHD YOLO. To collect positive samples for training, the confidence probability of the YOLO detector and visual context indications are applied.(iv) For the weight degeneracy problem of SMC YOLO, we also propose a novel and simple resampling strategy that can collect samples from the target sequence based on their weights, and the proposed distribution is assumed to be the target distribution. With the detection distribution, the strategy can function effectively even when a small number of samples is employed.The remainder of this document is structured as follows: Section2 introduces the current approach applied in this sector and offers details regarding the benefits of our proposed method over other specialization methods. Section 3 describes our proposed strategy in detail. Section 4 details the configuration of the simulation and presents experimental outcomes, and concluding comments are provided in Section 5. We adhere to the convention that scale variables, such as confidence, are presented in lowercase italics, e.g., f. 
Symbols for vector-formed states and their densities are shown in lowercase bold italics, e.g., x, and multitarget states are represented by uppercase bold italics, e.g., X. Uppercase nonbold letters represent polynomials. Symbols for matrices, such as the transition matrix, are shown in uppercase bold letters, e.g., F. ## 2. Background ### 2.1. Specialization Frameworks If the distribution of the training samples is different from that of target scenes, then a traditional visual detector may not function effectively [9]. To address this problem, specialization frameworks are utilized to automatically create scene-specific detectors for a target scene. Transfer learning algorithms based on state-of-the-art theories use the annotated model and expertise gained through prior assignments. There are three main types of transfer learning methods [10]. First, by changing the parameters of the source learning model, the model is improved in a target domain [11, 12]. Second, the variation between the source and target distributions is decreased, and the source learning model is adapted to the target domain [13, 14]. Third, the training samples are manually or automatically chosen, and the model is retrained with a subset of selected samples [15]. We focus on the third category because it can automatically label the selected samples and the training parameters remain unchanged.However, the new training dataset may contain some incorrectly labelled samples because the labels of the samples are not manually verified. With this type of dataset, the accuracy of the detection framework may decrease. To address this problem, various contextual indications, such as the visual appearance of objects, pedestrian movement, road model, size, and place, are used to verify favourable samples for retraining the training dataset; however, this method is sensitive to occlusion [16]. Moreover, some techniques may only use samples from the target domain and waste helpful samples [17]. Htike and Hogg employed a background subtraction algorithm to train a particular detector [9] to select the target samples from the source and target datasets. To automatically label target information, tracklet chains are utilized to link the proposed samples to tracklets [15] predicted by an appearance-target detector. However, for each target scene, this framework, which includes many manual parameters and thresholds, may affect the specialization performance. Alternatively, Maâmatou et al. [10] collected fresh samples. To train a fresh dedicated retrained sensor, an SMC transfer learning method was employed to create a new dataset [8]. ### 2.2. YOLO Network In this work, we used the YOLO (V3) network [5] since it passes the image only once into a fully CNN (FCNN), which enables it to achieve real-time performance. YOLO (V3) was developed based on YOLO [18] and YOLO (V2) [19]. The YOLO network considers the detection problem as a regression problem. Therefore, the network directly generates a bounding box for each class via regression without any proposal region, which decreases the computational cost compared to Faster R-CNN.The YOLO detection model is shown in Figure1, where the network divides each input image of the training set into S×S grids. When the grid is filled by the centre of the target ground truth, the grid is used to detect the object. For each grid, several bounding boxes and their confidence scores are predicted. 
The confidence $f_{s}$ is defined as

$$(1)\quad f_{s} = p_{r} \times \mathrm{IoU}_{\text{pred}}^{\text{truth}}, \quad p_{r} \in \{0, 1\}.$$

If the target is in the grid, $p_{r}=1$; otherwise, $p_{r}=0$. $\mathrm{IoU}_{\text{pred}}^{\text{truth}}$ (intersection over union of the prediction and the ground truth) expresses the coincidence between the predicted bounding box and the reference bounding box, which indicates whether the grid contains targets. If several bounding boxes detect the same target, then nonmaximum suppression (NMS) is applied to select the best bounding box.

YOLO has a lower computational cost than Faster R-CNN; however, it makes more errors. To address this problem, YOLO adopts the "anchor" mechanism of Faster R-CNN and uses k-means clustering to generate suitable prior bounding boxes. The adoption of the anchor boxes decreases the mean average precision (mAP). In addition, unlike YOLO, YOLO-V3 uses batch normalization, multiscale prediction, a high-resolution classifier, dimension clusters, direct location prediction, fine-grained features, multiscale training, and other methods that greatly improve the detection accuracy.
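The following is a small illustrative helper for equation (1): the confidence of a predicted box is the objectness indicator times its IoU with the ground-truth box. The (x1, y1, x2, y2) box format is an assumption made for this sketch, not something mandated by the paper.

```python
def iou(box_a, box_b):
    # Intersection over union of two axis-aligned boxes (x1, y1, x2, y2).
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def confidence(pred_box, truth_box, target_in_grid):
    # Equation (1): f_s = p_r * IoU, with p_r = 1 only when the grid holds a target.
    p_r = 1.0 if target_in_grid else 0.0
    return p_r * iou(pred_box, truth_box)
```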
### 2.3. Random Finite Set and PHD Filters

In this subsection, we discuss the random finite set and PHD filters for scene-specialized transfer learning. The probability hypothesis density and the random finite set were proposed for multitarget tracking [20–22]. The random finite set is a flexible framework that can be combined with any object detector to generate positional and dimensional information on objects of interest. Maggio et al. used detectors such as background subtraction, AdaBoost classifiers, and a statistical change detector to track objects with a random finite set (RFS) [23, 24]. For handling occlusion problems during tracking, Kim et al. proposed the labelled RFS [25]. As the RFS is a computationally expensive approximation of the multidistribution Bayes filter, the PHD is used as the first-order moment of the RFS, which is a set of random variables (or vectors) with random cardinality [20]. An alternative derivation of the PHD filter based on classical point process theory was given in [26]. In multitarget research, the Gaussian mixture PHD (GM-PHD) filter [27] and the SMC-PHD filter [28] are widely utilized. The GM-PHD filter is a closed-form solution, as it assumes that the model is linear and Gaussian. By limiting the number of considered partitions and possible alternatives, Granstrom et al. proposed a GM-PHD filter for tracking extended targets [29]. Since different objects have different levels of clutter, an N-type GM-PHD filter was proposed for real video sequences by integrating object detector information into the filter for two scenarios [30]. However, the accuracy may decrease for nonlinear problems. To address nonlinear problems, the SMC-PHD filter was proposed based on the Monte Carlo method. With the weights of the samples (particles), the SMC-PHD filter can track a varying number of unknown targets.

The PHD filter is defined through the intensity $\psi_{k}$, which is applied to estimate the number of targets. The PHD filter involves a prediction step and an update step that recursively propagate the intensity function. The PHD prediction step is defined as

$$(2)\quad \psi_{k|k-1}(x_{k}) = \xi_{k}(x_{k}) + \int \phi_{k|k-1}(x_{k} \mid x_{k-1})\,\psi_{k-1}(x_{k-1})\,dx_{k-1},$$

where $x$ is the target bounding box state and $\xi_{k}(x_{k})$ is the intensity of the birth RFS. $\phi_{k|k-1}(x_{k} \mid x_{k-1})$ is the analogue of the state transition probability,

$$(3)\quad \phi_{k|k-1}(x_{k} \mid x_{k-1}) = p_{S,k}(x_{k-1})\,f_{k|k-1}(x_{k} \mid x_{k-1}) + \beta_{k|k-1}(x_{k} \mid x_{k-1}),$$

where $p_{S,k}(x_{k-1})$ is the survival probability and $f_{k|k-1}(x_{k} \mid x_{k-1})$ is the transition density. $\beta_{k|k-1}(x_{k} \mid x_{k-1})$ is the intensity function of the spawn RFS with the previous state $x_{k-1}$. The PHD update equation is given as

$$(4)\quad \psi_{k}(x_{k}) = \bigl[1 - p_{D,k}(x_{k})\bigr]\psi_{k|k-1}(x_{k}) + \sum_{z_{k} \in Z_{k}} \frac{p_{D,k}(x_{k})\,h_{k}(z_{k} \mid x_{k})\,\psi_{k|k-1}(x_{k})}{\kappa_{k}(z_{k}) + \int p_{D,k}(x_{k})\,h_{k}(z_{k} \mid x_{k})\,\psi_{k|k-1}(x_{k})\,dx_{k}},$$

where $h_{k}(z_{k} \mid x_{k})$ is the likelihood defining the probability of $z_{k}$ given $x_{k}$ and $p_{D,k}(x_{k})$ is the detection probability. The intensity of the clutter RFS $C_{k}$ is given as $\kappa_{k}(z_{k}) = \gamma u(z_{k})$, where $\gamma$ is the average number of Poisson clutter points per scan and $u(z_{k})$ is the probability distribution of each clutter point. The PHD recursion involves multiple integrals in equations (2) and (4), which have no closed-form solution in general. To address this issue, the SMC-PHD filter has been proposed and widely utilized [28]. In the SMC-PHD filter, at time $k-1$, the target PHD $\psi_{k-1}(x_{k-1})$ is represented by a set of particles $\{x_{k-1}^{i}, \omega_{k-1}^{i}\}_{i=1}^{n_{k-1}}$, where $n_{k-1}$ is the number of particles at $k-1$. To the best of our knowledge, this article is the first study to use the PHD filter to train a scene-specialized, multitarget detector. As the number of targets is unknown in our unlabelled dataset and the sample collection is nonlinear and non-Gaussian, the SMC-PHD filter is applied to collect the unlabelled training data and customize the YOLO network.
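As background intuition, the particle form of the PHD update (4) simply rescales each predicted particle weight by a missed-detection term plus a sum of detection terms over the current measurements. The sketch below illustrates this numerically; the Gaussian likelihood, `p_d`, and `kappa` values are illustrative placeholders rather than the paper's tuned parameters.

```python
import numpy as np

def phd_update(weights, particles, detections, p_d=0.9, kappa=1e-3, sigma=20.0):
    """weights: (n,), particles: (n, d) states, detections: (m, d) measurements."""
    # h_k(z | x) for every particle/detection pair (a simple isotropic Gaussian here).
    likelihood = np.array([
        [np.exp(-np.sum((p - z) ** 2) / (2 * sigma ** 2)) for z in detections]
        for p in particles
    ])
    missed = (1.0 - p_d) * weights                 # (1 - p_D) * predicted weight
    detected = np.zeros_like(weights)
    for j in range(len(detections)):
        num = p_d * likelihood[:, j] * weights
        detected += num / (kappa + num.sum())      # kappa(z) plus the particle sum C_k(z)
    new_weights = missed + detected
    return new_weights, new_weights.sum()          # the weight sum estimates the number of targets
```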
## 3. Proposed Framework

This section introduces our proposed framework, which customizes the YOLO model based on the PHD filter. The PHD filter is used to label the targets in unlabelled videos based on the YOLO output. The positive samples estimated by the PHD filter are used to build a new custom dataset, and the YOLO network is fine-tuned on this custom dataset, which may contain occluded targets and targets of different styles. Since the number of unlabelled videos is large, the bias between the training dataset and the real data decreases. Compared to state-of-the-art methods, our proposed framework is not sensitive to occlusion or target shape. The overall framework of the proposed method is shown in Figure 2.

Figure 2. Overall framework of the proposed method. The framework input is a generic, fine-tuned YOLO detector. A visual sequence is provided to the scheme without manual labelling. To customize the YOLO network, an iterative method automatically estimates both parameters.

To be more specific, assume that a Generic YOLO network $Y_{0}$ is trained with generic datasets, such as Common Objects in Context (COCO) [31]. For the target sequence, unlabelled frames are represented as $\{I_{k}\}_{k=1}^{n_{I}}$, where $k$ is the index of the frame. The detection output of $Y_{0}$ over the sequence is $\{Z_{k}\}_{k=1}^{n_{I}}$. $Z_{k} = \{z_{k}^{r}\}_{r=1}^{m_{k}}$ is the detection set at frame $k$, where $z_{k}^{r}$ is the bounding box state of a detected target, $r$ is the index of the detected target, and $m_{k}$ is the number of detected targets.
Furthermore, the PHD filter updates $\{Z_{k}\}_{k=1}^{n_{I}}$ to the estimated target states $\{X_{k}\}_{k=1}^{n_{I}}$. $X_{k} = \{x_{k}^{j}\}_{j=1}^{S_{k}}$ is the estimated target set, where $S_{k}$ is the number of estimated targets at frame $k$ and $j$ is the index of the estimated targets. Note that $m_{k}$ is not necessarily equal to $S_{k}$: the PHD filter removes some clutter from $\{Z_{k}\}_{k=1}^{n_{I}}$ and adds some missed targets. The $n_{I}$ images with the estimated target bounding box sets $\{X_{k}\}_{k=1}^{n_{I}}$ are applied to fine-tune the YOLO network. The fine-tuned YOLO is referred to as $Y_{t}$, where $t$ is the fine-tuning iteration. The training pipeline of the PHD YOLO detector is shown in Figure 3.

Figure 3. The training pipeline of the PHD YOLO detector.

The challenge is how to select the samples with the SMC-PHD filter. In this section, the iterative process is divided into three steps: prediction, updating, and resampling. In the following subsections, the details of the three primary steps are outlined. Since the SMC-PHD filter is more robust than the GM-PHD filter in the tracking task, PHD YOLO is mainly implemented with an SMC-PHD filter. To extend our proposed method to linear systems, GM-PHD YOLO is briefly discussed at the end of this section.

### 3.1. Prediction Step

To build the custom dataset $\{X_{k}\}_{k=1}^{n_{I}}$, several particles are applied. At frame $k-1$, particles are represented as $\{x_{k-1}^{i}, \omega_{k-1}^{i}\}$, where $\omega_{k-1}^{i}$ is the particle weight. Our work considers only two kinds of particles: survival particles and birth particles. The spawn particles of the SMC-PHD filter are disregarded. For the $n_{k-1}$ survival particles, the particle state is calculated by the transition function $F$:

$$(5)\quad x_{k|k-1}^{i} = F x_{k-1}^{i}.$$

For the $b_{k}$ birth particles, the particle state is normally set in the tracking area. The particle weight is calculated by

$$(6)\quad \omega_{k|k-1}^{i} =
\begin{cases}
\dfrac{\phi_{k|k-1}\left(x_{k}^{i}, x_{k-1}^{i}\right)\omega_{k-1}^{i}}{q_{k}\left(x_{k}^{i} \mid x_{k-1}^{i}, Z_{k}\right)}, & i = 1, \ldots, n_{k-1},\\[2ex]
\dfrac{\xi_{k}\left(x_{k}^{i}\right)}{b_{k}\,p_{k}\left(x_{k}^{i} \mid Z_{k}\right)}, & i = n_{k-1}+1, \ldots, n_{k-1}+b_{k}.
\end{cases}$$

However, if a new birth particle is located near the survival particles, then one target is repeatedly estimated by survival particles and birth particles. Thus, the estimated number of targets would exceed the ground truth. To address this problem, we propose a novel birth density function based on the target state history:

$$(7)\quad \xi_{k}\left(x_{k}^{i}\right) = \max\Bigl(p_{b},\; p_{s}\max_{\omega_{k}^{j}\in\Omega_{k}} \delta_{x_{k}^{i}}\left(x_{k}^{j}\right)\,\mathbf{1}_{X_{k-1}}\left(x_{k}^{j}\right)\,\omega_{k}^{j}\Bigr),$$

where

$$(8)\quad \delta_{x_{k}^{i}}\left(x_{k}^{j}\right) =
\begin{cases}
1, & \text{if } x_{k}^{i} = x_{k}^{j},\\
0, & \text{otherwise},
\end{cases}
\qquad
\mathbf{1}_{X_{k-1}}\left(x_{k}^{j}\right) =
\begin{cases}
1, & \text{if } x_{k}^{j} \subset X_{k-1},\\
0, & \text{otherwise},
\end{cases}$$

where $p_{s}$ is the survival probability and $p_{b}$ is the birth probability. $p_{s}$ represents the probability that the sample $x_{k}^{i}$ still exists. When $p_{s}=1$, a sample still exists in the new dataset. When $p_{s}=0$, samples are resampled, and samples in different iterations are independent.
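The following is a minimal sketch of this prediction step: survival particles are propagated by the linear transition $F$ and keep weights scaled by the survival probability, while birth particles are spread over the frame with a uniform share of an assumed birth mass. `F`, `p_s`, `n_birth`, and the birth mass of 0.1 are illustrative assumptions; the paper's history-based birth density (7) would replace the uniform birth weights.

```python
import numpy as np

def predict_particles(particles, weights, F, frame_size, n_birth=50,
                      p_s=0.99, rng=np.random.default_rng()):
    """particles: (n, 4) states (u, v, w, h); weights: (n,); F: (4, 4) transition."""
    # Survival particles: x_{k|k-1} = F x_{k-1}, weight scaled by p_s (cf. (5)-(6)).
    survived = particles @ F.T
    w_survived = p_s * weights

    # Birth particles: drawn uniformly over the tracking area (the image plane).
    w, h = frame_size
    born = np.column_stack([
        rng.uniform(0, w, n_birth),       # box centre u
        rng.uniform(0, h, n_birth),       # box centre v
        rng.uniform(10, w / 2, n_birth),  # box width
        rng.uniform(10, h / 2, n_birth),  # box height
    ])
    w_born = np.full(n_birth, 0.1 / n_birth)   # assumed total birth mass of 0.1

    return np.vstack([survived, born]), np.concatenate([w_survived, w_born])
```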
### 3.2. Update Step

In the update step, the particle states are further updated according to the output of YOLO, $\{Z_{k}\}_{k=1}^{n_{I}}$. The update step of the PHD recursion is approximated by updating the weights of the predicted particles once the likelihood $h_{k}(z_{k} \mid x_{k}^{i})$ is obtained. The predicted weights are updated as

$$(9)\quad \omega_{k}^{i} = \left[1 - p_{D}\left(x_{k}^{i}\right) + \sum_{z_{k}\in Z_{k}} \frac{p_{D}\left(x_{k}^{i}\right) h_{k}\left(z_{k} \mid x_{k}^{i}\right)}{\kappa_{k}\left(z_{k}\right) + C_{k}\left(z_{k}\right)}\right]\omega_{k|k-1}^{i},$$

where

$$(10)\quad C_{k}\left(z_{k}\right) = \sum_{i=1}^{n_{k-1}+b_{k}} p_{D,k}\left(x_{k}^{i}\right) h_{k}\left(z_{k} \mid x_{k}^{i}\right)\omega_{k|k-1}^{i}.$$

The detection probability $p_{D}(x_{k}^{i})$ is abbreviated as $p_{D,k}^{i}$ in the following. The number of targets is estimated as the sum of the weights, $S_{k} = \sum_{i=1}^{n_{k}}\omega_{k}^{i}$.

To ignore the clutter, the clutter density function $\kappa_{k}(\cdot)$ is applied, and the value of $\kappa_{k}(z_{k})$ is varied for the different detections $z_{k}$. $\kappa_{k}(z_{k})$ indicates the level of clutter and is a set value. When $z_{k}^{r}$ has a high probability of being clutter, $\kappa_{k}(z_{k})$ takes a high value. If the detection is not clutter, then $\kappa_{k}(z_{k})$ is given as 0. Normally, $\kappa_{k}(z_{k})$ is set as a constant or estimated by the Beta-Gaussian mixture model [32].

$p_{D}(x_{k}^{i})$ is the detection probability, which is chosen based on the sample and can be estimated by the Gaussian mixture model [32]. If the sample is occluded, then $p_{D}(x_{k}^{i})$ has a low value (near 0). Therefore, the occluded samples have high weights and are selected for retraining the YOLO network. If the sample is not occluded, then $p_{D}(x_{k}^{i})$ is equal to 1, and the value $h_{k}^{i,r}$ is not changed.

### 3.3. Likelihood Function

In addition to the detection probability and clutter density, the likelihood density determines whether a sample is selected for retraining. Samples with high weights are employed to retrain the YOLO network, while samples with low weights are disregarded. The likelihood density represents the relationship between the detections of the YOLO network and the samples. Therefore, we define the likelihood as

$$(11)\quad h_{k} = f_{s}\max\left(f_{x}, \beta_{k}\right),$$

where

$$(12)\quad \beta_{k} = \beta_{0}^{k}.$$

During the iterative process, $\beta_{k}$ decreases. When the selected sample applied to retrain the YOLO detector has a high associated score, the sample likelihood is maximized. The confidence score $f_{s}$ is provided by the YOLO network output layer. When $f_{s}=0$, the weight of the sample is set to 0, and the sample is removed from the specialized dataset. $f_{x}$ indicates whether the sample was detected by the YOLO network. For the visual cue, we calculate the Euclidean distance between the selected sample $x_{k}^{i}$ and the previous sample set $X_{k-1}$:

$$(13)\quad f_{x} = e^{\sum_{x_{k}^{i}\in X_{k}} D_{k}^{i,r}/\alpha_{k}^{i}},$$

where

$$(14)\quad D_{k}^{i,r} = \sqrt{\left(u_{k}^{r}-u_{k}^{i}\right)^{2}+\left(v_{k}^{r}-v_{k}^{i}\right)^{2}+\left(w_{k}^{r}-w_{k}^{i}\right)^{2}+\left(h_{k}^{r}-h_{k}^{i}\right)^{2}},$$

where $(u_{k}^{r}, v_{k}^{r}, w_{k}^{r}, h_{k}^{r})$ is the state of the detection $z_{k}^{r}$. To select high-score samples $x_{k}^{i}$, we use a dynamic threshold:

$$(15)\quad \alpha_{k}^{i} =
\begin{cases}
\max_{x^{j}\in X_{t-1}} \delta_{x_{k}^{i}}\left(x^{j}\right)\,\mathbf{1}_{X_{k-1}}\left(x^{j}\right)\,\delta_{y^{j}}\left(y_{k}^{i}\right)s^{j}, & \text{if } k \neq 0,\\
\alpha_{0}, & \text{if } k = 0,
\end{cases}$$

where $y^{j}$ and $y_{k}^{i}$ are the target class labels calculated by $Y_{t-1}$, $s^{j}$ is the associated score, and $\alpha_{0}$ is the initial threshold.

### 3.4. Resampling Step

The SMC-PHD filter is utilized to construct a new, specific dataset for retraining according to the resampling approach, in which resamples from the weighted dataset are included in the generated dataset $\{x_{k}^{i}\}_{i=1}^{n_{k}}$. However, the traditional SMC-PHD suffers from the weight degeneracy problem, and the number of samples decreases during the retraining step. To generate a new, unweighted dataset with the same number of samples as the weighted dataset, a sampling strategy is employed. The effective sample size (ESS) of $\{x_{k}^{i}, \omega_{k}^{i}\}_{i=1}^{n_{k}}$ is calculated as

$$(16)\quad \mathrm{ESS} = \frac{\left(\sum_{i=1}^{n_{k}}\omega_{k}^{i}\right)^{2}}{\sum_{i=1}^{n_{k}}\left(\omega_{k}^{i}\right)^{2}}.$$

When the ESS is greater than 0.5, the particles can be considered positive samples for the special training dataset. When the ESS is less than 0.5, the particles should be resampled via Kullback–Leibler distance (KLD) sampling [33]:

$$(17)\quad \left\{x_{k}^{i}\right\}_{i=1}^{n_{k}} \leftarrow \left\{x_{k}^{i}, \omega_{k}^{i}\right\}_{i=1}^{n_{k}}.$$

An additional k-means step is used to estimate $X_{k}$ based on the particles $\{x_{k}^{i}\}_{i=1}^{n_{k}}$. Note that the aspect ratio of the positive training samples may differ from the initial anchors $A_{t-1}$, as we use the IoU overlap to define positive samples. We employ the k-means method to cluster the aspect ratios of the samples and update the anchors. To decrease the computational cost, only three anchors are used to retrain the YOLO network; they are set to $A_{t}$. These proposals are employed to retrain the YOLO network, which is produced by fine-tuning on the specific dataset. In the next iteration, this network becomes the input of the prediction phase and is used to create target proposals (bounding boxes) in the target scene.
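The following is a minimal sketch of the sample-selection likelihood (11)–(14) and the ESS-triggered resampling check (16). The exponent sign of the distance cue, the $\beta$ schedule, and the use of plain multinomial resampling in place of KLD sampling [33] are assumptions made for illustration, not the authors' exact implementation.

```python
import numpy as np

def likelihood(conf, particle_box, detection_box, alpha, beta_k):
    # D: Euclidean distance between the particle state and a detection (u, v, w, h).
    d = np.linalg.norm(np.asarray(particle_box, float) - np.asarray(detection_box, float))
    f_x = np.exp(-d / alpha)           # visual-context cue: closer samples score higher (assumed sign)
    return conf * max(f_x, beta_k)     # equation (11): h_k = f_s * max(f_x, beta_k)

def effective_sample_size(weights):
    w = np.asarray(weights, dtype=float)
    return (w.sum() ** 2) / np.sum(w ** 2)     # equation (16)

def maybe_resample(particles, weights, rng=np.random.default_rng()):
    # Resample only when the normalized ESS drops below 0.5, as in Section 3.4.
    particles = np.asarray(particles)
    n = len(weights)
    if effective_sample_size(weights) / n >= 0.5:
        return particles, np.asarray(weights)
    p = np.asarray(weights) / np.sum(weights)
    idx = rng.choice(n, size=n, p=p)           # multinomial stand-in for KLD sampling
    return particles[idx], np.full(n, np.sum(weights) / n)
```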
### 3.5. GM-PHD YOLO

The SMC-PHD filter is mainly discussed and applied to improve the YOLO network since it is more robust than the GM-PHD filter for nonlinear systems. However, for linear systems, the GM-PHD filter can provide a higher accuracy rate than the SMC-PHD filter. Therefore, in this subsection, we briefly discuss how to use the GM-PHD filter to improve the YOLO network. The pipeline of GM-PHD YOLO is similar to that of SMC-PHD YOLO: YOLO is pretrained on the generic dataset, the GM-PHD filter assists in building the custom dataset from the unlabelled target sequences, and YOLO is fine-tuned on this custom dataset. When the GM-PHD filter selects the samples, the steps include the prediction step, the update step, and pruning.

In the GM-PHD filter, $x_{k-1}^{i}$ is distributed across the state space according to the Gaussian density $\mathcal{N}(m_{k-1}^{i}, P_{k-1}^{i})$, where $m_{k-1}^{i}$ and $P_{k-1}^{i}$ are the mean and variance, respectively. In the prediction step, for existing targets, the mean and variance are predicted as $m_{k|k-1}^{i} = F m_{k-1}^{i}$ and $P_{k|k-1}^{i} = Q + F P_{k-1}^{i} F^{T}$, respectively, where $Q$ is the transition noise variance. Their weight is calculated as $\omega_{k|k-1}^{i} = p_{s}\omega_{k}^{i}$. Birth targets are randomly chosen in the tracking area. In the update step, for undetected targets, the mean and variance retain their values, and their weights are calculated as $\omega_{k}^{i} = (1 - p_{D})\omega_{k|k-1}^{i}$. For detected targets, the mean is calculated as

$$(18)\quad m_{k}^{i} = m_{k|k-1}^{i} + P_{k|k-1} H^{T}\left(R + H P_{k|k-1} H^{T}\right)^{-1}\left(z_{k} - H m_{k|k-1}^{i}\right).$$

The variance is updated as

$$(19)\quad P_{k}^{i} = \left[I - P_{k|k-1} H^{T}\left(R + H P_{k|k-1} H^{T}\right)^{-1} H\right] P_{k|k-1}^{i}.$$

The weight is updated as

$$(20)\quad \omega_{k}^{i} = p_{D}\,\omega_{k|k-1}^{i}\,\mathcal{N}\left(z_{k};\, H m_{k|k-1}^{i},\, R + H P_{k|k-1} H^{T}\right).$$

The weight is normalized as

$$(21)\quad \omega_{k}^{i} = \frac{\omega_{k}^{i}}{\kappa_{k}\left(z_{k}\right) + \sum_{i=1}^{n_{k}}\omega_{k}^{i}}.$$

A simple pruning procedure is further employed to reduce the number of Gaussian components. The high-weight targets are set to $X_{k}$ and are utilized to build the custom dataset.
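The following is a minimal sketch of the GM-PHD update for one Gaussian component and one detection (equations (18)–(20)), written as a standard Kalman-style update; normalization (21) would be applied afterwards over all components. `H`, `R`, and `p_d` are illustrative placeholders.

```python
import numpy as np

def gm_phd_update_component(m_pred, P_pred, w_pred, z, H, R, p_d=0.9):
    """m_pred: (d,), P_pred: (d, d), z: (dz,), H: (dz, d), R: (dz, dz)."""
    S = R + H @ P_pred @ H.T                         # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)              # gain used in (18)-(19)
    m_new = m_pred + K @ (z - H @ m_pred)            # equation (18)
    P_new = (np.eye(len(m_pred)) - K @ H) @ P_pred   # equation (19)

    # Gaussian likelihood N(z; H m_pred, S) for the weight update (20).
    resid = z - H @ m_pred
    norm = 1.0 / np.sqrt(((2 * np.pi) ** len(z)) * np.linalg.det(S))
    g = norm * np.exp(-0.5 * resid @ np.linalg.solve(S, resid))
    w_new = p_d * w_pred * g                         # equation (20); normalize later via (21)
    return m_new, P_new, w_new
```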
## 4. Experimental Results

This section introduces the test results obtained on several public and private datasets. First, the implementation details of our proposed method are given. Second, the datasets and baseline algorithms are introduced. Third, the ablation study of the SMC-PHD YOLO filter is discussed, and our proposed SMC-PHD YOLO detector is compared with several baseline methods.

### 4.1. Implementation Details

The initial YOLO in our proposed SMC-PHD YOLO filter is pretrained on the COCO dataset [31]. The Adam optimizer is applied, where the weight decay is 0.0005 and the momentum is 0.9. Although the transition matrix $F$ differs substantially across the different object classes in the different datasets, to simplify the problem, we assume $F$ to be

$$(22)\quad F = \begin{bmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}.$$

The YOLO network is fine-tuned on our evaluation datasets for the different tasks with the help of the SMC-PHD-based transfer method. The YOLO detector is tuned on a machine with 64 GB of memory and an NVIDIA GeForce GTX TITAN X GPU.
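The overall specialization procedure set up here is iterative: detect with the current network, filter the detections with the PHD recursion, and fine-tune on the resulting custom dataset. The sketch below outlines that loop; `run_detector`, `phd_filter`, and `fine_tune` are caller-supplied callables standing in for the components described in Section 3, not an actual API of YOLO or of the paper's code.

```python
def specialize_yolo(detector, frames, run_detector, phd_filter, fine_tune, n_iterations=3):
    """Iterative scene specialization (Sections 3 and 4.1).

    detector: the generic pretrained network Y_0.
    frames: unlabelled target-scene frames {I_k}.
    run_detector(detector, frame) -> detections Z_k for one frame.
    phd_filter(detections) -> estimated target sets {X_k} (clutter removed, missed targets added).
    fine_tune(detector, dataset) -> the retrained network Y_t.
    """
    for t in range(1, n_iterations + 1):
        detections = [run_detector(detector, f) for f in frames]   # Z_k from Y_{t-1}
        estimates = phd_filter(detections)                         # X_k via the PHD recursion
        custom_dataset = list(zip(frames, estimates))              # automatically labelled frames
        detector = fine_tune(detector, custom_dataset)             # produce Y_t
    return detector   # the final specialized detector runs without the filter
```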
## 4. Experimental Results

This section introduces the test results obtained on several public and private datasets. First, the implementation details of our proposed method are given. Second, the datasets and baseline algorithms are introduced. Third, the ablation study of the SMC-PHD YOLO filter is discussed. Finally, our proposed SMC-PHD YOLO detector and several baseline methods are compared.

### 4.1. Implementation Details

The initialized YOLO in our proposed SMC-PHD YOLO filter is pretrained on the COCO dataset [31]. The Adam optimizer is applied, where the weight decay is 0.0005 and the momentum is 0.9. Although the transition matrix $F$ differs substantially across the different object classes in the different datasets, to simplify the problem, we assume $F$ to be

$$F=\begin{bmatrix}1&0&1&0\\0&1&0&1\\0&0&1&0\\0&0&0&1\end{bmatrix}.\tag{22}$$

The YOLO network is fine-tuned on our evaluation datasets for the different tasks with the help of the SMC-PHD-based transfer method. The YOLO detector is tuned on an NVIDIA GeForce GTX TITAN X GPU with 64 GB of memory.

### 4.2. Evaluation Methodology and Dataset

We train the YOLO detector on a training collection containing 80k training images and 500k example annotations from the COCO dataset, which contains 2.5 million labelled instances among 328k images of 91 object categories. Although the COCO dataset does not contain continuous frames, it is only used to pretrain the YOLO network before the experiments. In the evaluation step, the datasets should contain continuous frames. The evaluation was performed with three different datasets.

GOT-10k [34] is a large-scale visual dataset with broad coverage of real-world objects. It contains 10k videos of 563 categories, and its category coverage is more than one order of magnitude wider than that of counterparts of a similar scale. Some of its categories are not included in the COCO dataset. Therefore, GOT-10k is suitable for fine-tuning the YOLO network pretrained on the COCO dataset. The annotations that we tested include birds, cars, tapirs, and cows. YouTubeBB [35] is a large, diverse dataset with 380,000 video sections and 5.6 million human-drawn bounding boxes in 23 classifications from 240,000 distinct YouTube videos. Each video includes time-localized, frame-level features, so classifier predictions at segment-level granularity are feasible. The annotations that we tested include cars and zebras. In the MIT Traffic dataset [36], a 90-minute video is provided. A total of 420 frames from the first 45 minutes are employed for specialization, and 420 images from the last 45 minutes are utilized for testing. The video was recorded by a stationary camera. The size of the scene is 720 by 480 pixels, and it is divided into 20 clips. The annotation that we tested includes only the cars.

False positives per frame (FPPI) and receiver operating characteristic (ROC) curves are used to evaluate our proposed detector and the baseline methods. The pipeline of the data preparation for the PHD YOLO experiment is shown in Figure 4.

Figure 4 The pipeline of the data preparation for the PHD YOLO experiment.
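Since FPPI is the main evaluation measure in this section, a small helper for computing it from per-frame detections and ground truth may be useful. The greedy IoU matching, the 0.5 IoU threshold, and the (x1, y1, x2, y2) box format are assumed conventions; the paper does not specify its matching rule.

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def fppi_and_recall(det_per_frame, gt_per_frame, iou_thr=0.5):
    """False positives per frame and recall over a sequence.

    det_per_frame / gt_per_frame: lists (one entry per frame) of box lists.
    A detection counts as a true positive if it matches an unused ground-truth
    box with IoU >= iou_thr (greedy matching, an assumed convention).
    """
    fp = tp = n_gt = 0
    for dets, gts in zip(det_per_frame, gt_per_frame):
        used = [False] * len(gts)
        n_gt += len(gts)
        for d in dets:
            ious = [iou(d, g) if not used[j] else 0.0 for j, g in enumerate(gts)]
            j_best = int(np.argmax(ious)) if ious else -1
            if j_best >= 0 and ious[j_best] >= iou_thr:
                used[j_best] = True
                tp += 1
            else:
                fp += 1
    return fp / max(len(det_per_frame), 1), tp / max(n_gt, 1)
```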
### 4.3. Baseline Method

The algorithms compared with the SMC-PHD YOLO algorithm are Generic YOLO [5], Generic Faster R-CNN [37], SMC Faster R-CNN [8], that of Singh et al. [38], that of Deshmukh and Moh [39], that of Kang et al. [40], that of Maâmatou et al. [10], the spatiotemporal sampling network (STSN) [41], salient object detection (SOD) [42], that of Lee et al. [43], that of Jie et al. [44], and that of Ghahremani et al. [45]. Table 1 shows the comparison between the baseline methods and our method. The detector pretrained on the general dataset is presented in the second column. Some methods automatically fine-tune the network with the target dataset collected by the methods shown in the third column. For example, the algorithm of Kang et al. [40] does not include a fine-tuning step, so its entry is left blank. The computational complexity of fine-tuning with the target dataset is shown in the last column, where $n_I$ is the number of frames in the video, $n$ is the number of particles for the SMC method, $m$ is the average number of targets in each frame, $l\times h$ is the size (length $\times$ width) of the frame, and $a$ is the number of auxiliary networks.

Table 1 Comparison between baseline methods and our method.

| Baseline | Detector | Fine-tuned with | Computational complexity |
|---|---|---|---|
| YOLO [5] | YOLO | — | — |
| R-CNN [37] | R-CNN | — | — |
| SMC R-CNN [8] | R-CNN | SMC | $O(n_I\cdot n\cdot m)$ |
| Singh et al. [38] | R-CNN | Track and segment [46] | $O(n_I\cdot l\cdot h)$ |
| Deshmukh and Moh [39] | CNN | Edge detectors | $O(n_I\cdot l\cdot h)$ |
| Kang et al. [40] | Contextual R-CNN | — | — |
| Maâmatou et al. [10] | SVM | SMC | $O(n_I\cdot n\cdot m)$ |
| STSN [41] | STSN | — | — |
| SOD [42] | SOD | R101 FPN | $O(n_I\cdot n\cdot m)$ |
| Lee et al. [43] | R-CNN | Auxiliary network | $O(n_I\cdot n\cdot m\cdot a)$ |
| Jie et al. [44] | R-CNN | Online supportive sample harvesting [44] | $O(n_I\cdot n\cdot m)$ |
| Ghahremani et al. [45] | CNN | F1 score threshold | $O(n)$ |
| SMC-PHD YOLO | YOLO | SMC-PHD | $O(n_I\cdot n\cdot m)$ |

### 4.4. SMC-PHD Filter YOLO for Multitarget Detection

In this subsection, we discuss the contribution of the SMC-PHD filter in our proposed method via three experiments. In these experiments, we evaluate the influence of the detection probability and the clutter density. Note that for a fixed labelled dataset and a fixed YOLO, these parameters are also fixed and can be measured from the dataset. To show the contribution of the detection probability and clutter density, we set different values in the experiments.

#### 4.4.1. Detection Probability

To evaluate the detection probability performance, we set the detection probability to different constants. The detection probability in the SMC-PHD filter is incrementally increased from 0 to 1, and six situations are considered: 0, 0.2, 0.4, 0.6, 0.8, and 1. The YouTubeBB dataset is selected since it includes several situations. For example, the vehicles in traffic videos are frequently occluded by other vehicles, while airplanes at an airport always appear in the scene.

Table 2 shows the FPPI of the SMC-PHD YOLO network versus the detection probability and category. A correctly estimated detection probability can produce a high FPPI. For example, since the airplanes are always shown in the centre of the scene in the airplane sequences, the highest FPPI for the airplane category is obtained at $p_{D,k}=0.2$. The best results for the car category are obtained at $p_{D,k}=0.6$ due to the occluded cars. Therefore, if targets are frequently occluded, then the detection probability should be set to a high value. Furthermore, for the airplane category, the FPPI at $p_{D,k}=1$ is only 85% of that at $p_{D,k}=0.2$. Thus, if the detection probability is set too high, such as 1, then the FPPI of the detection decreases.

Table 2 FPPI of the SMC-PHD YOLO network versus detection probability for the “airplane” and “car” categories of the YouTubeBB dataset.

| $p_{D,k}$ | 0 | 0.2 | 0.4 | 0.6 | 0.8 | 1 |
|---|---|---|---|---|---|---|
| Airplane | 0.80 | 0.81 | 0.78 | 0.75 | 0.72 | 0.69 |
| Car | 0.80 | 0.83 | 0.86 | 0.88 | 0.85 | 0.81 |

#### 4.4.2. Clutter Density Function

The clutter density function is employed to address the clutter problem. For the PHD filter, the clutter density function varies based on the detection results, although it is given a constant value in many references [26, 28, 32, 47]. In these experiments, the clutter density is a constant value for all detections. However, a large $\kappa_k(z_k^r)$ may decrease the weights of the targets, which causes an insufficient number of samples to be included in the training dataset. A low $\kappa_k(z_k^r)$ cannot address the clutter problem, and the retrained YOLO model remains sensitive to clutter. Since $\kappa_k(z_k^r)$ is normally set to a value from 0 to infinity, we test 8 different values on the boat and bicycle sequences of the YouTubeBB dataset. Distant buildings may be detected as boats, and the bicycle detection performance is also easily affected by the surroundings. The results are shown in Table 3. The highest FPPIs for the boat sequence and the bicycle sequence are obtained at $\kappa_k=0.3$ and $\kappa_k=0.1$, respectively, since the level of clutter varies across categories. For “boat,” if $\kappa_k$ is lower than 0.3, the FPPI slightly decreases since clutter is added to the specialized training data and the retrained model is still sensitive to the clutter.
If $\kappa_k$ exceeds 0.3, the FPPI also decreases since the weights of the target samples decrease and the retraining dataset does not include sufficient training samples.

Table 3 FPPI of the SMC-PHD YOLO network versus clutter density for the “boat” and “bicycle” categories of the YouTubeBB dataset.

| $\kappa_k$ | 0 | 0.1 | 0.3 | 0.7 | 0.9 | 1 | 5 | 10 |
|---|---|---|---|---|---|---|---|---|
| Boat | 0.81 | 0.82 | 0.84 | 0.82 | 0.74 | 0.68 | 0.58 | 0.51 |
| Bicycle | 0.67 | 0.69 | 0.65 | 0.59 | 0.51 | 0.48 | 0.39 | 0.34 |

### 4.5. Error Analysis of the SMC-PHD YOLO Network

Since the target dataset is automatically generated by an SMC-PHD filter, it may include some error samples with incorrect labels. To analyse whether these error samples affect the final performance, we test our SMC-PHD YOLO network with the YouTubeBB dataset. The annotations that we employ comprise cars and zebras. For each annotation, the video is 20 min long and contains 36,000 frames. These frames are manually labelled by researchers and automatically labelled by our method. After manually labelling these videos, 831,615 and 88,234 positive target samples were obtained for cars and zebras, respectively, since multiple targets may appear in the same frame. For the labels assigned by our method, “cars” includes 797,660 true-positive samples and 212 false-positive samples, while “zebras” includes 69,821 true-positive samples and 17 false-positive samples. These results show that the algorithm assigns fewer labels than humans because some tiny targets and low-probability targets are treated as clutter and disregarded. “Car” has a higher recall rate (96%) than “zebra” (79%) since cars, with their regular profile, are easier to detect.

To further analyse these error samples, we plot the data distributions. The selected features comprise the input of the last fully connected layer of YOLO, and two main dimensions are selected by t-distributed stochastic neighbour embedding. Figure 5 shows the data distribution of true positives, false positives, and false negatives. It confirms that tiny targets are considered outliers and are disregarded. We also discovered that some clutter (green points) in the target dataset is considered positive samples (false positives). After this clutter is manually removed from the target dataset, the YOLO performance does not change. The main potential reason is the high threat score (99%): the SMC-PHD filter disregards the most uncertain samples. However, this approach does not fundamentally solve the problem of clutter, since some low-probability positive samples are considered false negatives (red points). Some researchers suggest the use of extra information, such as audio information, to address the clutter problem [48]. Addressing the clutter problem will be one of our future research topics.

Figure 5 Data distribution of true positives, false positives, and false negatives for “car” and “zebra” of the YouTubeBB dataset.

### 4.6. Scene-Specialized Multitarget Detector

To show the performance of the PHD method for transfer learning, we compare the baseline YOLO network, the SMC YOLO network, SMC R-CNN, and our proposed SMC-PHD YOLO and GM-PHD YOLO networks on the YouTubeBB dataset. Since SMC R-CNN cannot address occluded samples, we also build SMC-PHD R-CNN to improve the performance of Faster R-CNN and show the effect of the PHD method. We train the YOLO network with a general training set (the COCO dataset), which contains a limited amount of target data. SMC-PHD then augments a dataset containing unseen data. The unseen data in the augmented dataset are assigned labels that may contain errors. YOLO is fine-tuned on this target dataset and is then applied without an SMC-PHD filter; the SMC-PHD filter is only applied to augment data in this work. The parameters of the PHD filter are chosen according to the Beta-Gaussian mixture model [32]. We test these methods on the airplane, bicycle, boat, and car categories of the YouTubeBB dataset. For the different categories, we train different SMC-PHD YOLO networks whose parameters are independent. The FPPI values of the YOLO network and R-CNN fine-tuned by the SMC-PHD, GM-PHD, and SMC filters are shown in Table 4. After fine-tuning YOLO, the filters are not employed for target detection. Our proposed method has the highest FPPI value of all methods for the boat and car categories, and SMC-PHD YOLO performs similarly to SMC-PHD R-CNN. According to the results, SMC improves the performance of YOLO and R-CNN by approximately 8%, and PHD further improves their performance by approximately 6%. Although GM-PHD YOLO has an 8% higher FPPI than YOLO, it is still lower than that of SMC-PHD YOLO. We speculate that the reason is that the number of bounding boxes identified by GM-PHD YOLO is 4% higher than that identified by SMC-PHD YOLO. This shows that SMC-PHD YOLO is more robust than GM-PHD YOLO. Therefore, in the following experiments, we mainly test SMC-PHD YOLO.

Table 4 FPPI of our proposed SMC-PHD YOLO, SMC YOLO, YOLO, SMC-PHD R-CNN, and SMC R-CNN on the YouTubeBB dataset.

| Method | Airplane | Bicycle | Boat | Car |
|---|---|---|---|---|
| SMC-PHD YOLO | 0.81 | 0.69 | 0.84 | 0.88 |
| GM-PHD YOLO | 0.79 | 0.65 | 0.82 | 0.84 |
| SMC YOLO | 0.76 | 0.63 | 0.76 | 0.81 |
| YOLO | 0.71 | 0.57 | 0.68 | 0.76 |
| SMC-PHD R-CNN | 0.82 | 0.70 | 0.83 | 0.88 |
| SMC R-CNN | 0.79 | 0.67 | 0.83 | 0.89 |

Some results of the proposed method and the baseline methods are shown in Figure 6. The first line and second line of each subfigure are detected by Generic YOLO and the specialized YOLO, respectively. In Figure 6(a), the flapping bird is detected only by the specialized YOLO detectors. Thus, our proposed method can customize the detector for a moving target because the dataset is selected from a sequence with the likelihood function. In addition, some occluded cars are detected by our proposed method due to the detection probability. In Figure 6(b), cars and zebras are successfully detected by the specialized YOLO detector, even though only parts of the vehicles and zebras are shown in the images. For the traffic sequences shown in Figure 6(c), the number of cars detected with the specialized YOLO detector is higher than that detected with the Generic YOLO detector. With the SMC-PHD filter, our proposed method can detect occluded cars and certain small vehicles.

Figure 6 Improvement of the scene-specific detector for GOT-10k (a), YouTubeBB (b), and MIT Traffic (c). The first line of each subfigure indicates the Generic YOLO, and the second line indicates the SMC-PHD YOLO detector.

To further evaluate our proposed method, we compare our methods with other baseline methods, such as that of Singh et al. [38], that of Deshmukh and Moh [39], that of Kang et al. [40], that of Maâmatou et al. [10], STSN [41], SOD [42], that of Lee et al. [43], that of Jie et al. [44], and that of Ghahremani et al. [45]. Figure 7 shows the ROC curves of the detectors for the different annotations. In this experiment, we chose the bird and boat categories from the GOT-10k and YouTubeBB datasets and the car category from the MIT Traffic dataset. Due to the page limitation, Figures 7(a) and 7(b) only show a comparison between SMC-based detectors, such as SMC-PHD YOLO, and generic detectors, such as YOLO. The comparison between our proposed method and state-of-the-art methods is shown in Figures 7(c)–7(e). In Figure 7(a), the method of Kang achieves a higher true-positive rate than those of Kumar and Dalal because the former is specially designed for boat detection. Compared with the Generic YOLO for boat detection, the SMC-PHD YOLO detector achieves an ROC improvement of 13%. As boats are often occluded in the bay, the SMC-PHD YOLO detector with the detection probability performs better than the other methods. The boat detection results on the YouTubeBB dataset are similar to those on the GOT-10k dataset. Compared with the generic methods, the specialized methods achieve ROC improvements of approximately 10%. More baseline transfer learning methods are considered in Figure 7(c), where they are shown as dashed lines. The transfer methods achieve better performance than the generic R-CNN or YOLO methods. SMC based on R-CNN achieves a similar ROC value to the other transfer detectors. Based on SMC, the SMC R-CNN detector and the SMC-PHD YOLO detector achieve increases in the ROC values of 3.8% and 5.8%, respectively, compared with their baseline methods. For car detection, we test the methods only on the MIT Traffic dataset. As shown by the ROC curves in Figure 7(e), SMC-PHD YOLO outperforms all other car detection frameworks. The SMC-PHD YOLO detector also outperforms the four other specialized detectors, i.e., SMC Faster R-CNN and those of Kumar, Dalal, and Maâmatou, by 5%, 6%, 9%, and 2%, respectively.

Figure 7 ROC curves for the Kumar, Dalal, Faster R-CNN, SMC Faster R-CNN, YOLO, and SMC-PHD YOLO methods with the bird (a) and boat (b) annotations of GOT-10k, the bird (c) and boat (d) annotations of YouTubeBB, and the car (e) annotation of the MIT Traffic dataset.

Table 5 reports the average detection rate of our proposed method and other state-of-the-art methods for the different datasets. We list the ten annotations on GOT-10k and YouTubeBB. As the Kang and Maâmatou methods are designed for boat and traffic detection, they are not included in this table. Our proposed method achieves the highest detection rate, especially for the MIT Traffic dataset, and SMC-PHD YOLO can detect occluded targets, such as cars. Although SMC R-CNN achieves a detection rate similar to that of the SMC-PHD YOLO detector, the number of frames per second (FPS) of the SMC-PHD YOLO network is 100 times that of SMC R-CNN. Therefore, the SMC-PHD YOLO detector considerably outperforms the generic detector on several annotations across all of the evaluated datasets. Compared to the baseline YOLO detector, the SMC-PHD YOLO detector achieves a 12% higher detection rate.

Table 5 Detection rate for the different datasets with different detections (at 1 FPPI).
| Dataset | Annotation | SMC-PHD YOLO | YOLO [5] | SMC R-CNN [8] | Kumar [38] | Dalal [39] | STSN [41] | Jie [44] | Ghahremani [45] | Lee [43] | SOD [42] |
|---|---|---|---|---|---|---|---|---|---|---|---|
| YouTubeBB | Airplane | 0.91 | 0.81 | 0.87 | 0.80 | 0.80 | 0.83 | 0.85 | 0.86 | 0.83 | 0.89 |
| | Bicycle | 0.89 | 0.77 | 0.86 | 0.78 | 0.75 | 0.82 | 0.83 | 0.82 | 0.84 | 0.87 |
| | Bird | 0.94 | 0.85 | 0.92 | 0.82 | 0.84 | 0.87 | 0.86 | 0.86 | 0.88 | 0.89 |
| | Boat | 0.98 | 0.87 | 0.96 | 0.83 | 0.84 | 0.89 | 0.91 | 0.92 | 0.93 | 0.95 |
| | Bus | 0.96 | 0.83 | 0.95 | 0.82 | 0.81 | 0.89 | 0.86 | 0.90 | 0.92 | 0.93 |
| | Car | 0.98 | 0.86 | 0.95 | 0.84 | 0.83 | 0.91 | 0.92 | 0.92 | 0.93 | 0.94 |
| | Cat | 0.94 | 0.85 | 0.92 | 0.82 | 0.83 | 0.93 | 0.91 | 0.93 | 0.92 | 0.95 |
| | Cow | 0.98 | 0.87 | 0.95 | 0.86 | 0.88 | 0.95 | 0.96 | 0.94 | 0.95 | 0.96 |
| | Dog | 0.92 | 0.81 | 0.89 | 0.80 | 0.82 | 0.88 | 0.91 | 0.89 | 0.90 | 0.88 |
| | Horse | 0.96 | 0.85 | 0.94 | 0.86 | 0.86 | 0.92 | 0.90 | 0.89 | 0.93 | 0.95 |
| GOT-10k | Anteater | 0.53 | 0.39 | 0.41 | 0.37 | 0.42 | 0.52 | 0.48 | 0.51 | 0.49 | 0.52 |
| | Bird | 0.94 | 0.88 | 0.92 | 0.87 | 0.79 | 0.86 | 0.86 | 0.88 | 0.89 | 0.93 |
| | Cat | 0.91 | 0.83 | 0.90 | 0.84 | 0.79 | 0.84 | 0.86 | 0.88 | 0.90 | 0.87 |
| | Elephant | 0.88 | 0.73 | 0.86 | 0.75 | 0.70 | 0.82 | 0.84 | 0.87 | 0.89 | 0.85 |
| | Boat | 0.98 | 0.87 | 0.97 | 0.84 | 0.84 | 0.87 | 0.89 | 0.92 | 0.94 | 0.97 |
| | Goat | 0.88 | 0.72 | 0.87 | 0.76 | 0.69 | 0.78 | 0.80 | 0.83 | 0.85 | 0.87 |
| | Horse | 0.87 | 0.71 | 0.85 | 0.73 | 0.75 | 0.81 | 0.83 | 0.84 | 0.86 | 0.85 |
| | Lion | 0.86 | 0.73 | 0.84 | 0.71 | 0.77 | 0.81 | 0.83 | 0.84 | 0.85 | 0.83 |
| | Car | 0.95 | 0.85 | 0.91 | 0.86 | 0.87 | 0.85 | 0.87 | 0.93 | 0.94 | 0.94 |
| | Tank | 0.74 | 0.61 | 0.68 | 0.63 | 0.61 | 0.63 | 0.66 | 0.69 | 0.71 | 0.73 |
| MIT Traffic | Pedestrian | 0.97 | 0.85 | 0.93 | 0.86 | 0.82 | 0.91 | 0.93 | 0.95 | 0.94 | 0.96 |
| | Car | 0.95 | 0.88 | 0.89 | 0.93 | 0.89 | 0.90 | 0.92 | 0.93 | 0.95 | 0.96 |
| Average | | 0.90 | 0.79 | 0.87 | 0.79 | 0.78 | 0.84 | 0.85 | 0.86 | 0.87 | 0.89 |

Although our proposed method has the highest detection rate and large ROC values among all methods, the SMC-PHD YOLO performance depends on hyperparameters such as the detection probability and the clutter density. These parameters should be established at the beginning of training based on previous experience. Some researchers have proposed solutions for estimating the parameters of the SMC-PHD filter. For example, Lian et al. [49] used expectation maximization to estimate the unknown clutter probability, and Li et al. [50] used the gamma Gaussian mixture model to estimate the detection probability. Applying this kind of estimation method to improve the SMC-PHD YOLO filter will be addressed in our future work.
## 5. Conclusion

To customize the YOLO detector for unique target identification, we suggested an effective and precise structure based on the SMC-PHD filter and the GM-PHD filter.
On the basis of the proposed confidence score-based likelihood and novel resampling strategy, the framework can be employed by choosing appropriate samples from target datasets to train and then detect a target. This framework automatically offers a strong specialized detector with a Generic YOLO detector and some target videos. The tests showed that the proposed framework can generate a specific YOLO detector that considerably outperforms the Generic YOLO detector on a distinct dataset for bird, boat, and vehicle detection. Correlated clutter is still challenging for SMC-PHD filters. Our future research will focus on expanding the algorithm with multimodal information to address the correlated clutter problem. --- *Source: 1010767-2022-04-28.xml*
# Scene-Specialized Multitarget Detector with an SMC-PHD Filter and a YOLO Network

**Authors:** Qianli Liu; Yibing Li; Qianhui Dong; Fang Ye
**Journal:** Computational Intelligence and Neuroscience (2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1010767
--- ## Abstract You only look once (YOLO) is one of the most efficient target detection networks. However, the performance of the YOLO network decreases significantly when the variation between the training data and the real data is large. To automatically customize the YOLO network, we suggest a novel transfer learning algorithm with the sequential Monte Carlo probability hypothesis density (SMC-PHD) filter and Gaussian mixture probability hypothesis density (GM-PHD) filter. The proposed framework can automatically customize the YOLO framework with unlabelled target sequences. The frames of the unlabelled target sequences are automatically labelled. The detection probability and clutter density of the SMC-PHD filter and GM-PHD are applied to retrain the YOLO network for occluded targets and clutter. A novel likelihood density with the confidence probability of the YOLO detector and visual context indications is implemented to choose target samples. A simple resampling strategy is proposed for SMC-PHD YOLO to address the weight degeneracy problem. Experiments with different datasets indicate that the proposed framework achieves positive outcomes relative to state-of-the-art frameworks. --- ## Body ## 1. Introduction Learning-based detection algorithms have proven important in several subject areas, including smart surveillance systems [1], wireless sensors [2, 3], and secure transportation systems [4]. Over the past several years, convolutional neural networks (CNNs) have achieved excellent results in multiple computer vision assignments. You only look once (YOLO) is an effective visual detection method [5]. Compared with other detection networks, the YOLO network can predict class probabilities and bounding boxes in an assessment directly from the input frame. YOLO detectors, however, are taught with annotated datasets and utilized to attain the highest variability of the target. The distribution of the target captured by the camera may not be a subset of the initial learning set when these detectors are applied to a specific scene, such as in the case of a closed-circuit television (CCTV) camera. Therefore, the resulting Generic YOLO detector may not function effectively, especially for a limited amount of training data [6].To address this problem, transfer learning with cross-domain adaptation is proposed. A specific training dataset is needed to generate a specific detector. Normally, these positive samples of the specific training dataset are manually selected from the target dataset. However, a large amount of labelled data is needed to tune the detector in each frame, and labelling is a labor-intensive task. A typical solution for reducing the collection time is to automatically provide the sample labels with the target frame. Labelled samples are iteratively collected from the unlabelled sequence and added to the training dataset [7].We propose a novel transfer learning method with a probability hypothesis density (PHD) filter, which can automatically retrain a YOLO network for a special object. The scene-specific detector is generated with a Generic YOLO detector trained by labelled frames and sequences without labelled information. The parameters of the YOLO detector are estimated by an iterative process. After automatic and iterative training, the final specialized YOLO detector is produced and can run without the SMC-PHD filter. Figure1 illustrates the structure of our method.Figure 1 YOLO network. 
The red line shows the grids of the images, and the red box shows the bounding boxes. The pattern-filled boxes show the grids with high probabilities.

Although improving YOLO with the SMC method has been employed for transfer learning [8], the detection probability and clutter density are not considered in the target sequence. In the update step of our proposed method, occluded targets are selected and collected as positive samples for training. The primary benefit of our method is that the recognition model can learn the appearance of occluded targets and clutter. As shown by the experimental results in Section 4, our proposed SMC-PHD YOLO can detect some occluded targets with the SMC-PHD filter-based occlusion strategy, while the SMC Faster region-based CNN (R-CNN) [8] cannot detect the occluded targets. In addition, when positive samples are collected, some false samples (clutter) may be added to the positive training dataset. The performance of the SMC Faster R-CNN [8] is affected by this clutter: when there is clutter in the training dataset, the SMC Faster R-CNN produces false detections. Based on the clutter density, this clutter is assigned a low weight, and our proposed method can disregard false samples. Our proposed PHD YOLO network has four main contributions:

(i) To address the bias between the training dataset and the target set, we propose a PHD-based transfer learning method for YOLO. For nonlinear tasks, a scene-specialized multitarget detector, SMC-PHD YOLO, is proposed. For linear systems and Gaussian noise tasks, we extend our method to GM-PHD YOLO to eliminate concerns about SMC dependence.

(ii) In SMC-PHD YOLO, we show that the detection probability and clutter density of the SMC-PHD filter improve the performance of the retrained YOLO networks for occluded targets and multiscale targets. When the image quality of the target scenes is unsatisfactory, even with noise, the specialized YOLO network can still detect the target with the posterior density.

(iii) A novel likelihood is proposed to verify the selected samples in PHD YOLO. To collect positive samples for training, the confidence probability of the YOLO detector and visual context indications are applied.

(iv) For the weight degeneracy problem of SMC YOLO, we also propose a novel and simple resampling strategy that can collect samples from the target sequence based on their weights, where the proposal distribution is assumed to be the target distribution. With the detection distribution, the strategy can function effectively even when a small number of samples is employed.

The remainder of this document is structured as follows: Section 2 introduces the current approaches applied in this area and details the benefits of our proposed method over other specialization methods. Section 3 describes our proposed strategy in detail. Section 4 details the experimental configuration and presents the experimental outcomes, and concluding comments are provided in Section 5. We adhere to the convention that scalar variables, such as confidence, are presented in lowercase italics, e.g., $f$. Symbols for vector-formed states and their densities are shown in lowercase bold italics, e.g., $x$, and multitarget states are represented by uppercase bold italics, e.g., $X$. Uppercase nonbold letters represent polynomials. Symbols for matrices, such as the transition matrix, are shown in uppercase bold letters, e.g., $F$.

## 2. Background
### 2.1. Specialization Frameworks

If the distribution of the training samples is different from that of the target scenes, a traditional visual detector may not function effectively [9]. To address this problem, specialization frameworks are utilized to automatically create scene-specific detectors for a target scene. Transfer learning algorithms based on state-of-the-art theories use the annotated model and the expertise gained through prior assignments. There are three main types of transfer learning methods [10]. First, by changing the parameters of the source learning model, the model is improved in a target domain [11, 12]. Second, the variation between the source and target distributions is decreased, and the source learning model is adapted to the target domain [13, 14]. Third, the training samples are manually or automatically chosen, and the model is retrained with a subset of the selected samples [15]. We focus on the third category because it can automatically label the selected samples while the training parameters remain unchanged.

However, the new training dataset may contain some incorrectly labelled samples because the labels of the samples are not manually verified. With this type of dataset, the accuracy of the detection framework may decrease. To address this problem, various contextual indications, such as the visual appearance of objects, pedestrian movement, the road model, size, and place, are used to verify favourable samples for retraining; however, this approach is sensitive to occlusion [16]. Moreover, some techniques may only use samples from the target domain and waste helpful samples [17]. Htike and Hogg [9] employed a background subtraction algorithm to select target samples from the source and target datasets and train a particular detector. To automatically label target information, tracklet chains are utilized to link the proposed samples to tracklets [15] predicted by an appearance-target detector. However, for each target scene, this framework, which includes many manual parameters and thresholds, may affect the specialization performance. Alternatively, Maâmatou et al. [10] collected fresh samples: an SMC transfer learning method was employed to create a new dataset and train a dedicated retrained detector [8].

### 2.2. YOLO Network

In this work, we use the YOLO (V3) network [5] since it passes the image only once through a fully convolutional neural network (FCNN), which enables it to achieve real-time performance. YOLO (V3) was developed based on YOLO [18] and YOLO (V2) [19]. The YOLO network considers the detection problem as a regression problem. Therefore, the network directly generates a bounding box for each class via regression without any proposal region, which decreases the computational cost compared to Faster R-CNN.

The YOLO detection model is shown in Figure 1, where the network divides each input image of the training set into $S\times S$ grids. When a grid contains the centre of the target ground truth, that grid is used to detect the object. For each grid, several bounding boxes and their confidence scores are predicted. The confidence $f_s$ is defined as

$$f_s=p_r\times \mathrm{IoU}_{\mathrm{pred}}^{\mathrm{truth}},\quad p_r\in\{0,1\}.\tag{1}$$

If the target is in the grid, $p_r=1$; otherwise, $p_r=0$. $\mathrm{IoU}_{\mathrm{pred}}^{\mathrm{truth}}$ (the intersection over union of the prediction and the ground truth) is used to represent the coincidence between the predicted bounding box and the reference bounding box, which indicates whether the grid contains targets.
If several bounding boxes detect the same target, then nonmaximum suppression (NMS) is applied to select the best bounding box.

YOLO has a lower computational cost than Faster R-CNN; however, it makes more errors. To address this problem, YOLO adopts the “anchor” mechanism of Faster R-CNN to generate suitable prior bounding boxes, and the priors are selected with k-means clustering. The adoption of the anchor boxes decreases the mean average precision (mAP). In addition, unlike YOLO, YOLO-V3 uses batch normalization, multiscale prediction, a high-resolution classifier, dimension clusters, direct location prediction, fine-grained features, multiscale training, and other methods that greatly improve the detection accuracy.
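As a concrete illustration of the anchor-selection step mentioned above, the snippet below clusters the widths and heights of collected boxes with a plain k-means to obtain three priors. Using a Euclidean distance (rather than the IoU-based distance of the original YOLO (V2) procedure) and the uniform toy data are simplifying assumptions.

```python
import numpy as np

def anchor_kmeans(box_wh, k=3, iters=50, seed=0):
    """Cluster (width, height) pairs of collected boxes into k prior anchors."""
    rng = np.random.default_rng(seed)
    centres = box_wh[rng.choice(len(box_wh), size=k, replace=False)]
    for _ in range(iters):
        # assign each box to its nearest centre
        d = np.linalg.norm(box_wh[:, None, :] - centres[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        # move each centre to the mean of its boxes (keep the old centre if empty)
        for c in range(k):
            if np.any(assign == c):
                centres[c] = box_wh[assign == c].mean(axis=0)
    return centres

# toy usage: widths/heights of collected positive samples
wh = np.random.default_rng(1).uniform(10, 200, size=(500, 2))
anchors = anchor_kmeans(wh, k=3)   # three anchors, as in Section 3.4
```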
### 2.3. Random Finite Set and PHD Filters

In this subsection, we discuss the random finite set and PHD filters for scene-specialized transfer learning. The probability hypothesis density and the random finite set were proposed for multitarget tracking [20–22]. The random finite set is a flexible formulation that can be combined with any object detector to generate positional and dimensional information on objects of interest. Maggio et al. used detectors such as background subtraction, AdaBoost classifiers, and a statistical change detector to track objects associated with a random finite set (RFS) [23, 24]. For handling occlusion problems during tracking, Kim et al. proposed the labelled RFS [25]. As the RFS formulation is a computationally expensive approximation of the multitarget Bayes filter, the PHD, which is the first-order moment of the RFS, is used; the RFS is a set of random variables (or vectors) with random cardinality [20]. An alternative derivation of the PHD filter based on classical point process theory was given in [26]. In multitarget research, the Gaussian mixture PHD (GM-PHD) filter [27] and the SMC-PHD filter [28] are widely utilized. The GM-PHD filter is a closed-form solution, as it assumes that the model is linear and Gaussian. By limiting the number of considered partitions and possible alternatives, Granstrom et al. proposed a GM-PHD filter for tracking extended targets [29]. Since different objects have different levels of clutter, an N-type GM-PHD filter was proposed for real video sequences by integrating object detector information into the filter for two scenarios [30]. However, the accuracy may decrease for nonlinear problems. To address nonlinear problems, the SMC-PHD filter was proposed based on the Monte Carlo method. With the weights of the samples (particles), the SMC-PHD filter can track a varying number of unknown targets.

The PHD filter is defined through the intensity $\psi_k$, which is applied to estimate the number of targets. The PHD filter involves a prediction step and an update step that recursively propagate the intensity function. The PHD prediction step is defined as

$$\psi_{k|k-1}(x_k)=\xi_k(x_k)+\int\phi_{k|k-1}(x_k\mid x_{k-1})\,\psi_{k-1}(x_{k-1})\,dx_{k-1},\tag{2}$$

where $x$ is the target bounding box state and $\xi_k(x_k)$ is the intensity of the birth RFS. $\phi_{k|k-1}(x_k\mid x_{k-1})$ is the analogue of the state transition probability,

$$\phi_{k|k-1}(x_k\mid x_{k-1})=p_{S,k}(x_{k-1})\,f_{k|k-1}(x_k\mid x_{k-1})+\beta_{k|k-1}(x_k\mid x_{k-1}),\tag{3}$$

where $p_{S,k}(x_{k-1})$ is the survival probability, $f_{k|k-1}(x_k\mid x_{k-1})$ is the transition density, and $\beta_{k|k-1}(x_k\mid x_{k-1})$ is the intensity function of the spawn RFS with the previous state $x_{k-1}$. The PHD update equation is given as

$$\psi_k(x_k)=\big[1-p_{D,k}(x_k)\big]\psi_{k|k-1}(x_k)+\sum_{z_k\in Z_k}\frac{p_{D,k}(x_k)\,h_k(z_k\mid x_k)\,\psi_{k|k-1}(x_k)}{\kappa_k(z_k)+\int p_{D,k}(x)\,h_k(z_k\mid x)\,\psi_{k|k-1}(x)\,dx},\tag{4}$$

where $h_k(z_k\mid x_k)$ is the likelihood defining the probability of $z_k$ given $x_k$, and $p_{D,k}(x_k)$ is the detection probability. The intensity of the clutter RFS $C_k$ is written as $\kappa_k(z_k)=\gamma\,u(z_k)$, where $\gamma$ is the average number of Poisson clutter points per scan and $u(z_k)$ is the probability distribution of each clutter point. The PHD recursion involves multiple integrals in equations (2) and (4), which have no closed-form solution in general. To address this issue, the SMC-PHD filter has been proposed and widely utilized [28]. In the SMC-PHD filter, at time $k-1$, the target PHD $\psi_{k-1}(x_{k-1})$ is represented by a set of particles $\{x_{k-1}^i,\omega_{k-1}^i\}_{i=1}^{n_{k-1}}$, where $n_{k-1}$ is the number of particles at $k-1$. To the best of our knowledge, this article is the first study to use the PHD filter to train a scene-specialized multitarget detector. As the number of targets is unknown in our unlabelled dataset and the sample collection is nonlinear and non-Gaussian, the SMC-PHD filter is applied to collect the unlabelled training data and customize the YOLO network.
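A particle-based reading of the prediction step in equations (2) and (3) can be sketched as follows. The constant-velocity-style transition matrix (the same form as the matrix $F$ used later in the experiments), the Gaussian process noise, the uniform birth region, and the birth weight mass of 0.1 are all illustrative assumptions; spawning is omitted for brevity.

```python
import numpy as np

def predict_particles(particles, weights, F, q_std, p_s, n_birth, area, rng):
    """PHD prediction step (eqs. (2)-(3)) with particles.

    Surviving particles are propagated through the transition matrix F with
    Gaussian noise, and their weights are scaled by the survival probability
    p_s. Birth particles are drawn uniformly over the tracking area with a
    small uniform weight (an assumed birth intensity mass of 0.1).
    """
    moved = particles @ F.T + rng.normal(0.0, q_std, size=particles.shape)
    w_surv = p_s * weights
    births = rng.uniform(area[0], area[1], size=(n_birth, particles.shape[1]))
    w_birth = np.full(n_birth, 0.1 / n_birth)
    return np.vstack([moved, births]), np.concatenate([w_surv, w_birth])

# toy usage
rng = np.random.default_rng(0)
parts = rng.uniform(0.0, 100.0, size=(200, 4))
w = np.full(200, 1.0 / 200)
F = np.array([[1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.],
              [0., 0., 0., 1.]])
parts_pred, w_pred = predict_particles(parts, w, F, q_std=1.0, p_s=0.95,
                                       n_birth=20, area=(0.0, 100.0), rng=rng)
```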
## 3. Proposed Framework

This section introduces our proposed framework, which customizes the YOLO model based on the PHD filter. The PHD filter is used to label the targets in unlabelled videos based on the YOLO output. The positive samples estimated by the PHD filter are used to build a new custom dataset. The YOLO network is fine-tuned on this custom dataset, which may contain occluded targets and targets of different styles. Since the number of unlabelled videos is large, the bias between the training dataset and the real data decreases. Compared to state-of-the-art methods, our proposed framework is not sensitive to occlusion and target shape. The overall framework of the proposed method is shown in Figure 2.

Figure 2 Overall framework of the proposed method. The framework input is a generic, fine-tuned YOLO detector. A visual sequence is provided to the scheme without manual labelling. To customize the YOLO network, an iterative method automatically estimates both parameters.

To be more specific, assume that a Generic YOLO network $Y_0$ is trained on generic datasets, such as Common Objects in Context (COCO) [31]. For the target sequence, the unlabelled frames are represented as $\{I_k\}_{k=1}^{n_I}$, where $k$ is the index of the frame. The detection output of $Y_0$ at frame $k$ is $Z_k$, and the detections over the whole sequence are $\{Z_k\}_{k=1}^{n_I}$. $Z_k = \{z_k^r\}_{r=1}^{m_k}$ is the detection set at frame $k$, where $z_k^r$ is the bounding box state of a detected target, $r$ is the index of the detected target, and $m_k$ is the number of detected targets. Furthermore, the PHD filter updates $\{Z_k\}_{k=1}^{n_I}$ to the estimated target states $\{X_k\}_{k=1}^{n_I}$. $X_k = \{x_k^j\}_{j=1}^{S_k}$ is the estimated target set, where $S_k$ is the number of estimated targets at $k$ and $j$ is the index of the estimated targets. Note that $m_k$ is not necessarily equal to $S_k$: the PHD filter removes some clutter from $\{Z_k\}_{k=1}^{n_I}$ and adds some missed targets.
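The loop below is an illustrative Python outline of this specialization cycle: detect with the current network, filter the detections with the PHD filter, build a pseudo-labelled dataset, and fine-tune. The helpers detect, phd_filter, and fine_tune are hypothetical stubs standing in for the real components, so the sketch only mirrors the control flow described above.

```python
import numpy as np

# Hypothetical stand-ins for the real components (not an actual API).
def detect(detector, frame):
    # Pretend the detector returns a few [u, v, w, h] boxes (Z_k) for this frame.
    return np.array([[100.0, 80.0, 40.0, 60.0], [300.0, 200.0, 50.0, 50.0]])

def phd_filter(detections):
    # Placeholder for the SMC-PHD step; here it simply passes the detections through.
    return detections

def fine_tune(detector, dataset):
    # Placeholder for retraining; a real implementation would update the network weights.
    return detector

def specialize(detector, frames, n_iterations=3):
    """Iterative scene specialization: detect -> filter -> pseudo-label -> fine-tune."""
    for _ in range(n_iterations):
        custom_dataset = []
        for frame in frames:
            z_k = detect(detector, frame)        # Z_k: raw YOLO detections
            x_k = phd_filter(z_k)                # X_k: clutter removed, missed targets recovered
            custom_dataset.append((frame, x_k))  # pseudo-labelled training pair
        detector = fine_tune(detector, custom_dataset)  # Y_t -> Y_{t+1}
    return detector

frames = [np.zeros((480, 720, 3)) for _ in range(5)]
specialized = specialize("generic-yolo", frames)
```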
The $n_I$ images with the estimated target bounding box sets $\{X_k\}_{k=1}^{n_I}$ are applied to fine-tune the YOLO network. The fine-tuned YOLO is referred to as $Y_t$, where $t$ is the index of the fine-tuning iteration. The training pipeline of the PHD YOLO detector can be found in Figure 3.

Figure 3 The training pipeline of the PHD YOLO detector.

The challenge is how to select the samples with the SMC-PHD filter. In this section, the iterative process is divided into three steps: prediction, updating, and resampling. In the following subsections, the details of the three primary steps are outlined. Since the SMC-PHD filter is more robust than the GM-PHD filter in the tracking task, PHD YOLO is mainly implemented with an SMC-PHD filter. To extend our proposed method to linear systems, GM-PHD YOLO is briefly discussed at the end of this section.

### 3.1. Prediction Step

To build the custom dataset $\{X_k\}_{k=1}^{n_I}$, several particles are applied. At frame $k-1$, the particles are represented as $\{x_{k-1}^i, \omega_{k-1}^i\}$, where $\omega_{k-1}^i$ is the particle weight. Our work considers only two kinds of particles: survival particles and birth particles. The spawn particles of the SMC-PHD filter are disregarded. For the $n_{k-1}$ survival particles, the particle state is calculated by the transition function $F$:

(5) $x_{k|k-1}^i = F x_{k-1}^i$.

For the $b_k$ birth particles, the particle state is normally set in the tracking area. The particle weight is calculated by

(6) $\omega_{k|k-1}^i = \begin{cases} \dfrac{\phi_{k|k-1}(x_k^i \mid x_{k-1}^i)\,\omega_{k-1}^i}{q_k(x_k^i \mid x_{k-1}^i, Z_k)}, & i = 1, \ldots, n_{k-1}, \\ \dfrac{\xi_k(x_k^i)}{b_k\,p_k(x_k^i \mid Z_k)}, & i = n_{k-1}+1, \ldots, n_{k-1}+b_k. \end{cases}$

However, if a new birth particle is located near the survival particles, then one target is repeatedly estimated by survival particles and birth particles. Thus, the number of targets would exceed the ground truth. To address this problem, we propose a novel birth density function based on the target state history:

(7) $\xi_k(x_k^i) = \max\!\left(p_b,\; p_s \max_{\omega_k^j \in \Omega_k} \delta_{x_k^i}(x_k^j)\,\mathbf{1}_{X_{k-1}}(x_k^j)\,\omega_k^j\right)$,

where

(8) $\delta_{x_k^i}(x_k^j) = \begin{cases} 1, & \text{if } x_k^i = x_k^j, \\ 0, & \text{otherwise}, \end{cases} \qquad \mathbf{1}_{X_{k-1}}(x_k^j) = \begin{cases} 1, & \text{if } x_k^j \subset X_{k-1}, \\ 0, & \text{otherwise}, \end{cases}$

and $p_s$ is the survival probability and $p_b$ is the birth probability. $p_s$ represents the probability that the sample $x_k^i$ still exists. When $p_s = 1$, a sample still exists in the new dataset. When $p_s = 0$, samples are resampled, and samples in different iterations are independent.

### 3.2. Update Step

In the update step, the particle states are further updated according to the output of YOLO, $\{Z_k\}_{k=1}^{n_I}$. The update step of the PHD recursion is approximated by updating the weights of the predicted particles once the likelihood $h_k(z_k \mid x_k^i)$ is obtained. The predicted weights are updated as

(9) $\omega_k^i = \left[1 - p_D(x_k^i) + \sum_{z_k \in Z_k} \dfrac{p_D(x_k^i)\,h_k(z_k \mid x_k^i)}{\kappa_k(z_k) + C_k(z_k)}\right]\omega_{k|k-1}^i$,

where

(10) $C_k(z_k) = \sum_{i=1}^{n_{k-1}+b_k} p_{D,k}(x_k^i)\,h_k(z_k \mid x_k^i)\,\omega_{k|k-1}^i$.

The detection probability $p_D(x_k^i)$ is abbreviated as $p_{D,k}^i$ in the following. The number of targets is estimated as the sum of the weights, $S_k = \sum_{i=1}^{n_k} \omega_k^i$.

To suppress the clutter, the clutter density function $\kappa_k(\cdot)$ is applied, and the value of $\kappa_k(z_k)$ is varied for the different detections $z_k$. $\kappa_k(z_k)$ indicates the level of clutter and is a set value. When $z_k^r$ has a high probability of being clutter, $\kappa_k(z_k)$ is set to a high value. If the detection is not clutter, then $\kappa_k(z_k)$ is set to 0. Normally, $\kappa_k(z_k)$ is set as a constant or estimated by the Beta-Gaussian mixture model [32]. $p_D(x_k^i)$ is the detection probability, which is chosen based on the sample and can be estimated by the Gaussian mixture model [32]. If the sample is occluded, then $p_D(x_k^i)$ has a low value (near 0). Therefore, occluded samples have high weights and are selected for retraining the YOLO network. If the sample is not occluded, then $p_D(x_k^i)$ is equal to 1, and the value $h_k^{i,r}$ is not changed.
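A compact Python sketch of these two steps is given below. It assumes a fixed linear transition, detections reduced to their $(u, v)$ centres, and a Gaussian distance likelihood as a stand-in for $h_k$; the function names, parameter values, and state layout are ours, not part of the original method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fixed linear transition on [u, v, du, dv], used here as a stand-in for F in equation (5).
F = np.array([[1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)

def predict(states, weights, p_survival=0.99, n_birth=50, birth_weight=0.01):
    """Propagate survival particles with F and append uniformly drawn birth particles."""
    survived = states @ F.T
    births = rng.uniform(low=[0, 0, -5, -5], high=[720, 480, 5, 5], size=(n_birth, 4))
    new_states = np.vstack([survived, births])
    new_weights = np.concatenate([p_survival * weights, np.full(n_birth, birth_weight)])
    return new_states, new_weights

def update(states, weights, detections, p_detect=0.9, kappa=0.1, sigma=20.0):
    """Weight update of equation (9); the likelihood is a Gaussian on the (u, v) distance."""
    updated = (1.0 - p_detect) * weights
    for z in detections:
        lik = np.exp(-np.sum((states[:, :2] - z) ** 2, axis=1) / (2.0 * sigma ** 2))
        denom = kappa + np.sum(p_detect * lik * weights)   # kappa_k(z) + C_k(z)
        updated += p_detect * lik * weights / denom
    return updated

states = rng.uniform(low=[0, 0, -5, -5], high=[720, 480, 5, 5], size=(300, 4))
weights = np.full(300, 0.01)
states, weights = predict(states, weights)
weights = update(states, weights, detections=[np.array([120.0, 90.0])])
print(f"estimated number of targets: {weights.sum():.2f}")
```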
### 3.3. Likelihood Function

In addition to the detection probability and the clutter density, the likelihood density determines whether a sample is selected for retraining. Samples with high weights are employed to retrain the YOLO network, while samples with low weights are disregarded. The likelihood density is applied to represent the relationship between the detections of the YOLO network and the samples. Therefore, we define the likelihood as

(11) $h_k = f_s \max(f_x, \beta_k)$,

where

(12) $\beta_k = \beta_0^{\,k}$.

During the iterative process, $\beta_k$ is decreased. When the selected sample applied to retrain the YOLO detector has a high associated score, the sample likelihood is maximized. The confidence scores $f_s$ are provided by the YOLO network output layer. When $f_s = 0$, the weight of the sample is set to 0, and the sample is removed from the specialized dataset. $f_x$ indicates whether the sample was detected by the YOLO network. For visual cues, we calculate the Euclidean distance between the selected sample $x_k^i$ and the previous estimates $X_{k-1}$:

(13) $f_x = e^{\sum_{x_k^i \in X_k} D_k^{i,r}/\alpha_k^i}$,

where

(14) $D_k^{i,r} = \sqrt{(u_k^r - u_k^i)^2 + (v_k^r - v_k^i)^2 + (w_k^r - w_k^i)^2 + (h_k^r - h_k^i)^2}$,

and $(u_k^r, v_k^r, w_k^r, h_k^r)$ is the state of the detection $z_k^r$. To select high-score samples $x_k^i$, we use a dynamic threshold:

(15) $\alpha_k^i = \begin{cases} \max_{x^j \in X_{t-1}} \delta_{x_k^i}(x^j)\,\mathbf{1}_{X_{k-1}}(x^j)\,\delta_{y^j}(y_k^i)\,s^j, & \text{if } k \neq 0, \\ \alpha_0, & \text{if } k = 0, \end{cases}$

where $y^j$ and $y_k^i$ are the target class labels calculated by $Y_{t-1}$, $s^j$ is the associated score, and $\alpha_0$ is the initial threshold.

### 3.4. Resampling Step

The SMC-PHD filter is utilized to construct a new, specific dataset for retraining according to the resampling approach, in which resamples from the weighted dataset are included in the generated dataset $\{x_k^i\}_{i=1}^{n_k}$. However, the traditional SMC-PHD filter suffers from the weight degeneracy problem, and the number of effective samples decreases during the retraining step. To generate a new, unweighted dataset with the same number of samples as the weighted dataset, a sampling strategy is employed. The effective sample size (ESS) of $\{x_k^i, \omega_k^i\}_{i=1}^{n_k}$ is calculated as

(16) $\mathrm{ESS} = \dfrac{\left(\sum_{i=1}^{n_k} \omega_k^i\right)^2}{\sum_{i=1}^{n_k} \left(\omega_k^i\right)^2}$.

When the ESS is greater than 0.5, the particles can be considered to be positive samples for the special training dataset. When the ESS is less than 0.5, the particles should be resampled via Kullback–Leibler distance (KLD) sampling [33]:

(17) $\{x_k^i\}_{i=1}^{n_k} \leftarrow \{x_k^i, \omega_k^i\}_{i=1}^{n_k}$.

An extra k-means step is used to estimate $X_k$ based on the particles $\{x_k^i\}_{i=1}^{n_k}$. Note that the aspect ratio of the positive training samples may differ from the initial anchors $A_{t-1}$, as positive samples are selected by IoU overlap. We therefore employ the k-means method to cluster the aspect ratios of the samples and update the anchors. To decrease the computational cost, only three anchors are used to retrain the YOLO network; they are set to $A_t$. These proposals are employed to retrain the YOLO network, which is produced by fine-tuning on the specific dataset. In the next iteration, the retrained network becomes the input of the prediction step and is used to create target proposals (bounding boxes) in the target scene.
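The resampling decision can be sketched as follows. The ESS is normalised by the particle count so that the 0.5 threshold from the text applies directly, and systematic resampling is used as a simple stand-in for the KLD sampling of [33].

```python
import numpy as np

rng = np.random.default_rng(2)

def effective_sample_size(weights):
    """ESS of equation (16), divided by the particle count so the result lies in (0, 1]."""
    w = np.asarray(weights, dtype=float)
    return (w.sum() ** 2) / (np.sum(w ** 2) * len(w))

def systematic_resample(states, weights):
    """Return an equally weighted particle set of the same size and total mass."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    cumulative = np.cumsum(weights / weights.sum())
    indices = np.minimum(np.searchsorted(cumulative, positions), n - 1)
    return states[indices], np.full(n, weights.sum() / n)

states = rng.random((200, 4))
weights = rng.random(200)
if effective_sample_size(weights) < 0.5:
    states, weights = systematic_resample(states, weights)
```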
### 3.5. GM-PHD YOLO

The SMC-PHD filter is mainly discussed and applied to improve the YOLO network since it is more robust than the GM-PHD filter for nonlinear systems. However, for linear systems, the GM-PHD filter can provide a higher accuracy rate than the SMC-PHD filter. Therefore, in this subsection, we briefly discuss how to use the GM-PHD filter to improve the YOLO network. The pipeline of GM-PHD YOLO is similar to that of SMC-PHD YOLO. YOLO is pretrained on the generic dataset, and the GM-PHD filter assists in building the custom dataset from the unlabelled target sequences. YOLO is then fine-tuned on this custom dataset. When the GM-PHD filter selects the samples, the steps include the prediction step, the update step, and pruning.

In the GM-PHD filter, $x_{k-1}^i$ is distributed across the state space according to the Gaussian density $\mathcal{N}(m_{k-1}^i, P_{k-1}^i)$, where $m_{k-1}^i$ and $P_{k-1}^i$ are the mean and covariance, respectively. In the prediction step, for existing targets, the mean and covariance are predicted as $m_{k|k-1}^i = F m_{k-1}^i$ and $P_{k|k-1}^i = Q + F P_{k-1}^i F^T$, respectively, where $Q$ is the transition noise covariance. Their weight is calculated as $\omega_{k|k-1}^i = p_s\,\omega_{k-1}^i$. Birth targets are randomly chosen in the tracking area. In the update step, for undetected targets, the mean and covariance retain their values, and their weights are calculated as $\omega_k^i = (1 - p_D)\,\omega_{k|k-1}^i$. For detected targets, the mean is calculated as

(18) $m_k^i = m_{k|k-1}^i + P_{k|k-1} H^T \left(R + H P_{k|k-1} H^T\right)^{-1} \left(z_k - H m_{k|k-1}^i\right)$.

The covariance is updated as

(19) $P_k^i = \left[I - P_{k|k-1} H^T \left(R + H P_{k|k-1} H^T\right)^{-1} H\right] P_{k|k-1}^i$.

The component weight is updated as

(20) $\omega_k^i = p_D\,\omega_{k|k-1}^i\,\mathcal{N}\!\left(z_k;\, H m_{k|k-1}^i,\, R + H P_{k|k-1} H^T\right)$.

The weight is normalized as

(21) $\omega_k^i = \dfrac{\omega_k^i}{\kappa_k(z_k) + \sum_{i=1}^{n_k} \omega_k^i}$.

A simple pruning procedure is further employed to reduce the number of Gaussian components. The high-weight targets are set to $X_k$ and are utilized to build the custom dataset.
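The sketch below shows the Kalman-style update of equations (18)-(20) for a single Gaussian component, with an assumed observation matrix $H$ that reads the $(u, v)$ position out of a four-dimensional state; the shapes and noise values are illustrative only.

```python
import numpy as np

def gm_phd_update(m_pred, P_pred, z, H, R, p_detect=0.9):
    """Update one Gaussian component: equations (18), (19), and the weight factor of (20)."""
    S = R + H @ P_pred @ H.T                          # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)               # gain
    m_upd = m_pred + K @ (z - H @ m_pred)             # equation (18)
    P_upd = (np.eye(len(m_pred)) - K @ H) @ P_pred    # equation (19)
    diff = z - H @ m_pred
    gauss = np.exp(-0.5 * diff @ np.linalg.solve(S, diff)) / np.sqrt(np.linalg.det(2 * np.pi * S))
    return m_upd, P_upd, p_detect * gauss             # weight factor of equation (20)

H = np.hstack([np.eye(2), np.zeros((2, 2))])          # observe (u, v) from [u, v, du, dv]
m, P, w_factor = gm_phd_update(np.array([100.0, 80.0, 1.0, 0.5]), np.eye(4) * 25.0,
                               np.array([104.0, 78.0]), H, np.eye(2) * 4.0)
print(m, w_factor)
```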
## 4. Experimental Results

This section presents the test results obtained on several public and private datasets. First, the implementation details of our proposed method are given. Second, the datasets and baseline algorithms are introduced. Third, an ablation study of the SMC-PHD YOLO filter is discussed. Finally, our proposed SMC-PHD YOLO detector and several baseline methods are compared.

### 4.1. Implementation Details

The initial YOLO in our proposed SMC-PHD YOLO filter is pretrained on the COCO dataset [31]. The Adam optimizer is applied, where the weight decay is 0.0005 and the momentum is 0.9. Although the transition matrix $F$ differs substantially across the different object classes in the different datasets, to simplify the problem, we assume $F$ to be

(22) $F = \begin{pmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$.

The YOLO network is fine-tuned on our evaluation datasets for the different tasks with the help of the SMC-PHD-based transfer method. The YOLO detector is tuned with a 64 GB NVIDIA GeForce GTX TITAN X GPU.
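For illustration, the matrix of equation (22) can be written out and applied to a state vector as below; reading the third and fourth state components as per-frame increments of the first two is our interpretation of this constant transition, not something stated explicitly in the text.

```python
import numpy as np

# Transition matrix of equation (22): the third and fourth components are added to the
# first two at each frame, i.e., a constant-velocity-style update (our reading).
F = np.array([[1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)

state = np.array([120.0, 90.0, 4.0, -2.0])  # e.g., [u, v, du, dv]
print(F @ state)                             # -> [124.  88.   4.  -2.]
```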
### 4.2. Evaluation Methodology and Datasets

We train the YOLO detector on a training collection containing 80k training frames and 500k example annotations from the COCO dataset, which contains 2.5 million labelled instances among 328k images of 91 object categories. Although the COCO dataset does not contain continuous frames, it is only used to pretrain the YOLO network before the experiments. In the evaluation step, the datasets should contain continuous frames. The evaluation was performed with three different datasets.

GOT-10k [34] is a large-scale visual dataset with broad coverage of real-world objects. It contains 10k videos of 563 categories, and its category coverage is more than one order of magnitude wider than that of counterparts of a similar scale. Some of its categories are not included in the COCO dataset. Therefore, GOT-10k is suitable for fine-tuning the YOLO network pretrained on the COCO dataset. The annotations that we tested include birds, cars, tapirs, and cows. YouTubeBB [35] is a large, diverse dataset with 380,000 video segments and 5.6 million human-drawn bounding boxes in 23 categories from 240,000 distinct YouTube videos. Each video includes time-localized, frame-level features, so classifier predictions at segment-level granularity are feasible. The annotations that we tested include cars and zebras. In the MIT Traffic dataset [36], a 90-minute video is provided. A total of 420 frames from the first 45 minutes are employed for specialization, and 420 images from the last 45 minutes are utilized for testing. The video was recorded by a stationary camera. The size of the scene is 720 by 480 pixels, and the video is divided into 20 clips. The annotation that we tested includes only cars. False positives per image (FPPI) and receiver operating characteristic (ROC) curves are used to evaluate our proposed detector and the baseline methods. The pipeline of the data preparation for the PHD YOLO experiment is shown in Figure 4.

Figure 4 The pipeline of the data preparation for the PHD YOLO experiment.

### 4.3. Baseline Methods

The algorithms compared with the SMC-PHD YOLO algorithm are Generic YOLO [5], Generic Faster R-CNN [37], SMC Faster R-CNN [8], that of Singh et al. [38], that of Deshmukh and Moh [39], that of Kang et al. [40], that of Maâmatou et al. [10], the spatiotemporal sampling network (STSN) [41], salient object detection (SOD) [42], that of Lee et al. [43], that of Jie et al. [44], and that of Ghahremani et al. [45]. Table 1 shows the comparison between the baseline methods and our method. The detector pretrained on the general dataset is presented in the second column. Some methods automatically fine-tune the network with a target dataset collected by the approach shown in the third column; methods without a fine-tuning step, such as the algorithm of Kang et al. [40], have no entry in this column. The computational complexity of fine-tuning with the target dataset is shown in the last column, where $n_I$ is the number of frames in the video, $n$ is the number of particles for the SMC method, $m$ is the average number of targets in each frame, $l \times h$ is the size (length $\times$ width) of the frame, and $a$ is the number of auxiliary networks.

Table 1 Comparison between baseline methods and our method.

| Baseline | Detector | Fine-tuned with | Computational complexity |
|---|---|---|---|
| YOLO [5] | YOLO | – | – |
| R-CNN [37] | R-CNN | – | – |
| SMC R-CNN [8] | R-CNN | SMC | $O(n_I \cdot n \cdot m)$ |
| Singh et al. [38] | R-CNN | Track and segment [46] | $O(n_I \cdot l \cdot h)$ |
| Deshmukh and Moh [39] | CNN | Edge detectors | $O(n_I \cdot l \cdot h)$ |
| Kang et al. [40] | Contextual R-CNN | – | – |
| Maâmatou et al. [10] | SVM | SMC | $O(n_I \cdot n \cdot m)$ |
| STSN [41] | STSN | – | – |
| SOD [42] | SOD | R101 FPN | $O(n_I \cdot n \cdot m)$ |
| Lee et al. [43] | R-CNN | Auxiliary network | $O(n_I \cdot n \cdot m \cdot a)$ |
| Jie et al. [44] | R-CNN | Online supportive sample harvesting [44] | $O(n_I \cdot n \cdot m)$ |
| Ghahremani et al. [45] | CNN | F1 score threshold | $O(n)$ |
| SMC-PHD YOLO | YOLO | SMC-PHD | $O(n_I \cdot n \cdot m)$ |

### 4.4. SMC-PHD Filter YOLO for Multitarget Detection

In this subsection, we discuss the contribution of the SMC-PHD filter to our proposed method via three experiments. In these experiments, we evaluate the influence of the detection probability and the clutter density. Note that for a fixed labelled dataset and a fixed YOLO, these parameters are also fixed and can be measured from the dataset. To show their contribution, we set different values in the experiments.
#### 4.4.1. Detection Probability

To evaluate the effect of the detection probability, we set it to different constants. The detection probability in the SMC-PHD filter is incrementally increased from 0 to 1, and six situations are considered: 0, 0.2, 0.4, 0.6, 0.8, and 1. The YouTubeBB dataset is selected since it includes several situations. For example, the vehicles in traffic videos are frequently occluded by other vehicles, while airplanes at an airport always appear in the scene.

Table 2 shows the FPPI of the SMC-PHD YOLO network versus the detection probability and category. A correctly estimated detection probability can produce a high FPPI. For example, since the airplanes are always shown in the centre of the scene in the airplane sequences, the best FPPI for the airplane category is obtained at $p_{D,k} = 0.2$. The best result for the car category is obtained at $p_{D,k} = 0.6$ due to the occluded cars. Therefore, if targets are frequently occluded, then the detection probability should be set to a high value. Furthermore, for the airplane category, the FPPI at $p_{D,k} = 1$ is only 85% of that at $p_{D,k} = 0.2$. Thus, if the detection probability is too high, such as 1, then the FPPI of the detection would decrease.

Table 2 FPPI of the SMC-PHD YOLO network versus detection probability for the “airplane” and “car” categories of the YouTubeBB dataset.

| $p_{D,k}$ | 0 | 0.2 | 0.4 | 0.6 | 0.8 | 1 |
|---|---|---|---|---|---|---|
| Airplane | 0.80 | 0.81 | 0.78 | 0.75 | 0.72 | 0.69 |
| Car | 0.80 | 0.83 | 0.86 | 0.88 | 0.85 | 0.81 |

#### 4.4.2. Clutter Density Function

The clutter density function is employed to address the clutter problem. For the PHD filter, the clutter density function varies with the detection results, and it is given a constant value in many references [26, 28, 32, 47]. In these experiments, the clutter density is a constant value for all detections. However, a large $\kappa_k(z_k^r)$ may decrease the weights of the targets, which leads to an insufficient number of samples being included in the training dataset. A low $\kappa_k(z_k^r)$ cannot address the clutter problem, and the retrained YOLO model remains sensitive to clutter. Since $\kappa_k(z_k^r)$ is normally set to a value from 0 to infinity, we test 8 different values on the boat and bicycle sequences of the YouTubeBB dataset. Distant buildings may be detected as boats, and the bicycle detection performance is also easily affected by the surroundings. The results are shown in Table 3. The highest FPPIs for the boat and bicycle sequences are obtained at $\kappa_k = 0.3$ and $\kappa_k = 0.1$, respectively, since the level of clutter varies for different categories. For “boat,” if $\kappa_k$ is lower than 0.3, the FPPI slightly decreases since clutter is added to the specialized training data and the retrained model is still sensitive to the clutter. If $\kappa_k$ exceeds 0.3, the FPPI also decreases since the weights of the target samples decrease and the retraining dataset does not include sufficient training samples.

Table 3 FPPI of the SMC-PHD YOLO network versus the clutter density $\kappa_k$ for the “boat” and “bicycle” categories of the YouTubeBB dataset.

| $\kappa_k$ | 0 | 0.1 | 0.3 | 0.7 | 0.9 | 1 | 5 | 10 |
|---|---|---|---|---|---|---|---|---|
| Boat | 0.81 | 0.82 | 0.84 | 0.82 | 0.74 | 0.68 | 0.58 | 0.51 |
| Bicycle | 0.67 | 0.69 | 0.65 | 0.59 | 0.51 | 0.48 | 0.39 | 0.34 |

### 4.5. Error Analysis of the SMC-PHD YOLO Network

Since the target dataset is automatically generated by an SMC-PHD filter, it may include some error samples with incorrect labels. To analyse whether these error samples affect the final performance, we test our SMC-PHD YOLO network with the YouTubeBB dataset. The annotations that we employ comprise cars and zebras. The video for each annotation is 20 min long and contains 36,000 frames.
These frames are manually labelled by researchers and automatically labelled by our method. After manually labelling these videos, 831,615 and 88,234 positive target samples were obtained for cars and zebras, respectively, since multiple targets may appear in the same frame. Among the labels assigned by our method, “cars” includes 797,660 true-positive samples and 212 false-positive samples, while “zebras” includes 69,821 true-positive samples and 17 false-positive samples. These results show that the algorithm assigns fewer labels than humans because some tiny targets and low-probability targets are treated as clutter and disregarded. “Car” has a higher recall rate (96%) than “zebra” (79%) since cars, with their regular profile, are easier to detect. To further analyse these error samples, we plot the data distributions. The selected features comprise the input of the last fully connected layer of YOLO. Two main dimensions are selected by t-distributed stochastic neighbour embedding. Figure 5 shows the data distribution of true positives, false positives, and false negatives. It indicates that tiny targets are considered to be outliers and are disregarded. We also discovered that some clutter (green points) in the target dataset is considered to be positive samples (false positives). After this clutter is manually removed from the target dataset, the YOLO performance does not change. A likely reason is the high threat score (99%): the SMC-PHD filter disregards the most uncertain samples. However, this approach does not fundamentally solve the problem of clutter, since some low-probability positive samples are considered to be false negatives (red points). Some researchers suggest the use of extra information, such as audio information, to address the clutter problem [48]. Addressing the clutter problem will be one of our future research topics.

Figure 5 Data distribution of true positives, false positives, and false negatives for “car” and “zebra” of the YouTubeBB dataset.
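A hedged sketch of this embedding step is shown below, using scikit-learn's t-SNE on randomly generated stand-in features; in the actual analysis, the features would come from the last fully connected layer of YOLO and the TP/FP/FN labels from the comparison with the manual annotations.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(3)
features = rng.normal(size=(300, 256))             # stand-in for YOLO last-layer features
labels = rng.choice(["TP", "FP", "FN"], size=300)  # stand-in for the sample outcomes

# Project the features to two dimensions and summarise each outcome group.
embedding = TSNE(n_components=2, perplexity=30, init="pca", random_state=0).fit_transform(features)
for outcome in ("TP", "FP", "FN"):
    points = embedding[labels == outcome]
    print(outcome, points.mean(axis=0))
```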
### 4.6. Scene-Specialized Multitarget Detector

To show the performance of the PHD method for transfer learning, we compare the baseline YOLO network, the SMC YOLO network, SMC R-CNN, and our proposed SMC-PHD YOLO and GM-PHD YOLO networks on the YouTubeBB dataset. Since SMC R-CNN cannot address occluded samples, we also build SMC-PHD R-CNN, which uses SMC-PHD to improve the performance of Faster R-CNN and show the effect of the PHD method. We train the YOLO network with a general training set (the COCO dataset), which contains a limited amount of target data. SMC-PHD then augments a dataset containing unseen data. The unseen data in the augmented dataset are assigned labels that may contain errors. YOLO is fine-tuned on this target dataset and is then applied without the SMC-PHD filter; the SMC-PHD filter is only applied to augment data in this work. The parameters of the PHD filter are chosen according to the Beta-Gaussian mixture model [32]. We test these methods on the airplane, bicycle, boat, and car categories of the YouTubeBB dataset. For the different categories, we train separate SMC-PHD YOLO networks with independent parameters. The results for the YOLO network and R-CNN fine-tuned by the SMC-PHD, GM-PHD, and SMC filters are shown in Table 4. After fine-tuning YOLO, the filters are not employed for target detection. Our proposed method has the highest FPPI value of all methods for the boat and car categories, and SMC-PHD YOLO performs similarly to SMC-PHD R-CNN. According to the results, SMC improves the performance of YOLO and R-CNN by approximately 8%, and PHD further improves their performance by approximately 6%. Although GM-PHD YOLO has an 8% higher FPPI than YOLO, its FPPI is still lower than that of SMC-PHD YOLO. We speculate that the reason for this is that the number of bounding boxes identified by GM-PHD YOLO is 4% larger than that identified by SMC-PHD YOLO. This indicates that SMC-PHD YOLO is more robust than GM-PHD YOLO. Therefore, in the following experiments, we mainly test SMC-PHD YOLO.

Table 4 FPPI of our proposed SMC-PHD YOLO, GM-PHD YOLO, SMC YOLO, YOLO, SMC-PHD R-CNN, and SMC R-CNN on the YouTubeBB dataset.

| Method | Airplane | Bicycle | Boat | Car |
|---|---|---|---|---|
| SMC-PHD YOLO | 0.81 | 0.69 | 0.84 | 0.88 |
| GM-PHD YOLO | 0.79 | 0.65 | 0.82 | 0.84 |
| SMC YOLO | 0.76 | 0.63 | 0.76 | 0.81 |
| YOLO | 0.71 | 0.57 | 0.68 | 0.76 |
| SMC-PHD R-CNN | 0.82 | 0.70 | 0.83 | 0.88 |
| SMC R-CNN | 0.79 | 0.67 | 0.83 | 0.89 |

Some results of the proposed method and the baseline methods are shown in Figure 6. The first and second rows of each subfigure show the detections of the Generic YOLO and the specialized YOLO, respectively. In Figure 6(a), the flapping bird is detected only by the specialized YOLO detectors. Thus, our proposed method can customize the detector for a moving target because the dataset is selected from a sequence with the likelihood function. In addition, some occluded cars are detected by our proposed method due to the detection probability. In Figure 6(b), cars and zebras are successfully detected by the specialized YOLO detector, even though only parts of the vehicles and zebras are shown in the images. For the traffic sequences shown in Figure 6(c), the number of cars detected with the specialized YOLO detector is higher than that detected with the Generic YOLO detector. With the SMC-PHD filter, our proposed method can detect occluded cars and certain small vehicles.

Figure 6 Improvement of the scene-specific detector for GOT-10k (a), YouTubeBB (b), and MIT Traffic (c). The first row of each subfigure shows the Generic YOLO, and the second row of each subfigure shows the SMC-PHD YOLO detector.
To further evaluate our proposed method, we compare it with other baseline methods, namely, that of Singh et al. [38], that of Deshmukh and Moh [39], that of Kang et al. [40], that of Maâmatou et al. [10], STSN [41], SOD [42], that of Lee et al. [43], that of Jie et al. [44], and that of Ghahremani et al. [45]. Figure 7 shows the ROC curves of the detectors for the different annotations. In this experiment, we chose the bird and boat categories from the GOT-10k and YouTubeBB datasets and the car category from the MIT Traffic dataset. Due to the page limitation, Figures 7(a) and 7(b) only show a comparison between SMC-based detectors, such as SMC-PHD YOLO, and generic detectors, such as YOLO. The comparison between our proposed method and state-of-the-art methods is shown in Figures 7(c)–7(e). In Figure 7(a), the method of Kang achieves a higher true-positive rate than those of Kumar and Dalal because it is specially designed for boat detection. Compared with the Generic YOLO for boat detection, the SMC-PHD YOLO detector achieves an ROC improvement of 13%. As the boat is often occluded in the bay, the SMC-PHD YOLO detector with the detection probability performs better than the other methods. The boat detection results on the YouTubeBB dataset are similar to those on the GOT-10k dataset. Compared with the generic methods, the specialized methods achieve ROC improvements of approximately 10%. More baseline transfer learning methods are considered in Figure 7(c), where they are shown as dashed lines. The transfer methods achieve better performance than the generic R-CNN or YOLO methods. SMC R-CNN achieves a similar ROC value to the other transfer-based detectors. Based on SMC, the SMC R-CNN detector and the SMC-PHD YOLO detector achieve increases in the ROC values of 3.8% and 5.8%, respectively, compared with their baseline methods. For car detection, we test the methods only on the MIT Traffic dataset. As shown by the ROC curves in Figure 7(e), the SMC-PHD YOLO detector outperforms all other car detection frameworks. The SMC-PHD YOLO detector also outperforms the four other specialized detectors, i.e., SMC Faster R-CNN, that of Kumar, that of Dalal, and that of Maamatou, by 5%, 6%, 9%, and 2%, respectively.

Figure 7 ROC curves for the Kumar, Dalal, Faster R-CNN, SMC Faster R-CNN, YOLO, and SMC-PHD YOLO methods with the bird (a) and boat (b) annotations of GOT-10k, the bird (c) and boat (d) annotations of YouTubeBB, and the car (e) annotation of the MIT Traffic dataset.

Table 5 reports the average detection rate of our proposed method and other state-of-the-art methods on the different datasets. We list the ten annotations on GOT-10k and YouTubeBB. As the Kang and Maamatou methods are designed for boat and traffic detection, respectively, they are not included in this table. Our proposed method achieves the highest detection rate, especially on the MIT Traffic dataset. SMC-PHD YOLO can detect occluded targets, such as cars. Although SMC R-CNN achieves a detection rate similar to that of the SMC-PHD YOLO detector, the number of frames per second (FPS) of the SMC-PHD YOLO network is 100 times that of SMC R-CNN. Therefore, the SMC-PHD YOLO detector considerably outperforms the generic detector on several annotations of all the evaluated datasets. Compared to the baseline YOLO detector, the SMC-PHD YOLO detector achieves a 12% higher detection rate.

Table 5 Detection rate for the different datasets with different detectors (at 1 FPPI).

| Dataset | Category | SMC-PHD YOLO | YOLO [5] | SMC R-CNN [8] | Kumar [38] | Dalal [39] | STSN [41] | Jie [44] | Ghahremani [45] | Lee [43] | SOD [42] |
|---|---|---|---|---|---|---|---|---|---|---|---|
| YouTubeBB | Airplane | 0.91 | 0.81 | 0.87 | 0.80 | 0.80 | 0.83 | 0.85 | 0.86 | 0.83 | 0.89 |
| YouTubeBB | Bicycle | 0.89 | 0.77 | 0.86 | 0.78 | 0.75 | 0.82 | 0.83 | 0.82 | 0.84 | 0.87 |
| YouTubeBB | Bird | 0.94 | 0.85 | 0.92 | 0.82 | 0.84 | 0.87 | 0.86 | 0.86 | 0.88 | 0.89 |
| YouTubeBB | Boat | 0.98 | 0.87 | 0.96 | 0.83 | 0.84 | 0.89 | 0.91 | 0.92 | 0.93 | 0.95 |
| YouTubeBB | Bus | 0.96 | 0.83 | 0.95 | 0.82 | 0.81 | 0.89 | 0.86 | 0.90 | 0.92 | 0.93 |
| YouTubeBB | Car | 0.98 | 0.86 | 0.95 | 0.84 | 0.83 | 0.91 | 0.92 | 0.92 | 0.93 | 0.94 |
| YouTubeBB | Cat | 0.94 | 0.85 | 0.92 | 0.82 | 0.83 | 0.93 | 0.91 | 0.93 | 0.92 | 0.95 |
| YouTubeBB | Cow | 0.98 | 0.87 | 0.95 | 0.86 | 0.88 | 0.95 | 0.96 | 0.94 | 0.95 | 0.96 |
| YouTubeBB | Dog | 0.92 | 0.81 | 0.89 | 0.80 | 0.82 | 0.88 | 0.91 | 0.89 | 0.90 | 0.88 |
| YouTubeBB | Horse | 0.96 | 0.85 | 0.94 | 0.86 | 0.86 | 0.92 | 0.90 | 0.89 | 0.93 | 0.95 |
| GOT-10k | Anteater | 0.53 | 0.39 | 0.41 | 0.37 | 0.42 | 0.52 | 0.48 | 0.51 | 0.49 | 0.52 |
| GOT-10k | Bird | 0.94 | 0.88 | 0.92 | 0.87 | 0.79 | 0.86 | 0.86 | 0.88 | 0.89 | 0.93 |
| GOT-10k | Cat | 0.91 | 0.83 | 0.90 | 0.84 | 0.79 | 0.84 | 0.86 | 0.88 | 0.90 | 0.87 |
| GOT-10k | Elephant | 0.88 | 0.73 | 0.86 | 0.75 | 0.70 | 0.82 | 0.84 | 0.87 | 0.89 | 0.85 |
| GOT-10k | Boat | 0.98 | 0.87 | 0.97 | 0.84 | 0.84 | 0.87 | 0.89 | 0.92 | 0.94 | 0.97 |
| GOT-10k | Goat | 0.88 | 0.72 | 0.87 | 0.76 | 0.69 | 0.78 | 0.80 | 0.83 | 0.85 | 0.87 |
| GOT-10k | Horse | 0.87 | 0.71 | 0.85 | 0.73 | 0.75 | 0.81 | 0.83 | 0.84 | 0.86 | 0.85 |
| GOT-10k | Lion | 0.86 | 0.73 | 0.84 | 0.71 | 0.77 | 0.81 | 0.83 | 0.84 | 0.85 | 0.83 |
| GOT-10k | Car | 0.95 | 0.85 | 0.91 | 0.86 | 0.87 | 0.85 | 0.87 | 0.93 | 0.94 | 0.94 |
| GOT-10k | Tank | 0.74 | 0.61 | 0.68 | 0.63 | 0.61 | 0.63 | 0.66 | 0.69 | 0.71 | 0.73 |
| MIT Traffic | Pedestrian | 0.97 | 0.85 | 0.93 | 0.86 | 0.82 | 0.91 | 0.93 | 0.95 | 0.94 | 0.96 |
| MIT Traffic | Car | 0.95 | 0.88 | 0.89 | 0.93 | 0.89 | 0.90 | 0.92 | 0.93 | 0.95 | 0.96 |
| Average | | 0.90 | 0.79 | 0.87 | 0.79 | 0.78 | 0.84 | 0.85 | 0.86 | 0.87 | 0.89 |

Although our proposed method achieves the highest detection rate and large ROC values among all methods, the performance of the proposed SMC-PHD YOLO depends on hyperparameters such as the detection probability and the clutter density. These parameters should be established at the beginning of training based on previous experience. Some researchers have proposed solutions for estimating the parameters of the SMC-PHD filter. For example, Lian et al. [49] used expectation maximization to estimate the unknown clutter probability, and Li et al. [50] used the gamma Gaussian mixture model to estimate the detection probability. Applying this kind of estimation method to improve the SMC-PHD YOLO filter will be addressed in our future work.
To further evaluate our proposed method, we compare it with other baseline methods, such as that of Singh et al. [38], that of Deshmukh and Moh [39], that of Kang et al. [40], that of Maâmatou et al. [10], STSN [41], SOD [42], that of Lee et al. [43], that of Jie et al. [44], and that of Ghahremani et al. [45].

Figure 7 shows the ROC curves of the filters for the different annotations. In this experiment, we chose the bird and boat categories from the GOT-10k and YouTubeBB datasets and the car category from the MIT Traffic dataset. Due to the page limitation, Figures 7(a) and 7(b) only show a comparison between SMC-based detectors, such as SMC-PHD YOLO, and generic detectors, such as YOLO. The comparison between our proposed method and state-of-the-art methods is shown in Figures 7(c)–7(e). In Figure 7(a), the method of Kang achieves a higher true-positive rate than those of Kumar and Dalal because the former is specially designed for boat detection. Compared with the Generic YOLO for boat detection, the SMC-PHD YOLO detector achieves an ROC improvement of 13%. As boats are often occluded in the bay, the SMC-PHD YOLO detector, which models the detection probability, performs better than the other methods. The boat detection results on the YouTubeBB dataset are similar to those on the GOT-10k dataset. Compared with generic methods, specialized methods achieve ROC improvements of approximately 10%. More baseline transfer learning methods are considered in Figure 7(c), shown as dashed lines. These transfer learning methods achieve better performance than the generic R-CNN or YOLO methods. SMC R-CNN achieves an ROC value similar to those of the other transferred detectors. Based on SMC, the SMC R-CNN detector and the SMC-PHD YOLO detector achieve increases in ROC values of 3.8% and 5.8%, respectively, compared with their baseline methods. For car detection, we test the methods only on the MIT Traffic dataset. As shown by the ROC curves in Figure 7(e), the SMC-PHD YOLO detector outperforms all other car detection frameworks. It also outperforms the four other specialized detectors, i.e., SMC Faster R-CNN, that of Kumar, that of Dalal, and that of Maamatou, by 5%, 6%, 9%, and 2%, respectively.

Figure 7 ROC curves for the Kumar, Dalal, Faster R-CNN, SMC Faster R-CNN, YOLO, and SMC-PHD YOLO methods with the bird (a) and boat (b) annotations of GOT-10k, the bird (c) and boat (d) annotations of YouTubeBB, and the car (e) annotation of the MIT Traffic dataset.

Table 5 reports the average detection rate of our proposed method and other state-of-the-art methods for the different datasets. We list the ten annotations on GOT-10k and YouTubeBB. As the Kang and Maamatou methods are designed specifically for boat and traffic detection, they are not included in this table. Our proposed method achieves the highest detection rate, especially for the MIT Traffic dataset, because SMC-PHD YOLO can detect occluded targets, such as cars. Although SMC R-CNN achieves a detection rate similar to that of the SMC-PHD YOLO detector, the number of frames per second (FPS) of the SMC-PHD YOLO network is 100 times that of SMC R-CNN. Therefore, the SMC-PHD YOLO detector considerably outperforms the generic detector on several annotations across all the evaluated datasets. Compared with the baseline YOLO detector, the SMC-PHD YOLO detector achieves a 12% higher detection rate.

Table 5 Detection rate for the different datasets with different detectors (at 1 FPPI).
| Dataset | Category | SMC-PHD YOLO | YOLO [5] | SMC R-CNN [8] | Kumar [38] | Dalal [39] | STSN [41] | Jie [44] | Ghahremani [45] | Lee [43] | SOD [42] |
|---|---|---|---|---|---|---|---|---|---|---|---|
| YouTubeBB | Airplane | 0.91 | 0.81 | 0.87 | 0.80 | 0.80 | 0.83 | 0.85 | 0.86 | 0.83 | 0.89 |
| YouTubeBB | Bicycle | 0.89 | 0.77 | 0.86 | 0.78 | 0.75 | 0.82 | 0.83 | 0.82 | 0.84 | 0.87 |
| YouTubeBB | Bird | 0.94 | 0.85 | 0.92 | 0.82 | 0.84 | 0.87 | 0.86 | 0.86 | 0.88 | 0.89 |
| YouTubeBB | Boat | 0.98 | 0.87 | 0.96 | 0.83 | 0.84 | 0.89 | 0.91 | 0.92 | 0.93 | 0.95 |
| YouTubeBB | Bus | 0.96 | 0.83 | 0.95 | 0.82 | 0.81 | 0.89 | 0.86 | 0.90 | 0.92 | 0.93 |
| YouTubeBB | Car | 0.98 | 0.86 | 0.95 | 0.84 | 0.83 | 0.91 | 0.92 | 0.92 | 0.93 | 0.94 |
| YouTubeBB | Cat | 0.94 | 0.85 | 0.92 | 0.82 | 0.83 | 0.93 | 0.91 | 0.93 | 0.92 | 0.95 |
| YouTubeBB | Cow | 0.98 | 0.87 | 0.95 | 0.86 | 0.88 | 0.95 | 0.96 | 0.94 | 0.95 | 0.96 |
| YouTubeBB | Dog | 0.92 | 0.81 | 0.89 | 0.80 | 0.82 | 0.88 | 0.91 | 0.89 | 0.90 | 0.88 |
| YouTubeBB | Horse | 0.96 | 0.85 | 0.94 | 0.86 | 0.86 | 0.92 | 0.90 | 0.89 | 0.93 | 0.95 |
| GOT-10k | Anteater | 0.53 | 0.39 | 0.41 | 0.37 | 0.42 | 0.52 | 0.48 | 0.51 | 0.49 | 0.52 |
| GOT-10k | Bird | 0.94 | 0.88 | 0.92 | 0.87 | 0.79 | 0.86 | 0.86 | 0.88 | 0.89 | 0.93 |
| GOT-10k | Cat | 0.91 | 0.83 | 0.90 | 0.84 | 0.79 | 0.84 | 0.86 | 0.88 | 0.90 | 0.87 |
| GOT-10k | Elephant | 0.88 | 0.73 | 0.86 | 0.75 | 0.70 | 0.82 | 0.84 | 0.87 | 0.89 | 0.85 |
| GOT-10k | Boat | 0.98 | 0.87 | 0.97 | 0.84 | 0.84 | 0.87 | 0.89 | 0.92 | 0.94 | 0.97 |
| GOT-10k | Goat | 0.88 | 0.72 | 0.87 | 0.76 | 0.69 | 0.78 | 0.80 | 0.83 | 0.85 | 0.87 |
| GOT-10k | Horse | 0.87 | 0.71 | 0.85 | 0.73 | 0.75 | 0.81 | 0.83 | 0.84 | 0.86 | 0.85 |
| GOT-10k | Lion | 0.86 | 0.73 | 0.84 | 0.71 | 0.77 | 0.81 | 0.83 | 0.84 | 0.85 | 0.83 |
| GOT-10k | Car | 0.95 | 0.85 | 0.91 | 0.86 | 0.87 | 0.85 | 0.87 | 0.93 | 0.94 | 0.94 |
| GOT-10k | Tank | 0.74 | 0.61 | 0.68 | 0.63 | 0.61 | 0.63 | 0.66 | 0.69 | 0.71 | 0.73 |
| MIT Traffic | Pedestrian | 0.97 | 0.85 | 0.93 | 0.86 | 0.82 | 0.91 | 0.93 | 0.95 | 0.94 | 0.96 |
| MIT Traffic | Car | 0.95 | 0.88 | 0.89 | 0.93 | 0.89 | 0.90 | 0.92 | 0.93 | 0.95 | 0.96 |
| Average | | 0.90 | 0.79 | 0.87 | 0.79 | 0.78 | 0.84 | 0.85 | 0.86 | 0.87 | 0.89 |

Although our proposed method has the highest detection rate and large ROC values among all methods, the performance of the proposed SMC-PHD YOLO depends on hyperparameters such as the detection probability and the clutter density. These parameters must be set at the beginning of training based on prior experience. Some researchers have proposed solutions for estimating the parameters of the SMC-PHD filter. For example, Lian et al. [49] used expectation maximization to estimate the unknown clutter probability, and Li et al. [50] used a gamma Gaussian mixture model to estimate the detection probability. Applying this kind of estimation method to improve the SMC-PHD YOLO filter will be addressed in our future work.

## 5. Conclusion

To customize the YOLO detector for unique target identification, we proposed an effective and precise framework based on the SMC-PHD and GM-PHD filters. On the basis of the proposed confidence-score-based likelihood and a novel resampling strategy, the framework selects appropriate samples from the target data to retrain the detector, which is then used to detect the target. Given a Generic YOLO detector and some target videos, the framework automatically produces a strong specialized detector. The tests showed that the proposed framework can generate a specific YOLO detector that considerably outperforms the Generic YOLO detector on a distinct dataset for bird, boat, and vehicle detection. Correlated clutter is still challenging for SMC-PHD filters. Our future research will focus on extending the algorithm with multimodal information to address the correlated clutter problem.

---

*Source: 1010767-2022-04-28.xml*
2022
# Methane Source and Turnover in the Shallow Sediments to the West of Haima Cold Seeps on the Northwestern Slope of the South China Sea **Authors:** Junxi Feng; Shengxiong Yang; Hongbin Wang; Jinqiang Liang; Yunxin Fang; Min Luo **Journal:** Geofluids (2019) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2019/1010824 --- ## Abstract The Haima cold seeps are active cold seep areas that were recently discovered on the northwestern slope of the South China Sea (SCS). Three piston cores (CL30, CL44, and CL47) were collected within an area characterized by bottom simulating reflectors to the west of Haima cold seeps. Porewater profiles of the three cores exhibit typical kink-type feature, which is attributed to elevated methane flux (CL30) and bubble irrigation (CL44 and CL47). By simulating the porewater profiles of SO42-, CH4, PO43-, Ca2+, Mg2+, and dissolved inorganic carbon (DIC) in CL44 and CL47 using a steady-state reaction-transport model, we estimated that the dissolved SO42- was predominantly consumed by anaerobic oxidation of methane (AOM) at rates of 74.3 mmol m−2 yr−1 in CL44 and 85.0 mmol m−2 yr−1 in CL47. The relatively high AOM rates were sustained by free gas dissolution rather than local methanogenesis. Based on the diffusive Ba2+ fluxes and the excess barium contents in the sediments slightly above the current SMTZ, we estimated that methane fluxes at core CL44 and CL47 have persisted for ca. 3 kyr and 0.8-1.6 kyr, respectively. The non-steady-state modeling for CL30 predicted that a recent increase in upward dissolved methane flux was initiated ca. 85 yr ago. However, the required time for the formation of the barium front above the SMTZ at this core is much longer (ca. 2.2-4.2 kyr), which suggests that the depth of SMTZ possibly has fluctuated due to episodic changes in methane flux. Furthermore, using the model-derived fractions of different DIC sources and the δ13CDIC mass balance calculation, we estimated that the δ13C values of the external methane in cores CL30, CL44, and CL47 are -74.1‰, -75.4‰, and -66.7‰, respectively, indicating the microbial origin of methane. Our results suggest that methane seepage in the broader area surrounding the Haima cold seeps probably has persisted at least hundreds to thousands of years with changing methane fluxes. --- ## Body ## 1. Introduction Methane in marine sediments as dissolved gas in porewater or free gas (bubbles) depending on its in situ solubility is a significant component of the global carbon cycle. Methane could also exist in an ice-like solid as gas hydrate if the in situ gas hydrate solubility concentration is oversaturated at suitable pressure-temperature conditions [1]. The base of the gas hydrate reservoir in marine sediments is present as a characteristic discontinuity known as a bottom-simulating reflector (BSR), which results from the occurrence of free gas beneath the gas hydrate stability zone (GHSZ) [2]. The great majority of methane is consumed by the microbial consortium via anaerobic oxidation of methane within the sulfate-methane transition zone (SMTZ) where methane meets sulfate diffusing downwards from seawater (AOM: CH4+SO42‐→HCO3‐+HS‐+H2O) [3, 4]. Through this, reaction methane is converted to dissolved inorganic carbon (DIC) which could be partially removed from solution by authigenic carbonate precipitation [5, 6]. 
Therefore, AOM largely prevents dissolved methane from entering water column and plays a significant role in marine carbon cycling.Gas bubble rise is a particularly effective mechanism for transporting methane through the sediment and into the bottom water because gas ascension can be faster than bubble dissolution [7] and methane gas cannot directly be consumed by microorganisms [8]. The rising methane gas bubbles emitting from the seafloor can mix bottom seawater down into the sediment column over several meters. The resulting kink-type porewater profiles are supposed to be stable for several years to decades even after active gas bubble ebullition has ceased [7]. Enhancement in upward methane flux could also result in kink-type or concave-up porewater profiles [9–12]. Hence, these types of nonlinear porewater profiles can thus be used to estimate the timing of (sub)recent methane pulse and provide insights into the dynamics of methane seepage and the underlying gas hydrate reservoir [10, 11, 13, 14].At the steady-state condition, Ba2+ diffusing upward into the sulfate-bearing zone above the SMTZ precipitates as barite and forms the authigenic barium fronts (Ba2++SO42‐→BaSO4) [15]. When buried beneath the sulfate-bearing zone, barite tends to dissolve and release Ba2+ into the porewater below the SMTZ due to unsaturation. Through this cycling, authigenic barium fronts would be stably developed just above the SMTZ [15–17]. The content of authigenic barite depends on the upward diffusive Ba2+ flux and the duration that the SMTZ has persisted at a given depth interval. The time required for barium front formation above the SMTZ could thus be calculated based on the depth-integrated excess barium contents and the porewater dissolved barium concentration gradients assuming a constant upward Ba2+ flux. Therefore, the authigenic barium fronts in sediments can be used to trace present and past SMTZ and associated methane release events as well as the duration of methane seepage that has persisted under a given methane flux [16–20].Methane seepages are widespread on the northern slope of the South China Sea (SCS) as revealed by authigenic carbonates collected at more than 30 cold seep sites [21–26]. The Haima cold seeps were recently discovered on the northwestern slope of SCS [25]. Several sites with gas bubbling identified by hydroacoustic anomalies, and shallow gas hydrates were found around this area [27–31]. Recent studies have shown a pronounced temporal change in methane seepages and a potential lateral migration of methane-bearing fluid along more permeable sand-bearing layer at Haima cold seeps [25, 32, 33]. Nevertheless, our quantitative understanding of the methane dynamics in this area remains scarce.In this study, we present porewater geochemical data of three piston cores (CL30, CL44, and CL47) collected to the west of Haima cold seeps, including concentrations of sulfate (SO42-), calcium (Ca2+), magnesium (Mg2+), barium (Ba2+), phosphate (PO43-), methane (CH4), and DIC as well as the carbon isotopic compositions of DIC. Using a steady-state reaction-transport model, we quantify the methane turnover rates in CL44 and CL47 which are mainly supplied by rising free gas. The kink in the porewater profiles of CL30 was reconstructed using a non-steady-state modeling approach assuming a recent increase in methane flux. In addition, authigenic Ba enrichments were used to constrain the durations that the current or past methane seepages have persisted. 
Furthermore, a simple mass balance model of DIC and δ13CDIC was applied to explore the methane source. ## 2. Geological Background The northern SCS is characterized as a Cenozoic, Atlantic-type passive continental margin [34], where the marginal basins generally underwent two stages of evolution, including the rift stage and the postrift thermal subsidence stage [35]. Qiongdongnan Basin is a northeastern trended Cenozoic sedimentary basin which developed on the northwestern part of the SCS [36]. Covered by sedimentary materials of up to 10 km, the depositional environment of the basin initially transformed from lacustrine to marine conditions and later from neritic to bathyal, starting from Eocene till present [37]. During the rifting stage, numerous half-grabens and sags were developed. After that, postrift thermal subsidence occurred and a thick sediment sequence dominated by mudstones was deposited in the basin since Miocene. Collectively, the sedimentation rates and the present-day geothermal gradient are both high in the Qiongdongnan Basin [38]. The thick sediment sequences, high geothermal gradient along with faulting and/or diapirism, have facilitated the generation and migration of the hydrocarbons in the basin [39]. The widely distributed bottom-simulating reflectors and gas chimneys identified in the Qiongdongnan Basin were linked to the accumulation of gas hydrate [40, 41].The active Haima cold seeps have been discovered in the southern uplift belt of the Qiongdongnan Basin on the lower continental slope of the northwestern SCS during R/V Haiyang-6 cruises in 2015 and 2016. Abundant chemosynthetic communities, methane-derived authigenic carbonates, and massive gas hydrates were found at the Haima cold seeps [25]. The dating of bivalve shells and seep carbonates revealed episodic changes in seepage activity [25]. Other features of methane seeps, such as acoustic plume, acoustic void, chimney structures, and pockmarks, were also reported at the Haima cold seeps and its surrounding area [27–31, 42]. The sampling sites are ca. 20 to 30 kilometers west of the Haima cold seeps, where BSR is well developed (Fang Y., unpublished data). The bathymetric investigation has shown a relatively flat topography, and the water depths range from 1250 to 1300 m in the study area (Figure 1).Figure 1 (a) Location of the study area. The grey area represents the subsurface area lineated by seismic investigation. The location of the reference core SO49-37KL (blue circle) is also shown. (b) Locations of sampling sites (red dots) and the Haima cold seeps (blue dots). (a) (b) ## 3. Materials and Methods ### 3.1. Sampling and Analytical Methods Three piston cores (CL30, CL44, and CL47) were collected from the southern Qiongdongnan Basin west to the Haima cold seeps at water depths ranging from 1255 m to 1301 m during the R/V Haiyang-4 cruise conducted by Guangzhou Marine Geological Survey in 2014 (Figure1 and Table 1). The sediments of the three cores mainly consist of greyish-green silty clay. Notably, the sediments at the bottom of core CL44 yielded a strong odour of hydrogen sulfide. Porewater samples were then collected onboard using Rhizon samplers with pore sizes of the porous part of approximately 0.2 mm at intervals of 20 cm for CL44 and 60 cm for CL30 and CL47. All the porewater samples were preserved at ~4°C until further analyses.Table 1 Information on the studied cores from the northwestern South China Sea. 
| Site | Water depth (m) | Seafloor temperature (°C) | Core length (cm) |
|---|---|---|---|
| CL30 | 1255 | 3.3 | 630 |
| CL44 | 1279 | 3.2 | 752 |
| CL47 | 1301 | 3.1 | 775 |

PO43- concentrations were measured onboard using the spectrophotometric method according to Grasshoff et al. [43] with a UV-Vis spectrophotometer (Hitachi U5100). The precision for phosphate was ±3.0%. For headspace gas analysis, 10 ml of sediment was added onboard to empty 20 ml vials, leaving a 10 ml headspace for gas chromatograph injection. The concentrations of hydrocarbon gas were measured onboard using the gas chromatograph method (Agilent 7890N). The precision for methane measurements was ±2.5% [44]. Porosity and density were determined onboard for core CL44 from the weight loss before and after freeze-drying of the wet sediments, using a cutting ring with definite mass (15 g) and volume (9.82 cm3). The porosity and density were calculated assuming a porewater density of 1.0 g cm-3.

The shore-based analyses of porewater samples were performed at Nanjing University for core CL44 and at the Third Institute of Oceanography, State Oceanic Administration, for cores CL30 and CL47. For core CL44, SO42-, Ca2+, and Mg2+ were measured using the standard method of ion chromatography (Metrohm 790-1, Metrosep A Supp 4-250/Metrosep C 2-150). The relative standard deviation was less than 3%. Ba2+ concentrations were measured by inductively coupled plasma mass spectrometry (ICP-MS, Finnigan Element II). Before measurement, samples were prepared by diluting in 2% HNO3 with 10 ppb of Rh as an internal standard. The analytical precision was estimated to be <5% for Ba2+. For cores CL30 and CL47, SO42-, Ca2+, and Mg2+ concentrations were determined on a Thermo Dionex ICS-1100 ion chromatograph after a 500-fold dilution using ultrapure water [44]. Porewater samples were prepared by diluting in 2% HNO3 with 10 ppb of Tb as an internal standard before analysis for Ba2+ using ICP-MS (Thermo Fisher iCAPQ). The analytical precision was estimated to be <5% for Ba2+.

For core CL44, DIC concentrations and δ13CDIC values were determined using a continuous flow mass spectrometer (Thermo Fisher Delta-Plus). A 0.5 ml porewater sample was treated with pure H3PO4 in a glass vial at 25°C. The CO2 produced was stripped with He and transferred into the mass spectrometer, through which the δ13C value was measured [45]. For cores CL30 and CL47, the DIC concentrations and carbon isotopic ratios were determined via a continuous flow mass spectrometer (Thermo Delta V Advantage). A 0.2 ml porewater sample was treated with pure H3PO4 in a glass vial at 25°C. The CO2 produced was stripped with He and transferred into the mass spectrometer, through which the δ13C values were measured. The analytical precisions were better than 0.2‰ for δ13C and better than 2% for DIC concentration [44].

The particulate organic carbon (POC) contents were determined using the potassium dichromate wet oxidation method. The relative standard deviation of the POC content is <1.5%. The aluminium (Al), silicon (Si), and titanium (Ti) concentrations of the sediment samples at cores CL30, CL44, and CL47 were analyzed using PANalytical AXIOSX X-ray fluorescence spectrometry (XRF). The analytical precisions were estimated to be <2% for Al, Si, and Ti. The contents of Ba, zirconium (Zr), and rubidium (Rb) in bulk sediments were determined using a PerkinElmer Optima 4300DV ICP-OES after digestion with an HCl, HF, and HClO4 acid mixture. Rhodium was added as an internal standard for calculating the concentrations of the trace elements.
The analytical precisions were estimated to be <2% for Ba, Zr, and Rb. The carbonate (CaCO3) contents of the sediment samples were determined by titration with EDTA standard solution. The analytical precisions were estimated to be <2%. For grain size measurements, approximately 0.5 g of the unground sample was treated with 10% (v/v) H2O2 for 48 h to oxidize organic matter and then dispersed and homogenized in sodium hexametaphosphate solution using ultrasonic vibration for 30 s before being analyzed by a laser grain size analyzer (Mastersizer 2000). The detection limit ranged from 0.5 μm to 2000 μm. Particles <4 μm in size were classified as clay, 4 to 63 μm as silt, and larger than 63 μm as sand. The analytical precision is better than 3%.

### 3.2. Diffusive Flux Calculation

To calculate the diffusive Ba2+ fluxes below the kink at cores CL30, CL44, and CL47, equations (1) and (2) were used assuming a steady-state condition [46]:

(1) Jx = −φ · Ds · (dC/dx)

(2) Ds = D0 / (1 − ln(φ²))

where Jx represents the diffusive flux of Ba2+ (mmol m-2 yr-1), φ is the porosity, D0 is the diffusion coefficient in seawater (m2 s-1), Ds is the diffusion coefficient in sediments (m2 s-1), C is the concentration of barium (mmol l-1), and x is the sediment depth (m). The average sediment porosity of core CL44 (0.69) is applied to cores CL30 and CL47.

### 3.3. Estimating the Accumulation Time of Diagenetic Barite

The total amount of excess Ba within the interval of the barium peak was calculated using an integral equation:

(3) Ax = ∫u^v Cx · ρ · (1 − φ) dx

where the integral of the barium concentration Cx is taken over a peak in the depth interval from u to v, and ρ and φ are the average grain density and porosity of the sediments, respectively.

Under the premise of a constant diffusive upward flux of Ba2+ into the sulfate-bearing zone, the time needed for barium front formation was calculated using the equation:

(4) tx = Ax / Jx

In this case, tx is the time for barite enrichment, Ax is the depth-integrated excess barium content within a peak, and Jx is the upward diffusive flux of Ba2+. The diffusive flux was calculated using equations (1) and (2). Ds is the tortuosity- and temperature-corrected diffusion coefficient of Ba2+ in the sediment, calculated from the diffusion coefficients in free solution (D0) of 4.64, 4.62, and 4.61 × 10−6 cm2 s-1 (at 3.3, 3.2, and 3.1°C) for CL30, CL44, and CL47, respectively, according to Boudreau [47].

### 3.4. Reaction-Transport Model

A one-dimensional, steady-state reaction-transport model was applied to simulate one solid species (POC) and six dissolved species: SO42-, CH4, DIC, PO43-, Ca2+, and Mg2+. The model is modified from previous simulations of methane-rich sediments [48–51], and a full description of the model is given in Supplementary Materials. All the reactions considered in the model and their kinetic rate expressions are listed in Table 2.

Table 2 Rate expressions of the reactions considered in the model.
| Rate | Kinetic rate law* |
|---|---|
| Total POC degradation (wt.% C yr-1) | RPOC = 0.16 · (a0 + x/vs)^(−0.95) · POC |
| POM degradation via sulfate reduction (mmol cm-3 yr-1 of SO42-) | RSR = 0.5 · RPOC · ([SO42-] / ([SO42-] + KSO42-)) / fPOC |
| Methanogenesis (mmol cm-3 yr-1 of CH4) | RMG = 0.5 · RPOC · (KSO42- / ([SO42-] + KSO42-)) / fPOC |
| Anaerobic oxidation of methane (mmol cm-3 yr-1 of CH4) | RAOM = kAOM · [SO42-] · [CH4] |
| Authigenic Ca-carbonate precipitation (mmol cm-3 yr-1 of Ca2+) | RCP-Ca = kCa · ([Ca2+][CO32-]/KSP − 1) |
| Authigenic Mg-carbonate precipitation (mmol cm-3 yr-1 of Mg2+) | RCP-Mg = kMg · ([Mg2+][CO32-]/KSP − 1) |
| Gas bubble irrigation (mmol cm-3 yr-1) | RBui = α1 · exp((Lirr − x)/α2) / (1 + exp((Lirr − x)/α2)) · (C0 − Cx) |
| Gas bubble dissolution (mmol cm-3 yr-1 of CH4) | Rdiss = kMB · (LMB − [CH4]) |

*fPOC converts between POC (dry wt.%) and DIC (mmol cm-3 of porewater): fPOC = (MWC/10) · Φ / ((1 − Φ) · ρS), where MWC is the molecular weight of carbon (12 g mol-1), ρS is the density of dry sediments, and Φ is the porosity.

Solid species are transported through the sediments only by burial with prescribed compaction, which is justified because we are only concerned with the anoxic diagenesis below the bioturbated zone. For sites CL44 and CL47, solutes are transported by molecular diffusion, porewater burial, and gas bubble irrigation, whereas for site CL30, solutes are transported by molecular diffusion and porewater burial only. Rising gas bubbles facilitate the exchange of porewater and bottom water as they move through tube structures in soft sediments [7]. Although this process was not observed directly, there is evidence implying that it is a significant pathway for transporting methane into the upper 10 m of sediment at sites CL44 and CL47 and for driving the mixing of porewater and seawater in the upper two meters (see Section 5.1). The induced porewater mixing was described as a nonlocal transport mechanism whose rate for each species is proportional to the difference between the solute concentration at the sediment surface, C0 (mmol cm-3), and that at depth below the sediment surface, Cx (mmol cm-3) (RBui, Table 2). Bubble irrigation is described by the parameters α1 (yr-1) and α2 (cm), which define the irrigation intensity and its attenuation below the irrigation depth Lirr (cm), respectively [49]. The latter can be determined by visual inspection of the porewater data (see Results), whereas α1 is a model fitting parameter. For the sake of parsimony, α2 is assumed to be constant for both sites.

Although dissolution of gas was allowed to occur over the whole sediment column, the rising methane gas itself was not explicitly modeled. The rate of gas dissolution, Rdiss (mmol cm-3 yr-1), was described using a pseudo-first-order kinetic expression of the departure from the local methane gas solubility concentration, LMB (mmol cm-3), where kMB (yr-1) is the kinetic constant for gas bubble dissolution (Table 2). Methane only dissolves if the porewater is undersaturated with respect to LMB:

(5) CH4(g) → CH4(aq) for [CH4] ≤ LMB

LMB was calculated for the in situ salinity, temperature, and pressure using the algorithm in [52]. kMB was constrained using the dissolved sulfate and DIC data (see below).

Major biogeochemical reactions considered in the model are particulate organic matter (POM) degradation via sulfate reduction, methanogenesis, AOM, and authigenic carbonate precipitation.
Organic matter mineralization via aerobic respiration, denitrification, and metal oxide reduction was ignored since these processes mainly occur in the surface sediments, which were mostly lost during coring.

POM is chemically defined as (CH2O)(POP)rP, where CH2O and POP denote particulate organic carbon and particulate organic phosphate, respectively. The total rate of POM mineralization, RPOC (wt.% C yr-1), is calculated with the power law model from [53] that considers the initial age of organic matter in surface sediments, a0 (yr) (Table S2). POM mineralization coupled to sulfate reduction follows the stoichiometry:

(6) 2 (CH2O)(POP)rP + SO42- + 2rP H+ → 2 HCO3- + H2S + 2rP PO43-

where rP is the ratio of particulate organic phosphate to carbon, assumed to take the typical value of 1/106 [48].

When sulfate is almost completely consumed, the remaining POM is degraded via methanogenesis:

(7) 2 (CH2O)(POP)rP → CO2 + CH4 + 2rP PO43-

The dominant pathways of methanogenesis in marine sediments are organic matter fermentation and CO2 reduction [54]. Their net reactions at steady state are balanced, with equivalent amounts of CO2 and CH4 being produced per mole of POM degraded [55]. Therefore, reaction (7) represents the net reaction of methanogenesis.

Methane is considered to be consumed by AOM [3]:

(8) CH4 + SO42- → HCO3- + HS- + H2O

The rate constant for AOM, kAOM (cm3 mmol-1 yr-1), is tuned to the sulfate profiles within the SMTZ.

The loss of Ca2+ and Mg2+ resulting from the precipitation of authigenic carbonates as Ca-calcite and Mg-calcite ((Ca2+, Mg2+) + HCO3- → (Ca, Mg)CO3 + H+) was simulated in the model using the thermodynamic solubility constant as defined in [56] (Table 2). A typical porewater pH value of 7.6 was used to calculate CO32- from modeled DIC concentrations [57]. (Ca, Mg)CO3 was not simulated explicitly in the model.

The length of the simulated model domain was set to 1000 cm. Upper boundary conditions for all species were imposed as fixed concentrations (Dirichlet boundary) using measured values in the uppermost sediment layer where available. For CL44 and CL47, a zero concentration gradient (Neumann-type boundary) was imposed at the lower boundary for all the species. For CL30, a zero concentration gradient was imposed at the lower boundary for all the species except CH4; the CH4 concentration at the lower boundary was a tunable parameter constrained from the SO42- profile. The model was solved using the NDSolve object of MATHEMATICA V. 10.0. The steady-state simulations were run for 10^7 yr to reach steady state, with a mass conservation of >99%. Further details on the model solutions can be found in Supplementary Materials. For the non-steady-state modeling of CL30, a fixed methane concentration in equilibrium with the gas hydrate solubility constrained by local seafloor temperature, pressure, and salinity was defined as the lower boundary for methane [58]. The extrapolation of sulfate concentrations in the upper 3.5 m to zero was taken as the initial condition prior to the increase in methane flux (Supplementary Materials). The basic model construction and kinetic rate expressions as well as the upper and lower boundary conditions for the other species were identical to those in the steady-state model.
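The full model couples seven species with burial, bubble irrigation, and carbonate precipitation and was solved with MATHEMATICA's NDSolve, as described above. Purely as an illustration of the core construction, the sketch below solves a steady-state, two-species (SO42- and CH4) diffusion problem coupled by the bimolecular AOM rate law of Table 2 as a boundary-value problem in Python; every parameter value is an illustrative placeholder rather than one of the calibrated values in Tables S2–S4.

```python
# Minimal steady-state porewater sketch in the spirit of Section 3.4:
# two dissolved species (sulfate and methane) transported by molecular
# diffusion and coupled by bimolecular AOM (R_AOM = k_AOM * [SO4] * [CH4]).
# Burial, bubble irrigation, organoclastic sulfate reduction, and carbonate
# precipitation are omitted; all parameter values are illustrative.
import numpy as np
from scipy.integrate import solve_bvp

L = 1000.0          # model domain length (cm), as in the paper
Ds_so4 = 180.0      # sediment diffusion coefficient of SO4 (cm2 yr-1), illustrative
Ds_ch4 = 280.0      # sediment diffusion coefficient of CH4 (cm2 yr-1), illustrative
k_aom = 0.05        # AOM rate constant (mM-1 yr-1), illustrative
so4_top = 28.0      # seawater sulfate at the sediment-water interface (mM)
ch4_bottom = 67.0   # methane fixed at the lower boundary (mM), cf. site CL30

def rhs(x, y):
    """y = [SO4, dSO4/dx, CH4, dCH4/dx]; steady state: Ds * C'' = R_AOM."""
    so4, dso4, ch4, dch4 = y
    r_aom = k_aom * np.clip(so4, 0, None) * np.clip(ch4, 0, None)
    return np.vstack([dso4, r_aom / Ds_so4, dch4, r_aom / Ds_ch4])

def bc(ya, yb):
    """Fixed concentrations at the top, fixed CH4 and no SO4 gradient at depth."""
    return np.array([ya[0] - so4_top,      # SO4(0) = seawater value
                     ya[2],                # CH4(0) = 0
                     yb[2] - ch4_bottom,   # CH4(L) = bottom value
                     yb[1]])               # dSO4/dx(L) = 0

x = np.linspace(0.0, L, 200)
y0 = np.zeros((4, x.size))
y0[0] = so4_top * (1 - x / L)              # linear initial guesses
y0[2] = ch4_bottom * (x / L)
sol = solve_bvp(rhs, bc, x, y0, max_nodes=20000)

smtz = x[np.argmin(np.abs(sol.sol(x)[0] - sol.sol(x)[2]))]
print(f"solver status: {sol.status}, approximate SMTZ depth: {smtz:.0f} cm")
```

Even in this stripped-down form, the solution develops the familiar structure of a sulfate decline toward an SMTZ where upward-diffusing methane is consumed, which is the feature the calibrated model uses to constrain kAOM and the lower-boundary methane concentration.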
## 4. Results

### 4.1. General Geochemical Trends

The depth profiles of SO42- concentration showed kink-type features at all the three cores (Figures 2–4 and Table 3). At site CL30, SO42- concentrations decreased gradually above a kink at ~3.5 mbsf and the gradient became steeper below that depth towards the SMTZ at ~4.7 mbsf (Figure 2).
In contrast, SO42- concentrations at sites CL44 and CL47 displayed near-seawater values in the upper ~2 mbsf above the kinks and then decreased sharply down to the SMTZ located at ~7 and ~6.8 mbsf, respectively (Figures 3 and 4). Ca2+ and Mg2+ concentrations showed similar trends, with a gradual decrease in the upper layers at core CL30 and near-seawater concentrations above the kinks at cores CL44 and CL47. Ca2+ and Mg2+ concentrations declined sharply below the kinks due to ongoing carbonate precipitation and reached a minimum at the SMTZ (Figures 2–4 and Table 3). Concentrations of DIC and PO43- showed opposite trends to SO42-, being depleted within the upper layer and enriched below it, with maxima at the SMTZ (Figures 2–4 and Table 3). Moreover, CH4 concentrations at the three cores increased sharply below the SMTZ. The scatter in the CH4 contents was due to degassing during core retrieval. The DIC concentrations increased with depth and reached a maximum at the SMTZ, while δ13CDIC values showed the opposite trend (minimum values: -46.4‰ for CL30, -41.0‰ for CL44, and -38.8‰ for CL47) (Figures 2–4 and Table 3).

Figure 2 Measured (dots) and simulated (curves) depth profiles of core CL30. Down-depth concentrations of particulate organic carbon (POC), sulfate (SO42-), methane (CH4), phosphate (PO43-), dissolved inorganic carbon (DIC), calcium (Ca2+), magnesium (Mg2+), and δ13CDIC are shown.

Figure 3 Measured (dots) and simulated (curves) depth profiles of core CL44. Down-depth concentrations of POC, SO42-, CH4, PO43-, DIC, Ca2+, Mg2+, and δ13CDIC are shown.

Figure 4 Measured (dots) and simulated (curves) depth profiles of core CL47. Down-depth concentrations of POC, SO42-, CH4, PO43-, DIC, Ca2+, Mg2+, and δ13CDIC are shown.

Table 3 Concentrations and isotope ratios of various dissolved components at cores CL30, CL44, and CL47.
Depth (cmbsf) CH4 (mM) SO42- (mM) Ca2+ (mM) Mg2+ (mM) PO43- (μM) Ba2+ (μM) DIC (mM) δ13CDIC (‰, VPDB) CL30 55 0.0016 26.7 10.9 47.9 27.0 23.0 6.0 -20.4 110 0.0012 24.4 10.5 48.3 31.5 19.8 7.6 -26.0 170 0.0010 21.7 9.5 47.6 41.0 17.9 10.1 -31.9 230 0.0051 19.0 8.9 47.5 49.9 17.6 11.1 -35.4 290 0.0012 15.4 7.9 46.5 39.2 9.9 12.4 -38.0 350 0.0000 13.1 7.0 46.1 45.5 20.0 14.3 -40.5 410 0.0056 5.3 4.5 44.3 97.2 16.1 19.2 -45.7 470 0.8752 1.0 3.7 43.8 112 46.5 22.8 -46.4 530 3.2660 0.4 3.5 43.1 122 53.5 23.0 -40.0 590 0.1318 0.2 3.4 43.7 116 60.8 23.7 -35.7 CL44 10 0.0012 27.9 7.4 52.7 11.9 0.3 2.2 -10.9 30 0.0004 27.9 7.4 52.4 10.1 0.3 2.6 -10.5 50 0.0008 27.6 7.5 54.1 10.4 0.2 3.0 -12.1 70 0.0003 27.2 7.0 52.7 9.3 0.2 3.0 -14.8 90 0.0020 27.3 7.3 53.8 9.6 0.2 2.4 -10.9 110 0.0005 26.7 7.2 54.0 9.1 0.2 2.4 -11.6 130 0.0005 27.4 7.2 54.5 8.3 0.2 2.4 -11.9 150 0.0008 26.9 7.3 54.3 7.1 0.2 2.5 -14.1 170 0.0007 27.5 7.5 54.3 8.1 0.2 3.0 -13.1 190 0.0006 27.5 7.3 54.7 10.9 0.2 2.5 -12.1 210 0.0005 25.6 7.3 54.6 13.2 0.2 3.8 -17.2 230 0.0017 25.9 7.0 54.2 13.7 0.2 3.5 -18.5 250 0.0009 25.3 6.9 52.5 17.7 0.2 4.6 -20.4 270 0.0018 24.1 6.3 51.6 22.3 0.3 5.5 -24.0 290 0.0018 22.6 6.2 52.6 22.6 0.4 5.9 -23.1 310 0.0014 21.5 6.0 53.0 35.1 0.3 6.8 -25.0 330 0.0011 19.0 5.5 51.6 35.6 0.3 7.7 -28.8 350 0.0017 19.7 5.1 51.0 47.3 0.3 8.1 -24.3 370 0.0020 17.9 4.9 50.5 49.8 0.3 8.8 -30.1 390 0.0015 16.4 4.6 50.5 50.8 0.4 9.5 -31.2 410 0.0020 16.0 4.2 49.7 54.9 0.6 10.2 -31.9 430 0.0012 15.6 3.9 47.8 52.1 0.5 11.0 -33.4 450 0.0021 15.4 3.7 48.4 56.5 0.5 11.6 -35.1 470 0.0022 13.2 3.6 50.3 55.4 0.6 12.0 -35.7 490 0.0019 12.8 3.1 48.0 63.6 0.7 13.1 -35.8 510 0.0026 12.0 2.7 48.4 67.1 0.8 13.1 -39.9 530 0.0027 10.0 2.4 47.5 81.9 1.0 15.5 -39.1 550 0.0024 8.9 2.2 47.4 88.0 1.3 16.3 -39.3 570 0.0034 5.6 2.0 45.5 96.7 2.5 18.2 -40.5 590 0.0039 4.7 1.8 44.6 101 4.9 19.5 -41.0 610 0.0028 3.7 1.7 44.5 103 11.0 18.9 -39.7 630 0.0022 2.7 1.6 44.2 57.0 19.3 20.5 -40.5 650 0.0442 2.3 1.7 43.3 111 22.2 19.9 -39.6 670 0.5139 2.1 1.8 44.7 109 31.7 21.1 -37.1 690 1.1022 0.9 1.4 44.5 107 37.9 20.4 -37.4 710 2.5450 0.8 1.3 43.9 102 38.6 21.5 -36.1 730 0.0086 0.9 1.4 44.6 105 37.4 20 -36.8 750 1.1475 1.1 1.2 43.0 86.8 36.0 19.2 -33.3 CL47 55 0.0008 24.9 9.2 33.1 18.3 21.3 6.7 -21.1 110 0.0006 21.1 170 0.0004 26.0 10.9 39.7 16.4 13.6 6.8 -22.9 230 0.0008 28.2 290 0.0008 21.9 10.0 40.2 13.8 14.8 10.0 -28.5 350 0.0009 44.7 410 0.0009 16.5 8.8 39.8 43.1 13.9 13.7 -32.1 470 0.0006 12.7 7.8 40.4 66.4 16.9 15.9 -34.0 530 0.0006 9.3 6.5 40.0 65.9 13.7 18.2 -34.7 590 0.0007 5.4 4.9 39.7 66.4 25.9 20.6 -38.2 650 0.0773 1.1 4.5 39.8 76.3 52.0 24.2 -38.8 710 0.7947 1.2 4.6 40.6 74.5 58.5 23.5 -37.4 770 0.7692 1.4 70.2 22.1 -32.1Vertical profiles of CL30, CL44, and CL47 for porewater barium concentrations and sediment barium contents together with barium/aluminium (Ba/Al) ratios are shown in Figure5. Dissolved Ba2+ concentrations display maxima of 60.8, 38.6, and 58.5 μM below the SMTZ, respectively, and decreased upward towards the SMTZ (Figure 5). Bulk sediment Ba concentrations range from 306 to 957 mg kg-1 (Table S6) with averages of 461 mg kg-1 for CL30, 502 mg kg-1 for CL44, and 502 mg kg-1 for CL47. High Ba concentrations of bulk sediments at each core occur over narrow depth intervals (0.3–0.8 m) above the present SMTZ (Figure 5). Peak Ba concentrations within these zones reach 957, 741, and 790 mg kg-1 and appear at approximately 4.3, 5.9, and 6.3 mbsf at cores CL30, CL44, and CL47, respectively. 
The refractory solid-phase barium content at these cores amounts to 530, 550, and 590 mg kg-1, respectively, which is considered to represent the "background" level of solid-phase barium [16]. Ba contents were normalized to Al in order to account for variations in lithology. Depth intervals with Ba contents higher than these "background" levels are referred to as "Ba fronts." At each core examined, the Ba fronts occur within 1.5 m above the depth of the current SMTZ (Figure 5). The distance between the peak Ba concentration and the depth of sulfate depletion is approximately 0.4 m at CL30, 1.1 m at CL44, and 0.5 m at CL47.

Figure 5 Concentration depth profiles of dissolved barium (Ba2+), sulfate (SO42-), and solid-phase total barium (Batotal), barium/aluminium ratios (Ba/Al), and diagenetic barium (shown as red peaks) for cores CL30 (a), CL44 (b), and CL47 (c). Blue bands mark the SMTZ. Pink dashed lines indicate the background barium contents based on the distribution of barium content. Barium contents above background represent diagenetic barite enrichments (red polygons).

POC contents at all the sites did not follow a general downward trend, with average contents of 0.97% for CL30, 1.05% for CL44, and 1.05% for CL47 (Figures 2–4 and Table S6). The sediments in the study cores are mainly composed of silt and clay. At sites CL30 and CL44, the relative fractions of silt and clay are nearly constant with depth and the sand fractions remain low, except at ~320 cm in CL30, where the sand fraction is elevated (Figure S2). At site CL47, the sand fraction is low in the interval of 0–200 cm and then increases, with two peaks at depths of ~270 and ~430 cm. Below 500 cm, the sand fraction decreases to almost zero (Figure S2).

### 4.2. Timing of Authigenic Barite Front Accumulation

Dissolved barium fluxes towards the SMTZ were 1.58 mmol m-2 yr-1 for CL30, 1.54 mmol m-2 yr-1 for CL44, and 1.61 mmol m-2 yr-1 for CL47. The calculated times required for the formation of the barite fronts are about 3.2, 3.0, and 1.3 kyr for the three cores, respectively, using an average porosity of 0.69 taken from CL44 (Table S5). Varying the porosity from 0.65 to 0.75 yields barite front formation times ranging between 2.2 and 4.2 kyr for CL30 and between 0.8 and 1.6 kyr for CL47. Sensitivity tests of the background Ba content, Ba2+ fluxes, and porosity are shown in Figures S5 and S6.

### 4.3. Reaction-Transport Modeling

The modeled profiles and reaction rates are shown in Figures 2–4 and Table 4, respectively. The model parameters used to derive these results are listed in Tables S2–S4. The steady-state modeling reproduced the measured concentrations of SO42-, DIC, Ca2+, Mg2+, and PO43- at sites CL44, CL47, and CL30 above the kink, with obvious discrepancies between modeled and measured concentrations of CH4 due to the aforementioned degassing during core recovery (Figures 2–4). At site CL30, the model failed to reproduce the concentration gradients of SO42-, DIC, Ca2+, Mg2+, and PO43- below the kink (~3.5 mbsf), which is likely caused by a transient condition that is not considered in the steady-state model.

Table 4 Depth-integrated simulated turnover rates and benthic methane fluxes based on the steady-state modeling.
| | CL30 | CL44 | CL47 | Unit |
|---|---|---|---|---|
| FPOC: total POC mineralization rate | 18.8 | 55.2 | 58.1 | mmol m−2 yr−1 of C |
| FOSR: sulfate reduction via POC degradation | 7.6 | 23.9 | 25.1 | mmol m−2 yr−1 of SO42− |
| FME: methane formation via POC degradation | 3.6 | 3.7 | 4.0 | mmol m−2 yr−1 of CH4 |
| FDISS: gas dissolution | 28.4 | 73.3 | 84.7 | mmol m−2 yr−1 of CH4 |
| FAOM: anaerobic oxidation of methane | 30.1 | 74.3 | 85.0 | mmol m−2 yr−1 of CH4 |
| FCP−Ca: authigenic CaCO3 precipitation | 3.0 | 2.2 | 5.6 | mmol m−2 yr−1 of C |
| FCP−Mg: authigenic MgCO3 precipitation | 5.4 | 7.1 | 0 | mmol m−2 yr−1 of C |
| Sulfate consumed by AOM | 79.8 | 75.7 | 77.2 | % |
| Benthic flux of CH4 at SWI | 0.5 | 1.9 | 2.7 | mmol m−2 yr−1 of CH4 |
| Percentage of CH4 flux from depth | 88.0 | 95.0 | 95.3 | % |
| Percentage of CH4 consumed by AOM | 94.1 | 96.5 | 95.8 | % |

The sulfate concentration profile with a kink at site CL30 (Figure 2) could be explained by a recent increase in upward methane flux [9]. The linear extrapolation of the sulfate concentrations in the upper 3.5 m to zero sulfate concentration was taken as the initial condition for the non-steady-state model. Under this condition, the sulfate profile was fitted by a fixed CH4 concentration (67 mM) at the lower boundary in equilibrium with the gas hydrate solubility under in situ S, T, and P conditions. A sudden increase in CH4 concentration reproduces the observed SO42- concentration profile after running the model for ~85 yr (Figure 6). The increase in methane flux resulted in a prominent increase in the depth-integrated AOM rate from 30.1 mmol m-2 yr-1 (t = 0 yr) to 140 mmol m-2 yr-1 (t = 85 yr).

Figure 6 Evolution of the sulfate profile over time from the simulation of non-steady-state porewater profiles of core CL30.

The initial age of the organic matter was tuned until a good fit was obtained for the PO43- profile. The mean total depth-integrated rates of POC degradation were about 3 times higher at sites CL44 and CL47 (55.2 and 58.1 mmol m-2 yr-1) than at site CL30 (18.8 mmol m-2 yr-1) (Table 4). The rates of POC degradation through sulfate reduction (POCSR) were 7.6, 23.9, and 25.1 mmol m-2 yr-1 at cores CL30, CL44, and CL47, respectively. In contrast to the relatively low rates of POCSR, AOM dominated the sulfate consumption, with rates of 30.1, 74.3, and 85.0 mmol m-2 yr-1 for CL30, CL44, and CL47, respectively. The AOM rates were mainly sustained by an external methane source, and methanogenesis contributed only a negligible amount of methane (Table 4). AOM consumes almost all of the CH4, with benthic CH4 fluxes of only 0.49, 2.0, and 2.7 mmol m-2 yr-1 at sites CL30, CL44, and CL47, respectively.
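As a worked illustration of how the barite-front ages in Section 4.2 follow from equations (1), (2), and (4), the short sketch below evaluates the flux and accumulation time in Python. The porosity (0.69) and the free-solution diffusion coefficient (4.62 × 10−6 cm2 s-1 for CL44) are taken from Section 3.3, while the Ba2+ gradient and the excess-Ba inventory are round placeholder numbers of the same order as the measured profiles, so the printed values only illustrate the procedure rather than reproduce Table S5.

```python
# Sketch of the diffusive-flux and barite-front-age calculation of
# Sections 3.2 and 3.3: J = -phi * Ds * dC/dx with Ds = D0 / (1 - ln(phi^2)),
# and t = A / J. Porosity and D0 follow the text; the Ba2+ gradient and
# the excess-Ba inventory below are placeholders, not measured values.
import math

phi = 0.69                      # porosity (core CL44 average, applied to all cores)
d0_cm2_s = 4.62e-6              # free-solution diffusion coefficient of Ba2+ (cm2 s-1)
seconds_per_year = 3.15576e7

# Tortuosity-corrected sediment diffusion coefficient (eq. 2), in m2 yr-1.
ds_m2_yr = d0_cm2_s * 1e-4 * seconds_per_year / (1.0 - math.log(phi**2))

# Placeholder porewater Ba2+ gradient: ~60 µM increase over ~0.25 m below the SMTZ.
dC_dx = 60e-3 / 0.25            # mol m-3 per m (60 µM = 0.06 mol m-3)

# Diffusive flux (eq. 1), reported here as a magnitude in mmol m-2 yr-1.
flux_mmol_m2_yr = phi * ds_m2_yr * dC_dx * 1e3
print(f"diffusive Ba2+ flux ≈ {flux_mmol_m2_yr:.2f} mmol m-2 yr-1")

# Barite-front age (eq. 4): depth-integrated excess Ba divided by the flux.
excess_ba_mmol_m2 = 5000.0      # placeholder excess-Ba inventory (mmol m-2)
age_yr = excess_ba_mmol_m2 / flux_mmol_m2_yr
print(f"time to accumulate the barium front ≈ {age_yr:.0f} yr")
```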
110 0.0006 21.1 170 0.0004 26.0 10.9 39.7 16.4 13.6 6.8 -22.9 230 0.0008 28.2 290 0.0008 21.9 10.0 40.2 13.8 14.8 10.0 -28.5 350 0.0009 44.7 410 0.0009 16.5 8.8 39.8 43.1 13.9 13.7 -32.1 470 0.0006 12.7 7.8 40.4 66.4 16.9 15.9 -34.0 530 0.0006 9.3 6.5 40.0 65.9 13.7 18.2 -34.7 590 0.0007 5.4 4.9 39.7 66.4 25.9 20.6 -38.2 650 0.0773 1.1 4.5 39.8 76.3 52.0 24.2 -38.8 710 0.7947 1.2 4.6 40.6 74.5 58.5 23.5 -37.4 770 0.7692 1.4 70.2 22.1 -32.1

Vertical profiles of CL30, CL44, and CL47 for porewater barium concentrations and sediment barium contents, together with barium/aluminium (Ba/Al) ratios, are shown in Figure 5. Dissolved Ba2+ concentrations display maxima of 60.8, 38.6, and 58.5 μM below the SMTZ, respectively, and decrease upward towards the SMTZ (Figure 5). Bulk sediment Ba concentrations range from 306 to 957 mg kg-1 (Table S6), with averages of 461 mg kg-1 for CL30, 502 mg kg-1 for CL44, and 502 mg kg-1 for CL47. High Ba concentrations of bulk sediments at each core occur over narrow depth intervals (0.3–0.8 m) above the present SMTZ (Figure 5). Peak Ba concentrations within these zones reach 957, 741, and 790 mg kg-1 and appear at approximately 4.3, 5.9, and 6.3 mbsf at cores CL30, CL44, and CL47, respectively. The refractory solid-phase barium content at these cores amounts to 530, 550, and 590 mg kg-1, respectively, which is considered to represent the “background” level of solid-phase barium [16]. Ba contents were normalized to Al in order to account for variations in lithology. Depth intervals with Ba contents higher than these “background” levels are referred to as “Ba fronts.” At each core examined, the Ba fronts occur within 1.5 m above the depth of the current SMTZ (Figure 5). The distance between the peak Ba concentration and the depth of sulfate depletion is approximately 0.4 m at CL30, 1.1 m at CL44, and 0.5 m at CL47.

Figure 5 Concentration depth profiles of dissolved barium (Ba2+), sulfate (SO42-), solid-phase total barium (Batotal), barium/aluminium ratios (Ba/Al), and diagenetic barium (shown as red peaks) for cores CL30 (a), CL44 (b), and CL47 (c). Blue bands mark the SMTZ. Pink dashed lines indicate the background barium contents based on the distribution of barium content. Barium contents above background represent diagenetic barite enrichments (red polygons).

POC contents at all the sites did not follow a general downward trend, with average contents of 0.97% for CL30, 1.05% for CL44, and 1.05% for CL47 (Figures 2–4 and Table S6). The sediments in the study cores are mainly composed of silt and clay. At sites CL30 and CL44, the relative fractions of silt and clay are nearly constant with depth and the sand fractions remain low throughout, except for an elevated sand fraction at ~320 cm in CL30 (Figure S2). At site CL47, the sand fractions are low in the interval of 0–200 cm, followed by an increase in sand fraction with two peaks at depths of ~270 and ~430 cm. Below 500 cm, the sand fraction decreases to almost zero (Figure S2).

## 4.2. Timing of Authigenic Barite Front Accumulation

Dissolved barium fluxes towards the SMTZ were 1.58 mmol m-2 yr-1 for CL30, 1.54 mmol m-2 yr-1 for CL44, and 1.61 mmol m-2 yr-1 for CL47. The calculated time required for the formation of the barite fronts is about 3.2, 3.0, and 1.3 kyr for the three cores, respectively, using an average porosity of 0.69 taken from CL44 (Table S5).
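These flux and timing estimates follow Fick's first law and the inventory-over-flux relation given later in Sections 3.2 and 3.3 (equations (1)–(4)). As a rough illustration only, the Python sketch below (not the authors' implementation; the Ba2+ gradient and excess-Ba inventory are placeholders, while the porosity and free-solution diffusion coefficient are the values quoted in the Methods) shows how such an estimate is assembled.

```python
# Sketch of the barite-front timing estimate (equations (1)-(4)), assuming
# phi = 0.69 and D0 = 4.64e-6 cm2 s-1 as quoted in the Methods; the gradient
# and the excess-Ba inventory used below are illustrative placeholders.
import numpy as np

SEC_PER_YR = 3.156e7

def diffusive_ba_flux(dC_dx, phi=0.69, D0_cm2_s=4.64e-6):
    """Fick's first law with tortuosity correction: J = phi * Ds * dC/dx.

    dC_dx : porewater Ba2+ gradient in mmol m-3 per m (1 uM = 1 mmol m-3)
    returns the magnitude of J in mmol m-2 yr-1
    """
    D0 = D0_cm2_s * 1e-4 * SEC_PER_YR           # free-solution coefficient, m2 yr-1
    Ds = D0 / (1.0 - np.log(phi ** 2))          # eq. (2)
    return phi * Ds * dC_dx                     # eq. (1), sign dropped

def barite_front_age(excess_ba_inventory, ba_flux):
    """eq. (4): time (yr) = depth-integrated excess Ba (mmol m-2) / flux (mmol m-2 yr-1)."""
    return excess_ba_inventory / ba_flux

# Placeholder usage, varied over the porosity range used in the sensitivity tests.
for phi in (0.65, 0.69, 0.75):
    J = diffusive_ba_flux(dC_dx=50.0, phi=phi)       # hypothetical gradient
    print(phi, J, barite_front_age(5.0e3, J))        # hypothetical inventory
```

The porosity sensitivity quoted next follows directly from the phi-dependence of Ds in equation (2).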
Variations of porosity from 0.65 to 0.75 yield barite front formation times ranging between 2.2 and 4.2 kyr for CL30 and between 0.8 and 1.6 kyr for CL47. Sensitivity tests of the background Ba content, Ba2+ fluxes, and porosity are shown in Figures S5 and S6.

## 4.3. Reaction-Transport Modeling

The modeled profiles and reaction rates are shown in Figures 2–4 and Table 4, respectively. The model parameters used to derive these results are listed in Tables S2–S4. The steady-state modeling reproduced the measured concentrations of SO42-, DIC, Ca2+, Mg2+, and PO43- at sites CL44 and CL47 and at site CL30 above the kink, with obvious discrepancies between modeled and measured CH4 concentrations due to the aforementioned degassing during core recovery (Figures 2–4). At site CL30, the model failed to reproduce the concentration gradients of SO42-, DIC, Ca2+, Mg2+, and PO43- below the kink (~3.5 mbsf), which is likely caused by a transient condition that is not considered in the steady-state model.

Table 4 Depth-integrated simulated turnover rates and benthic methane fluxes based on the steady-state modeling.

| Rate | CL30 | CL44 | CL47 | Unit |
| --- | --- | --- | --- | --- |
| FPOC: total POC mineralization rate | 18.8 | 55.2 | 58.1 | mmol m−2 yr−1 of C |
| FOSR: sulfate reduction via POC degradation | 7.6 | 23.9 | 25.1 | mmol m−2 yr−1 of SO42− |
| FME: methane formation via POC degradation | 3.6 | 3.7 | 4.0 | mmol m−2 yr−1 of CH4 |
| FDISS: gas dissolution | 28.4 | 73.3 | 84.7 | mmol m−2 yr−1 of CH4 |
| FAOM: anaerobic oxidation of methane | 30.1 | 74.3 | 85.0 | mmol m−2 yr−1 of CH4 |
| FCP−Ca: authigenic CaCO3 precipitation | 3.0 | 2.2 | 5.6 | mmol m−2 yr−1 of C |
| FCP−Mg: authigenic MgCO3 precipitation | 5.4 | 7.1 | 0 | mmol m−2 yr−1 of C |
| Sulfate consumed by AOM | 79.8 | 75.7 | 77.2 | % |
| Benthic flux of CH4 at SWI | 0.5 | 1.9 | 2.7 | mmol m−2 yr−1 of CH4 |
| Percentage of CH4 flux from depth | 88.0 | 95.0 | 95.3 | % |
| Percentage of CH4 consumed by AOM | 94.1 | 96.5 | 95.8 | % |

The sulfate concentration profile with a kink at site CL30 (Figure 2) could be explained by a recent increase in upward methane flux [9]. The linear extrapolation of the sulfate concentrations in the upper 3.5 m to zero sulfate concentration was taken as the initial condition for the non-steady-state model. Under this condition, the sulfate profile was fitted with a fixed CH4 concentration (67 mM) at the lower boundary, in equilibrium with the gas hydrate solubility under in situ S, T, and P conditions. A sudden increase in CH4 concentration reproduces the observed SO42- concentration profile after running the model for ~85 yr (Figure 6). The increase in methane flux resulted in a prominent increase in the depth-integrated AOM rate from 30.1 mmol m-2 yr-1 (t = 0 yr) to 140 mmol m-2 yr-1 (t = 85 yr).

Figure 6 Evolution of the sulfate profile over time from the simulation of non-steady-state porewater profiles of core CL30.

The initial age of the organic matter was tuned until a good fit was obtained for the PO43- profiles. The mean total depth-integrated rates of POC degradation were about 3 times higher at sites CL44 and CL47 (55.2 and 58.1 mmol m-2 yr-1) than at site CL30 (18.8 mmol m-2 yr-1) (Table 4). The rates of POC degradation through sulfate reduction (POCSR) were 7.6, 23.9, and 25.1 mmol m-2 yr-1 at cores CL30, CL44, and CL47, respectively. In contrast to the relatively low rates of POCSR, AOM dominated the sulfate consumption, with rates of 30.1, 74.3, and 85.0 mmol m-2 yr-1 for CL30, CL44, and CL47, respectively. The AOM rates were mainly sustained by an external methane source, and methanogenesis contributed only a negligible amount of methane (Table 4).
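Several of the percentages in Table 4 are simple bookkeeping on the depth-integrated rates; the short Python check below (a sketch, not the authors' code, and only approximate for the entries that also depend on the benthic flux) shows how they can be recovered.

```python
# Recomputing derived entries of Table 4 from the depth-integrated rates
# (mmol m-2 yr-1); values for CL30, CL44, CL47 in that order.
F_OSR  = [7.6, 23.9, 25.1]    # sulfate reduction via POC degradation
F_ME   = [3.6, 3.7, 4.0]      # in situ methanogenesis
F_DISS = [28.4, 73.3, 84.7]   # gas dissolution (external methane supply)
F_AOM  = [30.1, 74.3, 85.0]   # anaerobic oxidation of methane

for osr, me, diss, aom in zip(F_OSR, F_ME, F_DISS, F_AOM):
    so4_by_aom = 100 * aom / (aom + osr)        # "Sulfate consumed by AOM" (%)
    ch4_from_depth = 100 * diss / (diss + me)   # ~"Percentage of CH4 flux from depth"
    ch4_by_aom = 100 * aom / (diss + me)        # "Percentage of CH4 consumed by AOM"
    print(round(so4_by_aom, 1), round(ch4_from_depth, 1), round(ch4_by_aom, 1))
# -> 79.8/75.7/77.2 %, roughly 88-95 %, and 94.1/96.5/95.8 %, consistent with Table 4.
```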
AOM consumes almost all of the CH4, leaving benthic CH4 fluxes of 0.49, 2.0, and 2.7 mmol m-2 yr-1 at sites CL30, CL44, and CL47, respectively.

## 5. Discussion

### 5.1. Formation Mechanisms of the Nonlinear Porewater Profiles

The sulfate concentration profiles of the porewater in marine sediments depend on the availability of labile organic matter amenable to sulfate reducers, the diffusive/advective methane flux, and depositional conditions [9, 59–64]. Combinations of these factors can result in linear, kink-type, concave-up, concave-down, and sigmoidal (S-shaped) sulfate concentration trends in marine sediments [9].

The porewater profiles of the three study cores exhibit kink-type features. Plausible mechanisms for the occurrence of kink-type profiles include (1) irrigation and seawater intrusion due to biological, physical, and hydrological processes; (2) changes in the sedimentation rate or porosity due to depositional events; and (3) changes in methane flux and upward advection of fluid [14]. Bioirrigation has been shown to generally occur on a decimeter scale in surface sediments [65, 66]. In fact, no macroorganisms were observed in the study cores below the upper few centimeters of sediment. The lithology of the upper two to three meters of the study cores was dominated by fine-grained hemipelagic sediments mainly consisting of silty clay, without any discernible abnormal deposition (Figure S2). Although deep-water turbidity current channel and fan systems are well developed in the study region [67], the homogeneous grain size distributions in cores CL44 and CL47 reveal that the sediments above the sulfate kinks were not impacted by turbidites, which are typically characterized by upward grading in grain size. The C-M plot also suggests the absence of turbidites in the study cores (Figure S2; [68]). Moreover, by comparing the depth profiles of CaCO3 content in CL44 and CL47 with that in an adjacent core (SO49-37KL) with an established Marine Isotope Stage framework, we found that the upper ~2 m of sediments in CL44 and CL47 represent normal hemipelagic background deposition during the Holocene (Figure S3) [69, 70]. The relatively constant ratios of Ti/Al, Si/Al, and Zr/Rb above the kinks indicate a stable input of detrital material (Figure S4). In contrast, the layers in the intervals of ~1.4 to ~4.2 mbsf in CL44 and ~1.8 to ~5 mbsf in CL47, which exhibit high Si, Ti, and Zr/Rb contents and coarser grain sizes (Figure S4), suggest an elevated input of detrital material during sea-level lowstands [71, 72]. In addition, the flat seafloor topography in the study area also precludes the occurrence of abrupt depositional events such as landslides (Figure 1). Therefore, it is unlikely that the irrigation-like features in CL44 and CL47 were caused by mass-transport deposits [44]. Furthermore, there is no indicator of upward fluid advection at sites CL44 and CL47.

We argue that the irrigation-like porewater profiles most probably result from bubble irrigation by free gas rising through escape tubes [7, 12, 51]. Such features were observed at the nearby Haima cold seeps and attributed to bubble irrigation or a recent increase in methane flux [33]. Moreover, BSR and acoustic blanking, which are indicative of free gas accumulation, were identified in the study area (Fang Y., unpublished data).
Hence, gas bubble irrigation is the most likely mechanism to explain the observed profiles at cores CL44 and CL47.

At core CL30, the sediments consist of homogeneous silty clay without discernible abnormal deposition, and the sulfate concentrations decrease gradually rather than maintaining seawater-like values above the kink at 3.5 mbsf (Figure 2). We thus hypothesize that the kink in the sulfate profile at core CL30 results from a (sub)recent increase in the upward methane flux, similar to scenarios reported from the Sea of Marmara, the continental margin offshore Pakistan, the slope area south of Svalbard, the Niger Delta, and the southern SCS, among others [10–13, 44]. A simplified numerical model exercise, assuming a diffusional porewater system with POCSR and AOM as the only biogeochemical reactions, was used to demonstrate this scenario (Figure 6). The assumption of diffusive transport of porewater species is warranted because porewater solute distributions have been suggested to be dominated by diffusion even where free gas transport and fluid advection exist [14, 73].

The current barite fronts are located at about 4.2-6.4 mbsf, very close to the current SMTZ (4.7-7 mbsf), indicating that the barite fronts probably formed from the recent past to the present day, induced by a recent enhancement of methane flux [11]. Indeed, the measured SO42- concentration profile can be reproduced by a sudden increase in CH4 concentration that has lasted for ~85 yr. On the other hand, based on the calculated diffusive Ba2+ fluxes and the depth-integrated Ba contents, the time required to form the observed authigenic barite front above the current SMTZ is about 2.2-4.2 kyr for CL30, given the uncertainties in porosity. The difference between the durations of constant methane flux estimated by these two approaches may suggest that the barite front was not a result of the recent increase in methane flux that induced the kink-type sulfate profile. Instead, it is more likely that the SMTZ has experienced several fluctuations in depth, considering the episodic pulses of upward methane flux that have occurred in this area, as shown by previous studies [25, 32, 33]. Such a decoupling between the sediment and porewater records is commonly observed at cold seeps [74–77] and is considered to reflect variations of methane fluxes and of the resulting SMTZ position in the sedimentary column [74]. Observations and numerical modeling suggest that porewater geochemical signatures respond on timescales of months to centuries, whereas authigenic barite deposits accumulate on timescales of decades to hundreds of thousands of years [11, 12, 14, 16–19, 74]. On the whole, our results suggest that combining porewater data with sedimentary barite front records may provide important clues for a better understanding of the evolution of methane seepage.

### 5.2. Methane-Related Carbon Cycling and Source of Methane

Based on the simulation results derived from the steady-state modeling, AOM consumed ~80%, 76%, and 77% of the sulfate in CL30, CL44, and CL47, respectively. AOM thus acts as an efficient barrier preventing methane from being released into the water column at the studied cores. This is supported by the low δ13CDIC values at the SMTZs, which are mainly derived from methane.
AOM increases porewater alkalinity by producing bicarbonate and results in the precipitation of authigenic carbonates, as shown by the decrease in Ca2+ and Mg2+ concentrations with depth (Figures 3–6).

In addition, the δ13CDIC values below the SMTZ become more positive than those at the SMTZ. This reversal in δ13CDIC below the SMTZ is caused by the generation of 13C-enriched DIC via local methanogenesis in the methanogenic zone [63, 78]. The 13C-enriched DIC would migrate into the SMTZ from the methanogenic zone and “dilute” the 12C pool of DIC in porewater. Thus, in a closed system, DIC generated by local methanogenesis is an important source of DIC in the carbon budget within the SMTZ [78, 79].

Based on the modeling results of methane turnover, the depth-integrated AOM rates at cores CL30, CL44, and CL47 are about 8 to 21 times the in situ methanogenesis rates (Table 4). Therefore, the relative proportions of external methane contributing to the total methane pool are 88%, 95%, and 95% at cores CL30, CL44, and CL47, respectively. This indicates that the majority of the methane fuelling AOM at the SMTZ was sourced from subsurface sediments. There are two general pathways for producing methane in marine sediments: microbial methane generated via CO2 reduction or the fermentation of reduced carbon substrates (e.g., acetate and methanol; [80]), and thermogenic methane formed via thermal cracking of organic matter and/or heavy hydrocarbons [81]. The δ13C values of methane are generally distinct between these two types. The δ13C values of microbial methane typically range from −50‰ to −110‰ [80], whereas those of thermogenic methane range from −30‰ to −50‰ [81].

Because δ13C values of headspace methane in the sediments are not available in the study area, porewater DIC contents and δ13CDIC are used to constrain the origin of methane. Generally, porewater DIC in marine sediments is mainly derived from (1) DIC diffusing from the overlying seawater into the sediments or seawater DIC trapped within sediments during burial, (2) DIC generated by the degradation of sedimentary organic matter, (3) DIC produced by AOM, and (4) residual DIC derived from methanogenesis [82, 83]. In order to obtain the carbon isotopic composition of DIC derived from external methane, we applied a simple four-end-member mixing model. The four end-members are (1) seawater-derived DIC trapped within sediments during burial (SW), (2) DIC produced by POCSR, (3) DIC derived from external methane (EM) via AOM, and (4) DIC generated by in situ methanogenesis (ME). Note that methane produced via local methanogenesis was assumed to be completely recycled by AOM; as a result, the carbon isotopic composition of DIC produced by local methanogenesis is identical to that of organic matter (OM) [82–84]. In a closed system, the δ13C balance of the porewater DIC pool at the SMTZ can be expressed by

(9) δ13Cex = XSW × δ13CSW + XOSR × δ13COM + XAOM × δ13CEM + XME × δ13COM,

where X is the proportion of DIC contributed to the total DIC pool and the subscripts SW, OSR, AOM, and ME refer to DIC derived from seawater, organic matter degradation via sulfate reduction, external methane, and local methanogenesis, respectively. The XSW values are estimated as the typical seawater DIC concentration (2.1 mM) divided by the DIC concentration at the SMTZ, and δ13CSW is assumed to be 0‰. The δ13C value of sedimentary organic matter in sediments of the SCS (−20‰; [85]) is used for δ13COM.
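Equation (9) can be rearranged to solve for δ13CEM once the other end-members and the model-derived fractions are fixed. The short Python sketch below (an illustration, not the authors' code) does this for CL30 using the values listed in Table 5 below; it returns roughly −74‰, in the microbial methane range.

```python
# Minimal sketch: rearranging the closed-system DIC mass balance (equation (9))
# to estimate the d13C of DIC derived from external methane (d13C_EM).
# End-member values and fractions follow Table 5 for core CL30.

def d13c_external_methane(d13c_ex, x_sw, x_osr, x_aom, x_me,
                          d13c_sw=0.0, d13c_om=-20.0):
    """Solve eq. (9) for d13C_EM (per mil, VPDB); fractions given as 0-1."""
    return (d13c_ex - x_sw * d13c_sw - (x_osr + x_me) * d13c_om) / x_aom

# CL30: d13Cex = -48.1 permil, XSW = 0.098, XOSR = 0.280, XAOM = 0.555, XME = 0.066
print(d13c_external_methane(-48.1, 0.098, 0.280, 0.555, 0.066))  # ~ -74 permil
```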
The overall δ13C of DIC derived from methanogenesis is set equal to δ13COM, assuming that methane produced by local methanogenesis is completely converted back to DIC by AOM [84]. The contribution fractions of OSR, AOM, and local methanogenesis, denoted XOSR, XAOM, and XME, are calculated from the steady-state modeling. The δ13Cex can be obtained from a regression of porewater δ13CDIC×DIC vs. DIC (Figure 7). This regression is commonly linear in seep-impacted sediments, thus providing a well-constrained δ13C of the DIC supplied to the porewater [82, 86, 87]. The δ13Cex values and the contribution fractions estimated from the model are listed in Table 5. The estimated δ13CEM values in the shallow sediments are -74.1‰ (CL30), -75.4‰ (CL44), and -66.7‰ (CL47), suggesting that the external methane migrating into the shallow sediments is microbial in origin [83, 88]. The absence of higher hydrocarbons in headspace gas samples also supports a microbial origin of methane in the study area.

Figure 7 Plots of DIC vs. DIC×δ13CDIC. The δ13Cex values were calculated using the linear regression of DIC×δ13CDIC vs. DIC for CL30 (a), CL44 (b), and CL47 (c).

Table 5 Fractions of DIC from different sources contributing to the total DIC pool and δ13C values of the different sources.

| Core ID | δ13CSMTZ (‰, VPDB) | δ13Cex (‰, VPDB) | δ13CSW (‰, VPDB) | δ13COM (‰, VPDB) | XSW (%) | XME (%) | XOSR (%) | XAOM (%) | δ13Cmethane (‰, VPDB) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CL30 | -46.4 | -48.1 | 0 | -20 | 9.8 | 6.6 | 28.0 | 55.5 | -74.1 |
| CL44 | -41.0 | -46.1 | 0 | -20 | 9.8 | 5.3 | 34.3 | 50.6 | -75.4 |
| CL47 | -38.8 | -43.1 | 0 | -20 | 8.7 | 5.2 | 32.9 | 53.1 | -66.7 |

Previous studies have suggested that microbial methane was the main hydrocarbon source at the Haima cold seeps, with minor contributions of oil-derived compounds and pyrolysis gas [32, 33, 89, 90]. Gas chimney structures, which are well developed around the Haima cold seeps, might serve as conduits for the upward migration of biogenic gas from the underlying free gas reservoir beneath the GHSZ to shallow sediments [29, 42]. This suggests that the microbial methane at the study sites might be derived from an underlying free gas reservoir trapped beneath the GHSZ.

### 5.3. Implications of the Time Constraint on Methane Seepage

Based on the diffusive dissolved barium fluxes and the excess barium contents in the sediments, the time needed to form the observed barite enrichments just above the current SMTZ is estimated to be about 3 kyr and 0.8-1.6 kyr for CL44 and CL47, respectively, given the uncertainties in porosity. These results suggest that the SMTZ has been fixed at the current sediment depth for a period of at least several thousand years at these sites. The irrigation-type sulfate profiles are possibly maintained by continuous mixing of seawater into the sediment over these time periods, as in the pockmark sediments of the Congo Fan [74]. Furthermore, the depth of the SMTZ is speculated to have fluctuated due to variations in methane flux, as suggested by the difference between the estimated duration of the barite enrichment and that of the recent increase in methane flux at CL30. Overall, our results show that the methane flux has been fluctuating over the last hundreds to thousands of years in the vicinity of the Haima cold seeps.

In fact, methane seepage around the Haima cold seeps is characterized by distinct periodicity of seep activity during the past several thousand years. Radiocarbon ages of bivalve shells suggest that a major seepage event occurred during the period of 6.1 to 5.1 ka B.P., followed by a subordinate seepage event spanning 3.9 to 2.9 ka B.P. at the Haima cold seeps [25].
The widespread occurrence of dead bivalves on the seafloor reflects a decline in current seepage intensity [25]. Moreover, modeling of porewater profiles at the Haima cold seeps predicts that gas hydrate formation in the seepage center started at least 150 yr B.P. and that the subsequent sealing by gas hydrates favored the lateral migration of methane-rich fluids in the coarser, more permeable interval [33]. Sedimentation dynamics, including sediment instabilities and mass wasting, may trigger the destabilization of the gas hydrate reservoir and the resulting occurrence of methane seepage. The evolution and fate of methane seepage are also considered to be affected by local fluid flow dynamics and the associated migration of both free gas and methane-rich fluids along fractures, as well as the redirection of gas supply from the reservoir due to pore space clogging by gas hydrate in shallow sediments [25, 33]. The exact mechanism of the changes in methane flux around the Haima cold seeps area is beyond the scope of this study. Despite this, our quantitative study provides some constraints on the duration of methane seepage and may have implications for understanding the evolution of methane seepage in the petroliferous Qiongdongnan Basin.

## 6. Conclusions

This study aimed to understand the methane source and turnover as well as to provide some constraints on the timing of methane seepage to the west of the “Haima cold seeps.” The steady-state reaction-transport modeling of SO42-, CH4, DIC, PO43-, Ca2+, and Mg2+ in CL44 and CL47 suggests that gas bubble transport may lead to the irrigation-like feature in the upper 2 m and relatively high AOM rates (74.3 mmol m−2 yr−1 for CL44 and 85.0 mmol m−2 yr−1 for CL47). The time required for the enrichment of the authigenic barium fronts slightly above the current SMTZ is approximately 3 kyr for CL44 and 0.8-1.6 kyr for CL47. In contrast, a recent increase in methane flux (initiated ~85 yr ago) is the likely cause of the kink at 3.5 m in the sulfate profile of CL30, as demonstrated by the transient-state modeling. The estimated time required for the formation of the diagenetic barium peak just above the current SMTZ is 2.2-4.2 kyr at this core. The discrepancy between the time estimates constrained by these two different approaches suggests that the position of the SMTZ has possibly fluctuated due to variations in methane flux at the site. In addition, based on the four-end-member DIC mixing calculation, the δ13C values of the external methane in cores CL30, CL44, and CL47 are -74.1‰, -75.4‰, and -66.7‰, respectively.
This is indicative of a biogenic origin of the external methane from an underlying reservoir. Our results suggest that methane seepage occurs over a broader area in the vicinity of the “Haima cold seeps” and that the methane fluxes may have fluctuated frequently over the last several hundred to several thousand years.

---
*Source: 1010824-2019-08-05.xml*
# Methane Source and Turnover in the Shallow Sediments to the West of Haima Cold Seeps on the Northwestern Slope of the South China Sea

**Authors:** Junxi Feng; Shengxiong Yang; Hongbin Wang; Jinqiang Liang; Yunxin Fang; Min Luo

**Journal:** Geofluids (2019)

**Publisher:** Hindawi

**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)

**DOI:** 10.1155/2019/1010824
--- ## Abstract The Haima cold seeps are active cold seep areas that were recently discovered on the northwestern slope of the South China Sea (SCS). Three piston cores (CL30, CL44, and CL47) were collected within an area characterized by bottom simulating reflectors to the west of Haima cold seeps. Porewater profiles of the three cores exhibit typical kink-type feature, which is attributed to elevated methane flux (CL30) and bubble irrigation (CL44 and CL47). By simulating the porewater profiles of SO42-, CH4, PO43-, Ca2+, Mg2+, and dissolved inorganic carbon (DIC) in CL44 and CL47 using a steady-state reaction-transport model, we estimated that the dissolved SO42- was predominantly consumed by anaerobic oxidation of methane (AOM) at rates of 74.3 mmol m−2 yr−1 in CL44 and 85.0 mmol m−2 yr−1 in CL47. The relatively high AOM rates were sustained by free gas dissolution rather than local methanogenesis. Based on the diffusive Ba2+ fluxes and the excess barium contents in the sediments slightly above the current SMTZ, we estimated that methane fluxes at core CL44 and CL47 have persisted for ca. 3 kyr and 0.8-1.6 kyr, respectively. The non-steady-state modeling for CL30 predicted that a recent increase in upward dissolved methane flux was initiated ca. 85 yr ago. However, the required time for the formation of the barium front above the SMTZ at this core is much longer (ca. 2.2-4.2 kyr), which suggests that the depth of SMTZ possibly has fluctuated due to episodic changes in methane flux. Furthermore, using the model-derived fractions of different DIC sources and the δ13CDIC mass balance calculation, we estimated that the δ13C values of the external methane in cores CL30, CL44, and CL47 are -74.1‰, -75.4‰, and -66.7‰, respectively, indicating the microbial origin of methane. Our results suggest that methane seepage in the broader area surrounding the Haima cold seeps probably has persisted at least hundreds to thousands of years with changing methane fluxes. --- ## Body ## 1. Introduction Methane in marine sediments as dissolved gas in porewater or free gas (bubbles) depending on its in situ solubility is a significant component of the global carbon cycle. Methane could also exist in an ice-like solid as gas hydrate if the in situ gas hydrate solubility concentration is oversaturated at suitable pressure-temperature conditions [1]. The base of the gas hydrate reservoir in marine sediments is present as a characteristic discontinuity known as a bottom-simulating reflector (BSR), which results from the occurrence of free gas beneath the gas hydrate stability zone (GHSZ) [2]. The great majority of methane is consumed by the microbial consortium via anaerobic oxidation of methane within the sulfate-methane transition zone (SMTZ) where methane meets sulfate diffusing downwards from seawater (AOM: CH4+SO42‐→HCO3‐+HS‐+H2O) [3, 4]. Through this, reaction methane is converted to dissolved inorganic carbon (DIC) which could be partially removed from solution by authigenic carbonate precipitation [5, 6]. Therefore, AOM largely prevents dissolved methane from entering water column and plays a significant role in marine carbon cycling.Gas bubble rise is a particularly effective mechanism for transporting methane through the sediment and into the bottom water because gas ascension can be faster than bubble dissolution [7] and methane gas cannot directly be consumed by microorganisms [8]. 
The rising methane gas bubbles emitting from the seafloor can mix bottom seawater down into the sediment column over several meters. The resulting kink-type porewater profiles are supposed to be stable for several years to decades even after active gas bubble ebullition has ceased [7]. Enhancement in upward methane flux could also result in kink-type or concave-up porewater profiles [9–12]. Hence, these types of nonlinear porewater profiles can thus be used to estimate the timing of (sub)recent methane pulse and provide insights into the dynamics of methane seepage and the underlying gas hydrate reservoir [10, 11, 13, 14].At the steady-state condition, Ba2+ diffusing upward into the sulfate-bearing zone above the SMTZ precipitates as barite and forms the authigenic barium fronts (Ba2++SO42‐→BaSO4) [15]. When buried beneath the sulfate-bearing zone, barite tends to dissolve and release Ba2+ into the porewater below the SMTZ due to unsaturation. Through this cycling, authigenic barium fronts would be stably developed just above the SMTZ [15–17]. The content of authigenic barite depends on the upward diffusive Ba2+ flux and the duration that the SMTZ has persisted at a given depth interval. The time required for barium front formation above the SMTZ could thus be calculated based on the depth-integrated excess barium contents and the porewater dissolved barium concentration gradients assuming a constant upward Ba2+ flux. Therefore, the authigenic barium fronts in sediments can be used to trace present and past SMTZ and associated methane release events as well as the duration of methane seepage that has persisted under a given methane flux [16–20].Methane seepages are widespread on the northern slope of the South China Sea (SCS) as revealed by authigenic carbonates collected at more than 30 cold seep sites [21–26]. The Haima cold seeps were recently discovered on the northwestern slope of SCS [25]. Several sites with gas bubbling identified by hydroacoustic anomalies, and shallow gas hydrates were found around this area [27–31]. Recent studies have shown a pronounced temporal change in methane seepages and a potential lateral migration of methane-bearing fluid along more permeable sand-bearing layer at Haima cold seeps [25, 32, 33]. Nevertheless, our quantitative understanding of the methane dynamics in this area remains scarce.In this study, we present porewater geochemical data of three piston cores (CL30, CL44, and CL47) collected to the west of Haima cold seeps, including concentrations of sulfate (SO42-), calcium (Ca2+), magnesium (Mg2+), barium (Ba2+), phosphate (PO43-), methane (CH4), and DIC as well as the carbon isotopic compositions of DIC. Using a steady-state reaction-transport model, we quantify the methane turnover rates in CL44 and CL47 which are mainly supplied by rising free gas. The kink in the porewater profiles of CL30 was reconstructed using a non-steady-state modeling approach assuming a recent increase in methane flux. In addition, authigenic Ba enrichments were used to constrain the durations that the current or past methane seepages have persisted. Furthermore, a simple mass balance model of DIC and δ13CDIC was applied to explore the methane source. ## 2. Geological Background The northern SCS is characterized as a Cenozoic, Atlantic-type passive continental margin [34], where the marginal basins generally underwent two stages of evolution, including the rift stage and the postrift thermal subsidence stage [35]. 
Qiongdongnan Basin is a northeastern trended Cenozoic sedimentary basin which developed on the northwestern part of the SCS [36]. Covered by sedimentary materials of up to 10 km, the depositional environment of the basin initially transformed from lacustrine to marine conditions and later from neritic to bathyal, starting from Eocene till present [37]. During the rifting stage, numerous half-grabens and sags were developed. After that, postrift thermal subsidence occurred and a thick sediment sequence dominated by mudstones was deposited in the basin since Miocene. Collectively, the sedimentation rates and the present-day geothermal gradient are both high in the Qiongdongnan Basin [38]. The thick sediment sequences, high geothermal gradient along with faulting and/or diapirism, have facilitated the generation and migration of the hydrocarbons in the basin [39]. The widely distributed bottom-simulating reflectors and gas chimneys identified in the Qiongdongnan Basin were linked to the accumulation of gas hydrate [40, 41].The active Haima cold seeps have been discovered in the southern uplift belt of the Qiongdongnan Basin on the lower continental slope of the northwestern SCS during R/V Haiyang-6 cruises in 2015 and 2016. Abundant chemosynthetic communities, methane-derived authigenic carbonates, and massive gas hydrates were found at the Haima cold seeps [25]. The dating of bivalve shells and seep carbonates revealed episodic changes in seepage activity [25]. Other features of methane seeps, such as acoustic plume, acoustic void, chimney structures, and pockmarks, were also reported at the Haima cold seeps and its surrounding area [27–31, 42]. The sampling sites are ca. 20 to 30 kilometers west of the Haima cold seeps, where BSR is well developed (Fang Y., unpublished data). The bathymetric investigation has shown a relatively flat topography, and the water depths range from 1250 to 1300 m in the study area (Figure 1).Figure 1 (a) Location of the study area. The grey area represents the subsurface area lineated by seismic investigation. The location of the reference core SO49-37KL (blue circle) is also shown. (b) Locations of sampling sites (red dots) and the Haima cold seeps (blue dots). (a) (b) ## 3. Materials and Methods ### 3.1. Sampling and Analytical Methods Three piston cores (CL30, CL44, and CL47) were collected from the southern Qiongdongnan Basin west to the Haima cold seeps at water depths ranging from 1255 m to 1301 m during the R/V Haiyang-4 cruise conducted by Guangzhou Marine Geological Survey in 2014 (Figure1 and Table 1). The sediments of the three cores mainly consist of greyish-green silty clay. Notably, the sediments at the bottom of core CL44 yielded a strong odour of hydrogen sulfide. Porewater samples were then collected onboard using Rhizon samplers with pore sizes of the porous part of approximately 0.2 mm at intervals of 20 cm for CL44 and 60 cm for CL30 and CL47. All the porewater samples were preserved at ~4°C until further analyses.Table 1 Information on the studied cores from the northwestern South China Sea. Site Water depth (m) Seafloor temperature (°C) Core length (cm) CL30 1255 3.3 630 CL44 1279 3.2 752 CL47 1301 3.1 775PO43- concentrations were measured onboard using the spectrophotometric method according to Grasshoff et al. [43] with a UV-Vis spectrophotometer (Hitachi U5100). The precision for phosphate was ±3.0%. 10 ml of sediments was added to 20 ml empty vials onboard to replace the 10 ml headspace needed for the chromatograph injection. 
The concentrations of hydrocarbon gas were measured onboard using the gas chromatograph method (Agilent 7890N). The precision for methane measurements was ±2.5% [44]. Porosity and density were determined from the weight loss before and after freeze-drying of the wet sediments using a cutting ring with definite mass (15 g) and volume (9.82 cm3) onboard at core CL44. The porosity and density were calculated assuming a density of the porewater of 1.0 g cm-3.The offshore analyses of porewater samples for core CL44 and for cores CL30 and CL47 were performed at the Nanjing University and the Third Institute of Oceanography, State Oceanic Administration, respectively. For core CL44, SO42-, Ca2+, and Mg2+ were measured using the standard method of ion chromatography (Metrohm 790-1, Metrosep A Supp 4-250/Metrosep C 2-150). The relative standard deviation was less than 3%. Ba2+ concentrations were measured by inductively coupled plasma mass spectrometry (ICP-MS, Finnigan Element II). Before measurement, samples were prepared by diluting in 2% HNO3 with 10 ppb of Rh as an internal standard. The analytical precisions were estimated to be <5% for Ba2+. For cores CL30 and CL47, SO42-, Ca2+, and Mg2+ concentrations were determined on a Thermo Dionex ICS-1100 ion chromatograph after a 500-fold dilution using ultrapure water [44]. Porewater samples were prepared by diluting in 2% HNO3 with 10 ppb of Tb as an internal standard before analysis for Ba2+ using the ICP-MS (Thermo Fisher iCAPQ). The analytical precisions were estimated to be <5% for Ba2+.For core CL44, DIC concentrations andδ13CDIC values were determined using a continuous flow mass spectrometer (Thermo Fisher Delta-Plus). 0.5 ml porewater was treated with pure H3PO4 in a glass vial at 25°C. The CO2 produced was stripped with He and transferred into the mass spectrometer through the measurement of the δ13C value [45]. For cores CL30 and CL47, the DIC concentrations and carbon isotopic ratios were determined via a continuous flow mass spectrometer (Thermo Delta V Advantage). A 0.2 ml porewater sample was treated with pure H3PO4 in a glass vial at 25°C. The CO2 produced was stripped with He and transferred into the mass spectrometer through which the δ13C values were measured. The analytical precisions were better than 0.2‰ for δ13C and better than 2% for DIC concentration [44].The particulate organic carbon (POC) contents were determined using the potassium dichromate wet oxidation method. The relative standard deviation of the POC content is <1.5%. The aluminium (Al), silicon (Si), and titanium (Ti) concentrations of the sediment samples at cores CL30, CL44, and CL47 were analyzed using PANalytical AXIOSX X-ray fluorescence spectrometry (XRF). The analytical precisions were estimated to be <2% for Al, Si, and Ti. The contents of Ba, zircon (Zr), and rubidium (Rb) in bulk sediments were determined using a PerkinElmer Optima 4300DV ICP-OES after digestion using HCl, HF, and HClO4 acid mixture. Rhodium was added as an internal standard for calculating the concentrations of the trace elements. The analytical precisions were estimated to be <2% for Ba, Zr, and Rb. The carbonate (CaCO3) contents of the sediment samples were determined by titration with EDTA standard solution. The analytical precisions were estimated to be <2%. 
For grain size measurements, approximately 0.5 g of the unground sample was treated with 10% (v/v) H2O2 for 48 h to oxidize organic matter and then dispersed and homogenized in sodium hexametaphosphate solution using ultrasonic vibration for 30 s before being analyzed by a laser grain size analyzer (Mastersizer 2000). The detection limit ranged from 0.5 μm to 2000 μm. Particles<4μm in size were classified as clay, 4 to 63 μm as silt, and larger than 63 μm as sand. The analytical precision is better than 3%. ### 3.2. Diffusive Flux Calculation To calculate the diffusive Ba2+ fluxes below the kink at cores CL30, CL44, and CL47, equations (1) and (2) were used assuming a steady-state condition [46]: (1)Jx=−φDsdCdx,(2)Ds=D01−lnφ2,where Jx represents the diffusive flux of Ba2+ (mmol m-2 yr-1), φ is the porosity, D0 is the diffusion coefficient for seawater (m2 s-1), DS is the diffusion coefficient for sediments (m2 s-1), C is the concentration of barium (mmol l-1), and x is the sediment depth (m). The average of sediment porosity of core CL44 (0.69) is applied to cores CL30 and CL47. ### 3.3. Estimating the Accumulation Time of Diagenetic Barite The total amount of excess Ba within the interval of the barium peak was calculated using an integral equation:(3)Ax=∫uvCx⋅ρ⋅1−φ,where ∫uvCx is the integral value of barium concentration in a peak from a depth interval from u to v, ρ and φ are the average grain density and porosity of the sediments, respectively.Under the premise of a constant diffusive upward flux of Ba2+ into the sulfate-bearing zone, the time needed for barium front formation was calculated using the equation: (4)tx=AxJx.In this case,tx is the time for barite enrichment, Ax is the depth-integrated excess barium content within a peak, and Jx is the upward diffusive flux of Ba2+. The diffusive flux was calculated using equations (1) and (2). Ds is the tortuosity- and temperature-corrected diffusion coefficient of Ba2+ in the sediment, calculated from the diffusion coefficient in free solution (D0) of 4.64, 4.62, and 4.61×10−6 cm2 s-1 (3.3, 3.2, and 3.1°C) for CL30, CL44, and CL47, respectively, according to Boudreau [47]. ### 3.4. Reaction-Transport Model A one-dimensional, steady-state, and reaction-transport model was applied to simulate one solid (POC) and six dissolved species including SO42-, CH4, DIC, PO43-, Ca2+, and Mg2+. The model is modified from previous simulations of methane-rich sediments [48–51], and a full description of the model is shown in Supplementary Materials. All the reactions considered in the model and the expression of kinetic rate are listed in Table 2.Table 2 Rate expressions of the reactions considered in the model. 
Rate Kinetic rate law∗ Total POC degradation (wt.% C yr-1) RPOC=0.16⋅a0+xvs−0.95⋅POC POM degradation via sulfate reduction (mmol cm-3 yr-1 of SO42-) RSR=0.5⋅RPOC⋅KSO42−/SO42−+KSO42−fPOC Methanogenesis (mmol cm-3 yr-1 of CH4) RMG=0.5⋅RPOC⋅KSO42−/SO42−+KSO42−fPOC Anaerobic oxidation of methane (mmol cm-3 yr-1 of CH4) RAOM=kAOM⋅SO42−CH4 Authigenic Ca-carbonate precipitation (mmol cm-3 yr-1 of Ca2+) RCP‐Ca=kCa⋅Ca2+⋅CO32−KSP−1 Authigenic Mg-carbonate precipitation (mmol cm-3 yr-1 of Mg2+) RCP‐Mg=kMg⋅Mg2+⋅CO32−KSP−1 Gas bubble irrigation (mmol cm-3 yr-1) RBui=α1⋅expLirr−x/α21+expLirr−x/α2⋅C0−Cx Gas bubble dissolution (mmol cm-3 yr-1 of CH4) Rdiss=kMB⋅LMB−CH4 ∗fPOC converts between POC (dry wt.%) and DIC (mmol cm-3 of porewater): fPOC=MWC/10Φ/1−Φ/ρS, where MWC is the molecular weight of carbon (12 g mol-1), ρS is the density of dry sediments, and Φ is the porosity.Solid species are transported through the sediments only by burial with prescribed compaction, which is justified because we are only concerned with the anoxic diagenesis below the bioturbated zone. For sites CL44 and CL47, solutes are considered to be transported by molecular diffusion, porewater burial, and gas bubble irrigation, whereas for site CL30, solutes are regarded to be transported by molecular diffusion and porewater burial. Rising gas bubbles facilitate the exchange of porewater and bottom water as they move through tube structures in soft sediments [7]. Although this process was not observed directly, there are evidences implying that it is a significant pathway for transporting methane into the upper 10 m of sediment at sites CL44 and CL47 and driving the mixture of porewater and seawater in the upper two meters (see Section 5.1). The induced porewater mixing process was described as a nonlocal transport mechanism whose rate for each species is proportional to the difference between solute concentrations at the sediment surface C0 (mmol cm-3) and at depth below the sediment surface Cx (mmol cm-3) (RBui, Table 2). Bubble irrigation is described by parameters α1 (yr-1) and α2 (cm) that define the irrigation intensity and its attenuation below the irrigation depth Lirr (cm), respectively [49]. The latter can be determined by visual inspection of the porewater data (see Results) whereas α1 is a model fitting parameter. For the sake of parsimony, α2 is assumed to be constant for both sites.Although dissolution of gas was allowed to occur over the whole sediment column, the rising methane gas was not explicitly modeled. The rate of gas dissolution,Rdiss (mmol cm-3 yr-1), was described using a pseudo-first-order kinetic expression of the departure from the local methane gas solubility concentration, LMB (mmol cm-3), where kMB (yr-1) is the kinetic constant for gas bubble dissolution (Table 2). Methane only dissolves if the porewater is undersaturated with respect to LMB: (5)CH4g→CH4aqforCH4≤LMBLMB was calculated for the in situ salinity, temperature, and pressure using the algorithm in [52]. kMB was constrained using the dissolved sulfate and DIC data (see below).Major biogeochemical reactions considered in the model are particulate organic matter (POM) degradation via sulfate reduction, methanogenesis, AOM, and authigenic carbonate precipitation. 
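As a rough illustration of the two bubble-related terms just described (a sketch in Python, not the authors' MATHEMATICA implementation), the nonlocal bubble irrigation and gas dissolution rate expressions of Table 2 can be written as follows; all parameter values in the usage example are placeholders rather than the calibrated values of Tables S2–S4.

```python
# Sketch of the bubble-irrigation (R_Bui) and gas-dissolution (R_diss) terms
# from Table 2; parameter values below are illustrative placeholders.
import numpy as np

def bubble_irrigation_rate(C, C0, x, alpha1, alpha2, L_irr):
    """R_Bui = alpha1 * exp((L_irr - x)/alpha2) / (1 + exp((L_irr - x)/alpha2)) * (C0 - C)."""
    w = np.exp((L_irr - x) / alpha2)
    return alpha1 * w / (1.0 + w) * (C0 - C)

def gas_dissolution_rate(CH4, L_MB, k_MB):
    """R_diss = k_MB * (L_MB - CH4), applied only where porewater is undersaturated."""
    return np.where(CH4 < L_MB, k_MB * (L_MB - CH4), 0.0)

# Example: irrigation of sulfate over the upper ~2 m of a 10 m column.
x = np.linspace(0.0, 1000.0, 101)          # depth below seafloor (cm)
so4 = np.full_like(x, 10.0)                # hypothetical porewater SO4 profile (mM)
r_irr = bubble_irrigation_rate(so4, C0=28.0, x=x, alpha1=0.5, alpha2=10.0, L_irr=200.0)
print(r_irr[0], r_irr[50])                 # strong exchange above L_irr, ~0 below it
```

The sigmoid factor simply switches the exchange term on above the irrigation depth Lirr and off below it, with alpha2 controlling how sharp that transition is.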
Organic matter mineralization via aerobic respiration, denitrification, and metal oxide reduction was ignored since these processes mainly occur in the surface sediments, which were mostly lost during coring. POM is chemically defined as $\mathrm{CH_2O(POP)}_{r_P}$, where CH2O and POP denote particulate organic carbon and phosphate, respectively. The total rate of POM mineralization, RPOC (wt.% C yr-1), is calculated with the power law model of [53], which considers the initial age of organic matter in surface sediments, a0 (yr) (Table S2). POM mineralization coupled to sulfate reduction follows the stoichiometry

$$2\,\mathrm{CH_2O(POP)}_{r_P} + \mathrm{SO_4^{2-}} + 2r_P\,\mathrm{H^+} \rightarrow 2\,\mathrm{HCO_3^-} + \mathrm{H_2S} + 2r_P\,\mathrm{PO_4^{3-}} \quad (6)$$

where rP is the ratio of particulate organic phosphate to carbon, assumed to take the typical value of 1/106 [48]. When sulfate is almost completely consumed, the remaining POM is degraded via methanogenesis:

$$2\,\mathrm{CH_2O(POP)}_{r_P} \rightarrow \mathrm{CO_2} + \mathrm{CH_4} + 2r_P\,\mathrm{PO_4^{3-}} \quad (7)$$

The dominant pathways of methanogenesis in marine sediments are organic matter fermentation and CO2 reduction [54]. Their net reactions at steady state are balanced, with equivalent amounts of CO2 and CH4 being produced per mole of POM degraded [55]; equation (7) therefore represents the net reaction of methanogenesis. Methane is considered to be consumed by AOM [3]:

$$\mathrm{CH_4} + \mathrm{SO_4^{2-}} \rightarrow \mathrm{HCO_3^-} + \mathrm{HS^-} + \mathrm{H_2O} \quad (8)$$

The rate constant for AOM, kAOM (cm3 mmol-1 yr-1), is tuned to the sulfate profiles within the SMTZ. The loss of Ca2+ and Mg2+ resulting from the precipitation of authigenic carbonates as Ca-calcite and Mg-calcite, $(\mathrm{Ca^{2+},\,Mg^{2+}}) + \mathrm{HCO_3^-} \rightarrow (\mathrm{Ca,Mg})\mathrm{CO_3} + \mathrm{H^+}$, was simulated in the model using the thermodynamic solubility constant as defined in [56] (Table 2). A typical porewater pH value of 7.6 was used to calculate CO32- from modeled DIC concentrations [57]. (Ca,Mg)CO3 was not simulated explicitly in the model.

The length of the simulated model domain was set to 1000 cm. Upper boundary conditions for all species were imposed as fixed concentrations (Dirichlet boundary) using measured values in the uppermost sediment layer where available. For CL44 and CL47, a zero concentration gradient (Neumann-type boundary) was imposed at the lower boundary for all species. For CL30, a zero concentration gradient was imposed at the lower boundary for all species except CH4; the CH4 concentration at the lower boundary was a tunable parameter constrained from the SO42- profile. The model was solved using the NDSolve object of MATHEMATICA V. 10.0. The steady-state simulations were run for $10^7$ yr to achieve steady state with a mass conservation of >99%. Further details on the model solutions can be found in Supplementary Materials. For the non-steady-state modeling of CL30, a fixed methane concentration in equilibrium with the gas hydrate solubility constrained by local seafloor temperature, pressure, and salinity was defined as the lower boundary for methane [58]. The extrapolation of sulfate concentrations in the upper 3.5 m to zero was taken as the initial condition prior to the increase in methane flux (Supplementary Materials). The basic model construction and kinetic rate expressions as well as the upper and lower boundary conditions for the other species were identical to those in the steady-state model.
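To make the kinetic expressions in Table 2 concrete, the short sketch below (Python) evaluates the power-law POC decay and the Monod partitioning of POC degradation between sulfate reduction and methanogenesis. The parameter values (half-saturation constant, initial age, burial velocity, and POC content) are illustrative placeholders rather than the fitted values used for CL30, CL44, and CL47, and the conversion factor f_POC of Table 2 is omitted, so only the fractional split is shown.

```python
# Illustrative sketch of the Table 2 rate laws (placeholder parameters, not fitted values).

def r_poc(poc_wt_pct, a0_yr, depth_cm, v_s_cm_yr):
    """Total POC degradation rate (wt.% C yr-1): R_POC = 0.16 * (a0 + x/vs)**-0.95 * POC."""
    age_yr = a0_yr + depth_cm / v_s_cm_yr
    return 0.16 * age_yr ** -0.95 * poc_wt_pct

def partition(so4_mM, k_so4_mM):
    """Monod partitioning of POC degradation (Table 2): sulfate reduction takes
    SO4/(SO4 + K); methanogenesis takes the complementary K/(SO4 + K)."""
    f_sr = so4_mM / (so4_mM + k_so4_mM)
    return f_sr, 1.0 - f_sr

# Example: 1 wt.% POC buried to 500 cm with a0 = 10 kyr and vs = 0.01 cm yr-1 (placeholders)
print(f"R_POC = {r_poc(1.0, 1.0e4, 500.0, 0.01):.2e} wt.% C yr-1")

# How the split shifts as sulfate is drawn down towards the SMTZ (K_SO4 = 0.5 mM, placeholder)
for so4 in (28.0, 10.0, 1.0, 0.1):
    f_sr, f_mg = partition(so4, k_so4_mM=0.5)
    print(f"SO4 = {so4:5.1f} mM -> sulfate reduction {f_sr:.2f}, methanogenesis {f_mg:.2f}")
```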
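The diffusive-flux and accumulation-time calculations of Sections 3.2 and 3.3 (equations (1)–(4)) can likewise be sketched in a few lines. Here the porosity and D0 are the values quoted above for CL30, whereas the Ba2+ gradient, excess-Ba concentration, peak thickness, and grain density are hypothetical inputs chosen only to demonstrate the unit handling, not measured values from the cores.

```python
import math

SECONDS_PER_YEAR = 3.156e7

def sediment_diffusion_coeff(d0_cm2_s, porosity):
    """Eq. (2): tortuosity-corrected Ds = D0 / (1 - ln(porosity**2)), returned in m2 yr-1."""
    d0_m2_yr = d0_cm2_s * 1e-4 * SECONDS_PER_YEAR
    return d0_m2_yr / (1.0 - math.log(porosity ** 2))

def diffusive_flux(porosity, ds_m2_yr, dc_dx_mmol_m4):
    """Eq. (1): Jx = -porosity * Ds * dC/dx, in mmol m-2 yr-1."""
    return -porosity * ds_m2_yr * dc_dx_mmol_m4

def excess_ba_inventory(c_excess_mmol_kg, grain_density_kg_m3, porosity, thickness_m):
    """Eq. (3): Ax = integral of Cx * rho * (1 - porosity) across the Ba peak."""
    return c_excess_mmol_kg * grain_density_kg_m3 * (1.0 - porosity) * thickness_m

phi = 0.69          # average porosity of core CL44, applied to all cores (Section 3.2)
d0 = 4.64e-6        # cm2 s-1, Ba2+ diffusion coefficient in free solution at 3.3 degC (CL30)

# Hypothetical inputs, for illustration only
dc_dx = -60.0       # mmol m-4 (Ba2+ decreasing upward towards the SMTZ)
c_excess = 2.2      # mmol kg-1 of excess Ba averaged over the peak (~300 mg kg-1)
rho_grain = 2650.0  # kg m-3, average grain density
thickness = 0.5     # m, thickness of the Ba peak

ds = sediment_diffusion_coeff(d0, phi)
jx = diffusive_flux(phi, ds, dc_dx)
ax = excess_ba_inventory(c_excess, rho_grain, phi, thickness)
tx = ax / abs(jx)   # eq. (4)
print(f"Ds = {ds:.2e} m2 yr-1, Jx = {jx:.2f} mmol m-2 yr-1, tx = {tx:.0f} yr")
```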
## 4. Results

### 4.1. General Geochemical Trends

The depth profiles of SO42- concentration showed kink-type features at all three cores (Figures 2–4 and Table 3). At site CL30, SO42- concentrations decreased gradually above a kink at ~3.5 mbsf and the gradient became steeper below that depth towards the SMTZ at ~4.7 mbsf (Figure 2).
In contrast, SO42- concentrations at sites CL44 and CL47 displayed near-seawater values in the upper ~2 mbsf above the kinks and then decreased sharply down to the SMTZ located at ~7 and ~6.8 mbsf, respectively (Figures 3 and 4). Ca2+ and Mg2+ concentrations showed similar trends, with a gradual decrease in the upper layers at core CL30 and near-seawater values above the kinks at cores CL44 and CL47. Ca2+ and Mg2+ concentrations declined sharply below the kinks due to ongoing carbonate precipitation and reached a minimum at the SMTZ (Figures 2–4 and Table 3). Concentrations of DIC and PO43- showed trends opposite to SO42-, being depleted within the upper layer and enriched below it, with maxima at the SMTZ (Figures 2–4 and Table 3). Moreover, CH4 concentrations at the three cores increased sharply below the SMTZ. The scatter in the CH4 contents was due to degassing during core retrieval. The DIC concentrations increased with depth and reached a maximum at the SMTZ, with opposite trends in δ13CDIC values (minimum values: -46.4‰ for CL30, -41.0‰ for CL44, and -38.8‰ for CL47) (Figures 2–4 and Table 3).

Figure 2. Measured (dots) and simulated (curves) depth profiles of core CL30. Down-depth concentrations of particulate organic carbon (POC), sulfate (SO42-), methane (CH4), phosphate (PO43-), dissolved inorganic carbon (DIC), calcium (Ca2+), magnesium (Mg2+), and δ13CDIC are shown.

Figure 3. Measured (dots) and simulated (curves) depth profiles of core CL44. Down-depth concentrations of POC, SO42-, CH4, PO43-, DIC, Ca2+, Mg2+, and δ13CDIC are shown.

Figure 4. Measured (dots) and simulated (curves) depth profiles of core CL47. Down-depth concentrations of POC, SO42-, CH4, PO43-, DIC, Ca2+, Mg2+, and δ13CDIC are shown.

Table 3. Concentrations and isotope ratios of various dissolved components at cores CL30, CL44, and CL47.
Depth (cmbsf) CH4 (mM) SO42- (mM) Ca2+ (mM) Mg2+ (mM) PO43- (μM) Ba2+ (μM) DIC (mM) δ13CDIC (‰, VPDB) CL30 55 0.0016 26.7 10.9 47.9 27.0 23.0 6.0 -20.4 110 0.0012 24.4 10.5 48.3 31.5 19.8 7.6 -26.0 170 0.0010 21.7 9.5 47.6 41.0 17.9 10.1 -31.9 230 0.0051 19.0 8.9 47.5 49.9 17.6 11.1 -35.4 290 0.0012 15.4 7.9 46.5 39.2 9.9 12.4 -38.0 350 0.0000 13.1 7.0 46.1 45.5 20.0 14.3 -40.5 410 0.0056 5.3 4.5 44.3 97.2 16.1 19.2 -45.7 470 0.8752 1.0 3.7 43.8 112 46.5 22.8 -46.4 530 3.2660 0.4 3.5 43.1 122 53.5 23.0 -40.0 590 0.1318 0.2 3.4 43.7 116 60.8 23.7 -35.7 CL44 10 0.0012 27.9 7.4 52.7 11.9 0.3 2.2 -10.9 30 0.0004 27.9 7.4 52.4 10.1 0.3 2.6 -10.5 50 0.0008 27.6 7.5 54.1 10.4 0.2 3.0 -12.1 70 0.0003 27.2 7.0 52.7 9.3 0.2 3.0 -14.8 90 0.0020 27.3 7.3 53.8 9.6 0.2 2.4 -10.9 110 0.0005 26.7 7.2 54.0 9.1 0.2 2.4 -11.6 130 0.0005 27.4 7.2 54.5 8.3 0.2 2.4 -11.9 150 0.0008 26.9 7.3 54.3 7.1 0.2 2.5 -14.1 170 0.0007 27.5 7.5 54.3 8.1 0.2 3.0 -13.1 190 0.0006 27.5 7.3 54.7 10.9 0.2 2.5 -12.1 210 0.0005 25.6 7.3 54.6 13.2 0.2 3.8 -17.2 230 0.0017 25.9 7.0 54.2 13.7 0.2 3.5 -18.5 250 0.0009 25.3 6.9 52.5 17.7 0.2 4.6 -20.4 270 0.0018 24.1 6.3 51.6 22.3 0.3 5.5 -24.0 290 0.0018 22.6 6.2 52.6 22.6 0.4 5.9 -23.1 310 0.0014 21.5 6.0 53.0 35.1 0.3 6.8 -25.0 330 0.0011 19.0 5.5 51.6 35.6 0.3 7.7 -28.8 350 0.0017 19.7 5.1 51.0 47.3 0.3 8.1 -24.3 370 0.0020 17.9 4.9 50.5 49.8 0.3 8.8 -30.1 390 0.0015 16.4 4.6 50.5 50.8 0.4 9.5 -31.2 410 0.0020 16.0 4.2 49.7 54.9 0.6 10.2 -31.9 430 0.0012 15.6 3.9 47.8 52.1 0.5 11.0 -33.4 450 0.0021 15.4 3.7 48.4 56.5 0.5 11.6 -35.1 470 0.0022 13.2 3.6 50.3 55.4 0.6 12.0 -35.7 490 0.0019 12.8 3.1 48.0 63.6 0.7 13.1 -35.8 510 0.0026 12.0 2.7 48.4 67.1 0.8 13.1 -39.9 530 0.0027 10.0 2.4 47.5 81.9 1.0 15.5 -39.1 550 0.0024 8.9 2.2 47.4 88.0 1.3 16.3 -39.3 570 0.0034 5.6 2.0 45.5 96.7 2.5 18.2 -40.5 590 0.0039 4.7 1.8 44.6 101 4.9 19.5 -41.0 610 0.0028 3.7 1.7 44.5 103 11.0 18.9 -39.7 630 0.0022 2.7 1.6 44.2 57.0 19.3 20.5 -40.5 650 0.0442 2.3 1.7 43.3 111 22.2 19.9 -39.6 670 0.5139 2.1 1.8 44.7 109 31.7 21.1 -37.1 690 1.1022 0.9 1.4 44.5 107 37.9 20.4 -37.4 710 2.5450 0.8 1.3 43.9 102 38.6 21.5 -36.1 730 0.0086 0.9 1.4 44.6 105 37.4 20 -36.8 750 1.1475 1.1 1.2 43.0 86.8 36.0 19.2 -33.3 CL47 55 0.0008 24.9 9.2 33.1 18.3 21.3 6.7 -21.1 110 0.0006 21.1 170 0.0004 26.0 10.9 39.7 16.4 13.6 6.8 -22.9 230 0.0008 28.2 290 0.0008 21.9 10.0 40.2 13.8 14.8 10.0 -28.5 350 0.0009 44.7 410 0.0009 16.5 8.8 39.8 43.1 13.9 13.7 -32.1 470 0.0006 12.7 7.8 40.4 66.4 16.9 15.9 -34.0 530 0.0006 9.3 6.5 40.0 65.9 13.7 18.2 -34.7 590 0.0007 5.4 4.9 39.7 66.4 25.9 20.6 -38.2 650 0.0773 1.1 4.5 39.8 76.3 52.0 24.2 -38.8 710 0.7947 1.2 4.6 40.6 74.5 58.5 23.5 -37.4 770 0.7692 1.4 70.2 22.1 -32.1Vertical profiles of CL30, CL44, and CL47 for porewater barium concentrations and sediment barium contents together with barium/aluminium (Ba/Al) ratios are shown in Figure5. Dissolved Ba2+ concentrations display maxima of 60.8, 38.6, and 58.5 μM below the SMTZ, respectively, and decreased upward towards the SMTZ (Figure 5). Bulk sediment Ba concentrations range from 306 to 957 mg kg-1 (Table S6) with averages of 461 mg kg-1 for CL30, 502 mg kg-1 for CL44, and 502 mg kg-1 for CL47. High Ba concentrations of bulk sediments at each core occur over narrow depth intervals (0.3–0.8 m) above the present SMTZ (Figure 5). Peak Ba concentrations within these zones reach 957, 741, and 790 mg kg-1 and appear at approximately 4.3, 5.9, and 6.3 mbsf at cores CL30, CL44, and CL47, respectively. 
The refractory solid-phase barium contents at these cores amount to 530, 550, and 590 mg kg-1, respectively, and are considered to represent the "background" levels of solid-phase barium [16]. Ba contents were normalized to Al in order to account for variations in lithology. Depth intervals with Ba contents higher than these "background" levels are referred to as "Ba fronts." At each core examined, the Ba fronts occur within 1.5 m above the depth of the current SMTZ (Figure 5). The distance between the peak Ba concentration and the depth of sulfate depletion is approximately 0.4 m at CL30, 1.1 m at CL44, and 0.5 m at CL47.

Figure 5. Concentration depth profiles of dissolved barium (Ba2+), sulfate (SO42-), and solid-phase total barium (Batotal), barium/aluminium ratios (Ba/Al), and diagenetic barium (shown as red peaks) for cores CL30 (a), CL44 (b), and CL47 (c). Blue bands mark the SMTZ. Pink dashed lines indicate the background barium contents based on the distribution of barium content. Barium contents above background represent diagenetic barite enrichments (red polygons).

POC contents at all sites did not follow a general downward trend, with average contents of 0.97% for CL30, 1.05% for CL44, and 1.05% for CL47 (Figures 2–4 and Table S6). The sediments in the study cores are mainly composed of silt and clay. At sites CL30 and CL44, the relative fractions of silt and clay are nearly constant with depth, and the sand fractions remain low with depth except for an elevated sand fraction at ~320 cm in CL30 (Figure S2). At site CL47, the sand fractions are low in the interval of 0–200 cm, followed by an increase in sand fraction with two peaks at depths of ~270 and ~430 cm. Below 500 cm, the sand fraction decreased to almost zero (Figure S2).

### 4.2. Timing of Authigenic Barite Front Accumulation

Dissolved barium fluxes towards the SMTZ were 1.58 mmol m-2 yr-1 for CL30, 1.54 mmol m-2 yr-1 for CL44, and 1.61 mmol m-2 yr-1 for CL47. The calculated times required for the formation of the barite fronts are about 3.2, 3.0, and 1.3 kyr for the three cores, respectively, using an average porosity of 0.69 taken from CL44 (Table S5). Varying the porosity from 0.65 to 0.75 yields barite front formation times ranging between 2.2 and 4.2 kyr for CL30 and between 0.8 and 1.6 kyr for CL47. Sensitivity tests of the background Ba content, Ba2+ fluxes, and porosity are shown in Figures S5 and S6.

### 4.3. Reaction-Transport Modeling

The modeled profiles and reaction rates are shown in Figures 2–4 and Table 4, respectively. The model parameters used to derive these results are listed in Tables S2–S4. The steady-state modeling reproduced the measured concentrations of SO42-, DIC, Ca2+, Mg2+, and PO43- at sites CL44, CL47, and CL30 above the kink, with obvious discrepancies between modeled and measured CH4 concentrations due to the aforementioned degassing during core recovery (Figures 2–4). At site CL30, the model failed to reproduce the concentration gradients of SO42-, DIC, Ca2+, Mg2+, and PO43- below the kink (~3.5 mbsf), which is likely caused by a transient condition not considered in the steady-state model.

Table 4. Depth-integrated simulated turnover rates and benthic methane fluxes based on the steady-state modeling.
| | CL30 | CL44 | CL47 | Unit |
| --- | --- | --- | --- | --- |
| FPOC: total POC mineralization rate | 18.8 | 55.2 | 58.1 | mmol m-2 yr-1 of C |
| FOSR: sulfate reduction via POC degradation | 7.6 | 23.9 | 25.1 | mmol m-2 yr-1 of SO42- |
| FME: methane formation via POC degradation | 3.6 | 3.7 | 4.0 | mmol m-2 yr-1 of CH4 |
| FDISS: gas dissolution | 28.4 | 73.3 | 84.7 | mmol m-2 yr-1 of CH4 |
| FAOM: anaerobic oxidation of methane | 30.1 | 74.3 | 85.0 | mmol m-2 yr-1 of CH4 |
| FCP-Ca: authigenic CaCO3 precipitation | 3.0 | 2.2 | 5.6 | mmol m-2 yr-1 of C |
| FCP-Mg: authigenic MgCO3 precipitation | 5.4 | 7.1 | 0 | mmol m-2 yr-1 of C |
| Sulfate consumed by AOM | 79.8 | 75.7 | 77.2 | % |
| Benthic flux of CH4 at SWI | 0.5 | 1.9 | 2.7 | mmol m-2 yr-1 of CH4 |
| Percentage of CH4 flux from depth | 88.0 | 95.0 | 95.3 | % |
| Percentage of CH4 consumed by AOM | 94.1 | 96.5 | 95.8 | % |

The sulfate concentration profile with a kink at site CL30 (Figure 2) could be explained by a recent increase in upward methane flux [9]. The linear extrapolation of the sulfate concentrations in the upper 3.5 m to zero sulfate concentration was taken as the initial condition for the non-steady-state model. Under this condition, the sulfate profile was fitted with a fixed CH4 concentration (67 mM) at the lower boundary, in equilibrium with the gas hydrate solubility under in situ salinity, temperature, and pressure. A sudden increase in CH4 concentration reproduces the observed SO42- concentration profile after running the model for ~85 yr (Figure 6). The increase in methane flux resulted in a prominent increase in the depth-integrated AOM rate from 30.1 mmol m-2 yr-1 (t = 0 yr) to 140 mmol m-2 yr-1 (t = 85 yr).

Figure 6. Evolution of the sulfate profile over time from the simulation of non-steady-state porewater profiles of core CL30.

The initial age of the organic matter was tuned until a good fit was obtained for the PO43- profile. The mean total depth-integrated rates of POC degradation were about 3 times higher at sites CL44 and CL47 (55.2 and 58.1 mmol m-2 yr-1) than at site CL30 (18.8 mmol m-2 yr-1) (Table 4). The rates of POC degradation through sulfate reduction (POCSR) were 7.6, 23.9, and 25.1 mmol m-2 yr-1 at cores CL30, CL44, and CL47, respectively. In contrast to the relatively low rates of POCSR, AOM dominated the sulfate consumption, with rates of 30.1, 74.3, and 84.7 mmol m-2 yr-1 for CL30, CL44, and CL47, respectively. The AOM rates were mainly sustained by an external methane source, and methanogenesis contributed only a negligible amount of methane (Table 4). AOM consumes almost all the CH4, with benthic CH4 fluxes of 0.49, 2.0, and 2.7 mmol m-2 yr-1 at sites CL30, CL44, and CL47, respectively.
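The percentages in Table 4 follow directly from the depth-integrated fluxes listed in the same table. The short sketch below assumes the straightforward ratio definitions given in the comments (an assumption on our part, since Table 4 does not spell them out); the results match the tabulated percentages to within rounding.

```python
# Depth-integrated fluxes from Table 4 (mmol m-2 yr-1): F_OSR, F_ME, F_DISS, F_AOM
table4 = {
    "CL30": (7.6, 3.6, 28.4, 30.1),
    "CL44": (23.9, 3.7, 73.3, 74.3),
    "CL47": (25.1, 4.0, 84.7, 85.0),
}

for core, (f_osr, f_me, f_diss, f_aom) in table4.items():
    so4_by_aom = 100.0 * f_aom / (f_aom + f_osr)        # share of sulfate consumed by AOM
    ch4_from_depth = 100.0 * f_diss / (f_diss + f_me)   # share of CH4 supplied by gas dissolution at depth
    ch4_by_aom = 100.0 * f_aom / (f_diss + f_me)        # share of the total CH4 pool consumed by AOM
    print(f"{core}: SO4 via AOM {so4_by_aom:.1f}%, CH4 from depth {ch4_from_depth:.1f}%, "
          f"CH4 consumed by AOM {ch4_by_aom:.1f}%")
```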
## 5. Discussion

### 5.1. Formation Mechanisms of the Nonlinear Porewater Profiles

The sulfate concentration profiles of the porewater in marine sediments depend on the availability of labile organic matter amenable to sulfate reducers, the diffusive/advective methane flux, and depositional conditions [9, 59–64]. Combinations of these factors can result in linear, kink, concave-up, concave-down, and sigmoidal (S-shaped) sulfate concentration trends in marine sediments [9].

The porewater profiles of the three study cores exhibit kink-type features. Plausible mechanisms for the occurrence of a kink-type profile include (1) irrigation and seawater intrusion due to biological, physical, and hydrological processes; (2) changes in sedimentation rate or porosity due to depositional events; and (3) changes in methane flux and upward advection of fluid [14]. Bioirrigation has been shown to occur generally on a decimeter scale in surface sediments [65, 66]. In fact, no macroorganisms were observed in the study cores below the upper few centimeters of sediment. The upper two to three meters of the study cores are dominated by fine-grained hemipelagic sediments consisting mainly of silty clay, without any discernible abnormal deposition (Figure S2). Although deep-water turbidity current channel and fan systems are well developed in the study region [67], the homogeneous grain size distributions in cores CL44 and CL47 reveal that the sediment above the sulfate kinks was not impacted by turbidites, which are typically characterized by upward grading in grain size. The C-M plot also suggests the absence of turbidites in the study cores (Figure S2 [68]). Moreover, by comparing the depth profiles of CaCO3 content in CL44 and CL47 with that in an adjacent core (SO49-37KL) with established Marine Isotope Stages, we found that the upper ~2 m of sediments in CL44 and CL47 represent normal hemipelagic background deposition during the Holocene (Figure S3) [69, 70]. The relatively constant ratios of Ti/Al, Si/Al, and Zr/Rb above the kinks indicate a stable input of detrital material (Figure S4). In contrast, the layers in the interval of ~1.4 to ~4.2 mbsf in CL44 and ~1.8 to ~5 mbsf in CL47, which exhibit high Si, Ti, and Zr/Rb contents and coarser grain sizes (Figure S4), suggest an elevated input of detrital material during sea-level lowstands [71, 72]. In addition, the flat seafloor topography in the study area precludes abrupt depositional events such as landslides (Figure 1). Therefore, it is unlikely that the irrigation-like feature in CL44 and CL47 was caused by mass-transport deposits [44]. Furthermore, there is no indication of upward fluid advection at sites CL44 and CL47.

We argue that the irrigation-like porewater profiles most probably result from bubble irrigation by free gas rising through escape tubes [7, 12, 51]. Such features were observed at the nearby Haima cold seeps and attributed to bubble irrigation or a recent increase in methane flux [33]. Moreover, BSRs and acoustic blanking, which are indicative of free gas accumulation, were identified in the study area (Fang Y., unpublished data).
Hence, gas bubble irrigation is the most likely mechanism to explain the observed profiles at cores CL44 and CL47.

At core CL30, the sediments consist of homogeneous silty clay without discernible abnormal deposition, and the sulfate concentrations decrease gradually without maintaining seawater-like values above the kink at 3.5 mbsf (Figure 2). We thus hypothesize that the kink in the sulfate profile at core CL30 results from a (sub)recent increase in the upward methane flux, similar to the scenarios reported in the Sea of Marmara, the continental margin offshore Pakistan, the slope area south of Svalbard, the Niger Delta, the southern SCS, and elsewhere [10–13, 44]. A simplified numerical model exercise, assuming a diffusional porewater system with POCSR and AOM as the only biogeochemical reactions, was used to demonstrate this scenario (Figure 6). The assumption of diffusive transport of porewater species is warranted because porewater solute distributions have been suggested to be dominated by diffusion even where free gas transport and fluid advection exist [14, 73].

The current barite front is located at about 4.2-6.4 mbsf, very close to the current SMTZ (4.7-7 mbsf), indicating that the barite front might have formed between the recent past and the present day, induced by a recent enhancement of methane flux [11]. Indeed, the measured SO42- concentration profile can be reproduced after a sudden increase in CH4 concentration lasting for ~85 yr. On the other hand, based on the calculated diffusive Ba2+ fluxes and the depth-integrated Ba contents, the time required to form the observed authigenic barite front above the current SMTZ is about 2.2-4.2 kyr for CL30, given the uncertainties in porosity. The difference between the durations estimated by these two approaches may suggest that the barite front was not a result of the recent increase in methane flux that induced the kink-type sulfate profile. Instead, it is more likely that the SMTZ has experienced several fluctuations in depth, considering the episodic pulses of upward methane flux that have occurred in this area, as shown by previous studies [25, 32, 33]. Such a decoupled record between sediments and porewaters is commonly observed at cold seeps [74–77] and is considered to reflect variations in methane flux and the resulting shifts of the SMTZ in the sedimentary column [74]. Observations and numerical modeling suggest that porewater geochemical signatures respond on timescales of months to centuries, much faster than the accumulation of authigenic barite deposits on timescales of decades to hundreds of thousands of years [11, 12, 14, 16–19, 74]. On the whole, our results suggest that combining porewater data with sedimentary barite front records may provide important clues for a better understanding of the evolution of methane seepage.

### 5.2. Methane-Related Carbon Cycling and Source of Methane

Based on the simulation results derived from the steady-state modeling, AOM consumed ~80%, 76%, and 77% of the sulfate in CL30, CL44, and CL47, respectively. AOM thus acts as an efficient barrier preventing methane from being released into the water column at the studied cores. This is supported by the low δ13CDIC values at the SMTZs, which are mainly derived from methane.
AOM increases porewater alkalinity by producing bicarbonate and results in the precipitation of authigenic carbonates, as shown by the decrease in Ca2+ and Mg2+ concentrations with depth (Figures 3–6).

In addition, the δ13CDIC values below the SMTZ become more positive than those at the SMTZ. This reversal in δ13CDIC below the SMTZ is caused by the generation of 13C-enriched DIC via local methanogenesis in the methanogenic zone [63, 78]. The 13C-enriched DIC migrates into the SMTZ from the methanogenic zone and "dilutes" the 12C pool of DIC in the porewater. Thus, in a closed system, DIC generated by local methanogenesis is an important source of DIC in the carbon budget within the SMTZ [78, 79].

Based on the modeled methane turnovers, the depth-integrated AOM rates at cores CL30, CL44, and CL47 are about 8 to 21 times the in situ methanogenesis rates (Table 4). Accordingly, the relative proportions of external methane contributing to the total methane pool are 88%, 95%, and 95% at cores CL30, CL44, and CL47, respectively. This indicates that the majority of the methane fuelling AOM at the SMTZ was sourced from subsurface sediments. There are two general pathways for producing methane in marine sediments: microbial methane is generated via CO2 reduction or the fermentation of reduced carbon substrates (e.g., acetate and methanol) [80], whereas thermogenic methane is formed via thermal cracking of organic matter and/or heavy hydrocarbons [81]. The δ13C values of these two types of methane are generally distinct: δ13C values of microbial methane typically range from −50‰ to −110‰ [80], whereas those of thermogenic methane range from −30‰ to −50‰ [81].

Because δ13C values of headspace methane in the sediments are not available in the study area, porewater DIC content and δ13CDIC are utilized to constrain the origin of the methane. Generally, porewater DIC in marine sediments is mainly derived from (1) DIC diffusing from the overlying seawater into the sediments or seawater DIC trapped within the sediments during burial, (2) DIC generated by the degradation of sedimentary organic matter, (3) DIC produced by AOM, and (4) residual DIC derived from methanogenesis [82, 83]. In order to obtain the carbon isotopic composition of DIC derived from external methane, we applied a simple four-end-member mixing model. The four end-members are (1) seawater-derived DIC trapped within sediments during burial (SW), (2) DIC produced by POCSR, (3) DIC derived from external methane (EM) via AOM, and (4) DIC generated by in situ methanogenesis (ME). Note that methane produced via local methanogenesis was assumed to be completely recycled by AOM; as a result, the carbon isotopic composition of DIC produced by local methanogenesis is identical to that of organic matter (OM) [82–84]. In a closed system, the δ13C balance of the porewater DIC pool at the SMTZ can be expressed as

$$\delta^{13}\mathrm{C_{ex}} = X_{\mathrm{SW}}\,\delta^{13}\mathrm{C_{SW}} + X_{\mathrm{OSR}}\,\delta^{13}\mathrm{C_{OM}} + X_{\mathrm{AOM}}\,\delta^{13}\mathrm{C_{EM}} + X_{\mathrm{ME}}\,\delta^{13}\mathrm{C_{OM}} \quad (9)$$

where X is the proportion of DIC contributed to the total DIC pool and the subscripts SW, OSR, AOM, and ME refer to DIC derived from seawater, organic matter degradation, external methane, and local methanogenesis, respectively. The XSW values are estimated as the typical seawater DIC concentration (2.1 mM) divided by the DIC concentration at the SMTZ, and δ13CSW is assumed to be 0‰. The δ13C value of sedimentary organic matter in SCS sediments (−20‰; [85]) is used for δ13COM.
The overall δ13C of DIC derived from methanogenesis is equal to δ13COM, assuming that methane produced by local methanogenesis was completely converted to DIC by AOM [84]. The contribution fractions of OSR, AOM, and local methanogenesis, denoted XOSR, XAOM, and XME, are calculated from the steady-state modeling. The δ13Cex can be obtained from a regression of porewater δ13CDIC × DIC vs. DIC (Figure 7). The regression is commonly linear in seep-impacted sediments, thus providing a well-defined δ13C of the DIC supplied to the porewater [82, 86, 87]. The δ13Cex values and the contribution fractions estimated from the model are listed in Table 5. The estimated δ13CEM values in the shallow sediments are -74.1‰ (CL30), -75.4‰ (CL44), and -66.7‰ (CL47), suggesting that the external methane migrating into the shallow sediments is microbial in origin [83, 88]. The absence of higher hydrocarbons in headspace gas samples also supports a microbial origin of the methane in the study area.

Figure 7. Plots of DIC vs. DIC × δ13CDIC. The δ13Cex values were calculated using the linear regression of DIC × δ13CDIC vs. DIC for CL30 (a), CL44 (b), and CL47 (c).

Table 5. Fractions of DIC from different sources contributing to the total DIC pool and δ13C values of the different sources.

| Core ID | δ13CSMTZ (‰, VPDB) | δ13Cex (‰, VPDB) | δ13CSW (‰, VPDB) | δ13COM (‰, VPDB) | XSW (%) | XME (%) | XOSR (%) | XAOM (%) | δ13Cmethane (‰, VPDB) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CL30 | -46.4 | -48.1 | 0 | -20 | 9.8 | 6.6 | 28.0 | 55.5 | -74.1 |
| CL44 | -41.0 | -46.1 | 0 | -20 | 9.8 | 5.3 | 34.3 | 50.6 | -75.4 |
| CL47 | -38.8 | -43.1 | 0 | -20 | 8.7 | 5.2 | 32.9 | 53.1 | -66.7 |

Previous studies have suggested that microbial methane was the main hydrocarbon source at the Haima cold seeps, with minor contributions of oil-derived compounds and pyrolysis gas [32, 33, 89, 90]. Gas chimney structures, which are well developed around the Haima cold seeps, might serve as conduits for the upward migration of biogenic gas from the underlying free gas reservoir beneath the GHSZ to the shallow sediments [29, 42]. This suggests that the microbial methane at the study sites might be derived from an underlying free gas reservoir trapped beneath the GHSZ.

### 5.3. Implications of the Time Constraint on Methane Seepage

Based on the diffusive dissolved barium flux and the excess barium content in the sediments, the time needed to form the observed barite enrichments just above the current SMTZ is estimated to be about 3 kyr for CL44 and 0.8-1.6 kyr for CL47, given the uncertainties in porosity. These results suggest that the SMTZ has been fixed at its current sediment depth for at least several thousand years at these sites. The irrigation-type sulfate profiles are possibly maintained by continuous mixing of seawater into the sediment over these time periods, as in the pockmark sediments of the Congo Fan [74]. Furthermore, the depth of the SMTZ is speculated to have fluctuated in response to variations in methane flux, as suggested by the mismatch between the estimated duration of barite enrichment and the recent increase in methane flux at CL30. Overall, our results show that the methane flux has been fluctuating over the last hundreds to thousands of years in the vicinity of the Haima cold seeps.

In fact, methane seepage around the Haima cold seeps has shown distinct periodicity over the past several thousand years. Radiocarbon ages of bivalve shells suggest that a major seepage event occurred during the period of 6.1 to 5.1 ka B.P., followed by a subordinate seepage event spanning 3.9 to 2.9 ka B.P. at the Haima cold seeps [25].
The widespread occurrence of dead bivalves on the seafloor reflects a decline in present-day seepage intensity [25]. Moreover, modeling of porewater profiles at the Haima cold seeps predicts that gas hydrate formation in the seepage center started at least 150 yr B.P. and that the subsequent sealing by gas hydrates favored the lateral migration of methane-rich fluids in the coarser, more permeable interval [33]. Sedimentation dynamics, including sediment instabilities and mass wasting, may trigger the destabilization of the gas hydrate reservoir and the resulting occurrence of methane seepage. The evolution and fate of methane seepage are also considered to be affected by local fluid flow dynamics and the associated migration of both free gas and methane-rich fluids along fractures, as well as by the redirection of gas supply from the reservoir due to pore space clogging by gas hydrate in shallow sediments [25, 33]. The exact mechanism of the changes in methane flux around the Haima cold seep area is beyond the scope of this study. Nevertheless, our quantitative study provides some constraint on the duration of methane seepage and may have implications for understanding the evolution of methane seepage in the petroliferous Qiongdongnan Basin.
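Returning to Section 5.2, equation (9) can be rearranged to solve for δ13CEM from the fitted δ13Cex and the modeled DIC fractions, which provides a quick consistency check on Table 5. The minimal sketch below uses only values listed in Table 5 and recovers the reported δ13CEM of roughly −74‰, −75‰, and −67‰.

```python
# Equation (9) rearranged: d13C_EM = (d13C_ex - X_SW*d13C_SW - (X_OSR + X_ME)*d13C_OM) / X_AOM
D13C_SW = 0.0      # per mil, seawater DIC (Section 5.2)
D13C_OM = -20.0    # per mil, sedimentary organic matter in the SCS

# d13C_ex and DIC fractions (converted from % to fractions) as listed in Table 5
cores = {
    "CL30": (-48.1, 0.098, 0.066, 0.280, 0.555),
    "CL44": (-46.1, 0.098, 0.053, 0.343, 0.506),
    "CL47": (-43.1, 0.087, 0.052, 0.329, 0.531),
}

for core, (d13c_ex, x_sw, x_me, x_osr, x_aom) in cores.items():
    d13c_em = (d13c_ex - x_sw * D13C_SW - (x_osr + x_me) * D13C_OM) / x_aom
    print(f"{core}: d13C_EM = {d13c_em:.1f} per mil")   # ~ -74.2, -75.5, -66.8 (cf. Table 5)
```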
## 6. Conclusions

This study aimed to understand the methane sources and turnover, as well as to provide some constraints on the timing of methane seepage, to the west of the “Haima cold seeps.” The steady-state reaction-transport modeling of SO42-, CH4, DIC, PO43-, Ca2+, and Mg2+ in CL44 and CL47 suggests that gas bubble transport may lead to the irrigation-like feature in the upper 2 m and relatively high AOM rates (74.3 mmol m−2 yr−1 for CL44 and 85.0 mmol m−2 yr−1 for CL47). The time required for the enrichment of the authigenic barium fronts slightly above the current SMTZ is approximately 3 kyr for CL44 and 0.8-1.6 kyr for CL47. In contrast, a recent increase in methane flux (starting ~85 yr ago) is the likely cause of the kink at 3.5 mbsf in the sulfate profile of CL30, as demonstrated by the transient-state modeling. The estimated time required for the formation of the diagenetic barium peak just above the current SMTZ is 2.2-4.2 kyr at this core. The discrepancy between the time estimates constrained by the two different approaches suggests that the position of the SMTZ has possibly fluctuated due to variation in methane flux at this site. In addition, based on the four-end-member DIC mixing calculation, the δ13C values of the external methane in cores CL30, CL44, and CL47 are -74.1‰, -75.4‰, and -66.7‰, respectively.
This is indicative of the biogenic origin of external methane from an underlying reservoir. Our results suggest that methane seepage exists in a broader area in the vicinity of the “Haima cold seeps” and the methane fluxes may have fluctuated frequently for the last several hundreds to thousands of years. --- *Source: 1010824-2019-08-05.xml*
# Regulation Effect of Zinc Fingers and Homeoboxes 2 on Alpha-Fetoprotein in Human Hepatocellular Carcinoma

**Authors:** Shao Wei Hu; Meng Zhang; Ling Xue; Jian Ming Wen
**Journal:** Gastroenterology Research and Practice (2013)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2013/101083

---

## Abstract

Aim. To investigate the relationship between alpha-fetoprotein and zinc fingers and homeoboxes 2 in hepatocellular carcinoma. Materials and Methods. The expression of zinc fingers and homeoboxes 2, nuclear factor-YA, and alpha-fetoprotein mRNA in 63 hepatocellular carcinoma tissues was detected by reverse transcriptase-polymerase chain reaction and compared with the clinical parameters of the patients. Zinc fingers and homeoboxes 2 was then selectively silenced in HepG2 cells by RNA interference. Results. Alpha-fetoprotein mRNA expression was detected in 60.3% of hepatocellular carcinoma cases. Zinc fingers and homeoboxes 2 mRNA expression (36.5%) was significantly negatively correlated with serum alpha-fetoprotein concentration and mRNA expression. A strong positive correlation was found between zinc fingers and homeoboxes 2 and nuclear factor-YA mRNA expression (42.9%), while the latter was negatively correlated with serum alpha-fetoprotein concentration and mRNA expression. Treatment with zinc fingers and homeoboxes 2 small interfering RNA led to 85% and 83% silencing of zinc fingers and homeoboxes 2 mRNA and protein expression and to 60% and 61% reduction of nuclear factor-YA mRNA and protein levels in HepG2 cells, respectively. Downregulation of zinc fingers and homeoboxes 2 also induced a 2.4-fold increase in both alpha-fetoprotein mRNA and protein levels. Conclusions. Zinc fingers and homeoboxes 2 can regulate alpha-fetoprotein expression via the interaction with nuclear factor-YA in human hepatocellular carcinoma and may be used as an adjuvant diagnostic marker for alpha-fetoprotein-negative hepatocellular carcinoma.

---

## Body

## 1. Introduction

Alpha-fetoprotein (AFP) is one of the major serum proteins in fetal mammals. Its concentration dramatically decreases after birth and remains at a low basal level in adults [1]. Olsson et al. found that the average AFP level in BALB/cJ mice is about 10-fold higher than that of the controls (C3H/He and BALB/c/BOM) 9-10 weeks postnatally [2]. The postnatal AFP level in BALB/cJ mice is controlled by a single recessive Mendelian gene, previously named raf (regulation of alpha-fetoprotein) and renamed Afr1 (alpha-fetoprotein regulator 1) [3]. Afr1 governs postnatal AFP mRNA levels in adult mouse liver [4, 5]. The AFP promoter is the target of Afr1-mediated postnatal repression [6]. Recently, Afr1 was identified as zinc fingers and homeoboxes 2 (ZHX2). In adult BALB/cJ mice, a retrotransposon insertion in ZHX2 reduces its mRNA expression, resulting in an elevated expression of AFP [7].

Our previous studies have demonstrated reduced ZHX2 expression in hepatocellular carcinoma (HCC) [8]. This expression silencing involves hypermethylation of the ZHX2 gene promoter [9]. Shen et al. [10] have also shown that the expression level of AFP in HepG2 cells is remarkably reduced by transfection of a ZHX2 vector into the cells. In contrast, using an siRNA inhibition technique, AFP is derepressed in LO2 and SMMC7721 cells when ZHX2 levels are reduced.
ZHX2 repression is governed by the AFP promoter and requires intact hepatocyte nuclear factor 1 (HNF1) binding sites.

Nuclear factor-Y (NF-Y) is a ubiquitous transcription factor comprised of three subunits: NF-YA, NF-YB, and NF-YC [11]. The YB and YC subunits form a tightly bound dimer that presents a complex surface for the subsequent association of the YA subunit [12, 13]. The resulting trimer binds to an inverted CCAAT box, stimulating the transcription of a number of genes [14]. The NF-YA subunit contains two activation domains: a glutamine-rich region and a serine/threonine-rich region. ZHX2 residues 263–497 interact with the latter region. Immunoprecipitation analysis detected an interaction between ZHX2 and NF-YA in human embryonic kidney cells. Moreover, ZHX2 regulates NF-YA-regulable genes such as cdc25C [15]. Thus, ZHX2 can form homodimers or heterodimers with other ZHX members [16], then interact with the activation domain of NF-YA, and repress transcription of its regulable genes [17].

In fact, the AFP gene can be reactivated in human HCC [18], and serum AFP levels are typically markedly elevated in HCC patients [19]. ZHX2 promoter hypermethylation could cause a low mRNA expression of ZHX2 in HCC [9]. However, the association between ZHX2, NF-YA, and AFP expression in HCC has not been documented. It is also not clear whether ZHX2 regulates AFP gene expression by interacting with NF-YA in HCC. In this paper, we studied ZHX2, NF-YA, and AFP expression in human HCC tissues by reverse transcriptase-polymerase chain reaction (RT-PCR). We also used RNA interference (RNAi) technology to selectively silence ZHX2 in HepG2 cells in order to clarify the possible regulation of AFP expression.

## 2. Materials and Methods

### 2.1. HCC Tissues

The study was approved by the Ethics Committee of The First Affiliated Hospital of Sun Yat-sen University, Guangzhou, China, and informed consent was obtained from the participating patients. Clinical samples were obtained by hepatectomy from 63 HCC patients (53 males, 10 females; age range 20–70 years; average age 47 years) at the First Affiliated Hospital of Sun Yat-sen University. The collected cancer tissues were immediately frozen in liquid nitrogen and stored at −80°C for RT-PCR analysis. None of the cases received adjuvant therapy before operation.

The tumor grade was described according to Edmondson and Steiner [20] and classified as grade I (2 cases), grade II (41 cases), grade III (18 cases), and grade IV (2 cases). Hepatitis B surface antigen (HBsAg) was positive in the serum of all patients examined (56/56); in the remaining 7 cases the test was not performed. The tumor size was less than 5 cm in 18 cases and larger than 5 cm in 45 cases. Only 13 (20.7%) of the 63 cases had a normal serum AFP concentration (<20 μg/L). The cut-offs for normal AFP level (20 μg/L) and tumor size (5 cm) were according to previous studies [15, 18]. In addition, there were 28 cases with metastases, involving the portal vein (17 cases), lymph nodes (4 cases), extrahepatic bile duct (2 cases), adrenal gland (2 cases), stomach (1 case), and peritoneal dissemination (2 cases). Cirrhosis was observed in 32 of the 63 adjacent nontumorous tissues.

### 2.2. RNA Preparation and RT-PCR

Total RNA was extracted using TRIzol Reagent (Invitrogen, Carlsbad, CA, USA) according to the manufacturer’s guidelines. The concentration and purity of RNA were determined by measuring the absorbance at 260 and 280 nm.
The RNA was dissolved in diethylpyrocarbonate-treated water to a final concentration of 1 μg/μL.

Total RNA (5 μg) was reverse-transcribed into first-strand cDNA at 42°C for 1 hour in 20 μL reaction mixtures consisting of oligo(dT)18 primer (0.5 μg), RiboLock ribonuclease inhibitor (20 units), 10 mM dNTP mix (2 μL), and RevertAid M-MuLV Reverse Transcriptase (200 units) (Fermentas Life Sciences, European Union). All cDNAs were then subjected to amplification with primers for ZHX2, AFP, NF-YA, and glyceraldehyde-3-phosphate dehydrogenase (GAPDH); the latter served as an internal standard. The primers spanned intron/exon boundaries (Table 1). Prior to the amplification of the experimental samples, the amount of cDNA in all of the samples was equalized. In addition, optimal conditions including Mg2+ concentration and annealing temperature for each set of primers were determined. Subsequently, the number of PCR cycles was optimized for linear amplification. PCR was carried out in a GeneAmp PCR System 9600 (Perkin Elmer, Foster City, CA, USA). The PCR products were analyzed on an 8.0% acrylamide gel, and images of the silver nitrate-stained bands were obtained with a Nikon E4500 digital camera (Nikon Corp., Tokyo, Japan). As a negative control, the cDNA template was omitted from the reaction. The RT-PCR was performed at least twice in independent experiments.

Table 1: Primer sequences for RT-PCR analysis.

| Gene | Primer sequences (5′-3′) | Annealing temperature (°C) | Cycle numbers | Amplified product (bp) |
|---|---|---|---|---|
| ZHX2 | Sense: GGTAGCGACGAGAACGAG; Antisense: AGGACTTTGGCACTATGAAC | 58 | 34 | 389 |
| NF-YA | Sense: GAGTCTCGGCACCGTCAT; Antisense: TGCTTCTTCATCGGCTTG | 57 | 34 | 117 |
| AFP | Sense: GTTGCCAACTCAGTGAGGAC; Antisense: GAGCTTGGCACAGATCCTTA | 59 | 28 | 240 |
| GAPDH | Sense: GCTGAGAACGGGAAGCTTGT; Antisense: GCCAGGGGTGCTAAGCAGTT | 58 | 30 | 299 |

AFP: alpha-fetoprotein, bp: base pair, GAPDH: glyceraldehyde-3-phosphate dehydrogenase, NF-Y: nuclear factor-Y, and ZHX2: zinc fingers and homeoboxes 2.

### 2.3. Cell Culture

Human hepatocellular carcinoma cells (HepG2) were cultured at 37°C in a humidified incubator with a 5% CO2 and 95% air atmosphere in RPMI 1640 (Gibco BRL, Grand Island, NY, USA) supplemented with 10% fetal calf serum (FCS), 2 mmol/L L-glutamine, 100 μg/mL streptomycin, and 100 IU/mL penicillin (Hyclone, Bio-Check Laboratories Ltd., USA).

### 2.4. Small Interfering RNA (siRNA) and Transfection

The sequences of the human ZHX2-specific siRNA were 5′-GACACAUUAGGACACGUCAdAdA-3′ (sense) and 5′-dAdACUGUGUAAUCCUGUGCAGU-3′ (antisense). As a control siRNA, we used a corresponding nonsilencing siRNA with the sequences 5′-GACACAGAUACGAUCGUCAdAdA-3′ (sense) and 5′-dAdACUGUGUCUAUGCUAGCAGU-3′ (antisense). All synthetic RNA oligonucleotides were synthesized and purified at Ribobio (Guangzhou, China).

One day before transfection, HepG2 cells were seeded at a density of 3 × 10^5 cells/well in complete medium without antibiotics in 12-well plates. The siRNA (either ZHX2 siRNA or control siRNA) was diluted in a final volume of 100 μL of serum-free Opti-MEM (Gibco) medium. In a separate tube, 2 μL Lipofectamine 2000 (Invitrogen, Carlsbad, CA, USA) was mixed into a final volume of 100 μL of serum-free Opti-MEM medium per well and incubated for 5 minutes at room temperature. The two mixtures were then combined gently and incubated for another 20 minutes, and the complexes were added to the wells containing cells and medium. The final concentration of siRNA was 50 nM. Cells were subsequently cultured for 48 hours before further analysis (RT-PCR and Western blot).
### 2.5. Protein Preparation and Western Blot

HepG2 cells were lysed for 30 minutes on ice with lysis buffer (RIPA, Shenerg Biocolor, China). Protein concentrations were determined using the bicinchoninic acid protein assay. Equal amounts of cellular protein were separated by 10% sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and transferred to polyvinylidene difluoride membranes. The membranes were incubated in blocking buffer consisting of Tris-buffered saline (TBS) containing 5% nonfat dry milk for 1 hour at room temperature. The membranes were then incubated with primary antibodies against ZHX2 (1:4000 dilution, ABNOVA Corporation, Taiwan), NF-YA (1:200 dilution, Santa Cruz, CA, USA), AFP (1:200 dilution, NeoMarkers, Fremont, CA, USA), and beta-actin (1:200 dilution, Boster, China). A horseradish peroxidase-labeled anti-mouse secondary antibody (1:1000 dilution, DAKO, Carpinteria, CA, USA) was applied to the blots. After incubation with the electrochemiluminescence (ECL; Applygen Technologies Inc.) reagent, chemiluminescence signals were recorded on X-ray film.

### 2.6. Statistical Analysis

Statistical differences were evaluated using the χ2 test performed with SPSS 11.5 for Windows software. A P value less than .05 for each test was considered statistically significant.
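For readers who want to reproduce the kind of comparison described in Section 2.6, the sketch below (Python with SciPy, rather than the SPSS software actually used; function and variable names are ours) runs a χ2 test on a 2×2 contingency table and derives an odds ratio with a Woolf 95% confidence interval. The worked example uses the ZHX2-by-serum-AFP counts reported later in Table 2.

```python
# Minimal sketch (not the authors' analysis; the paper used SPSS 11.5):
# chi-square test and odds ratio with a 95% CI for a 2x2 contingency table.
import math

from scipy.stats import chi2_contingency


def two_by_two_stats(a, b, c, d):
    """a/b: positive/negative counts in group 1; c/d: the same in group 2."""
    chi2, p, _, _ = chi2_contingency([[a, b], [c, d]])   # Yates-corrected by default
    odds_ratio = (a * d) / (b * c)                        # group 1 vs. group 2
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # Woolf SE of ln(OR)
    ci = tuple(math.exp(math.log(odds_ratio) + z * se_log_or) for z in (-1.96, 1.96))
    return chi2, p, odds_ratio, ci


# Worked example with the ZHX2 mRNA counts from Table 2: 14 of 50 cases with
# serum AFP > 20 ug/L were ZHX2-positive, versus 9 of 13 cases with AFP < 20 ug/L.
chi2, p, or_, (lo, hi) = two_by_two_stats(a=14, b=36, c=9, d=4)
print(f"chi2 = {chi2:.2f}, P = {p:.3f}, OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
# Prints OR = 0.17 with a 95% CI of about 0.05-0.65, as reported in Table 2.
```

The odds ratio and its confidence interval match the values reported in Table 2 (.17, .05–.65); the exact P value depends on whether a continuity correction is applied, which SciPy uses by default for 2×2 tables.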
The membranes were incubated in blocking buffer consisting of Tris-buffered saline (TBS) containing 5% nonfat dry milk for 1 hour at room temperature. The membranes were then incubated with the primary antibodies against ZHX2 (1 : 4000 dilution, ABNOVA Corporation, Taiwan), NF-YA (1 : 200 dilution, Santa Cruz, CA, USA), AFP (1 : 200 dilution, NeoMarkers, Fremont, CA, USA), and beta-actin (1 : 200 dilution, Boster, China). Horseradish peroxidase-labeled anti-mouse secondary antibody (1 : 1000 dilution, DAKO, Carpinteria, CA, USA) was applied onto the blots. After incubation with the electrochemiluminescence (ECL, Applygen Technologies Inc.), reagent, immunochemiluminescence signals were recorded on X-ray film. ## 2.6. Statistical Analysis Statistical differences were evaluated usingχ2 test performed with SPSS11.5 for Windows software. A P value less than .05 for each test was considered statistically significant. ## 3. Results ZHX2 mRNA expression was detected in 23 (36.5%) of 63 HCC tissues (Figure1). ZHX2 expression rate (69.2%) with less than 20 μg/L AFP concentration in serum was significantly higher than that (28%) with more than 20 μg/L serum AFP concentration (P=.009, OR=.17). Statistically, ZHX2 expression was not significantly associated with age, tumor size, cirrhosis, grading, and metastasis (Table 2).Table 2 Relationship between ZHX2 mRNA expression and clinicopathological parameters in HCC tissues. Parameter + (%) ZHX2 mRNAP value OR (95% CI) Total 23 (36.5) AFP (μg/L) <20 9 (69.2) .009 .17 (.05–.65) >20 14 (28) Tumor size (cm) ≤5 7 (38.9) .80 .87 (.28–2.68) >5 16 (35.6) Background liver Without cirrhosis 12 (38.7) .72 .83 (.30–2.32) With cirrhosis 11 (34.4) Grade I-II 15 (34.9) .70 1.24 (.42–3.71) III-IV 8 (40) Metastasis Without 15 (42.9) .24 .53 (.185–1.54) With 8 (28.6) CI: confidence interval, HCC: hepatocellular carcinoma, OR: odds ratio, and ZHX2: zinc fingers and homeoboxes 2.Figure 1 expression of zinc fingers and homeoboxes 2 (ZHX2), nuclear factor-YA (NF-YA), and alpha-fetal protein (AFP) mRNA in hepatocellular carcinoma (HCC) tissues. The target mRNA expression was detected by RT-PCR analysis. The expression of glyceraldehyde-3-phosphate dehydrogenase (GAPDH) served as an internal control. In cases 1, 8, 15, 47, and 49, ZHX2 and NF-YA mRNAs were detected, but AFP mRNA expression was not observed. In cases 5, 12, 46, and 48, ZHX2 and NF-YA mRNAs were not detected, but AFP mRNA expression was observed.NF-YA mRNA expression was detected in 27 (42.9%) of 63 HCC tissues (Figure1). It was significantly negatively correlated with serum AFP concentration (P=.005, OR=.16). NF-YA expression was not significantly associated statistically with age, tumor size, cirrhosis, grading, and metastasis (Table 3).Table 3 Relationship between NF-YA mRNA expression and clinicopathological parameters in HCC tissues. Parameter + (%) NF-YA mRNAP value OR (95% CI) Total 27 (42.9) AFP (μg/L) <20 10 (76.9) .005 .16 (.04–.64) >20 17 (34) Tumor size (cm) ≤5 7 (38.9) .69 1.26 (.41–3.84) >5 20 (44.4) Background liver Without cirrhosis 15 (48.4) .38 .64 (.23–1.75) With cirrhosis 12 (37.5) Grade I-II 16 (37.2) .18 2.06 (.70–6.05) III-IV 11 (55) Metastasis Without 18 (51.4) .27 .56 (.20–1.57) With 9 (32.1) CI: confidence interval, HCC: hepatocellular carcinoma, NF-Y: nuclear factor-Y, and OR: odds ratio.AFP mRNA expression was detected in 38 (60.3%) of 63 HCC tissues (Figure1). It was not significantly associated with age, tumor size, cirrhosis, grading, and metastasis. 
However, there was a significant association between AFP mRNA expression and serum AFP concentration (P=.001, OR=14.14) (Table 4).Table 4 Relationship between AFP mRNA expression and clinicopathological parameters in HCC tissues. Parameter + (%) AFP mRNAP value OR (95% CI) Total 38 (60.3) AFP (μg/L) <20 2 (15.4) .001 14.14 (2.78–72.05) >20 36 (72) Tumor size (cm) ≤5 9 (50) .29 1.81 (.60–5.49) >5 29 (64.4) Background liver Without cirrhosis 22 (71) .09 .41 (.15–1.16) With cirrhosis 16 (50) Grade I-II 29 (67.4) .09 .40 (.13–1.17) III-IV 9 (45) Metastasis Without 21 (60) .95 1.03 (.37–2.85) With 17 (60.7) AFP: alpha-fetoprotein, CI: confidence interval, HCC: hepatocellular carcinoma, and OR: odds ratio.ZHX2 mRNA expression rate (26.3%) in HCC tissues with AFP mRNA expression was significantly lower than that (52%) without AFP mRNA expression (P=.04, OR=.33). In 27 HCC tissues with NF-YA expression, ZHX2 mRNA expression rate was 74.1%. In 36 NF-YA negative HCC tissues, only 3 ZHX2-positive tissues (8.3%) were detected. The difference was statistically significant (P=.001, OR=31.43) (Table 5). Furthermore, NF-YA expression rate (31.6%) in HCC tissues with AFP expression was significantly lower than that (60%) without AFP expression (P=.03, OR=.31) (Table 6).Table 5 Relationship between ZHX2, NF-YA, and AFP mRNA expressions in HCC tissues. + (%) ZHX2 mRNAP value OR (95% CI) NF-YA mRNA − 3 (8.3) .001 31.43 (7.28–135.62) + 20 (74.1) AFP mRNA − 13 (52) .04 .33 (.11–.96) + 10 (26.3) AFP: alpha-fetoprotein, CI: confidence interval, HCC: hepatocellular carcinoma, NF-Y: nuclear factor-Y, OR: odds ratio, and ZHX2: zinc fingers and homeoboxes 2.Table 6 Relationship between  NF-YA and AFP mRNA expression in HCC tissues. + (%) NF-YA mRNAP value OR (95% CI) AFP mRNA − 15 (60) .03 .38 (.11–.88) + 12 (31.6) AFP: alpha-fetoprotein, CI: confidence interval, HCC: hepatocellular carcinoma, NF-Y: nuclear factor-Y, and OR: odds ratio.To further confirm the relations between ZHX2, NF-YA, and AFP, we used siRNA to silence the expression of ZHX2 and then detected the change of NF-YA and AFP expression in HepG2 cells. After transfection of siRNA into the cells, the level of ZHX2 mRNA and protein expression decreased significantly by 85% and 83%, respectively, as compared to control siRNA. Treatment with ZHX2 siRNA simultaneously led to 60% and 61% reduction in the NF-YA mRNA and protein levels in the cells, respectively. Downregulation of ZHX2 also induced a 2.4-fold increase in both AFP mRNA and protein levels (Figures2 and 3).Figure 2 expression of zinc fingers and homeoboxes 2 (ZHX2), nuclear factor-YA (NF-YA), and alpha-fetal protein (AFP) mRNA in HepG2 cells after siRNA transfection. The target mRNA expression was detected by RT-PCR analysis. The expression of glyceraldehyde-3-phosphate dehydrogenase (GAPDH) served as an internal control. The amounts of mRNA in the ZHX2 siRNA-treated cells were compared to those of control cells. After siRNA transfection, ZHX2 mRNA expression (389 bp) was decreased by 85%, NF-YA mRNA expression (117 bp) was reduced by 60%, and AFP mRNA expression (240 bp) was increased by 2.4-fold.Figure 3 Expression of zinc fingers and homeoboxes 2 (ZHX2), nuclear factor-YA (NF-YA), and alpha-fetal protein (AFP) in HepG2 cells after siRNA transfection. The target protein expression was detected by Western blot analysis. The expression ofβ-actin was analyzed as an internal control. The amounts of protein in the ZHX2 siRNA-treated cells were compared to those of control cells. 
After siRNA transfection, ZHX2 protein expression (92 kD) was decreased by 83%, NF-YA protein expression (40 kD and 43 kD) was reduced by 61%, and AFP protein expression (70 kD) was increased 2.4-fold.

## 4. Discussion

ZHX2 is a novel transcriptional repressor consisting of 837 amino acid residues. The protein has two Cys2-His2-type zinc finger motifs and five homeodomains and is localized in the nucleus [15]. We previously found that promoter hypermethylation caused a low mRNA expression of ZHX2 in HCC. In this study, the ZHX2 mRNA expression rate in HCC tissues was 36.5%, similar to that in the previous study (34.4%) [9]. Among the clinicopathological parameters, ZHX2 mRNA expression was negatively associated with the preoperative serum AFP level. We also detected AFP mRNA expression in HCC tissues by RT-PCR, and it was lower in HCC tissues with ZHX2 expression. Taken together, these findings confirm a negative correlation between ZHX2 and AFP expression in HCC. To investigate the regulation of AFP, we used the RNAi technique because it is known to downregulate specific genes at a posttranscriptional level [21]. We found that transfection of the siRNA into HepG2 cells caused silencing of ZHX2 and an increased expression of AFP mRNA and protein. Yamada et al. also demonstrated that the promoter activity of alpha-fetoprotein was repressed by the expression of ZHX2 in HLE hepatoma cells in a dose-dependent manner [22]. They concluded that ZHX2 and ZHX3 were involved in the transcriptional repression of the HCC markers in normal hepatocytes, suggesting that the failure of ZHX2 and/or ZHX3 expression might be a critical factor in hepatocyte carcinogenesis [22].

The interaction of ZHX2 with the serine/threonine-rich activation domain of NF-YA has been previously confirmed [15]. By RT-PCR analysis, we found that ZHX2 expression was positively correlated with NF-YA expression. Treatment with ZHX2 siRNA led to 60% and 61% reductions in the NF-YA mRNA and protein levels, respectively. These results indicate that ZHX2 is correlated with NF-YA in HCC. It is possible that ZHX2 downregulates gene expression via the interaction with NF-YA in HCC.

Furthermore, we analyzed NF-YA and AFP expression in HCC tissues and found that NF-YA expression was negatively correlated with AFP expression and serum AFP level. Treatment with ZHX2 siRNA led to decreased NF-YA and increased AFP at the mRNA and protein levels. Thus, it is possible that ZHX2 regulates AFP transcription via the interaction with NF-YA in HCC. Although there is no evidence that NF-YA directly interacts with AFP, NF-YA might regulate AFP expression indirectly through other genes, such as p300 [14] and p53 [23]. Further studies are required to explain the exact mechanism(s) of AFP regulation in HCC.

In conclusion, we detected ZHX2, NF-YA, and AFP expression in human HCC tissues by RT-PCR and found a close correlation between ZHX2 and NF-YA and a negative relation between ZHX2 and AFP. The RNAi silencing of ZHX2 in HepG2 cells further verified these relations. Therefore, AFP can be regulated by ZHX2 in HCC, and this regulation may occur via the interaction with NF-YA. ZHX2 may be used as an adjuvant diagnostic tissue marker for AFP-negative HCC. Further study is required to determine the expression levels of ZHX2 and AFP in hepatic cirrhosis and dysplastic nodules, in order to understand the expression difference in both lesions and to establish the possibility of ZHX2 as an earlier screening marker for patients with cirrhosis and/or preneoplastic nodules.
--- *Source: 101083-2013-02-26.xml*
Discussion ZHX2 is a novel transcriptional repressor, which consists of 837 amino acid residues. The protein has two Cys2-His2-type zinc finger motifs and five homeodomains and is localized in the nuclei [15]. We previously found that promoter hypermethylation caused a low mRNA expression of ZHX2 in HCC. In this study, ZHX2 mRNA expression rate in HCC tissues was 36.5% and similar to the previous study (34.4%) [9]. Compared with clinicopathological parameters, ZHX2 mRNA expression was negatively associated with preoperative AFP level in serum. We also detected AFP mRNA expression in HCC tissues by RT-PCR, which was found to be lower in HCC tissues with ZHX2 expression. Taken together, these findings confirm a negative correlation between ZHX2 and AFP expressions in HCC. To investigate the regulation of AFP, we used RNAi technique because it is known to downregulate specific gene at a posttranscriptional level [21]. We found that the transfection of siRNA into HepG2 cells caused a silence of ZHX2 and an increased expression of AFP mRNA and protein. Yamada et al. also demonstrated that the promoter activity of alpha-fetoprotein was repressed by the expression of ZHX2 in HLE hepatoma cells in a dose-dependent manner [22]. They concluded that ZHX2 and ZHX3 were involved in the transcriptional repression of the HCC markers in normal hepatocytes, suggesting that the failure of the ZHX2 and/or ZHX3 expression might be a critical factor in hepatocyte carcinogenesis [22].The interaction of ZHX2 with the serine/threonine-rich AD of NF-YA has been previously confirmed [15]. By RT-PCR analysis, we found that ZHX2 expression was positively correlated with NF-YA expression. Treatment with ZHX2 siRNA led to 60% and 61% reduction in the NF-YA mRNA and protein levels, respectively. These results indicated that ZHX2 was correlated to NF-YA in HCC. It is possible that ZHX2 downregulates gene expression via the interaction with NF-YA in HCC.Furthermore, we analyzed NF-YA and AFP expressions in HCC tissues and found that NF-YA expression was negatively correlated to AFP expression and serum AFP level. Treatment with ZHX2 siRNA could lead to decreased NF-YA and increased AFP at the mRNA and protein levels. Thus, it is possible that ZHX2 regulates AFP transcription via the interaction with NF-YA in HCC. Although there is no evidence to show that NF-YA directly interacts with AFP, NF-YA might regulate AFP expression indirectly through other genes, such as p300 [14] and p53 [23]. Further studies are required to explain the exact mechanism(s) of AFP regulation in HCC.In conclusion, we detected ZHX2, NF-YA, and AFP expressions in human HCC tissues by RT-PCR and found a close correlation between ZHX2 and NF-YA and a negative relation between ZHX2 and AFP. The RNAi of ZHX2 in HepG2 cells further verified these relations. Therefore, AFP can be regulated by ZHX2 in HCC, and this regulation may be via the interaction with NF-YA. ZHX2 may be used as an adjuvant diagnostic tissue marker for AFP-negative HCC.Further study is required to detect the expression levels of ZHX2 and AFP in hepatic cirrhosis and dysplastic nodules, in order to understand the expression difference in both lesions and establish the possibility of ZHX2 as an earlier screening marker for patients with cirrhosis and/or preneoplastic nodule. --- *Source: 101083-2013-02-26.xml*
2013
# Impacts of Different Physical Parameterization Configurations on Widespread Heavy Rain Forecast over the Northern Area of Vietnam in WRF-ARW Model **Authors:** Tien Du Duc; Cuong Hoang Duc; Lars Robert Hole; Lam Hoang; Huyen Luong Thi Thanh; Hung Mai Khanh **Journal:** Advances in Meteorology (2019) **Publisher:** Hindawi **License:** http://creativecommons.org/licenses/by/4.0/ **DOI:** 10.1155/2019/1010858 --- ## Abstract This study investigates the impacts of different physical parameterization schemes in the Weather Research and Forecasting model with the ARW dynamical core (WRF-ARW model) on the forecasts of heavy rainfall over the northern part of Vietnam (Bac Bo area). Various physical model configurations generated from different typical cumulus, shortwave radiation, and boundary layer and from simple to complex cloud microphysics schemes are examined and verified for the cases of extreme heavy rainfall during 2012–2016. It is found that the most skilled forecasts come from the Kain–Fritsch (KF) scheme. However, relating to the different causes of the heavy rainfall events, the forecast cycles using the Betts–Miller–Janjic (BMJ) scheme show better skills for tropical cyclones or slowly moving surface low-pressure system situations compared to KF scheme experiments. Most of the sensitivities to KF scheme experiments are related to boundary layer schemes. Both configurations using KF or BMJ schemes show that more complex cloud microphysics schemes can also improve the heavy rain forecast with the WRF-ARW model for the Bac Bo area of Vietnam. --- ## Body ## 1. Introduction Elongated to 15 degrees (from 8 to 23 degrees north), Vietnam weather is affected by a variety of extratropical and tropical systems such as cold surges and cold fronts, subtropical troughs and monsoon, tropical disturbances, and typhoons [1–3, 4]. An approximate number of 28–30 heavy rain events occurring over the whole Vietnam region every year were determined from local surface stations. According to Nguyen et al. [4], the rainy season in the Vietnam area usually lasts from May to October with the peak rainy months starting in the north and then moving to the south with time because of the movement of the subtropical ridge and Intertropical Convergence Zone (ITCZ). In the northern part of Vietnam, the peak rainy months can be seen during the June–September period, particularly in July and August, which is partly related to the activity of the ITCZ [5].Apart from single weather patterns causing heavy rain for Vietnam, it is common to observe a mix of different patterns simultaneously. A good example for this is the historical rain and flood over Central Vietnam in the Hue city in 1999 due to the combination of cold surges and tropical depression-type disturbances [6]. In 2008, extreme heavy rain was witnessed in Hanoi as a result of the dramatic intensification of the easterly disturbance through the midlatitude-tropical interactions [7]. Likewise, a surface low pressure which coincided with the subtropical upper-level trough was the main factor causing historical rainfall over northeastern Vietnam in Quang Ninh Province in 2015 [8].In the northern part of Vietnam (hereinafter referred to as Bac Bo area)—the focus area of this study—tropical cyclones or surface low-pressure patterns are one among several key factors leading to heavy rain for different regions in Vietnam. 
ITCZ plays the vital role in facilitating the development of low-level vortexes which are likely to develop into tropical disturbances and typhoons in the summer time, particularly in July and August. As a result, rain may fall over the Bac Bo area with excessive amount, and rain duration directly depends upon the lifetime of ITCZ and the disturbances itself. The other causing reasons are related to the effects of cold surge from the north (Siberian area) or the combinations of different patterns (a cold surge with a tropical cyclone, a cold surge with a trough, a surface low-pressure system with a trough, etc.). Table1 provides the observed precipitation from SYNOP stations in the Bac Bo area with the annual rainfall mostly ranging from 1500 to 2000 mm and a high number of daily heavy rain occurrences (over 50 mm/24 h and 100 mm/24 h).Table 1 Station information over the northern part of Vietnam, annual rainfall (over the period 1998–2018) in mm, and the number of days with daily accumulated rainfall over 50 mm/24 h (#50 mm) and over 100 mm/24 h (#100 m) in average. The number of samples is ∼7300 each station. Station name Long. Lat. Annual rain #50 mm #100 mm Muong Te 102.83 22.37 2423 11 2 Sin Ho 103.23 22.37 2787 13 2 Tam Duong 103.48 22.42 2355 9 2 Muong La 104.03 21.52 1412 5 1 Than Uyen 103.88 21.95 1920 8 1 Quynh Nhai 103.57 21.85 1678 7 1 Mu Cang Chai 104.05 21.87 1741 5 1 Tuan Giao 103.42 21.58 1586 5 1 Pha Din 103.5 21.57 1840 6 1 Van Chan 104.52 21.58 1481 6 1 Song Ma 103.75 21.5 1118 2 0 Co Noi 104.15 21.13 1308 4 0 Yen Chau 104.3 21.05 1229 3 0 Bac Yen 104.42 21.23 1519 4 0 Phu Yen 104.63 21.27 1496 5 1 Minh Dai 105.05 21.17 1695 6 1 Moc Chau 104.68 20.83 1581 5 1 Mai Chau 105.05 20.65 1698 7 1 Pho Rang 104.47 22.23 1648 7 1 Bac Ha 104.28 22.53 1611 4 0 Hoang Su Phi 104.68 22.75 1739 5 1 Bac Me 105.37 22.73 1753 6 0 Bao Lac 105.67 22.95 1179 3 0 Bac Quang 104.87 22.5 4295 24 9 Luc Yen 104.78 22.1 1917 8 1 Ham Yen 105.03 22.07 1858 8 1 Chiem Hoa 105.27 22.15 1631 6 1 Cho Ra 105.73 22.45 1417 5 0 Nguyen Binh 105.9 22.65 1686 6 1 Ngan Son 105.98 22.43 1853 8 1 Trung Khanh 106.57 22.83 1821 7 2 Dinh Hoa 105.63 21.92 1749 8 2 Bac Son 106.32 21.9 1693 7 2 Huu Lung 106.35 21.5 1572 7 1 Dinh Lap 107.1 21.53 1865 7 2 Quang Ha 107.75 21.45 2883 16 5 Phu Ho 105.23 21.45 1480 6 1 Tam Dao 105.65 21.47 2585 13 4 Hiep Hoa 105.97 21.35 1601 6 1 Bac Ninh 106.08 21.18 1608 7 2 Luc Ngan 106.55 21.38 1406 6 1 Son Dong 106.85 21.33 1686 8 2 Ba Vi 105.42 21.15 1942 9 2 Ha Dong 105.75 20.97 1635 7 1 Chi Linh 106.38 21.08 1536 7 1 Uong Bi 106.75 21.03 1824 9 2 Kim Boi 105.53 20.33 2115 9 2 Chi Ne 105.78 20.48 1841 8 2 Lac Son 105.45 20.45 2040 9 2 Cuc Phuong 105.72 20.25 1918 9 2 Yen Dinh 105.67 19.98 1509 7 1 Sam Son 105.9 19.75 1759 9 3 Do Luong 105.3 18.89 1871 9 2 Lai Chau 103.15 22.07 2178 10 2 Sa Pa 103.82 22.35 2581 10 2 Lao Cai 103.97 22.5 1704 7 1 Ha Giang 104.97 22.82 2372 12 2 Son La 103.9 21.33 1456 4 1 That Khe 106.47 22.25 1482 6 1 Cao Bang 106.25 22.67 1439 6 1 Bac Giang 106.22 21.3 1518 7 1 Hon Ngu 105.77 18.8 2007 11 4 Bac Can 105.83 22.15 1421 5 1 Dien Bien Phu 103 21.37 1536 5 1 Tuyen Quang 105.22 21.82 1654 8 2 Viet Tri 105.42 21.3 1601 8 2 Vinh Yen 105.6 21.32 1489 6 1 Yen Bai 104.87 21.7 1768 8 1 Son Tay 105.5 21.13 1612 7 1 Hoa Binh 105.33 20.82 1828 8 2 Huong Son 105.43 18.52 2135 10 4 Ha Noi 105.8 21.03 1647 8 2 Phu Ly 105.92 20.55 1731 8 2 Hung Yen 106.05 20.65 1381 6 1 Nam Dinh 106.15 20.39 1564 7 1 Ninh Binh 105.97 20.23 1662 8 2 Phu Lien 106.63 20.8 1572 7 1 Hai Duong 
106.3 20.93 1561 7 1 Hon Dau 106.8 20.67 1501 8 1 Van Ly 106.3 20.12 1620 9 2 Lang Son 106.77 21.83 2874 12 2 Thai Nguyen 105.83 21.6 1266 5 1 Nho Quan 105.73 20.32 1728 8 2 Bai Chay 107.07 20.97 1700 8 2 Co To 107.77 20.98 1933 10 3 Thai Binh 106.35 20.45 1930 10 3 Cua Ong 107.35 21.02 1591 8 1 Tien Yen 107.4 21.33 2189 12 3 Mong Cai 107.97 21.52 2168 10 3 Bach Long Vi 107.72 20.13 2670 15 5 Huong Khe 105.72 18.18 1216 6 1 Thanh Hoa 105.78 19.75 2516 12 5 Hoi Xuan 105.12 20.37 1701 8 3 Tuong Duong 104.43 19.28 1720 6 1 Vinh 105.7 18.67 1305 5 1 Ha Tinh 105.9 18.35 1896 10 4 Ky Anh 106.28 18.07 2507 13 5 Bai Thuong 105.38 19.9 1830 7 2 Nhu Xuan 105.57 19.63 1758 8 3 Tinh Gia 105.78 19.45 1777 9 3 Quy Chau 105.12 19.57 1661 7 1 Quy Hop 105.15 19.32 1579 7 2 Tay Hieu 105.4 19.32 1517 7 2 Quynh Luu 105.63 19.17 1559 8 2 Con Cuong 104.88 19.05 1647 7 2In Vietnam National Center for Hydro-Meteorological Forecasting (NCHMF), several global model (NWP) products are mostly applied in operational forecast to predict the occurrence of heavy rain. These include the models from National Centers for Environmental Prediction (NCEP), European Centre for Medium-Range Weather Forecasts (ECMWF), Japan Meteorological Agency (JMA), and Germany’s National Meteorological Service (DWD). In addition, some regional NWP products including the High Resolution Regional Model (HRM) (the HRM’s information can be found at DWD’s internet linkhttps://www.dwd.de/SharedDocs/downloads/DE/modelldokumentationen/nwv/hrm/HRM_users_guide.pdf) [9], the Consortium for Small-Scale Modeling (COSMO) [10] models from DWD, and the Weather Research and Forecasting (WRF-ARW) model [11] from the National Center for Atmospheric Research (NCAR) are also a useful reference source in the operational heavy rainfall forecast of Vietnam [12]. Despite the predictability of these models in some certain cases, they may fail to predict extreme events because of several reasons. Lorenz [13] pointed out the three main factors causing uncertainties in NWP are the initial conditions, the imperfection of the models, and the chaos of the atmosphere. While the initial condition problem for NWP can be reduced by data assimilation methods, the imperfection of models, which relates to many subgrid processes, can be alleviated by using proper physical parameterizations. For regional weather forecasting centers with limited capabilities in data assimilation and resource computation to provide the cloud resolved resolution forecast, choosing correct physical parameterization schemes still plays the most important role in downscaling the processes in regional NWP models [14].To illustrate the dependence of heavy rainfall forecast on physical parameterizations, the typical heavy rainfall event relating to the activities of ITCZ over the South China Sea—the East Sea of Vietnam—from 27 to 30 August 2014 in the Bac Bo area was simulated by the WRF-ARW model (see mean sea level pressure analysis of the GFS model in Figure1(a)). 
The 3-day accumulated rainfall was mostly from 100 mm to 150 mm, and some stations recorded more than 200 mm, such as Kim Boi, which recorded 229 mm (Hoa Binh Province; marked with a square in Figure 1(b)), Dinh Lap with 245 mm (Lang Son Province, the northeast area; marked with a star in Figure 1(b)), and Tam Dao with 341 mm (Vinh Phuc Province, center of the domain; marked with a circle in Figure 1(b)).

Figure 1 Example of the impact of different physical parameterization schemes on a heavy rain forecast with the WRF-ARW model (5 km horizontal resolution) over the Bac Bo area (the northern part of Vietnam), issued at 00UTC 27/08/2014. (a) Analysis of mean sea level pressure from the GFS at 00UTC 27/08/2014. (b) 72 h accumulated precipitation from synoptic observations from 00UTC 27/08/2014 to 00UTC 30/08/2014. 72 h accumulated precipitation forecasts from the WRF-ARW model with KF cumulus parameterization and different cloud microphysics, shortwave radiation, and boundary layer schemes: (c) KF-Lin-Duh-MYJ; (d) KF-WSM3-Duh-MYJ; (e) KF-WSM5-God-MYJ; (f) KF-WSM5-God-YSU. 72 h accumulated precipitation forecasts with BMJ cumulus parameterization: (g) BMJ-Lin-Duh-MYJ; (h) BMJ-WSM3-Duh-MYJ; (i) BMJ-WSM5-God-MYJ; (j) BMJ-WSM5-God-YSU. More details of the model configurations used in experiments (c) to (j) can be found in Table 2.

Figure 1 illustrates 72 h accumulated rainfall forecasts from the WRF-ARW model at 5 km horizontal resolution, issued at 00UTC 27/08/2014 with different physical parameterization configurations combining the Betts–Miller–Janjic (BMJ) or Kain–Fritsch (KF) cumulus schemes, the Lin or WRF single-moment three- or five-class (WSM3/WSM5) cloud microphysics schemes, the Dudhia or Goddard shortwave radiation schemes, and the Yonsei University (YSU) or Mellor–Yamada–Janjic (MYJ) boundary layer schemes.

With the KF cumulus parameterization scheme (Figures 1(c)–1(f)), the extreme rainfall over the northeast area (Lang Son Province) was captured, with forecast amounts above 150 mm, but rainfall was overestimated over the mountainous regions of the northwest area (Hoang Lien Son mountain ranges). The extreme rainfall over the center of the domain (Vinh Phuc Province) was not simulated well with the KF scheme in this situation.

For the experiments using the BMJ scheme (Figures 1(g)–1(j)), the extreme rainfall over Lang Son Province is underestimated, but the cases using Lin or WSM3 microphysics, the Dudhia shortwave radiation scheme, and the MYJ boundary layer scheme (Figures 1(g) and 1(h)) reduce the overestimation over the Hoang Lien Son mountain ranges seen in the KF experiments. The extreme heavy rainfall over the center of the domain (Vinh Phuc Province) is forecast quite well when using WSM5 microphysics with the Goddard shortwave radiation scheme and the MYJ boundary layer scheme (Figure 1(i)). The same configuration but with the YSU boundary layer scheme instead gives very different results, overestimating rainfall over the southern area of the domain (Figure 1(j)).

Many studies have been carried out to validate the effects of physical parameterization schemes in the WRF-ARW model. Zeyaeyan et al. [15] evaluated the effect of various physics schemes in the WRF-ARW model on the simulation of summer rainfall over the northwest of Iran (NWI). The results show that cumulus schemes are the most sensitive and microphysics schemes the least sensitive.
The comparison between 15 km and 5 km resolution simulations did not show obvious advantages of downscaling. These investigations showed the best results, for both the 5 and 15 km resolutions, with model configurations using the newer Tiedtke cumulus scheme, the MYJ boundary layer scheme, and the WSM3/Kessler microphysics schemes. Tan [16] performed sensitivity tests of the microphysics parameterization of the WRF-ARW model for quantitative extreme precipitation forecasting as input to hydrological models. Nineteen bulk microphysics parameterization schemes were evaluated for a storm situation in California in 1997. The most important finding was that the extreme, short-interval precipitation simulated by the WRF-ARW model, which is very important as hydrological forecast input, can be improved by the choice of microphysics scheme. Nasrollahi et al. [17] showed that different features of a hurricane (track, intensity, and precipitation) in the WRF-ARW model can be improved by suitable selections of microphysics and cumulus schemes. Their results showed that the best simulated precipitation was achieved by using BMJ cumulus parameterization combined with the WSM5 microphysics scheme, whereas the hurricane track was best estimated by using the Lin or Kessler microphysics option with BMJ cumulus parameterization. A further evaluation of physical parameterizations for tropical cyclones by Pattanayak et al. [18], using WRF with the NMM dynamical core, showed the important role of cumulus parameterization in track forecasts and, consequently, in the rainfall induced by landfalling tropical cyclones.

Regarding the use of cumulus parameterization schemes in high-resolution simulations, Gilliland et al. [19] compared simulations of summertime convective activity using different cumulus parameterization schemes. Compared with model simulations at horizontal resolutions below 5 km, which resolve the convection explicitly, the study showed that, depending on the strength of the synoptic-scale forcing, the use of a cumulus parameterization scheme (the Kain–Fritsch scheme in that study) can still be warranted for representing the effects of subgrid-scale convective processes.

Thus, the application of regional models to particular regions such as Vietnam is influenced significantly by local factors (topography and microclimate) as well as by the combination of physical schemes. Among the regional models applied in Vietnam, the WRF-ARW model is the most useful tool because it offers a wide range of physical and dynamical configurations to the research and operational communities. For this reason, this study focuses on the impact of physical parameterization configurations of the WRF-ARW model, combining cumulus, cloud microphysics, shortwave radiation, and boundary layer parameterization schemes, on heavy rainfall forecasts. Two typical cumulus parameterizations (adjustment and mass-flux approaches) are investigated in combination with simple to complex cloud microphysics schemes. The dependence of forecast skill on the typical synoptic situations/weather patterns causing heavy rainfall over the Bac Bo area is also verified. To reflect real-time forecasting practice, the lateral boundary conditions are taken from the Global Forecast System (GFS) of NCEP.

The remainder of this paper is organized as follows: Section 2 presents the experimental design and validation methods.
Section 3 discusses the impacts of different physical parameterization configurations on heavy rainfall over the Bac Bo area, and conclusions are given in Section 4. ## 2. Experiments ### 2.1. Model Description This study used the recently released version of the Weather Research and Forecasting model with the ARW dynamical core (WRF-ARW model; version 3.9.1.1) with multinested grids and two-way interactive options. The WRF model has been integrated many advances in model physics/numerical aspects and data assimilation by scientists and developers from the expansive research community, therefore becoming a very flexible and useful tool for both researchers and operational forecasters (https://www.mmm.ucar.edu/weather-research-and-forecasting-model).For the purpose of investigating the impact of physical parameterization schemes, similarly to the study of Kieu et. al. [20, 21], a set of combination of physical parameterizations has been generated based on (a) the modified KF and BMJ cumulus parameterization schemes; (b) the Goddard and Dudhia schemes for the shortwave radiation; (d) the YSU and MYJ planetary boundary schemes; and (e) the Lin, WSM3, WSM5, and WSM6 schemes for the cloud microphysics. There are a maximum of 32 different configuration forecasts for each heavy rainfall case listed in Table 2. The other options are the Monin–Obukhov surface layer scheme and the Rapid Radiative Transfer Model scheme for longwave radiation. Note that, with the MYJ scheme, the surface layer option will be switched to Janjic’s Eta–Monin–Obukhov scheme which is based on similar theory with viscous sublayers over both solid surfaces and water points. Skamarock et al. [22] provided the detailed description of the WRF-ARW model, and various references for physical parameterizations of the WRF-ARW model can be found from various listed references [11, 23–29, 30].Table 2 Details of physical parameterization configurations in different experiments. Abbreviation Microphysics Shortwave radiation Boundary layer Betts–Miller–Janjic (BMJ) cumulus parameterization BMJ-Lin-Duh-MYJ Lin Duhia MYJ BMJ-Lin-Duh-YSU Lin Duhia YSU BMJ-Lin-God-MYJ Lin Goddard MYJ BMJ-Lin-God-YSU Lin Goddard YSU BMJ-WSM3-Duh-MYJ WSM3 Duhia MYJ BMJ-WSM3-Duh-YSU WSM3 Duhia YSU BMJ-WSM3-God-MYJ WSM3 Goddard MYJ BMJ-WSM3-God-YSU WSM3 Goddard YSU BMJ-WSM5-Duh-MYJ WSM5 Duhia MYJ BMJ-WSM5-Duh-YSU WSM5 Duhia YSU BMJ-WSM5-God-MYJ WSM5 Duhia MYJ BMJ-WSM5-God-YSU WSM5 Goddard YSU BMJ-WSM6-Duh-MYJ WSM6 Duhia MYJ BMJ-WSM6-Duh-YSU WSM6 Duhia YSU BMJ-WSM6-God-MYJ WSM6 Goddard MYJ BMJ-WSM6-God-YSU WSM6 Goddard YSU Kain–Fritsch (KF) cumulus parameterization KF-Lin-Duh-MYJ Lin Duhia MYJ KF-Lin-Duh-YSU Lin Duhia YSU KF-Lin-God-MYJ Lin Goddard MYJ KF-Lin-God-YSU Lin Goddard YSU KF-WSM3-Duh-MYJ WSM3 Duhia MYJ KF-WSM3-Duh-YSU WSM3 Duhia YSU KF-WSM3-God-MYJ WSM3 Goddard MYJ KF-WSM3-God-YSU WSM3 Goddard YSU KF-WSM5-Duh-MYJ WSM5 Duhia MYJ KF-WSM5-Duh-YSU WSM5 Duhia YSU KF-WSM5-God-MYJ WSM5 Goddard MYJ KF-WSM5-God-YSU WSM5 Goddard YSU KF-WSM6-Duh-MYJ WSM6 Duhia MYJ KF-WSM6-Duh-YSU WSM6 Duhia YSU KF-WSM6-God-MYJ WSM6 Goddard MYJ KF-WSM6-God-YSU WSM6 Goddard YSUThe WRF-ARW model is configured with two nested grid domains consisting of 199 × 199 grid points in the (x, y) dimensions with horizontal resolutions of 15 km (denoted as d01 domain) and 5 km (denoted as d02 domain). All domains share 41 similar vertical σ levels with the model top at 50 hPa. The higher resolution domain covers the northern part of Vietnam with a time step of 15 seconds. 
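Because the experiment set in Table 2 is simply the Cartesian product of the two cumulus, four microphysics, two shortwave radiation, and two boundary layer options, it can be generated programmatically. The sketch below builds the 32 experiment names used in this study; the WRF namelist option indices in the mapping are the commonly used values for these schemes in WRF-ARW and are an assumption to be checked against the model version in use, not values stated in the paper.

```python
from itertools import product

# Scheme options examined in this study (Table 2), with commonly used WRF-ARW
# namelist indices; the indices are assumptions for illustration and should be
# verified against the WRF-ARW version actually run (here 3.9.1.1).
cumulus = {"BMJ": 2, "KF": 1}                                # cu_physics
microphysics = {"Lin": 2, "WSM3": 3, "WSM5": 4, "WSM6": 6}   # mp_physics
shortwave = {"Duh": 1, "God": 2}                             # ra_sw_physics (Dudhia, Goddard)
boundary_layer = {"MYJ": 2, "YSU": 1}                        # bl_pbl_physics

experiments = []
for cu, mp, sw, pbl in product(cumulus, microphysics, shortwave, boundary_layer):
    name = f"{cu}-{mp}-{sw}-{pbl}"
    namelist = {
        "cu_physics": cumulus[cu],
        "mp_physics": microphysics[mp],
        "ra_sw_physics": shortwave[sw],
        "bl_pbl_physics": boundary_layer[pbl],
    }
    experiments.append((name, namelist))

print(len(experiments), "configurations")  # 32, matching Table 2
for name, opts in experiments[:3]:
    print(name, opts)
```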
All validations will be carried out with forecasts from the d02 domain. Figure 2 shows the terrain used in d01 and d02 domains.Figure 2 SYNOP station distribution over Vietnam and closed countries (black dots) and terrain of computing domains for the WRF-ARW model. The blue contour is for the outer domain (d01, 15 km), and dark green shading is for the inner domain (d02, 5 km), illustrated by the Diana meteorological visualization software of the Norwegian Meteorological Institute. ### 2.2. Boundary Conditions The GFS model of NCEP used to provide boundary conditions for the WRF-ARW model in this study has a 0.5-degree horizontal resolution and be prepared every three hours from 1000 hPa to 1 hPa. The GFS data for this study were downloaded from the Research Data Archive at the National Center for Atmospheric Research via website linkhttps://rda.ucar.edu/datasets/ds335.0. More information on GFS data can be found at https://www.nco.ncep.noaa.gov/pmb/products/gfs/. ### 2.3. Observation Data The number of observation stations in Vietnam increased from 89 in 1988 to 186 in 2017, with 4 or 8 observations per day (black dots are 8 observations/day for stations in Vietnam and nearby countries in Figure2), but only 24 stations are reported to WMO. The difference in location and topography results in the significant change from one climate to another; therefore, Vietnam has been divided into several climate zones. The highest station density is in the Red River Delta area (the southern part of northern Vietnam) with approximately 1 station per 25 km × 25 km area. The coarsest station density is in the Central Highlands area (latitude between ∼11 N and 16 N) with approximately 1 station per 55 km × 55 km area. On average, the current surface observation network density of Vietnam is about 1 station per 35 km × 35 km for flat regions and 1 station per 50 km × 50 km for mountainous complex regions. In this paper, in order to verify model forecast for the Bac Bo area, we used observation data from the northern SYNOP stations for the period from 2012 to 2016 listed in Table 1.In this study, 72 cases of typical widespread heavy rains which occurred in the northern part of Vietnam in the period 2012–2016 were selected. For each case, the forecasting cycles are chosen so that the 72-hour forecast range can cover the maximum duration of heavy rain episodes.With respect to causes of heavy rainfall events, there are four main categories: (i) activities of ITCZs or troughs: type I, (ii) affected by tropical cyclones or surface low-pressure system (staying at least more than 2 days over the Bac Bo area): type II, (iii) related to the cold surge from the north: type III, and (iv) the complex combinations from different patterns: type IV. The list of forecasted events is given in Table3 with the station name, the maximum value of daily rainfall, and the type of each heavy rainfall case. The sample number of type I, type II, type III, and type IV has 37, 21, 5, and 9 forecast cycles, respectively.Table 3 List of forecast cycles related to heavy rain cases from 2012 to 2016, the station with maximum daily accumulated rainfall up to 72 h for each cycle, and types of main synoptic situations. 
Forecast cycle (year month day hour) Maximum 24 h accumulation observation (mm) Station with maximum observation Types of rain events 2012 05 18 00 37 Bac Yen Type I 2012 05 19 00 44 Lac Son Type I 2012 05 20 00 114 Bac Quang Type I 2012 05 21 00 195 Bac Quang Type I 2012 05 22 00 131 Ha Dong Type I 2012 05 23 00 186 Quang Ha Type I 2012 07 20 00 41 Bac Quang Type I 2012 07 21 00 24.6 Lai Chau Type II 2012 07 22 00 116 Vinh Yen Type II 2012 07 23 00 75 Vinh Type II 2012 07 25 00 229 Tuyen Quang Type II 2012 07 26 00 104 Ham Yen Type II 2012 07 27 00 112 Bac Quang Type I 2012 08 03 00 58.1 Con Cuong Type I 2012 08 04 00 34.1 Tinh Gia Type I 2012 08 05 00 70 Lang Son Type I 2012 08 06 00 153 Muong La Type I 2012 08 07 00 163 Quang Ha Type I 2012 08 13 00 63.6 Muong La Type II 2012 08 14 00 64.5 Nho Quan Type II 2012 08 15 00 74 Do Luong Type II 2012 08 16 00 76 Tam Dao Type II 2012 08 31 00 49.1 Sin Ho Type IV 2012 09 01 00 67 Nhu Xuan Type IV 2012 09 02 00 124 Quynh Luu Type IV 2012 09 03 00 84 Moc Chau Type IV 2012 09 04 00 157 Huong Khe Type IV 2012 09 16 00 151.4 Muong Te Type IV 2012 09 17 00 104.6 Tam Duong Type III 2013 06 19 00 109 Bac Quang Type II 2013 06 20 00 72.2 Huong Khe Type II 2013 06 21 00 59 Nam Dinh Type II 2013 06 22 00 218 Ha Tinh Type II 2014 08 25 00 91 Lao Cai Type II 2014 08 26 00 89 Van Ly Type II 2014 08 27 00 157 Hon Ngu Type II 2015 05 18 00 90 Hoi Xuan Type III 2015 05 19 00 58 Sin Ho Type III 2015 05 20 00 106 That Khe Type III 2015 05 21 00 72 Tuyen Quang Type III 2015 06 21 00 124 Luc Yen Type II 2015 06 22 00 72.7 Do Luong Type II 2015 07 01 00 84.5 Cao Bang Type I 2015 07 02 00 123.0 Van Chan Type I 2015 07 03 00 74.0 Huong Khe Type I 2015 07 21 00 54.5 Hoa Binh Type I 2015 07 22 00 19 Sin Ho Type I 2015 07 23 00 92 Sin Ho Type I 2015 07 24 00 180 Quynh Nhai Type I 2015 07 25 00 181 Cua Ong Type I 2015 07 26 00 432 Cua Ong Type I 2015 07 27 00 347 Quang Ha Type I 2015 07 28 00 224 Mong Cai Type I 2015 07 29 00 247 Cua Ong Type I 2015 07 30 00 145 Quang Ha Type I 2015 07 31 00 239 Quang Ha Type I 2015 08 01 00 157 Phu Lien Type I 2015 09 18 00 73 Hai Duong Type IV 2015 09 19 00 78.0 Dinh Hoa Type IV 2016 05 20 00 118 Tinh Gia Type I 2016 05 21 00 107.0 Ha Nam Type I 2016 05 22 00 92 Sa Pa Type I 2016 05 23 00 190.4 Yen Bai Type I 2016 07 24 00 60 Phu Lien Type II 2016 07 25 00 17 Thai Binh Type II 2016 07 26 00 46 Dinh Hoa Type II 2016 07 30 00 18.1 Mu Cang Chai Type II 2016 08 01 00 19 Sin Ho Type I 2016 08 02 00 150 Ninh Binh Type I 2016 08 10 00 81 Quynh Nhai Type I 2016 08 11 00 117 Thanh Hoa Type I 2016 08 12 00 87 Tay Hieu Type I Type I: activities of trough or ITCZ; type II: affected by tropical cyclone or low-pressure system; type III: related to cold surge from the north; type IV: combinations of different patterns. ### 2.4. Validation Methods By finding the nearest grids to each station position (listed in Table1), the daily accumulated rainfall for these heavy rainfall cases from WRF-ARW model forecasts can be assigned. The verification scores used in this study are frequency bias (BIAS), probability of detection (POD), false alarm ratio (FAR), threat score (TS), and equitable threat score (ETS). 
If we denote H as the number of hits (rainfall above the given threshold in both the forecast and the observation), M as the number of misses (rainfall observed but not forecast), and F as the number of false alarms (rainfall forecast but not observed), the BIAS, POD, FAR, and TS are calculated by the following equations:

$$\mathrm{BIAS}=\frac{H+F}{H+M}\quad(\text{perfect value}=1;\ <1\ \text{underestimation},\ >1\ \text{overestimation}),$$
$$\mathrm{POD}=\frac{H}{H+M}\quad(\text{perfect value}=1),$$
$$\mathrm{FAR}=\frac{F}{H+F}\quad(\text{perfect value}=0),$$
$$\mathrm{TS}=\frac{H}{H+M+F}\quad(\text{perfect value}=1,\ \text{no skill}=0). \tag{1}$$

If we set $\mathrm{Hits}_{\mathrm{random}}=(H+F)(H+M)/T$, where T is the sum of H, M, F, and the number of nonoccurrences of rainfall in both the forecast and the observation, the ETS is calculated by

$$\mathrm{ETS}=\frac{H-\mathrm{Hits}_{\mathrm{random}}}{H+M+F-\mathrm{Hits}_{\mathrm{random}}}\quad(\text{perfect value}=1,\ \text{no skill}\le 0). \tag{2}$$

Further details on these scores can be found in Wilks [31]. The verification is carried out for the 5 km domain and for 24 h accumulated rainfall at the 24 h, 48 h, and 72 h forecast ranges. Other analysis charts include histograms of precipitation occurrence at given thresholds (>25 mm/24 h, >50 mm/24 h, and >100 mm/24 h) at the observation stations.
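To make the definitions in equations (1) and (2) concrete, the following is a minimal sketch that turns a 2x2 contingency table into the scores used in this study. The counts in the example are hypothetical; `correct_negatives` is the nonoccurrence count that enters T.

```python
from dataclasses import dataclass

@dataclass
class Contingency:
    hits: int               # H: event forecast and observed
    misses: int             # M: event observed but not forecast
    false_alarms: int       # F: event forecast but not observed
    correct_negatives: int  # event neither forecast nor observed

def scores(c: Contingency) -> dict:
    """Categorical scores as defined in equations (1) and (2)."""
    h, m, f = c.hits, c.misses, c.false_alarms
    total = h + m + f + c.correct_negatives
    hits_random = (h + f) * (h + m) / total
    return {
        "BIAS": (h + f) / (h + m),
        "POD": h / (h + m),
        "FAR": f / (h + f),
        "TS": h / (h + m + f),
        "ETS": (h - hits_random) / (h + m + f - hits_random),
    }

# Hypothetical daily-rainfall contingency counts at the >25 mm/24 h threshold.
example = Contingency(hits=300, misses=800, false_alarms=350, correct_negatives=6100)
for name, value in scores(example).items():
    print(f"{name}: {value:.3f}")
```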
## 3. Results

### 3.1. General Performance

The histogram charts (Figure 3) show the number of observations or forecasts that occurred at stations for given ranges, or bins. We divided rainfall into 4 main bins (0–25 mm, 25–50 mm, 50–100 mm, and >100 mm) for different rainfall classes. For all 24 h, 48 h, and 72 h forecasts, it is quite clear that most forecasts from the BMJ scheme are in the 0–25 mm range, higher than the number of observations, while the KF scheme tends to have fewer forecasts than observations in this range. In contrast, at the thresholds greater than 25 mm, the number of BMJ forecasts is less than the number of observations, while the KF scheme tends to have more forecasts than the number of observations.

Figure 3 Histogram of daily rainfall frequency at different thresholds (bins) for (a) 24 h, (b) 48 h, and (c) 72 h forecast ranges. The dotted lines are the observation frequency. The number of samples is 8064 for an individual forecast.

Figure 4 shows the BIAS score at different thresholds (>25 mm/24 h and >50 mm/24 h), separated into KF and BMJ scheme combinations. The overall assessment through the BIAS score is quite similar to the results from the evaluation through histograms: simulations with the BMJ scheme tend to be lower than observations at most forecast ranges and thresholds (BIAS < 1), while simulations with the KF scheme tend to be higher than the observations (BIAS > 1). The BIAS tends to decrease significantly when the forecast range increases.
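The bin counting behind Figure 3 and the frequency-bias comparison discussed here can be reproduced with a few lines of array code. The sketch below uses synthetic daily totals; the arrays, random values, and variable names are illustrative placeholders, not the study's data.

```python
import numpy as np

# Minimal sketch: bin daily rainfall into the four classes used in Figure 3 and
# compute the frequency bias at a few thresholds. Observed and forecast arrays
# are synthetic placeholders, one value per station-day.
rng = np.random.default_rng(0)
observed = rng.gamma(shape=0.8, scale=25.0, size=8064)  # mm/24 h
forecast = rng.gamma(shape=0.8, scale=22.0, size=8064)  # mm/24 h

bins = [0.0, 25.0, 50.0, 100.0, np.inf]
obs_counts, _ = np.histogram(observed, bins=bins)
fcst_counts, _ = np.histogram(forecast, bins=bins)
print("bin counts (obs): ", obs_counts)
print("bin counts (fcst):", fcst_counts)

for threshold in (25.0, 50.0, 100.0):
    hits = np.sum((forecast >= threshold) & (observed >= threshold))
    false_alarms = np.sum((forecast >= threshold) & (observed < threshold))
    misses = np.sum((forecast < threshold) & (observed >= threshold))
    bias = (hits + false_alarms) / (hits + misses)
    print(f">{threshold:.0f} mm/24 h: frequency bias = {bias:.2f}")
```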
When the validation thresholds are increased, BIAS for the BMJ combinations tends to decrease further, whereas for the KF combinations BIAS increases with both the forecast range and the evaluation threshold.

Figure 4 The BIAS score at 24 h, 48 h, and 72 h for different thresholds (over 25 mm, 50 mm, and 100 mm) for BMJ scheme combinations (a) and for KF scheme combinations (b). The number of samples is 8064 for an individual forecast.

The individual assessment of each combination of the BMJ and KF schemes shows that, when combined with the Goddard shortwave radiation scheme, BIAS increases by 0.1 to 0.2 compared to the Dudhia scheme. These results are similar for all validation thresholds and for the 24 h, 48 h, and 72 h forecast ranges. The difference between simulations with different boundary layer schemes is unclear at thresholds below 25 mm/24 h; at higher thresholds, however, it becomes apparent, and the change in BIAS is much larger for the KF combinations. For example, the BIAS score of KF-WSM3-God-MYJ at the >100 mm/24 h threshold and the 24-hour forecast range is 1.6458 and that of KF-WSM3-God-YSU is 2.0833, while BMJ-WSM3-God-MYJ and BMJ-WSM3-God-YSU have approximately equal BIAS scores (0.8229). Thus, with only the two boundary layer schemes considered here (YSU and MYJ), the KF scheme is much more sensitive to the choice of boundary layer scheme than the BMJ scheme, especially at the high rainfall thresholds. Detailed numerical values of BIAS can be found in Tables 4, 5, and 6.

Table 4 Skill scores (TS, ETS, BIAS, POD, and FAR) and hit counts H, false alarm counts F, miss counts M, and correct negative counts T at the 24 h forecast range for thresholds >25 mm/24 h and >50 mm/24 h, for the period 2012–2016. The number of samples is 8064.
## 3.1. General Performance

The histogram charts (Figure 3) show the number of observations or forecasts that occurred at the stations for given ranges, or bins. We divided daily rainfall into four main bins (0–25 mm, 25–50 mm, 50–100 mm, and >100 mm) for the different rainfall classes. For all 24 h, 48 h, and 72 h forecasts, it is quite clear that most forecasts from the BMJ scheme fall in the 0–25 mm range, more often than the observations, while the KF scheme tends to have fewer forecasts than observations in this range. In contrast, at the thresholds greater than 25 mm, the BMJ scheme produces fewer forecasts than the number of observations, while the KF scheme tends to produce more forecasts than the number of observations.

Figure 3 Histogram of daily rainfall frequency at different thresholds (bins) for (a) 24 h, (b) 48 h, and (c) 72 h forecast ranges. The dotted lines are the observation frequency. The number of samples is 8064 for an individual forecast. (a) (b) (c)
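To illustrate how the frequency comparison behind Figure 3 can be assembled, the sketch below bins matched daily observations and forecasts into the four rainfall classes used here and counts station-days per class. This is a minimal Python sketch under the assumption that `obs_mm` and `fcst_mm` hold matched 24 h accumulations at the verification stations; the array names and the helper function are illustrative, not taken from the paper.

```python
import numpy as np

# Rainfall classes used in Figure 3 (mm per 24 h); np.inf closes the open-ended last bin.
BIN_EDGES = [0.0, 25.0, 50.0, 100.0, np.inf]
BIN_LABELS = ["0-25 mm", "25-50 mm", "50-100 mm", ">100 mm"]

def rainfall_histogram(amounts_mm):
    """Count how many station-days fall into each daily-rainfall class."""
    counts, _ = np.histogram(np.asarray(amounts_mm, dtype=float), bins=BIN_EDGES)
    return dict(zip(BIN_LABELS, counts.tolist()))

# Illustrative matched samples (one value per station and day); the real arrays
# would hold the 8064 verification samples used in this study.
obs_mm = [3.0, 12.5, 41.0, 87.0, 140.0, 8.0]
fcst_mm = [1.0, 30.0, 22.0, 110.0, 95.0, 4.0]

print("observed:", rainfall_histogram(obs_mm))
print("forecast:", rainfall_histogram(fcst_mm))
```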
Figure 4 shows the BIAS score at the different thresholds (>25 mm/24 h and >50 mm/24 h), separated into the KF and BMJ scheme combinations. The overall assessment through the BIAS score is quite similar to the evaluation through the histograms: simulations with the BMJ scheme tend to be lower than the observations at most forecast ranges and thresholds (BIAS < 1), while simulations with the KF scheme tend to be higher than the observations (BIAS > 1). The BIAS tends to decrease significantly as the forecast range increases. When the validation threshold is increased, the BIAS of the BMJ simulations tends to decrease, whereas in combination with the KF scheme, the BIAS increases with both the forecast range and the evaluation threshold.

Figure 4 The BIAS score at 24 h, 48 h, and 72 h for different thresholds (over 25 mm, 50 mm, and 100 mm) for BMJ scheme combinations (a) and for KF scheme combinations (b). The number of samples is 8064 for an individual forecast. (a) (b)

The individual assessment of each BMJ and KF combination shows that, when combined with the Goddard shortwave radiation scheme, the BIAS increases by about 0.1 to 0.2 compared to the Dudhia scheme. These results are similar for all validating thresholds and for the 24 h, 48 h, and 72 h forecast ranges. The difference between simulations with different boundary layer schemes is unclear when evaluating at thresholds below 25 mm/24 h; at higher thresholds, however, it becomes apparent, and the change in BIAS is much larger in combination with the KF scheme. For example, the BIAS of KF-WSM3-God-MYJ at the >100 mm/24 h threshold at the 24 h forecast range is 1.6458 and that of KF-WSM3-God-YSU is 2.0833, while BMJ-WSM3-God-MYJ and BMJ-WSM3-God-YSU had approximately equal BIAS scores (0.8229). Thus, the sensitivity to the boundary layer scheme (here only YSU and MYJ are considered) is much larger in the KF combinations than in the BMJ combinations, especially at the high rainfall thresholds. Detailed numerical values of the BIAS can be found in Tables 4, 5, and 6.

Table 4 Skill scores (TS, ETS, BIAS, POD, and FAR) and hit rates H, false alarm rates F, missed rates M, and total corrected rates T at the 24 h forecast range for thresholds >25 mm/24 h and >50 mm/24 h, for the period 2012–2016. The number of samples is 8064.
>25 mm/24 h >50 mm/24 h TS ETS BIAS POD FAR H F M T TS ETS BIAS POD FAR H F M T BMJ-Lin-Duh-MYJ 0.207 0.1517 0.5854 0.2719 0.5355 301 347 806 6106 0.1049 0.0874 0.4139 0.1342 0.6757 60 125 387 6988 BMJ-Lin-Duh-YSU 0.2087 0.1551 0.5592 0.2692 0.5186 298 321 809 6132 0.1072 0.0899 0.4094 0.1365 0.6667 61 122 386 6991 BMJ-Lin-God-MYJ 0.2283 0.1613 0.7986 0.3342 0.5814 370 514 737 5939 0.1225 0.0996 0.6197 0.1767 0.7148 79 198 368 6915 BMJ-Lin-God-YSU 0.2304 0.1649 0.7706 0.3315 0.5698 367 486 740 5967 0.122 0.1 0.5839 0.1723 0.705 77 184 370 6929 BMJ-WSM3-Duh-MYJ 0.2051 0.1433 0.6929 0.2882 0.5841 319 448 788 6005 0.1066 0.0846 0.5794 0.1521 0.7375 68 191 379 6922 BMJ-WSM3-Duh-YSU 0.2201 0.1595 0.6775 0.3026 0.5533 335 415 772 6038 0.1153 0.0928 0.6018 0.1655 0.7249 74 195 373 6918 BMJ-WSM3-God-MYJ 0.2388 0.1649 0.9539 0.3767 0.6051 417 639 690 5814 0.1364 0.1089 0.8456 0.2215 0.7381 99 279 348 6834 BMJ-WSM3-God-YSU 0.2408 0.1709 0.8663 0.3622 0.5819 401 558 706 5895 0.1391 0.1116 0.8501 0.226 0.7342 101 279 346 6834 BMJ-WSM5-Duh-MYJ 0.2147 0.153 0.692 0.299 0.5679 331 435 776 6018 0.1166 0.0951 0.5638 0.1633 0.7103 73 179 374 6934 BMJ-WSM5-Duh-YSU 0.2156 0.1553 0.6703 0.2963 0.558 328 414 779 6039 0.1319 0.1092 0.613 0.1879 0.6934 84 190 363 6923 BMJ-WSM5-God-MYJ 0.2408 0.1743 0.7967 0.3487 0.5624 386 496 721 5957 0.1273 0.1044 0.6242 0.1834 0.7061 82 197 365 6916 BMJ-WSM5-God-YSU 0.2449 0.1766 0.8365 0.3613 0.568 400 526 707 5927 0.1386 0.112 0.8009 0.2192 0.7263 98 260 349 6853 BMJ-WSM6-Duh-MYJ 0.2079 0.1419 0.7687 0.3044 0.604 337 514 770 5939 0.1166 0.0925 0.6711 0.1745 0.74 78 222 369 6891 BMJ-WSM6-Duh-YSU 0.2126 0.1469 0.7669 0.3098 0.596 343 506 764 5947 0.1386 0.1135 0.7271 0.2103 0.7108 94 231 353 6882 BMJ-WSM6-God-MYJ 0.2552 0.1793 1.0126 0.4092 0.5959 453 668 654 5785 0.1554 0.1269 0.9128 0.2573 0.7181 115 293 332 6820 BMJ-WSM6-God-YSU 0.2477 0.1747 0.9386 0.3848 0.59 426 613 681 5840 0.1644 0.1355 0.9485 0.2752 0.7099 123 301 324 6812 KF-Lin-Duh-MYJ 0.2399 0.162 1.0497 0.3966 0.6222 439 723 668 5730 0.1525 0.1262 0.7919 0.2371 0.7006 106 248 341 6865 KF-Lin-Duh-YSU 0.2656 0.1884 1.0533 0.4309 0.5909 477 689 630 5764 0.1457 0.1168 0.9351 0.2461 0.7368 110 308 337 6805 KF-Lin-God-MYJ 0.2507 0.165 1.2755 0.4562 0.6424 505 907 602 5546 0.1564 0.1233 1.2327 0.302 0.755 135 416 312 6697 KF-Lin-God-YSU 0.2649 0.1795 1.2818 0.4779 0.6272 529 890 578 5563 0.162 0.128 1.311 0.3221 0.7543 144 442 303 6671 KF-WSM3-Duh-MYJ 0.2442 0.1627 1.1491 0.4219 0.6329 467 805 640 5648 0.1581 0.1266 1.1141 0.2886 0.741 129 369 318 6744 KF-WSM3-Duh-YSU 0.2664 0.1843 1.1861 0.4598 0.6123 509 804 598 5649 0.1667 0.135 1.1298 0.3043 0.7307 136 369 311 6744 KF-WSM3-God-MYJ 0.2743 0.1867 1.3668 0.5095 0.6272 564 949 543 5504 0.172 0.1368 1.4385 0.3579 0.7512 160 483 287 6630 KF-WSM3-God-YSU 0.2739 0.1852 1.4029 0.5167 0.6317 572 981 535 5472 0.1778 0.1413 1.5638 0.387 0.7525 173 526 274 6587 KF-WSM5-Duh-MYJ 0.2583 0.1766 1.1653 0.4444 0.6186 492 798 615 5655 0.1648 0.1333 1.1186 0.2998 0.732 134 366 313 6747 KF-WSM5-Duh-YSU 0.2656 0.1828 1.2042 0.4625 0.6159 512 821 595 5632 0.1588 0.1269 1.1387 0.2931 0.7426 131 378 316 6735 KF-WSM5-God-MYJ 0.2693 0.1812 1.3758 0.5041 0.6336 558 965 549 5488 0.1773 0.1421 1.4362 0.3669 0.7445 164 478 283 6635 KF-WSM5-God-YSU 0.2826 0.1939 1.4146 0.5321 0.6239 589 977 518 5476 0.178 0.1415 1.5615 0.387 0.7521 173 525 274 6588 KF-WSM6-Duh-MYJ 0.2598 0.1758 1.234 0.4607 0.6266 510 856 597 5597 0.1663 0.1326 1.2908 0.3266 0.747 146 431 301 6682 KF-WSM6-Duh-YSU 0.28 0.1952 1.2836 
0.4995 0.6108 553 868 554 5585 0.1958 0.1611 1.4049 0.3937 0.7197 176 452 271 6661 KF-WSM6-God-MYJ 0.26 0.1698 1.43 0.5014 0.6494 555 1028 552 5425 0.1926 0.156 1.5906 0.4183 0.737 187 524 260 6589 KF-WSM6-God-YSU 0.2792 0.1888 1.467 0.5384 0.633 596 1028 511 5425 0.1844 0.1463 1.7584 0.4295 0.7557 192 594 255 6519Table 5 Skill scores (TS, ETS, BIAS, POD, and FAR) and hit ratesH, false alarm rates F, missed rates M, and total corrected rates T at the 48 h forecast range for thresholds >25 mm/24 h and >50 mm/24 h, for the period 2012–2016. The number of samples is 8064. >25 mm/24 h >50 mm/24 h TS ETS BIAS POD FAR H F M T TS ETS BIAS POD FAR H F M T BMJ-Lin-Duh-MYJ 0.1791 0.1054 0.4842 0.2254 0.5344 365 419 1254 5522 0.0962 0.0703 0.3463 0.1181 0.6589 88 170 657 6645 BMJ-Lin-Duh-YSU 0.1853 0.1069 0.5287 0.239 0.5479 387 469 1232 5472 0.1105 0.0803 0.4295 0.1423 0.6687 106 214 639 6601 BMJ-Lin-God-MYJ 0.234 0.1345 0.769 0.3354 0.5639 543 702 1076 5239 0.1225 0.0846 0.5987 0.1745 0.7085 130 316 615 6499 BMJ-Lin-God-YSU 0.243 0.1408 0.8073 0.3533 0.5624 572 735 1047 5206 0.1357 0.0971 0.6174 0.1933 0.687 144 316 601 6499 BMJ-WSM3-Duh-MYJ 0.2332 0.1353 0.7505 0.3311 0.5588 536 679 1083 5262 0.152 0.109 0.7396 0.2295 0.6897 171 380 574 6435 BMJ-WSM3-Duh-YSU 0.2579 0.1573 0.7956 0.3681 0.5373 596 692 1023 5249 0.1587 0.1142 0.7839 0.2443 0.6884 182 402 563 6413 BMJ-WSM3-God-MYJ 0.2616 0.1452 1.0167 0.4182 0.5887 677 969 942 4972 0.1546 0.1034 1.0054 0.2685 0.733 200 549 545 6266 BMJ-WSM3-God-YSU 0.2641 0.148 1.0136 0.4206 0.585 681 960 938 4981 0.1539 0.1034 0.9826 0.2644 0.7309 197 535 548 6280 BMJ-WSM5-Duh-MYJ 0.2425 0.1455 0.7437 0.3403 0.5424 551 653 1068 5288 0.1582 0.1171 0.6899 0.2309 0.6654 172 342 573 6473 BMJ-WSM5-Duh-YSU 0.2545 0.154 0.7931 0.3638 0.5413 589 695 1030 5246 0.1698 0.1257 0.7758 0.2577 0.6678 192 386 553 6429 BMJ-WSM5-God-MYJ 0.2687 0.1601 0.9074 0.404 0.5548 654 815 965 5126 0.174 0.1286 0.8201 0.2698 0.671 201 410 544 6405 BMJ-WSM5-God-YSU 0.2677 0.1522 1.0068 0.4237 0.5791 686 944 933 4997 0.1674 0.1166 1.0027 0.2872 0.7135 214 533 531 6282 BMJ-WSM6-Duh-MYJ 0.2282 0.1261 0.7986 0.3342 0.5816 541 752 1078 5189 0.1414 0.097 0.7772 0.2201 0.7168 164 415 581 6400 BMJ-WSM6-Duh-YSU 0.2471 0.1426 0.8394 0.3644 0.5659 590 769 1029 5172 0.169 0.1198 0.9409 0.2805 0.7019 209 492 536 6323 BMJ-WSM6-God-MYJ 0.2484 0.1255 1.1081 0.4194 0.6215 679 1115 940 4826 0.1471 0.0931 1.1141 0.2711 0.7566 202 628 543 6187 BMJ-WSM6-God-YSU 0.283 0.1642 1.0723 0.4571 0.5737 740 996 879 4945 0.1588 0.1053 1.0966 0.2872 0.7381 214 603 531 6212 KF-Lin-Duh-MYJ 0.2144 0.1147 0.7634 0.3113 0.5922 504 732 1115 5209 0.1299 0.0905 0.6349 0.1879 0.704 140 333 605 6482 KF-Lin-Duh-YSU 0.2312 0.136 0.7171 0.3224 0.5504 522 639 1097 5302 0.124 0.0847 0.6309 0.1799 0.7149 134 336 611 6479 KF-Lin-God-MYJ 0.2584 0.1355 1.1174 0.4348 0.6108 704 1105 915 4836 0.1304 0.0781 1.0362 0.2349 0.7733 175 597 570 6218 KF-Lin-God-YSU 0.2806 0.1647 1.0241 0.4435 0.5669 718 940 901 5001 0.1611 0.1097 1.0215 0.2805 0.7254 209 552 536 6263 KF-WSM3-Duh-MYJ 0.266 0.149 1.0284 0.4262 0.5856 690 975 929 4966 0.1735 0.1212 1.0604 0.3047 0.7127 227 563 518 6252 KF-WSM3-Duh-YSU 0.2839 0.1719 0.9691 0.4355 0.5507 705 864 914 5077 0.1948 0.1437 1.0255 0.3302 0.678 246 518 499 6297 KF-WSM3-God-MYJ 0.2845 0.1503 1.3484 0.5201 0.6143 842 1341 777 4600 0.1766 0.1161 1.4416 0.3664 0.7458 273 801 472 6014 KF-WSM3-God-YSU 0.3018 0.1706 1.3125 0.5361 0.5915 868 1257 751 4684 0.1877 0.1271 1.455 0.3879 0.7334 289 795 456 6020 KF-WSM5-Duh-MYJ 
0.2801 0.1622 1.055 0.4497 0.5738 728 980 891 4961 0.1785 0.1262 1.0644 0.3128 0.7062 233 560 512 6255 KF-WSM5-Duh-YSU 0.2948 0.1812 1.0019 0.4558 0.545 738 884 881 5057 0.2037 0.1509 1.102 0.3557 0.6772 265 556 480 6259 KF-WSM5-God-MYJ 0.284 0.1477 1.3904 0.5287 0.6197 856 1395 763 4546 0.1685 0.1064 1.5221 0.3638 0.761 271 863 474 5952 KF-WSM5-God-YSU 0.2998 0.1683 1.3162 0.5343 0.5941 865 1266 754 4675 0.1996 0.1388 1.4846 0.4134 0.7215 308 798 437 6017 KF-WSM6-Duh-MYJ 0.2799 0.1571 1.1353 0.467 0.5887 756 1082 863 4859 0.1973 0.1415 1.2242 0.3664 0.7007 273 639 472 6176 KF-WSM6-Duh-YSU 0.3002 0.1833 1.0574 0.475 0.5508 769 943 850 4998 0.211 0.157 1.157 0.3758 0.6752 280 582 465 6233 KF-WSM6-God-MYJ 0.2917 0.155 1.4095 0.5442 0.6139 881 1401 738 4540 0.1788 0.1154 1.6107 0.396 0.7542 295 905 450 5910 KF-WSM6-God-YSU 0.3113 0.1785 1.357 0.5596 0.5876 906 1291 713 4650 0.2101 0.1475 1.6054 0.4523 0.7182 337 859 408 5956Table 6 Skill scores (TS, ETS, BIAS, POD, and FAR) and hit ratesH, false alarm rates F, missed rates M, and total corrected rates T at the 72 h forecast range for thresholds >25 mm/24 h and >50 mm/24 h, for the period 2012–2016. The number of samples is 8064. >25 mm/24 h >50 mm/24 h TS ETS BIAS POD FAR H F M T TS ETS BIAS POD FAR H F M T BMJ-Lin-Duh-MYJ 0.154 0.0672 0.3964 0.1863 0.53 400 451 1747 4962 0.0797 0.0443 0.3412 0.099 0.7098 101 247 919 6293 BMJ-Lin-Duh-YSU 0.1539 0.0595 0.442 0.1924 0.5648 413 536 1734 4877 0.0766 0.0373 0.3912 0.099 0.7469 101 298 919 6242 BMJ-Lin-God-MYJ 0.1755 0.0611 0.5752 0.2352 0.5911 505 730 1642 4683 0.0719 0.0285 0.4471 0.0971 0.7829 99 357 921 6183 BMJ-Lin-God-YSU 0.1893 0.0645 0.653 0.2632 0.597 565 837 1582 4576 0.0832 0.0342 0.5314 0.1176 0.7786 120 422 900 6118 BMJ-WSM3-Duh-MYJ 0.1802 0.0656 0.5771 0.2408 0.5827 517 722 1630 4691 0.0867 0.0391 0.5108 0.1206 0.7639 123 398 897 6142 BMJ-WSM3-Duh-YSU 0.1988 0.0737 0.6572 0.2748 0.5819 590 821 1557 4592 0.0935 0.0407 0.5941 0.1363 0.7706 139 467 881 6073 BMJ-WSM3-God-MYJ 0.2273 0.0777 0.871 0.3465 0.6021 744 1126 1403 4287 0.0992 0.0404 0.7049 0.1539 0.7816 157 562 863 5978 BMJ-WSM3-God-YSU 0.2239 0.0691 0.9171 0.3507 0.6176 753 1216 1394 4197 0.1022 0.0373 0.8294 0.1696 0.7955 173 673 847 5867 BMJ-WSM5-Duh-MYJ 0.1788 0.0634 0.5817 0.2399 0.5877 515 734 1632 4679 0.0944 0.0482 0.4892 0.1284 0.7375 131 368 889 6172 BMJ-WSM5-Duh-YSU 0.2041 0.0796 0.6544 0.2804 0.5715 602 803 1545 4610 0.0966 0.0427 0.6137 0.1422 0.7684 145 481 875 6059 BMJ-WSM5-God-MYJ 0.2169 0.0787 0.7667 0.3149 0.5893 676 970 1471 4443 0.0908 0.037 0.6127 0.1343 0.7808 137 488 883 6052 BMJ-WSM5-God-YSU 0.2278 0.0779 0.8728 0.3475 0.6019 746 1128 1401 4285 0.0947 0.031 0.802 0.1559 0.8056 159 659 861 5881 BMJ-WSM6-Duh-MYJ 0.1712 0.0601 0.5515 0.2268 0.5887 487 697 1660 4716 0.0834 0.0379 0.4775 0.1137 0.7618 116 371 904 6169 BMJ-WSM6-Duh-YSU 0.1971 0.069 0.68 0.2767 0.5932 594 866 1553 4547 0.0992 0.0432 0.652 0.149 0.7714 152 513 868 6027 BMJ-WSM6-God-MYJ 0.2191 0.069 0.8714 0.3363 0.6141 722 1149 1425 4264 0.1 0.0385 0.7578 0.1598 0.7891 163 610 857 5930 BMJ-WSM6-God-YSU 0.215 0.0632 0.885 0.3335 0.6232 716 1184 1431 4229 0.0824 0.0185 0.8039 0.1373 0.8293 140 680 880 5860 KF-Lin-Duh-MYJ 0.2144 0.0871 0.6782 0.2962 0.5632 636 820 1511 4593 0.1162 0.0632 0.601 0.1667 0.7227 170 443 850 6097 KF-Lin-Duh-YSU 0.2063 0.0825 0.6502 0.2823 0.5659 606 790 1541 4623 0.1069 0.052 0.6343 0.1578 0.7512 161 486 859 6054 KF-Lin-God-MYJ 0.2399 0.0805 0.9693 0.381 0.6069 818 1263 1329 4150 0.1392 0.0702 0.9333 0.2363 0.7468 241 711 
779 5829 KF-Lin-God-YSU 0.2427 0.0831 0.9725 0.3852 0.6039 827 1261 1320 4152 0.1309 0.0605 0.9647 0.2275 0.7642 232 752 788 5788 KF-WSM3-Duh-MYJ 0.2377 0.0889 0.8677 0.3586 0.5867 770 1093 1377 4320 0.1242 0.0613 0.7922 0.198 0.75 202 606 818 5934 KF-WSM3-Duh-YSU 0.2618 0.1129 0.8812 0.3903 0.5571 838 1054 1309 4359 0.1675 0.0983 0.948 0.2794 0.7053 285 682 735 5858 KF-WSM3-God-MYJ 0.2596 0.0844 1.1495 0.4429 0.6147 951 1517 1196 3896 0.1334 0.0538 1.2235 0.2618 0.7861 267 981 753 5559 KF-WSM3-God-YSU 0.2817 0.1073 1.1593 0.4746 0.5906 1019 1470 1128 3943 0.1631 0.081 1.3216 0.3255 0.7537 332 1016 688 5524 KF-WSM5-Duh-MYJ 0.2508 0.1005 0.8882 0.3787 0.5737 813 1094 1334 4319 0.1333 0.0694 0.8167 0.2137 0.7383 218 615 802 5925 KF-WSM5-Duh-YSU 0.2671 0.1193 0.8738 0.395 0.548 848 1028 1299 4385 0.158 0.0902 0.9118 0.2608 0.714 266 664 754 5876 KF-WSM5-God-MYJ 0.2723 0.0967 1.1635 0.463 0.6021 994 1504 1153 3909 0.1354 0.0571 1.1863 0.2608 0.7802 266 944 754 5596 KF-WSM5-God-YSU 0.2652 0.0932 1.1178 0.4439 0.6029 953 1447 1194 3966 0.1507 0.0709 1.2382 0.2931 0.7633 299 964 721 5576 KF-WSM6-Duh-MYJ 0.2545 0.0952 0.9767 0.401 0.5894 861 1236 1286 4177 0.1237 0.0546 0.9324 0.2127 0.7718 217 734 803 5806 KF-WSM6-Duh-YSU 0.2615 0.1084 0.9208 0.3982 0.5675 855 1122 1292 4291 0.156 0.0841 1.0127 0.2716 0.7318 277 756 743 5784 KF-WSM6-God-MYJ 0.2693 0.0883 1.2259 0.4723 0.6147 1014 1618 1133 3795 0.1398 0.0571 1.3265 0.2853 0.7849 291 1062 729 5478 KF-WSM6-God-YSU 0.2853 0.1072 1.2054 0.4895 0.5939 1051 1537 1096 3876 0.1513 0.0661 1.4245 0.3186 0.7763 325 1128 695 5412 ## 3.2. Skill Score Validation The charts for the skill scores at the two thresholds in Figures5 and 6 show that the TS value at the 24 h forecast range is about 0.2 to 0.27 for >25 mm/24 h and 0.1 to 0.2 for >50 mm/24 h. At 48 h, the TS is around ∼0.18 to 0.3 and ∼0.1 to 0.19 corresponding to two thresholds >25 mm/24 h and >50 mm/24 h. At 72 h, the TS is around 0.15 to 0.25 and ∼0.08 to 0.15 corresponding to two thresholds >25 mm/24 h and >50 mm/24 h.Figure 5 Skill scores calculated for northern Vietnam for daily accumulation thresholds over 25 mm for (a) 24 h, (b) 48 h, and (c) 72 h forecast ranges. The dark grey bar is for POD, light grey bar is for FAR, blue dotted line is for TS, and red dotted line is for ETS. In each chart, the left vertical axis (0–0.9) is for POD and FAR, while the right vertical axis (0–0.35) is for TS and ETS values. The number of samples is 8064 for an individual forecast. (a) (b) (c)Figure 6 Skill scores calculated for northern Vietnam for daily accumulation thresholds over 50 mm for (a) 24 h, (b) 48 h, and (c) 72 h forecast ranges. The dark grey bar is for POD, light grey bar is for FAR, blue dotted line is for TS, and red dotted line is for ETS. In each chart, the left vertical axis (0–0.9) is for POD and FAR, while the right vertical axis (0–0.35) is for TS and ETS values. The number of samples is 8064 for an individual forecast. (a) (b) (c)The probability of detection decreases and the false alarm rate clearly increases with forecast ranges and the validating thresholds. 
In addition, when the threshold increases, the difference between the TS and the ETS decreases, meaning that the number of random hits (Hits_random) becomes very small, which also reflects the very small hit rate (the Hits_random count decreases by about 90% when the threshold changes from >25 mm/24 h to >50 mm/24 h).

Specific comparisons between the BMJ and KF combinations show that the KF scheme configurations are more skillful for heavy rain forecasts in the northern region of Vietnam than the BMJ scheme configurations. The average TS with the KF scheme can be about 15–25% larger than with the BMJ scheme. If the skill of a regional model changed little when the physical parameterization schemes were switched, the lateral boundary conditions (from the global forecasts) would dominate the quality of the dynamical downscaling forecasts beyond 24 h of integration. Here, however, the skill difference between the two cumulus schemes at the longer forecast ranges (such as 72 h) shows how much the convection simulation capability contributes to the forecast quality of the model. A detailed evaluation of the KF and BMJ combinations with the shortwave radiation schemes does not show any difference in the TS or ETS comparable to that obtained by changing the boundary layer schemes.

The combinations with the YSU boundary layer scheme have better skill than those with the MYJ scheme. In addition, when the complexity of the cloud microphysics scheme is increased, the more complex the microphysical simulation, the better the TS and ETS (at the 24, 48, and 72 h forecast ranges and for both validating thresholds; see Figure 7 for a comparison of how the TSs and ETSs change with the cloud microphysics scheme).

Figure 7 Brief comparison of TSs and ETSs for illustration of sensitivities with microphysics schemes.

For the skill comparisons of the different event types (I, II, III, and IV), Figure 8 shows the TSs at thresholds over 25 mm/24 h and over 50 mm/24 h for the 24 h and 48 h forecast ranges. For type I, which is associated with the activity of the ITCZ and a low-pressure trough over the Bac Bo area, the KF scheme proved its forecast skill at almost all forecast ranges and thresholds considered in this research. However, for rain caused by tropical cyclones (type II), the difference between the KF and BMJ schemes within 24 h was smaller than for type I. At 48 h and 72 h, the BMJ scheme showed more skillful forecasts, with the skill score for the 25 mm threshold ranging from 0.25 to 0.35 for the BMJ scheme compared with 0.2 to 0.3 for the KF scheme, and the skill score for the 50 mm threshold ranging from 0.2 to 0.3 for the BMJ scheme compared with 0.2 to 0.25 for the KF scheme. In type II, both the KF and BMJ schemes combined with the simple Lin cloud microphysics scheme showed the lowest skill scores. For type III, which is associated with cold surge activity squeezing the low-pressure trough from the north towards Bac Bo, the KF scheme was only skillful at the 25 mm threshold at 24 h. In type IV, which contains heavy rain events caused by a complex combination of situations resulting in a trough over Bac Bo, the KF scheme still produced skillful forecasts, in contrast to the very low skill (essentially none at either the 25 mm/24 h or 50 mm/24 h threshold) of the BMJ scheme experiments.
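All of the categorical scores reported in Tables 4–6 can be derived from the 2 × 2 contingency counts listed alongside them: hits H, false alarms F, misses M, and correct negatives T. The short Python sketch below uses the standard verification formulas (TS = H/(H+M+F), POD = H/(H+M), FAR = F/(H+F), frequency BIAS = (H+F)/(H+M), and ETS with the random-hit correction Hits_random = (H+F)(H+M)/N, where N = H+F+M+T); it is an illustration written for this rewrite, not the study's own code, although it reproduces, for example, the BMJ-Lin-Duh-MYJ row of Table 4 at 24 h and >25 mm/24 h.

```python
def categorical_scores(H, F, M, T):
    """Standard categorical skill scores from a 2x2 contingency table.

    H: hits, F: false alarms, M: misses, T: correct negatives.
    """
    N = H + F + M + T
    hits_random = (H + F) * (H + M) / N              # hits expected by chance
    return {
        "TS":   H / (H + M + F),                     # threat score (CSI)
        "ETS":  (H - hits_random) / (H + M + F - hits_random),
        "BIAS": (H + F) / (H + M),                   # frequency bias
        "POD":  H / (H + M),                         # probability of detection
        "FAR":  F / (H + F),                         # false alarm ratio
    }

# BMJ-Lin-Duh-MYJ at 24 h, >25 mm/24 h (Table 4): H=301, F=347, M=806, T=6106
# gives TS ~= 0.207, ETS ~= 0.152, BIAS ~= 0.585, POD ~= 0.272, FAR ~= 0.536.
print(categorical_scores(301, 347, 806, 6106))
```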
More details of TSs for different types are listed in Tables 7 and 8.Figure 8 TSs for different types of heavy rainfall events in northern Vietnam for daily accumulation thresholds over (a) 25 mm and 50 mm (b) at 24 h forecast ranges and over 25 mm (c) and 50 mm (d) at 48 h forecast ranges. The right vertical axis is from 0 to 0.4. (a) (b) (c) (d)Table 7 TSs for different types of main heavy rainfall, at the 24 h forecast range for thresholds >25 mm/24 h and >50 mm/24 h, for the period 2012–2016. >25 mm/24 h >50 mm/24 h Type I Type II Type III Type IV Type I Type II Type III Type IV BMJ-Lin-Duh-MYJ 0.2546 0.1309 0.15 0.0682 0.124 0.0791 0.0769 0 BMJ-Lin-Duh-YSU 0.2625 0.1019 0.2346 0.0706 0.1244 0.0692 0.16 0 BMJ-Lin-God-MYJ 0.2644 0.1638 0.2235 0.1011 0.139 0.1064 0.0882 0 BMJ-Lin-God-YSU 0.2761 0.1519 0.1939 0.093 0.1369 0.087 0.1562 0.0333 BMJ-WSM3-Duh-MYJ 0.2503 0.1237 0.1818 0.09 0.1279 0.0621 0.129 0 BMJ-WSM3-Duh-YSU 0.2745 0.1179 0.2439 0.0652 0.1379 0.0699 0.1471 0 BMJ-WSM3-God-MYJ 0.2834 0.1651 0.21 0.09 0.1556 0.1104 0.1136 0 BMJ-WSM3-God-YSU 0.2943 0.1505 0.202 0.0745 0.167 0.0798 0.1304 0.0312 BMJ-WSM5-Duh-MYJ 0.2618 0.1349 0.1977 0.0707 0.1429 0.0563 0.1515 0 BMJ-WSM5-Duh-YSU 0.263 0.1351 0.2073 0.0652 0.1603 0.0789 0.1389 0 BMJ-WSM5-God-MYJ 0.3032 0.1376 0.1978 0.0714 0.1565 0.0496 0.1515 0.0345 BMJ-WSM5-God-YSU 0.3028 0.1525 0.1939 0.0737 0.1677 0.0696 0.15 0.0312 BMJ-WSM6-Duh-MYJ 0.2571 0.133 0.1304 0.0926 0.1403 0.0872 0.0732 0 BMJ-WSM6-Duh-YSU 0.2643 0.1281 0.1828 0.06 0.1714 0.0719 0.1212 0 BMJ-WSM6-God-MYJ 0.2991 0.1781 0.2574 0.0784 0.1706 0.1391 0.15 0.0256 BMJ-WSM6-God-YSU 0.3043 0.1595 0.1944 0.0652 0.1916 0.1104 0.1429 0 KF-Lin-Duh-MYJ 0.2881 0.1673 0.2136 0.1449 0.1844 0.0683 0.1026 0.1818 KF-Lin-Duh-YSU 0.3125 0.1951 0.2474 0.1667 0.1778 0.0698 0.06 0.1892 KF-Lin-God-MYJ 0.2917 0.1914 0.2255 0.1558 0.1725 0.1237 0.0889 0.1471 KF-Lin-God-YSU 0.3129 0.183 0.2653 0.1769 0.1839 0.1162 0.093 0.1316 KF-WSM3-Duh-MYJ 0.2872 0.1776 0.2212 0.1655 0.1893 0.0753 0.1224 0.1667 KF-WSM3-Duh-YSU 0.309 0.1881 0.2551 0.219 0.1858 0.1176 0.1111 0.2059 KF-WSM3-God-MYJ 0.3264 0.1924 0.2526 0.1656 0.2057 0.0909 0.0889 0.1579 KF-WSM3-God-YSU 0.3188 0.1841 0.3069 0.2014 0.2027 0.1185 0.098 0.1795 KF-WSM5-Duh-MYJ 0.3062 0.1836 0.25 0.1655 0.2048 0.0688 0.0851 0.1795 KF-WSM5-Duh-YSU 0.3116 0.1892 0.2653 0.1838 0.1823 0.1111 0.08 0.1818 KF-WSM5-God-MYJ 0.3141 0.1996 0.25 0.1772 0.2047 0.1122 0.1064 0.1622 KF-WSM5-God-YSU 0.3317 0.2025 0.2642 0.1875 0.2063 0.1085 0.1228 0.1579 KF-WSM6-Duh-MYJ 0.3063 0.1823 0.2233 0.1812 0.1963 0.0952 0.0833 0.1538 KF-WSM6-Duh-YSU 0.3339 0.1927 0.2476 0.1898 0.2244 0.1179 0.1071 0.2812 KF-WSM6-God-MYJ 0.3138 0.1781 0.2268 0.1465 0.2207 0.1268 0.1429 0.125 KF-WSM6-God-YSU 0.3323 0.1793 0.2843 0.2 0.2078 0.1255 0.1538 0.1429Table 8 TSs for different types of main heavy rainfall, at the 48 h forecast range for thresholds >25 mm/24 h and >50 mm/24 h, for the period 2012–2016. 
>25 mm/24 h >50 mm/24 h Type I Type II Type III Type IV Type I Type II Type III Type IV BMJ-Lin-Duh-MYJ 0.1629 0.2426 0.0854 0.0922 0.0768 0.1533 0.0526 0.0333 BMJ-Lin-Duh-YSU 0.1755 0.2271 0.0909 0.1438 0.0971 0.1559 0.0833 0.0164 BMJ-Lin-God-MYJ 0.2276 0.3104 0.075 0.0798 0.1139 0.1631 0.0333 0.0303 BMJ-Lin-God-YSU 0.2482 0.2723 0.1429 0.12 0.1405 0.1543 0.0385 0.0167 BMJ-WSM3-Duh-MYJ 0.2235 0.3055 0.0833 0.1311 0.1147 0.2848 0.0571 0 BMJ-WSM3-Duh-YSU 0.2365 0.3419 0.1538 0.1617 0.1136 0.3072 0.0556 0.0116 BMJ-WSM3-God-MYJ 0.2583 0.3448 0.0979 0.1029 0.1386 0.2466 0.0182 0 BMJ-WSM3-God-YSU 0.2587 0.3338 0.1176 0.1272 0.1541 0.2125 0.02 0 BMJ-WSM5-Duh-MYJ 0.2301 0.3206 0.0928 0.1228 0.1287 0.2706 0.0741 0.0115 BMJ-WSM5-Duh-YSU 0.243 0.3157 0.1373 0.172 0.1397 0.2813 0.0312 0.0241 BMJ-WSM5-God-MYJ 0.2594 0.344 0.1327 0.1337 0.1479 0.2928 0.0286 0 BMJ-WSM5-God-YSU 0.2588 0.3329 0.1797 0.1364 0.1383 0.2918 0 0.0115 BMJ-WSM6-Duh-MYJ 0.217 0.304 0.1418 0.1027 0.1214 0.2212 0.0784 0.0235 BMJ-WSM6-Duh-YSU 0.2331 0.3216 0.1368 0.1587 0.1348 0.2968 0.0714 0.0103 BMJ-WSM6-God-MYJ 0.2522 0.3055 0.1474 0.0955 0.132 0.2314 0.0541 0.0215 BMJ-WSM6-God-YSU 0.2739 0.3707 0.2 0.1026 0.1333 0.2658 0.0484 0.0319 KF-Lin-Duh-MYJ 0.2259 0.2093 0.0811 0.225 0.1331 0.1208 0.027 0.1899 KF-Lin-Duh-YSU 0.2714 0.172 0.0957 0.2199 0.1347 0.1017 0.0667 0.1395 KF-Lin-God-MYJ 0.268 0.2537 0.1043 0.2771 0.1267 0.1545 0.0385 0.1237 KF-Lin-God-YSU 0.3125 0.2384 0.1058 0.2788 0.1704 0.1608 0.0488 0.1327 KF-WSM3-Duh-MYJ 0.2689 0.2805 0.1298 0.2783 0.156 0.241 0.0577 0.1327 KF-WSM3-Duh-YSU 0.3066 0.2739 0.069 0.285 0.1971 0.2068 0.0444 0.2088 KF-WSM3-God-MYJ 0.2945 0.2976 0.1286 0.2595 0.1651 0.2451 0.0526 0.1111 KF-WSM3-God-YSU 0.3202 0.2956 0.1407 0.2822 0.1872 0.2136 0.0923 0.1569 KF-WSM5-Duh-MYJ 0.2924 0.2921 0.1176 0.2546 0.1586 0.238 0.0727 0.1765 KF-WSM5-Duh-YSU 0.3217 0.2847 0.0976 0.26 0.1995 0.2403 0.0755 0.1753 KF-WSM5-God-MYJ 0.2946 0.2953 0.1192 0.2672 0.1499 0.25 0.038 0.1293 KF-WSM5-God-YSU 0.3265 0.2768 0.1361 0.2881 0.201 0.2344 0.0959 0.1404 KF-WSM6-Duh-MYJ 0.2922 0.2874 0.1389 0.2597 0.1874 0.25 0.0685 0.1667 KF-WSM6-Duh-YSU 0.3291 0.2706 0.1311 0.2995 0.208 0.2378 0.0893 0.2083 KF-WSM6-God-MYJ 0.3002 0.3115 0.1465 0.2556 0.1675 0.2537 0.0435 0.1333 KF-WSM6-God-YSU 0.3426 0.2798 0.1595 0.2888 0.2091 0.2565 0.0674 0.177 ## 4. Conclusions For the purpose of investigating the effects of physical schemes in the WRF-ARW model on the operational heavy rainfall forecast for the Bac Bo area, 32 different model configurations have been established by switching two typical cumulus parameterization schemes (BMJ and KF), the cloud microphysics schemes from simple (Lin) to complex (WSM with 3/5/6-layer closure assumptions), and boundary layer (YSU and MYJ) and shortwave radiation (Dudhia and Goddard) schemes. The 72 experiments of widespread heavy rainfall occurring in the Bac Bo area used boundaries from the GFS model and had the highest horizontal resolution of 5 km × 5 km.The model verification with local observation data illustrated the limited capabilities in heavy rainfall forecast for the northern part of Vietnam. On average, for the threshold over 25 mm/24 h, the TSs are from 0.2 to 0.25, 0.2 to 0.3, and 0.2 to 0.25 for 24 h, 48 h, and 72 h forecast ranges, respectively. For the threshold over 50 mm/24 h, TSs are 0.1–0.2 for 24 h and 48 h forecast ranges and about 0.1–0.15 for the 72 h forecast range. 
Above the 100 mm/24 h threshold, a very low skill value (below 0.1) is found for most forecast ranges.

Changing the cloud microphysics from simple to more complex closure assumptions shows that the more complex schemes give clearly positive results for both the BMJ and KF configurations. The model configurations with the KF scheme showed higher skill than the BMJ scheme configurations, and these higher skill scores (TS and POD) come mainly from a higher hit rate (H) and a lower missed rate (M), but also from a higher false alarm rate (F). A preliminary assessment of the KF scheme configurations showed the greatest sensitivity to the boundary layer schemes, larger than to the microphysics or shortwave radiation schemes, with the initial indication that the boundary layer interaction during the application of the KF scheme is an important factor for finding appropriate parameters for heavy rainfall forecasts over the Bac Bo area in the WRF-ARW model.

In terms of sample size, the first two event types are directly comparable because their sample sizes are similar. The other two types, however, have limited sample sizes and need to be analyzed further in subsequent research. A detailed assessment related to the origin of the mechanisms causing heavy rain shows that the KF scheme produced more skillful forecasts than the BMJ scheme during trough- or ITCZ-related heavy rain events, whereas it was less skillful in the events caused by tropical cyclones. For the events caused by cold surges and by a combination of different patterns, the skill of the BMJ scheme was quite low. Further verification of the last two types needs to be carried out in subsequent research because of the limited sample sizes studied.

--- *Source: 1010858-2019-08-18.xml*
# Impacts of Different Physical Parameterization Configurations on Widespread Heavy Rain Forecast over the Northern Area of Vietnam in WRF-ARW Model

**Authors:** Tien Du Duc; Cuong Hoang Duc; Lars Robert Hole; Lam Hoang; Huyen Luong Thi Thanh; Hung Mai Khanh
**Journal:** Advances in Meteorology (2019)
**Category:** Earth and Environmental Sciences
**Publisher:** Hindawi
**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)
**DOI:** 10.1155/2019/1010858
---

## Abstract

This study investigates the impacts of different physical parameterization schemes in the Weather Research and Forecasting model with the ARW dynamical core (WRF-ARW model) on forecasts of heavy rainfall over the northern part of Vietnam (Bac Bo area). Various physical model configurations, generated from different typical cumulus, shortwave radiation, and boundary layer schemes and from simple to complex cloud microphysics schemes, are examined and verified for the cases of extreme heavy rainfall during 2012–2016. It is found that the most skillful forecasts come from the Kain–Fritsch (KF) scheme. However, relating to the different causes of the heavy rainfall events, the forecast cycles using the Betts–Miller–Janjic (BMJ) scheme show better skill for tropical cyclones or slowly moving surface low-pressure situations compared to the KF scheme experiments. Most of the sensitivity in the KF scheme experiments is related to the boundary layer schemes. Configurations using either the KF or the BMJ scheme show that more complex cloud microphysics schemes can also improve the heavy rain forecast with the WRF-ARW model for the Bac Bo area of Vietnam.

---

## Body

## 1. Introduction

Spanning about 15 degrees of latitude (from 8 to 23 degrees north), Vietnam's weather is affected by a variety of extratropical and tropical systems such as cold surges and cold fronts, subtropical troughs and the monsoon, tropical disturbances, and typhoons [1–3, 4]. Approximately 28–30 heavy rain events per year over the whole of Vietnam have been identified from local surface stations. According to Nguyen et al. [4], the rainy season in Vietnam usually lasts from May to October, with the peak rainy months starting in the north and then moving to the south with time because of the movement of the subtropical ridge and the Intertropical Convergence Zone (ITCZ). In the northern part of Vietnam, the peak rainy months occur during the June–September period, particularly in July and August, which is partly related to the activity of the ITCZ [5].

Apart from single weather patterns causing heavy rain for Vietnam, it is common to observe a mix of different patterns simultaneously. A good example is the historical rain and flooding over Central Vietnam in Hue city in 1999 due to the combination of cold surges and tropical depression-type disturbances [6]. In 2008, extreme heavy rain was witnessed in Hanoi as a result of the dramatic intensification of an easterly disturbance through midlatitude-tropical interactions [7]. Likewise, a surface low pressure that coincided with a subtropical upper-level trough was the main factor causing historical rainfall over northeastern Vietnam in Quang Ninh Province in 2015 [8].

In the northern part of Vietnam (hereinafter referred to as the Bac Bo area), the focus area of this study, tropical cyclones and surface low-pressure patterns are among the key factors leading to heavy rain. The ITCZ plays a vital role in facilitating the development of low-level vortexes, which are likely to develop into tropical disturbances and typhoons in the summer, particularly in July and August. As a result, rain may fall over the Bac Bo area in excessive amounts, and the rain duration depends directly upon the lifetime of the ITCZ and the disturbances themselves.
The other causing reasons are related to the effects of cold surge from the north (Siberian area) or the combinations of different patterns (a cold surge with a tropical cyclone, a cold surge with a trough, a surface low-pressure system with a trough, etc.). Table1 provides the observed precipitation from SYNOP stations in the Bac Bo area with the annual rainfall mostly ranging from 1500 to 2000 mm and a high number of daily heavy rain occurrences (over 50 mm/24 h and 100 mm/24 h).Table 1 Station information over the northern part of Vietnam, annual rainfall (over the period 1998–2018) in mm, and the number of days with daily accumulated rainfall over 50 mm/24 h (#50 mm) and over 100 mm/24 h (#100 m) in average. The number of samples is ∼7300 each station. Station name Long. Lat. Annual rain #50 mm #100 mm Muong Te 102.83 22.37 2423 11 2 Sin Ho 103.23 22.37 2787 13 2 Tam Duong 103.48 22.42 2355 9 2 Muong La 104.03 21.52 1412 5 1 Than Uyen 103.88 21.95 1920 8 1 Quynh Nhai 103.57 21.85 1678 7 1 Mu Cang Chai 104.05 21.87 1741 5 1 Tuan Giao 103.42 21.58 1586 5 1 Pha Din 103.5 21.57 1840 6 1 Van Chan 104.52 21.58 1481 6 1 Song Ma 103.75 21.5 1118 2 0 Co Noi 104.15 21.13 1308 4 0 Yen Chau 104.3 21.05 1229 3 0 Bac Yen 104.42 21.23 1519 4 0 Phu Yen 104.63 21.27 1496 5 1 Minh Dai 105.05 21.17 1695 6 1 Moc Chau 104.68 20.83 1581 5 1 Mai Chau 105.05 20.65 1698 7 1 Pho Rang 104.47 22.23 1648 7 1 Bac Ha 104.28 22.53 1611 4 0 Hoang Su Phi 104.68 22.75 1739 5 1 Bac Me 105.37 22.73 1753 6 0 Bao Lac 105.67 22.95 1179 3 0 Bac Quang 104.87 22.5 4295 24 9 Luc Yen 104.78 22.1 1917 8 1 Ham Yen 105.03 22.07 1858 8 1 Chiem Hoa 105.27 22.15 1631 6 1 Cho Ra 105.73 22.45 1417 5 0 Nguyen Binh 105.9 22.65 1686 6 1 Ngan Son 105.98 22.43 1853 8 1 Trung Khanh 106.57 22.83 1821 7 2 Dinh Hoa 105.63 21.92 1749 8 2 Bac Son 106.32 21.9 1693 7 2 Huu Lung 106.35 21.5 1572 7 1 Dinh Lap 107.1 21.53 1865 7 2 Quang Ha 107.75 21.45 2883 16 5 Phu Ho 105.23 21.45 1480 6 1 Tam Dao 105.65 21.47 2585 13 4 Hiep Hoa 105.97 21.35 1601 6 1 Bac Ninh 106.08 21.18 1608 7 2 Luc Ngan 106.55 21.38 1406 6 1 Son Dong 106.85 21.33 1686 8 2 Ba Vi 105.42 21.15 1942 9 2 Ha Dong 105.75 20.97 1635 7 1 Chi Linh 106.38 21.08 1536 7 1 Uong Bi 106.75 21.03 1824 9 2 Kim Boi 105.53 20.33 2115 9 2 Chi Ne 105.78 20.48 1841 8 2 Lac Son 105.45 20.45 2040 9 2 Cuc Phuong 105.72 20.25 1918 9 2 Yen Dinh 105.67 19.98 1509 7 1 Sam Son 105.9 19.75 1759 9 3 Do Luong 105.3 18.89 1871 9 2 Lai Chau 103.15 22.07 2178 10 2 Sa Pa 103.82 22.35 2581 10 2 Lao Cai 103.97 22.5 1704 7 1 Ha Giang 104.97 22.82 2372 12 2 Son La 103.9 21.33 1456 4 1 That Khe 106.47 22.25 1482 6 1 Cao Bang 106.25 22.67 1439 6 1 Bac Giang 106.22 21.3 1518 7 1 Hon Ngu 105.77 18.8 2007 11 4 Bac Can 105.83 22.15 1421 5 1 Dien Bien Phu 103 21.37 1536 5 1 Tuyen Quang 105.22 21.82 1654 8 2 Viet Tri 105.42 21.3 1601 8 2 Vinh Yen 105.6 21.32 1489 6 1 Yen Bai 104.87 21.7 1768 8 1 Son Tay 105.5 21.13 1612 7 1 Hoa Binh 105.33 20.82 1828 8 2 Huong Son 105.43 18.52 2135 10 4 Ha Noi 105.8 21.03 1647 8 2 Phu Ly 105.92 20.55 1731 8 2 Hung Yen 106.05 20.65 1381 6 1 Nam Dinh 106.15 20.39 1564 7 1 Ninh Binh 105.97 20.23 1662 8 2 Phu Lien 106.63 20.8 1572 7 1 Hai Duong 106.3 20.93 1561 7 1 Hon Dau 106.8 20.67 1501 8 1 Van Ly 106.3 20.12 1620 9 2 Lang Son 106.77 21.83 2874 12 2 Thai Nguyen 105.83 21.6 1266 5 1 Nho Quan 105.73 20.32 1728 8 2 Bai Chay 107.07 20.97 1700 8 2 Co To 107.77 20.98 1933 10 3 Thai Binh 106.35 20.45 1930 10 3 Cua Ong 107.35 21.02 1591 8 1 Tien Yen 107.4 21.33 2189 12 3 Mong Cai 107.97 21.52 2168 10 3 Bach 
Long Vi 107.72 20.13 2670 15 5 Huong Khe 105.72 18.18 1216 6 1 Thanh Hoa 105.78 19.75 2516 12 5 Hoi Xuan 105.12 20.37 1701 8 3 Tuong Duong 104.43 19.28 1720 6 1 Vinh 105.7 18.67 1305 5 1 Ha Tinh 105.9 18.35 1896 10 4 Ky Anh 106.28 18.07 2507 13 5 Bai Thuong 105.38 19.9 1830 7 2 Nhu Xuan 105.57 19.63 1758 8 3 Tinh Gia 105.78 19.45 1777 9 3 Quy Chau 105.12 19.57 1661 7 1 Quy Hop 105.15 19.32 1579 7 2 Tay Hieu 105.4 19.32 1517 7 2 Quynh Luu 105.63 19.17 1559 8 2 Con Cuong 104.88 19.05 1647 7 2In Vietnam National Center for Hydro-Meteorological Forecasting (NCHMF), several global model (NWP) products are mostly applied in operational forecast to predict the occurrence of heavy rain. These include the models from National Centers for Environmental Prediction (NCEP), European Centre for Medium-Range Weather Forecasts (ECMWF), Japan Meteorological Agency (JMA), and Germany’s National Meteorological Service (DWD). In addition, some regional NWP products including the High Resolution Regional Model (HRM) (the HRM’s information can be found at DWD’s internet linkhttps://www.dwd.de/SharedDocs/downloads/DE/modelldokumentationen/nwv/hrm/HRM_users_guide.pdf) [9], the Consortium for Small-Scale Modeling (COSMO) [10] models from DWD, and the Weather Research and Forecasting (WRF-ARW) model [11] from the National Center for Atmospheric Research (NCAR) are also a useful reference source in the operational heavy rainfall forecast of Vietnam [12]. Despite the predictability of these models in some certain cases, they may fail to predict extreme events because of several reasons. Lorenz [13] pointed out the three main factors causing uncertainties in NWP are the initial conditions, the imperfection of the models, and the chaos of the atmosphere. While the initial condition problem for NWP can be reduced by data assimilation methods, the imperfection of models, which relates to many subgrid processes, can be alleviated by using proper physical parameterizations. For regional weather forecasting centers with limited capabilities in data assimilation and resource computation to provide the cloud resolved resolution forecast, choosing correct physical parameterization schemes still plays the most important role in downscaling the processes in regional NWP models [14].To illustrate the dependence of heavy rainfall forecast on physical parameterizations, the typical heavy rainfall event relating to the activities of ITCZ over the South China Sea—the East Sea of Vietnam—from 27 to 30 August 2014 in the Bac Bo area was simulated by the WRF-ARW model (see mean sea level pressure analysis of the GFS model in Figure1(a)). The 3-day accumulated rainfall was mostly from 100 mm to 150 mm, and some stations recorded more than 200 mm such as Kim Boi which recorded 229 mm (Hoa Binh Province; marked with a square in Figure 1(b)), Dinh Lap 245 mm (Lang Son Province, the northeast area; marked with a star in Figure 1(b)), and Tam Dao 341 mm (Vinh Phuc Province, center of the domain; marked with a circle in Figure 1(b)).Figure 1 Example of impact of different physical parameterization schemes on heavy rain forecast with the WRF-ARW model (5 km horizontal resolution) over the Bac Bo area (the northern part of Vietnam), issued at 00UTC 27/08/2014. (a) Analysis of mean sea level pressure from the GFS at 00UTC 27/08/2014. (b) 72 h accumulated precipitation from synoptic observation from 00UTC 27/08/2014 to 00UTC 30/08/2014. 
72 h accumulated precipitation forecasts from the WRF-ARW model with the configuration of KF cumulus parameterization and different shortwave radiation and cloud microphysics schemes: (c) KF-Lin-Duh-MYJ; (d) KF-WSM3-Duh-MYJ; (e) KF-WSM5-God-MYJ; (f) KF-WSM5-God-YSU. 72 h accumulated precipitation forecasts from the WRF-ARW model with the configuration of BMJ cumulus parameterization: (g) BMJ-Lin-Duh-MYJ; (h) BMJ-WSM3-Duh-MYJ; (i) BMJ-WSM5-God-MYJ; (j) BMJ-WSM5-God-YSU. More explanation of the model configurations of the (c) to (j) experiments can be found in Table 2. (a) (b) (c) (d) (e) (f) (g) (h) (i) (j)

Figure 1 illustrates 72 h accumulated rainfall forecasts from the WRF-ARW model at 5 km horizontal resolution, issued at 00UTC 27/08/2014 with different physical parameterization configurations obtained by combining the Betts–Miller–Janjic (BMJ) or Kain–Fritsch (KF) cumulus schemes, the Lin or WRF single-moment three- or five-class (WSM3/WSM5) cloud microphysics schemes, the Dudhia or Goddard shortwave radiation schemes, and the Yonsei University (YSU) or Mellor–Yamada–Janjic (MYJ) boundary layer schemes.

With the KF cumulus parameterization scheme (Figures 1(c)–1(f)), the results show that the extreme rainfall over the northeast area (Lang Son Province) can be predicted, with rainfall amounts over 150 mm, but rainfall was overestimated for the mountainous regions over the northwest area (the Hoang Lien Son mountain ranges). The extreme rainfall over the center of the domain (Vinh Phuc Province) was not simulated well using the KF scheme for this situation.

For the experiments using the BMJ scheme (Figures 1(g)–1(j)), the extreme rainfall over Lang Son Province is underestimated, but the cases using Lin or WSM3 for microphysics, Dudhia for the shortwave radiation scheme, and the MYJ boundary layer (Figures 1(g) and 1(h)) can reduce the overestimation over the Hoang Lien Son mountain ranges seen in the KF scheme experiments. The extreme heavy rainfall over the center of the domain (in Vinh Phuc Province) can be forecast quite well when using WSM5 microphysics with the Goddard shortwave radiation scheme and the MYJ boundary layer (Figure 1(i)). The same configuration as in Figure 1(i) but with the other boundary layer (YSU) scheme produced totally different results (Figure 1(j)) for the southern area of the domain (overestimation).

Many studies have been carried out to validate the effects of physical parameterization schemes in the WRF-ARW model. Zeyaeyan et al. [15] evaluated the effect of various physics schemes in the WRF-ARW model on the simulation of summer rainfall over the northwest of Iran (NWI). The results show that the cumulus schemes are the most sensitive and the microphysics schemes are the least sensitive. The comparison between 15 km and 5 km resolution simulations does not show obvious advantages of downscaling. These investigations showed the best results, for both the 5 and 15 km resolutions, with model configurations using the newer Tiedtke cumulus scheme, the MYJ scheme, and the WSM3/Kessler microphysics schemes. Tan [16] performed sensitivity tests of the microphysics parameterizations of the WRF-ARW model in the context of quantitative extreme precipitation forecasting for hydrological inputs. About 19 bulk microphysics parameterization schemes were evaluated for a storm situation in California in 1997.
The most important finding was that the extreme, short-interval precipitation simulated by the WRF-ARW model, which is very important as hydrological forecast input, can be improved by the choice of microphysics scheme. Nasrollahi et al. [17] showed that different features of a hurricane (track, intensity, and precipitation) in the WRF-ARW model can be improved by suitable selections of microphysics and cumulus schemes. Their results showed that the best simulated precipitation was achieved by using the BMJ cumulus parameterization combined with the WSM5 microphysics scheme, whereas the hurricane's track was best estimated by using the Lin or Kessler microphysics option with the BMJ cumulus parameterization. Another validation of physical parameterizations for tropical cyclones, by Pattanayak et al. [18] using WRF with the NMM dynamical core, showed the important role of cumulus parameterization in track forecasts and thereby in the rainfall induced by landfalling tropical cyclones. In another aspect related to the use of cumulus parameterization schemes in high-resolution simulations, Gilliland et al. [19] compared simulations of summer-time convective activity using different cumulus parameterization schemes. Compared with model simulations below 5 km horizontal resolution, which resolve convection explicitly, the study showed that, depending on the strength of the synoptic-scale forcing, the use of a cumulus parameterization scheme (the Kain–Fritsch scheme in that study) can still be warranted for representing the effects of subgrid-scale convective processes.

Thus, the application of regional models to particular regions such as Vietnam is influenced significantly by local factors (topography and microclimate regimes) as well as by the effects of the physical scheme combinations. Among the regional models applied in Vietnam, the WRF-ARW model is the most useful tool because it provides many physical/dynamical configuration options to various scientific communities for both research and operations. For this reason, this study focuses on the impact of physical parameterization configurations of the WRF-ARW model, combining cumulus, cloud microphysics, and shortwave radiation parameterization schemes, on heavy rainfall forecasts. Two typical cumulus parameterizations (adjustment and mass-flux approaches) are investigated in combination with simple to complex cloud microphysics schemes. The dependence of the forecasts of the typical synoptic situations/weather patterns causing heavy rainfall over the Bac Bo area on the parameterization schemes is also verified. For real-time adaptation and forecast verification purposes, the lateral boundary conditions are taken from the Global Forecast System (GFS) of NCEP.

The remainder of this paper is organized as follows: Section 2 presents the experimental design and validation methods; Section 3 discusses the impacts of different physical parameterization configurations on heavy rainfall over the Bac Bo area; and conclusions are given in Section 4.

## 2. Experiments

### 2.1. Model Description

This study used the recently released version of the Weather Research and Forecasting model with the ARW dynamical core (WRF-ARW model; version 3.9.1.1) with multinested grids and two-way interactive options.
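The scheme combinations examined below (and listed in Table 2) form a small factorial set that is convenient to enumerate programmatically. The following is a minimal Python sketch of that enumeration; the scheme lists mirror Table 2, while the WRF namelist option codes given in the comments are my assumptions and should be checked against the WRF-ARW documentation rather than taken from this paper.

```python
from itertools import product

# Scheme sets examined in this study (Section 2.1 and Table 2). The numeric codes in the
# comments are the usual WRF-ARW namelist options for these schemes (assumed, not quoted
# from the paper): cu_physics, mp_physics, ra_sw_physics, bl_pbl_physics.
cumulus      = ["KF", "BMJ"]                    # cu_physics: KF ~ 1, BMJ ~ 2
microphysics = ["Lin", "WSM3", "WSM5", "WSM6"]  # mp_physics: Lin ~ 2, WSM3 ~ 3, WSM5 ~ 4, WSM6 ~ 6
shortwave    = ["Duh", "God"]                   # ra_sw_physics: Dudhia ~ 1, Goddard ~ 2
boundary     = ["MYJ", "YSU"]                   # bl_pbl_physics: YSU ~ 1, MYJ ~ 2

# Enumerate the 2 x 4 x 2 x 2 = 32 configurations, named as in Table 2 (e.g. "BMJ-Lin-Duh-MYJ").
configs = ["-".join(c) for c in product(cumulus, microphysics, shortwave, boundary)]
print(len(configs))   # 32
print(configs[:4])    # ['KF-Lin-Duh-MYJ', 'KF-Lin-Duh-YSU', 'KF-Lin-God-MYJ', 'KF-Lin-God-YSU']
```

Each of these 32 configurations corresponds to one forecast per heavy rainfall case in the experiments described next.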
The WRF model has integrated many advances in model physics, numerics, and data assimilation contributed by scientists and developers from the expansive research community, and it has therefore become a very flexible and useful tool for both researchers and operational forecasters (https://www.mmm.ucar.edu/weather-research-and-forecasting-model). For the purpose of investigating the impact of physical parameterization schemes, similarly to the studies of Kieu et al. [20, 21], a set of combinations of physical parameterizations was generated based on (a) the modified KF and BMJ cumulus parameterization schemes; (b) the Goddard and Dudhia shortwave radiation schemes; (c) the YSU and MYJ planetary boundary layer schemes; and (d) the Lin, WSM3, WSM5, and WSM6 cloud microphysics schemes. This gives a maximum of 32 different configuration forecasts for each heavy rainfall case, listed in Table 2. The other options are the Monin–Obukhov surface layer scheme and the Rapid Radiative Transfer Model (RRTM) scheme for longwave radiation. Note that, with the MYJ scheme, the surface layer option is switched to Janjic's Eta–Monin–Obukhov scheme, which is based on similar theory with viscous sublayers over both solid surfaces and water points. Skamarock et al. [22] provided a detailed description of the WRF-ARW model, and references for the physical parameterizations of the WRF-ARW model can be found in the listed references [11, 23–30].

Table 2 Details of physical parameterization configurations in different experiments.

| Abbreviation | Cumulus | Microphysics | Shortwave radiation | Boundary layer |
|---|---|---|---|---|
| BMJ-Lin-Duh-MYJ | BMJ | Lin | Dudhia | MYJ |
| BMJ-Lin-Duh-YSU | BMJ | Lin | Dudhia | YSU |
| BMJ-Lin-God-MYJ | BMJ | Lin | Goddard | MYJ |
| BMJ-Lin-God-YSU | BMJ | Lin | Goddard | YSU |
| BMJ-WSM3-Duh-MYJ | BMJ | WSM3 | Dudhia | MYJ |
| BMJ-WSM3-Duh-YSU | BMJ | WSM3 | Dudhia | YSU |
| BMJ-WSM3-God-MYJ | BMJ | WSM3 | Goddard | MYJ |
| BMJ-WSM3-God-YSU | BMJ | WSM3 | Goddard | YSU |
| BMJ-WSM5-Duh-MYJ | BMJ | WSM5 | Dudhia | MYJ |
| BMJ-WSM5-Duh-YSU | BMJ | WSM5 | Dudhia | YSU |
| BMJ-WSM5-God-MYJ | BMJ | WSM5 | Goddard | MYJ |
| BMJ-WSM5-God-YSU | BMJ | WSM5 | Goddard | YSU |
| BMJ-WSM6-Duh-MYJ | BMJ | WSM6 | Dudhia | MYJ |
| BMJ-WSM6-Duh-YSU | BMJ | WSM6 | Dudhia | YSU |
| BMJ-WSM6-God-MYJ | BMJ | WSM6 | Goddard | MYJ |
| BMJ-WSM6-God-YSU | BMJ | WSM6 | Goddard | YSU |
| KF-Lin-Duh-MYJ | KF | Lin | Dudhia | MYJ |
| KF-Lin-Duh-YSU | KF | Lin | Dudhia | YSU |
| KF-Lin-God-MYJ | KF | Lin | Goddard | MYJ |
| KF-Lin-God-YSU | KF | Lin | Goddard | YSU |
| KF-WSM3-Duh-MYJ | KF | WSM3 | Dudhia | MYJ |
| KF-WSM3-Duh-YSU | KF | WSM3 | Dudhia | YSU |
| KF-WSM3-God-MYJ | KF | WSM3 | Goddard | MYJ |
| KF-WSM3-God-YSU | KF | WSM3 | Goddard | YSU |
| KF-WSM5-Duh-MYJ | KF | WSM5 | Dudhia | MYJ |
| KF-WSM5-Duh-YSU | KF | WSM5 | Dudhia | YSU |
| KF-WSM5-God-MYJ | KF | WSM5 | Goddard | MYJ |
| KF-WSM5-God-YSU | KF | WSM5 | Goddard | YSU |
| KF-WSM6-Duh-MYJ | KF | WSM6 | Dudhia | MYJ |
| KF-WSM6-Duh-YSU | KF | WSM6 | Dudhia | YSU |
| KF-WSM6-God-MYJ | KF | WSM6 | Goddard | MYJ |
| KF-WSM6-God-YSU | KF | WSM6 | Goddard | YSU |

The WRF-ARW model is configured with two nested grid domains, each consisting of 199 × 199 grid points in the (x, y) dimensions, with horizontal resolutions of 15 km (denoted the d01 domain) and 5 km (denoted the d02 domain). Both domains share the same 41 vertical σ levels with the model top at 50 hPa. The higher-resolution domain covers the northern part of Vietnam and uses a time step of 15 seconds. All validations are carried out with forecasts from the d02 domain. Figure 2 shows the terrain used in the d01 and d02 domains.

Figure 2 SYNOP station distribution over Vietnam and neighboring countries (black dots) and terrain of the computing domains for the WRF-ARW model.
The blue contour is for the outer domain (d01, 15 km), and dark green shading is for the inner domain (d02, 5 km), illustrated with the Diana meteorological visualization software of the Norwegian Meteorological Institute.

### 2.2. Boundary Conditions

The GFS model of NCEP, used to provide boundary conditions for the WRF-ARW model in this study, has a 0.5-degree horizontal resolution and is available every three hours on pressure levels from 1000 hPa to 1 hPa. The GFS data for this study were downloaded from the Research Data Archive at the National Center for Atmospheric Research at https://rda.ucar.edu/datasets/ds335.0. More information on the GFS data can be found at https://www.nco.ncep.noaa.gov/pmb/products/gfs/.

### 2.3. Observation Data

The number of observation stations in Vietnam increased from 89 in 1988 to 186 in 2017, with 4 or 8 observations per day (the black dots in Figure 2 mark stations with 8 observations/day in Vietnam and nearby countries), but only 24 stations report to the WMO. Differences in location and topography lead to significant changes from one climate to another, and Vietnam has therefore been divided into several climate zones. The highest station density is in the Red River Delta area (the southern part of northern Vietnam), with approximately 1 station per 25 km × 25 km area. The coarsest station density is in the Central Highlands area (latitudes between about 11°N and 16°N), with approximately 1 station per 55 km × 55 km area. On average, the current surface observation network density of Vietnam is about 1 station per 35 km × 35 km for flat regions and 1 station per 50 km × 50 km for mountainous, complex regions. In this paper, in order to verify the model forecasts for the Bac Bo area, we used observation data from the northern SYNOP stations for the period from 2012 to 2016, listed in Table 1.

In this study, 72 cases of typical widespread heavy rain which occurred in the northern part of Vietnam in the period 2012–2016 were selected. For each case, the forecast cycle was chosen so that the 72-hour forecast range covers the maximum duration of the heavy rain episode. With respect to the causes of the heavy rainfall events, there are four main categories: (i) activities of the ITCZ or troughs (type I); (ii) events affected by tropical cyclones or surface low-pressure systems persisting for at least two days over the Bac Bo area (type II); (iii) events related to cold surges from the north (type III); and (iv) complex combinations of different patterns (type IV). The list of forecast events is given in Table 3 with the station name, the maximum value of daily rainfall, and the type of each heavy rainfall case. Types I, II, III, and IV comprise 37, 21, 5, and 9 forecast cycles, respectively.

Table 3 List of forecast cycles related to heavy rain cases from 2012 to 2016, the station with maximum daily accumulated rainfall up to 72 h for each cycle, and types of main synoptic situations.
Forecast cycle (year month day hour) Maximum 24 h accumulation observation (mm) Station with maximum observation Types of rain events 2012 05 18 00 37 Bac Yen Type I 2012 05 19 00 44 Lac Son Type I 2012 05 20 00 114 Bac Quang Type I 2012 05 21 00 195 Bac Quang Type I 2012 05 22 00 131 Ha Dong Type I 2012 05 23 00 186 Quang Ha Type I 2012 07 20 00 41 Bac Quang Type I 2012 07 21 00 24.6 Lai Chau Type II 2012 07 22 00 116 Vinh Yen Type II 2012 07 23 00 75 Vinh Type II 2012 07 25 00 229 Tuyen Quang Type II 2012 07 26 00 104 Ham Yen Type II 2012 07 27 00 112 Bac Quang Type I 2012 08 03 00 58.1 Con Cuong Type I 2012 08 04 00 34.1 Tinh Gia Type I 2012 08 05 00 70 Lang Son Type I 2012 08 06 00 153 Muong La Type I 2012 08 07 00 163 Quang Ha Type I 2012 08 13 00 63.6 Muong La Type II 2012 08 14 00 64.5 Nho Quan Type II 2012 08 15 00 74 Do Luong Type II 2012 08 16 00 76 Tam Dao Type II 2012 08 31 00 49.1 Sin Ho Type IV 2012 09 01 00 67 Nhu Xuan Type IV 2012 09 02 00 124 Quynh Luu Type IV 2012 09 03 00 84 Moc Chau Type IV 2012 09 04 00 157 Huong Khe Type IV 2012 09 16 00 151.4 Muong Te Type IV 2012 09 17 00 104.6 Tam Duong Type III 2013 06 19 00 109 Bac Quang Type II 2013 06 20 00 72.2 Huong Khe Type II 2013 06 21 00 59 Nam Dinh Type II 2013 06 22 00 218 Ha Tinh Type II 2014 08 25 00 91 Lao Cai Type II 2014 08 26 00 89 Van Ly Type II 2014 08 27 00 157 Hon Ngu Type II 2015 05 18 00 90 Hoi Xuan Type III 2015 05 19 00 58 Sin Ho Type III 2015 05 20 00 106 That Khe Type III 2015 05 21 00 72 Tuyen Quang Type III 2015 06 21 00 124 Luc Yen Type II 2015 06 22 00 72.7 Do Luong Type II 2015 07 01 00 84.5 Cao Bang Type I 2015 07 02 00 123.0 Van Chan Type I 2015 07 03 00 74.0 Huong Khe Type I 2015 07 21 00 54.5 Hoa Binh Type I 2015 07 22 00 19 Sin Ho Type I 2015 07 23 00 92 Sin Ho Type I 2015 07 24 00 180 Quynh Nhai Type I 2015 07 25 00 181 Cua Ong Type I 2015 07 26 00 432 Cua Ong Type I 2015 07 27 00 347 Quang Ha Type I 2015 07 28 00 224 Mong Cai Type I 2015 07 29 00 247 Cua Ong Type I 2015 07 30 00 145 Quang Ha Type I 2015 07 31 00 239 Quang Ha Type I 2015 08 01 00 157 Phu Lien Type I 2015 09 18 00 73 Hai Duong Type IV 2015 09 19 00 78.0 Dinh Hoa Type IV 2016 05 20 00 118 Tinh Gia Type I 2016 05 21 00 107.0 Ha Nam Type I 2016 05 22 00 92 Sa Pa Type I 2016 05 23 00 190.4 Yen Bai Type I 2016 07 24 00 60 Phu Lien Type II 2016 07 25 00 17 Thai Binh Type II 2016 07 26 00 46 Dinh Hoa Type II 2016 07 30 00 18.1 Mu Cang Chai Type II 2016 08 01 00 19 Sin Ho Type I 2016 08 02 00 150 Ninh Binh Type I 2016 08 10 00 81 Quynh Nhai Type I 2016 08 11 00 117 Thanh Hoa Type I 2016 08 12 00 87 Tay Hieu Type I Type I: activities of trough or ITCZ; type II: affected by tropical cyclone or low-pressure system; type III: related to cold surge from the north; type IV: combinations of different patterns. ### 2.4. Validation Methods By finding the nearest grids to each station position (listed in Table1), the daily accumulated rainfall for these heavy rainfall cases from WRF-ARW model forecasts can be assigned. The verification scores used in this study are frequency bias (BIAS), probability of detection (POD), false alarm ratio (FAR), threat score (TS), and equitable threat score (ETS). 
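To make the station-matching step described above concrete, the following is a minimal Python sketch of the nearest-grid-point assignment of model rainfall to stations. The array and function names are hypothetical placeholders (not objects from the paper or from any specific WRF post-processing library), and a simple squared distance in degrees is used instead of a great-circle distance.

```python
import numpy as np

def nearest_grid_rainfall(lat2d, lon2d, rain24h, stations):
    """Assign 24 h accumulated model rainfall to stations by nearest grid point.

    lat2d, lon2d : 2-D arrays of grid-point latitudes/longitudes (deg), e.g. on the 5 km d02 grid
    rain24h      : 2-D array of 24 h accumulated precipitation (mm) on the same grid
    stations     : list of (name, lon, lat) tuples, e.g. entries from Table 1
    """
    out = {}
    for name, slon, slat in stations:
        # Squared distance in degrees is enough to pick the nearest point on a dense
        # 5 km grid; a great-circle distance could be substituted if preferred.
        d2 = (lat2d - slat) ** 2 + (lon2d - slon) ** 2
        j, i = np.unravel_index(np.argmin(d2), d2.shape)
        out[name] = float(rain24h[j, i])
    return out

# Hypothetical usage with a toy 3 x 3 grid around the Ha Tinh station (105.9 E, 18.35 N)
lats = np.linspace(18.2, 18.5, 3)[:, None] * np.ones((1, 3))
lons = np.ones((3, 1)) * np.linspace(105.8, 106.1, 3)[None, :]
rain = np.array([[12.0, 30.0, 5.0], [40.0, 55.0, 20.0], [8.0, 10.0, 2.0]])
print(nearest_grid_rainfall(lats, lons, rain, [("Ha Tinh", 105.9, 18.35)]))  # {'Ha Tinh': 55.0}
```

The station values obtained this way are then compared with the observed daily accumulations to build the contingency tables behind the scores defined next.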
If we denote by H the number of hits (rainfall above the given threshold in both the forecast and the observation), M the number of misses (observed but not forecast), and F the number of false alarms (forecast but not observed), the BIAS, POD, FAR, and TS are calculated by the following equations:

$$\mathrm{BIAS}=\frac{H+F}{H+M},\qquad \mathrm{POD}=\frac{H}{H+M},\qquad \mathrm{FAR}=\frac{F}{H+F},\qquad \mathrm{TS}=\frac{H}{H+M+F},\tag{1}$$

where the perfect values are 1, 1, 0, and 1, respectively; BIAS < 1 indicates underestimation, BIAS > 1 indicates overestimation, and TS = 0 indicates no skill. If we set $\mathrm{Hits}_{\mathrm{random}}=(H+F)(H+M)/T$, where $T$ is the sum of $H$, $M$, $F$, and the number of nonoccurrences (rainfall below the threshold in both the forecast and the observation), the ETS is calculated by

$$\mathrm{ETS}=\frac{H-\mathrm{Hits}_{\mathrm{random}}}{H+M+F-\mathrm{Hits}_{\mathrm{random}}},\tag{2}$$

with a perfect value of 1 and no skill for values less than or equal to 0. Further details on these scores can be found in Wilks' study [31]. The verification is carried out for the 5 km domain and for 24 h accumulated rainfall at the 24 h, 48 h, and 72 h forecast ranges. Other analysis charts include histograms of precipitation occurrences at given thresholds (>25 mm/24 h, >50 mm/24 h, and >100 mm/24 h) at the observation stations.
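As an illustration of these definitions, here is a small Python sketch that computes the scores from contingency-table counts. The helper name is hypothetical; the example counts are taken from the BMJ-Lin-Duh-MYJ row of Table 4 at the >25 mm/24 h threshold, and the rounding is mine.

```python
def categorical_scores(hits, false_alarms, misses, correct_negatives):
    """BIAS, POD, FAR, TS, and ETS from 2x2 contingency-table counts (equations (1) and (2))."""
    H, F, M, CN = hits, false_alarms, misses, correct_negatives
    total = H + F + M + CN                        # T in equation (2)
    hits_random = (H + F) * (H + M) / total       # expected hits by chance
    return {
        "BIAS": (H + F) / (H + M),
        "POD": H / (H + M),
        "FAR": F / (H + F),
        "TS": H / (H + M + F),
        "ETS": (H - hits_random) / (H + M + F - hits_random),
    }

# Check against the BMJ-Lin-Duh-MYJ row of Table 4 (>25 mm/24 h, 24 h range):
# H = 301, F = 347, M = 806, correct negatives = 6106.
scores = categorical_scores(301, 347, 806, 6106)
print({k: round(v, 4) for k, v in scores.items()})
# As reported in Table 4: BIAS 0.5854, POD 0.2719, FAR 0.5355, TS 0.207, ETS 0.1517
```

Running this reproduces the corresponding Table 4 entries, which is a convenient sanity check when recomputing the scores for other configurations or thresholds.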
## 3. Results

### 3.1. General Performance

The histogram charts (Figure 3) show the number of observations or forecasts that occurred at the stations within given ranges, or bins. Daily rainfall was divided into four main bins (0–25 mm, 25–50 mm, 50–100 mm, and >100 mm) representing different rainfall classes. For the 24 h, 48 h, and 72 h forecasts alike, it is quite clear that most forecasts from the BMJ scheme fall in the 0–25 mm range, more often than the observations, while the KF scheme tends to produce fewer forecasts in this range than observed. In contrast, at thresholds greater than 25 mm, the number of BMJ forecasts is smaller than the number of observations, while the KF scheme tends to produce more forecasts than observed.

Figure 3 Histogram of daily rainfall frequency at different thresholds (bins) for (a) 24 h, (b) 48 h, and (c) 72 h forecast ranges. The dotted lines are the observation frequency. The number of samples is 8064 for an individual forecast. (a) (b) (c)

Figure 4 shows the BIAS score at different thresholds (>25 mm/24 h and >50 mm/24 h), separated into KF and BMJ scheme combinations. The overall assessment through the BIAS score is quite similar to the evaluation through the histograms: simulations with the BMJ scheme tend to be lower than the observations at most forecast ranges and thresholds (BIAS < 1), while simulations with the KF scheme tend to be higher than the observations (BIAS > 1). The BIAS tends to decrease significantly as the forecast range increases.
When the validation threshold is increased, the BIAS of the BMJ combinations tends to decrease further, whereas for the KF combinations the BIAS increases with both the forecast range and the evaluation threshold.

Figure 4 The BIAS score at 24 h, 48 h, and 72 h for different thresholds (over 25 mm, 50 mm, and 100 mm) for BMJ scheme combinations (a) and for KF scheme combinations (b). The number of samples is 8064 for an individual forecast. (a) (b)

The individual assessment of each BMJ and KF combination shows that, when combined with the Goddard radiation scheme, the BIAS increases by about 0.1 to 0.2 compared to the Dudhia scheme. These results are similar for all validation thresholds and for the 24 h, 48 h, and 72 h forecast ranges. The difference between simulations with different boundary layer schemes is unclear at the lower threshold (25 mm/24 h); at higher thresholds, however, it becomes apparent, and the change of BIAS is much larger for the KF combinations. For example, the BIAS of KF-WSM3-God-MYJ at the >100 mm/24 h threshold and the 24-hour forecast range is 1.6458 and that of KF-WSM3-God-YSU is 2.0833, while BMJ-WSM3-God-MYJ and BMJ-WSM3-God-YSU have approximately equal BIAS scores (0.8229). Thus, the combinations with the two boundary layer schemes considered here (YSU and MYJ) behave differently, with the KF scheme being much more sensitive to this choice than the BMJ scheme, especially at the high rainfall thresholds. Details of the numerical values of BIAS can be found in Tables 4, 5, and 6.

Table 4 Skill scores (TS, ETS, BIAS, POD, and FAR) and hit rates H, false alarm rates F, missed rates M, and total corrected rates T at the 24 h forecast range for thresholds >25 mm/24 h and >50 mm/24 h, for the period 2012–2016. The number of samples is 8064.
>25 mm/24 h >50 mm/24 h TS ETS BIAS POD FAR H F M T TS ETS BIAS POD FAR H F M T BMJ-Lin-Duh-MYJ 0.207 0.1517 0.5854 0.2719 0.5355 301 347 806 6106 0.1049 0.0874 0.4139 0.1342 0.6757 60 125 387 6988 BMJ-Lin-Duh-YSU 0.2087 0.1551 0.5592 0.2692 0.5186 298 321 809 6132 0.1072 0.0899 0.4094 0.1365 0.6667 61 122 386 6991 BMJ-Lin-God-MYJ 0.2283 0.1613 0.7986 0.3342 0.5814 370 514 737 5939 0.1225 0.0996 0.6197 0.1767 0.7148 79 198 368 6915 BMJ-Lin-God-YSU 0.2304 0.1649 0.7706 0.3315 0.5698 367 486 740 5967 0.122 0.1 0.5839 0.1723 0.705 77 184 370 6929 BMJ-WSM3-Duh-MYJ 0.2051 0.1433 0.6929 0.2882 0.5841 319 448 788 6005 0.1066 0.0846 0.5794 0.1521 0.7375 68 191 379 6922 BMJ-WSM3-Duh-YSU 0.2201 0.1595 0.6775 0.3026 0.5533 335 415 772 6038 0.1153 0.0928 0.6018 0.1655 0.7249 74 195 373 6918 BMJ-WSM3-God-MYJ 0.2388 0.1649 0.9539 0.3767 0.6051 417 639 690 5814 0.1364 0.1089 0.8456 0.2215 0.7381 99 279 348 6834 BMJ-WSM3-God-YSU 0.2408 0.1709 0.8663 0.3622 0.5819 401 558 706 5895 0.1391 0.1116 0.8501 0.226 0.7342 101 279 346 6834 BMJ-WSM5-Duh-MYJ 0.2147 0.153 0.692 0.299 0.5679 331 435 776 6018 0.1166 0.0951 0.5638 0.1633 0.7103 73 179 374 6934 BMJ-WSM5-Duh-YSU 0.2156 0.1553 0.6703 0.2963 0.558 328 414 779 6039 0.1319 0.1092 0.613 0.1879 0.6934 84 190 363 6923 BMJ-WSM5-God-MYJ 0.2408 0.1743 0.7967 0.3487 0.5624 386 496 721 5957 0.1273 0.1044 0.6242 0.1834 0.7061 82 197 365 6916 BMJ-WSM5-God-YSU 0.2449 0.1766 0.8365 0.3613 0.568 400 526 707 5927 0.1386 0.112 0.8009 0.2192 0.7263 98 260 349 6853 BMJ-WSM6-Duh-MYJ 0.2079 0.1419 0.7687 0.3044 0.604 337 514 770 5939 0.1166 0.0925 0.6711 0.1745 0.74 78 222 369 6891 BMJ-WSM6-Duh-YSU 0.2126 0.1469 0.7669 0.3098 0.596 343 506 764 5947 0.1386 0.1135 0.7271 0.2103 0.7108 94 231 353 6882 BMJ-WSM6-God-MYJ 0.2552 0.1793 1.0126 0.4092 0.5959 453 668 654 5785 0.1554 0.1269 0.9128 0.2573 0.7181 115 293 332 6820 BMJ-WSM6-God-YSU 0.2477 0.1747 0.9386 0.3848 0.59 426 613 681 5840 0.1644 0.1355 0.9485 0.2752 0.7099 123 301 324 6812 KF-Lin-Duh-MYJ 0.2399 0.162 1.0497 0.3966 0.6222 439 723 668 5730 0.1525 0.1262 0.7919 0.2371 0.7006 106 248 341 6865 KF-Lin-Duh-YSU 0.2656 0.1884 1.0533 0.4309 0.5909 477 689 630 5764 0.1457 0.1168 0.9351 0.2461 0.7368 110 308 337 6805 KF-Lin-God-MYJ 0.2507 0.165 1.2755 0.4562 0.6424 505 907 602 5546 0.1564 0.1233 1.2327 0.302 0.755 135 416 312 6697 KF-Lin-God-YSU 0.2649 0.1795 1.2818 0.4779 0.6272 529 890 578 5563 0.162 0.128 1.311 0.3221 0.7543 144 442 303 6671 KF-WSM3-Duh-MYJ 0.2442 0.1627 1.1491 0.4219 0.6329 467 805 640 5648 0.1581 0.1266 1.1141 0.2886 0.741 129 369 318 6744 KF-WSM3-Duh-YSU 0.2664 0.1843 1.1861 0.4598 0.6123 509 804 598 5649 0.1667 0.135 1.1298 0.3043 0.7307 136 369 311 6744 KF-WSM3-God-MYJ 0.2743 0.1867 1.3668 0.5095 0.6272 564 949 543 5504 0.172 0.1368 1.4385 0.3579 0.7512 160 483 287 6630 KF-WSM3-God-YSU 0.2739 0.1852 1.4029 0.5167 0.6317 572 981 535 5472 0.1778 0.1413 1.5638 0.387 0.7525 173 526 274 6587 KF-WSM5-Duh-MYJ 0.2583 0.1766 1.1653 0.4444 0.6186 492 798 615 5655 0.1648 0.1333 1.1186 0.2998 0.732 134 366 313 6747 KF-WSM5-Duh-YSU 0.2656 0.1828 1.2042 0.4625 0.6159 512 821 595 5632 0.1588 0.1269 1.1387 0.2931 0.7426 131 378 316 6735 KF-WSM5-God-MYJ 0.2693 0.1812 1.3758 0.5041 0.6336 558 965 549 5488 0.1773 0.1421 1.4362 0.3669 0.7445 164 478 283 6635 KF-WSM5-God-YSU 0.2826 0.1939 1.4146 0.5321 0.6239 589 977 518 5476 0.178 0.1415 1.5615 0.387 0.7521 173 525 274 6588 KF-WSM6-Duh-MYJ 0.2598 0.1758 1.234 0.4607 0.6266 510 856 597 5597 0.1663 0.1326 1.2908 0.3266 0.747 146 431 301 6682 KF-WSM6-Duh-YSU 0.28 0.1952 1.2836 
0.4995 0.6108 553 868 554 5585 0.1958 0.1611 1.4049 0.3937 0.7197 176 452 271 6661 KF-WSM6-God-MYJ 0.26 0.1698 1.43 0.5014 0.6494 555 1028 552 5425 0.1926 0.156 1.5906 0.4183 0.737 187 524 260 6589 KF-WSM6-God-YSU 0.2792 0.1888 1.467 0.5384 0.633 596 1028 511 5425 0.1844 0.1463 1.7584 0.4295 0.7557 192 594 255 6519Table 5 Skill scores (TS, ETS, BIAS, POD, and FAR) and hit ratesH, false alarm rates F, missed rates M, and total corrected rates T at the 48 h forecast range for thresholds >25 mm/24 h and >50 mm/24 h, for the period 2012–2016. The number of samples is 8064. >25 mm/24 h >50 mm/24 h TS ETS BIAS POD FAR H F M T TS ETS BIAS POD FAR H F M T BMJ-Lin-Duh-MYJ 0.1791 0.1054 0.4842 0.2254 0.5344 365 419 1254 5522 0.0962 0.0703 0.3463 0.1181 0.6589 88 170 657 6645 BMJ-Lin-Duh-YSU 0.1853 0.1069 0.5287 0.239 0.5479 387 469 1232 5472 0.1105 0.0803 0.4295 0.1423 0.6687 106 214 639 6601 BMJ-Lin-God-MYJ 0.234 0.1345 0.769 0.3354 0.5639 543 702 1076 5239 0.1225 0.0846 0.5987 0.1745 0.7085 130 316 615 6499 BMJ-Lin-God-YSU 0.243 0.1408 0.8073 0.3533 0.5624 572 735 1047 5206 0.1357 0.0971 0.6174 0.1933 0.687 144 316 601 6499 BMJ-WSM3-Duh-MYJ 0.2332 0.1353 0.7505 0.3311 0.5588 536 679 1083 5262 0.152 0.109 0.7396 0.2295 0.6897 171 380 574 6435 BMJ-WSM3-Duh-YSU 0.2579 0.1573 0.7956 0.3681 0.5373 596 692 1023 5249 0.1587 0.1142 0.7839 0.2443 0.6884 182 402 563 6413 BMJ-WSM3-God-MYJ 0.2616 0.1452 1.0167 0.4182 0.5887 677 969 942 4972 0.1546 0.1034 1.0054 0.2685 0.733 200 549 545 6266 BMJ-WSM3-God-YSU 0.2641 0.148 1.0136 0.4206 0.585 681 960 938 4981 0.1539 0.1034 0.9826 0.2644 0.7309 197 535 548 6280 BMJ-WSM5-Duh-MYJ 0.2425 0.1455 0.7437 0.3403 0.5424 551 653 1068 5288 0.1582 0.1171 0.6899 0.2309 0.6654 172 342 573 6473 BMJ-WSM5-Duh-YSU 0.2545 0.154 0.7931 0.3638 0.5413 589 695 1030 5246 0.1698 0.1257 0.7758 0.2577 0.6678 192 386 553 6429 BMJ-WSM5-God-MYJ 0.2687 0.1601 0.9074 0.404 0.5548 654 815 965 5126 0.174 0.1286 0.8201 0.2698 0.671 201 410 544 6405 BMJ-WSM5-God-YSU 0.2677 0.1522 1.0068 0.4237 0.5791 686 944 933 4997 0.1674 0.1166 1.0027 0.2872 0.7135 214 533 531 6282 BMJ-WSM6-Duh-MYJ 0.2282 0.1261 0.7986 0.3342 0.5816 541 752 1078 5189 0.1414 0.097 0.7772 0.2201 0.7168 164 415 581 6400 BMJ-WSM6-Duh-YSU 0.2471 0.1426 0.8394 0.3644 0.5659 590 769 1029 5172 0.169 0.1198 0.9409 0.2805 0.7019 209 492 536 6323 BMJ-WSM6-God-MYJ 0.2484 0.1255 1.1081 0.4194 0.6215 679 1115 940 4826 0.1471 0.0931 1.1141 0.2711 0.7566 202 628 543 6187 BMJ-WSM6-God-YSU 0.283 0.1642 1.0723 0.4571 0.5737 740 996 879 4945 0.1588 0.1053 1.0966 0.2872 0.7381 214 603 531 6212 KF-Lin-Duh-MYJ 0.2144 0.1147 0.7634 0.3113 0.5922 504 732 1115 5209 0.1299 0.0905 0.6349 0.1879 0.704 140 333 605 6482 KF-Lin-Duh-YSU 0.2312 0.136 0.7171 0.3224 0.5504 522 639 1097 5302 0.124 0.0847 0.6309 0.1799 0.7149 134 336 611 6479 KF-Lin-God-MYJ 0.2584 0.1355 1.1174 0.4348 0.6108 704 1105 915 4836 0.1304 0.0781 1.0362 0.2349 0.7733 175 597 570 6218 KF-Lin-God-YSU 0.2806 0.1647 1.0241 0.4435 0.5669 718 940 901 5001 0.1611 0.1097 1.0215 0.2805 0.7254 209 552 536 6263 KF-WSM3-Duh-MYJ 0.266 0.149 1.0284 0.4262 0.5856 690 975 929 4966 0.1735 0.1212 1.0604 0.3047 0.7127 227 563 518 6252 KF-WSM3-Duh-YSU 0.2839 0.1719 0.9691 0.4355 0.5507 705 864 914 5077 0.1948 0.1437 1.0255 0.3302 0.678 246 518 499 6297 KF-WSM3-God-MYJ 0.2845 0.1503 1.3484 0.5201 0.6143 842 1341 777 4600 0.1766 0.1161 1.4416 0.3664 0.7458 273 801 472 6014 KF-WSM3-God-YSU 0.3018 0.1706 1.3125 0.5361 0.5915 868 1257 751 4684 0.1877 0.1271 1.455 0.3879 0.7334 289 795 456 6020 KF-WSM5-Duh-MYJ 
0.2801 0.1622 1.055 0.4497 0.5738 728 980 891 4961 0.1785 0.1262 1.0644 0.3128 0.7062 233 560 512 6255 KF-WSM5-Duh-YSU 0.2948 0.1812 1.0019 0.4558 0.545 738 884 881 5057 0.2037 0.1509 1.102 0.3557 0.6772 265 556 480 6259 KF-WSM5-God-MYJ 0.284 0.1477 1.3904 0.5287 0.6197 856 1395 763 4546 0.1685 0.1064 1.5221 0.3638 0.761 271 863 474 5952 KF-WSM5-God-YSU 0.2998 0.1683 1.3162 0.5343 0.5941 865 1266 754 4675 0.1996 0.1388 1.4846 0.4134 0.7215 308 798 437 6017 KF-WSM6-Duh-MYJ 0.2799 0.1571 1.1353 0.467 0.5887 756 1082 863 4859 0.1973 0.1415 1.2242 0.3664 0.7007 273 639 472 6176 KF-WSM6-Duh-YSU 0.3002 0.1833 1.0574 0.475 0.5508 769 943 850 4998 0.211 0.157 1.157 0.3758 0.6752 280 582 465 6233 KF-WSM6-God-MYJ 0.2917 0.155 1.4095 0.5442 0.6139 881 1401 738 4540 0.1788 0.1154 1.6107 0.396 0.7542 295 905 450 5910 KF-WSM6-God-YSU 0.3113 0.1785 1.357 0.5596 0.5876 906 1291 713 4650 0.2101 0.1475 1.6054 0.4523 0.7182 337 859 408 5956Table 6 Skill scores (TS, ETS, BIAS, POD, and FAR) and hit ratesH, false alarm rates F, missed rates M, and total corrected rates T at the 72 h forecast range for thresholds >25 mm/24 h and >50 mm/24 h, for the period 2012–2016. The number of samples is 8064. >25 mm/24 h >50 mm/24 h TS ETS BIAS POD FAR H F M T TS ETS BIAS POD FAR H F M T BMJ-Lin-Duh-MYJ 0.154 0.0672 0.3964 0.1863 0.53 400 451 1747 4962 0.0797 0.0443 0.3412 0.099 0.7098 101 247 919 6293 BMJ-Lin-Duh-YSU 0.1539 0.0595 0.442 0.1924 0.5648 413 536 1734 4877 0.0766 0.0373 0.3912 0.099 0.7469 101 298 919 6242 BMJ-Lin-God-MYJ 0.1755 0.0611 0.5752 0.2352 0.5911 505 730 1642 4683 0.0719 0.0285 0.4471 0.0971 0.7829 99 357 921 6183 BMJ-Lin-God-YSU 0.1893 0.0645 0.653 0.2632 0.597 565 837 1582 4576 0.0832 0.0342 0.5314 0.1176 0.7786 120 422 900 6118 BMJ-WSM3-Duh-MYJ 0.1802 0.0656 0.5771 0.2408 0.5827 517 722 1630 4691 0.0867 0.0391 0.5108 0.1206 0.7639 123 398 897 6142 BMJ-WSM3-Duh-YSU 0.1988 0.0737 0.6572 0.2748 0.5819 590 821 1557 4592 0.0935 0.0407 0.5941 0.1363 0.7706 139 467 881 6073 BMJ-WSM3-God-MYJ 0.2273 0.0777 0.871 0.3465 0.6021 744 1126 1403 4287 0.0992 0.0404 0.7049 0.1539 0.7816 157 562 863 5978 BMJ-WSM3-God-YSU 0.2239 0.0691 0.9171 0.3507 0.6176 753 1216 1394 4197 0.1022 0.0373 0.8294 0.1696 0.7955 173 673 847 5867 BMJ-WSM5-Duh-MYJ 0.1788 0.0634 0.5817 0.2399 0.5877 515 734 1632 4679 0.0944 0.0482 0.4892 0.1284 0.7375 131 368 889 6172 BMJ-WSM5-Duh-YSU 0.2041 0.0796 0.6544 0.2804 0.5715 602 803 1545 4610 0.0966 0.0427 0.6137 0.1422 0.7684 145 481 875 6059 BMJ-WSM5-God-MYJ 0.2169 0.0787 0.7667 0.3149 0.5893 676 970 1471 4443 0.0908 0.037 0.6127 0.1343 0.7808 137 488 883 6052 BMJ-WSM5-God-YSU 0.2278 0.0779 0.8728 0.3475 0.6019 746 1128 1401 4285 0.0947 0.031 0.802 0.1559 0.8056 159 659 861 5881 BMJ-WSM6-Duh-MYJ 0.1712 0.0601 0.5515 0.2268 0.5887 487 697 1660 4716 0.0834 0.0379 0.4775 0.1137 0.7618 116 371 904 6169 BMJ-WSM6-Duh-YSU 0.1971 0.069 0.68 0.2767 0.5932 594 866 1553 4547 0.0992 0.0432 0.652 0.149 0.7714 152 513 868 6027 BMJ-WSM6-God-MYJ 0.2191 0.069 0.8714 0.3363 0.6141 722 1149 1425 4264 0.1 0.0385 0.7578 0.1598 0.7891 163 610 857 5930 BMJ-WSM6-God-YSU 0.215 0.0632 0.885 0.3335 0.6232 716 1184 1431 4229 0.0824 0.0185 0.8039 0.1373 0.8293 140 680 880 5860 KF-Lin-Duh-MYJ 0.2144 0.0871 0.6782 0.2962 0.5632 636 820 1511 4593 0.1162 0.0632 0.601 0.1667 0.7227 170 443 850 6097 KF-Lin-Duh-YSU 0.2063 0.0825 0.6502 0.2823 0.5659 606 790 1541 4623 0.1069 0.052 0.6343 0.1578 0.7512 161 486 859 6054 KF-Lin-God-MYJ 0.2399 0.0805 0.9693 0.381 0.6069 818 1263 1329 4150 0.1392 0.0702 0.9333 0.2363 0.7468 241 711 
779 5829 KF-Lin-God-YSU 0.2427 0.0831 0.9725 0.3852 0.6039 827 1261 1320 4152 0.1309 0.0605 0.9647 0.2275 0.7642 232 752 788 5788 KF-WSM3-Duh-MYJ 0.2377 0.0889 0.8677 0.3586 0.5867 770 1093 1377 4320 0.1242 0.0613 0.7922 0.198 0.75 202 606 818 5934 KF-WSM3-Duh-YSU 0.2618 0.1129 0.8812 0.3903 0.5571 838 1054 1309 4359 0.1675 0.0983 0.948 0.2794 0.7053 285 682 735 5858 KF-WSM3-God-MYJ 0.2596 0.0844 1.1495 0.4429 0.6147 951 1517 1196 3896 0.1334 0.0538 1.2235 0.2618 0.7861 267 981 753 5559 KF-WSM3-God-YSU 0.2817 0.1073 1.1593 0.4746 0.5906 1019 1470 1128 3943 0.1631 0.081 1.3216 0.3255 0.7537 332 1016 688 5524 KF-WSM5-Duh-MYJ 0.2508 0.1005 0.8882 0.3787 0.5737 813 1094 1334 4319 0.1333 0.0694 0.8167 0.2137 0.7383 218 615 802 5925 KF-WSM5-Duh-YSU 0.2671 0.1193 0.8738 0.395 0.548 848 1028 1299 4385 0.158 0.0902 0.9118 0.2608 0.714 266 664 754 5876 KF-WSM5-God-MYJ 0.2723 0.0967 1.1635 0.463 0.6021 994 1504 1153 3909 0.1354 0.0571 1.1863 0.2608 0.7802 266 944 754 5596 KF-WSM5-God-YSU 0.2652 0.0932 1.1178 0.4439 0.6029 953 1447 1194 3966 0.1507 0.0709 1.2382 0.2931 0.7633 299 964 721 5576 KF-WSM6-Duh-MYJ 0.2545 0.0952 0.9767 0.401 0.5894 861 1236 1286 4177 0.1237 0.0546 0.9324 0.2127 0.7718 217 734 803 5806 KF-WSM6-Duh-YSU 0.2615 0.1084 0.9208 0.3982 0.5675 855 1122 1292 4291 0.156 0.0841 1.0127 0.2716 0.7318 277 756 743 5784 KF-WSM6-God-MYJ 0.2693 0.0883 1.2259 0.4723 0.6147 1014 1618 1133 3795 0.1398 0.0571 1.3265 0.2853 0.7849 291 1062 729 5478 KF-WSM6-God-YSU 0.2853 0.1072 1.2054 0.4895 0.5939 1051 1537 1096 3876 0.1513 0.0661 1.4245 0.3186 0.7763 325 1128 695 5412 ### 3.2. Skill Score Validation The charts for the skill scores at the two thresholds in Figures5 and 6 show that the TS value at the 24 h forecast range is about 0.2 to 0.27 for >25 mm/24 h and 0.1 to 0.2 for >50 mm/24 h. At 48 h, the TS is around ∼0.18 to 0.3 and ∼0.1 to 0.19 corresponding to two thresholds >25 mm/24 h and >50 mm/24 h. At 72 h, the TS is around 0.15 to 0.25 and ∼0.08 to 0.15 corresponding to two thresholds >25 mm/24 h and >50 mm/24 h.Figure 5 Skill scores calculated for northern Vietnam for daily accumulation thresholds over 25 mm for (a) 24 h, (b) 48 h, and (c) 72 h forecast ranges. The dark grey bar is for POD, light grey bar is for FAR, blue dotted line is for TS, and red dotted line is for ETS. In each chart, the left vertical axis (0–0.9) is for POD and FAR, while the right vertical axis (0–0.35) is for TS and ETS values. The number of samples is 8064 for an individual forecast. (a) (b) (c)Figure 6 Skill scores calculated for northern Vietnam for daily accumulation thresholds over 50 mm for (a) 24 h, (b) 48 h, and (c) 72 h forecast ranges. The dark grey bar is for POD, light grey bar is for FAR, blue dotted line is for TS, and red dotted line is for ETS. In each chart, the left vertical axis (0–0.9) is for POD and FAR, while the right vertical axis (0–0.35) is for TS and ETS values. The number of samples is 8064 for an individual forecast. (a) (b) (c)The probability of detection decreases and the false alarm rate clearly increases with forecast ranges and the validating thresholds. 
In addition, when the threshold increases, the difference between the TS and the ETS decreases, which reflects the fact that Hitsrandom becomes very small as hits become rare (for example, for BMJ-Lin-Duh-MYJ at the 24 h range, Hitsrandom computed from the counts in Table 4 drops by roughly 90%, from about 95 at >25 mm/24 h to about 11 at >50 mm/24 h). Specific comparisons between the BMJ and KF combinations show that the skill of the KF scheme in heavy rain forecasting over the northern region of Vietnam is better than that of the BMJ scheme; the average TS with the KF scheme can be about 15–25% larger than with the BMJ scheme. If the skill of a regional model changed only insignificantly with the physical parameterization schemes, the lateral boundary conditions (from the global forecasts) would dominate the quality of the dynamical downscaling forecasts after 24 h of integration. Here, however, the skill difference between the two cumulus schemes at the longer forecast ranges (such as 72 h) shows how much the ability to simulate convection contributes to the forecast quality of the model. In terms of the TS and ETS, changing the shortwave radiation scheme combined with KF or BMJ does not make a clearer difference than changing the boundary layer scheme. The combinations with the YSU boundary layer scheme have better skill than those with the MYJ scheme. In addition, when the complexity of the cloud microphysics scheme is increased, the more complex the simulated microphysical processes, the better the TS and ETS (at the 24, 48, and 72 h forecast ranges and both validating thresholds; see Figure 7 for a comparison of the change in TSs and ETSs with the cloud microphysics scheme).

Figure 7 Brief comparison of TSs and ETSs to illustrate the sensitivity to the microphysics schemes.

For the skill comparison of the different event types (I, II, III, and IV), Figure 8 shows the TSs at thresholds over 25 mm/24 h and over 50 mm/24 h for the 24 h and 48 h forecast ranges. For type I, which is associated with the activity of the ITCZ and low-pressure troughs over the Bac Bo area, the KF scheme proved its forecast skill at almost all forecast ranges and thresholds considered in this research. However, for rain caused by tropical cyclones (type II), the difference between the KF and BMJ schemes within 24 h was smaller than for type I. At 48 h and 72 h, the BMJ scheme gave the more skillful forecasts, with TSs at the 25 mm threshold ranging from 0.25 to 0.35 for the BMJ scheme compared with 0.2 to 0.3 for the KF scheme, and TSs at the 50 mm threshold ranging from 0.2 to 0.3 for the BMJ scheme compared with 0.2 to 0.25 for the KF scheme. For type II, both the KF and BMJ schemes combined with the simple Lin cloud microphysics scheme showed the lowest skill scores. For type III, which is associated with cold surges squeezing the low-pressure trough from the north towards Bac Bo, the KF scheme was only skillful at the 25 mm threshold within 24 h. Particularly for type IV, which contains heavy rain events caused by complex combinations of situations resulting in a trough over Bac Bo, the KF scheme still showed skillful forecasts, whereas the BMJ scheme experiments showed very little skill (no skill at the thresholds over 50 mm/24 h and 25 mm/24 h).
More details of TSs for different types are listed in Tables 7 and 8.Figure 8 TSs for different types of heavy rainfall events in northern Vietnam for daily accumulation thresholds over (a) 25 mm and 50 mm (b) at 24 h forecast ranges and over 25 mm (c) and 50 mm (d) at 48 h forecast ranges. The right vertical axis is from 0 to 0.4. (a) (b) (c) (d)Table 7 TSs for different types of main heavy rainfall, at the 24 h forecast range for thresholds >25 mm/24 h and >50 mm/24 h, for the period 2012–2016. >25 mm/24 h >50 mm/24 h Type I Type II Type III Type IV Type I Type II Type III Type IV BMJ-Lin-Duh-MYJ 0.2546 0.1309 0.15 0.0682 0.124 0.0791 0.0769 0 BMJ-Lin-Duh-YSU 0.2625 0.1019 0.2346 0.0706 0.1244 0.0692 0.16 0 BMJ-Lin-God-MYJ 0.2644 0.1638 0.2235 0.1011 0.139 0.1064 0.0882 0 BMJ-Lin-God-YSU 0.2761 0.1519 0.1939 0.093 0.1369 0.087 0.1562 0.0333 BMJ-WSM3-Duh-MYJ 0.2503 0.1237 0.1818 0.09 0.1279 0.0621 0.129 0 BMJ-WSM3-Duh-YSU 0.2745 0.1179 0.2439 0.0652 0.1379 0.0699 0.1471 0 BMJ-WSM3-God-MYJ 0.2834 0.1651 0.21 0.09 0.1556 0.1104 0.1136 0 BMJ-WSM3-God-YSU 0.2943 0.1505 0.202 0.0745 0.167 0.0798 0.1304 0.0312 BMJ-WSM5-Duh-MYJ 0.2618 0.1349 0.1977 0.0707 0.1429 0.0563 0.1515 0 BMJ-WSM5-Duh-YSU 0.263 0.1351 0.2073 0.0652 0.1603 0.0789 0.1389 0 BMJ-WSM5-God-MYJ 0.3032 0.1376 0.1978 0.0714 0.1565 0.0496 0.1515 0.0345 BMJ-WSM5-God-YSU 0.3028 0.1525 0.1939 0.0737 0.1677 0.0696 0.15 0.0312 BMJ-WSM6-Duh-MYJ 0.2571 0.133 0.1304 0.0926 0.1403 0.0872 0.0732 0 BMJ-WSM6-Duh-YSU 0.2643 0.1281 0.1828 0.06 0.1714 0.0719 0.1212 0 BMJ-WSM6-God-MYJ 0.2991 0.1781 0.2574 0.0784 0.1706 0.1391 0.15 0.0256 BMJ-WSM6-God-YSU 0.3043 0.1595 0.1944 0.0652 0.1916 0.1104 0.1429 0 KF-Lin-Duh-MYJ 0.2881 0.1673 0.2136 0.1449 0.1844 0.0683 0.1026 0.1818 KF-Lin-Duh-YSU 0.3125 0.1951 0.2474 0.1667 0.1778 0.0698 0.06 0.1892 KF-Lin-God-MYJ 0.2917 0.1914 0.2255 0.1558 0.1725 0.1237 0.0889 0.1471 KF-Lin-God-YSU 0.3129 0.183 0.2653 0.1769 0.1839 0.1162 0.093 0.1316 KF-WSM3-Duh-MYJ 0.2872 0.1776 0.2212 0.1655 0.1893 0.0753 0.1224 0.1667 KF-WSM3-Duh-YSU 0.309 0.1881 0.2551 0.219 0.1858 0.1176 0.1111 0.2059 KF-WSM3-God-MYJ 0.3264 0.1924 0.2526 0.1656 0.2057 0.0909 0.0889 0.1579 KF-WSM3-God-YSU 0.3188 0.1841 0.3069 0.2014 0.2027 0.1185 0.098 0.1795 KF-WSM5-Duh-MYJ 0.3062 0.1836 0.25 0.1655 0.2048 0.0688 0.0851 0.1795 KF-WSM5-Duh-YSU 0.3116 0.1892 0.2653 0.1838 0.1823 0.1111 0.08 0.1818 KF-WSM5-God-MYJ 0.3141 0.1996 0.25 0.1772 0.2047 0.1122 0.1064 0.1622 KF-WSM5-God-YSU 0.3317 0.2025 0.2642 0.1875 0.2063 0.1085 0.1228 0.1579 KF-WSM6-Duh-MYJ 0.3063 0.1823 0.2233 0.1812 0.1963 0.0952 0.0833 0.1538 KF-WSM6-Duh-YSU 0.3339 0.1927 0.2476 0.1898 0.2244 0.1179 0.1071 0.2812 KF-WSM6-God-MYJ 0.3138 0.1781 0.2268 0.1465 0.2207 0.1268 0.1429 0.125 KF-WSM6-God-YSU 0.3323 0.1793 0.2843 0.2 0.2078 0.1255 0.1538 0.1429Table 8 TSs for different types of main heavy rainfall, at the 48 h forecast range for thresholds >25 mm/24 h and >50 mm/24 h, for the period 2012–2016. 
>25 mm/24 h >50 mm/24 h Type I Type II Type III Type IV Type I Type II Type III Type IV BMJ-Lin-Duh-MYJ 0.1629 0.2426 0.0854 0.0922 0.0768 0.1533 0.0526 0.0333 BMJ-Lin-Duh-YSU 0.1755 0.2271 0.0909 0.1438 0.0971 0.1559 0.0833 0.0164 BMJ-Lin-God-MYJ 0.2276 0.3104 0.075 0.0798 0.1139 0.1631 0.0333 0.0303 BMJ-Lin-God-YSU 0.2482 0.2723 0.1429 0.12 0.1405 0.1543 0.0385 0.0167 BMJ-WSM3-Duh-MYJ 0.2235 0.3055 0.0833 0.1311 0.1147 0.2848 0.0571 0 BMJ-WSM3-Duh-YSU 0.2365 0.3419 0.1538 0.1617 0.1136 0.3072 0.0556 0.0116 BMJ-WSM3-God-MYJ 0.2583 0.3448 0.0979 0.1029 0.1386 0.2466 0.0182 0 BMJ-WSM3-God-YSU 0.2587 0.3338 0.1176 0.1272 0.1541 0.2125 0.02 0 BMJ-WSM5-Duh-MYJ 0.2301 0.3206 0.0928 0.1228 0.1287 0.2706 0.0741 0.0115 BMJ-WSM5-Duh-YSU 0.243 0.3157 0.1373 0.172 0.1397 0.2813 0.0312 0.0241 BMJ-WSM5-God-MYJ 0.2594 0.344 0.1327 0.1337 0.1479 0.2928 0.0286 0 BMJ-WSM5-God-YSU 0.2588 0.3329 0.1797 0.1364 0.1383 0.2918 0 0.0115 BMJ-WSM6-Duh-MYJ 0.217 0.304 0.1418 0.1027 0.1214 0.2212 0.0784 0.0235 BMJ-WSM6-Duh-YSU 0.2331 0.3216 0.1368 0.1587 0.1348 0.2968 0.0714 0.0103 BMJ-WSM6-God-MYJ 0.2522 0.3055 0.1474 0.0955 0.132 0.2314 0.0541 0.0215 BMJ-WSM6-God-YSU 0.2739 0.3707 0.2 0.1026 0.1333 0.2658 0.0484 0.0319 KF-Lin-Duh-MYJ 0.2259 0.2093 0.0811 0.225 0.1331 0.1208 0.027 0.1899 KF-Lin-Duh-YSU 0.2714 0.172 0.0957 0.2199 0.1347 0.1017 0.0667 0.1395 KF-Lin-God-MYJ 0.268 0.2537 0.1043 0.2771 0.1267 0.1545 0.0385 0.1237 KF-Lin-God-YSU 0.3125 0.2384 0.1058 0.2788 0.1704 0.1608 0.0488 0.1327 KF-WSM3-Duh-MYJ 0.2689 0.2805 0.1298 0.2783 0.156 0.241 0.0577 0.1327 KF-WSM3-Duh-YSU 0.3066 0.2739 0.069 0.285 0.1971 0.2068 0.0444 0.2088 KF-WSM3-God-MYJ 0.2945 0.2976 0.1286 0.2595 0.1651 0.2451 0.0526 0.1111 KF-WSM3-God-YSU 0.3202 0.2956 0.1407 0.2822 0.1872 0.2136 0.0923 0.1569 KF-WSM5-Duh-MYJ 0.2924 0.2921 0.1176 0.2546 0.1586 0.238 0.0727 0.1765 KF-WSM5-Duh-YSU 0.3217 0.2847 0.0976 0.26 0.1995 0.2403 0.0755 0.1753 KF-WSM5-God-MYJ 0.2946 0.2953 0.1192 0.2672 0.1499 0.25 0.038 0.1293 KF-WSM5-God-YSU 0.3265 0.2768 0.1361 0.2881 0.201 0.2344 0.0959 0.1404 KF-WSM6-Duh-MYJ 0.2922 0.2874 0.1389 0.2597 0.1874 0.25 0.0685 0.1667 KF-WSM6-Duh-YSU 0.3291 0.2706 0.1311 0.2995 0.208 0.2378 0.0893 0.2083 KF-WSM6-God-MYJ 0.3002 0.3115 0.1465 0.2556 0.1675 0.2537 0.0435 0.1333 KF-WSM6-God-YSU 0.3426 0.2798 0.1595 0.2888 0.2091 0.2565 0.0674 0.177
0.4995 0.6108 553 868 554 5585 0.1958 0.1611 1.4049 0.3937 0.7197 176 452 271 6661 KF-WSM6-God-MYJ 0.26 0.1698 1.43 0.5014 0.6494 555 1028 552 5425 0.1926 0.156 1.5906 0.4183 0.737 187 524 260 6589 KF-WSM6-God-YSU 0.2792 0.1888 1.467 0.5384 0.633 596 1028 511 5425 0.1844 0.1463 1.7584 0.4295 0.7557 192 594 255 6519Table 5 Skill scores (TS, ETS, BIAS, POD, and FAR) and hit ratesH, false alarm rates F, missed rates M, and total corrected rates T at the 48 h forecast range for thresholds >25 mm/24 h and >50 mm/24 h, for the period 2012–2016. The number of samples is 8064. >25 mm/24 h >50 mm/24 h TS ETS BIAS POD FAR H F M T TS ETS BIAS POD FAR H F M T BMJ-Lin-Duh-MYJ 0.1791 0.1054 0.4842 0.2254 0.5344 365 419 1254 5522 0.0962 0.0703 0.3463 0.1181 0.6589 88 170 657 6645 BMJ-Lin-Duh-YSU 0.1853 0.1069 0.5287 0.239 0.5479 387 469 1232 5472 0.1105 0.0803 0.4295 0.1423 0.6687 106 214 639 6601 BMJ-Lin-God-MYJ 0.234 0.1345 0.769 0.3354 0.5639 543 702 1076 5239 0.1225 0.0846 0.5987 0.1745 0.7085 130 316 615 6499 BMJ-Lin-God-YSU 0.243 0.1408 0.8073 0.3533 0.5624 572 735 1047 5206 0.1357 0.0971 0.6174 0.1933 0.687 144 316 601 6499 BMJ-WSM3-Duh-MYJ 0.2332 0.1353 0.7505 0.3311 0.5588 536 679 1083 5262 0.152 0.109 0.7396 0.2295 0.6897 171 380 574 6435 BMJ-WSM3-Duh-YSU 0.2579 0.1573 0.7956 0.3681 0.5373 596 692 1023 5249 0.1587 0.1142 0.7839 0.2443 0.6884 182 402 563 6413 BMJ-WSM3-God-MYJ 0.2616 0.1452 1.0167 0.4182 0.5887 677 969 942 4972 0.1546 0.1034 1.0054 0.2685 0.733 200 549 545 6266 BMJ-WSM3-God-YSU 0.2641 0.148 1.0136 0.4206 0.585 681 960 938 4981 0.1539 0.1034 0.9826 0.2644 0.7309 197 535 548 6280 BMJ-WSM5-Duh-MYJ 0.2425 0.1455 0.7437 0.3403 0.5424 551 653 1068 5288 0.1582 0.1171 0.6899 0.2309 0.6654 172 342 573 6473 BMJ-WSM5-Duh-YSU 0.2545 0.154 0.7931 0.3638 0.5413 589 695 1030 5246 0.1698 0.1257 0.7758 0.2577 0.6678 192 386 553 6429 BMJ-WSM5-God-MYJ 0.2687 0.1601 0.9074 0.404 0.5548 654 815 965 5126 0.174 0.1286 0.8201 0.2698 0.671 201 410 544 6405 BMJ-WSM5-God-YSU 0.2677 0.1522 1.0068 0.4237 0.5791 686 944 933 4997 0.1674 0.1166 1.0027 0.2872 0.7135 214 533 531 6282 BMJ-WSM6-Duh-MYJ 0.2282 0.1261 0.7986 0.3342 0.5816 541 752 1078 5189 0.1414 0.097 0.7772 0.2201 0.7168 164 415 581 6400 BMJ-WSM6-Duh-YSU 0.2471 0.1426 0.8394 0.3644 0.5659 590 769 1029 5172 0.169 0.1198 0.9409 0.2805 0.7019 209 492 536 6323 BMJ-WSM6-God-MYJ 0.2484 0.1255 1.1081 0.4194 0.6215 679 1115 940 4826 0.1471 0.0931 1.1141 0.2711 0.7566 202 628 543 6187 BMJ-WSM6-God-YSU 0.283 0.1642 1.0723 0.4571 0.5737 740 996 879 4945 0.1588 0.1053 1.0966 0.2872 0.7381 214 603 531 6212 KF-Lin-Duh-MYJ 0.2144 0.1147 0.7634 0.3113 0.5922 504 732 1115 5209 0.1299 0.0905 0.6349 0.1879 0.704 140 333 605 6482 KF-Lin-Duh-YSU 0.2312 0.136 0.7171 0.3224 0.5504 522 639 1097 5302 0.124 0.0847 0.6309 0.1799 0.7149 134 336 611 6479 KF-Lin-God-MYJ 0.2584 0.1355 1.1174 0.4348 0.6108 704 1105 915 4836 0.1304 0.0781 1.0362 0.2349 0.7733 175 597 570 6218 KF-Lin-God-YSU 0.2806 0.1647 1.0241 0.4435 0.5669 718 940 901 5001 0.1611 0.1097 1.0215 0.2805 0.7254 209 552 536 6263 KF-WSM3-Duh-MYJ 0.266 0.149 1.0284 0.4262 0.5856 690 975 929 4966 0.1735 0.1212 1.0604 0.3047 0.7127 227 563 518 6252 KF-WSM3-Duh-YSU 0.2839 0.1719 0.9691 0.4355 0.5507 705 864 914 5077 0.1948 0.1437 1.0255 0.3302 0.678 246 518 499 6297 KF-WSM3-God-MYJ 0.2845 0.1503 1.3484 0.5201 0.6143 842 1341 777 4600 0.1766 0.1161 1.4416 0.3664 0.7458 273 801 472 6014 KF-WSM3-God-YSU 0.3018 0.1706 1.3125 0.5361 0.5915 868 1257 751 4684 0.1877 0.1271 1.455 0.3879 0.7334 289 795 456 6020 KF-WSM5-Duh-MYJ 
0.2801 0.1622 1.055 0.4497 0.5738 728 980 891 4961 0.1785 0.1262 1.0644 0.3128 0.7062 233 560 512 6255 KF-WSM5-Duh-YSU 0.2948 0.1812 1.0019 0.4558 0.545 738 884 881 5057 0.2037 0.1509 1.102 0.3557 0.6772 265 556 480 6259 KF-WSM5-God-MYJ 0.284 0.1477 1.3904 0.5287 0.6197 856 1395 763 4546 0.1685 0.1064 1.5221 0.3638 0.761 271 863 474 5952 KF-WSM5-God-YSU 0.2998 0.1683 1.3162 0.5343 0.5941 865 1266 754 4675 0.1996 0.1388 1.4846 0.4134 0.7215 308 798 437 6017 KF-WSM6-Duh-MYJ 0.2799 0.1571 1.1353 0.467 0.5887 756 1082 863 4859 0.1973 0.1415 1.2242 0.3664 0.7007 273 639 472 6176 KF-WSM6-Duh-YSU 0.3002 0.1833 1.0574 0.475 0.5508 769 943 850 4998 0.211 0.157 1.157 0.3758 0.6752 280 582 465 6233 KF-WSM6-God-MYJ 0.2917 0.155 1.4095 0.5442 0.6139 881 1401 738 4540 0.1788 0.1154 1.6107 0.396 0.7542 295 905 450 5910 KF-WSM6-God-YSU 0.3113 0.1785 1.357 0.5596 0.5876 906 1291 713 4650 0.2101 0.1475 1.6054 0.4523 0.7182 337 859 408 5956Table 6 Skill scores (TS, ETS, BIAS, POD, and FAR) and hit ratesH, false alarm rates F, missed rates M, and total corrected rates T at the 72 h forecast range for thresholds >25 mm/24 h and >50 mm/24 h, for the period 2012–2016. The number of samples is 8064. >25 mm/24 h >50 mm/24 h TS ETS BIAS POD FAR H F M T TS ETS BIAS POD FAR H F M T BMJ-Lin-Duh-MYJ 0.154 0.0672 0.3964 0.1863 0.53 400 451 1747 4962 0.0797 0.0443 0.3412 0.099 0.7098 101 247 919 6293 BMJ-Lin-Duh-YSU 0.1539 0.0595 0.442 0.1924 0.5648 413 536 1734 4877 0.0766 0.0373 0.3912 0.099 0.7469 101 298 919 6242 BMJ-Lin-God-MYJ 0.1755 0.0611 0.5752 0.2352 0.5911 505 730 1642 4683 0.0719 0.0285 0.4471 0.0971 0.7829 99 357 921 6183 BMJ-Lin-God-YSU 0.1893 0.0645 0.653 0.2632 0.597 565 837 1582 4576 0.0832 0.0342 0.5314 0.1176 0.7786 120 422 900 6118 BMJ-WSM3-Duh-MYJ 0.1802 0.0656 0.5771 0.2408 0.5827 517 722 1630 4691 0.0867 0.0391 0.5108 0.1206 0.7639 123 398 897 6142 BMJ-WSM3-Duh-YSU 0.1988 0.0737 0.6572 0.2748 0.5819 590 821 1557 4592 0.0935 0.0407 0.5941 0.1363 0.7706 139 467 881 6073 BMJ-WSM3-God-MYJ 0.2273 0.0777 0.871 0.3465 0.6021 744 1126 1403 4287 0.0992 0.0404 0.7049 0.1539 0.7816 157 562 863 5978 BMJ-WSM3-God-YSU 0.2239 0.0691 0.9171 0.3507 0.6176 753 1216 1394 4197 0.1022 0.0373 0.8294 0.1696 0.7955 173 673 847 5867 BMJ-WSM5-Duh-MYJ 0.1788 0.0634 0.5817 0.2399 0.5877 515 734 1632 4679 0.0944 0.0482 0.4892 0.1284 0.7375 131 368 889 6172 BMJ-WSM5-Duh-YSU 0.2041 0.0796 0.6544 0.2804 0.5715 602 803 1545 4610 0.0966 0.0427 0.6137 0.1422 0.7684 145 481 875 6059 BMJ-WSM5-God-MYJ 0.2169 0.0787 0.7667 0.3149 0.5893 676 970 1471 4443 0.0908 0.037 0.6127 0.1343 0.7808 137 488 883 6052 BMJ-WSM5-God-YSU 0.2278 0.0779 0.8728 0.3475 0.6019 746 1128 1401 4285 0.0947 0.031 0.802 0.1559 0.8056 159 659 861 5881 BMJ-WSM6-Duh-MYJ 0.1712 0.0601 0.5515 0.2268 0.5887 487 697 1660 4716 0.0834 0.0379 0.4775 0.1137 0.7618 116 371 904 6169 BMJ-WSM6-Duh-YSU 0.1971 0.069 0.68 0.2767 0.5932 594 866 1553 4547 0.0992 0.0432 0.652 0.149 0.7714 152 513 868 6027 BMJ-WSM6-God-MYJ 0.2191 0.069 0.8714 0.3363 0.6141 722 1149 1425 4264 0.1 0.0385 0.7578 0.1598 0.7891 163 610 857 5930 BMJ-WSM6-God-YSU 0.215 0.0632 0.885 0.3335 0.6232 716 1184 1431 4229 0.0824 0.0185 0.8039 0.1373 0.8293 140 680 880 5860 KF-Lin-Duh-MYJ 0.2144 0.0871 0.6782 0.2962 0.5632 636 820 1511 4593 0.1162 0.0632 0.601 0.1667 0.7227 170 443 850 6097 KF-Lin-Duh-YSU 0.2063 0.0825 0.6502 0.2823 0.5659 606 790 1541 4623 0.1069 0.052 0.6343 0.1578 0.7512 161 486 859 6054 KF-Lin-God-MYJ 0.2399 0.0805 0.9693 0.381 0.6069 818 1263 1329 4150 0.1392 0.0702 0.9333 0.2363 0.7468 241 711 
779 5829 KF-Lin-God-YSU 0.2427 0.0831 0.9725 0.3852 0.6039 827 1261 1320 4152 0.1309 0.0605 0.9647 0.2275 0.7642 232 752 788 5788 KF-WSM3-Duh-MYJ 0.2377 0.0889 0.8677 0.3586 0.5867 770 1093 1377 4320 0.1242 0.0613 0.7922 0.198 0.75 202 606 818 5934 KF-WSM3-Duh-YSU 0.2618 0.1129 0.8812 0.3903 0.5571 838 1054 1309 4359 0.1675 0.0983 0.948 0.2794 0.7053 285 682 735 5858 KF-WSM3-God-MYJ 0.2596 0.0844 1.1495 0.4429 0.6147 951 1517 1196 3896 0.1334 0.0538 1.2235 0.2618 0.7861 267 981 753 5559 KF-WSM3-God-YSU 0.2817 0.1073 1.1593 0.4746 0.5906 1019 1470 1128 3943 0.1631 0.081 1.3216 0.3255 0.7537 332 1016 688 5524 KF-WSM5-Duh-MYJ 0.2508 0.1005 0.8882 0.3787 0.5737 813 1094 1334 4319 0.1333 0.0694 0.8167 0.2137 0.7383 218 615 802 5925 KF-WSM5-Duh-YSU 0.2671 0.1193 0.8738 0.395 0.548 848 1028 1299 4385 0.158 0.0902 0.9118 0.2608 0.714 266 664 754 5876 KF-WSM5-God-MYJ 0.2723 0.0967 1.1635 0.463 0.6021 994 1504 1153 3909 0.1354 0.0571 1.1863 0.2608 0.7802 266 944 754 5596 KF-WSM5-God-YSU 0.2652 0.0932 1.1178 0.4439 0.6029 953 1447 1194 3966 0.1507 0.0709 1.2382 0.2931 0.7633 299 964 721 5576 KF-WSM6-Duh-MYJ 0.2545 0.0952 0.9767 0.401 0.5894 861 1236 1286 4177 0.1237 0.0546 0.9324 0.2127 0.7718 217 734 803 5806 KF-WSM6-Duh-YSU 0.2615 0.1084 0.9208 0.3982 0.5675 855 1122 1292 4291 0.156 0.0841 1.0127 0.2716 0.7318 277 756 743 5784 KF-WSM6-God-MYJ 0.2693 0.0883 1.2259 0.4723 0.6147 1014 1618 1133 3795 0.1398 0.0571 1.3265 0.2853 0.7849 291 1062 729 5478 KF-WSM6-God-YSU 0.2853 0.1072 1.2054 0.4895 0.5939 1051 1537 1096 3876 0.1513 0.0661 1.4245 0.3186 0.7763 325 1128 695 5412

## 3.2. Skill Score Validation

The charts for the skill scores at the two thresholds in Figures 5 and 6 show that the TS value at the 24 h forecast range is about 0.2 to 0.27 for >25 mm/24 h and 0.1 to 0.2 for >50 mm/24 h. At 48 h, the TS is about 0.18 to 0.3 and 0.1 to 0.19 for the >25 mm/24 h and >50 mm/24 h thresholds, respectively. At 72 h, the TS is about 0.15 to 0.25 and 0.08 to 0.15 for the two thresholds.

Figure 5 Skill scores calculated for northern Vietnam for daily accumulation thresholds over 25 mm at the (a) 24 h, (b) 48 h, and (c) 72 h forecast ranges. The dark grey bars show POD, the light grey bars FAR, the blue dotted line TS, and the red dotted line ETS. In each chart, the left vertical axis (0–0.9) is for POD and FAR, while the right vertical axis (0–0.35) is for TS and ETS values. The number of samples is 8064 for an individual forecast.

Figure 6 Skill scores calculated for northern Vietnam for daily accumulation thresholds over 50 mm at the (a) 24 h, (b) 48 h, and (c) 72 h forecast ranges. The dark grey bars show POD, the light grey bars FAR, the blue dotted line TS, and the red dotted line ETS. In each chart, the left vertical axis (0–0.9) is for POD and FAR, while the right vertical axis (0–0.35) is for TS and ETS values. The number of samples is 8064 for an individual forecast.

The probability of detection decreases, and the false alarm rate clearly increases, with increasing forecast range and validation threshold.
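All of the tabulated scores can be recomputed from the four contingency counts. The short Python sketch below, assuming the standard contingency-table definitions of TS, ETS, BIAS, POD, and FAR (with H, F, M, and T read as the hit, false-alarm, miss, and correct-negative counts), reproduces the tabulated values; the example row is BMJ-Lin-Duh-MYJ at the 24 h range for the >25 mm/24 h threshold.

```python
def skill_scores(hits, false_alarms, misses, correct_negatives):
    """Standard contingency-table scores used in rainfall verification.

    hits (H), false_alarms (F), misses (M), and correct_negatives (T)
    are the counts of the four outcomes of a yes/no forecast at a threshold.
    """
    n = hits + false_alarms + misses + correct_negatives
    # Expected number of hits obtained by chance, used for the ETS correction
    hits_random = (hits + misses) * (hits + false_alarms) / n
    ts = hits / (hits + misses + false_alarms)                      # threat score
    ets = (hits - hits_random) / (hits + misses + false_alarms - hits_random)
    bias = (hits + false_alarms) / (hits + misses)                  # frequency bias
    pod = hits / (hits + misses)                                    # probability of detection
    far = false_alarms / (hits + false_alarms)                      # false alarm ratio
    return {"TS": ts, "ETS": ets, "BIAS": bias, "POD": pod, "FAR": far}


# Example: BMJ-Lin-Duh-MYJ at 24 h, threshold >25 mm/24 h
# (H=301, F=347, M=806, T=6106) gives TS≈0.207, ETS≈0.152, BIAS≈0.585,
# POD≈0.272, FAR≈0.536, matching the tabulated values for that row.
print(skill_scores(301, 347, 806, 6106))
```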
In addition, as the threshold increases, the difference between TS and ETS decreases, which indicates that Hits_random has become very small, a consequence of the small number of hits at the higher threshold; Hits_random decreases by roughly 90% when the threshold changes from >25 mm/24 h to >50 mm/24 h (a worked example is given after this discussion).

Comparing the configurations built on the BMJ and KF cumulus schemes shows that the KF-based configurations are more skillful at forecasting heavy rain over northern Vietnam than the BMJ-based ones: the average TS with the KF scheme is about 15–25% larger than with the BMJ scheme. If a regional model's skill changed little when its physical parameterization schemes were varied, the lateral boundary conditions (from the global forecasts) would dominate the quality of the dynamical downscaling forecasts after 24 h of integration. Here, however, the skill differences between the two cumulus schemes persist at the longer forecast ranges (such as 72 h), which underlines how much the ability to simulate convection contributes to the forecast quality of the model. A detailed evaluation of the shortwave radiation schemes combined with KF or BMJ shows no clear difference in TS or ETS, in contrast to the effect of changing the boundary layer scheme.

The combinations with the YSU boundary layer scheme are more skillful than those with the MYJ scheme. In addition, when the complexity of the cloud microphysics scheme is increased, the more complex the simulated microphysical processes, the better the TS and ETS (at the 24, 48, and 72 h forecast ranges and both validating thresholds; see Figure 7 for a comparison of how TS and ETS change with the cloud microphysics scheme).

Figure 7 Brief comparison of TSs and ETSs illustrating the sensitivity to the microphysics schemes.

For the skill comparison of the different event types (I, II, III, and IV), Figure 8 shows the TSs at the >25 mm/24 h and >50 mm/24 h thresholds for the 24 h and 48 h forecast ranges. For type I, which is associated with the activity of the ITCZ and a low-pressure trough over the Bac Bo area, the KF scheme proved skillful at almost all forecast ranges and thresholds considered in this study. For the rain caused by tropical cyclones in type II, however, the difference between the KF and BMJ schemes within 24 h was smaller than in type I. At 48 h and 72 h, the BMJ scheme produced the more skillful forecasts: for the 25 mm threshold its TS ranges from 0.25 to 0.35, compared with 0.2 to 0.3 for the KF scheme, and for the 50 mm threshold from 0.2 to 0.3, compared with 0.2 to 0.25 for the KF scheme. In type II, both the KF and BMJ schemes combined with the simple Lin cloud microphysics scheme showed the lowest skill scores. For type III, which is associated with cold surges squeezing the low-pressure trough from the north towards Bac Bo, the KF scheme was skillful only at the 25 mm threshold at 24 h. In type IV, which contains heavy rain events caused by a complex combination of situations resulting in a trough over Bac Bo, the KF scheme still produced skillful forecasts, whereas the BMJ experiments showed very little skill (none at either the >25 mm/24 h or the >50 mm/24 h threshold).
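To make the size of this random-hit correction concrete, here is a minimal worked example, assuming the usual ETS definition of the expected random hits and using the 24 h counts of the BMJ-Lin-Duh-MYJ configuration from the tables above:

$$\mathrm{Hits}_{\mathrm{random}} = \frac{(H+M)(H+F)}{N}, \qquad N = H + F + M + T = 7560 .$$

For the >25 mm/24 h threshold (H = 301, F = 347, M = 806, T = 6106), $\mathrm{Hits}_{\mathrm{random}} = (1107 \times 648)/7560 \approx 94.9$; for the >50 mm/24 h threshold (H = 60, F = 125, M = 387, T = 6988), $\mathrm{Hits}_{\mathrm{random}} = (447 \times 185)/7560 \approx 10.9$, a drop of roughly 88%, consistent with the approximately 90% decrease noted above.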
More details of TSs for different types are listed in Tables 7 and 8.Figure 8 TSs for different types of heavy rainfall events in northern Vietnam for daily accumulation thresholds over (a) 25 mm and 50 mm (b) at 24 h forecast ranges and over 25 mm (c) and 50 mm (d) at 48 h forecast ranges. The right vertical axis is from 0 to 0.4. (a) (b) (c) (d)Table 7 TSs for different types of main heavy rainfall, at the 24 h forecast range for thresholds >25 mm/24 h and >50 mm/24 h, for the period 2012–2016. >25 mm/24 h >50 mm/24 h Type I Type II Type III Type IV Type I Type II Type III Type IV BMJ-Lin-Duh-MYJ 0.2546 0.1309 0.15 0.0682 0.124 0.0791 0.0769 0 BMJ-Lin-Duh-YSU 0.2625 0.1019 0.2346 0.0706 0.1244 0.0692 0.16 0 BMJ-Lin-God-MYJ 0.2644 0.1638 0.2235 0.1011 0.139 0.1064 0.0882 0 BMJ-Lin-God-YSU 0.2761 0.1519 0.1939 0.093 0.1369 0.087 0.1562 0.0333 BMJ-WSM3-Duh-MYJ 0.2503 0.1237 0.1818 0.09 0.1279 0.0621 0.129 0 BMJ-WSM3-Duh-YSU 0.2745 0.1179 0.2439 0.0652 0.1379 0.0699 0.1471 0 BMJ-WSM3-God-MYJ 0.2834 0.1651 0.21 0.09 0.1556 0.1104 0.1136 0 BMJ-WSM3-God-YSU 0.2943 0.1505 0.202 0.0745 0.167 0.0798 0.1304 0.0312 BMJ-WSM5-Duh-MYJ 0.2618 0.1349 0.1977 0.0707 0.1429 0.0563 0.1515 0 BMJ-WSM5-Duh-YSU 0.263 0.1351 0.2073 0.0652 0.1603 0.0789 0.1389 0 BMJ-WSM5-God-MYJ 0.3032 0.1376 0.1978 0.0714 0.1565 0.0496 0.1515 0.0345 BMJ-WSM5-God-YSU 0.3028 0.1525 0.1939 0.0737 0.1677 0.0696 0.15 0.0312 BMJ-WSM6-Duh-MYJ 0.2571 0.133 0.1304 0.0926 0.1403 0.0872 0.0732 0 BMJ-WSM6-Duh-YSU 0.2643 0.1281 0.1828 0.06 0.1714 0.0719 0.1212 0 BMJ-WSM6-God-MYJ 0.2991 0.1781 0.2574 0.0784 0.1706 0.1391 0.15 0.0256 BMJ-WSM6-God-YSU 0.3043 0.1595 0.1944 0.0652 0.1916 0.1104 0.1429 0 KF-Lin-Duh-MYJ 0.2881 0.1673 0.2136 0.1449 0.1844 0.0683 0.1026 0.1818 KF-Lin-Duh-YSU 0.3125 0.1951 0.2474 0.1667 0.1778 0.0698 0.06 0.1892 KF-Lin-God-MYJ 0.2917 0.1914 0.2255 0.1558 0.1725 0.1237 0.0889 0.1471 KF-Lin-God-YSU 0.3129 0.183 0.2653 0.1769 0.1839 0.1162 0.093 0.1316 KF-WSM3-Duh-MYJ 0.2872 0.1776 0.2212 0.1655 0.1893 0.0753 0.1224 0.1667 KF-WSM3-Duh-YSU 0.309 0.1881 0.2551 0.219 0.1858 0.1176 0.1111 0.2059 KF-WSM3-God-MYJ 0.3264 0.1924 0.2526 0.1656 0.2057 0.0909 0.0889 0.1579 KF-WSM3-God-YSU 0.3188 0.1841 0.3069 0.2014 0.2027 0.1185 0.098 0.1795 KF-WSM5-Duh-MYJ 0.3062 0.1836 0.25 0.1655 0.2048 0.0688 0.0851 0.1795 KF-WSM5-Duh-YSU 0.3116 0.1892 0.2653 0.1838 0.1823 0.1111 0.08 0.1818 KF-WSM5-God-MYJ 0.3141 0.1996 0.25 0.1772 0.2047 0.1122 0.1064 0.1622 KF-WSM5-God-YSU 0.3317 0.2025 0.2642 0.1875 0.2063 0.1085 0.1228 0.1579 KF-WSM6-Duh-MYJ 0.3063 0.1823 0.2233 0.1812 0.1963 0.0952 0.0833 0.1538 KF-WSM6-Duh-YSU 0.3339 0.1927 0.2476 0.1898 0.2244 0.1179 0.1071 0.2812 KF-WSM6-God-MYJ 0.3138 0.1781 0.2268 0.1465 0.2207 0.1268 0.1429 0.125 KF-WSM6-God-YSU 0.3323 0.1793 0.2843 0.2 0.2078 0.1255 0.1538 0.1429Table 8 TSs for different types of main heavy rainfall, at the 48 h forecast range for thresholds >25 mm/24 h and >50 mm/24 h, for the period 2012–2016. 
>25 mm/24 h >50 mm/24 h Type I Type II Type III Type IV Type I Type II Type III Type IV BMJ-Lin-Duh-MYJ 0.1629 0.2426 0.0854 0.0922 0.0768 0.1533 0.0526 0.0333 BMJ-Lin-Duh-YSU 0.1755 0.2271 0.0909 0.1438 0.0971 0.1559 0.0833 0.0164 BMJ-Lin-God-MYJ 0.2276 0.3104 0.075 0.0798 0.1139 0.1631 0.0333 0.0303 BMJ-Lin-God-YSU 0.2482 0.2723 0.1429 0.12 0.1405 0.1543 0.0385 0.0167 BMJ-WSM3-Duh-MYJ 0.2235 0.3055 0.0833 0.1311 0.1147 0.2848 0.0571 0 BMJ-WSM3-Duh-YSU 0.2365 0.3419 0.1538 0.1617 0.1136 0.3072 0.0556 0.0116 BMJ-WSM3-God-MYJ 0.2583 0.3448 0.0979 0.1029 0.1386 0.2466 0.0182 0 BMJ-WSM3-God-YSU 0.2587 0.3338 0.1176 0.1272 0.1541 0.2125 0.02 0 BMJ-WSM5-Duh-MYJ 0.2301 0.3206 0.0928 0.1228 0.1287 0.2706 0.0741 0.0115 BMJ-WSM5-Duh-YSU 0.243 0.3157 0.1373 0.172 0.1397 0.2813 0.0312 0.0241 BMJ-WSM5-God-MYJ 0.2594 0.344 0.1327 0.1337 0.1479 0.2928 0.0286 0 BMJ-WSM5-God-YSU 0.2588 0.3329 0.1797 0.1364 0.1383 0.2918 0 0.0115 BMJ-WSM6-Duh-MYJ 0.217 0.304 0.1418 0.1027 0.1214 0.2212 0.0784 0.0235 BMJ-WSM6-Duh-YSU 0.2331 0.3216 0.1368 0.1587 0.1348 0.2968 0.0714 0.0103 BMJ-WSM6-God-MYJ 0.2522 0.3055 0.1474 0.0955 0.132 0.2314 0.0541 0.0215 BMJ-WSM6-God-YSU 0.2739 0.3707 0.2 0.1026 0.1333 0.2658 0.0484 0.0319 KF-Lin-Duh-MYJ 0.2259 0.2093 0.0811 0.225 0.1331 0.1208 0.027 0.1899 KF-Lin-Duh-YSU 0.2714 0.172 0.0957 0.2199 0.1347 0.1017 0.0667 0.1395 KF-Lin-God-MYJ 0.268 0.2537 0.1043 0.2771 0.1267 0.1545 0.0385 0.1237 KF-Lin-God-YSU 0.3125 0.2384 0.1058 0.2788 0.1704 0.1608 0.0488 0.1327 KF-WSM3-Duh-MYJ 0.2689 0.2805 0.1298 0.2783 0.156 0.241 0.0577 0.1327 KF-WSM3-Duh-YSU 0.3066 0.2739 0.069 0.285 0.1971 0.2068 0.0444 0.2088 KF-WSM3-God-MYJ 0.2945 0.2976 0.1286 0.2595 0.1651 0.2451 0.0526 0.1111 KF-WSM3-God-YSU 0.3202 0.2956 0.1407 0.2822 0.1872 0.2136 0.0923 0.1569 KF-WSM5-Duh-MYJ 0.2924 0.2921 0.1176 0.2546 0.1586 0.238 0.0727 0.1765 KF-WSM5-Duh-YSU 0.3217 0.2847 0.0976 0.26 0.1995 0.2403 0.0755 0.1753 KF-WSM5-God-MYJ 0.2946 0.2953 0.1192 0.2672 0.1499 0.25 0.038 0.1293 KF-WSM5-God-YSU 0.3265 0.2768 0.1361 0.2881 0.201 0.2344 0.0959 0.1404 KF-WSM6-Duh-MYJ 0.2922 0.2874 0.1389 0.2597 0.1874 0.25 0.0685 0.1667 KF-WSM6-Duh-YSU 0.3291 0.2706 0.1311 0.2995 0.208 0.2378 0.0893 0.2083 KF-WSM6-God-MYJ 0.3002 0.3115 0.1465 0.2556 0.1675 0.2537 0.0435 0.1333 KF-WSM6-God-YSU 0.3426 0.2798 0.1595 0.2888 0.2091 0.2565 0.0674 0.177 ## 4. Conclusions For the purpose of investigating the effects of physical schemes in the WRF-ARW model on the operational heavy rainfall forecast for the Bac Bo area, 32 different model configurations have been established by switching two typical cumulus parameterization schemes (BMJ and KF), the cloud microphysics schemes from simple (Lin) to complex (WSM with 3/5/6-layer closure assumptions), and boundary layer (YSU and MYJ) and shortwave radiation (Dudhia and Goddard) schemes. The 72 experiments of widespread heavy rainfall occurring in the Bac Bo area used boundaries from the GFS model and had the highest horizontal resolution of 5 km × 5 km.The model verification with local observation data illustrated the limited capabilities in heavy rainfall forecast for the northern part of Vietnam. On average, for the threshold over 25 mm/24 h, the TSs are from 0.2 to 0.25, 0.2 to 0.3, and 0.2 to 0.25 for 24 h, 48 h, and 72 h forecast ranges, respectively. For the threshold over 50 mm/24 h, TSs are 0.1–0.2 for 24 h and 48 h forecast ranges and about 0.1–0.15 for the 72 h forecast range. 
Above the 100 mm/24 h threshold, very low skill (below 0.1) is found at most forecast ranges.

Changing the cloud microphysics from simple to more complex closure assumptions shows that the complex schemes give clearly better results for both the BMJ and KF schemes. The model configurations with the KF scheme were more skillful than the BMJ configurations, and these higher skill scores (TS and POD) come mainly from a higher hit rate (H) and a lower missed rate (M), but also at the cost of a higher false alarm rate (F). A further preliminary assessment of the KF configurations showed that they are most sensitive to the boundary layer scheme, more so than to the microphysics or shortwave radiation schemes, an initial indication that the interaction with the boundary layer under the KF scheme is an important factor in choosing appropriate parameters for heavy rainfall forecasting over the Bac Bo area with the WRF-ARW model.

In terms of sample size, the first two event types are comparable because of their similar numbers of cases; the other two types, however, have limited samples and need to be analyzed further in subsequent research. A detailed assessment by the mechanism causing the heavy rain shows that the KF scheme forecast more skillfully than the BMJ scheme during trough- or ITCZ-related heavy rain events, whereas it was less skillful for events caused by tropical cyclones. For events caused by cold surges or by a combination of different patterns, the skill of the BMJ scheme was quite low. Verification of the last two types needs to be investigated further in subsequent research because of the limited sample sizes studied.

--- *Source: 1010858-2019-08-18.xml*
2019